Sorry if this isn't the right place; this is my first time posting anything here. Anyway, I know I'm not supposed to do this, but I can think of no other way.
I need to use a small MySQL table (70 rows x 6 columns) as a queue.
I'm writing a Python application that requires a work queue to be shared between separate running processes ("process windows" is the best name I have for them; not sure what the proper term is).
Each job is repeatable, and each use must be recorded and then cleared at regular intervals, so each job is "weighted" and usage is distributed evenly and fairly. I tried to base it on another work queue where each job is NOT repeatable, but multiple writes to the database (up to 19 at once, every 12 seconds) are not being counted up properly.
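In case it helps, here's a minimal sketch of the kind of per-use increment I mean (PyMySQL, with a made-up `jobs`/`uses` schema; my real code differs). I suspect my version loses updates because each process reads the count and writes it back, whereas a single atomic UPDATE wouldn't:

```python
import pymysql

# Simplified, made-up schema for illustration:
#   CREATE TABLE jobs (id INT PRIMARY KEY, uses INT NOT NULL DEFAULT 0, ...);
conn = pymysql.connect(host="localhost", user="worker", password="secret",
                       database="workqueue", autocommit=True)

def record_use(job_id):
    """Record one use of a job as a single atomic increment."""
    with conn.cursor() as cur:
        # Letting MySQL do the addition avoids the read-modify-write race
        # you get when many processes SELECT uses, add 1, then UPDATE.
        cur.execute("UPDATE jobs SET uses = uses + 1 WHERE id = %s", (job_id,))
```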
Is there an alternative to doing something like this?
Perhaps some kind of cache sitting between Python and MySQL that would convert many single "job + 1" updates into a single "job + 19" update? I've sketched roughly what I imagine below.
I assumed that a shiny new NVMe drive and a more-than-sufficient innodb_buffer_pool_size would make it plenty fast enough to handle that, but instead of counting up properly, over the course of 60 seconds a counter may only reach 9 when it should be at 100+.
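To make the caching idea concrete, this is roughly what I imagine (all names made up; each process would only batch its own increments):

```python
import threading
import time
from collections import Counter

import pymysql

pending = Counter()              # job_id -> uses accumulated since last flush
pending_lock = threading.Lock()

def record_use(job_id):
    """Cheap in-memory increment; MySQL is not touched here."""
    with pending_lock:
        pending[job_id] += 1

def flush_loop(interval=12):
    """Every `interval` seconds, collapse many 'job + 1' updates
    into one 'job + N' update per job."""
    conn = pymysql.connect(host="localhost", user="worker",
                           password="secret", database="workqueue")
    while True:
        time.sleep(interval)
        with pending_lock:
            batch = dict(pending)
            pending.clear()
        with conn.cursor() as cur:
            for job_id, n in batch.items():
                cur.execute("UPDATE jobs SET uses = uses + %s WHERE id = %s",
                            (n, job_id))
        conn.commit()

threading.Thread(target=flush_loop, daemon=True).start()
```

That only aggregates within one process, though, so I'd still have one batched write per process every 12 seconds; hence my question about whether something off-the-shelf exists for this.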
Asked by Calvin DiBartolo
(1 rep)
Mar 7, 2017, 06:06 PM
Last activity: Jul 24, 2025, 02:06 AM