How can I queue MySQL queries to make them sequential rather than concurrent and prevent excessive I/O usage?
I have multiple independent processes each submitting bulk insert queries (millions of rows) into a MySQL database, but it is much slower to have them run concurrently than in sequence.
How can I throttle the execution of these queries so that only one at a time can be executed, in sequence?
I have thought of polling PROCESSLIST for running queries, but that doesn't seem like a reliable way to queue queries on a true first-come, first-served basis.
I am using C# and the MySQL Connector for .NET.
I'm guessing that you're using InnoDB (which allows concurrent writes). MyISAM only has table-level locking, so it would queue up the writes.
I'd recommend an approach similar to ruakh's, but using a table in the database to manage the locking. The table could be called something like lock_control.
Just before you start a bulk insert, issue LOCK TABLES lock_control WRITE. If the lock is granted, continue with your bulk insert and release the lock afterwards with UNLOCK TABLES. If the table is already write-locked by another session, the LOCK TABLES statement blocks until the lock is released, which gives you exactly the queueing behaviour you want.
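As a sketch, the sequence each process would run looks like this (lock_control and big_table are placeholder names; one wrinkle is that a session holding a LOCK TABLES lock can only access the tables named in that statement, so the target table has to be locked as well):

```sql
-- One-off setup: a dummy InnoDB table used purely for locking.
CREATE TABLE IF NOT EXISTS lock_control (id INT) ENGINE=InnoDB;

-- In each process, all on the same connection:
LOCK TABLES lock_control WRITE, big_table WRITE;  -- blocks until granted

-- ... run the bulk INSERTs into big_table here ...

UNLOCK TABLES;  -- the next waiting process's LOCK TABLES now succeeds
```

If blocking readers of big_table during the load turns out to be a problem, MySQL's advisory locks (SELECT GET_LOCK('some_name', timeout) paired with RELEASE_LOCK) serialise the writers the same way without taking any table locks.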
You could take this lock directly on the table you're inserting into, but I believe no other session would be able to read from that table while you hold the lock.
The advantage of doing this locking in the database rather than on the filesystem is that inserts can come from multiple client machines, and it feels a little simpler to keep the locking and the inserting all within MySQL.
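On the C# side, the lock, the inserts, and the unlock must all run on the same connection, because LOCK TABLES locks are per-session. A minimal sketch, assuming Connector/NET's MySqlConnection/MySqlCommand API; the connection string, lock_control, and big_table are placeholders:

```csharp
// A sketch assuming MySql.Data.MySqlClient (Connector/NET); the
// connection string and table names are placeholders.
using MySql.Data.MySqlClient;

class BulkLoader
{
    static void Main()
    {
        using (var conn = new MySqlConnection(
            "server=...;database=...;uid=...;pwd=..."))
        {
            conn.Open();

            // LOCK TABLES is per-connection: the lock, the inserts and the
            // unlock must all go through this same connection object.
            new MySqlCommand(
                "LOCK TABLES lock_control WRITE, big_table WRITE", conn)
                .ExecuteNonQuery();
            try
            {
                // ... run the bulk INSERTs into big_table here ...
            }
            finally
            {
                // Always release the lock, even if an insert fails,
                // so the next waiting process can proceed.
                new MySqlCommand("UNLOCK TABLES", conn).ExecuteNonQuery();
            }
        }
    }
}
```

The try/finally matters: if a process crashes mid-insert without releasing the lock, closing its connection releases it implicitly, but an unhandled error on a pooled, still-open connection would leave every other process blocked.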