MySQL uses table-level locking for MyISAM and MEMORY tables, page-level locking for BDB tables, and row-level locking for InnoDB tables.
In many cases, you can make an educated guess about which locking type is best for an application, but generally it is difficult to say that a given lock type is better than another. Everything depends on the application, and different parts of an application may require different lock types.
To decide whether you want to use a storage engine with
row-level locking, you should look at what your application does
and what mix of select and update statements it uses. For
example, most Web applications perform many selects, relatively
few deletes, updates based mainly on key values, and inserts
into a few specific tables. The base MySQL MyISAM setup is very well tuned for this.
Table locking in MySQL is deadlock-free for storage engines that use table-level locking. Deadlock avoidance is managed by always requesting all needed locks at once at the beginning of a query and always locking the tables in the same order.
The table-locking method MySQL uses for WRITE locks works as follows:
- If there are no locks on the table, put a write lock on it.
- Otherwise, put the lock request in the write lock queue.
The table-locking method MySQL uses for READ locks works as follows:
- If there are no write locks on the table, put a read lock on it.
- Otherwise, put the lock request in the read lock queue.
When a lock is released, the lock is made available to the
threads in the write lock queue and then to the threads in the
read lock queue. This means that if you have many updates for a
table, SELECT statements wait until there are no more updates.
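As a quick illustration of this behavior (a minimal sketch, assuming a MyISAM table named t already exists), an explicit write lock held by one connection causes reads from other connections to wait:
Connection 1:
mysql> LOCK TABLES t WRITE;
Connection 2 (this statement blocks, queued behind the write lock):
mysql> SELECT COUNT(*) FROM t;
Connection 1 (releasing the lock lets the queued SELECT in connection 2 proceed):
mysql> UNLOCK TABLES;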
You can analyze the table lock contention on your system by
checking the Table_locks_waited and Table_locks_immediate
status variables:
mysql> SHOW STATUS LIKE 'Table%';
+-----------------------+---------+
| Variable_name         | Value   |
+-----------------------+---------+
| Table_locks_immediate | 1151552 |
| Table_locks_waited | 15324 |
+-----------------------+---------+
If a MyISAM table contains no free blocks in
the middle, rows always are inserted at the end of the data
file. In this case, you can freely mix concurrent INSERT
and SELECT statements for a MyISAM table without locks.
That is, you can insert rows into a MyISAM
table at the same time other clients are reading from it. (Holes
can result from rows having been deleted from or updated in the
middle of the table. If there are holes, concurrent inserts are
disabled but are re-enabled automatically when all holes have
been filled with new data.)
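If a table has accumulated holes and you want concurrent inserts back sooner, one option (sketched here for a hypothetical MyISAM table named t) is to check the Data_free value reported by SHOW TABLE STATUS and defragment the table with OPTIMIZE TABLE, which removes the free blocks. Note that OPTIMIZE TABLE itself locks the table while it runs:
mysql> SHOW TABLE STATUS LIKE 't';
mysql> OPTIMIZE TABLE t;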
If you want to perform many INSERT and
SELECT operations on a table when concurrent
inserts are not possible, you can insert rows in a temporary
table and update the real table with the rows from the temporary
table once in a while. This can be done with the following code:
mysql> LOCK TABLES real_table WRITE, insert_table WRITE;
mysql> INSERT INTO real_table SELECT * FROM insert_table;
mysql> TRUNCATE TABLE insert_table;
mysql> UNLOCK TABLES;
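The holding table needs the same column layout as the real table. One way to create it (a sketch using the table names above, assuming MySQL 4.1 or later) is:
mysql> CREATE TABLE insert_table LIKE real_table;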
InnoDB uses row locks and BDB uses page locks. For these two storage
engines, deadlocks are possible because they automatically
acquire locks during the processing of SQL statements, not at
the start of the transaction.
Advantages of row-level locking:
- Fewer lock conflicts when accessing different rows in many threads.
- Fewer changes for rollbacks.
- Possible to lock a single row for a long time.
Disadvantages of row-level locking:
- Requires more memory than page-level or table-level locks.
- Slower than page-level or table-level locks when used on a large part of the table because you must acquire many more locks.
- Definitely much slower than other locks if you often do GROUP BY operations on a large part of the data or if you must scan the entire table frequently.
Table locks are superior to page-level or row-level locks in the following cases:
- Most statements for the table are reads.
- A mix of reads and writes, where writes are updates or deletes for a single row that can be fetched with one key read:
  UPDATE tbl_name SET column=value WHERE unique_key_col=key_value;
  DELETE FROM tbl_name WHERE unique_key_col=key_value;
- SELECT combined with concurrent INSERT statements, and very few UPDATE or DELETE statements.
- Many scans or GROUP BY operations on the entire table without any writers.
With higher-level locks, you can more easily tune applications by supporting locks of different types, because the lock overhead is less than for row-level locks.
Options other than row-level or page-level locking:
- Versioning (such as that used in MySQL for concurrent inserts) where it is possible to have one writer at the same time as many readers. This means that the database or table supports different views for the data depending on when access begins. Other common terms for this are “time travel,” “copy on write,” or “copy on demand.”
- Copy on demand is in many cases superior to page-level or row-level locking. However, in the worst case, it can use much more memory than using normal locks.
- Instead of using row-level locks, you can employ application-level locks, such as GET_LOCK() and RELEASE_LOCK() in MySQL. These are advisory locks, so they work only in well-behaved applications. (See Section 12.9.4, “Miscellaneous Functions”.)
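As a brief sketch of how such advisory locks can be used cooperatively (the lock name 'my_app_lock' is only an illustrative placeholder), each client requests the lock before its critical section and releases it afterward. GET_LOCK() returns 1 if the lock was obtained and 0 if the 10-second timeout expired:
mysql> SELECT GET_LOCK('my_app_lock', 10);
mysql> SELECT RELEASE_LOCK('my_app_lock');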
To achieve a very high lock speed, MySQL uses table locking
(instead of page, row, or column locking) for all storage
engines except InnoDB and BDB.
For InnoDB and BDB tables, MySQL uses only table locking if you
explicitly lock the table with LOCK TABLES. For these storage engines,
we recommend that you not use LOCK TABLES at all, because InnoDB
uses automatic row-level locking and BDB uses page-level locking to
ensure transaction isolation.
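For example, with InnoDB you can rely on a transaction plus row-level locks rather than an explicit table lock (a sketch only; the table t and its columns id and counter are hypothetical, and id is assumed to be indexed):
mysql> SET AUTOCOMMIT=0;
mysql> SELECT * FROM t WHERE id = 1 FOR UPDATE;
mysql> UPDATE t SET counter = counter + 1 WHERE id = 1;
mysql> COMMIT;
Here SELECT ... FOR UPDATE acquires an exclusive lock only on the matching row, so the rest of the table remains available to other clients.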
For large tables, table locking is much better than row locking for most applications, but there are some pitfalls:
- Table locking enables many threads to read from a table at the same time, but if a thread wants to write to a table, it must first get exclusive access. During the update, all other threads that want to access this particular table must wait until the update is done.
- Table updates normally are considered to be more important than table retrievals, so they are given higher priority. This should ensure that updates to a table are not “starved” even if there is heavy SELECT activity for the table.
- Table locking causes problems in cases such as when a thread is waiting because the disk is full and free space needs to become available before the thread can proceed. In this case, all threads that want to access the problem table are also put in a waiting state until more disk space is made available.
Table locking is also disadvantageous under the following scenario:
- A client issues a SELECT that takes a long time to run.
- Another client then issues an UPDATE on the same table. This client waits until the SELECT is finished.
- Another client issues another SELECT statement on the same table. Because UPDATE has higher priority than SELECT, this SELECT waits for the UPDATE to finish, and for the first SELECT to finish.
The following items describe some ways to avoid or reduce contention caused by table locking:
- Try to get the SELECT statements to run faster so that they lock tables for a shorter time. You might have to create some summary tables to do this.
- Start mysqld with --low-priority-updates. This gives all statements that update (modify) a table lower priority than SELECT statements. In this case, the second SELECT statement in the preceding scenario would execute before the UPDATE statement, and would not need to wait for the first SELECT to finish.
- You can specify that all updates issued in a specific connection should be done with low priority by using the SET LOW_PRIORITY_UPDATES=1 statement. See Section 13.5.3, “SET Syntax”.
- You can give a specific INSERT, UPDATE, or DELETE statement lower priority with the LOW_PRIORITY attribute (see the example after this list).
- You can give a specific SELECT statement higher priority with the HIGH_PRIORITY attribute. See Section 13.2.7, “SELECT Syntax”.
- You can start mysqld with a low value for the max_write_lock_count system variable to force MySQL to temporarily elevate the priority of all SELECT statements that are waiting for a table after a specific number of inserts to the table occur. This allows READ locks after a certain number of WRITE locks.
- If you have problems with INSERT combined with SELECT, you might want to consider switching to MyISAM tables, which support concurrent SELECT and INSERT statements. (See Section 7.3.3, “Concurrent Inserts”.)
- If you mix inserts and deletes on the same table, INSERT DELAYED may be of great help. See Section 13.2.4.2, “INSERT DELAYED Syntax”.
- If you have problems with mixed SELECT and DELETE statements, the LIMIT option to DELETE may help. See Section 13.2.1, “DELETE Syntax”.
- Using SQL_BUFFER_RESULT with SELECT statements can help to make the duration of table locks shorter. See Section 13.2.7, “SELECT Syntax”.
- You could change the locking code in mysys/thr_lock.c to use a single queue. In this case, write locks and read locks would have the same priority, which might help some applications.
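As a brief illustration of the statement-level priority hints just mentioned (a sketch only; the table t and columns c and id are hypothetical), LOW_PRIORITY and HIGH_PRIORITY are written immediately after the statement keyword, while SET LOW_PRIORITY_UPDATES=1 applies to all subsequent updates in the connection:
mysql> SET LOW_PRIORITY_UPDATES=1;
mysql> UPDATE LOW_PRIORITY t SET c = c + 1 WHERE id = 1;
mysql> SELECT HIGH_PRIORITY * FROM t WHERE id = 1;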
Here are some tips concerning table locks in MySQL:
- Concurrent users are not a problem if you do not mix updates with selects that need to examine many rows in the same table.
- You can use LOCK TABLES to increase speed, because performing many updates within a single lock is much faster than updating without locks (see the example after this list). Splitting table contents into separate tables may also help.
- If you encounter speed problems with table locks in MySQL, you may be able to improve performance by converting some of your tables to InnoDB or BDB tables. See Section 14.2, “The InnoDB Storage Engine”, and Section 14.5, “The BDB (BerkeleyDB) Storage Engine”.
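For example (a sketch, assuming a MyISAM table named log_table with columns id and msg), grouping a batch of inserts under one explicit lock avoids re-acquiring and releasing the table lock for every statement:
mysql> LOCK TABLES log_table WRITE;
mysql> INSERT INTO log_table (id, msg) VALUES (1, 'first');
mysql> INSERT INTO log_table (id, msg) VALUES (2, 'second');
mysql> UNLOCK TABLES;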
For a MyISAM table, you can use concurrent inserts to add rows at
the same time that SELECT statements are running if there are no
deleted rows in the middle of the table.
Under circumstances where concurrent inserts can be used, there
is seldom any need to use the DELAYED modifier for INSERT
statements. See Section 13.2.4.2, “INSERT DELAYED Syntax”.
If you are using the binary log, concurrent inserts are
converted to normal inserts for CREATE ... SELECT or
INSERT ... SELECT statements.
This is done to ensure that you can re-create an
exact copy of your tables by applying the log during a backup
operation.
With LOAD DATA INFILE, if you specify CONCURRENT with a
MyISAM table that satisfies the condition for concurrent inserts
(that is, it contains no free blocks in the middle), other threads can
retrieve data from the table while LOAD DATA is executing.
Using this option affects the performance of LOAD DATA a bit,
even if no other thread is using the table at the same time.
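For example (the file path and table name are placeholders only), the CONCURRENT keyword is written between LOAD DATA and INFILE:
mysql> LOAD DATA CONCURRENT INFILE '/tmp/rows.txt' INTO TABLE my_table;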