Avoiding Lock Escalation by Swapping Locks to Disk
Lock escalation converts many row locks into a single table lock. This technique is dangerous and ineffective: it can cause deadlocks, and it blocks access to rows that no transaction actually needs.
Sometimes an application needs a large number of row locks in order to execute; this is unavoidable if data consistency is to be kept.
When handling big data, the number of records that depend on a single master-table record also becomes large.
Batch processing causes this problem most often. A batch job collects data and calculates a result, so its transaction has to lock a large number of records.
In this case a table lock should not be used: other users are working on the database, and a table lock would block their transactions needlessly.
Sometimes an OLTP procedure also needs many locks at once. This happens when a transaction has to calculate a summary over many records quickly.
Online transactions need real-time data, so a background batch job delivers its results too late.
The online transaction then has to do the same work as the batch process.
This problem occurs when a huge number of row locks is held in the memory of a single server machine.
It is solved by swapping those locks out to disk space.
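As an illustration of the idea, the row-lock table can live in a file-backed memory mapping instead of the heap, so the kernel is free to page cold lock entries out to disk. This is a minimal sketch; the entry layout and names are illustrative assumptions, not the actual Alinous Elastic DB format.

```cpp
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstddef>

// Hypothetical lock-table entry; the real on-disk layout is not documented here.
struct RowLockEntry {
    long rowId;    // locked row
    int  ownerTx;  // transaction holding the lock (0 = free)
};

// Map a file as the backing store for the lock table. Cold pages of this
// mapping can be written back and evicted by the OS, so memory pressure
// from many row locks no longer forces lock escalation.
inline RowLockEntry* map_lock_table(const char* path, std::size_t count) {
    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0) return nullptr;
    if (ftruncate(fd, count * sizeof(RowLockEntry)) != 0) {
        close(fd);
        return nullptr;
    }
    void* p = mmap(nullptr, count * sizeof(RowLockEntry),
                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);  // the mapping keeps the file referenced
    return (p == MAP_FAILED) ? nullptr : static_cast<RowLockEntry*>(p);
}
```

With MAP_SHARED, stores into the table are eventually written through to the file, so the lock table's footprint in RAM is bounded by the page cache rather than by the number of locks.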
Database management systems that lack this mechanism avoid the problem by lock escalation instead.
Most databases written in Java cannot solve this problem. Even a database written in C or C++ cannot avoid it unless it swaps its memory to disk.
Alinous Elastic DB has a memory management system that swaps excess locks out to disk.
The mmap() Linux system call is an effective way to use the CPU's virtual addressing. The malloc() function in glibc actually uses it when an allocation exceeds a certain size.
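A minimal sketch of that pattern, reserving an anonymous region the way glibc malloc does for large requests (the helper names are our own):

```cpp
#include <sys/mman.h>
#include <cstddef>

// Reserve an anonymous region with mmap(). The kernel only materializes
// physical pages lazily, via the CPU's virtual-memory hardware, when each
// page is first touched.
inline void* reserve_region(std::size_t bytes) {
    void* p = mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return (p == MAP_FAILED) ? nullptr : p;
}

inline void release_region(void* p, std::size_t bytes) {
    munmap(p, bytes);
}
```

By default glibc switches from its heap to mmap() for requests above a threshold (tunable via M_MMAP_THRESHOLD), which is the behavior referenced above.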
The memory manager of this database system first calls mmap(), splits the mapping into segments, and assigns memory from those segments.
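A hypothetical sketch of that segment scheme, one large anonymous mapping carved into fixed-size segments; this is an illustration of the approach, not the actual Alinous Elastic DB allocator:

```cpp
#include <sys/mman.h>
#include <cstddef>
#include <vector>

// One mmap()ed region split into fixed-size segments that are handed out
// and returned through a free list.
class SegmentPool {
public:
    SegmentPool(std::size_t segmentSize, std::size_t count)
        : total_(segmentSize * count) {
        void* p = mmap(nullptr, total_, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        base_ = (p == MAP_FAILED) ? nullptr : static_cast<char*>(p);
        if (base_ == nullptr) return;
        for (std::size_t i = 0; i < count; ++i)
            free_.push_back(base_ + i * segmentSize);
    }
    ~SegmentPool() { if (base_) munmap(base_, total_); }

    // LIFO reuse: the most recently released segment is handed out first,
    // because its pages are the likeliest to still be resident in RAM.
    void* acquire() {
        if (free_.empty()) return nullptr;
        void* s = free_.back();
        free_.pop_back();
        return s;
    }
    void release(void* s) { free_.push_back(static_cast<char*>(s)); }

private:
    char* base_ = nullptr;
    std::size_t total_;
    std::vector<char*> free_;
};
```

The free list is deliberately a stack: reusing the most recently released segment keeps the allocator working on warm pages.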
When a page fault exception occurs, the program has to load data from disk by DMA. This is very slow but necessary, so the fault frequency must be kept as low as possible.
The memory allocator therefore makes a best effort to reuse segments that were released or accessed recently.
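A related kernel hint can also keep fault frequency down: madvise(MADV_DONTNEED) lets an allocator drop the pages of a cold segment immediately, so hot segments are less likely to be evicted and faulted back in later. madvise() is a standard Linux call, but this helper and its use here are an assumption, not a documented part of the database.

```cpp
#include <sys/mman.h>
#include <cstddef>

// Hypothetical helper: tell the kernel that a cold segment's pages may be
// reclaimed now. For a private anonymous mapping, subsequent reads of the
// dropped range see zero-filled pages again.
inline bool drop_cold_segment(void* seg, std::size_t bytes) {
    return madvise(seg, bytes, MADV_DONTNEED) == 0;
}
```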
If possible, we should reduce the opportunities to create huge numbers of row locks in the first place. This can be done in the following ways.
One way applies to batch processing: run the batch on a separate database rather than the main one used for OLTP. If the batch process is single-threaded, this is the best option.
But if it uses multiple threads to process data in parallel, it becomes slow.
Another way is table partitioning, which distributes the row locks across multiple servers.
This is the best way when the database system handles OLTP and the batch process uses multiple threads.
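The routing step of table partitioning can be sketched as hash routing of row keys to servers, so the row locks taken by a big batch are spread across machines instead of piling up on one node. The function is illustrative, not the partitioning scheme Alinous Elastic DB actually uses.

```cpp
#include <cstddef>
#include <functional>

// Route a row key to one of numPartitions servers. Every lock request for
// the same key goes to the same node, while different keys spread out.
inline std::size_t partition_for(long rowKey, std::size_t numPartitions) {
    return std::hash<long>{}(rowKey) % numPartitions;
}
```

Because the mapping is deterministic, each partition server only ever holds the locks for its own share of the keys.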