Editorials

Screaming Fast SQL Server Database

Kalen Delaney wrote an excellent white paper on the enhanced optimization in SQL Server 2016 In-Memory OLTP. The changes are dramatic, and they enable incredible performance gains at the price of RAM. There are some key things she writes about that I wish to point out, because they make In-Memory OLTP a truly viable option for systems suffering from disk-bound OLTP performance roadblocks.

One huge change is the removal of locking and latching. The in-memory engine keeps multiple versions of a modified row, so there is no need for blocking techniques to ensure users see a correct, ACID-compliant version of the data. Because the data is addressed entirely from memory, the data structures have been optimized for memory storage rather than aligned to page structures, which rely on page latches.
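
As a rough sketch, here is what a memory-optimized table declaration looks like (the table, columns and bucket count are placeholders of my own, not from Kalen's paper):

-- Assumes a database that already has a MEMORY_OPTIMIZED_DATA filegroup.
-- Table, columns, and bucket count are illustrative placeholders.
CREATE TABLE dbo.OrderCart
(
    OrderId    INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId INT       NOT NULL,
    OrderTotal MONEY     NOT NULL,
    CreatedAt  DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);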

A second change that is quite impressive is the amount of data you can now place in your in-memory tables. The recommended capacity is 2TB, and that is a recommendation rather than an enforced maximum. Even so, that is a lot of data for an OLTP database.

Much of the T-SQL surface area has now been implemented for in-memory table queries. Before 2016, in-memory tables supported only a minimal subset of SQL statements and operators. Today the list includes LEFT and RIGHT OUTER JOIN, DISTINCT, the OR and NOT operators, UNION and much more. If you are using simple CRUD against your tables, this may not be all that important. However, it brings some real power to an already highly tuned powerhouse.
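
To give a feel for it, here is a minimal sketch of a natively compiled procedure that uses a LEFT OUTER JOIN and DISTINCT, constructs that were off limits in native compilation before 2016. The procedure and both tables are hypothetical, and both tables are assumed to be memory-optimized:

CREATE PROCEDURE dbo.GetCustomerOrders
    @CustomerId INT
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'English')
    -- DISTINCT and LEFT OUTER JOIN are newly allowed in natively compiled modules.
    SELECT DISTINCT c.CustomerId, o.OrderId, o.OrderTotal
    FROM dbo.Customers AS c
    LEFT OUTER JOIN dbo.OrderCart AS o
        ON o.CustomerId = c.CustomerId
    WHERE c.CustomerId = @CustomerId;
END;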

In-memory tables can now be altered, whereas before, once a table was created its structure was fixed; the only way to change it was to create a new table and migrate the data. You can now add, drop or alter columns, as well as add, drop or rebuild indexes.
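
For example, something along these lines now works against a memory-optimized table (the table, column and index names are again placeholders):

ALTER TABLE dbo.OrderCart
    ADD ShippedAt DATETIME2 NULL;          -- add a column

ALTER TABLE dbo.OrderCart
    ADD INDEX IX_OrderCart_CustomerId
        NONCLUSTERED (CustomerId);         -- add an index

ALTER TABLE dbo.OrderCart
    DROP INDEX IX_OrderCart_CustomerId;    -- drop it again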

Comparing this to any other optimization technique for your SQL Server, there is nothing that comes close for the money. Even SSD performance, which removes the overhead of a spinning disk, does not approach the gains of In-Memory OLTP, because it is still tied to the data structures and ACID implementation of a disk-based system. The paper claims a 100-fold performance increase over a disk-based database.

In-Memory OLTP is not a completely disk-free implementation. Checkpoint files are used to support your memory-optimized tables in the case of a database shutdown. When the database is brought back online, the in-memory tables are rebuilt from the checkpoint files.
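
The disk-side plumbing is just a special filegroup that holds those checkpoint files. A rough sketch, with a made-up database name and file path:

ALTER DATABASE SalesDb
    ADD FILEGROUP SalesDb_InMemory CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE SalesDb
    ADD FILE (NAME = 'SalesDb_InMemory_Container',
              FILENAME = 'C:\Data\SalesDb_InMemory_Container')
    TO FILEGROUP SalesDb_InMemory;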

If your database server isn’t quite keeping up with your load, perhaps In-memory could be a solution for you.

Cheers,

Ben