
Speed up your cloud-based SQL Server by 200% for free

by Alex Yumashev · Jun 22 2017

For those of you who run SQL Server on cloud VMs (AWS or Azure), here's an awesome performance hack I recently discovered. It's called "buffer pool extensions", and it was introduced in SQL Server 2014 ("Standard" edition or higher).


If your Amazon EC2 instance (or Azure VM) has a local SSD drive, run this command:

    ALTER SERVER CONFIGURATION
    SET BUFFER POOL EXTENSION ON
        (FILENAME = 'Z:\MyFastSSDCache.BPE', SIZE = 30 GB);

Assuming Z: is the SSD drive.

You get up to 170% faster I/O performance, because SQL Server will use it as an extra cache layer between RAM and the HDD.
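To confirm the feature took effect, you can query the configuration DMV (a sketch; the size column name is per the `sys.dm_os_buffer_pool_extension_configuration` view):

```sql
-- Verify the buffer pool extension is enabled and sized as expected
SELECT
    path,
    state_description,
    current_size_in_kb / 1024 / 1024 AS size_gb
FROM sys.dm_os_buffer_pool_extension_configuration;
```

If `state_description` says "BUFFER POOL EXTENSION ENABLED", you're good to go.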

Longer version

First, let's look at how SQL Server works with data - a very, very simplified version.

  1. SQL Server reads data pages from disk into memory and keeps them there for later access.
  2. This in-memory cache is called the "buffer pool", and it accounts for about 80% of how SQL Server consumes your server's memory.
  3. Now, when the server runs out of RAM, some of these pages are, obviously, evicted from memory on an "LRU" basis. So if you need that data again later, the server has to read it from disk again.
  4. (Usually only the "clean" data pages are purged from cache, but clean vs. dirty pages are a whole other story - let's not dive into that now.)
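You can peek at the buffer pool yourself. A quick sketch using the `sys.dm_os_buffer_descriptors` DMV (each page is 8 KB; `is_modified` flags the "dirty" pages mentioned above):

```sql
-- How much of the buffer pool each database occupies
SELECT
    DB_NAME(database_id) AS database_name,
    COUNT(*) * 8 / 1024  AS cached_mb,   -- 8 KB pages -> MB
    SUM(CASE WHEN is_modified = 1 THEN 1 ELSE 0 END) AS dirty_pages
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;
```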

Enter "buffer pool extensions"

Now, you can configure SQL Server to purge those "clean" pages from memory, but still keep 'em in "faster than HDD" storage - something faster than a magnetic hard drive, but slower than RAM. Bingo - an SSD drive.
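Once the feature is on, you can see how many pages have spilled over to the SSD: SQL Server 2014 added an `is_in_bpool_extension` column to `sys.dm_os_buffer_descriptors` for exactly this. A sketch:

```sql
-- Pages cached in RAM vs. clean pages parked on the SSD extension
SELECT
    is_in_bpool_extension,        -- 1 = page lives on the SSD extension
    COUNT(*) * 8 / 1024 AS size_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY is_in_bpool_extension;
```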

Microsoft introduced this feature back in 2013, when SSD drives were not that reliable but offered amazing I/O performance. We all needed a way to benefit from that super-fast I/O while still keeping the integrity of our data if that I/O storage suddenly goes away, crashes or gets corrupted in some way.

You might ask - how is that relevant today, when SSD drives became cheaper, faster and way more reliable?

Because AWS.

Cloud-storage is slow

Amazon's EBS drives are super slow, even compared to a 10-year-old magnetic hard drive, because EBS is, essentially, network storage. That's the price you pay for availability - the data is kept safe even if the server literally burns down. AWS has been trying to solve this problem for a while now: by introducing "Provisioned IOPS" volumes, "burstable" disks, advising you to combine slow drives into RAID arrays, etc.

But here's the thing. Most AWS "instances" (and Azure VMs) come with local "ephemeral" storage. Which, of course, is wiped every time you stop/start the server (because the server relaunches on a different physical machine). But this disk is an ideal candidate for "buffer pool extensions".

Just make sure you grant read/write permissions on that SSD to the account your SQL Server service runs under (the MSSQLSERVER service account), then run the self-explanatory command from the first paragraph.

Oh, and don't forget to read my epic blog post on what else can be moved onto an ephemeral drive. I keep updating it with new info every month.

Best practices

MS recommends sizing the buffer pool extension at 4 to 8 times the server's RAM (a 1:4 to 1:8 memory-to-extension ratio). If you have enough space, set it to the size of your database.
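A quick way to compute that range on your own box - a sketch reading physical RAM from `sys.dm_os_sys_memory`:

```sql
-- Suggested buffer pool extension size: 4x to 8x physical RAM
SELECT
    total_physical_memory_kb / 1024 / 1024       AS ram_gb,
    (total_physical_memory_kb / 1024 / 1024) * 4 AS bpe_min_gb,  -- 1:4 ratio
    (total_physical_memory_kb / 1024 / 1024) * 8 AS bpe_max_gb   -- 1:8 ratio
FROM sys.dm_os_sys_memory;
```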

Enabling this feature on a running production server is fine. But once it's enabled, avoid making changes in production: resizing or turning the feature on/off will severely impact performance for a couple of minutes.
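That's because resizing isn't a single command - you have to turn the extension off and back on, which throws away the SSD cache. A sketch (the file path and new size are just examples):

```sql
-- Resizing = off, then on with the new size; the SSD cache starts cold again
ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION OFF;

ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'Z:\MyFastSSDCache.BPE', SIZE = 50 GB);
```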

Remember that data pages are not the only thing SQL Server uses RAM for - cached execution plans, for example, will still stay in memory. Which is why a buffer pool extension is NOT a replacement for adding more RAM.
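You can see those other memory consumers via the memory clerks DMV - a sketch (the buffer pool shows up as `MEMORYCLERK_SQLBUFFERPOOL`, the plan cache as `CACHESTORE_SQLCP` / `CACHESTORE_OBJCP`):

```sql
-- Top memory consumers besides data pages (e.g. the plan cache)
SELECT TOP (5)
    type,
    SUM(pages_kb) / 1024 AS size_mb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY size_mb DESC;
```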

This feature is most effective for read-heavy OLTP workloads. It won't speed things up in a reporting/data-analysis environment with terabyte-scale storage.