Storage Spaces 2019 – Slow with parity

I’ve written a lot about Storage Spaces and slow performance. You can find some of my articles here:

We’ve done a lot of work on Storage Spaces recently to try to find out why our new parity array on Server 2019 was slow.

The hardware is the following:

  • Lenovo SR650
  • 32GB Memory
  • Xeon CPU
  • 12x 8TB 7200RPM 512e drives

When creating a Storage Space, the pool’s logical sector size is set from the disks. These are 512e drives, which report a 512-byte logical sector size even though their physical sector size is 4K, so the pool was created with 512-byte logical sectors instead of 4K.

You can either buy native 4K drives at the outset, or set your Storage Space to a 4K logical sector size.
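If you want to check what your drives report before you build the pool, the storage cmdlets will show you both sector sizes. A quick check, nothing here is specific to our setup:

# 512e drives report a 512-byte logical sector size and a 4096-byte physical sector size.
Get-PhysicalDisk | Select-Object FriendlyName, MediaType, LogicalSectorSize, PhysicalSectorSize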

Here is our Storage Space before the change:



From my techs

By default, when creating a storage pool, the logical sector size should match the largest physical sector size of the disks in the pool, which in this case would be 4K. Because these are 512e drives, however, the logical sector size stays at 512. This can cause roughly an 8x performance penalty, because the OS and disk controller have to perform so-called “RMW” (read-modify-write) operations: every 512-byte logical write to a 4K physical sector means reading the sector, modifying part of it, and writing it back. The picture below shows how performance is affected.

Once we changed our Storage Space to 4K, we got 120-160MB/s across the array, consistently. Before the change, we were getting 30-60MB/s with frequent stutters.
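For reference, the logical sector size is set when the pool is created, so the cleanest way to get a 4K pool out of 512e drives is to specify it up front. This is a rough sketch only; the pool name and subsystem wildcard are examples, not our exact commands:

# Create the pool with a 4K logical sector size (value is in bytes), then confirm it.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool01" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks -LogicalSectorSizeDefault 4096
Get-StoragePool -FriendlyName "Pool01" | Select-Object FriendlyName, LogicalSectorSize, PhysicalSectorSize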

I hope this helps.

Storage Spaces and Parity – Slow write speeds

Updated Post 2019

I’ve recently been playing around with Windows Storage Spaces on Microsoft Windows Server 2012 R2. They are fantastic. ReFS brings so many benefits over NTFS.

But it’s half complete it seems.

I originally created a parity volume, as I assumed it would be quite similar to RAID 6. You have the option of adding SSD drives as a tier or as a write cache, but I haven’t done that at this stage. I’m currently using 6x 6TB Western Digital 7200RPM drives.
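For anyone following along, creating a parity space from an existing pool looks roughly like this. A sketch only, with example names (“Backup”, “ParityDisk”) rather than my exact commands:

# Create a fixed parity virtual disk using all available pool space,
# then initialise, partition and format it as ReFS.
New-VirtualDisk -StoragePoolFriendlyName "Backup" -FriendlyName "ParityDisk" `
    -ResiliencySettingName Parity -ProvisioningType Fixed -UseMaximumSize |
    Get-Disk | Initialize-Disk -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel "Backup"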

After creating the very large volume, I started copying some data over a 1Gbit network interface, so I was expecting to see 100MB/s, or close to it.

At first, I did get 100MB/s, for a minute or so anyway. Then I saw the speed slowly drop to around 30-45MB/s. I thought this was rather strange.

I upgraded all the drivers on the server, mainly the network drivers, since I saw the network speed drop to around that level at the same time. However, this made no difference.

I then started to do some research to figure out what was going on.

What I saw was the following: memory usage was increasing to a certain, pre-defined point, then it would stop. This indicated that the copy was actually being buffered in memory (a write cache). I assume this is happening because I used the default options when creating a parity drive without an SSD array. This creates a 2GB buffer in memory, which you can clearly see here.

[Screenshot: memory usage climbing during the copy]

Once the memory buffer, or write cache, is full, you can see the speed drop as the buffered data starts being written out to disk.

[Screenshot: copy speed dropping once the memory buffer is full]
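As an aside, each virtual disk also has its own write-back cache setting on the pool, which is separate from the in-memory buffering above. You can see what a given space ended up with like this:

# Show the write-back cache size (in bytes) for each space in the pool.
Get-VirtualDisk | Select-Object FriendlyName, ResiliencySettingName, WriteCacheSize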

Annoying, huh? One way to fix this is by using a cache array of SSD drives, but there is another fix.

In PowerShell, you can set the storage pool to believe it has battery backup. This is like having a battery-backed write cache on a RAID card. First you need to get the friendly name of your storage pool.

The command is

Get-StoragePool

You will get something similar to the following
[Screenshot: Get-StoragePool output]

Now set the power protected mode of the pool as follows

Set-StoragePool -FriendlyName Backup -IsPowerProtected $true

Replace Backup with the name of your storage pool.
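To confirm the change took effect, query the pool again (using the same example name):

# IsPowerProtected should now report True.
Get-StoragePool -FriendlyName Backup | Select-Object FriendlyName, IsPowerProtected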

Here it is set as $false

[Screenshot: copy speed with IsPowerProtected set to $false]

Here it is set as $true

[Screenshot: copy speed with IsPowerProtected set to $true]

Quite a difference.

**** I should warn you though that if your server crashes, or has a power failure, your storage space may become corrupt. Make sure you have a UPS in place ****

Like I said earlier, this can be improved with an SSD cache array.
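If you do have SSDs in the pool, the write-back cache size can be specified when you create the virtual disk. A rough sketch, assuming a pool called Backup that already contains SSDs (the names and the 10GB figure are examples only):

# Create a parity space with a 10GB write-back cache
# (Storage Spaces should place it on the SSDs in the pool).
New-VirtualDisk -StoragePoolFriendlyName "Backup" -FriendlyName "ParityCached" `
    -ResiliencySettingName Parity -ProvisioningType Fixed -UseMaximumSize `
    -WriteCacheSize 10GB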

Hopefully this helps someone out there.

*** UPDATED 15/12/2015 ***

I highly recommend you view the Fujitsu white paper on Storage Spaces here.