Storage Spaces 2019 – Slow with parity

I’ve written a lot about Storage Spaces and slow performance in earlier articles.

We’ve done a lot of work on Storage Spaces recently to work out why our new parity array on Server 2019 was slow.

The hardware is the following:

  • Lenovo SR650
  • 32GB Memory
  • Xeon CPU
  • 12x 8TB 7200RPM 512e drives

When you create a storage pool, the logical sector size is taken from the disks. These are 512e drives, which report a 512-byte logical sector on top of a 4K physical sector, so the pool defaults to a 512-byte logical sector size.

You can either buy native 4K drives at the outset, or explicitly set your Storage Space to 4K.
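Setting the pool to 4K has to be done in PowerShell at pool-creation time. A sketch of how that might look (the pool name and disk selection here are examples, not our exact setup; the key part is the `-LogicalSectorSizeDefault` parameter on `New-StoragePool`):

```powershell
# Check what the disks report: 512e drives show Logical 512 / Physical 4096
Get-PhysicalDisk | Select-Object FriendlyName, LogicalSectorSize, PhysicalSectorSize

# Create the pool with a 4K logical sector size instead of the 512-byte default
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool01" `
                -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
                -PhysicalDisks $disks `
                -LogicalSectorSizeDefault 4096

# Confirm the setting took
Get-StoragePool -FriendlyName "Pool01" | Select-Object FriendlyName, LogicalSectorSize
```

Note that `LogicalSectorSizeDefault` is set at creation; an existing pool has to be rebuilt to change it.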

Here is our Storage Spaces setup before the change:

Here is the explanation from my techs:

By default, when creating a storage pool, the logical sector size should match the largest physical sector size in the pool, which in this case is 4K. But because these are 512e drives, the logical sector size stays at 512. This causes a performance penalty of up to 8x, because the OS and disk controller end up doing so-called “RMW” (read-modify-write) operations: every sub-4K write forces the controller to read the full 4K physical sector, modify part of it, and write the whole sector back.
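To make the RMW penalty concrete, here is a toy model (illustration only, not measured Storage Spaces behaviour) counting the bytes a 4K-physical drive must move to service one write:

```python
PHYSICAL = 4096  # physical sector size of a 512e drive, in bytes

def bytes_touched(write_size, logical_sector):
    """Bytes the drive must move to service one write of write_size bytes."""
    if logical_sector == PHYSICAL and write_size % PHYSICAL == 0:
        return write_size        # aligned full-sector write, no RMW needed
    # sub-sector write: read the whole 4K sector, patch it, write it back
    return 2 * PHYSICAL

print(bytes_touched(512, 512))    # 512-byte pool: 8192 bytes moved per write
print(bytes_touched(4096, 4096))  # 4K pool: 4096 bytes moved, no RMW
```

So a 512-byte logical write on this hardware moves 16x the data actually written, which is where the stuttering comes from.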

Once we changed our Storage Space to 4K, we started getting a consistent 120-160 MB/s across the array. Before the change, we were getting 30-60 MB/s with frequent stutters.

I hope this helps.


I’ve taken this post from Reddit. The author is BloodyIron.

So I just went down a rabbit hole for about three hours, as I NEEDED to know whether iSCSI and NFS can pass TRIM through their protocols to SSD-backed storage. Here are my findings.

  1. iSCSI is capable of passing TRIM correctly.
  2. NFS requires v4.2 on both server and client to pass TRIM correctly; earlier versions DO NOT.
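On the Linux client side, you can force and verify a 4.2 mount along these lines (hostname, export path, and mount point are placeholders; this needs root and a server that actually allows 4.2):

```shell
# Ask for NFS v4.2 explicitly; the mount fails or falls back if the server can't do it
mount -t nfs4 -o vers=4.2 nfs-server:/export/ssd /mnt/ssd-share

# Verify the negotiated version: look for vers=4.2 in the mount options
grep nfs4 /proc/mounts

# If 4.2 was negotiated, discards can pass through; run a one-off trim to check
fstrim -v /mnt/ssd-share
```

If `fstrim` reports “the discard operation is not supported”, the mount did not end up on 4.2 (or the backing storage doesn’t support discard).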

I cobbled this together from an eye-spinning number of sources on the internet, so if you feel you can conclusively prove me wrong, by all means do.

I’m primarily posting this for myself (as my blog/website is not yet production ready), and maybe it can help some other people who are looking.


  1. RHEL 7.4+ has official support for NFS v4.2
  2. FreeBSD: unsure when it will get NFS v4.2; I’m trying to find out, but so far I haven’t found any info
  3. Proxmox: to get it to use NFS v4.2 (not sure whether it can or not), you have to change the NFS mount options at the CLI; I’ve opened a feature request to add options/settings like this to the webGUI
  4. FreeNAS seems conclusively not to be NFS v4.2 capable as of this writing (since it relies on FreeBSD)
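For the Proxmox item above, the CLI change amounts to adding mount options to the NFS storage entry in `/etc/pve/storage.cfg`. A sketch of what that might look like (storage ID, server address, and paths are placeholders, and this assumes the kernel NFS client supports 4.2):

```
nfs: ssd-share
        server 192.168.1.10
        export /export/ssd
        path /mnt/pve/ssd-share
        content images
        options vers=4.2
```

The `options` line is passed through as NFS mount options, so this is equivalent to mounting with `-o vers=4.2` by hand.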

Hope this helps someone! 😀