
It is - depending on the read-vs-write workload. For our workload, we landed on a record size (blocksize) of 128K, which gives us 3x-5x compression. Contrary to the 8KB/16KB suggestions on the internet, our testing indicated 128K was the best option. And using compression allows us to run much smaller storage volume sizes in Azure (thus saving money).
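
For reference, a dataset with those properties can be created along these lines. The pool/dataset name and the lz4 algorithm are placeholders of mine; the comment only says compression is enabled, not which algorithm:

  # Hypothetical PGDATA dataset; pool/dataset name and lz4 are assumptions
  zfs create -o recordsize=128K -o compression=lz4 tank/pgdata

  # After loading data, check the achieved ratio (around 3x-5x in our case)
  zfs get compressratio tank/pgdata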

We did exhaustive testing of our use cases, and the best ZFS tuning options for Postgres we found (again, for our workload) were the following; a rough sketch of the corresponding settings is below the list:

  * Enable ZFS on-disk compression

  * Disable ZFS in-memory compression (enabling this option cost us a ~30% performance penalty)

  * Enable primary caching

  * Limit read-ahead caching
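
As a sketch only, those bullets map to something like the following OpenZFS dataset properties and module parameters; the dataset name and the exact knob chosen for "limit read-ahead" are my assumptions, not the poster's exact values:

  # On-disk compression (algorithm is an assumption; the post doesn't name one)
  zfs set compression=lz4 tank/pgdata

  # Keep the primary (ARC) cache enabled for both data and metadata
  zfs set primarycache=all tank/pgdata

  # Disable in-memory compression (compressed ARC)
  echo 0 > /sys/module/zfs/parameters/zfs_compressed_arc_enabled

  # Rein in read-ahead; disabling prefetch entirely is the bluntest option
  echo 1 > /sys/module/zfs/parameters/zfs_prefetch_disable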

Edit: Forgot to add, here are the required PGSQL options when using ZFS (a minimal postgresql.conf excerpt follows the list):

  * full_page_writes = off

  * wal_compression = off
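
In postgresql.conf that looks like the snippet below; the rationale comments are the usual reasoning for running Postgres on ZFS, not something the original comment spells out:

  # postgresql.conf
  # ZFS is copy-on-write, so torn (partial) page writes can't reach disk;
  # full-page images in the WAL are redundant
  full_page_writes = off

  # ZFS already compresses on disk, so compressing WAL again mostly burns CPU
  wal_compression = off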

Once the above options were set, we were getting close to EXT4 read/write speeds with the benefit of compression.

