While an all-NVMe deployment is the ideal to aim for, budget constraints often shape the configuration we actually deploy.

Whether we're deploying a cache-to-capacity solution with SATA SSD cache in front of spinning rust, or NVMe cache in front of SATA SSD capacity, there are a number of things to be mindful of before purchasing our Storage Spaces Direct (S2D)/Azure Stack HCI (ASHCI/AzSHCI) solution set.

Here are some suggestions on what to look at when deploying any hyper-converged (HCI) solution that incorporates a cache-to-capacity setup.
- Prefer a higher count of smaller-capacity cache devices
- The capacity drive count, including any drives to be added in the future, should be evenly divisible by the cache device count
- The minimum cache device count should be 3 or 4, depending on the capacity drive count
- Total cache volume should hold all of the workloads' churn so nothing spills to capacity
- Baseline workloads, and do so fairly often
- Use baseline growth to establish today's cache count, with a percentage added for growth over 24-36 months if planning to add cache devices down the road
- Cache endurance should be a minimum of 5 Drive Writes Per Day (DWPD)
- DWPD can be balanced against cache device size: a larger device with lower endurance is okay (over-provisioning)
- No baselines? Never, ever underestimate the needed cache capacity. Star Trek engineer's rule of thumb: promise it in two but deliver in one
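As a rough sketch of the sizing arithmetic behind the points above, assuming a measured daily churn baseline (all figures and function names here are illustrative, not from any vendor sizing tool):

```python
# Illustrative cache-sizing arithmetic (hypothetical numbers, not vendor guidance).

def required_cache_tb(daily_churn_tb, growth_pct_per_year, years):
    """Cache volume needed so the working set's churn never spills to
    capacity, grown forward over the planning horizon."""
    return daily_churn_tb * (1 + growth_pct_per_year / 100) ** years

def churn_dwpd(daily_churn_tb, total_cache_tb):
    """Drive Writes Per Day the cache tier must absorb from churn alone:
    total daily writes divided by total cache capacity. This is why a
    larger device with a lower endurance rating can be acceptable."""
    return daily_churn_tb / total_cache_tb

# Example: 4 TB of churn per day, 20% annual growth, 3-year horizon.
print(round(required_cache_tb(4.0, 20, 3), 1))  # ~6.9 TB of cache per node

# Four 2 TB cache devices (8 TB total) absorb that churn at:
print(round(churn_dwpd(4.0, 8.0), 2))           # 0.5 DWPD from churn alone
```

Note that the 5 DWPD minimum above is a device endurance rating to buy against; the churn-derived figure just shows how much headroom a given layout leaves.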
Cache Device Count and Ratio
Our preference is to always deploy more than two cache devices, no matter what. That way, if one cache device fails, we're not sitting on a node with 50% of its cache gone. That would hurt. Big time.
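The device-count and divisibility rules above can be checked mechanically; a minimal sketch (the drive counts are made up for illustration):

```python
# Sanity-check a cache-to-capacity layout (hypothetical drive counts).

def cache_layout_ok(cache_count, capacity_count):
    """S2D binds capacity drives to cache devices round-robin. An even
    split means every cache device fronts the same number of capacity
    drives, and losing one device costs 1/cache_count of the node's
    cache rather than half of it."""
    return cache_count >= 3 and capacity_count % cache_count == 0

print(cache_layout_ok(2, 8))   # False: one failure loses half the cache
print(cache_layout_ok(4, 8))   # True: 2 capacity drives per cache device
print(cache_layout_ok(3, 8))   # False: 8 capacity drives split unevenly
```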
Our cache-to-capacity ratio preference, depending on the storage layout in each node, is:
If S2D/ASHCI is to serve as a Scale-Out File Server cluster, then the cache volume would need to be adjusted based on the above or, in the case of archival and backup storage, reduced based on data ingest needs.
Microsoft: Planning volumes in Storage Spaces Direct
On a slightly different topic:
Our Blog: SSDs and Endurance
Thanks for reading!