We have been using Intel SATA SSDs in server settings for a very long time now.
We have yet to see a single server, across all of our virtualization platforms, exhaust any of the SSDs we’ve deployed.
We’ve also been deploying HGST SAS SSDs in production for a very long time now. Same story, no SSDs exhausted.
It’s simple to baseline our current workloads to see what the real-world write volumes are. We can baseline for a day, week, month, quarter, or even a year to get an idea of what’s happening with our workloads.
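As a sketch of that baselining step, assuming we can sample a cumulative host-bytes-written counter (from SMART data or perf counters) at two points in time, the daily write volume falls out directly. The counter values and sample window below are made up for illustration:

```python
# Estimate GB written per day from two samples of a cumulative
# "total bytes written" counter. All figures here are hypothetical.
def gb_written_per_day(bytes_start, bytes_end, hours_elapsed):
    gb = (bytes_end - bytes_start) / 1e9
    return gb * (24 / hours_elapsed)

# Example: 1.2 TB written over a one-week sample window.
daily = gb_written_per_day(0, 1.2e12, hours_elapsed=7 * 24)
print(f"{daily:.1f} GB/day")  # ~171.4 GB/day
```

The longer the sample window, the better it averages out bursty days like patch cycles or backup jobs.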
Read-intensive workloads tend to produce low Drive Writes Per Day (DWPD) counts.
We set up a calculator in Excel: plug in the drive’s capacity, warranty period, and TBs or PBs written endurance, and it reports the drive’s DWPD status.
- Intel SSD DC P4610 NVMe 3.2TB: DWPD 0.374 (37.4% of capacity per day)
- Intel SSD DC P4608 NVMe 6.4TB: DWPD 0.301 (30.1% of capacity per day)
- Intel SSD DC P4800X Optane 750GB: DWPD 29.954 (2,995.4% of capacity per day; the entire P4800X line is rated 30 DWPD)
- Intel SSD Pro 7600p NVMe Consumer 1TB: DWPD 0.316 (31.6% of capacity per day)
- Intel SSD DC P4511 NVMe M.2 2TB: DWPD 0.534 (53.4% of capacity per day)
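The Excel calculator above boils down to one formula: DWPD = endurance (total bytes written over the warranty) ÷ (warranty days × capacity). A minimal Python equivalent, using illustrative figures rather than any vendor’s official specs:

```python
# DWPD = rated endurance (TB written) / (warranty days * capacity in TB).
# Input figures below are illustrative, not quoted vendor specs.
def dwpd(tb_written_endurance, warranty_years, capacity_tb):
    warranty_days = warranty_years * 365
    return tb_written_endurance / (warranty_days * capacity_tb)

rating = dwpd(tb_written_endurance=2185, warranty_years=5, capacity_tb=3.2)
print(f"DWPD {rating:.3f} ({rating:.1%} of capacity per day)")
# DWPD 0.374 (37.4% of capacity per day)
```

Note that the percentage is just DWPD expressed per hundred: writing 0.374 drive capacities per day is, by definition, 37.4% of the drive’s capacity per day.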
Endurance, which is what the DWPD figures above reflect, is pretty good across the board. Would our workloads write that much _every_ day, day in and day out? We highly doubt it.
The only place we get concerned about DWPD is in our Hyper-Converged clusters, where cache endurance is important. In these settings we balance the size of the cache devices against their endurance, aiming for roughly 5 DWPD.
A higher-capacity drive with lower endurance can still meet that need (it’s called over-provisioning when we reduce the GB/TB made available at format time to “increase” endurance).
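That over-provisioning trade-off can be sketched numerically: the raw endurance budget stays fixed, so shrinking the formatted capacity raises the number of full "drive writes" that budget covers per day. The drive size and endurance figures below are hypothetical:

```python
# Effective DWPD for a drive formatted below its full capacity.
# All figures are hypothetical, not vendor specs.
def effective_dwpd(tb_written_endurance, warranty_years, formatted_tb):
    return tb_written_endurance / (warranty_years * 365 * formatted_tb)

# Hypothetical 3.84 TB drive, 7,000 TBW endurance, 5-year warranty.
full = effective_dwpd(7000, 5, 3.84)  # formatted at full capacity
op = effective_dwpd(7000, 5, 3.2)     # over-provisioned down to 3.2 TB
print(f"{full:.2f} DWPD at 3.84 TB vs {op:.2f} DWPD at 3.2 TB")
# 1.00 DWPD at 3.84 TB vs 1.20 DWPD at 3.2 TB
```

This sketch only captures the capacity arithmetic; in practice the gain is usually larger, since the extra spare area also reduces write amplification inside the drive.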
The key in all of this is knowing the baseline for GB written per day for our workloads.