To prevent an SSD from filling up with invalid pages, the SSD controller's garbage collection uses the over-provisioned memory as a temporary workspace. In this area, valid pages are consolidated and blocks containing only invalid (or deleted) pages are erased and reclaimed.
All reclaimed blocks are then returned to the over-provisioning pool, where they absorb write operations from the SSD controller and keep performance high under peak load. Without them, the read, erase, modify, and write cycle on blocks full of invalid pages would degrade performance.
Garbage collection runs independently of the operating system and is triggered automatically during periods of low activity, either at regular intervals or when the operating system issues the corresponding ATA Data Set Management TRIM command.
The free blocks available in the over-provisioning memory at any time also support wear leveling on the NAND flash: the SSD controller intelligently and evenly distributes write operations across all NAND flash memory cells, so that no region wears out prematurely and the overall performance of the SSD is maintained even under peak load.
In addition, the ATA Data Set Management TRIM command, which marks invalid pages and unused user capacity as free, can make more space available to the SSD controller.
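The interplay between garbage collection and the over-provisioning pool described above can be sketched in a few lines of Python. This is a deliberately simplified model with hypothetical block and page structures, not real firmware logic:

```python
# Simplified sketch of SSD garbage collection using an over-provisioning
# (OP) pool. Pages are modeled as the strings "valid"/"invalid"; real
# firmware tracks mappings, wear counters, and much more.

PAGES_PER_BLOCK = 4  # assumed block geometry for illustration

def garbage_collect(blocks, op_pool):
    """Consolidate valid pages and reclaim blocks full of invalid pages.

    blocks  -- list of blocks, each a list of pages ("valid" or "invalid")
    op_pool -- list of erased spare blocks from the over-provisioning area
               (assumed large enough to hold all surviving pages)
    """
    # Collect all still-valid pages from the blocks being cleaned.
    survivors = [p for block in blocks for p in block if p == "valid"]

    # Rewrite the surviving pages into fresh blocks taken from the OP pool.
    rewritten = []
    while survivors:
        target = op_pool.pop()               # take an erased spare block
        target[:] = survivors[:PAGES_PER_BLOCK]
        survivors = survivors[PAGES_PER_BLOCK:]
        rewritten.append(target)

    # Every old block is erased and returned to the OP pool.
    op_pool.extend([] for _ in blocks)
    return rewritten, op_pool
```

After one pass, a block that held two valid and two invalid pages contributes only its valid pages to a fresh block, and both old blocks go back to the pool as erased spares, ready to absorb the next burst of writes.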
To help you understand how OP works, we’ll take a closer look at Kingston’s enterprise SSDs, the DC400 series, for illustration. These SSDs come in capacities of up to 1.8TB, and users can customize their over-provisioning with Kingston SSD Manager. By adjusting the OP size to 7% or higher, users can trade usable capacity for gains in performance and lifetime.
Fig. 3 – Over-Provisioning based on storage capacity and user class
In Fig. 3 we compare the paired DC400 storage capacities (400/480GB, 800/960GB, 1600/1800GB) at different OP levels.
Comparing these capacity pairs, we can see the following:
- The drive with more storage capacity (less OP) in each pair maintains the same transfer speed (bandwidth), but its random write IOs per second (IOPS) are significantly lower. This means that drives with lower OP perform well in read-intensive applications, but can be slower in write-intensive applications than drives with 28% OP.
- Lower over-provisioning also means that the rated total bytes written (TBW), expressed in terabytes, is lower for each drive. The higher the OP percentage, the longer an SSD can be used: the DC400 with 960GB is rated for up to 564TB of written data, whereas the DC400 with 800GB (28% OP) is rated for 860TBW. Kingston derives the TBW ratings from the JEDEC workloads (see References).
- If the TBW rating is converted to drive writes per day (DWPD) over the warranty period, we can see that the 28% OP drives allow almost double the number of writes per day. This is why 28% OP is recommended for more write-intensive applications.
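The DWPD conversion above is straightforward arithmetic: divide the TBW rating by the user capacity multiplied by the number of days in the warranty period. The TBW figures come from the text; the 5-year warranty period is an assumption for illustration:

```python
# DWPD = TBW / (user capacity x warranty days).
# TBW values are from the text; the 5-year warranty is an assumption.

def dwpd(tbw_tb, capacity_gb, warranty_years=5):
    return tbw_tb * 1000 / (capacity_gb * 365 * warranty_years)

print(f"960GB,  7% OP, 564TBW: {dwpd(564, 960):.2f} DWPD")  # ~0.32
print(f"800GB, 28% OP, 860TBW: {dwpd(860, 800):.2f} DWPD")  # ~0.59
```

Under these assumptions the 28% OP drive sustains roughly 1.8 times the daily writes of its 7% OP sibling, which matches the "almost double" observation.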
References: JESD219: Solid-State Drive (SSD) Endurance Workloads, JEDEC Committee, https://www.jedec.org/standards-documents/docs/jesd219a. These client and enterprise workloads are the industry standard for evaluating SSD endurance, and the rated TBW values are derived from them. Please note that your workload may differ, so the rated TBW specification may be above or below what your drive achieves, due to the write amplification factor (WAF) unique to your application.
Picture Copyrights: Kingston Technology Corporation
Michael Nuncic has been Marketing Communications Manager at the German Ontrack Data Recovery office in Böblingen for more than 5 years. Highly experienced in computer, network, and software topics, he has been a professional editor of blog posts and technical articles for almost 20 years.