
The future looks bright for HDD…with a few exceptions (Part 1)

[Image: hard disk drive]

Even though sales of chip-based SSDs are on the rise, traditional magnetic hard disk drives (HDDs) remain quite popular, both for private use and in companies. Despite the decline in SSD prices over recent years, HDDs are still cheaper per unit of storage.

The producers of traditional HDDs know that they must keep delivering technological advances to ensure SSDs do not make the HDD obsolete. There are significant differences between SSD and HDD technology. One is that SSD cells lose life expectancy with every write, while the main reasons for choosing an HDD are its price per terabyte and the comparative ease of data recovery in the event of a loss. These advantages over SSD technology can vanish, however, and producers are eager to find new HDD technologies to keep or widen the gap between the two competitors.

The technological evolution of HDDs

IBM introduced the first hard disk drive in 1956. Since then the industry has increased storage capacity exponentially to meet an ever-growing need. With the introduction of consumer electronics that record and play video in 4K, and the demand in enterprises for processing so-called Big Data, that need keeps growing. For decades HDD manufacturers relied on a method called longitudinal magnetic recording (LMR) to record data on drives. In longitudinal recording, the magnetization of each data bit (i.e., the binary digit 0 or 1) lies horizontally, parallel to the disk (or disks) spinning inside the hard drive.

The problem with this method is that we are rapidly approaching the point where the microscopic magnetic grains on the disk become so tiny that they start to interfere with one another and lose their ability to hold their magnetic orientation. The resulting data corruption would render a hard drive unreliable, and thus unusable. This phenomenon is known as the superparamagnetic effect (SPE). One way to overcome SPE is to improve coercivity, the ability of a bit to retain its magnetic charge.
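The stability margin against SPE is commonly expressed (a standard figure from recording physics, not stated in this article) as the ratio of a grain's magnetic anisotropy energy to the thermal energy:

```latex
\frac{K_u V}{k_B T} \gtrsim 60
```

where \(K_u\) is the anisotropy constant, \(V\) the grain volume, \(k_B\) Boltzmann's constant, and \(T\) the temperature. Shrinking \(V\) to pack bits tighter must be compensated by a higher \(K_u\), and thus higher coercivity, to keep bits stable over a drive's lifetime of roughly ten years.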

Perpendicular magnetic recording

That’s why perpendicular magnetic recording (PMR) was invented; Toshiba first brought it to market in August 2005, and Western Digital and Seagate followed with their own PMR HDD products just a couple of months later. In PMR, the magnetic dipole moments that represent a logic bit (read back using a PRML, i.e. partial-response maximum-likelihood, channel) are aligned not parallel to the surface of the disk, but perpendicular to it. In other words, the bits extend to a certain degree into the depth of the medium, which allows a much higher data density (roughly three times as dense) than its precursor LMR. The same platter surface can therefore accommodate considerably more data.

This is also the drawback of the technology: the smaller Weiss domains (the magnetized regions in the crystals of a ferromagnetic material) require a shorter distance between the read/write head and the magnetic surface for data to still be read or written. The technique is therefore difficult to realize and reaches a natural end at a certain point, since the heads cannot be designed any smaller.

But even with this problem, PMR is still the standard for hard disk recording today. The move to PMR increased the maximum platter density by an order of magnitude, from about 100 Gb to 1,000 Gb per square inch, but we are now beginning to hit the limits of PMR. Currently, 8 TB HDDs with six (!) platters inside are hitting the market, and experts expect 10 TB HDDs based on PMR technology as well. It is still unclear whether manufacturers will be able to shrink the heads beyond this point and fit more platters into an HDD.
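As a rough plausibility check of the density figure above, the upper PMR areal density can be converted into per-platter capacity. The platter radii below are assumptions for a typical 3.5-inch drive, not figures from the article:

```python
import math

# Hypothetical usable recording area of a 3.5-inch platter (assumed radii).
outer_radius_in = 1.8    # usable outer radius, inches
inner_radius_in = 0.6    # usable inner radius, inches
areal_density_gb = 1000  # Gb per square inch (upper PMR figure from the text)

# Recording surface is an annulus between the two radii.
usable_area = math.pi * (outer_radius_in**2 - inner_radius_in**2)  # in^2

# Gb -> GB (divide by 8) -> TB (divide by 1000), per platter side.
capacity_tb_per_side = usable_area * areal_density_gb / 8 / 1000

print(f"usable area: {usable_area:.1f} in^2")
print(f"~{capacity_tb_per_side:.2f} TB per platter side")
```

Roughly 1.1 TB per side lands in the same ballpark as the 8 TB, six-platter drives the text mentions, once formatting overhead and conservative production densities are accounted for.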

Solving the problem?

To address this, a new technology known as Shingled Magnetic Recording (SMR), still based on the original PMR method (hence sometimes called PMR+), was introduced in 2013. It raises track density to gain roughly a 25 percent jump in capacity. Simply put, it increases the number of data tracks per inch by squeezing them together so that they overlap slightly, like shingles on the roof of a house.

This new method comes with a problem of its own. Because the tracks overlap, writing a track partially overwrites the adjacent one, which then needs a rewrite itself. When a track is modified, all the following tracks must be rewritten from the point of change onward; to keep that from cascading across the whole disk, SMR drives group tracks into bands separated by small gaps, so a rewrite stops at the band boundary. This necessary refresh of neighboring tracks reduces write speed in general, so SMR trades speed for capacity.
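The rewrite cascade within a band can be sketched in a few lines. This is a hypothetical model to illustrate the write amplification, not a real drive interface; the class and its names are invented for illustration:

```python
# Minimal model of one SMR band: modifying a track forces a rewrite of
# every later track in the same band, because the shingled (overlapping)
# layout means the write head clips the neighboring track.

class SmrBand:
    def __init__(self, num_tracks):
        self.tracks = [None] * num_tracks  # logical contents of each track

    def write(self, index, data):
        """Write `data` to track `index` and return how many tracks
        were physically rewritten (target plus all later tracks)."""
        affected = self.tracks[index:]  # snapshot before modification
        self.tracks[index] = data
        # Re-stage the following tracks: their data is preserved,
        # but each one costs an extra physical write.
        for i, old in enumerate(affected[1:], start=index + 1):
            self.tracks[i] = old
        return len(affected)

band = SmrBand(num_tracks=8)
print(band.write(7, "x"))  # last track in the band: 1 physical write
print(band.write(0, "y"))  # first track: all 8 tracks rewritten
```

Writing near the end of a band is cheap, while writing near its start rewrites almost the whole band; the gaps between bands exist precisely to cap this cost.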

In the second part of this article we will highlight “The future technologies of HDD”…


Picture Copyright: Tim Reckmann  / pixelio.de

2 Responses to "The future looks bright for HDD…with a few exceptions (Part 1)"

  • Jared Palmer
    15th October 2016 - 9:08 pm

    As much as I’d love to see the HDD stick around forever (working in data recovery especially), I think this new tech is only delaying the inevitable. SSDs will eventually replace the HDD completely, drastically changing the data recovery field. Already we are starting to see very high-density NAND chips whose capacity increases exponentially every year. With 3D XPoint memory about to make its inroads into the consumer market, this will jump yet again. It’s been a nice long road for the old spinners, but within another decade they’ll go the way of the vacuum tube and the mechanical relay in computers. It’s honestly amazing they’ve lasted as long as they have without solid state taking over.

  • Anonymous
    5th July 2017 - 5:14 am

    It is an informative post for knowledge.
