Why the Future of Data Storage is (Still) Magnetic Tape – IEEE Spectrum

It should come as no surprise that recent advances in big-data analytics and artificial intelligence have created strong incentives for enterprises to amass information about every measurable aspect of their businesses. And financial regulations now require organizations to keep records for much longer periods than they had to in the past. So companies and institutions of all stripes are holding onto more and more.
Studies show that the amount of data being recorded is increasing at 30 to 40 percent per year. At the same time, the capacity of modern hard drives, which are used to store most of this information, is increasing at less than half that rate. Fortunately, much of this information doesn’t need to be accessed instantly. And for such data, magnetic tape is the perfect solution.
Seriously? Tape? The very idea may evoke images of reels rotating fitfully next to a bulky mainframe in an old movie like Desk Set or Dr. Strangelove. So, a quick reality check: Tape has never gone away!
Indeed, much of the world’s data is still kept on tape, including data for basic science, such as particle physics and radio astronomy, human heritage and national archives, major motion pictures, banking, insurance, oil exploration, and more. There is even a cadre of people (including me, trained in materials science, engineering, or physics) whose job it is to keep improving tape storage.
Tape has been around for a long while, yes, but the technology hasn’t been frozen in time. Quite the contrary. Like the hard disk and the transistor, magnetic tape has advanced enormously over the decades.
The first commercial digital-tape storage system, IBM’s Model 726, could store about 1.1 megabytes on one reel of tape. Today, a modern tape cartridge can hold 15 terabytes. And a single robotic tape library can contain up to 278 petabytes of data. Storing that much data on compact discs would require more than 397 million of them, which if stacked would form a tower more than 476 kilometers high.
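The CD comparison can be checked with quick arithmetic. This sketch assumes a standard 700-megabyte CD that is 1.2 millimeters thick; those two figures are assumptions on my part, not numbers from the article:

```python
# Back-of-the-envelope check of the CD comparison.
# Assumed (not from the article): a CD holds 700 MB and is 1.2 mm thick.
PB = 1e15  # bytes per petabyte (decimal convention used by storage vendors)
MB = 1e6   # bytes per megabyte

library_bytes = 278 * PB
cd_bytes = 700 * MB
cd_thickness_m = 1.2e-3

num_cds = library_bytes / cd_bytes
stack_height_km = num_cds * cd_thickness_m / 1000

print(f"{num_cds / 1e6:.0f} million CDs")        # roughly 397 million
print(f"stack about {stack_height_km:.0f} km")   # just over 476 km
```

Both results land on the figures quoted above, which suggests the article's comparison assumes the same ordinary 700 MB disc.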
It’s true that tape doesn’t offer the fast access speeds of hard disks or semiconductor memories. Still, the medium’s advantages are many. To begin with, tape storage is more energy efficient: Once all the data has been recorded, a tape cartridge simply sits quietly in a slot in a robotic library and doesn’t consume any power at all. Tape is also exceedingly reliable, with error rates that are four to five orders of magnitude lower than those of hard drives. And tape is very secure, with built-in, on-the-fly encryption and additional security provided by the nature of the medium itself. After all, if a cartridge isn’t mounted in a drive, the data cannot be accessed or modified. This “air gap” is particularly attractive in light of the growing rate of data theft through cyberattacks.
The offline nature of tape also provides an additional line of defense against buggy software. For example, in 2011, a flaw in a software update caused Google to accidentally delete the saved email messages in about 40,000 Gmail accounts. That loss occurred despite there being several copies of the data stored on hard drives across multiple data centers. Fortunately, the data was also recorded on tape, and Google could eventually restore all the lost data from that backup.
The 2011 Gmail incident was one of the first disclosures that a cloud-service provider was using tape for its operations. More recently, Microsoft let it be known that its Azure Archive Storage uses IBM tape storage equipment.
All these pluses notwithstanding, the main reason why companies use tape is usually simple economics. Tape storage costs one-sixth the amount you’d have to pay to keep the same amount of data on disks, which is why you find tape systems almost anyplace where massive amounts of data are being stored. But because tape has now disappeared completely from consumer-level products, most people are unaware of its existence, let alone of the tremendous advances that tape recording technology has made in recent years and will continue to make for the foreseeable future.
All this is to say that tape has been with us for decades and will be here for decades to come. How can I be so sure? Read on.
Tape has survived for as long as it has for one fundamental reason: It’s cheap. And it’s getting cheaper all the time. But will that always be the case?
You might expect that if the ability to cram ever more data onto magnetic disks is diminishing, so too must this be true for tape, which uses the same basic technology but is even older. The surprising reality is that for tape, this scaling up in capacity is showing no signs of slowing. Indeed, it should continue for many more years at its historical rate of about 33 percent per year, meaning that you can expect a doubling in capacity roughly every two to three years. Think of it as a Moore’s Law for magnetic tape.
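The doubling time follows directly from the growth rate; here is a one-line compound-growth check (33 percent is the article's figure, the formula is standard):

```python
import math

# Years to double at ~33 percent annual areal-density growth:
# solve 1.33**t == 2 for t.
annual_growth = 1.33
doubling_years = math.log(2) / math.log(annual_growth)
print(f"{doubling_years:.1f} years")  # ~2.4 years
```

At the historical hard-drive rate of 40 percent, the same formula gives about 2.1 years, which is why that era felt like a true Moore's Law cadence.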
That’s great news for anyone who has to deal with the explosion in data on a storage budget that remains flat. To understand why tape still has so much potential relative to hard drives, consider the way tape and hard drives evolved.
Both rely on the same basic physical mechanisms to store digital data. They do so in the form of narrow tracks in a thin film of magnetic material in which the magnetism switches between two states of polarity. The information is encoded as a series of bits, represented by the presence or absence of a magnetic-polarity transition at specific points along a track. Since the introduction of tape and hard drives in the 1950s, the manufacturers of both have been driven by the mantra “denser, faster, cheaper.” As a result, the cost of both, in terms of dollars per gigabyte of capacity, has fallen by many orders of magnitude.
These cost reductions are the result of exponential increases in the density of information that can be recorded on each square millimeter of the magnetic substrate. That areal density is the product of the recording density along the data tracks and the density of those tracks in the perpendicular direction.
Early on, the areal densities of tapes and hard drives were similar. But the much greater market size and revenue from the sale of hard drives provided funding for a much larger R&D effort, which enabled their makers to scale up more aggressively. As a result, the current areal density of high-capacity hard drives is about 100 times that of the most recent tape drives.
Nevertheless, because they have a much larger surface area available for recording, state-of-the-art tape systems provide a native cartridge capacity of up to 15 TB—greater than the highest-capacity hard drives on the market. That’s true even though both kinds of equipment take up about the same amount of space.
img Inside and Out: A modern Linear Tape-Open (LTO) tape cartridge consists of a single reel. After the cartridge is inserted, the tape is fed automatically to a reel built into the drive mechanism. Photo: Victor Prado
With the exception of capacity, the performance characteristics of tape and hard drives are, of course, very different. The long length of the tape held in a cartridge—normally hundreds of meters—results in average data-access times of 50 to 60 seconds compared with just 5 to 10 milliseconds for hard drives. But the rate at which data can be written to tape is, surprisingly enough, more than twice the rate of writing to disk.
Over the past few years, the areal density scaling of data on hard disks has slowed from its historical average of around 40 percent a year to between 10 and 15 percent. The reason has to do with some fundamental physics: To record more data in a given area, you need to allot a smaller region to each bit. That in turn reduces the signal you can get when you read it. And if you reduce the signal too much, it gets lost in the noise that arises from the granular nature of the magnetic grains coating the disk.
It’s possible to reduce that background noise by making those grains smaller. But it’s difficult to shrink the magnetic grains beyond a certain size without compromising their ability to maintain a magnetic state in a stable way. The smallest size that’s practical to use for magnetic recording is known in this business as the superparamagnetic limit. And disk manufacturers have reached it.
Until recently, this slowdown was not obvious to consumers, because disk-drive manufacturers were able to compensate by adding more heads and platters to each unit, enabling a higher capacity in the same size package. But now both the available space and the cost of adding more heads and platters are limiting the gains that drive manufacturers can make, and the plateau is starting to become apparent.
There are a few technologies under development that could enable hard-drive scaling beyond today’s superparamagnetic limit. These include heat-assisted magnetic recording (HAMR) and microwave-assisted magnetic recording (MAMR), techniques that enable the use of smaller grains and hence allow smaller regions of the disk to be magnetized. But these approaches add cost and introduce vexing engineering challenges. And even if they are successful, the scaling they provide is, according to manufacturers, likely to remain limited. Western Digital Corp., for example, which recently announced that it will probably begin shipping MAMR hard drives in 2019, expects that this technology will enable areal density scaling of only about 15 percent per year.
In contrast, tape storage equipment currently operates at areal densities that are well below the superparamagnetic limit. So tape’s Moore’s Law can go on for a decade or more without running into such roadblocks from fundamental physics.
Still, tape is a tricky technology. Its removable nature, the use of a thin polymer substrate rather than a rigid disk, and the simultaneous recording of up to 32 tracks in parallel create significant hurdles for designers. That’s why my research team at the IBM Research–Zurich lab has been working hard to find ways to enable the continued scaling of tape, either by adapting hard-drive technologies or by inventing completely new approaches.
In 2015, we and our collaborators at FujiFilm Corp. showed that by using ultrasmall barium ferrite particles oriented perpendicular to the tape, it’s possible to record data at more than 12 times the density achievable with today’s commercial technology. And more recently, in collaboration with Sony Storage Media Solutions, we demonstrated the possibility of recording data at an areal density that is about 20 times the current figure for state-of-the-art tape drives. To put this in perspective, if this technology were to be commercialized, a movie studio, which now might need a dozen tape cartridges to archive all the digital components of a big-budget feature, would be able to fit all of them on a single tape.
img A Data Deluge: Modern tape libraries can hold hundreds of petabytes, whereas the IBM 726 (right), introduced in 1952, could store just a couple of megabytes. Photos: David Parker/Science Source; right: IBM
To enable this degree of scaling, we had to make a bunch of technical advances. For one, we improved the ability of the read and write heads to follow the slender tracks on the tape, which were just 100 or so nanometers wide in our latest demo.
We also had to reduce the width of the data reader—a magnetoresistive sensor used to read back the recorded data tracks—from its current micrometer size to less than 50 nm. As a result, the signal we could pick up with such a tiny reader got very noisy. We compensated by increasing the signal-to-noise ratio inherent to the media, which is a function of the size and orientation of the magnetic particles, as well as their composition and the smoothness and slickness of the tape surface. To help further, we improved the signal processing and error-correction schemes our equipment employed.
To ensure that our new prototype media can retain recorded data for decades, we changed the nature of the magnetic particles in the recording layer, making them more stable. But that change made it harder to record the data in the first place, to the extent that a normal tape transducer could not reliably write to the new media. So we used a special write head that produces magnetic fields much stronger than a conventional head could provide.
Combining these technologies, we were able to read and write data in our laboratory system at a linear density of 818,000 bits per inch. (For historical reasons, tape engineers around the world measure data density in inches.) In combination with the 246,200 tracks per inch that the new technology can handle, our prototype unit achieved an areal density of 201 gigabits per square inch. Assuming that one cartridge can hold 1,140 meters of tape—a reasonable assumption, based on the reduced thickness of the new tape media we used—this areal density corresponds to a cartridge capacity of a whopping 330 TB. That means that a single tape cartridge could record as much data as a wheelbarrow full of hard drives.
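The areal figure is just the product of those two densities; a quick check using only the numbers quoted above:

```python
# Areal density = linear density along the track x track density across the tape.
linear_density_bpi = 818_000   # bits per inch along each track
track_density_tpi = 246_200    # tracks per inch across the tape width

areal_density_gbpsi = linear_density_bpi * track_density_tpi / 1e9
print(f"{areal_density_gbpsi:.0f} Gb per square inch")  # 201 Gb per square inch
```

Note that the 330 TB cartridge capacity does not follow from naively multiplying areal density by tape length and width, since not all of the half-inch tape surface is usable for data; the figure also folds in formatting and error-correction overhead.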
In 2015, the Information Storage Industry Consortium, an organization that includes HP Enterprise, IBM, Oracle, and Quantum, along with a slew of academic research groups, released what it called the “International Magnetic Tape Storage Roadmap.” That forecast predicted that the areal density of tape storage would reach 91 Gb per square inch by 2025. Extrapolating the trend suggests that it will surpass 200 Gb per square inch by 2028.
The authors of that road map each had an interest in the future of tape storage. But you needn’t worry that they were being too optimistic. The laboratory experiments that my colleagues and I have recently carried out demonstrate that 200 Gb per square inch is perfectly possible. So the feasibility of keeping tape on the growth path it’s had for at least another decade is, to my mind, well assured.
Indeed, tape may be one of the last information technologies to follow a Moore’s Law–like scaling, maintaining that for the next decade, if not beyond. And that streak in turn will only increase the cost advantage of tape over hard drives and other storage technologies. So even though you may rarely see it outside of a black-and-white movie, magnetic tape, old as it is, will be here for years to come.
This article appears in the September 2018 print issue as “Tape Storage Mounts a Comeback.”
A patent was granted that lists an AI system as the creator of a food container
Kathy Pretz is editor in chief for The Institute, which covers all aspects of IEEE, its members, and the technology they’re involved in. She has a bachelor’s degree in applied communication from Rider University, in Lawrenceville, N.J., and holds a master’s degree in corporate and public communication from Monmouth University, in West Long Branch, N.J.
The South African patent office made history in July when it issued a patent that listed an artificial intelligence system as the inventor.
The patent is for a food container that uses fractal designs to create pits and bulges in its sides. Designed for the packaging industry, the new configuration allows containers to fit more tightly together so they can be transported better. The shape also makes it easier for robotic arms to pick up the containers.
The patent’s owner, AI pioneer Stephen L. Thaler, created the inventor, the AI system known as Dabus (device for the autonomous bootstrapping of unified sentience).
The patent success in South Africa was thanks to Thaler’s attorney, Ryan Abbott.
Abbott and his team filed applications in 2018 and 2019 in 17 patent offices around the world, including in the United States, several European countries, China, Japan, and India.
The European Patent Office (EPO), the U.K. Intellectual Property Office (UKIPO), the U.S. Patent and Trademark Office (USPTO), and Intellectual Property (IP) Australia all denied the application, but Abbott filed appeals. He won an appeal in August, when the Federal Court of Australia ruled that the AI system can be an inventor under the country’s 1990 Patents Act.
The EPO Boards of Appeal and the U.K. Court of Appeal recently ruled that only humans can be inventors. Abbott is asking the U.K. Supreme Court to allow him to challenge that point. He says he expects a decision to be made this year.
Abbott, a physician as well as a lawyer, is a professor of law and health sciences at the University of Surrey’s School of Law. He also is an adjunct assistant professor at the Geffen School of Medicine at the University of California, Los Angeles, and he wrote The Reasonable Robot: Artificial Intelligence and the Law.
He spoke about the decision by South Africa during the Artificial Intelligence and the Law virtual event held in September by the IEEE student branch at the University of South Florida, in Tampa. The event was a collaboration among the branch and several other IEEE groups including Region 3 and Region 8, the Africa Council, the University of Cape Town student branch, the Florida Council, and the Florida West Coast Section. More than 340 people attended. Abbott’s talk is available on IEEE.tv.
The Institute recently interviewed Abbott to find out how an AI entity could invent something, the nuances of patent law, and the impact the Australian and South African decisions could have on human inventors. The interview has been condensed and edited for clarity.
In 2014 Abbott began noticing that companies were increasingly using AI to do a variety of tasks including creating designs. A neural network–based system can be trained on data about different types of car suspensions, for example, he says. The network can then alter the training data, thereby generating new designs.
A second network, which he calls a critical neural network, can monitor and evaluate the output. If you tell the AI system how to evaluate new designs, and that you are looking for a car suspension that can reduce friction better than existing designs, it can alert you when a design comes out that meets that criterion, Abbott says.
“Some of the time, the AI is automating the sort of activity that makes a human being an inventor on a patent,” he says. “It occurred to me that this sort of thing was likely to become far more prevalent in the future, and that it had some significant implications for research and development.”
Some patent applicants have been instructed by their attorney to use a person’s name on the patent even if a machine came up with the invention.
But Abbott says that’s a “short-sighted approach.” If a lawsuit is filed challenging a patent, the listed inventor could be deposed as part of the proceedings. If that person couldn’t prove he or she was the inventor, the patent couldn’t be enforced. Abbott acknowledges that most patents are never litigated, but he says it still is a concern for him.
Meanwhile he found that companies using AI to invent were growing worried.
“AI is automating the sort of activity that makes a human being an inventor on a patent.”
“It wasn’t clear what would happen if you didn’t have a human inventor on a patent,” he says. “There was no law on it anywhere. Just a bunch of assumptions.”
He and a group of patent lawyers decided to seek out a test case to help establish a legal precedent. They approached Thaler, founder of Imagination Engines, in St. Charles, Mo. The company develops artificial neural network technology and associated products and services. Thaler created Dabus in part to devise and develop new ideas. He had Dabus generate the idea for a new type of food container, but he did not specifically instruct the system what to invent, nor did he do anything that would traditionally qualify him as the inventor.
The lawyers decided the food container design was patentable because it met all the substantive criteria: It was new, not obvious, useful, and appropriate subject matter.
They filed Thaler’s application in the U.K. and in Europe first because, Abbott says, those jurisdictions don’t initially require an application to list an inventor.
The patent offices did their standard evaluations and found the application to be “substantively patentable in preliminary examination.”
Next the lawyers adjusted the application to list Dabus as the inventor.
Typically an inventor’s employer is made the owner of a patent. Even though Dabus is not an employee, Abbott says, “We argue that Dr. Thaler is entitled to own the patents under the general principles of property ownership—such as a rule called accession—which refers to owning some property by virtue of owning some other property. If I own a fruit tree, I own fruit from that tree. Or, if Dabus had been a 3D printer and made a 3D-printed beverage container, Thaler would own that.”
Abbott says he believes the decisions in Australia and South Africa will encourage people to build and use machines that can generate inventive output and use them in research and development. That would in turn, he says, promote the commercialization of new technologies.
He says he hopes the decisions also encourage people to be open about whether their invention was developed by a machine.
“The reason we have a patent system is to get people to disclose inventions to add to the public store of knowledge in return for these monopoly rights,” he says.
Human inventors likely will face more competition from AI in the future, he says.
“AI hasn’t gotten to the point where it is going to be driving mass automation of research,” he says. “When it does, it will likely be in certain areas where AI has natural advantages, like discovering and repurposing medicines. In the medium term, there will still be plenty of ways for human researchers to stay busy while society gets to enjoy dramatic advances in research.”
You can listen to an interview with Abbott on IEEE Spectrum’s Fixing the Future podcast: Can a Robot Be Arrested? Hold a Patent? Pay Income Taxes?
Exaggerated stopping movements help pedestrians read autonomous cars’ minds
Edd Gent is a freelance science and technology writer based in Bangalore, India. His writing focuses on emerging technologies across computing, engineering, energy and bioscience. He’s on Twitter at @EddytheGent and email at edd dot gent at outlook dot com.
Judging whether it’s safe to cross the road often involves a complex exchange of social cues between pedestrian and driver. But what if there’s no one behind the wheel? As self-driving cars become more common, helping them communicate with human road users is crucial. Autonomous vehicle company Motional thinks making the vehicles more expressive could be the key.

When he’s waiting at a crosswalk, Paul Schmitt, chief engineer at Motional, engages in what he calls the “glance dance”—a rapid and almost subconscious assessment of where an oncoming driver is looking and whether they’re aware of him. “With automated vehicles, half of that interaction no longer exists,” he says. “So what cues are then available for the pedestrian to understand the vehicle’s intentions?”

To answer that question, his team hired animation studio CHRLX to create a highly realistic virtual reality (VR) experience designed to test pedestrian reactions to a variety of different signaling schemes. Reporting their results in IEEE Robotics and Automation Letters, they showed that exaggerating the car’s motions—by braking earlier or stopping well short of the pedestrian—was the most effective way to communicate its intentions.

The company is now in the process of integrating the most promising expressive behaviors into their motion planning systems, and they’ve also open sourced the VR environment so other groups can experiment. Getting these kinds of interactions right will be essential for building trust in self-driving cars, says Schmitt, as this is likely to be most people’s first encounter with the technology.

“That motivates a lot of the work that we’re doing, to ensure that those first interactions go well,” he says. “We want to make sure that people feel comfortable with this new technology.”

The study carried out by Motional saw 53 participants don a VR headset that transported them to the corner of a four-way intersection in an urban area. Each participant faced 33 trials of a car approaching the intersection as they tried to cross, with the vehicle exhibiting a variety of different behaviors and appearances. While they could look around, they could not move; instead, they had to indicate when they felt it was safe to cross by pressing a button on a handheld controller.

Three baseline scenarios mimicked the way a human driver would come to a halt at a stop sign: one featured a human driver behind the wheel, another had no driver and conspicuous sensors dotted around the car, and the third featured a large LED display that indicated when the vehicle was yielding—an approach popular among makers of driverless cars.

They then designed various expressive behaviors intended to implicitly signal to the pedestrian that the car was stopping for them. These included having the car brake earlier and harder than the baseline, stopping the car a vehicle’s length away, adding exaggerated braking and low-revving sounds, and finally combining these sounds with an exaggerated dipping of the nose of the car, as if it were braking hard.

To keep the participants honest, they also included a control scenario where the car didn’t stop, and Schmitt says their reactions were testimony to the realism of the simulation. “I literally had people in our VR lab on the third floor of this office building raise the middle finger at a virtual car that just cut them off,” he says.

The team then measured how quickly participants decided to cross and also gave them a quick survey after each trial to find out how safe they felt, how confident they were of their decision to cross, and how clearly they understood a car’s intention. Both early, hard braking and stopping short led to a considerably higher proportion of participants crossing the street before the vehicle had come to a complete stop. But in the surveys, stopping short elicited the highest ratings for sense of safety, decision confidence, and intention understanding.

The fact that stopping short elicited the best response isn’t surprising, says Schmitt, as this approach was inspired by the way human drivers behave when slowing down for pedestrians. What was more surprising was that there was little difference in reactions between the baseline scenarios with and without a driver, which suggests pedestrians are paying more attention to the movement of the vehicle than to the driver behind the wheel, he adds.

That’s backed up by other research, says Wilbert Tabone, a doctoral student at Delft University of Technology in the Netherlands who works on robot-human interaction. While most attempts to solve this problem have focused on displays that stand in for explicit cues like eye contact or hand gestures, he says studies keep showing that the implicit behavior of the car is what most people are looking out for.

Nonetheless, he thinks a combination of explicit and implicit signaling will ultimately be the most effective. One promising avenue is augmented reality, and he has developed a system that would allow driverless vehicles to communicate their intention directly to a pedestrian’s smart glasses, which would then indicate visually whether or not it’s safe to cross. The downside, he admits, is that it first requires widespread adoption of smart glasses, which is no sure thing.

One potential challenge for integrating these expressive behaviors could be driver acceptance, says Catherine Burns, a professor in systems design engineering at the University of Waterloo in Canada. “To what degree would people purchase a car that made exaggerated sounds? Or compressed its suspension to make an expressive nose dive?” she writes in an email.

Nonetheless, the study opens up an interesting new line of research, she says, and shows that making self-driving cars more expressive could significantly improve their interactions with pedestrians.
