Vast Data Eyes A Role In The Datacenter Beyond Storage – The Next Platform

In its short lifetime, Vast Data has been able to put its stamp on a fast-changing data storage market. The company was founded in 2016 by Renen Hallak, chief executive officer; Shachar Fienblit, vice president of research and development; and Jeff Denworth, vice president of products, who collectively saw an opportunity to change the way HPC organizations and large enterprises stored and accessed their massive amounts of data.
They leveraged such technologies as flash, the NVM-Express storage protocol, its NVM-Express-over-Fabrics extension, and storage-class memory (SCM) to create what they call Universal Storage – a distributed architecture that relieves organizations from having to decide between cost and capacity when figuring out their storage requirements and dispenses with the traditional tiering of data.
“We started to unravel this realization that you could build one storage system within an environment to basically support the needs of all of your applications and all of your workloads,” Denworth tells The Next Platform. “That spectrum of performance and capacity doesn’t need to be expressed at the storage level. It can be expressed at the application level, where each set of different applications consumes more or less, depending upon what they need. If you have a mix of applications that are archival and applications that are high performance, they can start to coexist in a system that isn’t designed for absolute performance per SSD but is designed to give you more than enough performance in the aggregate to meet the needs of all of your workloads.”
Vast Data was in stealth mode until 2018 and in early 2019 began rolling out its storage hardware and software. The company's growth has been fast, and in May 2021 it pulled in $83 million in Series D financing, driving Vast's valuation to $3.7 billion and creating a balance sheet of $230 million. According to Denworth, the 300-employee company has an annualized run rate of more than $100 million, 350 percent year-over-year growth, and a presence on five continents.
However, the plan for Vast's founders was not to create a data storage company. The idea has been to build a new generation of infrastructure, according to Denworth. He references Thinking Machines – a company founded in 1983 to build highly parallel supercomputers that could leverage the artificial intelligence technology of the time, competing against such companies as Cray and nCUBE. Thinking Machines had some profitable years but eventually filed for bankruptcy just over 10 years later, with Sun Microsystems buying its hardware and parallel computing businesses.

The company’s brief life left an impact on Denworth and the other Vast Data co-founders. Thinking Machines “was a very bespoke supercomputing company that endeavored to make some really interesting systems and over time,” he says. “That’s ultimately what we’re going to aim to make: a system that can ultimately think for itself. But to start with, we wanted to build upon our roots – including XtremeIO, DataDirect Networks and Kaminario, now known as Silk – and we said, ‘Okay, let’s build the data store to start.’ The thinking was, let’s question every single element of how to build a company and how to build the system so that we can get the greatest gain out of this effort, both in terms of what the product can accomplish, but also in terms of what the company looks like and its composition and valuation.”
Denworth says the company will get into specifics of its plans later this year when it unveils its ten-year vision at a large event.
“We want to show the first article evidence of what we’re making as a testament to what we think we can do very special,” he says. “The objective all along is to make everything simpler for customers to build infrastructure that’s higher value. More utility solves greater problems than conventional architectures and conventional systems do and to use the inventions that we’ve brought out into the market today to pay for the inventions that we want to deliver to the market tomorrow or pay for the development itself. That’s the gist of it.”
Vast Data views itself as a data company, he says. Across the spectrum of storage systems available, some are more intelligent and evolved than others.
“File systems are more evolved than block storage systems or JBODs, and what you have are essentially data management systems,” Denworth says. “We realized that we could take that far beyond the classic definitions of a file system, but the realization was that the architecture that has the most intimate understanding of data can make the best decisions about what to do with that data. First by determining what’s inside of it. Second of all, by either moving the data to where the compute is or the compute to where the data is, depending upon what the most optimized decision is at any given time.”
There is a “classic IT stack that infrastructure teams have been running for forever, products like VMware and Oracle and EMC,” he says. “That’s good for classic IT applications. Our thesis is that the next 20 years will be defined by a computing renaissance that is definitely punctuated by the adoption of AI in the market.”
Vast’s launch coincided with the rising adoption of AI, machine learning and deep learning, all of which need access to as much data as possible. The vendor’s huge pool of flash in a Disaggregated Shared-Everything (DASE) architecture makes all data available in milliseconds. The need for tiering goes away in these environments.

“Machine learning completely transforms the customer’s relationship with their data, and in order to appropriately train and retrain your models, you have to keep going back to your capacity stores over and over and over again,” Denworth says. “The high-performance storage industry of yesterday optimized to catch a transaction, like when you swipe your credit card. But in the future, you will be optimizing for understanding the total universe of possibility, and that can only come by exposing these new algorithms to the most amount of data. As data ages, it becomes more valuable in this era, and that turns the idea [and] the conception of that data pyramid upside down.”

As Vast Data pulls together a roadmap for the next decade, it continues to evolve its data storage portfolio. As we wrote about last year, Vast Data exited the hardware business, decoupling the software from its appliances and partnering with contract manufacturer Avnet. The move, which Denworth said took about eight weeks, echoes shifts by others in the industry, such as Nutanix, taking hardware off the vendors’ books and giving enterprises more hardware options for running the software.
Vast Data likely will have two to three OEM partners in the next six to 12 months and possibly a cloud provider that builds its own stuff, he said, declining to name who those companies could be.
More recently, the company last month doubled the storage density of the hardware platforms that are supported by Vast’s Universal Storage software. Vast Data is leveraging Intel’s 30 TB quad-level cell (QLC) SSDs to grow the density of its 2U Vast Enclosure (DBox) to deliver more than a petabyte of capacity per rack unit to enterprise, hyperscale, and cloud environments. The enclosure will now support up to 1,350 TB of flash capacity, and in a nod to power efficiency, the architecture offers 500 watts per petabyte. In addition, Vast Data this year will roll out a new UPC feature to enforce limits on power consumption by systems through intelligent scheduling of CPUs, which will reduce peak power draw by 33 percent.

It’s part of Vast Data’s larger mission to drive down the cost of flash to the point that hard drives become irrelevant. That might be easier said than done. According to IDC analysts, while the HDD market will continue to decline in the face of rising SSD attach rates, through 2025 HDD petabyte shipments will grow 18.5 percent a year and the average capacity per drive will rise an average of 22.5 percent per year.
Storage vendors also are finding new ways to leverage hard drives. Western Digital last year unveiled OptiNAND, a new technology that integrates iNAND Universal Flash Storage with an HDD to address the growing rise in the amount of data being generated.
The Next Platform is published by Stackhouse Publishing Inc in partnership with the UK’s top technology publication, The Register.
It offers in-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds. Read more…
All Content Copyright The Next Platform
