Late in 2021 I had an opportunity to take part in a virtual roundtable article on digital storage and memory topics, collecting comments on trends in NVMe storage, CXL, heterogeneous memory, and object storage for the January 2022 issue of IEEE Computer Magazine (published by the IEEE Computer Society). This piece looks at a few of the insights from that article (you can read the entire article from the link above).
I spoke to several folks on NVMe and Computational Storage. Scott Shadley from NGD said, “There is a lot going on in the market today, with vendors releasing updated or new versions of products and the ongoing work of the standards groups to address the needs of the end users. SNIA recently released a new public review of the Computational Storage Architecture and Programming Model document (version 0.8 rev. 0) to allow for feedback from a now broader market of available users and to showcase the value of the 51 companies working on the effort.”
JB Baker, a member of the SNIA Computational Storage Technical Work Group, said, “ScaleFlux announced quad-level cell and other recent updates to their transparent compression solutions. NGD Systems recently announced that a single customer has received over five petabytes of storage solutions, and the company has a partnership with Los Alamos National Labs to continue the development of the technology. Samsung continues to showcase their partnership with Xilinx and Eideticom. Other implementations of computational storage capabilities include the IBM flash core module with Arm-based transparent compression, and NetInt with their video encoding/transcoding focused devices.”
Jason Molgaard, another member of the SNIA Computational Storage Technical Work Group, discussed how the group has clarified its dictionary of computational storage terms. The figure below, from the article, illustrates the functions of the agreed-upon terminology.
Computational Storage Terminology
Roger Bertschmann, CEO of Eideticom, expected the SNIA Computational Storage Architecture and Programming Model to reach a 1.0 release as a standard in early 2022, with the API work closely tied to it.
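The core idea behind the computational storage work discussed above is that a function can run next to the data instead of shipping all of the data to the host. The toy Python sketch below illustrates that idea only; the class and method names are hypothetical illustrations, not SNIA's API or any vendor's product.

```python
# Toy sketch (hypothetical names, not a real API): contrasts conventional
# host-side processing with a compute function running on the device.

class ComputationalStorageDrive:
    """A pretend drive that can run a filter function in situ."""

    def __init__(self, records):
        self._records = records  # data "stored" on the device

    def read_all(self):
        # Conventional path: ship every record across the bus to the host.
        return list(self._records)

    def execute_csf(self, predicate):
        # Computational path: filter on-device, return only the matches.
        return [r for r in self._records if predicate(r)]


drive = ComputationalStorageDrive(range(1_000_000))

# Host-side filtering moves 1,000,000 records; on-device filtering moves 10.
host_side = [r for r in drive.read_all() if r % 100_000 == 0]
on_device = drive.execute_csf(lambda r: r % 100_000 == 0)
assert host_side == on_device  # same answer, far less data movement
```

The payoff is in the last two lines: both paths compute the same result, but the computational path returns ten records to the host instead of a million, which is the data-movement saving the working group's model is built around.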
Debendra Das Sharma from Intel said that the CXL interconnect takes advantage of the massive PCIe infrastructure that already exists. He said that, “…the relationship between the CPU and CXL devices is similar to the relationship between two CPUs in dual symmetric multiprocessor systems. The result is a significant advancement in computer architecture for heterogeneous computing.” Also, “Since CXL memory is transactional, it supports memory-agnostic heterogeneous and tiered memory support.”
Jim Pappas, also from Intel, thinks the industry is excited about CXL in two key areas. First, CXL is great for all types of application accelerators in heterogeneous computing. Second, people are interested in using CXL to enable memory expansion and pooling. The figure below shows some CXL uses. Jim expects to see the first CPUs with direct CXL support in 2022.
Representative CXL Use Cases
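The memory pooling use case Jim describes can be sketched in a few lines: rather than every server over-provisioning its own DRAM, a shared pool leases capacity to hosts on demand and takes it back when they are done. The Python below models only the bookkeeping of that idea; real CXL pooling is implemented in hardware and fabric management, and all names here are illustrative.

```python
# Toy sketch of memory pooling bookkeeping (illustrative only -- real CXL
# pooling is handled by hardware and fabric-management software).

class MemoryPool:
    def __init__(self, total_gb):
        self.free_gb = total_gb
        self.leases = {}  # host name -> GB currently assigned

    def allocate(self, host, gb):
        if gb > self.free_gb:
            raise MemoryError("pool exhausted")
        self.free_gb -= gb
        self.leases[host] = self.leases.get(host, 0) + gb

    def release(self, host, gb):
        assert self.leases.get(host, 0) >= gb
        self.leases[host] -= gb
        self.free_gb += gb


pool = MemoryPool(total_gb=1024)
pool.allocate("host-a", 256)   # host-a's workload spikes, borrows capacity
pool.allocate("host-b", 128)
pool.release("host-a", 256)    # capacity returns to the pool for others
```

The attraction is that capacity stranded in one server today can instead be returned to a pool and reused by whichever host needs it next.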
Ugur Tigli, CTO of MinIO, says that, “Today, most enterprises use a mix of file storage and object storage. Over time, however, object storage will become the dominant storage type.” Further, he said that, “object storage is the storage of choice for both modern applications and for working with big data. As data volumes continue to grow and analytics and AI proliferate, object storage will be increasingly used to meet the requirements of these programs.”
He thinks object storage will become ubiquitous: “Cloud-scale applications are stateless, containerized, and orchestrated by Kubernetes. In turn, Kubernetes takes physical CPU, memory, storage, and network resources; abstracts them; and exposes them to containers. Once physical resources are abstracted, applications and data stores can run as containers anywhere.”
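What makes object storage a good fit for these stateless, containerized applications is its model: a flat namespace of keys, each object carrying its own metadata, accessed through simple put/get operations rather than a POSIX directory tree. The toy Python sketch below illustrates that model only; the class and method names are hypothetical and are not MinIO's actual API.

```python
# Toy sketch of the object storage model (hypothetical names, not a real
# SDK): a flat key namespace with per-object metadata and put/get access.

class ObjectStore:
    def __init__(self):
        self._bucket = {}  # flat mapping of key -> (data, metadata)

    def put(self, key, data, **metadata):
        self._bucket[key] = (data, metadata)

    def get(self, key):
        return self._bucket[key]


store = ObjectStore()
# The "path" is just an opaque key -- there is no real directory hierarchy,
# which is part of what lets object stores scale out horizontally.
store.put("logs/2022/01/app.json", b'{"ok": true}',
          content_type="application/json")
data, meta = store.get("logs/2022/01/app.json")
```

Because any client that can reach the store can put or get an object by key, a container can be rescheduled anywhere and still find its data, which is the portability point the quote above is making.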
The recently published Computer magazine Digital Storage and Memory Roundtable article contains insights from experts in the storage and memory industry on NVMe and computational storage, CXL and object storage.