Cloudian's Gary Ogasawara on Object Storage and Edge Computing

Less than a decade ago, when object storage was introduced to the world, the technology distinguished itself as distributed, scalable, high-performing, and cost-effective, redefining the limits of what legacy storage could do. But edge computing has since emerged, and with its decentralized infrastructure it demands a new method of storing terabytes and petabytes of data. We had the opportunity to speak with Gary Ogasawara, Chief Technology Officer of Cloudian, about the distributed properties of object storage and how it can best support edge computing.


Today, enterprise organizations produce large volumes of unstructured data on a regular basis. Object storage solves the scale problem with an architecture that lets you combine multiple types of data in a single, global namespace. Traditional storage systems were designed with an upper limit on capacity. Unlike NAS, in which the file hierarchy limits growth, object storage is natively distributed and horizontally scalable, allowing the management of exabytes of data. As such, object storage is the only storage type that can scale to exabyte volumes and beyond while providing the required application APIs, security, and performance.
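To make the flat-namespace point concrete, here is a minimal sketch in Python using boto3 against a generic S3-compatible object store. The endpoint, bucket name, and credentials are placeholders invented for the example, not Cloudian-specific values; object keys are plain strings, so unrelated data types can coexist in one namespace with no directory tree to outgrow.

```python
import boto3

# Placeholder endpoint and credentials for any S3-compatible object store.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Logs, images, and sensor readings sit side by side in one flat,
# global namespace; the "/" in a key is naming convention, not hierarchy.
s3.put_object(Bucket="global-namespace", Key="logs/2020/app.log", Body=b"log line\n")
s3.put_object(Bucket="global-namespace", Key="images/cam01/frame-0001.jpg", Body=b"<jpeg bytes>")
s3.put_object(Bucket="global-namespace", Key="sensors/temp/reading.json", Body=b'{"celsius": 21.5}')
```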
Scale is not the only thing that separates object storage from the more traditional SAN and NAS.
The distributed properties of the technology make object storage the “go-to” for edge computing. Why? Object storage software can run on small processors at the edge as well as on large servers at central hubs. Networked together, the object storage software distributes data across the different layers. Putting object storage and data-processing code at the edge enables quick data filtering, image processing, and distributed processing.
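As a rough illustration of running the same software at both tiers, the sketch below builds an identical S3 client twice: once against a hypothetical small object store on an edge device and once against a central hub. The endpoints and credentials are invented for the example, but the application code does not change between layers.

```python
import boto3

def make_client(endpoint_url):
    # The same client construction works against any S3-compatible
    # endpoint; only the URL changes between tiers.
    return boto3.client(
        "s3",
        endpoint_url=endpoint_url,
        aws_access_key_id="ACCESS_KEY",        # placeholder credentials
        aws_secret_access_key="SECRET_KEY",
    )

# Hypothetical endpoints: a small object store on an edge device and a
# large cluster at a central hub.
edge = make_client("https://edge-node.local:9000")
hub = make_client("https://hub.datacenter.example.com")

# Identical application code runs at either tier.
payload = b'{"device": 42, "temp_c": 21.5}'
edge.put_object(Bucket="telemetry", Key="device-42/reading.json", Body=payload)
hub.put_object(Bucket="telemetry", Key="device-42/reading.json", Body=payload)
```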
In the legacy batch processing model, data must travel to compute to be filtered and processed. This means the code is not portable, requiring larger physical resources and limiting the volume of data that can be managed. When it comes to edge technologies, the exact opposite needs to occur: the code needs to move to where the data is being generated.
Edge computing demands that data be processed and managed as close as possible to where it is generated. This enables filtering, in which only certain important data is pushed to a data center or an intermediate hub rather than all of it. In other words, object storage can help make sense of the data being processed at the edge, so only a finite amount gets sent back. For example, an autonomous car carries a camera rig that generates pictures of the road at approximately 5GB of data per second, resulting in terabytes of data per day and petabytes per year. Object storage can be deployed at the edge (the car) to collect all of that data, then leverage machine learning to send only anomalous or important data back to the hub.
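A minimal sketch of that filtering pattern follows, assuming invented endpoints and a stand-in anomaly_score function where a trained model would actually run: frames land in the vehicle's local object store, and only those scoring above a threshold are forwarded to the hub.

```python
import boto3

# Invented endpoints: the vehicle's onboard object store and the central hub.
edge = boto3.client("s3", endpoint_url="https://car-17.local:9000",
                    aws_access_key_id="ACCESS_KEY", aws_secret_access_key="SECRET_KEY")
hub = boto3.client("s3", endpoint_url="https://hub.datacenter.example.com",
                   aws_access_key_id="ACCESS_KEY", aws_secret_access_key="SECRET_KEY")

ANOMALY_THRESHOLD = 0.8

def anomaly_score(frame_bytes):
    # Stand-in for a trained model that scores a camera frame from 0.0 to 1.0.
    return 0.0

# Score every frame in the vehicle's local store and forward only the
# anomalous ones, so terabytes collected per day shrink to a trickle
# over the uplink to the data center.
frames = edge.list_objects_v2(Bucket="camera-frames")
for obj in frames.get("Contents", []):
    body = edge.get_object(Bucket="camera-frames", Key=obj["Key"])["Body"].read()
    if anomaly_score(body) >= ANOMALY_THRESHOLD:
        hub.put_object(Bucket="anomalous-frames", Key=obj["Key"], Body=body)
```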
Looking to learn more? Download our free 2020 Buyer’s Guide for Data Storage with full profiles on the top 28 providers in the space.

