NetApp Promises Kubernetes-Native Unified Data Store

At the KubeCon + CloudNativeCon North America conference this week, NetApp announced it will soon make available a preview of a data store for containers and virtual machines that runs natively on Kubernetes.
Eric Han, vice president of product management for NetApp, says NetApp Astra Data Store also lays the groundwork for a data store that will eventually be able to support any storage protocol.
NetApp Astra Data Store is scheduled to be available in the first half of 2022 and will initially provide access to file services running on Kubernetes clusters. Most file services available on Kubernetes today are layered on top of object storage systems that require IT teams to deploy dedicated client software, rather than simply using the same network file system (NFS) client employed to access file services everywhere else, notes Han. As a result, applications that need to access those file services have to be rearchitected, he adds.
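To make that contrast concrete, the following is a minimal sketch of how a pod typically consumes an existing NFS-backed file service in Kubernetes today; the server address and export path are hypothetical placeholders, not details from NetApp's announcement.

```yaml
# Minimal sketch: a pod mounting an existing NFS export directly.
# nfs.example.com and /exports/data are hypothetical values.
apiVersion: v1
kind: Pod
metadata:
  name: nfs-client-demo
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: shared-data
          mountPath: /data        # files from the NFS export appear here
  volumes:
    - name: shared-data
      nfs:
        server: nfs.example.com   # address of the NFS server
        path: /exports/data       # exported directory to mount
```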
IT teams are also required to manage separate data stores for containers and virtual machines. The NetApp Astra Data Store will ultimately reduce the management headaches that those separate data stores create today, says Han.
That capability will prove critical as organizations look to build and deploy a mix of legacy monolithic and microservices-based applications across a hybrid cloud computing environment, adds Han. In fact, a parallel file system built on a common pool of storage resources will enable IT teams to manage storage more efficiently, notes Han.
NetApp Astra Data Store extends the company's portfolio of Kubernetes offerings, which includes NetApp Astra Control, a fully managed, application-aware Kubernetes data management service, and NetApp Astra Trident, an open source dynamic storage provisioner for Kubernetes clusters that exposes storage through the container storage interface (CSI).
There’s currently a lot of debate over the degree to which data should be stored on a Kubernetes cluster versus deploying stateless applications that store data on an external storage platform. Kubernetes itself provides a persistent storage mechanism for containers through persistent volumes (PVs), which make it possible to access data well beyond the lifespan of any given pod. Ordinary volumes let users mount storage units and share data among the containers in a pod, but they are deleted when the pod hosting a particular volume shuts down. A persistent volume, by contrast, exists independently of any pod, which ensures the data remains accessible. IT teams manage storage in Kubernetes through a PersistentVolumeClaim (PVC), which requests storage; a PersistentVolume (PV), which represents the storage and its life cycle; and a StorageClass, which defines different classes of storage service. When a claim is made, the PV is bound to the PVC that requested it rather than to any individual pod.
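As a rough illustration of how those building blocks fit together, the sketch below defines a StorageClass, requests storage through a PersistentVolumeClaim and mounts the resulting volume in a pod. The provisioner name, class name and size are hypothetical and stand in for whatever a given cluster actually uses.

```yaml
# Hypothetical StorageClass; the provisioner name is a placeholder
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-file
provisioner: example.com/file-provisioner
reclaimPolicy: Delete
---
# PVC requesting 10Gi from that class; Kubernetes binds a matching PV to this claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteMany               # shared, file-style access
  storageClassName: standard-file
  resources:
    requests:
      storage: 10Gi
---
# Pod that mounts the claim; the data outlives this pod because the PV is bound to the PVC
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```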
In most cases, IT teams will find themselves needing to access data stored both on a Kubernetes cluster and an external storage system. The challenge will be managing all the data being accessed by both cloud-native and monolithic applications regardless of where it happens to be physically stored.
 
Mike Vizard is a seasoned IT journalist with over 25 years of experience. He has contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as editor-in-chief for CRN and InfoWorld.
