Software-defined storage provides flexible capacity and performance levels that can be tuned to specific requirements.
OpenStack is, by definition, a cloud operating system: it sees everything as a pool of shared services and gives operators broad visibility and control through a web-based dashboard from which they can provision resources as needed.
OpenStack helps enterprises build and manage scalable, flexible services. But to achieve this, it must be supported by storage that is also scalable and flexible — ideally software-defined, scale-out storage technologies and architectures that integrate more tightly with OpenStack than traditional, proprietary storage solutions. This is how storage needs to be engineered for modern cloud architectures.
The New Age Of Applications
In parallel with the confluence of cloud and big data, more and more of today's and tomorrow's great applications consist of microservices tucked neatly into containers along with everything they need to operate: libraries, binaries, configuration, and other resources (the host's OS kernel is shared rather than bundled). This makes them portable, which is exactly what they need to be in the cloud.
Left unaddressed, storage poses a challenge for the new breed of microservices developers forging new DevOps relationships. Containerized applications can access the local storage on their host server, but when they move elsewhere, as they are designed to do, stateless containers don't take any of that data with them.
Pool Your Storage Out Of The Dark Ages
Consistent with the general foundation of cloud computing, storage for these "cloud-native" applications must use a horizontal scale-out architecture that connects geographically distributed server and storage resources into a single pool, accessible to containers wherever they happen to be running. Developers can then declare the storage resources that the microservices in any given container require. And because containers are designed to be instantiated, used, and discarded quickly, each subsequent container participating in a given process need only carry the same specification to reach the same storage from the pool.
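As one illustration of what declaring storage from a pool can look like in practice, here is a sketch in Kubernetes terms (the claim name and storage class below are hypothetical): a PersistentVolumeClaim requests capacity and access semantics by specification rather than by location, so any container scheduled anywhere in the cluster can bind the same claim.

```yaml
# Hypothetical claim against a shared, software-defined storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: microservice-data        # illustrative name
spec:
  accessModes:
    - ReadWriteMany              # many containers, wherever they run
  resources:
    requests:
      storage: 10Gi              # capacity drawn from the shared pool
  storageClassName: sds-shared   # hypothetical pooled, SDS-backed class
```

A replacement container that mounts the same claim sees the same data, which is the "similarly specified" behavior described above.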
The fundamental need is for storage that is every bit as portable, lightweight, and agile as the containers themselves. That sounds like a job for software-defined storage, which provides flexible capacity, performance levels that can be tuned to specific requirements, built-in data protection, and orchestration that makes the best use of available storage resources.
Containers = App Deployment As A Service
Cloud is often defined as Infrastructure as a Service. Similarly, container technologies can be seen as Application Deployment as a Service, offering greater elasticity and agility for building applications that move easily across public and private clouds.
Software-defined storage not only delivers the flexibility needed to adapt to today's and tomorrow's most demanding applications; it does so on industry-standard hardware that is considerably less expensive than the proprietary storage appliances for which so many operators have long struggled to budget.
Daniel Gilfix is part of the emerging storage business unit at Red Hat, responsible for Red Hat Ceph Storage marketing. His career has spanned more than two decades, heavily focused on leading-edge technologies and integrated software solutions aimed at the enterprise.