It's official: OpenStack has made its first release. It's a major moment for the nascent open cloud initiative, which combines Rackspace's object storage capabilities with NASA's Nebula, the open computing effort from the U.S. federal space agency.
It feels like the start of something, doesn't it? Just writing "U.S. federal space agency" gives us a sense of what makes this exciting for the cloud computing movement. We are on the edge of the great beyond in many ways. The compute power of open networks is just beginning to be understood. How OpenStack fares could help define how cloud computing evolves in the years ahead. If successful, the opportunity is huge: it could open the world of storage and big data to a wide-ranging spectrum of communities across the commercial, nonprofit and government sectors.
OpenStack Object Storage
This is the storage environment that Rackspace turned over to OpenStack when it released the code in July. It consists of a network of commoditized servers that operate in clusters for redundant, large-scale storage of static objects, with each object written to multiple hardware devices. This means that if a node goes down, another can take its place.
Dell community evangelist Barton George did this interview with project lead Will Reese:
From the OpenStack site:
"Because OpenStack uses software logic to ensure data replication and distribution across different devices, inexpensive commodity hard drives and servers can be used in lieu of more expensive equipment."
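The "software logic" in that quote can be pictured in a few lines of Python: hash each object's name onto a list of devices and write copies to several distinct ones. This is only an illustrative sketch of the idea (the device names and replica count are invented; it is not Swift's actual ring implementation):

```python
import hashlib

# Illustrative device list; a real cluster maps many drives across zones.
DEVICES = ["disk-a", "disk-b", "disk-c", "disk-d", "disk-e"]
REPLICAS = 3  # number of copies kept of each object

def place_object(name, devices=DEVICES, replicas=REPLICAS):
    """Pick `replicas` distinct devices for an object, deterministically.

    Hash the object name to choose a starting device, then walk the
    device list so each copy lands on different hardware.
    """
    digest = hashlib.md5(name.encode("utf-8")).hexdigest()
    start = int(digest, 16) % len(devices)
    return [devices[(start + i) % len(devices)] for i in range(replicas)]

# If one device fails, the object is still readable from the other copies.
print(place_object("photos/cat.jpg"))
```

Because placement is computed rather than recorded in a central index, any node can work out where an object's replicas live, which is what lets commodity hardware fail without data loss.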
According to Adrian Otto, some use cases for OpenStack Object Storage include:
- Storing media libraries (photos, music, videos, etc.)
- Archiving video surveillance files
- Archiving phone call audio recordings
- Archiving compressed log files
- Archiving backups (<5 GB per object)
- Storing and loading OS images
- Storing file populations that grow continuously, with effectively no upper bound
- Storing small files (<50 KB), which OpenStack Object Storage handles well
- Storing billions of files
- Storing petabytes (millions of gigabytes) of data
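All of those use cases come down to the same operation: an HTTP PUT of an object into a container. The sketch below only builds such a request rather than sending it, and the endpoint, account, container and token are placeholders, not real credentials:

```python
# Sketch of forming an object-storage upload request.
# The endpoint and auth token below are hypothetical placeholders.
STORAGE_URL = "https://storage.example.com/v1/ACCT"
AUTH_TOKEN = "placeholder-token"

def build_put_request(container, object_name, body):
    """Return the (url, headers) pair for an object PUT."""
    url = f"{STORAGE_URL}/{container}/{object_name}"
    headers = {
        "X-Auth-Token": AUTH_TOKEN,          # auth token from a prior login
        "Content-Length": str(len(body)),    # size of the object body
    }
    return url, headers

# e.g. archiving one compressed log file:
url, headers = build_put_request("logs", "2010-10-21.gz", b"compressed bytes")
print(url)
```

Whether the object is a 50 KB thumbnail or a 5 GB backup, the request shape is the same; scale comes from the cluster behind the URL, not from the client.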
OpenStack Compute
According to the OpenStack web site, "OpenStack Compute is software for provisioning and managing large-scale deployments of compute instances." In other words, it is the platform for heavy-duty processing of big data. Remember, NASA uses this technology to explore space.
Again, we'll reference Barton's blog and the interview he did with Rick Clark, the project lead and chief architect, a "former engineering manager at Canonical for Ubuntu server and security as well as lead on their virtualization for their cloud efforts."
Also of note is the OpenStack API, known internally as "the artist formerly known as the Rackspace API."
The API provides access to object storage and the underlying platform. According to Jim Curry: "It will also include additional functionality such as role-based access controls and additional networking actions. This API will be the official OpenStack API and it will evolve with the platform and needs of the community."
What that means is the API will be tied to the OpenStack roadmap. But to support the widest possible community, it will also work with Amazon's APIs:
"The EC2 compatible API, already in the code base today, will remain and be maintained; however, it is important for the project to have an official API that is tied directly to the OpenStack roadmap and feature set. We want to ensure that future OpenStack innovation can be driven by the community and not be restricted to the functionality of outside cloud APIs. The sub-projects are built in a way that will allow multiple APIs to be supported, so if there is an existing API that is really important to you (or one that comes along in the future), it is possible for you to add in support for that as well."
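The "multiple APIs over the same sub-projects" idea described above amounts to a thin translation layer: each API front end maps its own verbs onto the same internal calls. A toy sketch, with all class and method names invented for illustration:

```python
class ComputeBackend:
    """Stand-in for the internal compute service (name invented)."""
    def __init__(self):
        self.instances = []

    def boot(self, image):
        self.instances.append(image)
        return len(self.instances) - 1  # simple instance id

class NativeAPI:
    """Front end in the style of the official OpenStack API."""
    def __init__(self, backend):
        self.backend = backend

    def create_server(self, image):
        return self.backend.boot(image)

class EC2CompatAPI:
    """EC2-style front end translating onto the same backend."""
    def __init__(self, backend):
        self.backend = backend

    def run_instances(self, ami_id):
        return self.backend.boot(ami_id)

backend = ComputeBackend()
NativeAPI(backend).create_server("ubuntu-10.04")
EC2CompatAPI(backend).run_instances("ami-12345")
print(backend.instances)  # both requests land on the same backend
```

Because each front end is only a translator, adding support for some other existing API means writing one more adapter, not changing the platform underneath.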
In that vein, OpenStack will be hypervisor-agnostic, supporting XenServer, KVM and UML.
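Hypervisor agnosticism usually comes down to a driver interface: the service codes against an abstract "start a VM" call, and configuration selects the XenServer, KVM or UML implementation. A minimal sketch, with the driver classes and lookup invented for illustration:

```python
class Hypervisor:
    """Abstract interface the compute service programs against."""
    def spawn(self, name):
        raise NotImplementedError

class KVMDriver(Hypervisor):
    def spawn(self, name):
        return f"kvm: started {name}"

class XenServerDriver(Hypervisor):
    def spawn(self, name):
        return f"xenserver: started {name}"

class UMLDriver(Hypervisor):
    def spawn(self, name):
        return f"uml: started {name}"

# A config value picks the driver; the rest of the code never changes.
DRIVERS = {"kvm": KVMDriver, "xenserver": XenServerDriver, "uml": UMLDriver}

def get_driver(config_value):
    """Instantiate the configured hypervisor driver."""
    return DRIVERS[config_value]()

driver = get_driver("kvm")
print(driver.spawn("test-vm"))  # -> kvm: started test-vm
```

Swapping hypervisors is then a one-line configuration change rather than a code change, which is the practical meaning of "agnostic."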
OpenStack will be on a three-month development cycle. What we expect to see come out of this is the continued rise of cloud management services.
We have moved past virtualization density and entered a new phase that is focused primarily on automation. In an open cloud environment, that's pretty important.
If open clouds do proliferate, automation will be a core component. It means a move away from manually coded configurations that only a few wizards understand. And that's a really good thing.