One thing you don’t quite get accustomed to in reporting on cloud technology is how even the virtual things become virtualized. Last December, Red Hat released a software storage appliance based on GlusterFS, the software-based NAS system it picked up when it acquired Gluster in October. That product gave GlusterFS customers a way to apply the same methodology they had used to build network-attached storage pools entirely from existing storage.
That product had been described as a “virtual storage appliance” – in fact, that was its name in the Red Hat graphics we used. Today, Red Hat announced the, um, virtual version of that product, for use in pooling elastic storage from Amazon Elastic Block Store (EBS).
Here’s the full and complete name of the product now: Red Hat Virtual Storage Appliance for Amazon Web Services. Although there will probably continue to be folks who call it “Gluster for Amazon.”
“After the acquisition, we essentially rebranded the virtual storage appliance for Amazon Web Services for Red Hat,” says Tom Trainer, Red Hat’s software product marketing manager, in an interview with RWW. “The basic premise here is that we deliver inside Amazon Web Services NAS in the cloud.”
Trainer tells us his strategy for competing against object store technologies such as CAStor will be to emphasize ease of transition and ease of management for data centers’ existing file structures. Since Amazon’s EC2 storage structure is POSIX-compliant, as are Linux-based data centers, he characterizes deploying a massive database in Amazon’s cloud via the new Red Hat Virtual Appliance as more of a relocation than a transmutation.
“If you’re not rewriting your applications, then you have been reliant upon a physical appliance in the data center to package up those files, turn them into objects, and push them to the cloud,” explains Trainer. “And in most cases, those objects that are being pushed to the cloud are for backup or archiving. The problem therein lies: they’re providing a bridge, but they’re not really solving the widespread dilemma that users have had in being able to port their applications directly into the cloud.”
One charter customer he described specifically eschewed the use of a bridge or an appliance for changing the data structure, even if the effects of those changes were abstracted and the result looked like an ordinary storage pool. It wanted a one-to-one transfer, especially since its applications were geared for Amazon EC2 instances. “Now they’re building their development apps in the cloud, and running it just as if it ran in the data center,” he says, “but not buying the additional compute servers and mass hardware appliances that they traditionally purchased, and then had to keep for three to five years to amortize it over time. They’ve taken their cap-ex and physical software licenses, and moved it to an op-ex environment where they’re only paying for services and time.”
So if a new customer wants to deploy a clustered file server with two EC2 instances and 150 TB of storage, the Red Hat appliance will attach that much EBS to those instances as part of its automated installation procedure. “We stripe our whole file system across all of that, and we benefit from parallelization of the I/O,” Red Hat’s Trainer explains. “That helps to compensate for and overcome a lot of the performance issues that users have faced in trying to build something like a file server within Amazon. What they run into is the mass network bottleneck that could exist within a public cloud.”
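Striping in this sense is conceptually simple: the file system splits data into fixed-size chunks and distributes them round-robin across the attached EBS volumes (“bricks,” in Gluster parlance), so that I/O against one large file hits many volumes at once. Here’s a minimal sketch of the idea – an illustration only, not Red Hat’s actual placement algorithm; the function name and chunk size are invented for the example:

```python
def stripe_chunks(file_size, chunk_size, brick_count):
    """Map each fixed-size chunk of a file to a brick, round-robin.

    Returns a list where entry i is the brick index holding chunk i.
    """
    # Number of chunks, rounding up for a partial final chunk
    chunks = (file_size + chunk_size - 1) // chunk_size
    return [i % brick_count for i in range(chunks)]

# A 512 KB file striped in 128 KB chunks across 4 EBS-backed bricks:
layout = stripe_chunks(512 * 1024, 128 * 1024, 4)
print(layout)  # → [0, 1, 2, 3]
```

Because each of the four chunks lands on a different brick, reads and writes against them can proceed in parallel, which is the effect Trainer credits with offsetting per-volume throughput limits inside a public cloud.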
Once this customer began moving its own customers’ resources into Amazon’s cloud through Red Hat’s appliance, Trainer reports, it was able to renegotiate terms with those customers, reducing their costs in turn.
Earlier today, Amazon announced a big price drop for its S3 storage service, weighted toward its lower-capacity users. S3 is an object store, unlike EBS, which presents virtual block devices to EC2 instances.