Today we are at Gluecon, talking with three companies about managing that complexity in the cloud. Let's take a quick look at these three companies and the services they provide.
Makara is a startup that provides a service for making applications cloud-ready. In the traditional enterprise, operators have a full view of an application and how it is behaving; the server is physically there to check and fix. In the cloud you do not have that visibility: if a node dies, there is no view into what happened. The provider may be able to supply some information, but if the application is not built correctly, serious issues can arise.
Makara builds data infrastructures for applications. It outfits application capsules with a data infrastructure so the developer can keep tabs on the application after it is deployed to the cloud.
Opscode developed Chef, which is essentially a cookbook with a variety of recipes for automating the grueling manual tasks that have historically been required to fix server issues. The cloud is dynamic: a spike in demand may require adding 100 servers. Opscode's automation can detect and add nodes as conditions change. In essence, you write code to describe how you want each part of your infrastructure built, then apply it to your servers.
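To make that "infrastructure as code" idea concrete, here is a minimal sketch of a Chef recipe. The specifics (nginx as the package, the template source name) are illustrative assumptions, not from the article; the point is the shape of the code: you declare the desired state, and Chef converges each server to it.

```ruby
# Hypothetical Chef recipe sketch: declare that nginx should be
# installed, configured, and running. Chef compares each server's
# actual state to this description and makes only the needed changes.

# Ensure the nginx package is installed.
package 'nginx'

# Render the config file from a template in the cookbook
# (nginx.conf.erb is an assumed name for this example).
template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'
  # If the config changes, reload the service defined below.
  notifies :reload, 'service[nginx]'
end

# Ensure the service starts at boot and is running now.
service 'nginx' do
  action [:enable, :start]
end
```

Because the recipe describes an end state rather than a sequence of shell commands, the same code can be applied to one server or to 100 new nodes spun up during a demand spike, and rerunning it is safe.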
SOASTA is in the business of cloud testing: it tests the performance of Web applications. SOASTA built its test platform on top of a real-time engine that can observe an application and its performance in the dynamic world of the cloud.
Here's a bit about SOASTA's cloud analytics. From the Amazon Web Services blog:
"The new product, CloudTest Analytics, builds on SOASTA's existing CloudTest product. It consists of a data extraction layer that is able to extract real-time performance metrics from a number of existing APM (Application Performance Management) tools from vendors such as IBM, CA, RightScale, Dynatrace, New Relic, and Nimsoft. The data is pulled from the entire application stack, including resources that are in the cloud, behind the firewall, or at the content distribution layer. All of the metrics are aggregated, stored in a single scalable data warehouse, and displayed on a single, correlated timeline. Performance engineers can use this information to understand, optimize, and improve application and system performance."