More and more companies are moving from traditional servers to virtual servers in the cloud, and many new service-based deployments start in the cloud from the beginning. However, despite the cloud's overwhelming popularity, deployments in the cloud look a lot like deployments on traditional servers. Companies are not changing their systems architecture to take advantage of what is unique about being in the cloud.
The key difference between remotely hosted, virtualized, on-demand-by-API servers (the definition of the "cloud" for this post) and any other hardware-based deployment (e.g., dedicated, co-located, or not-on-demand-by-API virtualized servers) is that, in the cloud, servers are software.
Traditionally, software applications have differed from servers in several key ways:
- Traditional servers require humans and hours–if not days–to launch; Software launches automatically and on demand in seconds or minutes
- Traditional servers are physically limited–companies have a finite number available to them; Software, as a virtual/information resource, has no such physical limitation
- Traditional servers are designed to serve many functions (often because of the above-mentioned physical limitations); Software is generally designed to serve a single function
- Traditional servers are not designed to be discarded; Software is built around the idea that it runs ephemerally and can be terminated at any moment
On the cloud, these differences can disappear. The operative word is “can”–a look at the current mainstream discussions and advertisements of cloud services shows a distinct lack of interest in taking advantage of the crumbling wall between server and software.
Many hosting services that provide "cloud servers" still require IT support staff to allocate those servers. Others have pricing plans (e.g., large monthly minimums) that essentially force cloud servers into the same physical limitations as traditional servers. And very few triumphant case studies for the cloud focus on how the ephemeral or single-function possibilities of cloud servers have led to better, cheaper, faster, more fault-tolerant, and more secure infrastructure designs.
(Of note, under the above definition of the cloud, Google had the first cloud and the first cloud-enabled infrastructure, with its essentially unlimited supply of cheap white-box servers and the revolutionary MapReduce and BigTable technologies that enable ephemeral, single-function applications.)
Why Servers Are Better as Software
Servers as software are better than traditional servers: cheaper, faster, more fault-tolerant, and more secure. Much of that advantage comes down to the two most commonly named benefits of the cloud: the cloud is cheaper, and the cloud has virtually unlimited resources (i.e., it is easy to scale).
The cloud is cheaper in theory because payment is per resource used, but it may not be cheaper in practice, because you may be using resources inefficiently. For example, if you have a server that gets used heavily during weekdays but rarely at night or on weekends, and you pay for a cloud server sized to handle the heavy traffic 24/7, you are not paying just for real usage; you are paying for idle time as well. On the other hand, if your server is software, it can relaunch with fewer resources at night and on weekends and with more resources on weekdays.
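As a minimal sketch of that relaunch-with-different-resources idea (assuming an AWS-style cloud and the boto3 library; the instance ID and instance types below are hypothetical placeholders), a scheduled job can stop a server, change its size, and start it again:

```python
# Sketch: relaunch a cloud server with different resources for off-peak hours.
# Assumes an AWS-style cloud and boto3; the instance ID and instance types
# below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2")
INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical

def resize(instance_id, instance_type):
    """Stop the instance, change its type, and start it again."""
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={"Value": instance_type},
    )
    ec2.start_instances(InstanceIds=[instance_id])

# Run from a scheduler: bigger on weekday mornings, smaller at night and on weekends.
resize(INSTANCE_ID, "m5.large")
# resize(INSTANCE_ID, "t3.small")
```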
The cost benefits increase beyond the above day/night example when one considers the single-function paradigm: if each server-as-software exists only to serve a single function, both the resources and the run time allocated to that function can be limited to the bare essentials. In a single-function server-as-software environment, resources can also be scaled up to make that one function run faster, without any worry about keeping those resources around for other functions.
In case it is not wholly obvious, an architecture designed around ephemerality will be more fault-tolerant: when every server is expected to disappear at any moment, the system must already be able to replace any server automatically, so individual failures become routine events rather than emergencies. Finally, single-function, ephemeral servers can be more secure than traditional servers because they are limited in their access to data and resources (being single-function) and they are built to disappear (being ephemeral).
Taking Advantage of the Cloud Infrastructure: Database Servers
So how will the server-as-software nature of the cloud change infrastructure design? The biggest advantages should be realized in workloads that write critical data (as opposed to read-only or temporary-data workloads), because those are the hardest to change and scale.
Take, for example, the database server. In many ways, the place and design of a database server in system infrastructure is much the same as it was at the beginning of the personal computer revolution: a beefy machine that handles all data-related writing and retrieval, perhaps with a backup waiting in the wings. An Oracle database server circa 1995 in a LAN client/server architecture looks very similar to a cloud-hosted MySQL instance serving a web application frontend: humans are required to launch or fail over, physical resources are limiting (prohibitively expensive, humans required, and–in the cloud–slow I/O), and dependent clients require the database server to have 100% uptime.
And the one software-like quality a database server might seem to have, serving a single function, usually does not hold: database servers often handle many different functions (e.g., authentication, storage and retrieval of user data, storage and retrieval of global data, exact record retrieval, full-text searching, and BI-style aggregation and number-crunching).
However, by embracing the software-like aspects of the cloud, database servers can be redesigned, getting better, cheaper, faster, and more secure. One step in redesigning the database server can be to split out its various functions to different servers: a cache server for frequently-read items (e.g. Memcached), an OLAP database server for business intelligence (e.g. Mondrian with a local MySQL database), a full-text index database for text-based searches (e.g. Apache Solr), and different relational databases for other purposes (e.g. PostGIS for geospatial data and MySQL for typical structured transactional data).
By breaking out these various functions, the infrastructure designer can allocate exactly the right amount of needed resources to each function, each function will be running on a platform designed to deliver optimal performance, and the security settings can admit only those authorized for the particular function into each server.
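As a rough sketch of what that single-function split looks like from application code (the hostnames, ports, and credentials below are hypothetical, and the pymemcache, pysolr, and pymysql client libraries are assumptions for illustration), reads are simply routed to whichever backend owns the function:

```python
# Sketch: route each function to its own single-function backend.
# Hostnames, ports, and credentials are hypothetical placeholders; the
# pymemcache, pysolr, and pymysql client libraries are assumed to be installed.
import json

import pymysql
import pysolr
from pymemcache.client.base import Client as MemcacheClient

cache = MemcacheClient(("cache.internal", 11211))                 # frequently-read items
search = pysolr.Solr("http://search.internal:8983/solr/records")  # full-text searches
oltp = pymysql.connect(host="db.internal", user="app",
                       password="secret", database="app")         # transactional data

def get_user(user_id):
    """Read-through cache: check Memcached first, fall back to MySQL."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    with oltp.cursor() as cur:
        cur.execute("SELECT name, email FROM users WHERE id = %s", (user_id,))
        row = cur.fetchone()
    if row is None:
        return None
    cache.set(key, json.dumps(list(row)), expire=300)
    return list(row)

def find_documents(text):
    """Full-text queries go to Solr rather than the relational database."""
    return list(search.search(text))
```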
However, the above functional breakout does not harness the cloud's essentially unlimited resources and ephemeral nature. To get the full cloud benefit, one must abandon traditional databases and move to a database built for the cloud, such as HBase. HBase, based on Google's BigTable, is probably the best-known open source database that takes advantage of the unlimited and ephemeral nature of the cloud.
HBase is meant to hold lots of data and run across many servers in many data centers, and the underlying data can be replicated to clusters of servers in other data centers with easily automated failover. HBase can also scale dynamically up or down to more or fewer servers. In other words, if HBase's single function can serve your database needs, then HBase takes advantage of all four of the server-as-software characteristics of the cloud.
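As a small sketch of what talking to such a cluster looks like (using the happybase client for HBase's Thrift gateway; the host, table, and column names are hypothetical), application code simply reads and writes rows and lets the cluster worry about which servers currently hold them:

```python
# Sketch: reading and writing HBase through its Thrift gateway with happybase.
# The host, table, and column family names are hypothetical placeholders.
import happybase

connection = happybase.Connection("hbase-thrift.internal")
table = connection.table("buildings")

# Writes and reads are keyed by row; the cluster handles distribution,
# replication, and region splitting behind this interface.
table.put(b"row-00042", {b"permit:type": b"remodel", b"permit:year": b"2011"})
print(table.row(b"row-00042"))

# Scan a range of rows without caring which servers currently hold them.
for key, data in table.scan(row_prefix=b"row-000"):
    print(key, data)
```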
Taking Advantage of the Cloud Infrastructure: Other Servers
Although the most dramatic cloud-driven infrastructure changes may be with respect to database servers, other types of servers may also benefit from the advantages of the cloud.
For example, application servers are often not as "single function" as they appear in a traditional server environment.
An application server might have both customer-facing and administrative functions, where the administrative functions (such as running complex reports or extracting and compressing massive amounts of data for download) are both rarely used and a drain on the resources available to customers. In a properly designed cloud architecture, the administrative functions could run on a separate, more powerful server that is launched on demand and terminated automatically after some period of idleness.
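One sketch of that pattern, again assuming AWS and boto3 (the AMI ID, instance type, and idle-shutdown watchdog are hypothetical): launch the administrative server only when it is asked for, and have it terminate itself once it goes idle.

```python
# Sketch: launch an administrative server on demand and let it terminate itself.
# The AMI ID, instance type, and idle-shutdown watchdog are hypothetical.
import boto3

ec2 = boto3.client("ec2")

def launch_admin_server():
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical image with the admin tools
        InstanceType="c5.2xlarge",        # beefier than the customer-facing servers
        MinCount=1,
        MaxCount=1,
        # A shutdown from inside the instance terminates it instead of leaving
        # a stopped server around; the image is assumed to run a watchdog that
        # powers off after a period with no active admin session.
        InstanceInitiatedShutdownBehavior="terminate",
    )
    return response["Instances"][0]["InstanceId"]

print("admin server launching:", launch_admin_server())
```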
An application server might also serve both a website and an API, where website traffic is more consistent, can be cached, and must return within a second, whereas API responses cannot be cached and the API must be able to handle huge influxes of requests. In a traditional server environment, these functions would likely be combined onto the same web application servers, but in the cloud they should be divided onto separate servers with different hardware profiles and auto-scaling plans.
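A sketch of that division using boto3's Auto Scaling API (all names, AMI IDs, instance types, and group sizes below are hypothetical placeholders): two groups, each with its own hardware profile and scaling range.

```python
# Sketch: separate auto-scaling groups for the website and the API.
# All names, AMI IDs, instance types, and group sizes are hypothetical.
import boto3

autoscaling = boto3.client("autoscaling")

# Steady, cacheable website traffic: small instances, narrow scaling range.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc",
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.small",
)
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc",
    MinSize=2, MaxSize=4,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)

# Bursty, uncacheable API traffic: larger instances, much wider scaling range.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="api-lc",
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5.xlarge",
)
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="api-asg",
    LaunchConfigurationName="api-lc",
    MinSize=2, MaxSize=40,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)
```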
Servers used for internal data processing can also benefit from the advantages of the cloud. For example, restoring a large compressed database from online storage can be very taxing on server resources in the cloud; simply un-gzipping a few multi-gigabyte files can bring regular operations to a halt. This is the perfect time to spin up a separate cloud server to handle the decompression and restoration tasks. Another internal example: database backups can also be resource-intensive and slow down database operations; instead, launch a replication slave, sync up the database, and then lock tables and back up from the slave.
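As a sketch of the replica-backup example (hostnames, credentials, and paths are hypothetical, and mysqldump is assumed to be available), the dump runs entirely against the slave so the primary keeps serving traffic:

```python
# Sketch: take the nightly backup from a replication slave, not the primary.
# Hostnames, credentials, and paths are hypothetical; assumes mysqldump is
# installed and the replica is already configured and caught up.
import subprocess

REPLICA_HOST = "db-replica.internal"  # hypothetical

subprocess.run(
    [
        "mysqldump",
        "--host", REPLICA_HOST,
        "--user", "backup",
        "--password=secret",
        "--lock-all-tables",   # lock the replica, not the primary, during the dump
        "--all-databases",
        "--result-file", "/backups/nightly.sql",
    ],
    check=True,
)
# The primary keeps serving traffic; only the replica does the heavy I/O.
```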
Final Thoughts on Cloud Architecture
In general, architectures that take advantage of the cloud should break down work into jobs that can be run separately on servers that are designed to terminate when not needed. This architectural move from traditional servers to the cloud may be seen as roughly analogous to the move from functional programming to event-driven programming in software design: react specifically to only what is needed, and do not design around waiting or idle time.
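As a closing sketch of that job-oriented style (assuming an SQS work queue and boto3; the queue URL and job handling are hypothetical), a worker drains its queue and then shuts its server down instead of idling:

```python
# Sketch: a worker that drains a job queue and then shuts its server down,
# rather than idling. The queue URL and job handling are hypothetical.
import subprocess

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # hypothetical

def process(body):
    print("processing job:", body)  # real work would go here

while True:
    messages = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=1,
        WaitTimeSeconds=20,  # long poll; returns nothing once the queue is idle
    ).get("Messages", [])
    if not messages:
        break  # nothing left to react to, so stop running
    for message in messages:
        process(message["Body"])
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])

# Launched with shutdown behavior "terminate", powering off discards the server.
subprocess.run(["sudo", "shutdown", "-h", "now"])
```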
If you can achieve this in your cloud-based architecture, then you will truly have a faster, cheaper, more fault-tolerant, and more secure deployment.