Cloud computing weathered a lot of storms in 2012 – both literal and figurative – and we should expect more of the same in 2013, as IT administrators finally wrap their heads around the cloud enough to use it properly. If they don’t, cloud computing will be not just unhelpful but an active danger to business users.
Protect From Without
As the Eastern Seaboard continues to deal with the aftermath of Hurricane Sandy, data centers up and down the coast are performing after-action reports on how they could have handled the storm better. When ISPs in Lower Manhattan were down to bucket-brigading diesel fuel up 17 flights of stairs, it may have dawned on them that there should be a better way.
There was, and Hurricane Sandy and other man-made and natural catastrophes should have driven that lesson home: anyone with a mission-critical site – or even a mission-gee-it-would-be-nice-to-be-on-the-web site – needs to give serious thought to re-architecting for the cloud. No company is truly safe from the possibility of disaster, so you have to be prepared to get the hell out of the way when trouble comes, and to have a way to get back on your feet if you do get knocked down.
There are a lot of approaches IT managers can take: site replication, database backup, or a full-on cloud architecture that can nimbly jump from one cloud instance to another. Whether the new server is on the next rack or on the other side of the continent, a workload should be able to be picked up and moved on short notice.
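The database-backup approach, for instance, can be boiled down to a few lines of code. Here is a minimal sketch using Python's built-in sqlite3 module; a real disaster-recovery setup would target your production database and then ship the copy to storage in another region, and the file names here are purely illustrative.

```python
import sqlite3

def backup_database(src_path: str, dest_path: str) -> None:
    """Copy a live SQLite database to a backup file using the
    online backup API, which is safe while the source is in use."""
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    with dest:
        src.backup(dest)  # page-by-page online copy
    dest.close()
    src.close()

def verify_backup(dest_path: str, table: str) -> int:
    """Sanity-check a backup by counting rows in a known table."""
    conn = sqlite3.connect(dest_path)
    (count,) = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
    conn.close()
    return count
```

The verification step matters as much as the copy itself: a backup that has never been restored and checked is only a hope, not a plan.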
In 2013, I predict, we’ll hear more success stories of cloud disaster mitigation than in years past, as IT managers start cluing in to the fact that the world is a dangerous place.
Protect From Within
Cloud is a bit paradoxical: on many levels, it’s very complicated. Getting compute, storage and application management components configured to work together in an environment that actively provisions itself elastically as workloads change is, well – let’s say it’s no picnic.
But that’s from the admin’s point of view. From the user’s side of the house, cloud computing can be very easy. Just sign up with Amazon Web Services or some other public cloud, fill in some credit card info and push a few buttons and boom! You’ve got yourself a website.
That’s a pattern repeated in companies all over the world. Instead of waiting for the IT folks to finally answer your umpteenth request for a server on which to host the new team blog, just fill out an expense report and fire up an instance on Rackspace or Google. What could be easier?
Unfortunately, there’s an even chance that you’ll come out looking like the goat rather than the hero. Sure, you’ve got your cloud instance up and running, but is it properly configured? More importantly, is it secure?
A lot of people don’t realize that when a company like Amazon says its public cloud is secure, they’re not talking about the instances themselves — they’re only talking about the infrastructure that supports the cloud instances. So, if you’re running a Joomla or Tomcat cloud server to handle some task, it’s up to you to lock that server down, says Rand Wacker, VP of CloudPassage.
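What “locking that server down” means in practice covers a lot of ground, but one small, concrete piece of it is auditing the SSH daemon’s configuration for risky settings. The sketch below is a hypothetical illustration (not CloudPassage’s product, and the list of checks is far from exhaustive) of the kind of automated check a do-it-yourself cloud admin rarely thinks to run.

```python
# Hypothetical sketch: flag risky directives in an sshd_config file.
# The checks here are illustrative, not an exhaustive hardening list.
RISKY_SETTINGS = {
    "permitrootlogin": "yes",         # direct root logins over SSH
    "passwordauthentication": "yes",  # brute-forceable password logins
    "permitemptypasswords": "yes",    # accounts with no password at all
}

def audit_sshd_config(config_text: str) -> list:
    """Return warnings for risky directives found in the config text."""
    warnings = []
    for line in config_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        parts = line.split(None, 1)
        if len(parts) != 2:
            continue
        key, value = parts[0].lower(), parts[1].strip().lower()
        if RISKY_SETTINGS.get(key) == value:
            warnings.append(f"{parts[0]} {value} is risky")
    return warnings
```

A check like this only covers one service; the same idea extends to firewall rules, exposed ports and default application credentials.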
This kind of thing is what Wacker and his team call “unsanctioned cloud”: instances of cloud computing that have been launched by non-IT personnel in order to take advantage of the relative ease of starting a cloud instance.
Unsanctioned clouds are not some abstract concern, Wacker emphasized. When one CloudPassage client started a marketing blog running WordPress on Amazon Web Services, hackers were able to break into the WordPress site and make off with the site admin’s login information – credentials the staffer also happened to use for the corporate network. With that clear shot at the corporate network, the hackers carried out a massive breach of the company’s firewalled infrastructure.
In this instance, the cloud is not at fault; the poor practices of the user are to blame. But the cloud makes it much easier for tech-savvy but not tech-professional workers to spin up servers as needed, creating all sorts of new and interesting problems with which IT managers must contend.
Developers, for instance, can be particularly vulnerable to the problem, since you would think (and so would they) that they know their way around a server.
“They may be IT savvy,” Wacker explained, “but they’re not security savvy. DevOps guys don’t know security, either. They’re more interested in automation than IT policies.”
Moving forward, IT managers will need to proactively work with employees to make sure they understand that all instances created should follow the IT policies in place for the company. They might, Wacker added, want to build IT-approved instances that will protect cloud-based servers on the same level as the rest of the company.
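One lightweight way to enforce that kind of policy, assuming IT tags every sanctioned instance, is to sweep the instance inventory for anything missing the required tags. The inventory format and tag names below are hypothetical; in practice the list would come from the cloud provider’s API.

```python
# Hypothetical sketch: find cloud instances that lack IT-required tags.
# In practice the inventory would be pulled from the provider's API.
REQUIRED_TAGS = {"owner", "approved-image", "cost-center"}

def find_unsanctioned(instances: list) -> list:
    """Return the IDs of instances missing any required tag."""
    flagged = []
    for inst in instances:
        missing = REQUIRED_TAGS - set(inst.get("tags", {}))
        if missing:
            flagged.append(inst["id"])
    return flagged
```

Run on a schedule, a sweep like this turns “unsanctioned cloud” from an unknown into a daily report IT can act on.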
Unfortunately, this is one of those problems that may get worse before it gets better. Like new drivers, cloud users are going to find out it’s a lot harder to drive a car than to start one. In 2013, we’ll see more cloud-related breaches until IT departments can get out in front of this problem.