A lot of electrons have already been consumed over the Sony PlayStation Network outage, which lasted several weeks and knocked more than 75 million gamers and music lovers offline. Also in the news last week was how Facebook brought its new Oregon data center (the company's third) online after months of extensive testing. But what hasn't been covered as extensively is what enterprise IT managers can learn from both of these experiences. As a way to start off my tenure here at ReadWriteWeb, I offer a few suggestions.

First, apologize early and get out in front of any breach. Disclose what was compromised, make amends, and offer something free to compensate the victims. Don't pull a Steve Jobs and clam up. Whatever you do, do it within 24 hours of the event, and sooner if you can. While your corporate legal team will want to wait, convince them that waiting has its own costs and drawbacks. It took Sony three weeks to publish this post on its own blog explaining anything.

Second, consider carefully how you respond to the public and your customers. Overall, be transparent, take a positive tone, and understand the customer service implications of your actions. These are standard social media response tactics, but they bear mentioning in this context. Use social media to get your message out: blogs, Twitter, and other channels. Jeremiah Owyang covers these tactics, along with other advice on how to plot your crisis-response social media strategy, here.

Contrast this with how transparent Facebook has been with its own data center rollout here, providing details on its architecture and testing plan, among other items.

If you like thinking in terms of a flow chart, take a look at this one from the US Air Force that walks you through its rules of engagement for bloggers. Every IT manager should print out a copy and put it on the wall as a reference.

Third, make sure that when you do restore service you can handle the flood of traffic, as all of us are somewhat OCD about retrying or reloading our browsers when we hit a problem page. When the Sony PSN came back online last week, there were two separate issues. First, the network was overwhelmed with password resets (which makes sense, since users' passwords were among the key data elements compromised), and Sony had to take things offline for 30 minutes, according to its blog. Then later in the week it had to take down the password reset page itself, because of coding errors that could have made it easier for someone to hack the reset operation. Oops.
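One common defense against exactly this kind of restoration stampede is to throttle incoming requests so the backend sees a steady, survivable rate. As a rough illustration (not Sony's actual architecture, and the class and parameter names here are invented for the sketch), a token-bucket limiter lets through short bursts but caps the sustained rate:

```python
import time


class TokenBucket:
    """Minimal token-bucket limiter: allow roughly `rate` requests per second,
    with bursts of up to `capacity` requests.

    Requests that arrive when the bucket is empty are rejected (or queued),
    protecting a just-restored service from everyone hitting reload at once.
    """

    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity      # start full so the first burst succeeds
        self.clock = clock          # injectable clock, handy for testing
        self.last = clock()

    def allow(self):
        """Return True if this request may proceed, False if it should be throttled."""
        now = self.clock()
        # Refill tokens for the time elapsed since the last request, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A front-end or load balancer would call `allow()` per request (or per user) and return a "please try again shortly" page on False, rather than letting the flood reach the password-reset backend.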

One of the things Facebook did in bringing up its new data center was to test it with actual traffic, segmenting a portion of its Virginia data center to make it look like a third facility. That was smart, and it enabled the company to find problems that, taking a page from Donald Rumsfeld's playbook, it called "unknown unknowns."

Fourth, put in place better leak protection, especially for your outward-facing Web apps. Sony announced the measures it had taken:

"updating and adding advanced security technologies, additional software monitoring and penetration and vulnerability testing, and increased levels of encryption and additional firewalls. The company also added ... an early-warning system for unusual activity patterns that could signal an attempt to compromise the network."

There are two basic types of technologies that are suggested by these remarks: