Virtualizing your business-critical apps is not a task to be done piecemeal. A fragmented approach can lead to confusion about job responsibilities, poor application performance and, in the end, a lot of money spent on new servers to compensate for an overtaxed virtualized environment.
That may seem simple enough, but the journey to virtualization has a history that dates back to a time when complexity in IT became a symptom of application proliferation.
According to Michael J. Martin of SearchNetworking.com, complexity became such a problem that it made sense to start separating IT into smaller units, each with its own center of knowledge.
That worked out quite nicely for a while, but it created a problem: each silo came to seem smarter than it really was. Expertise became so insular that working across teams within an organization came as a jolt. It started to seem like everyone was arguing to be right without seeing the larger picture.
Now we face a time when smart devices are becoming commonplace, and the applications that run the organization need a common infrastructure. The borders need to fall, and virtualization and the sharing of resources are helping force that change.
Martin writes:
However, virtualization and other application layer technologies demand the removal of “team” borders. Simply put, you can’t build a great infrastructure unless it is tailored to handle specific applications. Networks have to be aware of virtualized machines and network managers have to manage virtual traffic within virtual machines. Going further, not every application can be moved to a new OS version as some are created by combines that have long ago disappeared.
That makes things complicated, and it's why audits are so strongly recommended. You need to audit all those resources that have been "balkanized" across the organization.
Martin suggests IT teams ask themselves a series of questions. Those questions should reflect the need to learn as much as possible about the resources that sit in silos across the organization. Once that is done, you can move on to a process that Martin says should have the following four steps:
- Step 1: Conduct a documentation review. When implementing network-layer services, it is critical to know what applications, protocols, data exchanges and service dependencies exist between the servers that support those applications.
- Step 2: Conduct an audit. Teams need different kinds of data to perform their functions, so a collective effort is required here to design a bottom-to-top audit (a minimal scripted sketch follows this list).
- Step 3: Update documentation. Once the audit is complete, that data should be used to update documentation. This sounds obvious, but collected data often just gathers dust. Furthermore, a process to refresh the data in a timely manner needs to be put in place. Otherwise, you will quickly find yourself back at Step 2.
- Step 4: Use this information to pilot a new technology. With all of this information in place, you can assess a new technology, such as virtualization. Your audit data should serve as your baseline, and your success criteria should in part be based on how those baseline elements are affected by implementation (see the baseline-comparison sketch below).
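Martin's article doesn't prescribe any tooling, but to make Steps 2 and 3 concrete, here is a minimal audit sketch in Python. It probes a hypothetical host inventory for a handful of common service ports and writes the snapshot to a JSON file that can serve as living documentation. The host names, port list and `audit_baseline.json` path are all illustrative assumptions; a real audit would also capture process owners, VM-to-host mappings and traffic flows.

```python
"""Minimal audit sketch (Steps 2-3): probe hosts for common service
ports and persist the snapshot as JSON. Hosts, ports and file names
are hypothetical placeholders -- substitute your own inventory."""

import json
import socket
from datetime import datetime, timezone

HOSTS = ["app01.example.com", "db01.example.com"]  # hypothetical inventory
PORTS = [22, 80, 443, 1433, 3306, 5432]            # common service ports


def probe(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def run_audit() -> dict:
    """Build a snapshot of which services answer on which hosts."""
    return {
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "hosts": {
            host: [port for port in PORTS if probe(host, port)]
            for host in HOSTS
        },
    }


if __name__ == "__main__":
    snapshot = run_audit()
    # Step 3: persist the result so documentation stays current.
    with open("audit_baseline.json", "w") as fh:
        json.dump(snapshot, fh, indent=2)
    print(json.dumps(snapshot, indent=2))
```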
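For Step 4, that snapshot becomes your baseline. Under the same assumptions as above (the file names are placeholders), a baseline comparison might simply diff the services seen before and after the pilot:

```python
"""Sketch of the Step 4 baseline check: compare a fresh audit snapshot
against the stored baseline and flag services that appeared or
disappeared after the pilot. File names carry over from the sketch
above and are assumptions."""

import json


def diff_baselines(before: dict, after: dict) -> dict:
    """Report per-host services that were added or removed."""
    changes = {}
    for host in set(before["hosts"]) | set(after["hosts"]):
        old = set(before["hosts"].get(host, []))
        new = set(after["hosts"].get(host, []))
        if old != new:
            changes[host] = {
                "added": sorted(new - old),
                "removed": sorted(old - new),
            }
    return changes


if __name__ == "__main__":
    with open("audit_baseline.json") as fh:
        before = json.load(fh)
    with open("audit_after_pilot.json") as fh:
        after = json.load(fh)
    print(json.dumps(diff_baselines(before, after), indent=2))
```

Any service that appears or disappears after the migration is a candidate for investigation against your success criteria.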
The network is changing fast, with virtualization a big part of the play. The process of adopting virtualization for business-critical apps can be plagued with headaches, but an audit can help smooth out the issues and provide a full view of resources across the organization.