
5 Takeaways from the VMware/Intel Virtualization Chat

Each time ReadWriteWeb holds a live chat with an expert panel, we get more attendees than the time before, and Tuesday was no exception. But not everyone gets a chance to show up and submit a question. If you missed us Tuesday, engineers Steven Shultz from VMware and Mitch Shults from Intel (that’s right, the tag team of Shultz and Shults) discussed the difficulties folks have in moving heavy-workload, mission-critical data centers from a physical to a virtual environment.

The transcript of the entire chat is located here, but if you only have a few minutes, here are some of the main points we learned:

1. Virtualization management software has focused more on business intelligence workloads in the past 12 months. BI databases tend to be, first and foremost, big. But perhaps not as well appreciated or understood is the fact that they’re also unwieldy. VM managers used to measure the behavior of users of certain types of databases in order to determine such factors as peak usage times and maximum record draws per query. VMware’s Shultz (with the “z”) pointed us to recent studies on VMware vCenter Operations Management Suite (PDF available here) showing that the use of parametric formulas to determine usage characteristics really does break down under heavy workloads, including BI.

Here’s a key excerpt from one VMware white paper on data normalcy:

The recognition of the ineffectiveness of parametric methods for determining dynamic thresholds has led VMware to develop a set of non-parametric methods for data analysis. The other key insight of our extensive research is that no single algorithm can be effective in analyzing the myriad of data types present in IT environments. The VMware vCenter Operations product incorporates eight different non-parametric techniques for determining the best thresholds for IT data. And since the algorithms are data agnostic they can work with Boolean type data (up/down metrics), batch (sparse data sets), and even text based metrics to determine whether they are acting normally or abnormally. The key to the accuracy of the algorithms is not only in their non-parametric approach, but also the fact that the exact cycles of data are uncovered and appropriately utilized for best threshold determination. All of these features combine to provide the most comprehensive and accurate determination of thresholds for the entire enterprise.
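
Neither the chat nor the white paper spells out those eight techniques in code, but here is a minimal sketch, in Python with made-up numbers, of the basic difference between a parametric threshold and a non-parametric one on the kind of bursty data a BI system produces:

```python
import numpy as np

def parametric_threshold(samples, k=3.0):
    """Classic parametric approach: assume roughly normal data and
    flag anything outside mean +/- k standard deviations."""
    mu, sigma = np.mean(samples), np.std(samples)
    return mu - k * sigma, mu + k * sigma

def nonparametric_threshold(samples, lower_pct=1.0, upper_pct=99.0):
    """Non-parametric approach: take thresholds straight from the
    observed percentiles, with no assumption about the distribution."""
    return np.percentile(samples, lower_pct), np.percentile(samples, upper_pct)

# A bursty BI-style metric: mostly light load, with periodic heavy query spikes.
rng = np.random.default_rng(42)
load = np.concatenate([rng.exponential(10, 950), rng.normal(400, 50, 50)])

print(parametric_threshold(load))     # assumes one bell curve; misfires on bursty data
print(nonparametric_threshold(load))  # percentiles follow the data's actual shape
```

The real product layers eight such techniques and picks the best fit per metric; the sketch only shows why the normality assumption is the weak link.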

2. Only 29% of respondents to our live poll were satisfied with their virtualization management software today. That’s not a good number, and when Intel’s Shults (with the “s”) asked attendees to offer some clarification as to why they said “no”… well, we didn’t get any clear responses. (None worth sharing on a family-centric service, at least.)

3. Not a lot of folks know about the performance increases in VMware’s latest ESXi hypervisor. The number of virtual CPUs per VM has increased to 32, and each VM can now be configured with up to 1 TB of virtual RAM. So think of a four-way, eight-core Intel Core i7 server with a terabyte of RAM, then imagine reproducing several of these in your virtual data center.
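
The chat didn’t get into scripting, but for admins who automate vSphere, here is a rough sketch of what pushing a VM to those limits might look like with the pyVmomi Python bindings. pyVmomi wasn’t mentioned in the chat, and the vCenter address, credentials, and VM name below are placeholders; certificate and error handling are omitted for brevity.

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- substitute your own vCenter host.
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")
content = si.RetrieveContent()

# Find the target VM by name (assumes a VM called "bi-warehouse" exists).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if == "bi-warehouse")

# Size the VM up to the new per-VM limits: 32 vCPUs and 1 TB of RAM.
# (memoryMB is expressed in megabytes; 1 TB = 1024 * 1024 MB.)
spec = vim.vm.ConfigSpec(numCPUs=32, memoryMB=1024 * 1024)
vm.ReconfigVM_Task(spec=spec)  # a change this large needs the VM powered off

Disconnect(si)
```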

4. The capability of hypervisors to handle virtual workloads is increasing faster than the workloads themselves. This from Intel’s Mitch Shults.

“Let’s take the example of SAP. Five years ago, it was a rare company that would run their core ERP system on a virtualized platform,” Shults explained. “Today, running R/3 on VMware is commonplace. Now, you might argue that R/3 isn’t all that large. Okay. But the big thing at SAP these days is HANA – their in-memory analytics appliance. And HANA is anything but small. Being in-memory means that it wants a big memory footprint. Interestingly, though, SAP’s specifications call for a maximum of 1 TB of physical memory on an 8-socket system. There are reasons for that limitation on SAP’s part, but the point is that 1 TB is within the limits of what VMware can handle these days.”

Shults pulled back a bit, adding a cautionary note for folks who may see this as the go-ahead for running HANA on VMware: that configuration is still in the midst of being tested, he said. VMware’s Shultz added that BI workloads typically require more RAM, but it’s exactly that point that makes them prime candidates for virtualization, since memory prioritization and consolidation are two tasks where virtualization management can add value and performance. (More information on that front in this PDF from VMware.)
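
To put strictly back-of-the-envelope numbers on that consolidation argument (the reclaim figure below is an assumption for illustration, not a VMware benchmark), the math looks something like this:

```python
# Rough consolidation math for a BI host; illustrative numbers only.
host_ram_gb = 1024        # a 1 TB host, as discussed above
vm_configured_gb = 128    # each BI VM configured with 128 GB of RAM
reclaim_fraction = 0.25   # assume page sharing/ballooning reclaims ~25%

effective_gb_per_vm = vm_configured_gb * (1 - reclaim_fraction)

print(int(host_ram_gb // vm_configured_gb))     # 8 VMs with no memory management
print(int(host_ram_gb // effective_gb_per_vm))  # 10 VMs once memory is reclaimed
```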

5. The performance hit a data center absorbs by moving from physical to virtual is approaching zero, but can never go below zero. This is something that has intrigued me about this subject from the beginning: Why shouldn’t it be theoretically possible to overcompensate for the overhead introduced in the virtualization process, making general performance faster when viewed in the broad sense?

There may still be some valid reasons why it can’t happen. However, as VMware’s Shultz explained, there are some processes where the hypervisor can cheat the laws of physics. “Let’s take a specific example: If we have an application composed of multiple virtual machines communicating over the network and those VMs are running on the same physical server, they may ‘think’ they’re talking at 100 Mb [megabit] or 1 Gb, but they’re actually talking ‘within the server’ at bus speeds,” he told us. “Not breaking any laws of physics, but certainly performing faster than physical!”

But as Intel’s Shults added, overhead is introduced with each virtual transaction. “It’s possible to cheat the laws of physics by taking full advantage of virtualization to put more communicating workloads within a single machine,” he said. “If a VM needs to communicate with other VMs on other machines, however, there is a finite amount of overhead involved, since communication is being mediated by the hypervisor. That overhead is falling with each generation of Intel and VMware technology, but it can never be exactly zero. It’s small enough not to notice for most applications today, however.”
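
A toy model makes both engineers’ points concrete. The bandwidth and overhead figures below are assumptions for illustration, not measurements from either company:

```python
def transfer_seconds(payload_gb, bandwidth_gb_per_s, hypervisor_overhead=0.0):
    """Time to move a payload at a given bandwidth, with the hypervisor's
    mediation modeled as a fractional surcharge on the wire time."""
    return (payload_gb / bandwidth_gb_per_s) * (1.0 + hypervisor_overhead)

payload = 10.0  # GB exchanged between two cooperating VMs

# Same physical host: the traffic never touches a NIC, so it moves at
# bus-like speeds (assume 20 GB/s) with effectively no wire to pay for.
print(transfer_seconds(payload, 20.0))         # ~0.5 seconds

# Different hosts: a 1 Gb/s NIC is about 0.125 GB/s, plus a small (and
# shrinking) hypervisor tax, assumed at 5% here.
print(transfer_seconds(payload, 0.125, 0.05))  # ~84 seconds
```

As Shults says, the tax term keeps falling with each hardware and hypervisor generation, but it multiplies every cross-host transaction, so it can approach zero without ever reaching it.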

A ReadWriteWeb chat is like the interviews we conduct with experts every day, but with you as one of the panelists. That is, if you’re there to join us. Stay with us on RWW as we continue our in-depth live chats, made possible through the generous assistance of the folks at CoverItLive.

VMware and Intel are principal sponsors of ReadWriteWeb Enterprise Channels.
