A huge issue with desktop virtualization is the storage question. How do you best allocate storage when potentially thousands of people are working on virtualized desktops?
It’s a different world. Before, people kept their data on their own desktops and laptops. Now the data sits in a shared storage environment.
Crump writes that you can take advantage of thinly provisioned volumes so you do not have to allocate all the potential capacity up front, saving resources you may need later. Thin provisioning limits the allocation to the storage actually in use, which means you do not have to commit the full capacity at the start. In turn, that freed storage can be used when it is needed.
“Additionally the amount of virtual desktop storage that is going to be needed is often difficult to predict, since so many optimization techniques will be applied. Having that space allocation dynamically eases this burden.”
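The idea behind thin provisioning can be sketched in a few lines. This is a toy model, not a real storage API: the class name, block size, and method names are all illustrative. The point is that the desktop sees its full logical capacity, while physical storage is only consumed as blocks are actually written.

```python
# Toy sketch of a thin-provisioned volume. All names here are
# illustrative, not from any real storage product.

class ThinVolume:
    def __init__(self, logical_size_gb, block_size_gb=1):
        self.logical_size_gb = logical_size_gb  # capacity promised to the desktop
        self.block_size_gb = block_size_gb
        self.allocated_blocks = set()           # blocks backed by real storage

    def write(self, offset_gb):
        """Back the block containing offset_gb with physical storage on first write."""
        if offset_gb >= self.logical_size_gb:
            raise ValueError("write beyond logical size")
        self.allocated_blocks.add(offset_gb // self.block_size_gb)

    @property
    def physical_gb(self):
        return len(self.allocated_blocks) * self.block_size_gb

# A desktop sees a 100 GB disk, but after writing to only 3 GB of it,
# just 3 GB of physical storage is consumed:
vol = ThinVolume(100)
for gb in (0, 1, 2):
    vol.write(gb)
print(vol.physical_gb)  # 3, not 100
```

Real implementations (LVM thin pools, VMware thin disks, and similar) work at much finer block granularity, but the allocate-on-write principle is the same.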
The second part has a lot to do with what is known as the “boot storm.” A boot storm occurs when there is a sudden surge in demand across a virtualized network. This can happen, for instance, at the beginning of the work day when everyone logs in at about the same time.
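A toy calculation makes the boot-storm spike concrete. The numbers here are made up for illustration (300 desktops, roughly 500 read requests per boot): the comparison simply shows how peak load differs when logins land in the same minute versus spread across half an hour.

```python
# Toy illustration of a boot storm. All numbers are invented for
# illustration; real boot I/O profiles vary widely.
from collections import Counter

def io_load(login_minutes, reads_per_boot=500):
    """Total read requests per minute, given each desktop's login minute."""
    load = Counter()
    for minute in login_minutes:
        load[minute] += reads_per_boot
    return load

# 300 desktops all logging in at 9:00 sharp vs spread over 9:00-9:29:
storm = io_load([0] * 300)
staggered = io_load([i % 30 for i in range(300)])
print(max(storm.values()))      # 150000 reads hit the storage in one minute
print(max(staggered.values()))  # 5000 per minute when spread out
```

The storage array has to be sized for that one-minute peak, which is why boot storms drive so much of the design.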
The solution may lie in creating master images that can serve hundreds of virtual desktops. The question becomes: what is most efficient? Space needs to be optimized, and that means eliminating as much duplication as possible.
Crump maintains that by following these principles, a company can decrease its capacity requirement as much as 90%.
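A back-of-the-envelope calculation shows how a shared master image gets into that range. The figures here are illustrative assumptions, not from the article: 1,000 desktops, a 20 GB base image, and about 2 GB of unique per-user data each.

```python
# Illustrative capacity comparison: full clones vs one shared master image.
# All numbers are assumptions chosen for the example.
desktops = 1000
base_image_gb = 20
unique_per_desktop_gb = 2

# Full clones: every desktop stores its own copy of the base image.
full_clones_gb = desktops * (base_image_gb + unique_per_desktop_gb)

# Shared master: one copy of the base image, plus per-desktop deltas.
shared_master_gb = base_image_gb + desktops * unique_per_desktop_gb

savings = 1 - shared_master_gb / full_clones_gb
print(f"{full_clones_gb} GB -> {shared_master_gb} GB ({savings:.0%} saved)")
```

With these assumed numbers, 22,000 GB shrinks to about 2,020 GB, a roughly 91% reduction, consistent with the order of savings Crump describes.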
That’s a considerable reduction.
Boot storms lead to a number of complex questions. Storage on a desktop or laptop is pretty inexpensive, which can raise the question of why use virtualization at all. That’s a fair question, but the overall reasons for adopting desktop virtualization are far more numerous, especially in terms of security.
Photo by RobinUtrac