October 16, 2006
Yes, you can overvirtualize!
I was pointing out the benefits of machine virtualization in my last blog entry. It's a pretty cool technology: the standardized hardware platform it offers, and the admin savings that flow from that, make it attractive for many customers, especially now that very powerful blades with 8 cores or more are available at attractive pricing.
However, there are issues with virtualization that you need to be careful with. Application reliability is the main one. If you run an application on a box and then overload that application, bad things can happen: out-of-memory errors, bugs triggered by really poor response times, and so on. Usually, you avoid those issues by running the application on an overprovisioned box or cluster. This means you have enough hardware that the boxes sit at around 60% CPU even under full load.
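To make that 60% rule concrete, here's a back-of-the-envelope sizing sketch in Python (the function name and the example numbers are mine, purely illustrative):

```python
import math

def boxes_needed(peak_load_cores, cores_per_box, target_util=0.6):
    """How many boxes so that full load sits at about target_util CPU."""
    usable_per_box = cores_per_box * target_util
    return math.ceil(peak_load_cores / usable_per_box)

# A peak load worth 24 cores, spread over 8-core blades at a 60% target:
print(boxes_needed(24, 8))       # -> 5 (each box keeps ~40% headroom)
print(boxes_needed(24, 8, 1.0))  # -> 3 (no headroom at all)
```

The gap between those two answers, two whole blades, is exactly the margin that consolidation is tempted to reclaim.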
The virtualization angle is about using that margin to run more virtual machines on the box. Of course, this means you lose the margin, the spare CPU cycles. So you need to be careful about spending those cycles on consolidation when the box hosts an important application. You can still run that application in a VM. You just need to make sure that you don't give away the CPU margin to other VMs unless you can guarantee that when the important virtual machine needs those CPUs, it gets them instantly, i.e. run the important VM with a higher priority for the physical resources than the other VMs assigned to that physical box.
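As a toy illustration of that priority rule (this is a strict-priority model I made up for the sketch, not any real hypervisor's scheduler API), the important VM is satisfied first and the consolidation VMs only get whatever is left:

```python
def allocate_cpu(demands, weights, capacity):
    """Satisfy VMs in descending weight order; the important VM is never squeezed."""
    remaining = capacity
    allocation = {}
    for vm in sorted(demands, key=lambda v: weights[v], reverse=True):
        allocation[vm] = min(demands[vm], remaining)
        remaining -= allocation[vm]
    return allocation

# 8-core box: a critical VM spikes to 6 cores while two batch VMs want 2 each.
demands = {"trading": 6.0, "batch1": 2.0, "batch2": 2.0}
weights = {"trading": 100, "batch1": 10, "batch2": 10}
print(allocate_cpu(demands, weights, capacity=8.0))
# trading gets its full 6 cores; one of the batch VMs is squeezed to 0
```

Real schedulers use proportional shares rather than strict priority, but the effect you want is the same: the critical VM's demand is met instantly and the pain lands on the consolidation workloads.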
Eliminating idle CPU is a worthy goal and offers the potential to save customers a lot of money. But care needs to be taken to make sure that the critical applications in an enterprise are not squeezed under load by this consolidation. Allocate sufficient CPU, or add real-time prioritization, so that if the VM needs the resources, they are instantly available. Instant means instant. If it takes a couple of minutes to gain access to the resources, then you may have reliability problems or lost business because of the poor response times while the VM is waiting.
There are many types of application where a delay of a couple of minutes for this resource is acceptable: where the load builds slowly and you can provision additional resources over time, or where the extra load can be anticipated using a calendar. But there are some applications, like trading applications, where the load can increase by a factor of 10 in a second and the application must be able to deal with that spike. Usually, these applications are overprovisioned by a factor of ten to absorb this load. A virtualized environment would need to be carefully structured to offer the same flexibility when that load spike comes in.
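A quick sanity-check sketch for that spike case (hypothetical numbers and function names, just to show why preemption priority matters):

```python
def survives_spike(vm_baseline_cores, spike_factor, other_vms_cores,
                   box_cores, can_preempt=True):
    """Can the box absorb the VM's spike?  With can_preempt, the other VMs
    can be instantly throttled; without it, their cores are locked in."""
    needed = vm_baseline_cores * spike_factor
    available = box_cores if can_preempt else box_cores - other_vms_cores
    return needed <= available

# A trading VM idling at 0.5 cores on an 8-core box, 6 cores of batch beside it:
print(survives_spike(0.5, 10, 6.0, 8.0))                     # True:  needs 5, 8 free
print(survives_spike(0.5, 10, 6.0, 8.0, can_preempt=False))  # False: needs 5, 2 free
```

Same box, same workloads: the only difference between surviving the 10x spike and falling over is whether the critical VM can instantly reclaim the cycles the other VMs are using.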
So, the bottom line is: virtualization is a good thing. Care needs to be taken, however, when assigning VMs to physical boxes and when setting up the rules on priorities and on how much spare CPU you leave on a physical box. If you don't, then depending on the scenario, you'll likely end up with unhappy customers and business sponsors.
So... would a policy of overvirtualization be overvirtualizationism? Or would a company with a tendency to do it be overvirtualizationistic?
Good points. The problem I've had with virtualized systems is that a lot of users of these systems (developers or admins further down the line who just get the machine name and login info) end up running out of resources because they don't know, or don't remember, that they're really just using a slice of the resources. CPU is easy to remember, but things like network and memory are not, especially when it's a dynamically shared resource with fractions being doled out to the virtual devices, fractions that might change over time. I think a lot of these users have gotten used to a full gigabit pipe, so their batch job works great at 12am, but at 8am, when they only get 1/5th of their pipe, it's all of a sudden a cow. Everyone is going to need to start thinking more virtualized, and treat every resource as a dynamic quantity.
Posted by: Rob Wisniewski | Oct 16, 2006 11:21:42 AM