Virtualizing Tier 1 Applications

With virtualization finding its way into every nook and cranny of the data center, it would seem that tier 1 applications are the only safe harbor for the few remaining “Server Huggers” out there.  Their mantra usually sounds something like this …

“My application is too I/O intensive for virtualization,” or “MY xyz application vendor doesn’t support VMware” or possibly “My application is too important to be virtualized” (this is one of my favorites).  Believe it or not, I even heard one guy say “you can virtualize my server when you pry it from my cold dead hands” … um, wow.  He has issues.  Last I heard, he was de-virtualizing a server farm at the NRA.  Hehehe.

Anyway, for the rest of us with our heads NOT buried in the sand, I’m here to tell you that tier 1 applications can and should be virtualized.  I’ll go so far as to say that if you’re not virtualizing tier 1 applications, you are doing your company a major disservice.

Below is a brief overview of a presentation I gave in Cincinnati a few weeks ago to a group of about 75 professionals.  The topic was “Virtualizing Microsoft Exchange.” And while the content that follows is geared towards the Microsoft Exchange application, it can really apply to any tier 1 application.

Performance

I’ll start with performance because this is typically the first objection to virtualizing a tier 1 app.  The perception is that virtualization creates too much overhead, and therefore applications in a VM will certainly underperform applications running on a physical server.  That perception was born out of a previous reality: in the early days, virtualization really did introduce enough overhead to warrant physical servers for applications with high I/O. But a perfect storm is a-brewin’, and I summarize it with the following equation:

hypervisor improvements + server hardware improvements + application improvements =
better than native performance

That’s right.  Mileage will vary, but given a properly architected solution, virtual can actually outperform physical. And even in scenarios where physical outperforms virtual, the delta is measurable but not noticeable in practice.  So let’s take a closer look at the three areas in the equation above.

Hypervisor Improvements

The hypervisor (a.k.a. the virtualization layer, a.k.a. the Server Hugger’s worst nightmare) has come a long way in the past few years.  In VMware’s ESX product, the latest version has the following performance improvements over previous versions (and after the list, a quick sketch of how you might check a couple of these limits against your own inventory):

  • Increased guest OS memory to 64GB
  • Increased physical RAM on ESX to 256GB
  • TCP segment offload to further lower CPU utilization
  • NUMA optimizations improve multiple VM performance
  • Support for 64-bit clustering with boot from SAN
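
Here’s a minimal sketch of checking the two memory limits above against a real environment, using pyVmomi (VMware’s Python SDK, a newer tool than the talk itself).  The vCenter hostname and credentials are placeholders; point it at your own environment.

```python
# Sketch: compare host RAM and per-VM memory against the ESX maximums above.
# Assumes pyVmomi is installed; hostname/credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

MAX_HOST_RAM_GB = 256   # ESX physical RAM maximum cited above
MAX_GUEST_RAM_GB = 64   # per-VM guest memory maximum cited above

context = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com", user="admin",
                  pwd="password", sslContext=context)
content = si.RetrieveContent()

# Walk every host in the inventory and report physical RAM.
host_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in host_view.view:
    ram_gb = host.summary.hardware.memorySize / (1024 ** 3)
    print(f"{host.name}: {ram_gb:.0f} GB physical RAM "
          f"(max supported: {MAX_HOST_RAM_GB} GB)")
host_view.Destroy()

# Walk every VM and flag any configured at or above the guest maximum.
vm_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in vm_view.view:
    if vm.config is None:      # skip templates / inaccessible VMs
        continue
    vm_gb = vm.config.hardware.memoryMB / 1024
    flag = "  <-- at/over guest max" if vm_gb >= MAX_GUEST_RAM_GB else ""
    print(f"  {vm.name}: {vm_gb:.1f} GB configured{flag}")
vm_view.Destroy()

Disconnect(si)
```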

These improvements alone bring almost all tier 1 applications within reach, and combined with the next two, almost no tier 1 app can hide from becoming a candidate for virtualization.



Server Hardware Improvements

We’re now seeing server hardware with 256GB+ of physical RAM. Multi-core CPUs with 2 and 4 cores are running in production today, and 6-, 8- and 12-core parts are coming soon. And best of all, hardware-assisted virtualization technologies are emerging, pushing the virtualization overhead down into the hardware and getting the hypervisor ever closer to native performance.

And because the vast majority of applications simply can’t fully utilize hardware with this much horsepower, virtualization is, ironically, the only way to truly capture the full ROI of these physical investments.
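
To put some rough numbers on that ROI point, here’s a back-of-the-envelope sketch.  Every figure in it is an illustrative assumption, not data from any real environment:

```python
# Back-of-the-envelope consolidation math. All numbers are illustrative
# assumptions, not measurements from a real environment.

physical_servers = 30          # servers before consolidation
avg_utilization = 0.05         # ~5% average CPU utilization (see below)
cost_per_server = 6_000        # hypothetical purchase cost per box
target_utilization = 0.60      # comfortable ceiling for consolidated hosts

# Total useful work being done today, expressed in "fully busy servers".
useful_work = physical_servers * avg_utilization      # 1.5 servers' worth

# Hosts needed to carry that work at the target utilization,
# plus one spare host for failover headroom.
import math
hosts_needed = math.ceil(useful_work / target_utilization) + 1

print(f"Work equivalent of {useful_work:.1f} fully utilized servers")
print(f"Consolidated hosts needed: {hosts_needed}")
print(f"Hardware avoided on refresh: "
      f"${(physical_servers - hosts_needed) * cost_per_server:,.0f}")
```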



Application Improvements

As applications evolve, bugs are fixed and bad code is optimized, so performance improves within the application itself, further reducing the need for a dedicated physical server. Speaking specifically about Microsoft Exchange, here are the performance improvements in Exchange 2007 over Exchange 2003:

Exchange 2003                     Exchange 2007
32-bit Windows                    64-bit Windows
900MB database cache              Multi-GB database cache
4KB block size                    8KB block size
High read/write ratio             1:1 read/write ratio
Requires high-end storage         Affordable storage (iSCSI)
Storage is common pain point      Eliminates storage pain point
                                  50% reduction in disk I/O
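
To see what a 50% disk I/O reduction means for storage sizing, here’s a rough sketch.  The per-mailbox and per-disk IOPS figures are hypothetical placeholders, not official Microsoft sizing guidance:

```python
# Rough storage sizing sketch showing what a 50% disk I/O reduction means
# at scale. Per-mailbox and per-disk IOPS figures are hypothetical
# placeholders, not official Microsoft sizing guidance.
import math

mailboxes = 5_000
iops_per_mailbox_2003 = 1.0                          # assumed baseline
iops_per_mailbox_2007 = iops_per_mailbox_2003 * 0.5  # 50% reduction (table)

def spindles_needed(total_iops: float, iops_per_disk: float = 150.0) -> int:
    """Disks required to service a given IOPS load (assumes ~150 IOPS
    per 15K RPM spindle, a typical figure for drives of that era)."""
    return max(1, math.ceil(total_iops / iops_per_disk))

for label, per_mbx in [("Exchange 2003", iops_per_mailbox_2003),
                       ("Exchange 2007", iops_per_mailbox_2007)]:
    total = mailboxes * per_mbx
    print(f"{label}: {total:,.0f} IOPS -> ~{spindles_needed(total)} disks")
```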

Of course the improvements for this piece of the equation will vary from one app to the next.



Bottom Line: Performance should not be a barrier to virtualizing an application.


A Virtual Server is Better than a Physical Server

Tier 1 applications are the most critical applications in your organization, and therefore they need to run on the best infrastructure possible.  So, almost by definition, tier 1 applications need to run in a VM.  Here are a few of my favorite reasons why a VM is better than a physical server.  Keep in mind, these aren’t the only reasons, just my favorites.

Reason #1: Better up time

The “eggs in one basket” argument no longer applies.  And for those of you who don’t know what I’m talking about, the objection usually sounds something like this … “If I put 30 VMs on a single physical server, and that physical server crashes, then I’ve just lost 30 applications instead of one!”  This was a very legitimate concern five years ago.  But today you can get better uptime in a VM than you can with a physical machine.  In the worst case scenario, if a physical server dies, those VMs are automatically powered up on a different physical server.  In my experience, the VMs are usually back up and taking requests in under two minutes (and yes, I’ve timed it with a stopwatch).  And that’s the worst case scenario for a VM today!  What’s the best case scenario for restoring a physical server after a hardware crash?  Weeks?  Days?  Hours (if you’re lucky and really prepared)?

So with today’s technology (and it’s only going to get better with what’s coming soon), the worst case scenario for a VM is better than the best case scenario for a physical server.  And you might ask, what’s the best case scenario for a VM?  Even with hardware maintenance, you can achieve 100% uptime.  How?  Check out a few of VMware’s features like VMotion, DRS and Update Manager.
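
If you want to put numbers on that claim, here’s a quick availability calculation.  The failure counts and the eight-hour physical rebuild time are illustrative assumptions; the two-minute VM restart is the stopwatch figure from above:

```python
# Comparing availability: a 2-minute automated VM restart (worst case,
# per the stopwatch timing above) vs. hours rebuilding a physical box.
# Failure counts and rebuild time are illustrative assumptions.

MINUTES_PER_YEAR = 365 * 24 * 60

def availability(failures_per_year: int, minutes_per_recovery: float) -> float:
    """Percent uptime given a failure rate and per-incident recovery time."""
    downtime = failures_per_year * minutes_per_recovery
    return 100.0 * (1 - downtime / MINUTES_PER_YEAR)

# Assume two hardware failures a year in both cases.
print(f"VM restarted in 2 min:       {availability(2, 2):.4f}% uptime")
print(f"Physical rebuilt in 8 hours: {availability(2, 8 * 60):.4f}% uptime")
```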


Reason #2: Better hardware utilization

The average server utilization across the globe is less than 10%, and in my experience it’s often less than 5%.  Why?  A single application can rarely harness the power of the hardware it’s running on.  And for a ton of different reasons (which I won’t go into here), critical applications typically require a dedicated server.  That’s like buying a Ferrari and never driving it more than 5 mph … what an awful waste!  Get the most for your money by putting each app in a VM and running multiple VMs per physical server.  Open that baby up and let it do what it was built to do!  I think the following two screenshots do a great job of showing what I’m talking about.

[Screenshot: CPU Utilization Before VMware]

[Screenshot: CPU Utilization After VMware]



Reason #3: Avoid over provisioning

Why waste time and energy planning for future capacity (which is really nothing more than an educated guess based on a ton of assumptions)?  The tendency has been to over provision hardware to account for future growth, but this often leads to underutilized hardware.  With virtual machines, additional CPU and RAM can be added at any time with a few clicks of a mouse.  And moving to more powerful systems in the future can be done in real time with VMotion and/or Storage VMotion.  With virtualization, it only makes sense to build your application for the capacity you need today and scale up as necessary.
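
For the curious, here’s a minimal sketch of what that reconfiguration looks like through the API, again assuming pyVmomi and the connection `si` from the earlier sketch.  The VM name is a placeholder, and hot-adding CPU/RAM to a running VM depends on guest OS support; otherwise the reconfigure takes effect after a power cycle:

```python
# Sketch: growing a VM's CPU/RAM through the API instead of buying ahead.
# Assumes an existing pyVmomi connection "si" (see the earlier sketch);
# "app01" is a placeholder VM name.
from pyVmomi import vim

def find_vm(content, name):
    """Locate a VM by name anywhere in the inventory."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    try:
        return next(vm for vm in view.view if vm.name == name)
    finally:
        view.Destroy()

vm = find_vm(si.RetrieveContent(), "app01")

# Ask for more horsepower: 4 vCPUs and 8 GB RAM.
spec = vim.vm.ConfigSpec(numCPUs=4, memoryMB=8192)
task = vm.ReconfigVM_Task(spec=spec)
print(f"Reconfigure task submitted: {task.info.key}")
```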



Reason #4: Better Security

Typically, protection engines come in two forms: host based and network based.  The problem with network based security software is that it has no (or very limited) visibility into the host.  And the problem with host based security software is that it runs in the same context as the malware it’s trying to protect against.  And the creators of malware are not stupid! They continually find new ways to hide their malware and/or attack the protection engine itself, creating a never-ending game of cat and mouse.

But we now have a new, trusted layer, with the much smaller codebase of the hypervisor, where we can provide protection from outside of the operating system.  A protection engine at this layer provides a much stronger defense because it sits “underneath” the VM, completely isolated from the malware.  And this is a great place for a protection engine to live because it can see all I/O of the VM and inspect each of the virtual components (CPU, memory, network and storage).  Better yet, we now have the ability to do things like:

  • Intercept, view, modify and replicate I/O traffic from one, many or all VMs
  • Provide inline protection or passive monitoring
  • Mount and read virtual disks

[Diagram: Securing a Virtual Machine]
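
As a small illustration of that last bullet (mounting and reading virtual disks), here’s a sketch that parses a VMDK descriptor, the small text file that describes a virtual disk’s extents, entirely from outside the guest OS.  The datastore path is a hypothetical placeholder:

```python
# Sketch: reading a virtual disk's descriptor from outside the guest OS.
# A VMDK descriptor is a small text file listing the extents that make up
# the disk; being able to read it without the guest's involvement is part
# of what makes out-of-guest inspection possible. Path is a placeholder.

def read_vmdk_descriptor(path: str) -> dict:
    """Parse key=value settings and extent lines from a VMDK descriptor."""
    settings, extents = {}, []
    with open(path, "r", encoding="ascii", errors="replace") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue                       # skip blanks and comments
            if line.startswith(("RW ", "RDONLY ", "NOACCESS ")):
                # e.g. RW 16777216 VMFS "app01-flat.vmdk"
                extents.append(line)
            elif "=" in line:
                key, _, value = line.partition("=")
                settings[key.strip()] = value.strip().strip('"')
    return {"settings": settings, "extents": extents}

info = read_vmdk_descriptor("/vmfs/volumes/datastore1/app01/app01.vmdk")
print("Disk type:", info["settings"].get("createType"))
for extent in info["extents"]:
    print("Extent:", extent)
```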



Reason #5: DR made easy

In the physical world, DR is a pain in the butt and super expensive.  The reason is that DR solutions for physical servers often require similar hardware at the DR site to avoid driver, hardware and software compatibility issues.  These dependencies are eliminated in a virtual world, which means any VM can run on any physical server with an ESX hypervisor.  And because a VM is completely encapsulated, the entire VM exists as a small set of files.  This simplifies replication, and therefore simplifies the process of keeping your production and DR environments in sync.  And finally, servers at the DR site can be used for other purposes, like test and development, until they are required for DR.  Which means an investment in DR infrastructure will not sit idle.
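
Just to drive home the encapsulation point, here’s a toy sketch that “replicates” a VM by copying its file set.  Real DR deployments use proper replication tools, and all paths here are hypothetical placeholders:

```python
# Sketch: because a VM is just a small set of files, replication can be
# as simple as copying those files. Real deployments use array- or
# host-based replication; this only illustrates the encapsulation point.
# All paths are hypothetical placeholders.
import shutil
from pathlib import Path

VM_DIR = Path("/vmfs/volumes/prod-datastore/app01")
DR_DIR = Path("/mnt/dr-replica/app01")

# The handful of file types that fully encapsulate a VM.
VM_FILE_TYPES = ("*.vmx", "*.vmdk", "*.nvram", "*.vmsd")

DR_DIR.mkdir(parents=True, exist_ok=True)
for pattern in VM_FILE_TYPES:
    for f in VM_DIR.glob(pattern):
        shutil.copy2(f, DR_DIR / f.name)   # copy, preserving timestamps
        print(f"replicated {f.name}")
```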


Support

I love it when I hear someone say “my application vendor says they won’t support VMware.” Hmmmmm.  Here’s a crazy question for ya: isn’t it VMware’s job to support VMware?  Now, I’m sure what they really mean is that the vendor won’t support their application in a virtualized environment.  But just to make things clear, if you have a problem with VMware … call VMware.

And support for applications in a virtualized environment is changing rapidly.  Examples are numerous, but two big ones that come to mind are SAP and Microsoft.  Earlier this year, SAP announced full support for its software on VMware.  And just recently, Microsoft announced the Server Virtualization Validation Program (SVVP), under which it will support its operating systems and a good list of its applications in a virtualized environment. And VMware’s ESX is the industry’s first hypervisor to be validated by Microsoft.

What about those vendors who still don’t support their applications in a virtualized environment?  Most of my customers do two things.  First, they put pressure on the vendor to start providing support.  For large companies, this can be very effective, since software providers want to keep their big customers happy.  Second, many of them keep a “swing server.”  When a vendor’s support team requires them to reproduce a problem on physical hardware, they simply V2P the VM onto the swing server and continue on their merry way.  (Yes, I know, it isn’t always as easy as I make it sound.  Though it often is.)


Still not convinced?

[Table: survey of 500 VMware customers and the tier 1 applications they have virtualized]

The table above shows the results of a survey of 500 VMware customers taken over a year ago, and the numbers are growing rapidly.   Simply put, customers are virtualizing tier 1 applications today.


