Archive for September, 2009
–The vTrooper Report –
In an effort to firm up an internal billing and allocation model (GaaS – Gouging as a Service) I’ve been struggling with the concept of cost per VM. I asked a simple question in the twittersphere about that idea and it turned into a discussion and well… got out of hand. I apologize for that, as this is a better format to explain. (Special thanks to @asweemer for a dumping ground)
If I had a Nickel for each VM…
At VMworld 2009 there was a presentation in the keynote that showed the price of a VM hosted with Terremark: $.05 per hour. I thought, wow. A nickel per hour. If I had a nickel per VM-hour, how much would I have available to spend on coffee?
Then I thought, wait. I have VMs. How much do they really cost me per hour? Well, the answer is… it depends. Old servers with high power consumption and low density versus new systems with Intel 5500s packed into blades have different burn rates visible to different systems (power, cooling, depreciation). I haven’t found a great model to break those units down to my satisfaction yet. I need another way.
As a general practice, I keep a few maxims in mind when creating a VM:
- S – 1 vCPU, 1 GB RAM, 1 GB Net, 10 GB Disk
- M – 2 vCPU, 2 GB RAM, 1 GB Net, 20 GB Disk
- L – 4 vCPU, 4 GB RAM, 2 GB Net, 40 GB Disk
Seems simple enough, but it doesn’t really generate a cost model on a consistent basis. Hardware continues to change, and each VM that consumes resources does so at different rates and times of the day. A VM that isn’t doing anything isn’t really ‘consuming’ anything, right? I thought I would try to break it down further by creating a four-quadrant block with two macro categories: Compute (CPU and Memory) and I/O (Network and Disk).
Each resource area could increase or decrease for a reason without changing the size of the original maxim it was created under. This allows for small variations in size without having a customer yell that their bill went up by $2 this month.
The Measurable Unit
Use a unit of measure to identify the four quadrants: vCPU : vMEM : vNET : vDISK, or C:M:N:D. Then overlay the VM creation to count up the units. This way the growth of a VM during its lifecycle can be adequately allocated back to the proper IT metric. Using the VM creation maxims above, this might be:
- S – 1:1:1:1
- M – 2:2:1:2
- L – 4:4:2:4
This isn’t perfect, but it at least allows the average CPU cost to be allocated separately from the memory, network, and disk costs. After all, you don’t usually get to upgrade all four parts of the quadrant in the same fiscal year. This also gives you a way to trend your average cost rate per unit over a period of months and years to see which cost areas are improving. It is an interesting metric for both the business and IT. Win-win in my book, even if no one internally ever has to pay the values back (showback). It also helps police which VMs are consuming too much of a specific resource, which would skew the numbers if you simply took the cost of the ESX hosts and divided by the number of VMs.
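To make the showback idea concrete, here is a minimal sketch of the C:M:N:D math. The per-unit dollar rates below are made-up placeholders (the post doesn’t assign real values until Part 2); only the S/M/L unit counts come from the maxims above.

```python
# Hypothetical per-unit monthly rates for each quadrant; the real rates
# would come from your own CapEx/OpEx allocation, not from this post.
UNIT_RATES = {"C": 10.0, "M": 5.0, "N": 2.0, "D": 1.0}  # $ per unit per month

# VM size maxims expressed as C:M:N:D unit counts, as listed above.
SIZES = {
    "S": {"C": 1, "M": 1, "N": 1, "D": 1},
    "M": {"C": 2, "M": 2, "N": 1, "D": 2},
    "L": {"C": 4, "M": 4, "N": 2, "D": 4},
}

def monthly_cost(size: str) -> float:
    """Sum each quadrant's unit count times that quadrant's rate."""
    units = SIZES[size]
    return sum(units[q] * UNIT_RATES[q] for q in UNIT_RATES)

for size in SIZES:
    print(size, monthly_cost(size))
```

The point of keeping the four rates separate is exactly the one made above: when you upgrade only the storage (or only the hosts) in a given fiscal year, only that quadrant’s rate changes, and the trend per quadrant stays visible.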
Apples, Oranges, Lemons, and Grapes = Frutti Results
So you have a unit of measure and a type of system to match the measurement up towards over a period of time. Here’s where the fruit cart and the horse get hooked up.
This is all very complex, why can’t I just buy the same server I have purchased for the last 5 years?
Sorry, kids. They don’t build ’em like they used to. But in today’s market, the UCS system from Cisco has brought new buzz to the original players of IBM, HP, and Dell. How do you sort any of that out among the offerings, and how do you select the right platform for your new ESX system? By the socket! Every system in the x86 family has them, from both Intel and AMD. And now that you have to pay for your hypervisor and additional tools (CapacityIQ, AppSpeed, Nexus 1000V) per socket, it matters more. I need to squeeze the value out of those sockets.
Still staying in the upper half of the quad, let’s measure cores and RAM as a ratio, assuming dual-rank 4 GB DIMMs, and compare some of the standard two-socket servers.
Standard Intel X5450
2 sockets – 4 cores – 16 DIMMs (8 per socket) produces 4 cores / 32 GB RAM per socket
Standard Intel Nehalem X5500
2 sockets – 4 cores – 18 DIMMs (9 per socket) produces 4 cores / 45 GB RAM per socket
Cisco UCS extension on X5500
2 sockets – 4 cores – 48 DIMMs (24 per socket) produces 4 cores / 96 GB RAM per socket
What this shows is that for every license of ESX consumed in the environment, there are different amounts of memory available for a VM to use. The UCS approach allows a much higher memory allowance per VM at the same licensing cost. Sure, you could buy four-way servers and claim that the 256 GB of RAM gives the VM more allowance, but in reality the VM will have the same contention ratios of vCPU to memory within each of the four sockets. You can change the size of the container by moving to a four-way, but it won’t change the value of the ratio of cores to memory for that container.
The idea of CPU contention is becoming more visible to most administrators of virtualized environments because the desire to pack VMs onto a host is so strong. If I can get 10 VMs on a host for $5,000, then getting 25 VMs on the same host lowers my cost per VM. It could also cheat your customers out of the performance they paid for, especially if you have multiple vCPUs assigned to those 25 VMs. This is where the ratio of VMs per host becomes obsolete and vCPUs per core makes more sense.
Using the example containers above, you can generate an expected number of VMs per socket. There is no reason to use a 1:1 ratio of vCPUs to cores, because the point of virtualization is to run more with less. I think a good starting ratio is 4:1 for a production VM and 16:1 for a VDI implementation:
Standard Intel X5450 – (4 / 32 GB SocketRatio) yields 16 VMs with a 1 vCPU / 2 GB RAM configuration per socket
Standard Intel Nehalem X5500 – (4 / 45 GB SocketRatio) yields 16 VMs with a 1 vCPU / 2.8 GB RAM configuration per socket
Cisco UCS – (4 / 96 GB SocketRatio) yields 16 VMs with a 1 vCPU / 6 GB RAM configuration per socket
You can always adjust your actual deployment if these ratios don’t match up for your environment. The expected deployment number helps determine how large the pizza slices are for the team, not how many slices each of them consumes. In these configurations you can see where the RAM density per socket (SocketRatio) of the UCS allows for much larger VM configurations before overcommitment. A nice fit for the new 64-bit installations. These expected numbers of VMs per socket help determine what the burn rate of a C:M:N:D value is for the CapEx spend you made.
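The VMs-per-socket arithmetic above can be sketched in a few lines. The cores and per-socket RAM figures are the ones from the comparison above; the 4:1 vCPU-to-core ratio is the production starting point suggested earlier, and you would substitute your own.

```python
# (cores per socket, GB RAM per socket) for each platform discussed above.
PLATFORMS = {
    "Intel X5450": (4, 32),
    "Intel X5500": (4, 45),
    "Cisco UCS":   (4, 96),
}

VCPU_PER_CORE = 4  # 4:1 production starting ratio; use 16:1 for VDI

def expected_vms(cores: int, ram_gb: int):
    """Return (number of 1-vCPU VMs per socket, GB RAM available to each)."""
    vms = cores * VCPU_PER_CORE
    return vms, ram_gb / vms

for name, (cores, ram) in PLATFORMS.items():
    vms, gb = expected_vms(cores, ram)
    print(f"{name}: {vms} VMs at {gb:.1f} GB each per socket")
```

Note that the VM count is the same for all three platforms (it depends only on cores and the consolidation ratio); what the SocketRatio changes is how much RAM each of those VMs can be given before you overcommit memory.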
To fully understand how much a VM costs, one has to look at what was spent in the CapEx of the host and agree on the measuring stick for the C:M:N:D value of the created VM. If a series of hosts from different families are in service at different points of their lifecycle, there may have to be some averaging. The SocketRatio of cores to RAM is a consistent way to measure systems across form factors and families and level-set the expected allocation of VMs. The expected allocation of VMs for a host helps determine what density ratio is desired for vCPU:vMEM.
This is the end of Part 1. In Part Deux I will take a deeper dive into the Compute and I/O areas and assign a more detailed cost-per-VM model.
I ran into an issue the other day as I was trying to deploy a VM from a template using the vCenter Customization Specification Manager. I was trying to use a custom Sysprep.inf file that would automatically have the newly created VM join my AD domain and be placed in a specific OU.
Now, when I Sysprep’d the VM normally with my custom .inf file, everything worked fine. But when I imported that exact .inf file into vCenter and deployed the VM from a template, the Sysprep failed. So I had some digging to do, and here’s what I found out.
First, vCenter doesn’t actually store Sysprep.inf files. Rather, vCenter stores the configuration parameters in XML and then generates the Sysprep.inf file on the fly during the deployment process. (This part I actually already knew; the next part I didn’t.)
Second, and most importantly, when importing a customized Sysprep.inf file, vCenter does not store each parameter as a separate XML element. So for example, given the following custom text …
MachineObjectOU="OU = MyOU,DC = mydomain,DC = com"
I thought this would be stored in the normal, expected format, like this …
<MachineObjectOU>"OU = MyOU,DC = mydomain,DC = com"</MachineObjectOU>
But it turns out, when importing sysprep.inf files, vCenter stores the parameters as a single XML element with a modified <_type> element like this …
<value> [Identification] JoinDomain=mydomain.com DomainAdmin=administrator DomainAdminPassword=1234 MachineObjectOU="OU = MyOU,DC = mydomain,DC = com"</value>
There are a couple of important points to note here:
- vCenter only stores the XML this way when importing a sysprep.inf file. When using the customization wizard, vCenter generates XML which is formatted the normal way.
- The first element, <_type>, contains the value vim.vm.customization.SysprepText. When using the wizard, the value for this element is vim.vm.customization.Sysprep (without the trailing “Text”).
- When the XML is stored this way, whitespace matters! Notice how whitespace is the delimiter in the <value> element? And notice the spaces in the MachineObjectOU parameter? Removing the spaces did the trick.
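For illustration, here is the before/after for the MachineObjectOU line in the Sysprep.inf (same hypothetical OU and domain as the example above):

```ini
; Fails when imported into vCenter: the spaces become extra
; whitespace-delimited tokens inside the single <value> element.
MachineObjectOU="OU = MyOU,DC = mydomain,DC = com"

; Works: no spaces around the equals signs inside the quoted DN.
MachineObjectOU="OU=MyOU,DC=mydomain,DC=com"
```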
Mr. Dudley Smith has updated his PDF diagram with some minor corrections and additions. Get the latest, most up-to-date version here (click the graphic) …
He also updated “the brain” which can be found at its new home http://webbrain.com/brainpage/brain/89EFA582-2C35-F6A2-9ED1-7AD4810266C2/. Make sure you update your bookmark accordingly.
Chris Everette is a colleague of mine, a Sr. Systems Engineer out of Detroit. He is a very sharp, seasoned, virtualization industry pro. In working with him over the past two years, I’ve noticed that he writes very well. So of course, I’ve been trying to get him to start a blog, or contribute to this one, for quite a while.
My hard work has paid off. It looks like for now he’ll be a guest blogger here on Virtual Insanity. Whether or not he starts his own blog remains to be seen. Please welcome Chris Everette to Virtual Insanity!
During VMworld last week, he sent me his first post, which I’m just now getting a chance to post …
What will your role be in the cloud?
Well, being at my third VMworld gets me thinking. I am wondering about this “cloud thing” just like everyone else who has been in IT for a long time. I sometimes follow other bloggers’ articles, and I like Chris Wolf’s writing. He got me thinking as well. His article, titled “Cloud and the Wal-martification of IT,” struck a chord. If this cloud thing really takes off (and by all indications it has and will continue to gain momentum), companies may scratch their heads and wonder if it makes sense to have their own IT assets, resources, and IT professionals. Especially companies whose core business is not IT. So, what does that mean for IT professionals, and particularly my customers? Are we really going to get all of our computing from several large cloud providers and many smaller ones? Does anyone remember mainframe time sharing?
Do I think that companies will outsource all of their IT to the cloud in the next year? Probably not. Chris Wolf uses a timeframe of 5-10 years. However, will portions be moved to the cloud? I was speaking with one of my customers and he reminded me that he is already “outsourcing” web filtering and spam filtering to two different providers. Many companies have their web presence already “in the cloud”. Software as a Service has had some bumpy starts and stops, but is now a reality for many types of applications. It will be an evolution. Security is still a concern. There will be companies that try something, don’t like it, and pull back, only to move again to a model that provides them more flexibility and reduces costs.
What does this mean for the IT professional? I believe that if you are working for a company that is not an IT company, you will want to manage your company’s migration to the cloud. Get out in front of it. Volunteer to do the research. Use it to further your own knowledge and career. Can you convert your IT department from a cost center to a profit center and be a cloud provider for other similar businesses? If all else fails, take your expertise to a cloud company. The exciting thing is that in the future you can work for a company that services many different types of customers and provides you many interesting job challenges. I believe cloud providers (since their business is IT) will be on the forefront of exciting new technology and will need the best and brightest to operate them. Do an inventory of your skills. Do you understand databases? Great: cloud providers need to manage many databases, if not for their customers then for their own internal systems for billing, monitoring, reporting, etc. Do you understand networking and security? Cloud providers are going to need to guarantee that data cannot bleed from one customer to another. Same for storage. And obviously, also for virtualization.
We call this concept of supporting multiple customers on the same infrastructure multi-tenancy. Since a cloud provider’s model is multiple customers on the same infrastructure (the “cloud”), it had better be secure. Is your expertise enterprise applications, messaging, development? Guess what: cloud providers need application expertise as well so they can meet the business needs of their customers. If you are a developer, you need to be able to write applications that are “cloud aware”: applications that can ask for more resources if they need them, applications that may service more than one customer. Do you understand service management? Do you work with infrastructure services such as backup and recovery, business continuity, or data center design? Do you work for an IT provider already? Your customers of the future may change to include a mix of large “cloud providers”. They exist today and are called many things. They may provide hosting (Infrastructure as a Service), application services (Software as a Service), or multiple platforms including items such as storage (Platform as a Service) and even voice services. I am sure there are some cloud “thinkers” who may challenge my simple definitions of these terms, but you get the idea.
I believe the security concern is the single largest inhibitor to companies running full tilt into cloud computing. However, just like we use VPNs, SSL web services, and other forms of digital encryption of data on the public internet, these security problems will get solved and enforced.
IT is an interesting business. Sometimes things build up until there is a tidal wave and things change rapidly. Sometimes things move more slowly and there is an evolution. However, if you hear the term “cloud” and think it is only a marketing term, you may want to think again. It is already affecting and will affect all of us in IT sooner than we may think.
What do you think?
I’ve been here at VMworld 2009 in San Francisco since Sunday. Monday was Partner Day and marked the unofficial first day of the event. Yesterday, however, was the actual first day, open to all attendees. There is much coverage of the event by numerous bloggers, so I won’t reinvent the wheel and bore you with duplicate content. Instead, here are a few of my favorite things so far (we’ve got two more days). Oh, and this is by no means a complete list. There are a LOT of cool things happening here and I don’t have the time and/or energy to write about all of them.
John Troyer Streaming Live from the Solutions Exchange
First, I often find myself watching John Troyer’s live coverage from the Solutions Exchange. Which is weird because I could literally walk there in about 2 minutes. But when I’m in my room in between meetings, it’s nice to have it on in the background so I can listen in on all the stuff I’m missing. And John has been interviewing some very cool people.
Check it out here … http://www.ustream.tv/channel/vmworld
The vCloud Express is …
The VMware vCloud™ Express service delivers the ability to provision infrastructure on-demand, via credit card, and pay for use by the hour. As a VMware Virtualized ™ service, it ensures compatibility with other VMware environments both internally and with external services.
VMware actually demoed vCloud Express with Terremark, one of the service providers in the program. It was pretty slick to see them simply add some user and credit card information and then spin up a VM quickly and easily on stage.
Now that I’m having serious power problems in my house because of my home lab (hence the reason this blog keeps going up and down), I really think I’ll be using vCloud Express very soon.
VMware’s acquisition of SpringSource was actually announced weeks ago, but this was the first time there was really any lengthy discussion about it. Frankly, the SpringSource acquisition is probably the thing that I am most excited about, and I personally believe it will play a significant role in VMware’s future. There are actually a lot of things I’d like to say about this, but I’ll save them for a later post.
Running VMware View / RDP sessions on your iPhone with the Wyse Pocket Cloud client
Given the fact that I “eat the dog food” and actually run my VMware corporate desktop as a VMware View image, and I am also an iPhone user, I think this is super slick and something I know I will use …
A Shameless Plug
The folks over at Virtual Strategy Magazine have asked me to do a video blog of the event. Our first recording was last night and I would guess they’ll have it posted sometime today. When it’s up on their site, you can find it here … http://www.virtual-strategy.com/VMworld-2009.html
Also, if you’re here at VMworld and undecided about your Thursday schedule, why not come to my session? I’ll be presenting at 10AM in room 135. The topic? How to convert old PCs to thin clients using a Linux OS and the VMware View Open Client. Hope to see you there!