Shmargeback – Get IT!

Charge-back, Show-Back, Shmargeback, call it what you will, but get off your duff and do it.  As I travel around and work with customers on building out their Private/Hybrid cloud strategies, I'm amazed at how few organizations actually have a clue about what it costs to deliver IT services.  Sure, the CFO could look at his budget and say, "Mmmm, yup, you guys cost me X."  OK, but what are you actually delivering for X?

Can you clearly articulate, “I deliver this, this, and that at these service levels that backstop and deliver this revenue for the business”?

Increasingly, IT is being questioned:  "What do you actually provide me?" and "Is there another way I could do it more efficiently (cheaper, in CFO speak)?"  VMware's Paul Maritz likes to point out that, for the first time in IT's recent history, corporate IT now has an external (competitive) rate card against which its services can be compared.  It's easy today for a line of business to go to Amazon, Rackspace, you name it, and simply procure IT services.  Sure, it's fraught with issues and considerations (that the business user won't consider), but at the end of the day it's cheap, easy, and moves at their pace: NOW!

I firmly believe corporate IT still can provide HUGE value to the business, but to maintain relevance it needs to dramatically change.  You MUST be able to articulate what it costs to deliver a given service.  You MUST be able to differentiate your services and the value you provide to the business in terms they understand and care about.  The fundamental basis for doing so is measurement.  So what should you do?

  • Examine yourselves.
  • Procure the tools needed to start metering your services.
  • Deploy those tools
    • It's amusing how often people buy capabilities yet never deploy them, usually because they are too busy or lack the skills in-house.
  • Differentiate your services, for example:
    • Compute and Storage Performance tiers
    • RTO/RPO tiers
  • Start providing your LOBs with metrics of what they are actually consuming and the associated costs for those services (showback, baby!)
    • If you don't do chargeback today, this will start to condition the business to see your services defined in these terms, all the while setting the stage for moving to a chargeback model.
    • This will also give you an internal scorecard by which you can measure yourself against those external providers.  Believe me, if you are not already, your internal customers are.
  • For new services/requests, begin engaging in meaningful business-based conversations about what is actually needed for a given service.  Show the consumer the costs associated with the various tiers of service ("No, DR isn't free; it costs $20/month/VM," for example).  A toy showback statement along these lines is sketched just below.
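
To make the showback idea concrete, here is a toy Python sketch of the kind of monthly statement you might hand each line of business.  The tiers, rates, and consumption numbers are entirely made up for illustration; the point is simply tying measured consumption to a rate card.

    # Toy showback statement -- hypothetical tiers, rates, and usage.
    RATES = {                      # $/VM/month, illustrative only
        "gold":   {"compute": 120, "dr": 20},
        "silver": {"compute": 80,  "dr": 10},
        "bronze": {"compute": 50,  "dr": 0},
    }

    consumption = {                # VMs consumed per line of business
        "Marketing": {"gold": 10, "bronze": 40},
        "Finance":   {"silver": 25},
    }

    for lob, usage in consumption.items():
        total = sum(count * sum(RATES[tier].values())
                    for tier, count in usage.items())
        print(f"{lob}: ${total}/month across {sum(usage.values())} VMs")

Run against real metering data, the same shape of report becomes your internal scorecard against those external rate cards.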

At the end of the day it's economics.  When we as IT service providers can define various levels of service at graduated degrees of cost, the business will decide what they are willing to pay for based upon their requirements.  Furthermore, you will be able to truly measure yourself against external providers and clearly articulate your value-add.

Without it, your days are numbered.

View 3.1 HID Filtering

With the release of View 3.1 we received some more flexibility in presenting/hiding Human Interface Devices (think foot pedals for a transcriptionist, some types of barcode scanners, etc.).

HID devices are filtered out by default, as it would be a bad thing if, for example, your local mouse were redirected to the remote desktop.  So to enable a specific device to be passed through, we need to do a few things:

1:  First we need to determine the VID/PID of the HID device.  There are two ways to determine this:

a:  Debug Logs:  Go to C:\Documents and Settings\All Users\Application Data\VMware\VDM\logs.  Search through the log for "Devices".  This entry will contain information on all the devices available before filtering takes place.

b:  Windows Device Manager:  Open Windows Device Manager and find the HID device you are interested in.  Right-click the object, choose Properties, and go to the Details tab, where you will see a drop-down.  Choose Device Instance ID from the drop-down.  The VID/PID value will be displayed.  Usually this looks something like USB\VID_xxxx&PID_xxxx\…
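
If you would rather script the discovery, here is a small Python sketch (my own addition, not part of the View documentation) that lists the VID/PID pairs Windows has enumerated by reading the standard HKLM\SYSTEM\CurrentControlSet\Enum\USB key:

    import winreg

    # Each subkey under Enum\USB is named for a device's VID/PID pair.
    ENUM_USB = r"SYSTEM\CurrentControlSet\Enum\USB"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, ENUM_USB) as usb_key:
        subkey_count = winreg.QueryInfoKey(usb_key)[0]
        for i in range(subkey_count):
            print(winreg.EnumKey(usb_key, i))   # e.g. VID_xxxx&PID_xxxx

Whichever method you use, note the VID_xxxx&PID_xxxx string for the device you care about.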

2:  Now that we know the VID/PID, we can go to the client and create the appropriate registry keys to tell the View Client to pass that particular HID device through:

a:  Go to HKLM\Software\VMware, Inc.\VMware VDM\USB\

b:  Create a new Multi-String Value named AllowHardwareIDs

c:  Set the value data to the VID_xxxx&PID_xxxx you documented earlier

d:  Restart the client and things should work upon the next connection
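
If you need to make this change on more than a handful of clients, the same registry edit can be scripted.  Below is a minimal Python sketch, assuming Python is available on the client and the script is run with administrative rights; the VID/PID string is a placeholder for the value you documented in step 1.  A .reg file pushed by Group Policy would accomplish the same thing.

    import winreg

    # Key path and value name from steps a-c above.
    KEY_PATH = r"SOFTWARE\VMware, Inc.\VMware VDM\USB"
    DEVICE_ID = "VID_xxxx&PID_xxxx"   # placeholder: use your device's ID

    # Create/open the key under HKLM and write the multi-string allow list
    # that the View Client reads on its next connection.
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "AllowHardwareIDs", 0,
                          winreg.REG_MULTI_SZ, [DEVICE_ID])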

Special Thanks to Pete Barber for this info!

View 3.1 USB Redirection Improvements

As you probably have seen, View 3.1 GA’d yesterday.  One of the improvements listed in the release notes was:

  • USB Improvements – View 3.1 offers more reliable and broader device support with reduced bandwidth consumption. A separate TCP/IP stream is used.

From what I understand after talking to some people, a lot of time was spent on the USB redirection stack to further optimize and tune it.

ALSO, USB redirection traffic is now split out onto its own traffic stream.  USB redirection traffic now flows from the client to the host VM on TCP port 32111.  I imagine this opens up a few new opportunities to do some USB-specific traffic prioritization/throttling.  Very interesting!  In previous versions, the USB traffic was carried inside the RDP stream (as a virtual channel).  This prevented us from ever REALLY seeing the USB-specific traffic or having any control over it.  Simply put, now we do.  Gotta love progress!
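
Because it is now a dedicated port, it is also trivial to sanity-check connectivity from a client before digging into the USB stack itself.  A throwaway Python sketch (the desktop VM hostname below is a placeholder; any port scanner would tell you the same thing):

    import socket

    DESKTOP_VM = "desktop-vm.example.com"   # placeholder: a View desktop VM
    USB_PORT = 32111                        # View 3.1 USB redirection port

    # A successful plain TCP connection means nothing in the path is
    # blocking the new USB redirection stream.
    with socket.create_connection((DESKTOP_VM, USB_PORT), timeout=5):
        print("TCP 32111 reachable -- USB redirection traffic can flow")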

VMware View Open Client

So… that crazy, proprietary company, VMware, today released the first open-source VDI client! (Just a small jab.) :-) It's actually a very exciting event in terms of the possibilities it opens up. Right out of the gate, VMware has announced in the press release a bunch of different partners that are leveraging the View Open Client in their own solutions (ChipPC, Novell, HP, Sun…). http://vmware.com/company/news/releases/view_open_client.html

The new View Open Client includes all the major components needed for someone to take the software, adapt it to their needs and package up a rich, customized solution. This should really assist all the players in the eco-system to reduce their time to market on solutions. I’m hoping this results in some new and innovative ways to deliver virtual desktops!

Another great use case that I hope we soon see more of is a commercially supported (by the vendor and VMware), turn-key solution for turning your fat PC into a dumb, highly managed "thin client". There are some solutions out there today, but I would think that this new View Open Client would allow someone to put together a package that does this easily, with out-of-the-box View integration. The great part is that a solution someone in the ecosystem puts together using the View Open Client can be submitted to VMware for formal certification and support!

If you have some good ideas on how to apply this, have at it: http://code.google.com/p/vmware-view-open-client/

Project Minty Fresh (Desktop)

The fresh flavor that lasts and lasts… that's the goal behind one of my customer's latest desktop projects.  This customer has been working with the View3 pre-release code for some time now.  Using View Composer, we now have the capability to very easily and programmatically refresh a user's desktop back to the original golden master image state.  View Composer supports three primary operations after initial linked clone creation:
1:  Refresh – A Refresh takes a desktop back to the original state of the master.
2:  Recompose – A Recompose takes a linked clone and re-homes it, if you will, to a new parent image (think instant OS updates or software rollouts).
3:  Rebalance – A Rebalance takes all the linked clone VMs in a pool and re-balances them across a set of LUNs.
For the sake of this conversation, we will focus on the Refresh operation.

Problem:
My customer's goal is to maintain the integrity of the corporate desktop image deployed to users.  Over time, their users have a particular habit of destroying their desktops.  So much so that they had to put in place a mandatory, ongoing re-imaging program so that no desktop ever goes more than six months without a re-image.  This policy has had some very positive results in terms of reduced help desk calls and time spent just sustaining a rotting OS.  That said, the effort required to sustain a perpetual, semi-manual re-imaging program is substantial.

Solution?:
Enter VMware View Refresh.  Right now, they have rolled out a program for a set of 50 users to see how well it would work to refresh a user's desktop much more aggressively (every 5 days to start).  This means that after a linked clone is created and the user begins to use the VM, the VM will automatically refresh every 5 days back to its original state (configured in the desktop pool settings…screenshot to come).  The goal is to make this a highly seamless event for the users.  With View Composer's User Identity Disk, C:\Documents and Settings\ is redirected to another, persistent (thin-provisioned) .vmdk that is presented as the D: drive.  This is configured when you create the pool (screenshot to come).

Based upon our initial tests, this is working really well.  We can refresh a user's desktop without them ever knowing, as the next time they log on their profile is completely intact.  Currently we are testing all of their applications to ensure this will work across the board.  I am sure we will find some applications that do not, gasp!, save their preferences in the user's profile (something TS/Citrix admins deal with constantly).  For those applications, our plan is to ThinApp the application and set the User Sandbox to live in the user's profile directory.  We have also found that we need to re-register each VM with the anti-virus console after a refresh operation, which we are now achieving through a post-sync script (a rough sketch of that script is below).
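
For the curious, the post-sync script itself is nothing exotic.  Here is a heavily simplified Python sketch of the idea; the agent path, its arguments, and the console hostname are all hypothetical placeholders, since the actual re-registration command depends entirely on your anti-virus vendor.

    import subprocess

    # Hypothetical example: after a Refresh, ask the AV agent on the clone
    # to re-register itself with the management console. Substitute the
    # command your AV vendor actually documents for this.
    AV_REGISTER_CMD = [r"C:\Program Files\AVVendor\avagent.exe",
                       "/register", "/server", "av-console.example.com"]

    result = subprocess.run(AV_REGISTER_CMD, check=False)
    print("AV re-registration exit code:", result.returncode)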

I’ll be sure to keep everyone posted on our progress and experiences.  It’s certainly something to consider and explore.  Let me know what you think!  Until then, I wish you a very minty fresh desktop experience!  :-)

A New “View” of Virtual Desktop Computing

Today is a very exciting day for those working in and around desktop computing. VMware has released a major new version of its end-to-end virtualized desktop solution, View3 (yes, that's a new name; VDI as a product name has fallen by the wayside). I have had the privilege of working with the product since its early beta days and with some customers who had early beta access. I've been impressed with the amount of customer feedback that was incorporated into the product between beta cycles. It's a real testament to the willingness of VMware's desktop solutions group to listen and to truly mold this into something that customers not only want to use, but are already deploying. SO, congratulations to all those who worked so hard to get this product out the door!

Rather than just reprint the marketing press releases, I thought I would highlight some of the key new features of View3, give a short explanation, and add some initial thoughts. As the (borrowed) graphic below shows, “View3” really is the umbrella name that covers all the components of the total solution. View Manager 3 is the desktop broker that sets up and manages connections between end users and back-end desktop virtual machines. Let’s dig into some of these features.

  • Unified Access – View Manager now brokers connections to physical PCs, terminal servers, and blade PCs in addition to virtual desktops hosted on VI3. This allows you to make the View client or web portal a true one-stop shop for user computing. For example, I have a hospital customer with blade PCs in use for a very specific radiology application. Since users connect to the blade PCs over RDP, their connections can now be seamlessly brokered through the same interface as their virtual desktops. There is also an interesting application here for MS Terminal Servers, as View can now not only broker connections to Terminal Servers but also easily add a load-balancing mechanism.

Below: A screenshot of the various types of desktop connections you can create for brokering:

  • Virtual Printing – Provides end users the ability to print to any local or network printer. Virtual Printing includes a universal print driver, compression for print jobs, and auto-detection of local printers from the View Client. Printing has always been a thorn in the desktop administrator's side. The issue is magnified when we are talking about hundreds or thousands of virtual desktops. How do we ensure that the driver the user needs for their local printer will be available on the desktop they land on? Either I have to do that work ahead of time and pin a user to a desktop (not very flexible and a bunch of work), or I have to install all the possible drivers across all the desktops in the pool (scary!). VMware did a great thing here, in my opinion: they partnered with ThinPrint to license the best-of-breed solution on the market (again, my opinion :-) ). The universal printer driver is installed with the View agent on the virtual desktop side and with the View client on the client side, so there's no extra work for the administrator. It's just there and it works! Oh, and it works VERY well! The universal print driver is smart enough to pick up many of the unique features of the user's printer, supporting all the bells and whistles your users require. The last key feature of Virtual Printing is the incredible print job compression it provides. The universal driver does adaptive compression of the print job on the VM side for a much lower impact on the network. This is very important for those deploying virtual desktops to remote locations or even home users. That said, ThinPrint still provides some fantastic add-ons to this technology. It's worth checking out their website for a full comparison of what they can do in addition to the technology VMware licensed from them! http://dotprint.thinprint.com/euen/Features/tabid/93/language/en-US/Default.aspx
  • Enhanced User Experience – Extends MMR (multi-media redirection) to all Win XP and Win XPe based clients. Provides increased support for critical codecs: MPEG1, MPEG2, MPEG4 Part 2, WMV 7/8/9, WMA, AC3, MP3. Provides granular policies for USB redirection. What can I say about multimedia over RDP? Well, it usually sucks. With MMR, the world becomes a much brighter place for the modern desktop user trying to work over RDP. MMR makes playback of all the codecs above extremely usable over RDP. I've even tested this over a WAN connection with some fairly high latency numbers. The content just took a little bit longer to queue up, but then the playback was seamless. A key change here is that MMR is now available to all WinXP and WinXPe fat and thin clients. Before, it was limited to WinXP devices and Wyse XP/XPe thin clients. In regards to USB redirection, it works great! View3 adds the ability to enable/disable USB redirection at the pool or even the desktop level.
  • Offline Desktop (Experimental) – Provides the flexibility to intelligently and securely move virtual desktops between the datacenter and local resources. Users can check out their virtual desktops onto physical clients, use the virtual desktop locally, and then check it back in. Offline Desktop is one of those new, game-changing features everyone has been asking about for years. There will always be a segment of your user population that needs to be able to work in a mobile, disconnected fashion. Offline Desktop solves some problems for this user segment. With View3, the administrator can configure a desktop for a user and then the user can "check out" their desktop. The desktop is block-level streamed down to the endpoint, and the encapsulated desktop can then be run locally, without a network connection. Obviously, only applications that reside within the VM and local data will be accessible. But still, a user could be very productive offline. The beauty is that the next time the View client is signed into and can connect back to corporate, it will allow a block-level sync of all changes back to the corporate datacenter. And what happens if your user loses their laptop or it is stolen? Not to fear, strong encryption is always applied. The VM can "self-destruct/mothball" itself after x days of not checking into the View Manager (the administrator can configure this), or it can even be remotely disabled if it's still accessible.
  • Fully Internationalized product
  • View Composer is a new product fully integrated with View Manager 3.  View Composer provides significant benefits to VDI solutions, including:

    • View Composer uses VMware Linked Clone technology to rapidly create desktop images that share virtual disks with a master image to conserve disk space and streamline management.
    • User data and settings are separated from the desktop image, so they can be administered independently.
    • All desktops that are linked to a master image can be patched or updated simply by updating the master image, without affecting users' settings or data.
    • This reduces storage needs and costs by up to 70% while simplifying desktop management.

View Composer is what I consider to be one of the most exciting new features of this release (even though it's really a separate product). The storage cost associated with deploying virtual desktops has been, up to now, one of the largest barriers to adoption. Many organizations I deal with loved VDI and what it represented in terms of data security and lowered management costs, but they just couldn't get over putting all their desktop storage on expensive, SAN-based storage. That said, there have been a large number of customers who have moved forward with VDI because of all its great benefits. Many have leveraged features of their storage arrays, like thin provisioning, writable snapshots, or even single instancing, to significantly cut the storage costs. View Composer solves this problem for the rest of the world, as it allows you to significantly reduce the amount of storage used by employing linked clones. Composer allows you to identify a "gold image" from which your desktop pool will be created. You then tell Composer which LUNs to store the VMs on, and then the fun begins. Composer creates a replica on each of the LUNs you provided, and from there the small linked clones are built. The provisioning is extremely fast and, as you can imagine, highly space-efficient. For a more detailed look at the guts, take a look at Rod Haywood's excellent examination of the process: http://rodos.haywood.org/2008/12/storage-analysis-of-vmware-view.html

Composer isn't just a storage savings tool. It's also a game changer for desktop management. Now that you have all these linked clones for your desktop pool, you have the option to manage the lifecycle of these desktops from the image. That's in contrast to how things normally work, where once a desktop is created you have to continually patch it and upgrade it to maintain it (applications, Windows updates, virus updates, and security updates). With linked clones, we can now simply update the image at the top of the tree and re-home all the downstream desktops to the new version of the image. This is called a "Recompose" operation. Think about the ramifications of that! You could roll out a new application to thousands of users with a few clicks, with a high degree of certainty, simply by recomposing your users to a new version of the master image. Good stuff!! With the addition of the User Data Drive option, which employs Windows profile folder redirection, you can ensure that your users' personal settings persist even after refreshing their desktop or even moving them to a completely new version of their desktop. Heck, you can even schedule a refresh of your users' desktops every x days through the "Refresh" function to ensure that they never experience "Windows Rot". I could go on and on. I plan to do a follow-up post just on Composer, but I hope this gets your creative juices flowing in terms of the possibilities here!

There was a lot to cover here, but I think I covered most of the salient points. I hope you found it useful! I would encourage you to read more about it, play with it and try it out!

Here are some key links for the product:

Product Landing Page:   http://www.vmware.com/products/view/

Release Notes:              http://www.vmware.com/support/viewmanager/doc/releasenotes_viewmanager3.html

Documentation Page: http://www.vmware.com/support/pubs/view_pubs.html

Download Trial Link: https://www.vmware.com/tryvmware/?p=view&lp=1

Bring on the 10Gig Ethernet!

VMware recently updated its networking performance tests to see if the ESX hypervisor could efficiently leverage the ever-expanding bandwidth available at the Ethernet level. In short, it sure can! A single VM can effectively saturate a 10Gbps link when jumbo frames are enabled. But that's not to say it can't perform well with multiple virtual machines: things scaled nicely and equitably across all VMs. This type of scalable performance is reassuring as customers continue to raise consolidation ratios within their datacenters and virtualize the largest of workloads.

To save you some reading, here is the summary from the whitepaper, which can be found at: http://www.vmware.com/pdf/10GigE_performance.pdf

Conclusion: The results presented in the previous sections show that virtual machines running on ESX 3.5 Update 1 can efficiently share and saturate 10Gbps Ethernet links. A single uniprocessor virtual machine can push as much as 8Gbps of traffic with frames that use the standard MTU size and can saturate a 10Gbps link when using jumbo frames. Jumbo frames can also boost receive throughput by up to 40 percent, allowing a single virtual machine to receive traffic at rates up to 5.7Gbps.

Our detailed scaling tests show that ESX scales very well with increasing load on the system and fairly allocates bandwidth to all the booted virtual machines. Two virtual machines can easily saturate a 10Gbps link (the practical limit is 9.3Gbps for packets that use the standard MTU size because of protocol overheads), and the throughput remains constant as we add more virtual machines. Scaling on the receive path is similar, with throughput increasing linearly until we achieve line rate and then gracefully decreasing as system load and resource contention increase.

Thus, ESX 3.5 Update 1 supports the latest generation of 10Gbps NICs with minimal overheads and allows high virtual machine consolidation ratios while being fair to all virtual machines sharing the NICs and maintaining 10Gbps line rates.
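
As an aside, the 9.3Gbps "practical limit" for standard-MTU traffic quoted above is roughly what you would expect once framing and TCP/IP headers take their cut. A quick back-of-the-envelope in Python, using my own rough overhead figures rather than anything from the paper:

    # Approximate goodput ceiling on a 10Gbps link at the standard MTU.
    MTU = 1500            # bytes of IP payload per Ethernet frame
    TCP_IP_HEADERS = 40   # IPv4 + TCP headers, no options
    ETH_OVERHEAD = 38     # preamble + Ethernet header + FCS + inter-frame gap

    goodput = 10 * (MTU - TCP_IP_HEADERS) / (MTU + ETH_OVERHEAD)
    print(f"~{goodput:.2f} Gbps of application data, before any other overhead")

That lands around 9.5Gbps in theory, so a measured 9.3Gbps is about as close to line rate as real hardware and software get.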

VMware Compliance Center

Twice this week I have had customers contact me about how virtualization impacts their compliance with xyz (fill in your favorite regulation or bureaucratic oversight committee). In my effort to assist these customers, I was pleasantly surprised to find that VMware has launched its new Compliance Center portal on the VMware.com website. http://vmware.com/technology/security/compliance/

There is a massive amount of valuable material on this site, including whitepapers, webinars, and reference links, to assist with many different types of compliance questions. Initially there appears to be a focus on HIPAA (healthcare) and PCI (credit card) related info. That is fine by me, as those two topics are probably the largest areas of concern I have run into. I've been told there is much more coming, so stay tuned!

If for some reason, you still need more help, I would encourage you to contact your friendly local VMware partner or sales team. There are numerous additional resources they can bring to the table to help. Good luck and happy complying!

Welcome To Myself

Hello all,

My name is Rick Westrate. Aaron Sweemer has been gracious enough to invite me to contribute some (hopefully) insightful content to his snazzy new site. Now what in the world would qualify me to comment on the world of virtualization, you ask? Let's set the record straight: I certainly do not claim to be the authority on the subject. However, I do believe my 11 years of experience in the enterprise IT industry and my position with my employer, VMware, provide me with an occasional unique perspective on the virtualization industry. Before joining VMware, I worked as a consultant focused on a wide array of datacenter technologies, ranging from VMware virtualization to large Citrix PS implementations (whoops, it's called XenApp these days) to storage array implementations (primarily EMC). These days, I work as a Systems Engineer in West Michigan, focused on large Enterprise Accounts. I travel around spending time with customers, listening to their problems, concerns, and needs. I then work with them on walking through and understanding the many game-changing solutions VMware provides. It's certainly an exciting time to be in the virtualization industry. The pace of innovation and change is amazing. SO, hello to you all! Hang on tight and stay tuned. I will be publishing some additional content soon. I look forward to interacting with everyone and hearing what you have to say!