Running Zimbra Desktop in a Web Browser

I came across this tip from a colleague today and wanted to share it with everyone. You can run Zimbra Desktop in your default web browser. My default browser is Chrome, and I have found running Zimbra Desktop in Chrome to be very responsive.

 

The first step is to open the native Zimbra Desktop Client and click Setup in the upper-right corner.

zd_1

 

This will open the setup screen for the Zimbra Desktop Client. In the bottom right you will see an option to open in a web browser; click this link.

 

zd_2

 

This will open Zimbra Desktop in your default web browser; once it opens, click Launch Desktop.

 

zd_3

 

And there you go, the Zimbra Desktop Client running in a web browser, in this case Chrome.

zd_4

 

A couple of notes:

  • The native Zimbra Desktop Client must remain open, so I just minimize it.
  • You have to click “open in web browser” again each time you close your web browser or the native Zimbra Desktop Client.

VMware Clarifies Support for Microsoft Clustering

VMware published KB Article 1037959 ( http://kb.vmware.com/kb/1037959 ) on April 18, 2011 in an effort to clarify VMware’s position on running Microsoft Clustering technologies on vSphere. Below is a snapshot of the support matrix published by VMware in the KB (always refer to KB 1037959 for the most current information).

vmw_mscs_graph

 

For those familiar with VMware’s previous position on Microsoft Clustering, you will notice a couple of changes. First, VMware has made a distinction among Microsoft Clustering technologies by segmenting them into Shared Disk and Non-shared Disk.

  • Shared Disk – a solution in which the data resides on the same disks and the VMs share those disks (think MSCS)
  • Non-shared Disk – a solution in which the data resides on different disks and a replication technology keeps the data in sync (think Exchange 2007 CCR / 2010 DAG)

Next, VMware has extended support for Microsoft Clustering to include In-Guest iSCSI for MSCS.

For those interested in leveraging Microsoft SQL Mirroring, the KB states that VMware does not consider Microsoft SQL Mirroring a clustering solution and fully supports it on vSphere.

Under the Disk Configurations section, the KB explains that the virtual disks used as shared storage for clustered virtual machines must reside on VMFS datastores and must be created using the eagerzeroedthick option. The KB provides detail on how to create eagerzeroedthick disks on both ESX and ESXi via the command line or the GUI. Additional information regarding eagerzeroedthick can be found in KB article 1011170 (http://kb.vmware.com/kb/1011170). Something to note in KB 1011170: at the bottom of the article it states that using the vmkfstools -k command you can convert a preallocated virtual disk to eagerzeroedthick and maintain any existing data. Note that the VM must be powered off for this operation.
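
For reference, here is a rough sketch of the command-line route, wrapped in Python so it can be scripted across multiple disks. The paths and sizes are hypothetical, and the vmkfstools flags reflect my reading of the KBs, so verify them against KB 1011170 before relying on this.

  # Rough sketch only: wraps the vmkfstools commands described in KB 1011170.
  # Paths and sizes are hypothetical; run from the ESX/ESXi shell and verify
  # the flags against the KB before using this anywhere that matters.
  import subprocess

  def create_eagerzeroedthick(vmdk_path, size):
      # Create a new eagerzeroedthick virtual disk of the given size
      subprocess.check_call(
          ["vmkfstools", "-c", size, "-d", "eagerzeroedthick", vmdk_path])

  def convert_to_eagerzeroedthick(vmdk_path):
      # Convert an existing preallocated disk to eagerzeroedthick in place;
      # per the KB, the owning VM must be powered off first
      subprocess.check_call(["vmkfstools", "-k", vmdk_path])

  if __name__ == "__main__":
      create_eagerzeroedthick("/vmfs/volumes/datastore1/sql01/quorum.vmdk", "1G")
      convert_to_eagerzeroedthick("/vmfs/volumes/datastore1/sql01/data.vmdk")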

In closing, the VMware support statement exists to explicitly define what VMware will and will not support. It is very important to remember that these support statements do not make any determination (either directly or indirectly) about what the software ISV (Independent Software Vendor) will and will not support. So be sure to review the official support statements from your ISV and carefully choose a configuration that makes sense for your organization and will be supported by each vendor.

Limiting Processors and Memory To Windows For POCs

As customers transition from phase 1 into phase 2 of their virtualization journey, they begin virtualizing business-critical applications. As they move into this phase, they often perform a POC to understand how their application performs on a physical versus a virtual platform. Customers often ask for guidance on conducting a POC, and we talk to them about the importance of an apples-to-apples analysis. What I mean by this is making sure the physical server and the virtual machine are configured identically (or as close to identically as possible). One area where we often find differences is the number of processors a physical server has versus the number of virtual CPUs (vCPUs) you can assign to a virtual machine. Using the Microsoft System Configuration utility, we can bring these two into alignment.

In our example, we will look at how to take a server that has 8 processors and 32 GB of RAM and configure it so Windows only accesses 4 processors and 16 GB of RAM.

Below is a screenshot of the System Properties screen, which is accessible by clicking START > right-click COMPUTER > Properties.

From this screenshot we can see the system has 8 processors and 32 GB of RAM.

cores_ram_1

Windows Task Manager, accessed by right-clicking the taskbar, displays the same information.

cores_ram_2

Since we have determined that the baseline for this analysis will be 4 processors and 16 GB of RAM, we will move on to configuring the server using the System Configuration utility.

First, click START > RUN and type msconfig.

step_1

This will open the System Configuration dialog box, and in this box click the Boot tab. On the Boot tab, click the Advanced options… button.

step_2

On the BOOT Advanced Options dialog box, check the box next to Number of processors and then use the drop-down to select the number of processors you want Windows to be able to access. In this case we selected 4.

Next, check the box next to Maximum memory and enter the amount of memory you want Windows to access. In this case we entered 17,408 (17*1024), a little above our 16 GB target, which leaves the OS with 16 GB of usable memory (as the screenshots below show).

step_3

Once satisfied with the configuration, click OK to close the BOOT Advanced Options dialog box, then click OK to close the System Configuration dialog box, and then click Restart to apply the configuration changes you just made.

step_4
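
If you need to apply the same limits to several POC guests, the two settings can also be scripted from an elevated prompt with bcdedit. Below is a minimal sketch; to the best of my knowledge the numproc and truncatememory boot options accomplish the same thing as the two msconfig checkboxes, but please confirm the behavior on a test system before touching anything important.

  # Minimal sketch: run from an elevated Python prompt on the Windows system.
  # Assumes the numproc and truncatememory boot options behave as described in
  # the bcdedit documentation; confirm on a test system first.
  import subprocess

  PROCESSORS = 4
  MAX_MEMORY_BYTES = 17 * 1024 ** 3   # 17,408 MB, matching the example above

  def limit_boot_resources(processors, max_memory_bytes):
      # Equivalent of checking "Number of processors" in msconfig
      subprocess.check_call(
          ["bcdedit", "/set", "{current}", "numproc", str(processors)])
      # Equivalent of checking "Maximum memory" (truncatememory takes bytes)
      subprocess.check_call(
          ["bcdedit", "/set", "{current}", "truncatememory", str(max_memory_bytes)])

  def remove_limits():
      # Equivalent of unchecking both boxes in the BOOT Advanced Options dialog
      subprocess.check_call(["bcdedit", "/deletevalue", "{current}", "numproc"])
      subprocess.check_call(["bcdedit", "/deletevalue", "{current}", "truncatememory"])

  if __name__ == "__main__":
      limit_boot_resources(PROCESSORS, MAX_MEMORY_BYTES)
      # Reboot for the change to take effect, just like the msconfig route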

After the system restarts, log in and open the System Properties screen by clicking START > right-click COMPUTER > Properties.

As you can see from the screenshot below, the system now shows 4 processors and 16 GB of usable RAM.

step_5

Windows Task Manager, accessed by right-clicking the taskbar, displays the same information.

step_6

To remove these settings, open the System Configuration dialog box by clicking START > RUN and typing msconfig.

Next, on the Boot tab, click the Advanced options… button.

Then uncheck the boxes next to Number of processors and Maximum memory.

step_7

Click OK to close the BOOT Advanced Options and System Configuration dialog boxes, and then click Restart so your changes are applied. When the system reboots, it will return to its original configuration.

Host Affinity

In vSphere 4.1, VMware introduced a new feature called Host Affinity. Host Affinity allows for the creation of a “sub-cluster” within a VMware Cluster. This feature lets a vSphere Administrator create a relationship between virtual machines and the ESX hosts on which they reside. The vSphere Administrator can configure rules that either keep virtual machines running inside an ESX Host grouping or force those virtual machines to run outside the grouping. If virtual machines are not specifically identified to run inside or outside an ESX Host grouping, they can drift in and out of it.

Why should one consider Host Affinity? One use case is ISV (Independent Software Vendor) licensing requirements. If an ISV requires that an organization license every ESX host the virtual machine can possibly run on inside the cluster, the organization can use Host Affinity configured with the “must run on hosts in this group” rule to limit the ESX hosts on which the virtual machine can reside. Organizations should check with their ISVs to ensure licensing compliance.

Another reason for using Host Affinity is increased availability. Host Affinity, in combination with VMware HA, allows for a higher level of availability. For example, a vSphere Administrator can configure an ESX Host DRS group containing ESX hosts that reside in different physical racks. The vSphere Administrator can then configure virtual machine anti-affinity rules to ensure the VMs do not run on the same ESX host. The net result of this design is two virtual machines that will not reside on the same physical ESX host and that run on hosts in different physical racks. The virtual machines are protected against a hardware failure at the server level (meaning a physical server outage will not take down both virtual machines), and they are also protected against a physical failure within the rack. Extending this example to a blade environment, use Host Affinity to create an ESX Host DRS group that has ESX hosts in different enclosures.

In the example below, we will create two (2) virtual machine DRS groups, one for the SQL virtual machines and one for the Exchange Mailbox server virtual machines. We will create one ESX Host DRS group consisting of two (2) ESX hosts. We will then configure VM-to-Host affinity rules to keep the SQL virtual machines inside the ESX Host DRS group and the Exchange Mailbox virtual machines outside of it. The reason for this is to ensure we are meeting our SQL licensing requirements while keeping the Exchange Mailbox servers off of those same hosts. All other virtual machines will be able to run on any of the ESX hosts in the cluster. Note, the lab I am working with has four (4) ESX hosts, so the example below is for illustrative purposes and not necessarily what one would do in a production environment.

In the diagram below, please note the following:

  • CLUSTER02 has four (4) ESX Hosts
  • SQL2K8_01 is running on ESX06
  • SQL2K8_02 is running on ESX05
  • EX_2010_mbx01 is running on ESX07
  • EX_2010_mbx02 is running on ESX08

Screen_1

Next, create the virtual machine and ESX host DRS groups.

  • Right click the Cluster, and select Edit Settings… to bring up the properties dialog box.

Screen_2

  • Click on DRS Group Manager, then click on the Add… button inside the Virtual Machine DRS Group section to bring up the Virtual Machine DRS Group dialog box

Screen_3

  • Once the Virtual Machine DRS Group dialog box appears, give the DRS group a name (SQL VM DRS Group), then locate the virtual machines (in this case the SQL2K8 virtual machines), move them from “Virtual machines not in this DRS group” on the left to “Virtual machines in this DRS group” on the right using the >> button, and click OK to save the DRS group configuration.

Screen_4

  • Repeat the same steps to create the Exchange MBX VM DRS Group.
  • Next, create the ESX Host DRS group – click the Add… button under Host DRS Groups to bring up the Host DRS Groups dialog box.

Screen_5

  • Once the ESX Host DRS Group dialog box appears, give the DRS group a name (SQL ESX Host DRS Group), then locate the ESX hosts that will be part of this group (in this case ESX07 and ESX08) and move them from the “Hosts not in this DRS group” list on the left to “Hosts in this DRS group” on the right using the >> button. Click OK to save the DRS group.

Screen_6

  • On the Cluster Settings dialog box, click Rules and then click Add… to open the Rule dialog box
  • In the Rule dialog box, under Name, give the rule a name (SQL VM-Host Affinity)
  • Under Type, select Virtual Machines to Hosts
  • Under Cluster Vm Group, select the appropriate group created earlier (SQL VM DRS Group)
  • Next, select the affinity type and level: Must run on hosts in group
  • Under Cluster Host Group, select the ESX Host DRS group created earlier (SQL ESX Host DRS Group)
  • Click OK to save the configuration and close the Rule dialog box

Screen_7

  • Click Add… again; this time we are going to add the Exchange Mailbox server anti-affinity rule.
  • On the Rule dialog box, under Name, give the rule a name (Exchange_MBX VM-Host Anti-Affinity)
  • Under Type, select Virtual Machines to Hosts
  • Under Cluster Vm Group, select the previously configured Exchange MBX VM DRS Group
  • Next, select Must Not run on hosts in group
  • Under Cluster Host Group, select SQL ESX Host DRS Group
  • Click OK to save the settings.

Screen_8

We have configured virtual machine to ESX Host Affinity rules to keep our SQL virtual machines contained within our pre-defined sub-cluster and we have configured our Exchange mailbox servers so they will not run inside this sub-cluster. Next, we will create a virtual machine to virtual machine rule. This rule will be used to keep the Exchange mailbox servers on different hosts.

  • On the Cluster settings page, click Add… to bring up the Rule dialog box once again
  • Under Name, give the rule a name (Exchange_MBX_VMs Anti-Affinity)
  • Under Type, select Separate Virtual Machines
  • Click Add… to bring up the Virtual Machine dialog box, select the appropriate virtual machines (in this case EX_2010_mbx01 and EX_2010_mbx02), and click OK to close the Virtual Machine dialog box

Screen_9

  • Click OK to close the Cluster Settings dialog box. Once this is done, vCenter will apply the changes and (because DRS is set to Fully automated) begin migrating virtual machines in adherence to the new rules.

Screen_10
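
For anyone who would rather script this configuration than click through the dialogs, the same groups and rules can be created through the vSphere API. Below is a rough sketch using the Python bindings (pyVmomi); the vCenter address and credentials are placeholders, the VM and host names come from the lab example above, and I have not run this exact script, so treat it as a starting point rather than a finished tool.

  # Rough sketch using pyVmomi; connection details are placeholders and the
  # object names come from the lab example above.
  import ssl
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  def find_objects(content, vimtype, names):
      # Return managed objects of the given type whose names are in `names`
      view = content.viewManager.CreateContainerView(
          content.rootFolder, [vimtype], True)
      objs = [o for o in view.view if o.name in names]
      view.DestroyView()
      return objs

  si = SmartConnect(host="vcenter.example.local", user="administrator",
                    pwd="password", sslContext=ssl._create_unverified_context())
  content = si.RetrieveContent()

  cluster = find_objects(content, vim.ClusterComputeResource, ["CLUSTER02"])[0]
  sql_vms = find_objects(content, vim.VirtualMachine, ["SQL2K8_01", "SQL2K8_02"])
  mbx_vms = find_objects(content, vim.VirtualMachine,
                         ["EX_2010_mbx01", "EX_2010_mbx02"])
  sql_hosts = find_objects(content, vim.HostSystem, ["ESX07", "ESX08"])

  spec = vim.cluster.ConfigSpecEx()

  # DRS groups: two VM groups and one host group, mirroring the GUI steps
  spec.groupSpec = [
      vim.cluster.GroupSpec(operation="add", info=vim.cluster.VmGroup(
          name="SQL VM DRS Group", vm=sql_vms)),
      vim.cluster.GroupSpec(operation="add", info=vim.cluster.VmGroup(
          name="Exchange MBX VM DRS Group", vm=mbx_vms)),
      vim.cluster.GroupSpec(operation="add", info=vim.cluster.HostGroup(
          name="SQL ESX Host DRS Group", host=sql_hosts)),
  ]

  # Rules: SQL VMs must run on the host group, Exchange MBX VMs must not,
  # and the two Exchange MBX VMs must stay on separate hosts
  spec.rulesSpec = [
      vim.cluster.RuleSpec(operation="add", info=vim.cluster.VmHostRuleInfo(
          name="SQL VM-Host Affinity", enabled=True, mandatory=True,
          vmGroupName="SQL VM DRS Group",
          affineHostGroupName="SQL ESX Host DRS Group")),
      vim.cluster.RuleSpec(operation="add", info=vim.cluster.VmHostRuleInfo(
          name="Exchange_MBX VM-Host Anti-Affinity", enabled=True, mandatory=True,
          vmGroupName="Exchange MBX VM DRS Group",
          antiAffineHostGroupName="SQL ESX Host DRS Group")),
      vim.cluster.RuleSpec(operation="add", info=vim.cluster.AntiAffinityRuleSpec(
          name="Exchange_MBX_VMs Anti-Affinity", enabled=True, vm=mbx_vms)),
  ]

  task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
  # Wait for the task to complete, then Disconnect(si)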

After a minute, we see that vCenter Server has migrated virtual machines in accordance with our virtual machine to host affinity rules as well as our virtual machine to virtual machine anti-affinity rule.

  • SQL2K8_01 migrated from ESX06 to ESX07 (adhering to VM-ESX affinity rule)
  • SQL2K8_02 migrated from ESX05 to ESX08 (adhering to the VM-ESX affinity rule)
  • EX_2010_mbx01 migrated from ESX07 to ESX05 (adhering to VM-ESX Host anti-affinity rule)
  • EX_2010_mbx02 migrated from ESX08 to ESX06 (adhering to VM-ESX Host anti-affinity rule)
  • EX_2010_mbx01/EX_2010_mbx02 are not on the same host (adhering to VM-VM anti-affinity rule)

Screen_11
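
If you prefer to confirm placement from a script rather than from the Hosts tab, a few more lines of pyVmomi (again, the connection details are placeholders) will print the host each of these virtual machines is currently running on.

  # Quick placement check; connection details are placeholders.
  import ssl
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  si = SmartConnect(host="vcenter.example.local", user="administrator",
                    pwd="password", sslContext=ssl._create_unverified_context())
  content = si.RetrieveContent()

  view = content.viewManager.CreateContainerView(
      content.rootFolder, [vim.VirtualMachine], True)
  for vm in view.view:
      if vm.name in ("SQL2K8_01", "SQL2K8_02", "EX_2010_mbx01", "EX_2010_mbx02"):
          # runtime.host is the ESX host the VM is currently running on
          print("%s -> %s" % (vm.name, vm.runtime.host.name))
  view.DestroyView()
  Disconnect(si)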

To prove strict adherence, we will attempt to migrate the SQL2K8_01 VM from ESX07 to ESX05. Remember, we configured the SQL2K8 servers to run on ESX07 and ESX08, not ESX05 or ESX06.

screen_12

We can see from the error message above that we are not able to move the virtual machine to an ESX host that is not part of the ESX Host DRS Group.

For more information on Host Affinity, consult the VMware vSphere 4.1 Resource Management Guide.

Give Me the Whole Core and Nothing but the Whole Core

Greetings Virtual Insanity readers!

My name is Jeff and I am a Sr. Systems Engineer / Solutions Specialist at VMware. I specialize in virtualizing messaging systems (Microsoft Exchange, Zimbra) and databases (Microsoft SQL), as well as ESX performance and security. My posts are my thoughts at the time I write them and do not reflect those of my employer. I reserve the right to change my opinion as technology evolves and I learn about new ways of getting IT done. And please try any settings described in our posts in a test environment to see how they will impact your production environment.

I recently had a conversation with a colleague of mine regarding hyper-threading which led to this post.

Here is the question: VMware’s Performance Best Practices for vSphere whitepaper recommends enabling hyper-threading in the BIOS (see page 15), but the application vendor recommends disabling hyper-threading. Can I enable hyper-threading for the ESX host while disabling it for a specific virtual machine? Yes.

The conversation centered around Microsoft’s recommendation to disable hyper-threading for production Exchange installations and configuring a virtual machine to accommodate this recommendation.

From TechNet: “Hyper-threading causes capacity planning and monitoring challenges, and as a result, the expected gain in CPU overhead is likely not justified. Hyper-threading should be disabled by default for production Exchange servers and only enabled if absolutely necessary as a temporary measure to increase CPU capacity until additional hardware can be obtained.”

So, how do I go about disabling hyper-threading for a specific virtual machine while leaving the option enabled in the BIOS of the ESX host?
