If you have HP BL460 G7’s with the on-board 10Gb CNA, you’re going to want to read this post regarding a problem with the latest firmware.

I first noticed this issue while updating firmware to troubleshoot a problem where the storage doesn’t come back up after rebooting an upstream Nexus switch.

The symptoms are: the NIC comes back up, and the vfc is up, but all storage paths on that side of the fabric are still dead in ESXi 5.0.  To fix this, the vfc or port channel must be shut / no shut.
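For reference, the workaround looks roughly like this from the Nexus side (vfc130 is a made-up interface number; substitute the vfc or port channel bound to the affected host):

```
switch# configure terminal
switch(config)# interface vfc130
switch(config-if)# shutdown
switch(config-if)# no shutdown
```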

I also saw an issue where the storage paths were dead, and the NIC never came back up.  A reset of the Ethernet port will not fix this.  A reboot of the ESXi host is required.  Pay attention to the NIC state if you lose storage paths in this configuration with FCoE.

As part of my troubleshooting, I went to update the firmware on the CNA.  The latest version of the firmware from HP is 4.0.360.15a.  When updating using the Emulex utility, on about 20% of my blades, I got a CRC error during the upgrade process.  Below is a screenshot of this error.

After retrying the firmware update, as stated in the utility, the same error occurred.
This is where you need to pay attention!!

During the POST process, the blade WILL report the correct firmware.

Since the firmware version is correct, one might assume the update was indeed successful.  That’s a bad assumption.  Upon further testing, we found the blades that failed the firmware update were the ones failing during the switch reloads.

There were only 2 blades that did NOT fail the firmware update, but still failed the switch reload process.  They were replaced, and now I have no blades failing to reacquire storage paths after an upstream switch failure.

I must point out that HP has been unusually proactive with this issue, which is a nice change!  I still have several blades in another datacenter that are not taking the firmware update.  When I set out to have those all replaced, HP got some of their top people on it and scheduled a call.  I tested their proposed fix this morning, which didn’t work.

They are actively working on a fix, so you won’t have to replace your blades.  I will update this post as soon as I get word back from them on that fix.  Meanwhile, if you’ve seen this, you might want to schedule some switch reloads during a maintenance window to make sure you are good to go.

Update 2/6:

As of today, there is no fix that I’m aware of. . . HP replaced the remaining blades after we tried a couple more proposed fixes.  If I get word of a fix, I will post it here.

2012 ushers in some great new changes from the field technical team at VMware.  I am merging the Ohio Valley Newsletter with the Wisconsin-based field newsletter (aka vNews) in an effort to make it more all-encompassing.  This content is designed to inform our customers of important updates from VMware from a technical perspective.  It also highlights some great public blog posts that might have snuck by you while you weren’t looking.  We will be moving away from the older legacy PDF-based version of the newsletter to a modernized delivery method, “SlideRocket”.  Here is the link to the first edition!

Please make sure you subscribe to the newsletter if you wish to receive it in your inbox each month.  As always, feedback is welcome and will help shape the content for future issues of the vNews!  Special thanks to Ben Sier, Vitaly Tsipris, and Jeff Whitman for their contributions and drive to pull this off.  Let us know your thoughts!

-Scott

Introduction

Virtualizing and running Java workloads on vSphere is absolutely a reality, but when I talk to customers I emphasize the same best practices as for virtualizing other Tier 1 workloads.  The rules are not the same as basic consolidation and containment, and you need to understand, plan, and architect your virtualization platform if you want to be successful.

I spend much of my time working with customer infrastructure engineers and architects, and when topics of Java come up, the conversation takes a turn.  The infrastructure teams typically don’t want to get into the application stack, and I can’t say that I blame them.  Java and programming are a completely different skillset, and the infrastructure engineers already have enough full-time jobs keeping the datacenter running.  The purpose of this blog post is to help shed some light on a new technology in vSphere 5 called “Elastic Memory for Java,” or EM4J, and hopefully cover some other simple Java best practices and information as well.  The goal is to help you bring up an EM4J configuration of your own so you can begin to see the value and test your own JVM configurations.  I am also writing this to help educate infrastructure engineers and explain why this feature matters (disclaimer: I am not a Java programming guy).

What is EM4J?

Hopefully you are somewhat familiar with the intelligent memory management features that come with the vSphere platform such as memory ballooning.  Ballooning is a great technique that allows you to reclaim memory from virtual machines if it’s not in use by the VM.  When dealing with Java workloads a VMware best practice has always been to set reservations for the virtual machine.  This means we are always guaranteeing (or backing) that the memory will be available to the VM when it needs it.  When a memory reservation is set for a VM the hypervisor won’t reclaim memory from this VM (which means VM’s memory won’t be ballooned, compressed or swapped to persistent storage) if memory is tight on the host.

If you consider the definition of JVM (Java Virtual Machine), the last two words are important when talking VMware virtualization.  Running a VM on a VM creates somewhat of a problem for the hypervisor.  The JVM is essentially a black box to the hypervisor, which has no visibility into what’s going on inside its environment.  EM4J, on the other hand, allows memory to be reclaimed through a much cheaper mechanism, and induces GCs at moments when the VM is handling relatively low load.  It does not eliminate long pauses, since VMs without full reservations can still end up swapping, but it significantly reduces pause time and provides more graceful performance degradation when running overcommitted, making the workload’s performance more predictable.  Now that I have described some of the characteristics, here is the actual definition according to the VMware documentation:

“Elastic Memory for Java (EM4J) manages a memory balloon that sits directly in the Java heap and works with new memory reclamation capabilities introduced in ESXi 5.0. EM4J works with the hypervisor to communicate system-wide memory pressure directly into the Java heap, forcing Java to clean up proactively and return memory at the most appropriate times—when it is least active. You no longer have to be so conservative with your heap sizing because unused heap memory is no longer wasted on uncollected garbage objects. And you no longer have to give Java 100% of the memory that it needs; EM4J ensures that memory is used more efficiently, without risking sudden and unpredictable performance problems.”

As you can see, VMware is taking the same underlying technology that has been used for years across our customer base and applying it to Java workloads to gain better efficiencies at scale.  The same performance characteristics apply to EM4J as they do to ballooning in the VMware ESX hypervisor.  Ballooning will only be invoked if the host is overcommitting memory and has to begin utilizing its advanced memory management techniques.  The benefit of EM4J is that when the host is under memory pressure, the end-user experience will be the same as if the VM was hard backed with physical RAM, as we discussed earlier.

Getting started

EM4J is a product that works in conjunction with vSphere 5 and vFabric tc Server, which is bundled with vFabric Standard and Advanced.  EM4J can also work directly with Apache Tomcat.  You might be asking yourself at this point, what is vFabric tc Server, and why the hell do I care?  vFabric tc Server is a Java application server based on Apache Tomcat that VMware maintains and supports.  It is a competitive product to an IBM WebSphere or an Oracle WebLogic, but is a much lighter-weight Java container that allows faster deployments in development as well as production environments.  As a systems infrastructure engineer, it is imperative that you understand these types of Java workloads at a high level.  Your success in moving these workloads into a virtual infrastructure depends on it, with or without EM4J.  Before I jump in and show you how to set this up, there are a few things we need to get out of the way first.  Here is what you’re going to need to begin utilizing EM4J for your own testing, grab it now:

Making it work in vSphere

As noted in my disclaimer above, I am not a Java guy, so this took me some time to get my lab environment up and running with the right components, since I am new to vFabric.  RHEL is the officially supported operating system today, but Linux is Linux, so I chose to grab the latest Ubuntu 11 distribution for my testing.  Work with your internal Java guru to get vFabric tc Server set up and running on your Linux VM for testing.  Once you get through setting up and installing your operating system and vFabric tc Server, there are some technical prerequisites you need to accomplish in order to enable the EM4J balloon driver and gain visibility into the JVM itself.

The first step in your testing is to enable an advanced parameter on the Linux VM you are testing with.  The virtual machine will need to be powered down to perform this action.  Right-click on the virtual machine, select Edit Settings, and then select the Options tab.  Go down to the Advanced section, select “General”, and then select the “Configuration Parameters” button that is now visible:

Once you select the “Configuration Parameters” button you are going to select the “Add Row” button and add the following configuration parameter to the VM:

sched.mem.pshare.guestHintsSyncEnable and set the value to “true” as shown below:
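If you would rather script this than click through the UI, the same parameter can be appended directly to the VM’s .vmx file while it is powered off (a sketch; back up the file first):

```
sched.mem.pshare.guestHintsSyncEnable = "true"
```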

Making it work in tc Server

Once you have enabled the virtual machine for EM4J, you also need to ensure your instance of tc Server utilizes the EM4J balloon driver.  Execute the command listed below to create a new instance; in this example my instance name is “scott”, and the “elastic-memory” option is what enables the EM4J balloon driver.  Once you have created the instance, go ahead and start it up!
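Since the command itself is in a screenshot, here is a sketch of what instance creation looks like with the tcruntime-instance.sh script that ships with vFabric tc Server.  The script location and the exact template flag can vary between versions, so treat this as an approximation rather than gospel:

```
cd /opt/vmware/vfabric-tc-server-standard-2.6.1.RELEASE
# create an instance named "scott" using the elastic-memory template,
# which is what pulls in the EM4J balloon driver
./tcruntime-instance.sh create scott --template elastic-memory
# start the new instance
./tcruntime-ctl.sh scott start
```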

Next we will configure a few parameters within our instance so we can monitor them via the VMware vSphere web console interface, which I will show you next.  Add the following parameters to the setenv.sh file of your new instance as follows:

JVM_OPTS="-Dcom.sun.management.jmxremote=true
-Dcom.sun.management.jmxremote.port=6969
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false"

Next we need to set up what is called the Console Guest Collector (CGC).  The CGC is a process that allows the vSphere web console to pull data from the EM4J balloon driver and place it with each VM so the web client can then display performance data about the current workloads.  This needs to be set up via cron so we can continuously pull real-time data into vSphere.  The cgc.sh script can be found in the /opt/vmware/vfabric-tc-server-standard-2.6.0.RELEASE/templates/elastic-memory/bin/ directory.  Here is a crontab entry to run it every 5 minutes:

*/5 * * * * /opt/vmware/vfabric-tc-server-standard-2.6.1.RELEASE/templates/elastic-memory/bin/cgc.sh > /dev/null 2>&1

Making it work in the vSphere Web Client

You downloaded the EM4J UI plug-in earlier, and now we need to extract it and set it up on your vSphere 5 Virtual Center server.  Extract the plug-in contents into the following directory, then restart the vSphere Web Client service:

C:\Program Files\VMware\Infrastructure\vSphere Web Client\plugin-packages\em4j-client
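On the Virtual Center server itself, the restart can be done from an elevated command prompt.  The service name below is an assumption based on my 5.0 install; verify yours with `sc query` (or just restart it from the Services console) if it differs:

```
net stop vspherewebclientsvc
net start vspherewebclientsvc
```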

The data!

Now that we are through the tedious stuff, we can actually see some of the more interesting performance data, and frankly the reason you are probably reading this blog post!  Log in to your Virtual Center’s web interface and navigate to the virtual machine you are testing with.  Select the fourth tab at the top of the options section, which is titled “Workloads”.  You should now see something similar to this, and “EM4J Agent Enabled” should be checked if you set everything up correctly:

Selecting the “Alerts” tab will give you any relevant data and tell you if any issues are occurring.  This will also display some Java best practices and instruct you on how to fine-tune your JVM.  Selecting the “Resource Management” tab will display much more performance-centric detail, giving you full visibility into the JVM itself.  Excellent performance visibility into that problematic Java workload:

Conclusion

From the documentation, “EM4J helps the system behave gracefully and predictably when memory becomes scarce. It helps you to more easily determine the over-commit ratio that provides acceptable performance at peak loads.”  Hopefully you learned a little bit about what Elastic Memory for Java is and how it works within vFabric and VMware vSphere 5.  As with most technology features and functionality, I suggest understanding the best use cases for EM4J and how it fits into your own environment.  The documentation that I linked to gives plenty of examples of when EM4J can be utilized effectively.  Look for more performance benchmarks around optimal overcommit ratios as our vFabric team completes some great performance testing on this exciting new technology.  The EM4J architecture will not only allow you to run your JVM’s more efficiently, but will also provide some great performance visibility and insight into your Java workloads.

Over the past week, I have been reflecting on just how amazing 2011 was for me, with lots of help from the entire VMware community.  I won’t bore all my readers with EVERY detail, but what good is a blog if you can’t boast once in a while?

In 2011, after a couple of years of planning, evaluating, and trying to get funding, I started implementing VMware on a large scale at the company I work for.  We had used it in development, and for certain niche apps, but now it’s coming in wholesale.  Thanks to VMware, and their amazing development staff, I was able to create some MONSTER clusters without worrying about too many HA Primaries on each blade chassis. Thank you VMware!

Also in 2011, there was much deliberation and evaluation of many different storage arrays from several vendors.  I needed something to replace some old HP EVA’s.  Yes. . .I have been critical of EMC in the past, and honestly, they still deserve some criticism.  However, in the end, we bought VMAX’s.

One of the main reasons VMAX was the only one left standing was its support for mainframe.  Also, Chad’s army of vSpecialists shows EMC’s commitment to tightly integrating VMware into their products, which is comforting.  Was VMAX extraordinarily expensive?  Yes.  Has VMAX been a bit of a pain in the rear to get up and running right?  Indeed.  But as of the end of the year, these things are absolutely screaming, and I am very pleased with the performance and the integration points.

All the work I have done this year to get this new environment up and running, and to begin migrating environments over to the new VMware platform, would not have been possible without the help of many people in the community.  I have thoroughly enjoyed reading everyone’s blogs.  Both of Scott Lowe’s 2011 books (Forbes & Maish too) and Frank and Duncan’s second amazing ESXi clustering book were also extremely helpful.  I have Mike Laverick’s SRM book, as well as a few other recent ones, on my desk for 2012 reading.  Never before have we had so much access to so much in-depth knowledge on every aspect of VMware.  This speaks very highly of VMware’s care and feeding of the community.

The most time saved this year for me has been via the use of William Lam’s scripts, and Luc Dekens et al.’s PowerCLI Reference.  These guys are amazing, and I urge you to both buy the book and support Lam’s virtuallyGhetto site and script repository.

With the help of Jason Nash and J Michel Metz, I got my 1000V nailed down, and FCoE smoking on the rest of the Nexus stack.  As Metz says, if FCoE were a video game, he would be the boss fight at the end!  Thanks!

With the help of Simon Long’s SLOG, I passed VCP5 this year.  Chris Kusek’s blog helped me prepare for and pass the entry level EMC exam, which will enable me to take the Symmetrix cert next year.

I didn’t make it to VMworld this year with all the work going on here.  I did get to attend Backup Central Live with W. Curtis Preston.  What a super cool seminar.  Definitely not your typical one day BS event.  I came away with real knowledge that I could put to use right away.  Here’s my review of the event.

I was part of a VMware focus group for the portal redesign this year.  That was fun, but my NDA won’t allow me to mention details.  I think this was worthwhile, and I took many of your comments on Twitter to the guys doing the redesign.  We will see a much more efficient VMware site really soon that will save us all time!

The coolest thing I got to do in 2011 was join Gestalt IT and attend Tech Field Day 7 in Austin.  That was an amazing experience.  I got to interact with amazingly smart, independent thinkers in the industry.  I also saw some cool new products and ideas from Dell, SolarWinds, Symantec, and Veeam.  I haven’t had much time to blog about these, but I do plan on evaluating a few of the products I saw and posting my opinions as soon as time allows in 2012.  I’m definitely looking forward to my next TFD event!  I would encourage any of my readers who are not employed by a vendor to contact me or Stephen Foskett if you’d like to attend!  Stephen and Matt Simmons work very hard to make these events quite valuable for both presenters and participants.

I’m sure I forgot to thank plenty of folks.  Sorry.

Ohh yea. . . I nearly forgot one other thing.  I also got to enjoy the birth of my second son in 2011.  Amazing!

Happy New Year!

For a while, I’ve been looking for a way to pick which “slots” our VEM’s go into on the 1000V VSM.  It would make troubleshooting much easier, and it just makes more sense to the networking guys who are used to working with physical line cards and supervisors.

A network escalation engineer over at VMware came through with a process for renumbering the VEM’s.  It’s simple, but it never really occurred to me that it was this simple.

All you need to do is grab the host id of the VMware host from the VSM config, shut down the host to take the VEM offline, and then renumber it in the VSM config.

Here’s a screenshot @benperove sent over detailing the process.  I’m definitely doing this ASAP on my 1000V’s!  Thanks Ben!
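For reference, the process on the VSM looks roughly like the sketch below.  The module numbers and UUID are placeholders, and on some 1000V releases the command is `host vmware id` rather than `host id`, so check your version’s docs first:

```
n1kv# show module vem mapping
! ...note the host UUID currently mapped to the VEM, then shut down the host...
n1kv# configure terminal
n1kv(config)# no vem 5
n1kv(config)# vem 3
n1kv(config-vem-slot)# host vmware id <uuid-from-mapping>
```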

A few weeks ago, I did a couple rounds of testing with PowerPath VE to see how it would perform against VMware Round Robin.  If you missed Round 1 or Round 2, you may want to click and read those now.

Based on the comments, and the other posts that said there was no point in setting IOPS to 1 on Round Robin, I decided I was going to have to get more aggressive and test a wide variety of workloads on multiple hosts and datastores.  My goal is to see if there would be any significant difference between Round Robin and PowerPath VE in a larger environment than I was testing with previously.

For Round 3 of my tests, I used 3 hosts, 9 Win2008 R2 VM’s, and 3 datastores.  My hosts are HP BL460 G7 blades with HP CNA’s.  All hosts are running ESXi 5 and are connected via passthrough modules to Cisco Nexus switches.  FCoE is used to the Nexus, then FC from there to Cisco MDS’s, then to the VMAX.  No Storage IO Control, DRS, or FAST is active on these hosts / LUN’s.

Here are the test VM’s, and their respective IOMeter setup:

The first test is Round Robin with the IOPS=1 setting.  We’re seeing 20,673 IOPS with an average read latency of 7.69ms.  Write latency is 7.5ms on this test.  When we change all LUN’s back to the default of IOPS=1000, we see a significant drop in IOPS, and a 40% increase in latency.  Since the bulk of my IOMeter profiles are sequential, this makes sense.  EMC tests, as well as my own, show that there is little difference between IOPS=1 and IOPS=1000 when dealing with small block 100% random I/O.
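For anyone repeating this test, the IOPS value is changed per device with esxcli on ESXi 5.  The device ID below is a placeholder; list yours first with `esxcli storage nmp device list`:

```
# set the Round Robin IO operation limit to 1 for a given device
esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=naa.xxxxxxxxxxxxxxxx
# verify the change
esxcli storage nmp psp roundrobin deviceconfig get --device=naa.xxxxxxxxxxxxxxxx
```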

When switching to PowerPath hosts, we see the IOPS increase around 6%.  This is probably not statistically significant or anything, but what I did find interesting is the 15% better read latency.  My guess is that PowerPath is dynamically tuning based on the workload profile from each host, where Round Robin is stuck at whatever I set as the IOPS= number.

Here’s the scorecard for Round 3:

To sum up our last round of comparisons, it was nice to see results using more hosts, datastores, and VM’s with varying I/O profiles.  While this was helpful, no one can really simulate what real workloads are going to do in production, with IOMeter.

PowerPath for physical servers is a no-brainer.  Based on my results, I am recommending the purchase of PowerPath VE for my VMware environment as well.  In my opinion, it comes down to predictability, and peace of mind.  I cannot predict what all workloads are going to look like in my environment for the future, and I am not willing to test and tune individual LUN’s with different Round Robin settings.  I’d much rather leave that up to a piece of software.

Thanks for all the comments and ideas for these tests and posts.

Since there is very little out there on this, and since it caused plenty of heartache for me, I thought I’d write this up for those having issues with FCoE on the bigger Nexus 5k’s.

Apparently there is a bug feature in the 5548 / 5596 switch where they left out the default QoS policies.  Those have to be in place for FCoE to work.  So they’re shipping the switch with FCoE enabled, but this QoS policy is missing.

What results is that once you get everything set up, you’ll see some FLOGI logins, but they’re very sporadic.  The logins will come in and out of the fabric, and no FCoE traffic will flow.  Your FCoE adapters will report link down, even though the vfc’s are up and the Ethernet interfaces are up.
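You can watch this happening from the 5k itself.  These two commands show whether initiators are staying logged in to the fabric and what state the vfc is in (vfc130 is just an example interface):

```
switch# show flogi database
switch# show interface vfc130
```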

What I suspect is happening – and take this for what it’s worth from an expired CCNA – is that the MTU isn’t set properly for FCoE because the system QoS policies aren’t letting the switch know that there is FCoE.  It wasn’t until I mentioned that I had changed the default MTU that the Cisco TAC level 2 guy finally remembered this little QoS problem with the big switches.

But he sent me the article, so I’ll save you some time.

If you copy the code in blue and paste it, your links will come up instantly and you’ll be ready to roll.  Here’s the link to the Cisco article.

The FCoE class-fcoe system class is not enabled in the QoS configuration.

Solution

For a Cisco Nexus 5548 switch, the FCoE class-fcoe system class is not enabled by default in the QoS configuration. Before enabling FCoE, you must include class-fcoe in each of the following policy types:

Network-QoS

Queuing

QoS

The following is an example of a service policy that needs to be configured:

F340.24.10-5548-1
class-map type qos class-fcoe
class-map type queuing class-fcoe
match qos-group 1
class-map type queuing class-all-flood
match qos-group 2
class-map type queuing class-ip-multicast
match qos-group 2
class-map type network-qos class-fcoe
match qos-group 1
class-map type network-qos class-all-flood
match qos-group 2
class-map type network-qos class-ip-multicast
match qos-group 2
system qos
 service-policy type qos input fcoe-default-in-policy
 service-policy type queuing input fcoe-default-in-policy
 service-policy type queuing output fcoe-default-out-policy
 service-policy type network-qos fcoe-default-nq-policy


Back in June I was asked to build and captain the vFabric SQLFire lab at VMworld.  Now, I’ve been a big champion of the vFabric technologies since the beginning, so I was certainly excited to be given the opportunity.  But I don’t think I fully realized just how fortunate I was at the time.  Because not only did the experience force me to go deep into the technology, but it also forced me to focus on a layer where I previously had little direct involvement in my career.  Let me explain.

We often like to think of the cloud in three distinct layers:  Infrastructure (IaaS), Platform (PaaS) and Software (SaaS).  But in my opinion, these categories may be a bit too broad because there are definitely layers between the layers.  For example, where does the data live?  Infrastructure guys often mentally push the data up into the Platform layer, while application guys often mentally push the data down towards the Infrastructure layer.  Obviously, regardless of which side of the infrastructure-platform fence you live on, we all “touch” the data all the time.  And of course we all recognize how vitally important the data is.  But unless you’re a DBA, the data usually ends up being a problem for someone on the other side of that fence.

So now I mentally place data in its own layer in between the Infrastructure and the Platform, where it in many ways (at least in my mind) serves as the “glue” which binds the two.  You might be asking yourself, “What’s his point?” or even “Who the heck cares?”  Well, I want to make the distinction for a couple reasons.

First, I believe the way we categorize and compartmentalize things in our minds has a dramatic effect on our focus and behavior.  Mentally misplacing important concepts in the wrong compartment usually leads to confusion and misunderstanding, and we can miss tremendous opportunities.  However, the correct mental placement will bring clarity and focus, and can potentially open up a whole new world to us.  So now for me, instead of just thinking of data as something that lived in a file somewhere or in a database that some DBA was responsible for, a whole new world has been unlocked.  I’ll come back to this point in a bit.

Second, lots of Infrastructure folks are a bit concerned about their future due to the high degree of automation and integration happening in the Infrastructure layer right now.  We’re starting to hear whispers of things like “infrastructure guys need to move up the stack or they’ll be left behind.”  Scott Lowe’s recent post The End of the Infrastructure Engineer? not only articulates the concern well, but he also suggests the concern may be unwarranted.  I’m not sure I fully agree with Scott, but I’m simply trying to highlight that the concern is out there and it’s growing.  Shoot, I know I’ve certainly implied numerous times here on this blog that we all need to start moving towards the application/development space (here, here and here).  But would I actually go so far as to say that everyone needs to stop what they’re doing and go buy the latest copy of Programming for Dummies?  Probably not.

What I do believe, however, is this new data layer (not that the data layer is actually new, of course) may be a way for infrastructure engineers to stay relevant as the world moves towards application centric clouds.  It may be a way for us to “move up the stack” by taking a few steps, rather than a career changing leap of faith.  After all, building skills in this layer isn’t a shock to the system because, like I said in the beginning, we’ve all indirectly worked with the data layer our entire careers.  Whereas application development is a completely different world for an infrastructure engineer (and vice versa), data lives much closer to home.  Infrastructure and data are like “kissing cousins” … kind of awkward, but not completely taboo either.

So, if you couldn’t tell by now, I’ve been thinking a lot about data.  Databases, data grids, data fabrics, data warehouses, data in the cloud, moving data, securing data, big data, data data data data data data.  Most of the hard problems (not all, of course) with cloud computing are with data.  It’s always the data that seems to trip us up …

User:     “Why can’t I VMotion my server from here to China?”
Admin:   “Well, aside from the fact that we don’t have a data center in China, your application has a terabyte of data and it would take a month to get there.”

User:  “Why can’t I use dropbox anymore?”
Admin:  “Because you put sensitive company data on it, which was compromised and now we’re being sued.  By the way, your boss is looking for you.”

User:  “The performance of my application you put on our hybrid cloud is pitiful.  You suck.”
Admin:  “You told us there would be no more than 100 users, and now there are 50k users trying to access the same database at the same time.  You’re an idiot.”

Granted, in all of these situations, it’s not just the data that’s the issue.  Well, actually, come to think of it, it’s not really the data at all, now is it?  It’s all the things we must do in order to deliver/secure/migrate/manage/scale the data in the cloud that becomes the issue.  So data is often the root of our problems, but never really the problem itself.  Instead it’s data handling in the cloud that’s the big challenge.  Yes, that’s it!  And it would appear someone really smart from JPMorgan Chase would agree with me …

Whew!  Validation gives me warm fuzzies.  Anyway, circling back to a point I made earlier, since I’ve been focused on the data layer a whole big crazy world has opened up for me.  Much like what vSphere did for servers, there is a ton of activity happening at the data layer to transform the way we handle data at scale in the cloud.  And again, what’s so cool about this layer is that it is all too familiar.  When studying up on how data grids can make data globally available via their own internal WAN replication techniques, or when learning about how a new breed of load balancers are enabling linear scalability for databases, or when exploring how in-memory databases can dramatically improve application performance … the concepts/language/lingo are easily understood and relatable to things I already know.

Now in the midst of all this learnin’ it occurred to me, everyone has been talking about the Platform as the next big thing (myself included) … but I would think the data problems need to be solved first, don’t they?  Sure, I know things won’t happen serially here; lots of smart people and cool new companies are working to solve cloudy problems at both layers in parallel.  But we all know that where there are problems, there are opportunities.  And it would appear to me that the more immediate problems needed to be solved are with data handling.  So could it be that the next big thing is actually in the layer between the layers?  And could this really be the place where developers and engineers finally meet and give each other a great big awkward hug?

Which brings me back to the very beginning of this blog post (the next big thing, not the awkward hug).  After digging pretty deep into SQLFire, I’ve found it’s a radically new kind of database that addresses many of the issues with data handling in the cloud.  It’s a database built differently from the ground up because it is built on amazing, disruptive data grid technology, yet presents itself to an application as a regular old database.  It can unobtrusively slide in between applications and their existing databases to solve performance problems, or it can stand on its own as a complete database solution.  It can instantly scale linearly, it can make your data extremely fault tolerant, and it can make your data available globally, all with very little effort and/or overhead.  Pretty amazing stuff.  You should check it out and let me know what you think.  And even if you don’t take a look at SQLFire, what do you think about the “layer between the layers?”  The next big thing?

As promised in the first post, here is round 2 of my testing with PowerPath VE and vSphere 5 NMP Round Robin on VMAX.  For this round of testing, I changed the Round Robin IO operation limit (IOPS) from the default of 1000 to 1.

I understand that this is not recommended, and I also understand that further testing is needed with multiple hosts, multiple VM’s and multiple LUN’s.  As soon as I get the time, I’ll do that and report back.
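
For anyone wanting to reproduce this change, here’s roughly what it looks like from the ESXi 5 shell.  This is only a sketch: the device IDs below are made-up placeholders, and each esxcli command is echoed as a dry run so you can review it before actually applying anything.

```shell
# Placeholders only -- list your real device IDs with: esxcli storage nmp device list
DEVICES="naa.example0001 naa.example0002"

for dev in $DEVICES; do
  # Make sure the device is claimed by the Round Robin PSP first
  echo esxcli storage nmp device set --device "$dev" --psp VMW_PSP_RR
  # Then drop the IO operation limit from the default 1000 down to 1
  echo esxcli storage nmp psp roundrobin deviceconfig set --device "$dev" --type iops --iops 1
done
```

Drop the echo to apply it for real.  There is also an SATP claim-rule approach to set this array-wide, but that’s beyond this sketch.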

For the background and methodology, click the link above to read the first post.  For now, I’ll skip right to the scorecard.

As we can see here, setting Round Robin IOPS to 1 definitely evens the score with PowerPath.  I expected to see more CPU activity than PP, but that wasn’t the case.  I also expect to see more overhead on the array once I add more hosts, VM’s and LUN’s to the mix.  It might be a few weeks before I can pull that off.

Thanks for reading, and commenting.  Round 3 to come.

This past year, I did an exhaustive analysis of potential candidates to replace an aging HP EVA storage infrastructure.  After narrowing the choices down based on several factors, the one that had the best VMware integration, along with mainframe support, was the EMC Symmetrix VMAX.

One of the best things about choosing VMAX in my mind was PowerPath.  It can be argued whether PowerPath provides benefits, but most people I have talked to in the real world swear that PowerPath is brilliant.  But let’s face it, it HAS to be brilliant to justify the cost per socket.  Before tallying up all my sockets and asking someone to write a check, I needed to do my own due diligence.  There aren’t many comprehensive PowerPath VE vs. Round Robin papers out there, so I needed to create my own.

My assumption was that I’d see a slight performance edge on PowerPath VE, but not enough to justify the cost.  Part of this prejudice comes from hearing the other storage guys out there say there’s no need for vendor specific SATP / PSP’s since VMware NMP is so great these days.  Here’s hoping there’s no massive check to write!  By the way, if you prefer to skip the beautiful full color screen shots, go ahead and scroll down to the scorecard for the results.

Tale of the Tape

My test setup was as follows:

Test Setup for PowerPath vs. Round Robin

• 2 – HP DL380 G6 dual socket servers
• 2 – HP branded Qlogic 4Gbps HBA’s in each server
• 2 – FC connections to a Cisco 9148, then direct to VMAX
• VMware ESXi 5 loaded on both servers
• All tests were run on 15K FC disks – no other activity on the array or hosts

Let’s Get It On!

(I’m sure there’s a royalty I’ll have to pay for saying that)

Host 1 has PowerPath VE 5.7 b173, and host 2 has Round Robin with the defaults. Each HBA has paths to 2 directors on 2 engines. I used IOmeter from a Windows 2008 VM with fairly standard testing setups.  Results are from ESXTOP captures at 2 second intervals.
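
Before each run, it’s worth confirming which plugin actually owns the devices on each host.  These read-only checks are a sketch (printed rather than executed here, and the device ID is hypothetical):

```shell
# Which claim rules are in place (NMP vs. PowerPath) on this host:
echo esxcli storage core claimrule list
# Per-device PSP and working-path details on the NMP host:
echo esxcli storage nmp device list
# List the paths behind one (hypothetical) device -- with 2 HBAs hitting
# 2 directors on 2 engines, you'd expect multiple paths per device:
echo 'esxcli storage core path list -d naa.example0001'
```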

The first test I ran was 4k 100% read 0% random.  All these are with 32 outstanding IO’s, unless otherwise specified.

Here is Round Robin

And PowerPath VE

The first thing I noticed was that Round Robin looks exactly like I imagined it would.  Not that that means anything.  I do realize that this test could have been faster on RR with the IOPS set to 1, and maybe I’ll do that in Round 2.  As for round 1, with more than twice the number of IOPS, PowerPath is earning its license fee here for sure.

How about writes?  Here’s 4k 100% write 0% random.

Round Robin

PowerPath

Once again, PowerPath VE shows near 2x the IOPS and data transfer speeds.  I’m starting to see a pattern emerge.

32K 100% read 0% random

Round Robin

PowerPath

PowerPath is really pulling ahead here with over 2x the IOPS yet again.

32K 100% write 0% random

Round Robin

PowerPath

Wow!  PowerPath is killing it on writes!  Maybe PP has some super-secret password to unlock some extra oomph from VMAX’s cache.

Nevertheless, it’s obvious that PP is beating up on the default Round Robin here, so let’s throw something tougher at them.

Here’s 4K 50% read 25% random with 4 outstanding IO’s.

Round Robin

PowerPath

The gap between the contenders closes a bit with this latest workload at only a 24% improvement for PP.  But as we all know, IOPS doesn’t tell the entire story.  What about latency?

4k 100% write 0% random

Round Robin

PowerPath

Write latency is 138% higher with Round Robin!  That’s a pretty big gap.  Is it meaningful?  Depends on your workload I guess.

Scorecard after Round 1

So far, PowerPath looks like a necessity for folks running EMC arrays.  I’m not sure how it would work on other arrays, but it really shines on the VMAX.  In some of my tests the IOPS with PowerPath were three times greater than with the standard Round Robin configuration!  I do believe that the gap will shrink if I drop the IOPS setting to 1, but I doubt it will shrink to anywhere near even.  We will see.

In addition to the throughput and latency testing, I also did some failover tests.  I’m going to save that for a later round.  I don’t want this post to get too long.

Introduction

VMware is in a perpetual state of change, if you haven’t noticed.  Virtualization and the hypervisor will always be a core competency of ours, but we are continually expanding into other areas of IT software solutions for our customers.  I think Paul Maritz states it best with his quote, “When we see a management problem, we will be replacing it with an automation solution.”  Take a look at what VMware vCloud Director is accomplishing by delivering the automation of IT services at the infrastructure layer for our customers and service providers.  Take a look at what Horizon App Manager is doing to create and deliver a self-service enterprise application store to consume SaaS-based applications.  One of the best parts about moving towards a self-service model is that the system engineers can get part of their lives back to focus on more important projects for the business, since end users can now consume services on demand.

When I talk with my customers, half of my challenge is educating them on what we are doing to enable them to operate more efficiently from a solution perspective.  We are no longer just a hypervisor company.  Don’t get me wrong, I love talking speeds and feeds and can geek out and get distracted with the best of them on the “tech cool factor”.  Alas, at the end of the day, isn’t it about finding a solution that enables you (the systems engineer) to do more with less while also delivering value back to the business?  This is why our customers love us so much that they tend to paint us into the “infrastructure corner”.  We have helped them maintain happy lines of business by providing IT faster to their customers with virtualization.  As our former friend Patrick Swayze (RIP) once said, “Nobody puts Baby in the corner!”

Enter vFabric Data Director

Let’s think outside the traditional infrastructure box: there are challenges with virtualizing databases from a management perspective.  Now, it can be done, and many customers are out there deploying Oracle and Microsoft SQL databases on VMware vSphere.  It’s not the traditional I/O workload conversation that one must consider when going after these tier 1 workloads.  It’s more about the long-term management of these resources that are constantly being requested, deployed, copied, and backed up, and the backend management that goes into this entire process.  Database sprawl is a real-world problem that many organizations struggle with.

Why not create a portal where a non-DBA IT user can log in and spin up a database by answering a few simple questions?  Why not pre-configure the repetitive DBA tasks into a list of options or a “catalog” and allow users to choose the combo meal they would like to consume?  Why not give the DBA’s back time in their day to do more productive, forward-thinking activities and take the easy operational stuff off their plates?  Enter vFabric Data Director (the artist formerly known as Project Aurora).  Notice the following features and functionality as you’re watching the clip.

• Self-Service Provisioning
• SQL Statement Execution from the Web Portal
• Backup and Recovery simplification

Demo of vFabric Data Director

vFabric PostgreSQL

The vFabric Data Director portal probably makes sense to you now that you have seen it in action.  The first database we have enabled on top of this platform is a vSphere-optimized PostgreSQL database, the most enterprise-ready open source database.  We have specifically custom-tuned this fork of Postgres to make it virtualization-aware and to run more efficiently.

The vFabric Postgres database is delivered to the IT environment in the form of a virtual appliance that is intelligent and can tune itself as workloads change.  Database buffer sizes can scale up and down as I/O characteristics change, and a special ballooning database driver can be invoked for more memory efficiency within the virtual appliance.  The database is a standard SQL database that supports ODBC connections and JDBC tools to query the database just like the open source version.
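
Because it speaks the standard Postgres wire protocol, a stock client should be able to connect without anything special.  Here’s a minimal sketch with plain psql; the address, user, and database name are made-up examples, and the command is printed as a dry run rather than executed:

```shell
# Hypothetical connection details -- substitute your appliance's values
PGHOST=192.0.2.10     # vFabric Postgres virtual appliance (example address)
PGUSER=app_user
PGDATABASE=appdb

# Printed as a dry run; drop the echo to actually connect
echo psql -h "$PGHOST" -U "$PGUSER" -d "$PGDATABASE" -c "SELECT version();"
```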

What’s next?

Expect much more!  I can’t say a ton here in this forum, but know this is just the beginning for this product/solution.  EMC’s Chad Sakac put together a great video for VMworld 2011 (and apparently is allowed to say much more than I am) on his blog site.  Watch towards the end of the video for futures and where you can expect to start to see the flood gates open up as we take it to the next level!

- Scott

Several months ago, a small firm I consult for ordered a Drobo Elite (recently replaced by the B800i).  These guys had run ESXi for a while in one of their environments, and wanted to explore some of the features requiring shared storage.  Like most small businesses, they wanted to get there without breaking the bank.  There aren’t a ton of options in the $6-7k range for iSCSI arrays on the VCG, so it was an easy choice.  Their CIO called up Drobo and placed the order.  He explained what they were going to use it for, and the guy configured it right over the phone and shipped it out.  A few days later, the Drobo Elite arrived configured with 8 x 2TB Western Digital (WD20EARS) disks at a cost of just under $6k.

Setup in ESXi was straightforward.  I followed the documentation from Drobo, set the PSP to VMW_PSP_MRU and the SATP to VMW_SATP_DEFAULT_AA, and started throwing VM’s on for testing.
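
For the CLI-inclined, the same PSP assignment can be made from the ESXi shell.  A sketch only: the device ID is a placeholder (find yours with esxcli storage core device list), and the command is echoed as a dry run.

```shell
# Placeholder ID -- substitute the Drobo volume's real naa identifier
DROBO_DEV="naa.exampledrobo01"

# Point the device at the MRU path selection policy, per the Drobo docs above
echo esxcli storage nmp device set --device "$DROBO_DEV" --psp VMW_PSP_MRU
```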

The initial tests were okay.  I wasn’t really bouncing around the room yet, but I am used to larger FC array speeds.  Once I saw that IOmeter was pushing the expected number of IOPS, we were ready to throw on a few VM’s.  For some context, we’re talking about a 100 person company with about 20 servers in total.  They’re running 50% of those on ESXi right now on two hosts.  Once normal daily production started with 3-4 VM’s hitting the Drobo, everything screeched to a grinding halt.

Latency, as reported in ESXTOP, was showing 4,000-5,000 ms, and there wasn’t any single workload that was giving it a tough time.  I went back in and double checked the iSCSI config.  All the bindings were correct, as were the PSP and SATP.  Nothing had changed except adding a couple more VM’s to the Drobo.
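
As an aside, latency numbers like these are easy to capture for offline analysis with esxtop’s batch mode.  A sketch (the sample count and output path are arbitrary choices, and the command is printed rather than run):

```shell
# -b batch mode, -d seconds between samples, -n number of samples.
# 150 samples at 2s gives a 5-minute CSV you can open in perfmon or Excel;
# the DAVG/KAVG/GAVG columns hold device/kernel/guest latency.
echo esxtop -b -d 2 -n 150 '>' /tmp/esxtop-capture.csv
```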

I began to suspect the switch was misconfigured, so I pulled it out and went direct to the Drobo.  That didn’t yield a noticeable improvement.  After troubleshooting this forever and deliberating on the phone with Drobo, they announced their verdict: apparently the WD “Green” drives are not supported with VMware.  They said we’d need to buy the Black drives.

Their site quickly confirmed.  But again, since Drobo configured the unit, knowing it was for a 2 host VMware environment, we both assumed the Green drives were sufficient.  The extra cost of the Black wasn’t warranted for this environment.  I could understand if the customer had gone out and bought some random drives, but they came with the unit directly from Drobo.

They had us run some of their own IOmeter tests directly connected from a Windows box using the MS iSCSI initiator.  We then went ahead and swapped the disks for the recommended WD Black disks, and below are a few charts showing the results.

The Black is faster in every way, but the most noticeable aspect is write latency.  I suspect this is due to the increased processing power and faster cache.  Nevertheless, the results speak for themselves.

Bottom line is, if you’re going to run ESXi on a Drobo, don’t go green!

Introduction

With vSphere 5 comes a plethora of new features and functionality across the entire VMware virtualization platform.  One of the core components that got a nice upgrade was the vSphere Distributed Switch (vDS).  For those of you that have not had the chance to use the vDS, it is a centralized administrative interface that lets you manage and update a network configuration in one location, as opposed to on each separate ESX host.  This saves vSphere administrators and network engineers a lot of operational configuration time and/or scripting activities.  The vDS is a feature that is packaged with Enterprise Plus licensing.  Here are some of the new features that are included with the vDS 5.0:

• New stateless firewall that is built into the ESXi kernel (iptables is no longer used)
• Network I/O Control improvements (network resource pools and 802.1q support)
• LLDP standard is now supported for network discovery (no longer just CDP support)
• The ability to mirror ports for advanced network troubleshooting or analysis
• The ability to configure NetFlow for visibility into inter-VM communication (NetFlow version 5)

NetFlow Basics

I could do a write-up on each one of these components, as they are all worth discussing in more detail, but I wanted to focus on the NetFlow feature for this post because I think it’s an awesome addition.  NetFlow has had experimental support in vSphere for some time, but now VMware has integrated the functionality right into the vDS, and it is officially supported.

NetFlow gives the administrator the ability to monitor virtual machine network communications to assist with intrusion detection, network profiling, compliance monitoring, and, in general, network forensics.  Enabling this functionality can give you some real insight into what is going on within your environment from a network perspective.  Having “cool features” is nice, but having features you can utilize to show value back to the business is a completely different value add.

Let’s look at how to setup NetFlow on the new vDS, then take a look at the data you can extract from NetFlow with a third party NetFlow viewer.  Once you see the value of the data, you can then make some important IT business decisions on how you need to mitigate risk and protect your investment by getting ahead of the curve (aka VMware vShield or some other third party software).

Ensure you are running VMware vSphere 5.0 and have activated Enterprise Plus licensing to set up the vDS switch in your environment.  You can see below the new option to deploy a vDS 5.0 switch, and of course we offer backwards compatibility for those that need to deploy to their 4.x environments.  Select the 5.0 version and hit next.

In the “General” section, give the vDS a name; in this example I am calling mine “dvSwitch5”.  Next, select the number of network interface cards you want to participate in the switch, then select next.

For each host in your cluster that you wish to participate in the vDS, you will need to configure the network interfaces that will support this vDS implementation.  In this example I have selected vmnic 4 and vmnic 5 to be members of the vDS 5 switch.  Select next.

That’s it!  Review the summary and select finish; your vDS configuration will come online and you can begin configuring NetFlow.

Set up NetFlow on the vDS 5

Now that you have a fully functioning vDS 5.0 switch, you can actually start to use it!  First, let’s configure NetFlow on the dvPortGroup; then we will move some virtual machines over to the new vDS so we can get some real data flowing.  Right click on your newly created dvSwitch and select “edit settings”.  Go to the “NetFlow” tab across the top of the page.  You will need to give your vDS an IP address so your NetFlow tool will know where to collect the data from.  Populate an IP address for the vDS, then enter the IP address of the collector you plan on using to pull the data from.  Make sure you enter the correct port number (default is 1) for how you set up your NetFlow application to communicate.

Right click on the dvPortGroup within the vDS and select the “monitoring” option and enable NetFlow so you can begin to collect data.

Move a few VM’s over to the new vDS so you can begin to capture some real data within your newly established NetFlow configuration.  I have highlighted below how you can change the network connection on a VM to now utilize the dvSwitch5 we created earlier.

Pull Some Data

You will need to utilize a third-party NetFlow analysis tool to parse the data we have started to generate.  In the example below I am using a pretty nice application called ManageEngine NetFlow Analyzer.  I won’t be covering how to install or set up this application here, as your organization might already have some network tool it has standardized on.  Once you have moved some virtual machines over to the new vDS, make sure you generate some traffic so there is relevant data to examine.  Below, I ran a few speedtest.net downloads and hit some websites to create traffic.

Below you can see the different virtual interfaces on my vDS that are being monitored.  You can see our application is showing us what type of traffic we are examining, and the consumption of the different tcp/udp ports that are communicating both inbound and outbound on the switch.

The “under the covers” reporting is great stuff, but let’s start to look at how this can help the business.  Consider a VMware View environment where you are supporting hundreds if not thousands of desktop images.  You can use the NetFlow data to start to examine if certain VM’s are communicating to production systems that they shouldn’t be communicating to at all.  How about reducing the overall workload on your VMware View ESX server?  Many of the NetFlow products like the one I am showing here will produce reports on where users are going externally on the internet.  See the report below.  YouTube is probably a website you want to keep an eye on, as streaming video can greatly impact a virtual desktop environment.

From an intrusion detection and compliance perspective, you can now gain visibility into the vSphere environment to begin to understand some of the network communications that are taking place.  See below:

From a risk mitigation perspective, VMware can help you eliminate the security vulnerabilities you are beginning to gather data on.  VMware vShield has three different solutions that can help protect your environment from the edge to the core.  I would suggest examining how you can segment and protect your internal workloads to eliminate these security risks.  From a virtual desktop perspective, the desktop workloads are better served being contained in their own protected segment (VLAN’s are broadcast domains, not protected segments).  Below is an example of how a logical vShield configuration can begin to help you segment your virtual infrastructure.

Conclusion

VMware vSphere 5 offers some great new features that are integrated into the new vSphere 5 Distributed Switch.  Start to leverage your existing investment by examining your network infrastructure with the NetFlow data you can now begin to extract.  Once you have gathered this data, begin considering how you can mitigate some of the security and compliance risks within your organization.  VMware vShield is a product that can help you in this regard and will integrate into your current environment.

-Scott

BREAKING – PALO ALTO (VP)

The VMware licensing debate was killed this afternoon while trying to rescue the #vTax hashtag from the inside lane of the Ridiculous Interstate.  Witnesses say a bearded, balding “smart-looking” man was driving north at a very high rate of speed in a truck with the license plate VMW when the debate was struck.  The truck backed up and struck the debate again and again before authorities arrived and pronounced the debate dead at 5 PM PDT today.

I am writing this as a blogger at Virtual Insanity, and a customer of VMware. I don’t sell VMware, and I’ve never worked for VMware. I don’t even work for a partner. I barely get to chat with my fellow bloggers who work for VMware, and am certainly not privy to inside information, despite my company’s NDA.

With that out of the way, VMware has done the right thing here. The fact that they can take customer feedback and mold it into a dramatic licensing change, just a few weeks before a product GA’s, is astounding. That speaks not only to the agility of the company, but their willingness to please their customers.

They even went out of their way to please NON-PAYING customers with this change. The change to the free version was causing more drama than the change to customers who spend millions with VMware.

Should VMware have focus-grouped the licensing change more than they did? Yes. It would have preempted the customer perception wildfire they have had to fight for the past couple of weeks. I am sure they ran the numbers and knew that only a small percentage would be impacted. But the fact is an even smaller percentage actually ran the scripts to see how it would affect them. Once the fervor got started by a few, it wasn’t going to stop.

A price increase was inevitable. VMware has given us HUNDREDS of new features in the past several years for free. I think not increasing it with 4.0 was the right move, but they couldn’t hold out forever. The new vRAM allotments and policies are spot on, and are going to put a lot of customers’ fears to bed.

Now we can get on with discussing the amazing new features of vSphere 5.0 without that licensing cloud hanging over our heads.

Kudos VMware!

Introduction

This is a follow-up to a blog post I did last year on upgrading your virtual hardware.  That post was really trying to show people how easy the virtual hardware upgrade to version 7 was, and that despite it being a manual effort, it wasn’t all that painful.  There have been several other write-ups in the community that cover how to automate this task to save you time and effort.  In the end, though, there was no officially supported, easy, automated way to accomplish it.

There are so many great new features being released with vSphere 5 that some of the small stuff might get missed.  As a former VI admin, this is one of the small ones that can’t be overlooked for those of you in the trenches.  There is another new feature introduced with vSphere 5 called “VMware Auto Deploy” that somewhat competes with VUM from an ESX deployment methodology perspective.  If you would like to learn more about Auto Deploy, check out Gabe’s write-up here.

In a Nutshell

• VUM can be used to upgrade your ESX 3.x hosts and vSphere 4 hosts to 5.0 (3.x makes a pit stop at 4)
• VUM can be used to upgrade your vSphere “Classic” hosts to ESXi
• VUM can now remediate multiple ESX hosts at the same time rather than queuing up (think multi-threaded)
• VUM can automatically upgrade VMtools at a scheduled maintenance window
• VUM can automatically upgrade Virtual Hardware at a scheduled maintenance window
• VUM can no longer be used to patch guest operating systems
• VUM requires a Windows operating system and cannot be installed on the VMware vCenter Server Appliance
• VUM can automatically upgrade your Virtual Hardware from version 4 or 7 to version 8 (vSphere 5)

Update Manager to the rescue

You can now use vSphere Update Manager to perform orchestrated upgrades of the virtual hardware and VMware Tools of virtual machines in the inventory at the same time.  Not only can you use VMware Update Manager (VUM) to upgrade your ESX hosts to version 5, you can also leverage it to assist with the hundreds of VM’s you need to address as part of the upgrade process!  This is a huge time saver and will help eliminate configuration drift across your environment, as I am sure your virtual infrastructure has only grown bigger since the last time we went through this.

Let’s walk through what this process looks like, and how you can configure Update Manager to accomplish it.  I am going to assume you have already set up or upgraded your Virtual Center to version 5, and that you have also updated or installed VUM 5.

The first step in upgrading your virtual infrastructure is to create a plan of attack.  Most of my customers group their virtual machines by application or by line of business.  This typical grouping won’t lend itself well to the virtual machine updating we need to do.  I suggest creating a few folders in the “VMs and Templates” view that you can use to help facilitate this upgrade.  As you can see below, I created three different folders that you can use to temporarily move the VM’s into for their scheduled maintenance.  I suggest creating different upgrade windows that you will attach to these three folders (after getting change management approval, of course!).  Yes, there is downtime required for this process!

For each of these folders you are going to want to configure it to apply the VMware Tools upgrade first.  You can see below that this option is selected for my first patch management window.

After I have selected my VMware Tools upgrade, I can now scan the VM’s that I have moved into this folder to discover which ones need to be upgraded.

Now you want to select “Remediate” on the new baseline that we have configured.  You will be prompted to create a schedule for the VMware Tools installation as shown in the capture below.  I have configured my first VMtools patching to occur at 2:20 a.m.

VMware Update Manager gives you the option of taking a snapshot prior to the tools upgrade in case something goes sideways during the upgrade procedure.  Here you can also select if you want to retain your snapshots or have VUM remove the snapshots after a configured period of time (hours):

Now let’s run through the same process again, this time we are going to select the “VM Hardware Upgrade” which will then bring your VMware virtual machine hardware version up to version 8.  As I mentioned above, you can be running at either version 7 or even version 4 for VUM to update your virtual hardware.

Same as before, but this time make sure you stagger your virtual hardware upgrade for 30-40 minutes later:

Same options as before, feel free to take snapshots of the vm’s in case you need to revert for some reason.  Be aware, if you are doing snapshots across hundreds of virtual machines, you should consider the disk space that they will be consuming in both the short and long term.
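
A quick back-of-envelope estimate can keep the snapshot space from surprising you.  The numbers below are pure assumptions for illustration; plug in your own VM count and expected snapshot growth:

```shell
VM_COUNT=300       # VMs being remediated in one maintenance window (assumed)
AVG_DELTA_GB=5     # assumed snapshot growth per VM while the snapshot exists

# Worst case if every snapshot lives through the whole window:
echo "Rough extra datastore space needed: $((VM_COUNT * AVG_DELTA_GB)) GB"
```

With these assumed numbers, that works out to 1500 GB of datastore headroom, which is why the snapshot retention setting above matters.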

Below you can see in the recent tasks that our upgrades are taking place automatically which should give you some of your personal time back to do other more important things in your environment.

Conclusion

Leverage VMware Update Manager as part of your upgrade path to vSphere 5.  Automation is critical as your virtual environment continues to grow exponentially.  I haven’t spoken with one customer that is hiring more VMware engineers to their team, so we need to leverage tools/technology to automate whenever possible.

Hope this helps!

-Scott