PowerPath VE Versus Round Robin on VMAX – Round 3 (TKO)

A few weeks ago, I did a couple rounds of testing with PowerPath VE to see how it would perform against VMware Round Robin.  If you missed Round 1 or Round 2, you may want to click and read those now.

Based on the comments, and on the other posts arguing there was no point in setting IOPS to 1 on Round Robin, I decided I was going to have to get more aggressive and test a wide variety of workloads on multiple hosts and datastores.  My goal was to see whether there would be any significant difference between Round Robin and PowerPath VE in a larger environment than the one I tested with previously.

For Round 3 of my tests, I used 3 hosts, 9 Win2008 R2 VMs, and 3 datastores.  My hosts are HP BL460 G7 blades with HP CNAs.  All hosts are running ESXi 5 and are connected via passthrough modules to Cisco Nexus switches.  FCoE is used from the hosts to the Nexus switches, then FC from there to Cisco MDS switches, and on to the VMAX.  No Storage I/O Control, DRS, or FAST is active on these hosts or LUNs.

Here are the test VMs and their respective IOMeter setups:

 

The first test is Round Robin with the IOPS=1 setting.  We’re seeing 20,673 IOPS with an average read latency of 7.69 ms and a write latency of 7.5 ms.  When we change all LUNs back to the default of IOPS=1000, we see a significant drop in IOPS and a 40% increase in latency.  Since the bulk of my IOMeter profiles are sequential, this makes sense.  EMC’s tests, as well as my own, show that there is little difference between IOPS=1 and IOPS=1000 when dealing with small-block, 100% random I/O.
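For reference, here’s roughly how I flip that setting per device with esxcli on ESXi 5.  Treat it as a sketch: naa.xxxx is a placeholder for the actual device ID, and the syntax may differ slightly on other builds.

    # Check the current Round Robin settings for a device
    esxcli storage nmp psp roundrobin deviceconfig get --device=naa.xxxx

    # Switch paths after every I/O (IOPS=1)
    esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxx --type=iops --iops=1

    # Revert to the default path-change trigger (IOPS=1000)
    esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxx --type=iops --iops=1000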

When switching to the PowerPath hosts, we see IOPS increase by around 6%.  That is probably not statistically significant, but what I did find interesting is the 15% better read latency.  My guess is that PowerPath is dynamically tuning path selection based on the workload profile from each host, whereas Round Robin is stuck at whatever I set as the IOPS= value.
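If you want to confirm which plugin has claimed a given device once PowerPath VE is installed, something like the following should work on ESXi 5 (again, naa.xxxx is a placeholder, and the exact field names can vary between builds):

    # List the claim rules; PowerPath VE registers its own multipathing plugin alongside NMP
    esxcli storage core claimrule list

    # Show device details; the "Multipath Plugin" field should read PowerPath for
    # claimed devices, or NMP for devices still handled by Round Robin
    esxcli storage core device list --device=naa.xxxx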

Here’s the scorecard for Round 3:

 

To sum up our last round of comparisons, it was nice to see results using more hosts, datastores, and VMs with varying I/O profiles.  While this was helpful, no IOMeter run can really simulate what real workloads are going to do in production.

PowerPath for physical servers is a no-brainer.  Based on my results, I am recommending the purchase of PowerPath VE for my VMware environment as well.  In my opinion, it comes down to predictability and peace of mind.  I cannot predict what every workload in my environment will look like in the future, and I am not willing to test and tune individual LUNs with different Round Robin settings.  I’d much rather leave that up to a piece of software.
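For anyone who does stick with native Round Robin, here’s a rough sketch of the per-host scripting that tuning implies.  It assumes ESXi 5 esxcli syntax and that the VMAX devices are claimed by VMW_SATP_SYMM; the device-matching in the loop is illustrative rather than something I have validated on every build.

    # Make Round Robin the default PSP for devices claimed by the Symmetrix SATP,
    # so newly presented VMAX LUNs pick it up automatically
    esxcli storage nmp satp set --satp=VMW_SATP_SYMM --default-psp=VMW_PSP_RR

    # Set IOPS=1 on every device currently claimed by VMW_SATP_SYMM
    for dev in $(esxcli storage nmp device list |
                 awk '/^naa\./ {d=$1} /Storage Array Type: VMW_SATP_SYMM/ {print d}'); do
        esxcli storage nmp psp roundrobin deviceconfig set --device=$dev --type=iops --iops=1
    done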

Thanks for all the comments and ideas for these tests and posts.

 


About Brandon Riley

I am a Senior Distributed Systems Engineer and have worked in the financial services sector for the past 15 years. I help design and implement open systems infrastructure. Virtualization with VMware and EMC VMAX is a huge part of that infrastructure. All views expressed in my blog posts are mine and mine alone. Opinions do not represent my employer or affiliates of my employer.
  • David Williams

    Good set of posts, Brandon.  In the end, the point of PowerPath is intelligent, array-oriented path selection and dynamic optimization, not performance.  There are dozens of reasons for the performance differences (using RR requires more manual tuning than PP/VE does), but vendor-provided MPIO is always the way to go.  After all, it’s the storage vendor, not VMware, that knows how to optimize storage I/O for their arrays.  The only downside here is that EMC is still stuck in the era where you pay separately for the no-brainer software options rather than just getting them with the price of the hardware, like many of the up-and-coming storage vendors do.  In my opinion, that business model will need to go away if they want to stay competitive.

    • B. Riley (http://twitter.com/BrandonJRiley)

      Thanks David. You are right about the antiquated pricing model. Price is the only reason I did all of these tests. I do think they are beginning to come around on some of their products with software bundling, but still a long way to go. Although there is at least one storage vendor with worse licensing. Starts with a Net and ends with an App.  ;)

  • Chris

    Unfortunately, your test doesn’t take into account the added overhead involved in troubleshooting with an additional vendor.  By using PowerPath VE you’re tying VMware’s hands when it comes time to troubleshoot a storage issue, and in my experience EMC has been less than impressive at ensuring PowerPath VE can be adequately managed without downtime.

  • cam

    This thread is over a year old now, but I also noticed that ALUA was not discussed in the testing.  RR will alternate between the storage paths presented to the host, but it doesn’t take into account the latency on any single path; thus it doesn’t prefer faster access paths over slower ones.  When ALUA is enabled, RR should always prefer the path(s) with the lowest latency, and the comparison with PowerPath would therefore be more meaningful.

    • BrandonJRiley

      Cam:

      ALUA is used for midrange arrays that are not active/active.  These tests were done on a Symmetrix, which is active/active.  It might be interesting to see these tests done on a VNX or EVA with ALUA.

  • Jeffrey Thompson

    Hi Brandon,

    I was recently searching around for some good independent case studies on PowerPath vs. native multipathing, and thought this was fantastic from a VMware perspective.  I loved your testing methodology and data reporting.  That being said, you made the comment “PowerPath for physical servers is a no-brainer,” and I was wondering if you did direct testing yourself or otherwise know of a report you trust that would validate that claim in more data-oriented terms?  Our use case / focus area right now is physical servers running MSSQL OLTP databases.

    Here’s the popular EMC one, but I take all vendor-published information with a grain of salt: http://www.emc.com/collateral/analyst-reports/11-10-00-esg-lab-validation-emc-powerpath.pdf

    We’ll be testing with similar methodology ourselves, but it might be valuable to see independent results in the wild. Thanks!