FCoE Problem with Cisco Nexus 5548 and 5596

Since there is very little out there on this, and since it caused plenty of heartache for me, I thought I’d write this up for those having issues with FCoE on the bigger Nexus 5Ks.

Apparently there is a bug (or “feature”) in the 5548 / 5596 switches where the default QoS policies were left out.  Those policies have to be in place for FCoE to work.  So they’re shipping the switch with FCoE enabled, but the QoS policy it depends on is missing.

What results is that once you get everything set up, you’ll see some FLOGI logins, but they’re very sporadic.  The logins flap in and out of the fabric, and no FCoE traffic ever flows.  Your FCoE adapters report link down, even though the vFC interfaces are up and the Ethernet interfaces are up.

What I suspect is happening – and take this for what it’s worth from an expired CCNA – is that the MTU isn’t set properly for FCoE because the system QoS policies aren’t letting the switch know that FCoE is there.  It wasn’t until I mentioned that I had changed the default MTU that the Cisco TAC level 2 engineer finally remembered this little QoS problem with the big switches.

But he sent me the article, so I’ll save you some time.

If you copy the configuration below and paste it in, your links will come up instantly and you’ll be ready to roll.  Here’s the link to the Cisco article.

 

The FCoE class-fcoe system class is not enabled in the QoS configuration.

Solution

For a Cisco Nexus 5548 switch, the FCoE class-fcoe system class is not enabled by default in the QoS configuration. Before enabling FCoE, you must include class-fcoe in each of the following policy types:

Network-QoS

Queuing

QoS

The following is an example of a service policy that needs to be configured:

F340.24.10-5548-1
class-map type qos class-fcoe
class-map type queuing class-fcoe
  match qos-group 1
class-map type queuing class-all-flood
  match qos-group 2
class-map type queuing class-ip-multicast
  match qos-group 2
class-map type network-qos class-fcoe
  match qos-group 1
class-map type network-qos class-all-flood
  match qos-group 2
class-map type network-qos class-ip-multicast
  match qos-group 2
system qos
  service-policy type qos input fcoe-default-in-policy
  service-policy type queuing input fcoe-default-in-policy
  service-policy type queuing output fcoe-default-out-policy
  service-policy type network-qos fcoe-default-nq-policy
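
Once that’s applied, a couple of quick checks will tell you whether the fix took.  This is just a sanity check of my own, not part of the Cisco article – vfc 1 is a made-up interface number, so substitute your own vfc, and double-check the exact show syntax on your NX-OS release:

! class-fcoe should now appear in the active system QoS policies
show policy-map system
! fabric logins should be stable instead of flapping
show flogi database
! the virtual Fibre Channel interface should stay up (vfc 1 is just an example)
show interface vfc 1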



The layer between the layers

Back in June I was asked to build and captain the vFabric SQLFire lab at VMworld.  Now, I’ve been a big champion of the vFabric technologies since the beginning, so I was certainly excited to be given the opportunity.  But I don’t think I fully realized just how fortunate I was at the time.  Because not only did the experience force me to go deep into the technology, but it also forced me to focus on a layer where I previously had little direct involvement in my career.  Let me explain.

 

We often like to think of the cloud in three distinct layers:  Infrastructure (IaaS), Platform (PaaS) and Software (SaaS).  But in my opinion, these categories may be a bit too broad because there are definitely layers between the layers.  For example, where does the data live?  Infrastructure guys often mentally push the data up into the Platform layer, while application guys often mentally push the data down towards the Infrastructure layer.  Obviously, regardless of which side of the infrastructure-platform fence you live on, we all “touch” the data all the time.  And of course we all recognize how vitally important the data is.  But unless you’re a DBA, the data usually ends up being a problem for someone on the other side of that fence.

 

So now I mentally place data in its own layer in between the Infrastructure and the Platform, where it in many ways (at least in my mind) serves as the “glue” which binds the two.  You might be asking yourself, “What’s his point?” or even “Who the heck cares?”  Well, I want to make the distinction for a couple reasons.

 

First, I believe the way we categorize and compartmentalize things in our minds has a dramatic effect on our focus and behavior.  Mentally filing important concepts in the wrong compartment usually leads to confusion and misunderstanding, and we can miss tremendous opportunities.  However, the correct mental placement brings clarity and focus, and can potentially open up a whole new world to us.  So now for me, instead of just thinking of data as something that lived in a file somewhere or in a database that some DBA was responsible for, a whole new world has been unlocked.  I’ll come back to this point in a bit.

 

Second, lots of Infrastructure folks are a bit concerned about their future due to the high degree of automation and integration happening in the Infrastructure layer right now.  We’re starting to hear whispers of things like “infrastructure guys need to move up the stack or they’ll be left behind.”  Scott Lowe’s recent post The End of the Infrastructure Engineer? not only articulates the concern well, but he also suggests the concern may be unwarranted.  I’m not sure I fully agree with Scott, but I’m simply trying to highlight that the concern is out there and it’s growing.  Shoot, I know I’ve certainly implied numerous times here on this blog that we all need to start moving towards the application/development space (here, here and here).  But would I actually go so far as to say that everyone needs to stop what they’re doing and go buy the latest copy of Programming for Dummies?  Probably not.

 

What I do believe, however, is that this new data layer (not that the data layer is actually new, of course) may be a way for infrastructure engineers to stay relevant as the world moves towards application-centric clouds.  It may be a way for us to “move up the stack” by taking a few steps, rather than a career-changing leap of faith.  After all, building skills in this layer isn’t a shock to the system because, like I said in the beginning, we’ve all indirectly worked with the data layer our entire careers.  Whereas application development is a completely different world for an infrastructure engineer (and vice versa), data lives much closer to home.  Infrastructure and data are like “kissing cousins” … kind of awkward, but not completely taboo either.

 

 

So, if you couldn’t tell by now, I’ve been thinking a lot about data.  Databases, data grids, data fabrics, data warehouses, data in the cloud, moving data, securing data, big data, data data data data data data.  Most of the hard problems (not all, of course) with cloud computing are with data.  It’s always the data that seems to trip us up …

 

User:     “Why can’t I VMotion my server from here to China?”
Admin:   “Well, aside from the fact that we don’t have a data center in China, your application has a terabyte of data and it would take a month to get there.”

User:  “Why can’t I use dropbox anymore?”
Admin:  “Because you put sensitive company data on it, which was compromised and now we’re being sued.  By the way, your boss is looking for you.”

User:  “The performance of my application you put on our hybrid cloud is pitiful.  You suck.”
Admin:  “You told us there would be no more than 100 users, and now there are 50k users trying to access the same database at the same time.  You’re an idiot.”

 

Granted, in all of these situations, it’s not just the data that’s the issue.  Well, actually, come to think of it, it’s not really the data at all, now is it?  It’s all the things we must do in order to deliver/secure/migrate/manage/scale the data in the cloud that become the issue.  So data is often the root of our problems, but never really the problem itself.  Instead it’s data handling in the cloud that’s the big challenge.  Yes, that’s it!  And it would appear someone really smart from JPMorgan Chase would agree with me …

 

Whew!  Validation gives me warm fuzzies.  Anyway, circling back to a point I made earlier, since I’ve been focused on the data layer a whole big crazy world has opened up for me.  Much like what vSphere did for servers, there is a ton of activity happening at the data layer to transform the way we handle data at scale in the cloud.  And again, what’s so cool about this layer is that it is all too familiar.  When studying up on how data grids can make data globally available via their own internal WAN replication techniques, or when learning about how a new breed of load balancers are enabling linear scalability for databases, or when exploring how in-memory databases can dramatically improve application performance … the concepts/language/lingo are easily understood and relatable to things I already know.

 

Now in the midst of all this learnin’ it occurred to me: everyone has been talking about the Platform as the next big thing (myself included) … but I would think the data problems need to be solved first, don’t they?  Sure, I know things won’t happen serially here; lots of smart people and cool new companies are working to solve cloudy problems at both layers in parallel.  But we all know that where there are problems, there are opportunities.  And it would appear to me that the more immediate problems that need to be solved are with data handling.  So could it be that the next big thing is actually in the layer between the layers?  And could this really be the place where developers and engineers finally meet and give each other a great big awkward hug?

 

Which brings me back to the very beginning of this blog post (the next big thing, not the awkward hug).  After digging pretty deep into SQLFire, I’ve found it’s a radically new kind of database that addresses many of the issues with data handling in the cloud.  It’s a database built differently from the ground up because it is built on amazing, disruptive data grid technology, yet presents itself to an application as a regular old database.  It can unobtrusively slide in between applications and their existing databases to solve performance problems, or it can stand on its own as a complete database solution.  It can instantly scale linearly, it can make your data extremely fault tolerant, and it can make your data available globally, all with very little effort and/or overhead.  Pretty amazing stuff.  You should check it out and let me know what you think.  And even if you don’t take a look at SQLFire, what do you think about the “layer between the layers?”  The next big thing?

 

 

PowerPath VE Versus Round Robin on VMAX – Round 2

As promised in the first post, here is round 2 of my testing with PowerPath VE and vSphere 5 NMP Round Robin on VMAX.  For this round of testing, I changed the Round Robin IO operation limit (IOPS) from the default of 1000 down to 1.

I understand that this is not recommended, and I also understand that further testing is needed with multiple hosts, multiple VMs and multiple LUNs.  As soon as I get the time, I’ll do that and report back.
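
If you want to try the same change, this is roughly the esxcli incantation on vSphere 5 – treat it as a sketch, where naa.xxx is a placeholder for your actual VMAX device ID and you’d repeat it (or script a loop) for each LUN:

# show the current Round Robin settings for a device (naa.xxx is a placeholder)
esxcli storage nmp psp roundrobin deviceconfig get --device=naa.xxx

# change the limit from the default of 1000 IOs per path down to 1
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxx --type=iops --iops=1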

For the background, and methodology, click the link above to read the first post.  For now, I’ll skip right to the scorecard.


As we can see here, setting the Round Robin IOPS limit to 1 definitely evens the score with PowerPath.  I expected to see more CPU activity than with PP, but that wasn’t the case.  I also expect to see more overhead on the array once I add more hosts, VMs and LUNs to the mix.  It might be a few weeks before I can pull that off.

Thanks for reading, and commenting.  Round 3 to come.

 

PowerPath VE Versus Round Robin on VMAX – Round 1

 

This past year, I did an exhaustive analysis of potential candidates to replace an aging HP EVA storage infrastructure.  After narrowing down the choices based on several factors, the one that had the best VMware integration, along with mainframe support, was the EMC Symmetrix VMAX.

One of the best things about choosing VMAX in my mind was PowerPath.  It can be argued whether PowerPath provides benefits, but most people I have talked to in the real world swear that PowerPath is brilliant.  But let’s face it, it HAS to be brilliant to justify the cost per socket.  Before tallying up all my sockets and asking someone to write a check, I needed to do my own due diligence.  There aren’t many comprehensive PowerPath VE vs. Round Robin papers out there, so I needed to create my own.

My assumption was that I’d see a slight performance edge with PowerPath VE, but not enough to justify the cost.  Part of this prejudice comes from hearing the other storage guys out there say there’s no need for vendor-specific SATPs/PSPs since VMware NMP is so great these days.  Here’s hoping there’s no massive check to write!  By the way, if you prefer to skip the beautiful full-color screen shots, go ahead and scroll down to the scorecard for the results.
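
If you’re curious what your own hosts are doing today, a couple of esxcli commands (run from the ESXi shell) will show the SATP and path selection policy in play – again, just a sketch:

# list each device along with its claimed SATP and the active path selection policy
esxcli storage nmp device list

# show the default PSP assigned to each SATP
esxcli storage nmp satp list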

 

Tale of the Tape

My test setup was as follows:

Test Setup for PowerPath vs. Round Robin
2 – HP DL380G6 dual socket servers
2 – HP-branded QLogic 4Gbps HBAs in each server
2 – FC connections to a Cisco 9148 and then direct to VMAX
VMware ESXi 5 loaded on both servers
All tests were run on 15K FC disks – no other activity on the array or hosts

Let’s Get It On!

(I’m sure there’s a royalty I will have to pay for saying that)

Host 1 has PowerPath VE 5.7 b173, and host 2 has Round Robin with the defaults. Each HBA has paths to 2 directors on 2 engines. I used IOmeter from a Windows 2008 VM with fairly standard testing setups.  Results are from ESXTOP captures at 2 second intervals.

The first test I ran was 4k 100% read 0% random.  All of these are with 32 outstanding IOs, unless otherwise specified.

Here is Round Robin

And PowerPath VE

The first thing I noticed was that Round Robin looks exactly how I imagined it would.  Not that that means anything.  I do realize that this test could have been faster on RR with the IOPS limit set to 1, and maybe I’ll do that in Round 2.  As for Round 1, with more than twice the number of IOPS, PowerPath is earning its license fee here for sure.

How about writes?  Here’s 4k 100% write 0% random.

Round Robin

PowerPath

Once again, PowerPath VE shows nearly 2x the IOPS and data transfer speeds.  I’m starting to see a pattern emerge. ;-)

How about larger blocks? 32K 100% read 0% random.

Round Robin

PowerPath

PowerPath is really pulling ahead here with over 2x the IOPS yet again.

32K 100% write 0% random

Round Robin

PowerPath

Wow!  PowerPath is killing it on writes!  Maybe PP has some super-secret password to unlock some extra oomph from VMAX’s cache.  ;-)

Nevertheless, it’s obvious that PP is beating up on the default Round Robin here, so let’s throw something tougher at them.

Here’s 4K 50% read 25% random with 4 outstanding IOs.

PowerPath

The gap between the contenders closes a bit with this latest workload, at only a 24% improvement for PP.  But as we all know, IOPS doesn’t tell the entire story.  What about latency?

4k 100% write 0% random

Round Robin

PowerPath

Write latency is 138% higher with Round Robin!  That’s a pretty big gap.  Is it meaningful?  Depends on your workload I guess.

Scorecard after Round 1

 

So far, PowerPath looks like a necessity for folks running EMC arrays.  I’m not sure how it would work on other arrays, but it really shines on the VMAX.  In some of my tests the IOPS with PowerPath were three times greater than with the standard Round Robin configuration!  I do believe that the gap will shrink if I drop the IOPS setting to 1, but I doubt it will shrink to anywhere near even.  We will see.

In addition to the throughput and latency testing, I also did some failover tests.  I’m going to save that for a later round.  I don’t want this post to get too long.