Yesterday I had a marathon five-hour executive briefing with EMC, and I learned a lot about the VNX, and EMC’s strategy going forward. This was good information, and I want to give my take on what I learned, as well as maybe open up some discussion in the comments.
First off, I have to say one thing about EMC. Regardless of what you think about their storage technology and product line-up, their sales organization is absolutely fantastic. I have yet to experience pre-sales as good as EMC's. They are polished, professional, and knowledgeable about their products. They brought in a wide array of talent, including a vSpecialist, and there was not a single question they could not answer. This is impressive, considering some of our questions were a bit... creative.
Aside from all this, I am most impressed by the sheer amount of work they put into analyzing my current environment and figuring out exactly what my needs are. No one else has come close in my recent experience. That alone goes a long way toward overcoming what could be perceived as product weaknesses versus the competition.
In my opinion, the VNX launch is an attempt at addressing the lessons learned by EMC over the past several years. I believe we can all agree that EMC was caught off guard by the ability of other vendors to integrate more tightly with VMware than VMware’s majority owner. Their recent moves have shown their willingness to correct that, and a strong desire to leapfrog the competition.
Considering the combination of VNX, recent tighter integration with VMware, and the intense growth of EMC's battalion of vSpecialists, one can only presume that they are willing to use brute force to become the VMware platform of choice. I find it remarkable that a company with so many divisions and products can move with the degree of agility they have shown recently.
VNX introduces some interesting changes besides the concept of a truly unified platform. Fibre Channel is out, and SAS is in. This makes sense to me, considering the current pricing trends of SSD. It no longer makes as much sense to use FC as a tier of storage. When SSD was 30x more expensive, not many considered it a viable alternative to FC, despite the huge performance advantage. These days it is maybe 3-5x more expensive, and that is easy to justify with the performance. The price should continue to fall as more fab capacity comes online, so it won't be long before FC and SSD reach price parity. This was the right call, in my opinion.
A new version of FAST VP is another significant change introduced with VNX. When you lose an entire category of disks (FC) and are left with only SAS and SSD, you eliminate the old FAST problem of hot spots landing on your slower disks and cool spots sitting on your faster ones. With only two tiers, data should no longer find itself on the wrong tier.
There is no way I can cover all the features and software changes introduced with VNX, and that is not really my intent. I do want to flesh out one area where I see a potential design flaw. Of course, this is all based on my opinion, and I am not in the same league as the EMC, NetApp, 3PAR, or Xiotech storage engiNerds, so take it for what it is worth.
The whole idea of FAST Cache bothers me. EMC is using SSDs for caching. While there are advantages to FAST Cache over a standard pool of SSDs, they do not seem to justify this design decision. FAST Cache works on 64 KB chunks, which is far more granular than normal FAST operation on an SSD pool, which moves data in 1 GB chunks. I can see the advantage of tiering in smaller increments. What I cannot see is why EMC did not go with a PCI-based cache option.
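To illustrate why that granularity difference matters, here is a rough back-of-the-envelope sketch in plain Python. The numbers and the worst-case assumption (hot blocks scattered so no two share a chunk) are mine, not EMC's actual algorithm: promoting one hot 8 KB block drags the rest of its enclosing chunk into flash with it, so larger chunks burn far more flash capacity to capture the same hot data.

```python
# Illustrative only: flash capacity consumed when promoting scattered hot
# blocks, under two promotion granularities. Not EMC's actual algorithm.

KB = 1024
GB = 1024 * 1024 * KB

def flash_consumed(hot_blocks, chunk_size):
    """Each hot block forces promotion of its entire enclosing chunk.
    Worst case assumed: hot blocks scattered so no two share a chunk."""
    return hot_blocks * chunk_size

hot_blocks = 10_000                                  # scattered 8 KB hot spots
fast_cache = flash_consumed(hot_blocks, 64 * KB)     # FAST Cache: 64 KB chunks
fast_vp    = flash_consumed(hot_blocks, 1 * GB)      # FAST VP:    1 GB chunks

print(f"64 KB chunks: {fast_cache / GB:.2f} GB of flash consumed")
print(f"1 GB chunks:  {fast_vp / GB:.0f} GB of flash consumed")
```

Under these assumptions, 64 KB chunks capture the same 10,000 hot spots in well under a gigabyte of flash, while 1 GB chunks would need terabytes. That is the case for fine-grained caching; it just does not explain the SAS interface.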
In my opinion, flash is so fast that wrapping it in a traditional disk interface makes little sense if performance is the goal. I should disclose at this point that we run a few Texas Memory Systems RamSan devices here, so I know how fast PCI-based flash can be. Maybe that skews my opinion a bit, but the EMC engineers who were here yesterday were touting the VNX's use of the much faster PCIe 2.0 bus. I agree. So why not use it for cache?
Maybe I am missing something, and maybe in the real world it won't matter. But if EMC ever bothers to run an SPC benchmark, I suspect we would see a bottleneck caused by crippling SSDs with SAS interfaces. That said, no one is going to argue that replacing a hot-swappable SSD is not 100x easier than shutting an array down to swap out a bad PCI flash card. I am just not sure the performance penalty is worth the extra convenience, for me.
If someone who knows a lot more than I do can help me understand why this decision was made, feel free to comment below. For now, I will hold out hope that a VNX-p shows up at some point with some faster cache.
In the coming weeks, I will have similar briefings with a few other vendors as we try to narrow down our choices for a new storage platform to replace our aging HP EVAs. If I come across anything else interesting, I will certainly pass it along to our readers here.