The Evolution to Evangelism: Same great company, exciting new role

I have accepted a new role at Tintri as the company’s Principal Technology Evangelist. This means I will be taking a more customer-centric approach to shaping and articulating Tintri’s technology and strategy, and I will be more closely aligned than ever before with Tintri’s Engineering, Product Management, and Marketing teams. I will be working with our field employees, customers, partners, media, analysts, and the IT marketplace in general to ensure our message is resonating across the spectrum. I’m incredibly excited to be taking on this role, as I’ve been ‘all in’ on Tintri since the day I saw the technology in action.

Speaking of that day … There have been two powerful light bulb moments in my career. One was at VMworld 2007 and the other was about 15 minutes into my first face-to-face conversation with Tintri back in March of 2009. The person at Tintri I met with showed me a bit of their Virtual Machine Aware storage in action. I instantly got it. No lengthy explanation was needed. The light bulb moment was very compelling. What was supposed to be an initial 60-minute meeting turned into a three-hour discussion, as I couldn’t stop asking questions and digging deeper into all aspects of the company. I left the meeting feeling like I had to be a part of what Tintri was building.

Since then, I have loved watching customers have that same light bulb moment when we dig into all the Tintri goodness. In my new role, one of my goals will be to evoke as many of these light bulb moments as possible across the entire community.

As far as interaction with the community goes, another goal of mine will be to focus on blogging and connecting with more people through social media. But I won’t be hiding behind this blog or Twitter. I will be in front of customers, prospects, and partners. I will be attending as many industry events as possible, and I’ll be in Silicon Valley regularly. So if you see me, come say hello, introduce yourself, and let’s start a conversation.

I’ll wrap up this post by saying I’m thrilled that my new role will allow me to devote a large amount of my time to interacting with the virtualization community. Actively participating in the community has helped me learn and grow faster than I could have on my own, and I have a huge level of appreciation and gratitude for that.

::This blog was originally posted at – – please leave any comments via that blog link.

Call to Action: VMworld 2014 Call for Papers is open

Today VMware opened the Call for Papers registration for VMworld 2014.  You can log in and submit an abstract here:

This year there are only four main tracks, each with subtracks, as follows:

End-User Computing
· Horizon Desktop
· Horizon Mobile
· Horizon Social
· Desktop-as-a-Service

Hybrid Cloud
· Public Cloud

Software Defined Data Center
· Cloud Infrastructure
· Management
· Storage and Availability
· Networking and Security

Partner Track
· Technology Exchange for Alliance Partner

The Abstract Submission Guidelines can be found here:

Please note these dates.  There is less than one month to submit abstracts.
- April 4: Call for Papers opens
- May 2: Call for Papers closes

Good luck to all those who submit!

Protecting the Software-Defined World

Today, EMC announced their next generation of backup and recovery solutions.  I was unable to attend the announcement, but a replay of it can be found here.  The new announcements include Data Protection Suite enhancements, a new Data Domain OS, and new deployment models for VPLEX.

One of the keys to protecting the software-defined world is being able to deliver Data Protection as a Service.  The goal is to integrate with hypervisors, segregate tenant workloads, and support public, private, and hybrid clouds.  Another item of importance is to meet SLA requirements at scale, from continuous availability to backup and replication to archive and compliance.

The enhancements to the Data Protection Suite include Avamar 7.1, NetWorker 8.2, and MozyEnterprise.  One of the exciting announcements from my perspective is a new Avamar plug-in for vCloud Director.  This is the first standard API for VMware service providers.  The Avamar plug-in will allow for embedded backup services within vCloud Director.  Another Avamar enhancement is the ability to back up all workloads to Data Domain, including applications, virtual environments, remote offices, and desktops/laptops.  There is now much closer integration between Avamar and Data Domain.


Chad Sakac wrote an excellent post here that talks about vCloud Suite and vCHS Protection realized from today’s Data Protection announcement.

As mentioned, there were some enhancements made to NetWorker.  One of my favorites is support for both block- and NAS-based snapshots, with support for VNX, Isilon, and NetApp (NAS).  Snapshots are now auto-discovered and cataloged, providing a centralized view of all snapshots.  Rollover to backup media such as tape is now supported, which helps drive down data protection cost and is part of the overall data protection continuum.


Data Domain now allows you to back up enterprise applications such as SAP, SAP HANA, SQL, and IBM DB2 using DD Boost.  Touching on the ability to extend data protection to private, public, and hybrid clouds, Data Domain can now protect a workload in any of these cloud environments.  Tenant management is included, which provides logical data isolation for administrators as well as the ability to assign roles for users and admins.

Not to be forgotten is a new virtual edition of VPLEX.  VPLEX VE runs on ESXi and leverages the mobility and availability of vSphere.  Included is a plug-in for vCenter.  Right now VPLEX VE supports iSCSI-based storage only, with VNXe being the first iSCSI-based array supported.  Additional storage arrays are on the roadmap.  The current distance limitation is 5ms of round-trip latency.


The final announcement is the introduction of MetroPoint, which will provide continuous availability utilizing multiple sites.  Basically, three sites are utilized with only a single DR copy being used for continuous availability.  CDP is utilized on both sides of MetroPoint.  It is completely heterogeneous, with support for XtremIO, VNX, VMAX, IBM, and HP being mentioned during the announcement.

Of course Chad Sakac wrote a great post about VPLEX VE as well, it can be found here.


Honored to be a vExpert for the sixth time!

This week VMware announced the vExpert class of 2014.  This year 754 outstanding members of the community received the award.  I’m honored to have been selected as a vExpert for the sixth year in a row.  The vExpert community is a fantastic mix of passionate people from all walks of life within our industry – Customer, Partner, Vendor, Analyst.  I’ve made a number of great industry friends via this group over the last six years, and have had some of the best technical conversations and debates with these same people.  The value I’ve received simply by being able to collaborate with this group has been the single biggest reward. 

I did want to take a second to congratulate my fellow Tintri teammates, Scott Sauer, Rob Girard, Trent Steele and Rob Waite, who were also part of the 2014 vExpert class.  Well done guys!

After the 2014 announcement was posted I started thinking about that first year the program was around… 

Flashback to 2009

That first year about 300 people were selected (see the 2009 announcement here).  For giggles I decided to search through my .PST archives for that original email that was sent out by John Troyer in February of 2009.


The highlight of that inaugural year for me was the first official vExpert meeting that took place at VMworld 2009.  I walked into the room looking around at all the amazing minds in one place; it was a pretty surreal feeling.  John Troyer put together a nice agenda for us, starting with then VMware CTO Stephen Herrod, who did a Q&A session (part of it was recorded and posted here by Eric Siebert).  Next up were presentations by Jason Boche and Steve Kaplan (check out the recap Steve did about his presentation here).

It has been great to see the vExpert program grow larger year after year and watch those who have been involved in the program grow in their careers as well.  Thanks again to John Troyer and Corey Romero for the continual effort in establishing, growing and maintaining this awesome community.

::This blog was originally posted at – – please leave any comments via that blog link.

A Little Hidden Gem in the Tintri vSphere Web Client Plug-in


What’s new?

Last week (3/20/2014) Tintri announced our vSphere web client plug-in that brings the familiar performance metrics found in our VMstore web user interface to the vSphere web client.  This plug-in is great for those customers that have begun to adopt and utilize the VMware vSphere web client (as opposed to the C#-based Windows client).  As a reminder, the vSphere web client is where all of the new VMware capabilities and management functionality will be integrated going forward.  As of today (3/24/2014), the Tintri vSphere web client plug-in is available in tech preview mode on our support portal.  This new plug-in is a no-cost item for our customers, so please feel free to download and install at your convenience!

A Hidden Gem

The Tintri integration is a nice win for all of our customers.  The rich data we provide back to the web client is really a game changer when it comes to performance troubleshooting, data protection (per VM) and capacity planning.  One of the coolest features that our development team included in the new plugin is the ability to apply our NFS best practices to your ESX hosts with the click of a button.

Below you can see I have selected a Tintri datastore in the web client and have right-clicked the object to make the Tintri menu option appear:


After selecting the “Apply best practices” menu option, I am now presented with a list of ESXi hosts that have access to this particular datastore.  In my lab/demo environment, this happens to be one ESXi host but in a normal production environment, this would be the entire cluster where you could apply these settings to all of the ESXi hosts at the same time.


Notice where I have the arrows pointing in the first three columns compared to the following three columns.  There are no gray italicized “match” values present in the selections.  This indicates that the ESXi host we are looking at is not running our best practices configuration.  As a side note, the Tintri vSphere Best Practices documentation can be found on our support portal.

Let’s set the correct best practices for this particular ESXi host:


Step 1, select the button “Set best practices values” at the lower left hand side of the screen.  Step 2, notice the values have now been corrected on the ESXi host in this particular example, and the italicized gray “match” value is displayed in the first three columns.  Step 3, select the “Save” button in the lower right hand corner of the menu to apply the values we have just set automatically to the above host.  The ESXi hosts will need to be rebooted in order to re-read the new values that have been set.


This little hidden gem is a nice added feature for many customers because it can quickly validate your cluster settings to ensure you are getting the best performance possible when running VMware vSphere in combination with Tintri.  VMware vSphere Host Profiles would be another great way to apply the Tintri NFS best practices automatically to your hosts/clusters.  However, many customers are not running vSphere Enterprise Plus licensing and do not have access to the Host Profiles functionality.  The Tintri plug-in now provides a simple alternative for applying our best practices to your environment.
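As a rough illustration of what the plug-in’s compliance check is doing conceptually, here is a small sketch that compares a host’s advanced settings against a best-practices baseline and reports any mismatches, much like the gray “match” column does. The setting names and values below are placeholders for illustration only; the authoritative list lives in the Tintri vSphere Best Practices documentation on our support portal.

```python
# Illustrative sketch: flag host advanced settings that differ from a
# best-practices baseline. Names/values are placeholders, NOT Tintri's
# official recommendations.
BEST_PRACTICES = {
    "Net.TcpipHeapSize": 32,
    "Net.TcpipHeapMax": 512,
    "NFS.MaxVolumes": 256,
}

def find_mismatches(host_settings):
    """Return {setting: (current, recommended)} for non-compliant values."""
    return {
        key: (host_settings.get(key), recommended)
        for key, recommended in BEST_PRACTICES.items()
        if host_settings.get(key) != recommended
    }

# A host still at the default NFS.MaxVolumes would show one mismatch:
host = {"Net.TcpipHeapSize": 32, "Net.TcpipHeapMax": 512, "NFS.MaxVolumes": 8}
print(find_mismatches(host))  # {'NFS.MaxVolumes': (8, 256)}
```

The plug-in does the equivalent check (and the remediation) for you with a single click, which is exactly why it is handy for shops without Host Profiles.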


Data Protection for Virtual and Physical Workloads

First, for those who are not familiar with vDP (vSphere Data Protection), it is a backup and recovery tool designed for VMware environments.  It utilizes EMC Avamar technology to provide superior de-duplication for all virtual machines backed up by vDP.  To provide further protection, vDP allows you to replicate backup data between vDP virtual appliances, so you can protect your backup data by replicating it offsite to another vDP appliance.  There are two versions of vDP: the first being simply vDP and the other being vDP Advanced.  I will talk primarily about the Advanced version in this article.


vDP does not require an agent in the guest OS to perform backup and recovery operations.  It utilizes VMware Tools to quiesce the OS for OS-consistent backups.  For those applications that require application-consistent backups, such as Exchange, SQL and SharePoint, vDP provides an agent that is installed in the guest OS to quiesce these applications.  In the past, we have only supported the backup of virtual Exchange, SQL and SharePoint environments.  Since we’re utilizing an agent to back up Exchange, SQL and SharePoint, there is no reason why we couldn’t also back up these same applications running on a physical server.  You can now back up all your VMware virtual machines as well as Exchange, SQL and SharePoint, even if those workloads are running on a physical server.

Backup Verification

It is always a good idea to verify that your backups are working correctly.  You want to have confidence that data can be restored successfully if the need ever arises.  vDP provides automated backup verification: a backup verification job can be created that will automatically restore data into a sandbox environment after a backup.  From a restore perspective, vDP Advanced gives you the ability to restore the entire VM, an application, or a particular file.  An end user can restore an individual file using nothing more than a web browser.  Finally, you may want to back up vCenter Server with vDP but be concerned with having to restore vCenter Server… without vCenter Server.  vDP allows you to restore directly to a host without the need for vCenter Server.


I mentioned replication earlier; however, it goes beyond simply replicating from vDP appliance to vDP appliance.  Since vDP utilizes EMC Avamar technology, you can replicate from a vDP appliance to a physical EMC Avamar grid.  Think of utilizing a service provider to replicate your backup data to a provider using EMC Avamar.  From a topology standpoint, vDP supports one-to-one, one-to-many, and many-to-one.  This could be useful if you have remote offices that need backup and recovery services along with the need to replicate the backup data to a single site used for disaster recovery.  And since we’re using EMC Avamar, vDP uses changed-block tracking technology.  Therefore, only changes to the VM are backed up daily (after the initial backup), and only the changes are replicated to a secondary site, helping save on the bandwidth needed between locations.  Finally, vDP also provides you with the ability to utilize Data Domain as a backup data target.  Why is this important?  First, you can point multiple vDP appliances at the Data Domain, and de-duplication will now take place across all vDP appliances instead of being limited to the data backed up by a single appliance.  Throw in Data Domain Boost and you can reduce the amount of data transferred over the network significantly, as only unique bits are sent across the network to the Data Domain appliance.


From a scalability perspective, you can deploy up to 10 vDP appliances per vCenter Server.  The de-duplicated backup capacity of a single vDP Advanced appliance is 8TB and the maximum number of virtual machines that can be backed up to a single vDP Advanced appliance is 400 VMs.  Most customers will exhaust the backup capacity before reaching the maximum number of VMs.
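Those limits make initial capacity planning simple arithmetic. The sketch below is my own back-of-the-envelope math based on the numbers above, not an official sizing tool:

```python
import math

# Limits stated above: 8 TB deduplicated capacity and 400 VMs per
# vDP Advanced appliance, at most 10 appliances per vCenter Server.
APPLIANCE_CAPACITY_TB = 8
APPLIANCE_VM_LIMIT = 400
MAX_APPLIANCES_PER_VCENTER = 10

def appliances_needed(total_backup_tb, vm_count):
    """Appliances required, constrained by capacity AND VM count."""
    by_capacity = math.ceil(total_backup_tb / APPLIANCE_CAPACITY_TB)
    by_vms = math.ceil(vm_count / APPLIANCE_VM_LIMIT)
    needed = max(by_capacity, by_vms)
    if needed > MAX_APPLIANCES_PER_VCENTER:
        raise ValueError("exceeds the 10-appliance-per-vCenter limit")
    return needed

# 30 TB of deduplicated backup data for 500 VMs: capacity, not VM count,
# is the constraint here.
print(appliances_needed(30, 500))  # 4
```

As noted above, in practice most environments will hit the capacity ceiling well before the VM-count ceiling.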

I hope you enjoyed my first article on Virtual Insanity.  Feel free to leave a comment if you have any questions.  Below are a few useful links including a helpful answer to the question, what about backup to tape?

vSphere Data Protection Backup to Tape

vSphere Data Protection Advanced Product Page

Looking for Radically Simple Storage?

Hopefully by now you have heard of Virtual SAN, part of VMware’s Software Defined Datacenter strategy. Currently over 10,000 customers have registered for the beta, and it has become a frequent subject in many conversations. So where does it fit in your environment? Do you have a need for lower-cost Test/Dev, VDI, or remote office environments? Those are great places to get started. Even though it has people’s interest, I hear all the time that it is only a version 1, but did you know it has been in development for a number of years? Download the beta and give it a try.

If you have been waiting anxiously because you liked what you have seen, sign up for the upcoming online event:

Virtual SAN Event March 6

Date: Thursday, March 6, 2014
Time: 10:00 a.m. – 11:00 a.m. PST


VSAN is fully integrated with vSphere, and provisioning storage takes literally two clicks (try the hands-on lab to see how easy it is). Of course there are some requirements that you need to meet first, so take a look at the requirements below:

vSphere Requirements

vCenter Server

VSAN requires at a minimum that the VMware vCenter Server™ version is 5.5. Both the Microsoft Windows version of vCenter Server and the VMware vCenter Server Appliance™ can manage VSAN. VSAN is configured and monitored via the VMware vSphere Web Client, and this also requires version 5.5.

vSphere

VSAN requires at least three vSphere hosts (each with local storage) to form a supported VSAN cluster. This enables the cluster to meet the minimum availability requirement of tolerating at least one host, disk, or network failure. The vSphere hosts require at a minimum vSphere version 5.5.

Storage Requirements

Disk Controllers

Each vSphere host participating in the VSAN cluster requires a disk controller. This can be a SAS/SATA host bus adapter (HBA) or a RAID controller. However, the RAID controller must function in what is commonly referred to as pass-through mode or HBA mode. In other words, it must be able to pass up the underlying hard disk drives (HDDs) and solid-state drives (SSDs) as individual disk drives without a layer of RAID sitting on top. This is necessary because VSAN will manage any RAID configuration when policy attributes such as availability and performance for virtual machines are defined. The VSAN Hardware Compatibility List (HCL) will call out the controllers that have passed the testing phase.

Each vSphere host in the cluster that contributes its local storage to VSAN must have at least one HDD and at least one SSD.

Hard Disk Drives

Each vSphere host must have at least one HDD when participating in the VSAN cluster. HDDs make up the storage capacity of the VSAN datastore. Additional HDDs increase capacity but might also improve virtual machine performance. This is because virtual machine storage objects might be striped across multiple spindles.

This is covered in far greater detail in VMware’s VSAN documentation, where VM Storage Policies are discussed.

Solid-State Disks

Each vSphere host must have at least one SSD when participating in the VSAN cluster. The SSD provides both a write buffer and a read cache. The more SSD capacity the host has, the greater the performance, because more I/O can be cached.

NOTE: The SSDs do not contribute to the overall size of the distributed VSAN datastore.
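Putting the HDD and SSD rules together, a rough capacity estimate looks like the sketch below. This is my own back-of-the-envelope math: HDDs form the datastore, SSDs are cache only, and with NumberOfFailuresToTolerate = n each object is stored as n + 1 replicas. It ignores metadata and formatting overhead.

```python
# Back-of-the-envelope VSAN capacity estimate (overheads ignored).
def vsan_capacity_tb(hosts, hdds_per_host, hdd_size_tb, failures_to_tolerate=1):
    """Return (raw_tb, usable_tb). SSDs are intentionally excluded:
    they provide write buffer / read cache, not datastore capacity."""
    raw = hosts * hdds_per_host * hdd_size_tb
    # n failures tolerated -> n + 1 replicas of each object
    usable = raw / (failures_to_tolerate + 1)
    return raw, usable

# Minimal 3-host cluster, four 1 TB HDDs per host, default FTT=1:
raw, usable = vsan_capacity_tb(hosts=3, hdds_per_host=4, hdd_size_tb=1.0)
print(raw, usable)  # 12 TB raw, roughly 6 TB usable
```

For real deployments, the “VMware Virtual SAN Design & Sizing Guide” mentioned below is the place to do this properly.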

Network Requirements

Network Interface Cards

Each vSphere host must have at least one network interface card (NIC). The NIC must be 1Gb capable. However, as a best practice, VMware recommends 10Gb NICs. For redundancy, a team of NICs can be configured on a per-host basis. VMware considers this a best practice but does not deem it necessary when building a fully functional VSAN cluster.

Supported Virtual Switch Types

VSAN is supported on both the VMware vSphere Distributed Switch™ (VDS) and the vSphere Standard Switch (VSS). No other virtual switch types are supported in the initial release.

VMkernel Network

On each vSphere host, a VMkernel port for VSAN communication must be created. The VMkernel port is labeled Virtual SAN. This port is used for intercluster node communication and also for reads and writes when one of the vSphere hosts in the cluster owns a particular virtual machine, but the actual data blocks making up the virtual machine files are located on a different vSphere host in the cluster. In this case, I/O must traverse the network configured between the hosts in the cluster.

Before you decide to go out and build your own, take a look at the “VMware Virtual SAN Design & Sizing Guide” to get an idea of the size and number of components you need.

Additional resources:

VSAN Product page:

VMware Storage Blog:

VSAN Product walk thru:

VMworld 2013 Session STO4798 Software Defined Storage:

HOL-SDC-1308 Virtual SAN (VSAN) and Virtual Storage Solutions

Cormac Hogan has a great blog with a number of resources on VSAN:

Duncan Epping also has some great info on his blog:


Lastly, voting is open for Top vBlog 2014, so get signed up and vote.

Up Close and Personal With IBM PureApplication PaaS

The converged infrastructure value proposition, by now, is pretty evident to everyone in the industry. Whether that proposition can be realized is highly dependent on your particular organization and specific use case.

Over the past several months, I have had an opportunity to be involved with a very high-profile pilot, with immovable, over-the-top deadlines.  In addition, the security requirements were downright oppressive, and necessitated a completely isolated, separate environment. Multi-tenancy was not an option.

With all this in mind, a pre-built, converged infrastructure package became the obvious choice. Since the solution would be built upon a suite of IBM software, IBM pitched their new PureApplication system. My first reaction was to look at it as an obvious IBM competitor to the venerable vBlock. But I quickly dismissed that as I learned more.

The PureApplication platform is quite a bit more than a vBlock competitor. It leverages IBM’s services expertise to provide a giant catalog of pre-configured multi-tiered applications that have been essentially captured and turned into what IBM calls a “pattern”. The simplest way I can think of to describe a pattern is as the application blueprint that Aaron Sweemer was talking about a few months back. The pattern consists of all tiers of an application, which are deployed and configured simultaneously, and on-demand.

As an example, if one needs a message broker app, there’s a pattern for it. After it is deployed (usually within 20-30 mins.), what’s sitting there is a DataPower appliance, web services, message broker, and database. It’s all configured, and ready to run. Once you load up your specific BAR files, and configure the specifics of how inbound connections and messages will be handled, you can patternize all that with script packages, so that next time you deploy, you’re ready to process messages in 20 minutes.  If you want to create your own patterns, there’s a pretty simple drag and drop interface for doing so.


I know what you’re thinking. . . There are plenty of other ways to capture images, vApps, etc. to make application deployment fast. But what PureApp brings to the table is the (and I hate using this phrase) best-practices from IBM’s years of consulting and building these solutions for thousands of customers. There’s no ground-up installation of each tier, with the tedious hours of configuration, and the cost associated with those hours. That’s what you are paying for when you buy PureApp.

Don’t have anyone in house with years of experience deploying SugarCRM, Business Intelligence, Message Broker, SAP, or BPM from the ground up? No problem. There are patterns for all of them. There are hundreds of patterns so far, and many more are in the pipeline from a growing list of global partners.

The PureApplication platform uses IBM blades, IBM switching, and IBM V7000 storage. The hypervisor is VMware, and they even run vCenter. The problem is, you can’t access vCenter or install any add-on features. They’ve written their own algorithms for HA and some of the other things that you’d expect vCenter to handle. The reasoning for this, ostensibly, is so they can support other hypervisors in the future.

For someone accustomed to running VMware and vCenter, it can be quite difficult to get your head around having NO access to the hosts, or vCenter to do any troubleshooting, monitoring, or configuration. But the IBM answer is, this is supposed to be a cloud in a box, and the underlying infrastructure is irrelevant. Still, going from a provider mentality, to an infrastructure consumer one, is a difficult transition, and one that I am still struggling with personally.

The way licensing is handled on this system is, you can use all the licenses for Message Broker, DB2, Red Hat, and the other IBM software pieces that you can possibly consume with the box. It’s a smart way to implement licensing.  You’re never going to be able to run more licenses than you “pay for” with the finite resources included with each system. It’s extremely convenient for the end user, as there is no need to keep up with licensing for the patternized software.

Access to the PureApp platform is via the PureApp console or CLI. It’s a good interface, but it’s also definitely a 1.x interface. There is very extensive scripting support for adding to patterns and individual virtual machines. There are also multi-tenancy capabilities by creating multiple “cloud groups” to carve up resources.  There are things that need to be improved, like refresh and access to more in-depth monitoring of the system.  Having said that, even in the past six months, the improvements made have been quite significant.  IBM is obviously throwing incredible amounts of resources at this platform. Deploying patterns is quite easy, and there is an IBM Image Capture pattern that will hook into existing ESXi hosts to pull off VMs to use in Pure and prepare them for patternization.


Having used the platform for a while now, I like it more every day. A couple weeks ago, we were able to press a single button and upgrade firmware on the switches, blades, ESXi, and the V7000 storage with no input from us. My biggest complaint so far is that I have no access to vCenter to install things like vShield, backup software, monitoring software, etc. But again, it’s just getting used to a new paradigm that’s hard for me.  IBM does have a monitoring pattern that deploys Tivoli, which helps with monitoring, but it’s one more thing to learn and administer. That said, I do understand why they don’t want people looking into the guts on a true PaaS.

Overall, I can say that I am impressed with the amount of work that has gone into building the PureApplication platform, and am looking forward to the features they have in the pipeline. The support has been great so far as well, but I do hope the support organization can keep up with the exponential sales growth. I have a feeling there will be plenty more growth in 2014.

Tintri VMstore upgrade process made simple

Customers asked for it and we delivered it.  You can now upgrade your Tintri datastore via the management UI.  I created a video to show how easy this upgrade process is for our customers.  I recall being a customer not that long ago and having to engage my storage vendor to have a technical resource dispatched to perform this same task because the process was too “complicated for customers”.  We believe at Tintri that storage should be easy to install, configure, and manage thus our “Zero Management Storage” messaging that you have probably noticed.


Tintri Syslog Configuration with VMware Log Insight


Tintri T600 series

Some of you might have missed the recent big announcement from Tintri: we launched a new product line to expand our rock-solid platform.  The T600 series (pictured above) was launched shortly after VMworld this year.  Our customers love Tintri and how we help them manage their virtual environments, and they are screaming for more.  Our flash-first file system gives them the feel of an all-flash array but at a fraction of the cost.  This platform not only brings new hardware models to our customers so they can be very prescriptive about their storage requirements, but it also brings a few exciting new software features to the table as well.


Tintri OS 2.1

The new Tintri OS (where much of our intellectual property exists) continues to get better and better offering more features that our customers have been asking for.  The 2.1 version of code now offers several new features:

  • Snapshot enhancements
  • SNMP support (published MIB)
  • LACP support for advanced network configuration
  • Software upgrades from the UI
  • Syslog Integration

I thought I would dive into the syslog feature since I just had a customer ask about configuring this the other day.


Setting up Syslog configuration in Tintri

If you are an existing Tintri customer, you will notice that the menu list under settings now looks a bit different.  Notice the “More” tab in the image below on the left-hand side.  This is where some of the new features, such as LACP and upgrading from the UI, now exist.




To configure the syslog integration, select the “Alerts” link about halfway down the menu options.  You will be presented with a screen that should look similar to the image below.  Most likely your email alerting will already be configured if you are an existing T540 customer who upgraded to 2.1.x.




The syslog configuration setting is the new field titled “Remote Server”.  This is where you will enter your syslog DNS hostname or IP address so we can forward messages to your instance of VMware Log Insight.  Once you enter the correct values for your environment, select the option “Test forwarding” to ensure that communications are working correctly between the Tintri datastore and Log Insight.
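For the curious, a syslog test message is conceptually just a small datagram sent to the remote server, typically UDP port 514. The sketch below is my own illustration of that idea, not Tintri’s actual implementation; the message format, tag, and hostname are all assumptions.

```python
import socket

def format_syslog(message, facility=1, severity=6):
    """Build an RFC 3164-style syslog payload. The 'tintri-vmstore'
    tag is a hypothetical example, not the real on-the-wire format."""
    pri = facility * 8 + severity  # e.g. user(1)*8 + info(6) = 14
    return f"<{pri}>tintri-vmstore: {message}".encode()

def send_test(host, port=514):
    """Fire a one-off test message at a syslog listener over UDP."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(format_syslog("test"), (host, port))

# send_test("loginsight.example.com")  # placeholder hostname: use your own
print(format_syslog("test"))  # b'<14>tintri-vmstore: test'
```

Because syslog over UDP is fire-and-forget, the “Test forwarding” button plus a search on the Log Insight side (covered next) is how you actually confirm delivery.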


Validate Log Insight is getting data

VMware Log Insight is designed to accept incoming syslog messages by default, so no configuration is needed to enable syslog support.  It’s time to check the Log Insight server for our test data!  Log in to your own instance of Log Insight and select the “Interactive Analytics” option at the top of the screen.  In the search field, enter the value “test” to search for the test message we recently sent from the Tintri datastore.




You can see in the example above that we are getting the test messages from the datastores that I have configured for syslog monitoring.  You can now begin to create saved queries for events that you are interested in, such as cloning, system health metrics, and hardware-related issues.

Currently there is no Tintri content pack listed in the Solution Exchange, but this is something that I am planning on changing in the not-so-distant future!