Power Up Your DR with Tintri and SRM!

Check out the video on VMware SRM Integration with Tintri ReplicateVM!

Introduction

Disaster Recovery is something that’s been near and dear to my heart all the way back to my years on the end-user side of the fence. The annual or semi-annual Disaster Recovery exercise is typically a long, painful process with lots of lost sleep! Dating myself a bit, but BC/DR was even more of a challenge when we had to recover applications running on physical machines. System-state restores to dissimilar hardware were never fun!

Virtualization has changed the industry in many ways, but one challenge IT departments still face is effectively protecting the business. What happens when we have that “smoking crater” moment? How do we know we’re protected? CIOs around the world are asking these questions, so preparation is key.

Tintri ReplicateVM

For years, Tintri customers have had the ability to efficiently replicate on-premises VMs (yes, individual VMs – not LUNs or volumes) off premises. VMs with differing Recovery Point Objectives can be managed individually.


Tintri ReplicateVM with vCenter Site Recovery Manager (SRM)

Tintri OS 3.1 further integrates the ReplicateVM engine with vCenter Site Recovery Manager (SRM) to provide automated orchestration and non-disruptive testing of centralized recovery plans for every virtualized application! Let’s take a deeper look.

First off, if you’re looking to implement Tintri VMstore with vCenter SRM, you’ll want to check out the Best Practices Guide or watch the SRM video above. These resources provide a step-by-step, soup-to-nuts explanation of exactly what’s required and how to get everything up and running without issue. Second, you’ll need to make sure you’ve got an appropriately configured infrastructure.

The requirements are rather basic:

  • vCenter and SRM at the Primary and Recovery sites, with independent compute backed by Tintri VMstore datastores.
  • Active Directory authentication at each location. Be sure to follow Microsoft best practices for replicating AD: use its built-in replication, not host-based or array-based replication.
  • A database for SRM. In addition to the embedded PostgreSQL database, SRM 5.8 supports MS SQL Server 2005 – 2014, and a full SQL Server database is the recommended route.


Now that we’re all set up, grab the Tintri Storage Replication Adapter (SRA) from the Tintri Support Portal. After you download it, install the SRA on the vCenter Server at both the Protected and Recovery sides. It’s a straightforward install: next, next… finish.


Next, go through the normal SRM steps of creating mappings and setting up Tintri replication.

Mount the Tintri VMstore to your ESX hosts as normal (for example, 192.168.1.1/Tintri). However, you need to create a sub-mount for each group of VMs you want to protect.

For instance, for my Gold RPO tier (192.168.1.1/Tintri/Gold_RPO): you can create this by browsing the datastore within the vSphere console, creating a /Gold_RPO folder, and then mounting 192.168.1.1/Tintri/Gold_RPO to your ESX hosts. Then Storage vMotion the Gold RPO VMs to that datastore. If you have a lot of hosts, the mount step can be scripted, as in the sketch below.
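
Here’s a minimal pyVmomi sketch of that mount step; the vCenter name, host name, and credentials are placeholders, and certificate handling is omitted for brevity. Loop it over your hosts to publish one datastore per RPO tier.

    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    # Hypothetical vCenter and host names; substitute your own.
    si = SmartConnect(host='vcenter.example.com',
                      user='administrator@vsphere.local', pwd='secret')
    esx = si.content.searchIndex.FindByDnsName(dnsName='esx01.example.com',
                                               vmSearch=False)

    # Mount the Gold RPO sub-mount as its own NFS datastore on this host.
    spec = vim.host.NasVolume.Specification(
        remoteHost='192.168.1.1',
        remotePath='/Tintri/Gold_RPO',
        localPath='Gold_RPO',          # the datastore name vSphere will show
        accessMode='readWrite')
    esx.configManager.datastoreSystem.CreateNasDatastore(spec)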


On this datastore I now have all the VMs that need to be recovered with the Gold Recovery Point Objective. Now jump over to the VMstore UI, navigate to the Virtual Machines tab, and click Service Groups. The easiest way to think of a Service Group: a Service Group in Tintri = a Protection Group in SRM.


For a more granular RPO, say 15 minutes: click Custom, then Hourly, then click in the Minutes field and choose the required RPO. It’s also important to note the ability to take either crash-consistent or VM-consistent snapshots.

VM-consistent snapshots leverage the VMware Tools present inside the guest OS to quiesce applications like SharePoint, SQL Server, Exchange, DB2, Active Directory, etc.


To wrap up the setup, go through and create your Array Pair (choosing the Tintri SRA), Protection Group, and Recovery Plan. All of these steps are illustrated in great depth in the video I created and in the Best Practices Guide.

One of the great parts of Tintri ReplicateVM + SRM is the ease of use and efficiency. ReplicateVM has always been extremely WAN friendly! Like many things on a Tintri VMstore, replication is based on VMs and snapshots. When replicating a VM snapshot, before sending a single block of data over the WAN, we send block fingerprints to determine which blocks the destination is missing. Once identified, we send only those missing blocks, compressed and deduplicated, to ensure the latency-sensitive WAN is never taxed with unneeded blocks of data!
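
Conceptually, the exchange looks something like the toy sketch below. This is an illustration of the general fingerprint-then-ship idea, not Tintri’s actual wire protocol:

    import hashlib, zlib

    BLOCK = 8192  # work in 8KB blocks

    def fingerprints(data):
        """Map each block's fingerprint to the block itself."""
        blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
        return {hashlib.sha1(b).hexdigest(): b for b in blocks}

    def replicate(snapshot_bytes, destination_has):
        """Return only the blocks the destination is missing, compressed."""
        local = fingerprints(snapshot_bytes)
        missing = set(local) - destination_has   # the fingerprint exchange
        return {fp: zlib.compress(local[fp]) for fp in missing}

    # 100 identical blocks deduplicate to a single compressed block on the wire.
    snap = b'\x00' * BLOCK * 100
    print(len(replicate(snap, set())))           # prints 1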

Home Stretch!

With everything set up, it’s pretty easy to go through and perform a test recovery of the protected VMs. Within the SRM plugin, drill into the Recovery Plan that’s been created and mapped to the Protection Group. It’s worth reiterating that the Protection Groups in SRM correlate directly to the Service Groups in Tintri.


Right-click on the Recovery Plan and choose Test. One of the options you’re asked about is “Do you want to replicate recent changes to the Recovery Site?” This allows Tintri ReplicateVM to copy over the blocks of data that have changed since the last sync cycle. After the test, you’ll want to right-click and run the Cleanup task. Since a test is not an actual failover, the Protected side still retains the authoritative copies of the VMs, and Cleanup allows SRM to get everything back to the way it needs to be for normal replication to continue. Never perform a full Recovery unless it’s a true failure situation; if you’re just looking to sanity-check yourself, use Test. Recovery moves all authoritative rights over to the Recovery side, and you’ll have to re-replicate everything back to the Primary.

Finish Line!

With that, I’ll leave you with the three key pillars and differentiators:

  • Simplicity of Configuration
  • WAN Efficiency
  • Visibility at a Per-VM level

So what’s the takeaway? Tintri continues to deliver disruptive technology focused on the largest and fastest-growing area of the modern data center: virtualization!

Tintri – Get Thin for the Win!


Introduction

At Tintri I talk with a lot of customers and prospects about their virtualization environments and how they relate to their storage configurations. Virtual machine provisioning discussions come up quite a bit, so I thought I would write about some new features Tintri just introduced.

The way we deploy virtual machines on the storage side of the house has certainly changed over the years. Thin provisioned, eager-zeroed thick, lazy-zeroed thick: there has always been a long menu of choices when deciding how to deploy the virtual machines that support your applications. This has also created some confusion around “which choice is right for me when I deploy my virtual machine?” I have also noticed recently that many customers thought they had deployed thin-provisioned VMDKs but were in fact running thick because default values were selected. A quick audit, like the sketch below, makes it easy to check.
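
Here’s a rough pyVmomi sketch of such an audit (the vCenter name and credentials are placeholders). It classifies every virtual disk as thin, lazy-zeroed thick, or eager-zeroed thick based on its backing:

    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host='vcenter.example.com',
                      user='administrator@vsphere.local', pwd='secret')
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)

    for vm in view.view:
        if not vm.config:
            continue
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualDisk):
                back = dev.backing
                if getattr(back, 'thinProvisioned', False):
                    kind = 'thin'
                elif getattr(back, 'eagerlyScrub', False):
                    kind = 'eager-zeroed thick'
                else:
                    kind = 'lazy-zeroed thick'
                print(vm.name, dev.deviceInfo.label, kind)
    view.Destroy()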

Thin Provisioning

First, let me start off by saying Tintri is “pro virtual machine thin provisioning.” You might be saying: wait a second, you’re NFS on vSphere, you’re thin provisioned by default! This is true, but with our VAAI implementation we can honor any of the other provisioning methods from VMware as well. Let’s say you do a Storage vMotion and move an inefficient thick-provisioned virtual machine from an existing block storage environment over to a Tintri VMstore. If VAAI is installed, we observe the specifications of the existing format, retain that .vmdk format, and punch zeros (unless you decide to change the option when migrating).

Let me note that there is no need to use the older “thick” provisioning methods when deploying workloads on Tintri. Our VMstore operating system is designed to understand the workload of every virtual machine down to an 8KB block, and QoS is built into the datastore to adapt as your VMs change from a performance perspective.

 


It’s all about Efficiency

With our new T800 platform, we have raised the bar on the value you get from your Tintri VMstore investment. We have enabled compression at rest on all of the new models to help drive your storage costs down, allowing your organization to run as efficiently as possible from a capacity perspective. Our current shipping version of Tintri OS (3.1.2.1) also adds some great capacity management features, which I highlight below.

Lab Environment

For illustration, I deployed a few VMs in the lab. They are empty, with no operating system. In the screenshot below you can see that some are eager-zeroed thick provisioned, one is lazy-zeroed, and one is thin:


Here is the overall capacity of the VMstore prior to making changes to the virtual machine formatting:


In the example above you can see our compression ratio numbers are a little low, so let’s examine why. If a virtual machine is thick provisioned, then per the VAAI specifications you must “hard back” the zeros, that is, reserve the space inside the virtual machine. If the .vmdk file were thin provisioned instead, compression would allow us to reclaim the white space. Making that change has typically required a Storage vMotion to run the conversion. Not any more!

Convert to Thin!

Tintri has built in some great ways to examine and optimize your virtual infrastructure. In the example below you can see the “Provisioned Type” field on the far right, which I have exposed in our user interface to identify which VMs are thick provisioned.


Let’s go ahead, right-click, and convert these VMs to thin disks within the Tintri user interface!


Post conversion

This conversion process is instantaneous, and you can now see in the Tintri user interface that we have converted our inefficient thick-provisioned VMs to thin without having to perform a Storage vMotion.


You can see below that the vSphere Web Client now reflects accurate capacity savings for each virtual machine:


Below you can see that the Tintri VMstore’s overall compression ratio has gone from 1.7x to 2.7x since we converted the virtual machines to thin-provisioned vdisks!


Set it and forget it

Tintri has taken this one step further to help our customers (and thank you, customers, for your continuous feedback; this is a result of it!). We now have a global option within the datastore settings to keep every virtual machine that gets migrated to Tintri thin provisioned, regardless of its source format! No more going back to reclaim space on VMs that were accidentally migrated over thick.


I hope you found this write-up useful. Let me know if you have any questions!

-Scott

Peak Virtualization and Time Dilation

In the movie Interstellar, one of the central themes is gravity and its effect on the passage of time. This time dilation is something that has long fascinated me, and it was great to see it fleshed out on the big screen by a master like Christopher Nolan. Without giving any spoilers, the basic scientific principle is that relative velocity and gravity both cause time to elapse at a slower relative pace. So an astronaut could travel to Mars and come back having aged 18 months, while someone on Earth had aged 19 months. Of course this is a dramatic oversimplification, but my goal is not to explain relativity here.

What does any of this have to do with virtualization, technology, or anything else?

One of my favorite tech podcasts these days is In Tech We Trust. This group of guys has great chemistry and a broad array of technical experience. If you haven’t checked it out, I would encourage you to give it a listen.

The past couple of episodes got me thinking about the relativity between those who are technology pioneers and the rest of the world. There were mentions of “peak virtualization,” and how, with technologies like Docker and everyone rushing to the public cloud, we could be seeing it now. And this is where I believe we can see some time dilation between the relative velocity of a handful of “astronauts,” heavily into technology on the bleeding edge, and reality down here on Earth. People who attend the multitude of different tech conferences, stream Tech Field Day events, and keep abreast of exciting developments in tech are not in the majority.

The majority is engaged every single day in making the technology they already have execute their business goals. While we are musing about OpenStack, Docker, Swift, etc., these folks are grinding it out with programs that were written long ago and will never be suitable for those types of cutting-edge deployments. I know companies right now who are planning projects to migrate core apps off mainframes. And you know how they’re doing it? They’re basically porting applications over so that they will run on open systems. They’re taking apps written in ALGOL or COBOL and rewriting them the exact same way in a language they can sustain and deploy on open systems.

They’re not re-architecting the entire application, or the way they do business. They’re interested in satisfying a regulator, or auditor, who has identified a risk. They need to do it as inexpensively as possible, and they need to do it without introducing the risk of switching to object based cloud storage, or a widely distributed, resilient cloud model. They’re not concerned with the benefits they can glean from containerization, OpenStack, or whatever. They need to address this finding, or get off this old behemoth mainframe before it dies, or they have to spend millions on a new support contract.

In the real world, which runs on Earth time, the companies I deal with are not willing to entertain dramatic re-architecture of the core parts of their business, just to take advantage of something they don’t see a need for, or business case around. And if you happen to get an astronaut in a company like this, and he or she mentions something about cloud, or a hot new technology, the response is usually befuddlement, or outright dismissal. How can you blame the C level people? They’re constantly seeing stories about gigantic cloud providers taking massive amounts of downtime, and silly outages that affect the majority of the globe. They don’t need that. They need their bonuses.

Remember, many of these people only implemented virtualization because they couldn’t justify NOT doing it.

While many in our circle of astronauts have the luxury of ruminating on the end of virtualization, and the next big thing, the people who are still in the atmosphere have concerns that are far different. Predicting the future is definitely a fool’s errand, but based on what I can see down here, I’d have to guess that we are not yet at peak virtualization.

You Down With NTP?

(Yea! You know me!)


NTP. How can I explain it? I’ll take you frame by frame it.

I’m sure that no readers of Virtual Insanity would ever neglect to set up NTP properly on every single ESXi host. But occasionally our NTP source hiccups, or something happens to skew the time. Recently I found a host with the NTP service stopped.


Why?  No idea really.  Maybe it was stopped while someone was troubleshooting.  Maybe it just crashed.  But it will cause issues with backups, and with applications running during backups or vMotions.

When a snapshot is taken, or a VM is vMotioned, the time is synced inside the guest by default. This can be a problem if your host’s NTP time is off. All my guests use Active Directory for NTP, and the Linux guests use an AD domain controller for NTP, so I do not rely on guest time syncing up to my ESXi hosts. Or so I thought…

Even if you have your guests configured NOT to do periodic time syncs with VMware Tools, Tools will still force the guest to sync to the host on snapshot operations, suspend/resume, and vMotion. There is a way to prevent VMware Tools from syncing the time for these events (sketched below for reference), but it’s better just to make sure NTP is up and running and getting the correct time. There is a clear reason VMware insists on doing these syncs at moments when I/O is quiesced or transferred to another host: timekeeping in a hypervisor environment, where you’re sharing CPU cycles, is no trivial task.
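
For the record, the knobs are per-VM advanced settings (see VMware KB 1189). A minimal pyVmomi sketch, assuming vm is a vim.VirtualMachine you’ve already retrieved:

    from pyVmomi import vim

    # Advanced settings that stop VMware Tools from forcing a time sync
    # on snapshot, suspend/resume, vMotion, and disk-shrink operations.
    keys = ['time.synchronize.continue',
            'time.synchronize.restore',
            'time.synchronize.resume.disk',
            'time.synchronize.shrink',
            'time.synchronize.tools.startup']

    spec = vim.vm.ConfigSpec(
        extraConfig=[vim.option.OptionValue(key=k, value='0') for k in keys])
    vm.ReconfigVM_Task(spec)  # 'vm' is a vim.VirtualMachine object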

If you use a backup solution that snapshots the VM, VSS quiesces the I/O inside that guest. When it does, there’s a VSS timeout for the snapshot to complete. If the snapshot exceeds that time, VSS will time out, and your job will fail with “error code 4: quiesce aborted.”


By default, this timeout is set to 10 minutes on Windows guests. Of course, my time was off on the ESXi host by 12 minutes, so when the backup job started, VSS kicked off, and then VMware Tools synced the time 12 minutes forward. VSS timed out instantly. If you see this error code on your backups, an easy thing to check first is NTP; a quick offset check is sketched below.
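
One quick way to check a clock is to ask an NTP server for the offset, for example with the third-party ntplib package (the server below is a placeholder; point it at your AD domain controller or your usual pool):

    import ntplib

    # More than a few seconds of offset deserves attention; a 12-minute skew
    # blows the 10-minute VSS timeout instantly.
    resp = ntplib.NTPClient().request('pool.ntp.org', version=3)
    print('clock offset: %.1f seconds' % resp.offset)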


I recommend setting the NTP service to start and stop automatically.


Previously, I had set this to start and stop with the host. But if something happens and the service stops, or gets stopped for some reason, it will not restart until the host restarts. If you’d rather script the change across many hosts, see the sketch below.
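
The service policy and running state are both exposed through each host’s serviceSystem. A minimal pyVmomi sketch, assuming esx is a vim.HostSystem you’ve already retrieved (as in the earlier examples):

    # 'esx' is a vim.HostSystem object retrieved from vCenter.
    svc = esx.configManager.serviceSystem

    # 'automatic' = start and stop automatically; 'on' = start and stop
    # with the host; 'off' = start and stop manually.
    svc.UpdateServicePolicy(id='ntpd', policy='automatic')
    svc.StartService(id='ntpd')  # and make sure it is running right now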

So who’s down with NTP?

Hopefully all the homies…

Upgrading SRM from 5.0 to 5.5

If you’re one of those shops that skipped over 5.1 and are now catching up by going to 5.5, you will run into problems with your SRM upgrade. Here’s how to fix them.

After you perform the upgrade, per the VMware Site Recovery Manager Installation Guide, you may run into permissions issues when launching the SRM plugin.

The error message is: Connection Error: Lost connection to SRM Server x.x.x.x:8095  –  Permission to perform this operation was denied.


To fix this, you’ll have to go into the database, and do some editing.

First, stop the VMware Site Recovery Manager service.

Connect to the SRM database server. If you’re not using SQL Server, adjust these procedures accordingly; they are SQL Server specific.

Before anything else, make sure you BACK UP your database.

Under your SRM database, you’ll see a table called dbo.pd_acedata. Right-click that table and choose Select Top 1000 Rows.


In your Results window, you’ll see that the only permissions that exist are the old-school, pre-SSO “Administrators” entries. We need to fix that.

To fix it, we’re going to delete that row. Right-click the table again and select Edit Top 200 Rows.


Now select the row with the old Administrators permission and right-click to delete it.


Click yes.


Now we have to do the same thing to the dbo.pd_authorization table. Edit the top 200 rows.


Delete the first line that says DR_ACEData, and click Yes again. (If you’d rather script both deletions, see the sketch below.)
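
For those who prefer scripting the fix over clicking through Management Studio, the equivalent is two DELETEs. A hedged sketch using pyodbc; the server and database names are placeholders, and the exact WHERE clauses depend on your schema, so inspect the rows first and adapt:

    import pyodbc

    # Placeholder server and database names; and back up the database first!
    conn = pyodbc.connect('DRIVER={SQL Server};SERVER=sql01.example.com;'
                          'DATABASE=SRM_DB;Trusted_Connection=yes')
    cur = conn.cursor()

    # Inspect first: identify the pre-SSO 'Administrators' row in pd_acedata
    # and the matching DR_ACEData row in pd_authorization.
    for row in cur.execute('SELECT * FROM dbo.pd_acedata'):
        print(row)
    for row in cur.execute('SELECT * FROM dbo.pd_authorization'):
        print(row)

    # Illustrative only; fill in the WHERE clauses from what you saw above.
    # cur.execute('DELETE FROM dbo.pd_acedata WHERE ...')
    # cur.execute('DELETE FROM dbo.pd_authorization WHERE ...')
    # conn.commit()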

Now start the SRM service. This will automatically populate your database with the new permissions you’ll need to launch the SRM plugin and connect.

If you go back to the table, you can see it has the correct permissions.


For some reason, this is a known issue, but the KB is not public.  So here’s your KB.

Don’t forget to back up your database, so you can restore if you blow it up. Happy upgrading.

New Tintri Collateral – Backup Best Practices and Veeam Integration!

 


 

Hopefully you no longer have a tape changer on staff, but just a short post to let the Tintri community know that we have released a few new technical documents. Personally, I love seeing Tintri continue to produce innovative products as well as the technical collateral to support those solutions for our customer base.

 


The first document we released covers backup and recovery best practices when utilizing the Tintri VMstore. It covers the data protection built into the Tintri VMstore (i.e., snapshots and replication), and how to achieve image-level backups with the supported transports (VADP, HotAdd, NBD, etc.). The document also covers how to recover data, an even more important component of backups!

 


The second document is one that has been sought after by many of my customers personally, so I am glad to see Tintri bring it to fruition: a great technical deep dive that documents leveraging Veeam software for data protection and how it integrates with the Tintri VMstore. (Nice work, Dominic!)

I hope you enjoy these two new tech documents; I just wanted to give them a quick mention! Look for more great things coming right around the corner. I’m very excited to be delivering on some wonderful technical roadmap items this year!

-Scott

vCAC Integration with vCHS

Recently I have been helping several customers configure vCloud Automation Center in their environments. As part of the configuration, there has been a desire to deploy not only into their private environment but also into a vCHS instance.

As most may or may not be aware, the backend for vCHS is actually based upon vCloud Director, and the vCloud Director REST API is what vCAC uses for provisioning, de-provisioning, power on, power off, etc. (a quick way to exercise that API directly is sketched below). After adding vCHS as an endpoint and discovering compute resources, you can add those resources to a Fabric Group and begin to create blueprints. This post assumes you have a working vCAC environment with at least one Fabric Group, one Business Group, and a Service created for provisioning.
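
If you want to sanity-check the API URL, Org, and credentials before handing them to vCAC, you can hit the vCloud Director REST API yourself. A sketch using the requests library; the URL, Org, and credentials are placeholders for your own values:

    import requests

    VCD = 'https://vchs.example.com'  # your vCloud Director API URL, through :443
    ORG = 'M123456789-1234'           # your Org (the M number from the dashboard)

    # vCD logs you in via Basic auth as user@Org with a versioned Accept header.
    s = requests.post(VCD + '/api/sessions',
                      auth=('user@' + ORG, 'password'),
                      headers={'Accept': 'application/*+xml;version=5.6'})
    s.raise_for_status()              # 200 means URL, Org, and credentials are good

    # Subsequent calls reuse the session token vCD hands back.
    token = s.headers['x-vcloud-authorization']
    orgs = requests.get(VCD + '/api/org',
                        headers={'Accept': 'application/*+xml;version=5.6',
                                 'x-vcloud-authorization': token})
    print(orgs.status_code)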

Throughout some of the trials, I have come across some “gotchas.” The purpose of this post is to outline the following:

  • Configuring the vApp template in vCHS
  • Adding a vCHS endpoint
  • Adding vCHS resources to a Fabric Group
  • Creating vApp Component and vApp blueprints
  • Entitling the catalog item and adding it to a provisioning service

Configuring the vApp template in vCHS


First, you will need to log in to your vCHS instance and manage the vPDC in vCloud Director. Select your vPDC in the dashboard and then click Manage Catalogs in vCloud Director.


Double-click Public Catalogs, then right-click one of the vApp Templates and select Add to My Cloud (I just selected one of the CentOS vApps).


This will bring up the Add to My Cloud menu. Give the vApp a new name and proceed through the selections, accepting defaults, until you get to the Configure Networking menu. The default for Networks is set to None; you MUST select one of the networks in the drop-down. I have chosen the default-routed network. This step is important because you cannot use a template with no network defined for a vCAC blueprint. If you do, the vCAC deployment will fail midway through.


Accept the defaults on the next two menus and then select Finish. This will copy the vApp template to My Cloud. The process is pretty quick for the CentOS template; it could take longer for a custom-uploaded or Windows template.

Next, click on the My Cloud link and select vApps, and you should see the item you just added (look under the Name column; you can see my ‘Chad Lucas – CentOS’ in the illustration below). Right-click the vApp template and then select Add to Catalog.


The Add to Catalog menu will pop up; just select the catalog to copy to (I previously created a vCAC catalog under My Organization’s Catalogs). Be sure you select the Customize VM settings radio button, so VMs newly deployed from vCAC obtain unique IP addresses from the IP pool. Then click OK to finish.


The capture process will take a minute to complete. You can verify the item was added to your catalog by selecting Catalogs > My Organization’s Catalogs and then selecting the catalog you added the vApp template to (again, mine is vCAC).


Now that the item is added to your catalog, you can remove the vApp from My Cloud. This only removes the vApp from your cloud workspace; it does not delete the template added to your catalog. Simply navigate back to My Cloud, select vApps, right-click your vApp, and select Delete.


Adding a vCHS Endpoint


Now that we have completed a proper template for vCAC consumption, we can add the vCHS endpoint. Before we add the endpoint, there are three pieces of information needed from vCHS: (1) the vCloud Director API URL, (2) the Org, and (3) the credentials used to access vCHS. To obtain the API URL and Org, navigate back to the vPDC Dashboard and left-click the vCloud Director API URL link. You only need the URL through the :443; disregard the remaining part. The Org is the full number next to the Multi-Tenant Cloud text highlighted in the illustration below. (Note I have redacted the highlighted areas for security purposes.)

** In a vCHS dedicated cloud model, the Org is the name of the vPDC you created; dedicated vCHS allows for multiple vPDCs, and thus the Org is the name of the vPDC you create. In the non-dedicated virtual private cloud offering, the Org is what I’m showing in this example.


Log into your vCAC instance and navigate to Infrastructure > Endpoints > Endpoints. Click New Endpoint > Cloud > vApp (vCloud Director).


At the New Endpoint page, give the endpoint a name of your choosing, then enter the vCD API URL discussed above into the Address field. Select the credentials for your vCHS instance (if you haven’t already created those credentials, simply click the button to the right of the field and create them there). Then enter the organization for your vCHS instance; again, this is the M number referenced above. Then click OK.


If all of the information was entered correctly, the endpoint will show up and you can perform your first data collection. Mouse over your vCHS endpoint and then select Data Collection. On the next screen, simply click Start. The collection will take a couple of minutes. You can monitor the collection process by repeating these steps and clicking Refresh until you see “Status: Endpoint Data collection succeeded on” followed by a timestamp.


Adding vCHS resources to a Fabric Group


Once your vCHS endpoint has been added, you need to add those resources to a Fabric Group. Navigate to Infrastructure > Groups > Fabric Groups. Mouse over the Fabric Group and click Edit. On the next screen, select the check box for your vCHS compute resources, then click OK.


Creating vApp Component and vApp Blueprints


Now that the vCHS endpoint has been added and the resources have been added to your Fabric Group, we can create a blueprint for the vCHS template we created in the first part of this post. Navigate to Infrastructure > Blueprints > Blueprints > New Blueprint > Cloud > vApp Component (vCloud Director).


Give the blueprint a name and select the Machine Prefix from the drop-down. Note I have given this blueprint the name New vCHS Centos.


Now select the Build Information tab. Leave the first three text boxes at their defaults. Then click the button to the right of “Clone From:” and select the template we created in the first vCHS step of this post; my template name, if you recall, is “Chad Lucas – CentOS”. This will auto-populate the minimum Machine Resource fields. You can optionally specify maximums if you wish. Leave everything else at the defaults and click OK.


Now that the component blueprint is complete, we need to create the vApp blueprint for publishing. Navigate back to Infrastructure > Blueprints > Blueprints > New Blueprint > Cloud > vApp (vCloud Director). **Note: we are selecting vApp (vCloud Director) this time, NOT vApp Component (vCloud Director).


Give this blueprint a name, select the Machine Prefix, and specify the number of Archive (days). Note I have given it the name “New vCHS Centos – Deploy”. Then click the Build Information tab.


On the Build information tab, select the correct vApp Template in the Clone From text box.  Again in my case it is Chad Lucas – CentOS


Next, click the pencil under the Components section, select the vApp Component blueprint you created in the previous step, click the green check mark, and then click OK.


Now it’s time to publish the blueprint. You should land on the correct screen after clicking OK in the previous step; in any case, you should be at Infrastructure > Blueprints > Blueprints. Mouse over the blueprint just created and click Publish, then click OK at the Confirm Publish screen.


Entitle the Catalog Item and Add to a provisioning service


Now that the Blueprint is published, we need to entitle it and add it to a service.  This assumes you already have a service created.  Navigate to Administration > Catalog Management > Catalog Items.  Click the “Down” arrow next to the newly added catalog item and click Configure.


At the Configure screen, make sure the status is Active, and then select your service from the drop-down; mine is titled vCHS Deploy. Then click Update.


Next, click on Entitlements: Administration > Catalog Management > Entitlements. Then select the drop-down arrow next to the service you selected above and click Edit.


On the Edit Entitlement screen, select the Items and Approvals tab. Click the plus sign next to Entitled Catalog Items, check the box next to the newly added catalog item, then click OK and then Update.


Now navigate to the Catalog screen.  Then select the service you added the catalog item to.  Remember in my case, the service was vCHS Deploy.  If you only have one service, then the catalog item should just appear under there.


You should now be able to request this catalog item.  Select Request and at the next screen just leave the defaults and click Submit.


After submitting, you can monitor the request from the Requests tab in vCAC. However, you can see the actual provisioning from within vCHS, so let’s take a look there. Log back into your vCHS instance, click your Virtual Datacenter, and then click Manage Catalog in vCloud Director. Select My Cloud, then VMs. At some point you should see the machine as Busy (while it’s customizing the name, etc.).


After customization is complete, it will power on the VM with the naming convention from the Machine Prefix we chose when creating the blueprint.  In this example that is corp-vchs-linux-036.


You can also verify the successful deployment under the Requests tab of vCAC.


This post is pretty basic and anyone familiar with vCAC knows there is a ton of customization you can do.  I did not go into any of the governance aspects that an Enterprise implementation would surely require.  In either case I hope this provides some additional clarity for provisioning from vCAC to vCHS.

Thanks!

Chad Lucas

Feedback is old and busted. Feed Forward is the new hotness.

We’ve all been there. You just sat through a terribly boring presentation that could have been so much better. If only the feedback form you’re about to fill out had made it to the presenter yesterday.


Since most of us spend all our free time on IT, and virtually none on quantum physics, there’s no way we can accomplish that kind of preemptive feedback.  Sure, you can run through a presentation with the wife, or with your Uncle Si.  But they’re not able to give you the kind of feedback you really need to make your presentation a huge hit. If only you had that time machine.

Apparently Duncan Epping, Scott Lowe, and Mike Laverick have been studying physics in their spare time, because they have come up with a solution.  It’s called Feed Forward, and it’s about to take off in a major way with the VMUG organization around the globe.

What exactly is Feed Forward? It’s a program where a potential presenter can get help and feedback from pros before giving a presentation. The program is just getting off the ground, but some of the early experiences have been great. I believe this program will be a way to get more, and better, content to VMUGs. A lot of people who have experiences or relevant expertise to share are reluctant to step up. This program allows them to pitch their presentation to others without risk, and get feedback that helps them understand how their presentation would benefit the group.

As an ardent supporter of the National Forensics League, I believe strongly that public speaking and presentation skills are invaluable. Unfortunately, not everyone has access to programs like this growing up. Too often, people reach a point in their career where their inability to present holds them back. We all need to be able to present, because in the end, we are all salespeople. Whether we’re selling the boss on a new idea, or simply selling ourselves to a prospective employer, practice helps tremendously.

Feed Forward is one way to get that practice, and get some constructive feedback from people who are adept at presenting and understand the subject matter that is most relevant to your prospective audience.

I am not sure if this will be limited to VMUG presentations in the long run, or if it will expand beyond that to VMworld, and even presentations for other groups. But I have to say, I am on board 100%, and I strongly encourage our readers to sign up here to stay abreast of Feed Forward developments. Also, if you have ideas or comments on Feed Forward, I am sure the guys would love to hear them.

The Evolution to Evangelism: Same great company, exciting new role

I have accepted a new role at Tintri as the company’s Principal Technology Evangelist. This means I will be taking a more customer-centric approach to shaping and articulating Tintri’s technology and strategy and I will be more closely aligned than ever before with Tintri’s Engineering, Product Management, and Marketing Teams. I will be working with our field employees, customers, partners, media, analysts, and the IT marketplace in general, to ensure our message is resonating across the spectrum. I’m incredibly excited to be taking on this role as I’ve been ‘all in’ on Tintri since the day I saw the technology in action.

Speaking of that day … There have been two powerful light bulb moments in my career. One was at VMworld 2007, and the other was about 15 minutes into my first face-to-face conversation with Tintri back in March of 2009. The person at Tintri I met with showed me a bit of their Virtual Machine Aware storage in action. I instantly got it. No lengthy explanation was needed. The light bulb moment was very compelling. What was supposed to be an initial 60-minute meeting turned into a three-hour discussion, as I couldn’t stop asking questions and digging deeper into all aspects of the company. I left the meeting feeling like I had to be a part of what Tintri was building.

Since then, I have loved watching customers have that same light bulb moment when we dig into all the Tintri goodness. In my new role, one of my goals will be to evoke as many of these light bulb moments as possible across the entire community.

As far as interaction with the community goes, another goal of mine will be to focus on blogging and connecting with more people through social media. But I won’t be hiding behind this blog or Twitter. I will be in front of customers, prospects, and partners. I will be attending as many industry events as possible, and I’ll be in Silicon Valley regularly. So if you see me, come say hello, introduce yourself, and let’s start a conversation.

I’ll wrap up this post by saying I’m thrilled that my new role will allow me to devote a large amount of my time to interacting with the virtualization community. Actively participating in the community has helped me learn and grow faster than I could have on my own, and I have a huge level of appreciation and gratitude for that.

::This blog was originally posted at – http://justinlauer.wordpress.com/2014/04/04/the-evolution-to-evangelism-same-great-company-exciting-new-role/ – please leave any comments via that blog link.

Call to Action: VMworld 2014 Call for Papers is open

Today VMware opened the Call for Papers registration for VMworld 2014. You can log in and submit an abstract here: http://www.vmworld.com/community/conference/cfp

This year there are only four main tracks, each with subtracks, as follows:

End-User Computing
· Horizon Desktop
· Horizon Mobile
· Horizon Social
· Desktop-as-a-Service

Hybrid Cloud
· Public Cloud

Software Defined Data Center
· Cloud Infrastructure
· Management
· Storage and Availability
· Networking and Security

Partner Track
· Technology Exchange for Alliance Partner

The Abstract Submission Guidelines can be found here: http://download3.vmware.com/vmworld/2014/downloads/abstract-submission-guidelines.pdf

Please note these dates.  There is less than one month to submit abstracts.
- April 4: Call for Papers opens
- May 2: Call for Papers closes

Good luck to all those who submit!