Upgrading SRM from 5.0 to 5.5

If you’re one of those shops that skipped over 5.1 and are now catching up by going straight to 5.5, you will run into problems with your SRM upgrade.  Here’s how to fix them.

After you perform the upgrade, per the VMware Site Recovery Manager Installation Guide, you may run into permissions issues when launching the SRM plugin.

The error message is: Connection Error: Lost connection to SRM Server x.x.x.x:8095  -  Permission to perform this operation was denied.

image

To fix this, you’ll have to go into the database, and do some editing.

First, stop the VMware Site Recovery Manager service.

Connect to the SRM database server.  These steps are specific to Microsoft SQL Server; if you’re using a different database, adjust accordingly.

First, make sure you BACKUP your database.

Under your SRM database, you’ll see a table called dbo.pd_acedata. Right click that table and choose Select Top 1000 Rows.

image

In your Results window, you’ll see that the only permissions that exist are the old-school, pre-SSO “Administrators”.  We need to fix that.

To fix it, we’re going to delete that row. Right click the table again, and select Edit Top 200 Rows.

image

Now, select that row with the old Administrators permission, and right click to delete it.

image

Click yes.

image

Now we have to do the same thing to the dbo.pd_authorization table. Edit the first 200 rows.

image

Delete the first line that says DR_ACEData, and click yes again.
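If you would rather script these edits than click through the SSMS GUI, the same cleanup can be done from a command prompt with sqlcmd. This is a sketch only: the server name, database name, and the pd_authorization name column (ObjectName below) are assumptions on my part, so inspect the tables first and adjust the names and WHERE clause to match the rows you saw above.

```shell
# Assumed names: SRM-DB-SERVER (server), SRM_DB (database), and the
# pd_authorization column holding "DR_ACEData" (ObjectName is a guess --
# check the SELECT output and substitute the real column name).

# 1. Back up the database first.
sqlcmd -S SRM-DB-SERVER -E -Q "BACKUP DATABASE SRM_DB TO DISK = 'C:\Backups\SRM_DB_pre_55_fix.bak'"

# 2. Inspect both ACL tables before deleting anything.
sqlcmd -S SRM-DB-SERVER -E -d SRM_DB -Q "SELECT * FROM dbo.pd_acedata; SELECT * FROM dbo.pd_authorization;"

# 3. Remove the stale pre-SSO rows (pd_acedata held only the one
#    'Administrators' row in my case, so a bare DELETE clears it).
sqlcmd -S SRM-DB-SERVER -E -d SRM_DB -Q "DELETE FROM dbo.pd_acedata"
sqlcmd -S SRM-DB-SERVER -E -d SRM_DB -Q "DELETE FROM dbo.pd_authorization WHERE ObjectName = 'DR_ACEData'"
```

As with the GUI method, stop the SRM service before running these and restart it afterward so it can repopulate the tables.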

Now start the SRM service.  This will automatically populate your database with the new permissions you’ll need to launch the SRM plugin and connect.

If you go back to the table, you can see it has the correct permissions.

image

For some reason, this is a known issue, but the KB is not public.  So here’s your KB.

Don’t forget to back up your database so you can restore if you blow it up.  Happy upgrading.

New Tintri Collateral – Backup Best Practices and Veeam Integration!

 

tape

 

Hopefully you no longer have a tape changer on staff as the above image alludes to, but here’s a short post to let the Tintri community know that we have released a few new technical documents.  Personally, I love seeing Tintri continue to produce both innovative products and the technical collateral to support those solutions for our customer base.

 

logo

The first document that we released covers backup and recovery best practices when utilizing the Tintri VMstore.  It covers the data protection capabilities built into the Tintri VMstore, i.e. snapshots and replication.  It also covers how to achieve image-level backups and the supported VADP transports (HotAdd, NBD, etc.).  The document also covers how to recover data, an even more important component of backups!

 

veeam

The second document is one that has been sought after by many of my customers, so I am glad to see Tintri bring it to fruition: a great technical deep dive documenting how to leverage Veeam software for data protection and how it integrates with the Tintri VMstore.  (Nice work, Dominic!)

I hope you enjoy these two new tech documents; I just wanted to make a quick mention!  Look for more great things coming right around the corner.  I’m very excited to be delivering on some wonderful technical roadmap items this year!

-Scott

vCAC Integration with vCHS

Recently I have been helping several customers configure vCloud Automation Center in their environment. As part of the configuration, there has been desire to deploy not only in their private environment, but also into a vCHS instance.

As you may or may not be aware, the backend for vCHS is actually based on vCloud Director.  The vCloud Director REST API is what vCAC uses for provisioning, de-provisioning, power on, power off, etc.  After adding vCHS as an endpoint and discovering compute resources, you can add those resources to a Fabric Group and begin to create blueprints.  This post assumes you have a working vCAC environment with at least one Fabric Group, one Business Group, and a Service created for provisioning.

Throughout some of the trials, I have come across some “gotchas.”  The purpose of this post is to outline the following:

- Configuring the vApp template in vCHS
- Adding a vCHS Endpoint
- Adding vCHS resources to a Fabric Group
- Creating vApp Component and vApp Blueprints
- Entitling the Catalog Item and adding it to a provisioning service

Configuring the vApp template in vCHS

First, you will need to log in to your vCHS instance and manage the vPDC in vCloud Director.  Select your vPDC in the dashboard and then click Manage Catalogs in vCloud Director.

Screen Shot 2014-05-07 at 4.02.30 PM

Double click Public Catalogs, then right click one of the vApp Templates and select Add to My Cloud (I just selected one of the CentOS vApps).

Screen Shot 2014-05-07 at 4.13.21 PM

This will bring up an Add to My Cloud menu.  Give the vApp a new name and proceed through the selections, accepting defaults, until you get to the Configure Networking menu.  The default for Networks is set to None.  You MUST select one of the networks in the drop-down; I have chosen the default-routed network.  This step is important because you cannot use a template with no network defined for a vCAC blueprint.  If you do, the vCAC deployment will fail midway through.

Screen Shot 2014-05-07 at 4.19.20 PM

Accept the defaults on the next two menu options and then select Finish.  This will copy the vApp template to My Cloud.  This process is pretty quick for the CentOS template; it could take longer for a custom-uploaded or Windows template.

Next click on the My Cloud link, select vApps and you should see the item you just added (look under the Name column, you can see my ‘Chad Lucas – CentOS’ in the illustration below).  Right click the vApp Template and then select Add to Catalog.

Screen Shot 2014-05-07 at 4.29.19 PM

The Add to Catalog menu will pop up; just select the catalog to copy to (I created a vCAC catalog under My Organization’s Catalogs previously).  Be sure you select the Customize VM settings radio button to allow VMs newly deployed from vCAC to obtain unique IP addresses from the IP pool.  Then click OK to finish.

image

The capture process will take a minute to complete.  You can verify the item was added to your catalog by selecting Catalogs > My Organization’s Catalogs and then selecting the catalog you added the vApp Template to (again, mine is vCAC).

Screen Shot 2014-05-07 at 4.51.10 PM

Now that the item is added to your catalog, you can remove the vApp from My Cloud.  This only removes the vApp from your cloud workspace; it does not delete the template added to your catalog.  Simply navigate back to My Cloud, select vApps, right click your vApp, and select Delete.

Screen Shot 2014-05-07 at 4.54.04 PM

Adding a vCHS Endpoint

Now that we have completed a proper template for vCAC consumption, we can add the vCHS endpoint.  Before we add the endpoint, there are three pieces of information needed from vCHS: 1. the vCloud Director API URL, 2. the Org, and 3. the credentials used to access vCHS.  To obtain the API URL and Org, navigate back to the vPDC Dashboard and click the vCloud Director API URL link.  You only need the URL up through the :443; disregard the remaining part of the URL.  The Org is the full number next to the Multi-Tenant Cloud text highlighted in the illustration below.  (Note: I have obscured the highlighted areas for security purposes.)

** In a vCHS dedicated model, the Org is the name of the vPDC you created.  Dedicated vCHS allows for multiple vPDCs, and thus the Org is the name of the vPDC you create.  In the non-dedicated virtual private cloud offering, the Org is the number I’m showing in this example.
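Before typing these into vCAC, you can sanity-check all three pieces of information (URL, Org, and credentials) with a quick API login from any machine with curl. This is a sketch: the hostname and Org value below are placeholders for your own, and API version 5.1 is an assumption (query /api/versions on your instance to confirm).

```shell
# Placeholder values -- substitute what you collected from the vPDC dashboard.
VCD_HOST="pXvY-vcd.vchs.vmware.com"   # the API URL, up through :443
ORG="M123456789-4444"                 # the number next to "Multi-Tenant Cloud"

# vCloud Director authenticates with user@org over HTTP basic auth.
# A 200 response carrying an x-vcloud-authorization header means the
# URL, Org, and credentials are all correct.
curl -k -i -X POST \
  -u "your-user@${ORG}" \
  -H "Accept: application/*+xml;version=5.1" \
  "https://${VCD_HOST}:443/api/sessions"
```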

Screen Shot 2014-05-07 at 5.02.21 PM

Log into your vCAC instance and Navigate to Infrastructure > Endpoints > Endpoints.  Click New Endpoint > Cloud > vApp (vCloud Director)

image

At the New Endpoint page, give the endpoint a name of your choosing, then enter the vCD API URL discussed above into the Address field.  Select the credentials for your vCHS instance (if you haven’t already created those credentials, simply click the button to the right of the field and you can create them there).  Then enter the organization for your vCHS instance (again, this is the M number referenced above).  Then click OK.

image

If all of the information was entered correctly, the endpoint will show up and you can perform your first data collection.  Mouse over your vCHS endpoint and select Data Collection.  On the next screen, simply click Start.  The collection will take a couple of minutes.  You can monitor the progress by repeating these steps and clicking Refresh until you see – Status: Endpoint Data collection succeeded on

image

Adding vCHS resources to a Fabric Group

Once your vCHS endpoint has been added, you need to add those resources to a Fabric Group.  Navigate to Infrastructure > Groups > Fabric Groups.  Mouse over the Fabric Group and click Edit.  On the next screen, select the check box for your vCHS compute resources, then click OK.

image

Creating vApp Component and vApp Blueprints

Now that the vCHS endpoint has been added and the resources have been added to your Fabric Group, we can create a blueprint for the vCHS template we created in the first part of this post.  Navigate to Infrastructure > Blueprints > Blueprints > New Blueprint > Cloud > vApp Component (vCloud Director).

image

Give the blueprint a name and select the Machine Prefix from the drop-down.  Note I have given this blueprint the name “New vCHS Centos”.

image

Now select the Build Information tab.  Leave the first three text boxes at their defaults.  Then click the button to the right of “Clone From:” and select the template we created in the first vCHS step of this post.  My template name, if you recall, is “Chad Lucas – CentOS”.  This will auto-populate the minimum Machine Resource fields.  You can optionally specify maximums if you wish.  Leave everything else at the defaults and click OK.

image

image

Now that the component blueprint is complete, we need to create the vApp blueprint for publishing.  Navigate back to Infrastructure > Blueprints > Blueprints > New Blueprint > Cloud > vApp (vCloud Director).  **Note: we are selecting vApp (vCloud Director) this time, NOT vApp Component (vCloud Director).

image

Give this blueprint a name, select the Machine Prefix, and specify the number of Archive (days).  Note I have given it the name “New vCHS Centos – Deploy”.  Then click the Build Information tab.

image

On the Build Information tab, select the correct vApp Template in the Clone From text box.  Again, in my case it is Chad Lucas – CentOS.

image

Next, click the pencil under the Components section, select the vApp Component blueprint you created in the previous step, click the green check mark, then click OK.

image

Now it’s time to publish the blueprint.  You should already be at the correct screen after clicking OK in the previous step (Infrastructure > Blueprints > Blueprints).  Mouse over the blueprint you just created, click Publish, then click OK at the Confirm Publish screen.

image

Entitle the Catalog Item and Add to a provisioning service

Now that the Blueprint is published, we need to entitle it and add it to a service.  This assumes you already have a service created.  Navigate to Administration > Catalog Management > Catalog Items.  Click the “Down” arrow next to the newly added catalog item and click Configure.

image

At the configure screen, make sure the status is active and then select your service from the drop down.  Mine is titled vCHS Deploy.  Then click Update.

image

Next, navigate to Entitlements: Administration > Catalog Management > Entitlements.  Then select the drop-down arrow next to the service you selected above and click Edit.

image

On the Edit Entitlement screen, select the Items and Approvals tab.  Click the plus sign next to Entitled Catalog Items, then check the box next to the newly added catalog item.  Then click OK, then Update.

image

Now navigate to the Catalog screen.  Then select the service you added the catalog item to.  Remember in my case, the service was vCHS Deploy.  If you only have one service, then the catalog item should just appear under there.

image

You should now be able to request this catalog item.  Select Request and at the next screen just leave the defaults and click Submit.

image

After submitting, you can monitor the request from the Requests tab in vCAC.  However, you can see the actual provisioning from within vCHS, so let’s take a look there.  Log back into your vCHS instance, click your Virtual Datacenter, then click Manage Catalog in vCloud Director.  Select My Cloud, then VMs.  At some point you should see the machine as Busy (while it’s customizing the name, etc.).

image

After customization is complete, it will power on the VM with the naming convention from the Machine Prefix we chose when creating the blueprint.  In this example that is corp-vchs-linux-036.

image

You can also verify the successful deployment under the Requests tab of vCAC.

image

This post is pretty basic and anyone familiar with vCAC knows there is a ton of customization you can do.  I did not go into any of the governance aspects that an Enterprise implementation would surely require.  In either case I hope this provides some additional clarity for provisioning from vCAC to vCHS.

Thanks!

Chad Lucas

Feedback is old and busted. Feed Forward is the new hotness.

We’ve all been there. You just sat through a terribly boring presentation that could have been so much better. If only the feedback form you’re about to fill out had made it to the presenter yesterday.

image

Since most of us spend all our free time on IT, and virtually none on quantum physics, there’s no way we can accomplish that kind of preemptive feedback.  Sure, you can run through a presentation with the wife, or with your Uncle Si.  But they’re not able to give you the kind of feedback you really need to make your presentation a huge hit. If only you had that time machine.

Apparently Duncan Epping, Scott Lowe, and Mike Laverick have been studying physics in their spare time, because they have come up with a solution.  It’s called Feed Forward, and it’s about to take off in a major way with the VMUG organization around the globe.

What exactly is Feed Forward?  It’s a program where a potential presenter can get help and feedback from pros before giving a presentation. The program is just getting off the ground, but some of the early experiences have been great.  I believe this program will be a way to get more, and better, content to VMUGs. A lot of people who have experience or relevant expertise to share are reluctant to step up. This program allows them to pitch their presentation to others without risk, and get feedback that helps them understand how their presentation would benefit the group.

As an ardent supporter of the National Forensics League, I believe strongly that public speaking and presentation skills are invaluable. Unfortunately, not everyone has access to programs like that growing up. Too often, people reach a point in their career where their inability to present holds them back. We all need to be able to present, because in the end, we are all salespeople. Whether we’re selling the boss on a new idea, or simply selling ourselves to a prospective employer, practice helps tremendously.

Feed Forward is one way to get that practice, and to get constructive feedback from people who are adept at presenting and understand the subject matter most relevant to your prospective audience.

I am not sure if this will be limited to VMUG presentations in the long run, or if it will expand beyond that to VMworld, and even presentations for other groups. But I have to say, I am on board 100%, and I strongly encourage our readers to sign up here to stay abreast of Feed Forward developments. Also, if you have ideas or comments on Feed Forward, I am sure the guys would love to hear them.

The Evolution to Evangelism: Same great company, exciting new role

I have accepted a new role at Tintri as the company’s Principal Technology Evangelist. This means I will be taking a more customer-centric approach to shaping and articulating Tintri’s technology and strategy and I will be more closely aligned than ever before with Tintri’s Engineering, Product Management, and Marketing Teams. I will be working with our field employees, customers, partners, media, analysts, and the IT marketplace in general, to ensure our message is resonating across the spectrum. I’m incredibly excited to be taking on this role as I’ve been ‘all in’ on Tintri since the day I saw the technology in action.

IdeaBoyUSE-300x300

Speaking of that day … There have been two powerful light bulb moments in my career. One was at VMworld 2007, and the other was about 15 minutes into my first face-to-face conversation with Tintri back in March of 2009.  The person at Tintri I met with showed me a bit of their Virtual Machine Aware storage in action. I instantly got it.  No lengthy explanation was needed.  The light bulb moment was very compelling.  What was supposed to be a 60-minute meeting turned into a three-hour discussion, as I couldn’t stop asking questions and digging deeper into all aspects of the company. I left the meeting feeling like I had to be a part of what Tintri was building.

Since then, I have loved watching customers have that same light bulb moment when we dig into all the Tintri goodness. In my new role, one of my goals will be to evoke as many of these light bulb moments as possible across the entire community.

As far as interaction with the community goes, another goal of mine will be to focus on blogging and connecting with more people through social media. But I won’t be hiding behind this blog or Twitter. I will be in front of customers, prospects, and partners. I will be attending as many industry events as possible and I’ll be in Silicon Valley regularly. So if you see me, come say hello, introduce yourself, and let’s start a conversation.

I’ll wrap up this post by saying I’m thrilled that my new role will allow me to devote a large amount of my time to interacting with the virtualization community. Actively participating in the community has helped me learn and grow faster than I could have on my own, and I have a huge level of appreciation and gratitude for that.

::This blog was originally posted at – http://justinlauer.wordpress.com/2014/04/04/the-evolution-to-evangelism-same-great-company-exciting-new-role/ – please leave any comments via that blog link.

Call to Action: VMworld 2014 Call for Papers is open

vmworld2014
Today VMware opened the Call for Papers registration for VMworld 2014.  You can log in and submit an abstract here: http://www.vmworld.com/community/conference/cfp

This year there are only four main tracks, each with subtracks, as follows:

End-User Computing
· Horizon Desktop
· Horizon Mobile
· Horizon Social
· Desktop-as-a-Service

Hybrid Cloud

· Public Cloud

Software Defined Data Center
· Cloud Infrastructure
· Management
· Storage and Availability
· Networking and Security

Partner Track

· Technology Exchange for Alliance Partner

The Abstract Submission Guidelines can be found here: http://download3.vmware.com/vmworld/2014/downloads/abstract-submission-guidelines.pdf

Please note these dates.  There is less than one month to submit abstracts.
- April 4: Call for Papers opens
- May 2: Call for Papers closes

Good luck to all those who submit!

Protecting the Software-Defined World

Today, EMC announced their next generation of backup and recovery solutions.  I was unable to attend the announcement, but a replay can be found here.  The new announcements include Data Protection Suite enhancements, a new Data Domain OS, and new deployment models for VPLEX.

One of the keys to protecting the software-defined world is being able to deliver Data Protection as a Service.  The goal is to integrate with hypervisors and segregate tenant workloads, as well as support public, private and hybrid clouds.  Another item of importance is meeting SLA requirements at scale, from continuous availability to backup and replication to archive and compliance.

The enhancements to the Data Protection Suite include Avamar 7.1, NetWorker 8.2 and MozyEnterprise.  One of the exciting announcements from my perspective is a new Avamar plug-in for vCloud Director.  This is the first standard API for VMware service providers.  The Avamar plug-in will allow for embedded backup services within vCloud Director.  Another Avamar enhancement is the ability to back up all workloads to Data Domain, including applications, virtual environments, remote offices, and desktops/laptops.  There is now much closer integration between Avamar and Data Domain.

image

Chad Sakac wrote an excellent post here that talks about vCloud Suite and vCHS Protection realized from today’s Data Protection announcement.

As mentioned, there were some enhancements made to NetWorker.  One of my favorites is support for both block and NAS-based snapshots, with support for VNX, Isilon and NetApp (NAS).  Snapshots are now auto-discovered and cataloged, providing a centralized view of all snapshots.  The ability to roll over to backup media such as tape is now supported, which helps drive down data protection costs and is part of the overall data protection continuum.

image

Data Domain now allows you to back up enterprise applications such as SAP, SAP HANA, SQL and IBM DB2 using DD Boost.  Touching upon the ability to provide data protection in private, public and hybrid clouds, Data Domain can now protect a workload in any of these cloud environments.  Tenant management is included, which provides logical data isolation for administrators as well as the ability to assign roles for users and admins.

Not to be forgotten is a new virtual edition of VPLEX.  VPLEX VE runs on ESXi and leverages the mobility and availability of vSphere.  Included is a plug-in for vCenter.  Right now VPLEX VE supports iSCSI-based storage only, with VNXe being the first supported iSCSI-based array.  Additional storage arrays are on the roadmap.  The current distance limitation is 5ms of round-trip latency.

image

The final announcement is the introduction of MetroPoint, which will provide continuous availability utilizing multiple sites.  Basically, three sites are utilized, with only a single DR copy being used for continuous availability.  CDP is utilized on both sides of MetroPoint.  It is completely heterogeneous, with support for XtremIO, VNX, VMAX, IBM and HP mentioned during the announcement.

Of course Chad Sakac wrote a great post about VPLEX VE as well, it can be found here.


Honored to be a vExpert for the sixth time!

This week VMware announced the vExpert class of 2014.  This year 754 outstanding members of the community received the award.  I’m honored to have been selected as a vExpert for the sixth year in a row.  The vExpert community is a fantastic mix of passionate people from all walks of life within our industry – Customer, Partner, Vendor, Analyst.  I’ve made a number of great industry friends via this group over the last six years, and have had some of the best technical conversations and debates with these same people.  The value I’ve received simply by being able to collaborate with this group has been the single biggest reward. 

I did want to take a second to congratulate my fellow Tintri teammates, Scott Sauer, Rob Girard, Trent Steele and Rob Waite, who were also part of the 2014 vExpert class.  Well done guys!

After the 2014 announcement was posted I started thinking about that first year the program was around… 

Flashback to 2009

That first year about 300 people were selected (see the 2009 announcement here).  For giggles I decided to search through my .PST archives for that original email that was sent out by John Troyer in February of 2009.

vExpert2009

The highlight of that inaugural year for me was the first official vExpert meeting, which took place at VMworld 2009.  I walked into the room looking around at all the amazing minds in one place; it was a pretty surreal feeling.  John Troyer put together a nice agenda for us, starting with then-VMware CTO Stephen Herrod, who did a Q&A session (part of it was recorded and posted here by Eric Siebert).  Next up were presentations by Jason Boche and Steve Kaplan (check out the recap Steve did about his presentation here).

It has been great to see the vExpert program grow larger year after year and watch those who have been involved in the program grow in their careers as well.  Thanks again to John Troyer and Corey Romero for the continual effort in establishing, growing and maintaining this awesome community.

::This blog was originally posted at – http://justinlauer.wordpress.com/2014/04/03/honored-to-be-a-vexpert-for-the-sixth-time/ – please leave any comments via that blog link.

A Little Hidden Gem in the Tintri vSphere Web Client Plug-in

TintriVCP_Summary

What’s new?

Last week (3/20/2014) Tintri announced our vSphere web client plug-in, which brings the familiar performance metrics found in our VMstore web user interface to the vSphere web client.  This plug-in is great for those customers that have begun to adopt and utilize the VMware vSphere web client (as opposed to the Windows-based C# client).  As a reminder, the vSphere web client is where all new VMware capabilities and management functionality will be integrated going forward.  As of today (3/24/2014), the Tintri vSphere web client plug-in is available in tech preview on our support portal.  This new plug-in is a no-cost item for our customers, so please feel free to download and install at your convenience!

A Hidden Gem

The Tintri integration is a nice win for all of our customers.  The rich data we provide back to the web client is really a game changer when it comes to performance troubleshooting, per-VM data protection, and capacity planning.  One of the coolest features our development team included in the new plug-in is the ability to apply our NFS best practices to your ESXi hosts with the click of a button.

Below you can see I have selected a Tintri datastore in the web client and have right clicked the object to enable the Tintri menu option to appear:

best_pract1

After selecting the “Apply best practices” menu option, I am now presented with a list of ESXi hosts that have access to this particular datastore.  In my lab/demo environment, this happens to be one ESXi host but in a normal production environment, this would be the entire cluster where you could apply these settings to all of the ESXi hosts at the same time.

best_pract2_match

Notice where I have the arrows pointing in the first three columns compared to the following three columns: there are no gray italicized “match” values present in the selections.  This indicates that the ESXi host we are looking at is not running our best practices configuration.  As a side note, the Tintri vSphere Best Practices documentation can be found on our support portal.

Let’s set the correct best practices for this particular ESXi host:

best_pract_apply

Step 1: select the “Set best practices values” button at the lower left-hand side of the screen.  Step 2: notice the values have now been corrected on the ESXi host in this example, and the italicized gray “match” value is displayed in the first three columns.  Step 3: select the “Save” button in the lower right-hand corner of the menu to apply the values we have just set to the above host.  The ESXi host will need to be rebooted in order to re-read the new values.
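For hosts you manage outside the plug-in, the same NFS advanced options can be set from the ESXi command line with esxcli. To be clear, the option names below are standard ESXi NFS/TCP tunables, but the values are deliberately illustrative placeholders; take the actual recommended numbers from the Tintri vSphere Best Practices guide, not from this sketch.

```shell
# Illustrative only: apply NFS advanced settings from the ESXi shell.
# The VALUES below are placeholders -- substitute the numbers from the
# Tintri vSphere Best Practices documentation.
esxcli system settings advanced set -o /NFS/HeartbeatFrequency -i 12
esxcli system settings advanced set -o /NFS/HeartbeatTimeout   -i 5
esxcli system settings advanced set -o /NFS/MaxVolumes         -i 256
esxcli system settings advanced set -o /Net/TcpipHeapSize      -i 32
esxcli system settings advanced set -o /Net/TcpipHeapMax       -i 512

# Verify what is currently set on the host:
esxcli system settings advanced list -o /NFS/MaxVolumes
```

Some of these (the TCP/IP heap sizes in particular) only take effect after a reboot, which matches the plug-in’s note about rebooting the host.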

Conclusion

This little hidden gem is a nice added feature because it can quickly validate your cluster settings to ensure you are getting the best performance possible when running VMware vSphere with Tintri.  VMware vSphere Host Profiles would be another great way to apply the Tintri NFS best practices automatically across your hosts and clusters, but many customers are not running vSphere Enterprise Plus licensing and do not have access to Host Profiles.  The Tintri plug-in now provides a simple alternative method for applying our best practices to your environment.

-Scott

Data Protection for Virtual and Physical Workloads

vmw-dgrm-vsphere-data-protection-lg

First, for those who are not familiar with vDP (vSphere Data Protection), it is a backup and recovery tool designed for VMware environments.  It utilizes EMC Avamar technology to provide superior deduplication for all virtual machines backed up by vDP.  To provide further protection, vDP allows you to replicate backup data between vDP virtual appliances, so you can protect your backup data by replicating it offsite to another vDP appliance.  There are two versions of vDP, the first being simply vDP and the other being vDP Advanced.  I will talk primarily about the Advanced version in this article.

Overview

vDP does not require an agent in the guest OS to perform backup and recovery operations; it utilizes VMware Tools to quiesce the OS for OS-consistent backups.  For applications that require application-consistent backups, such as Exchange, SQL and SharePoint, vDP provides an agent that is installed in the guest OS to quiesce those applications.  In the past, we have only supported the backup of virtual Exchange, SQL and SharePoint environments.  But since we’re utilizing an agent to back up Exchange, SQL and SharePoint, there is no reason why we couldn’t also back up these same applications running on a physical server.  You can now back up all your VMware virtual machines as well as Exchange, SQL and SharePoint, even if those workloads are running on a physical server.

Backup Verification

It is always a good idea to verify that your backups are working correctly; you want to have confidence that data can be restored successfully if the need ever arises.  vDP provides automated backup verification: a backup verification job can be created that will automatically restore data after a backup into a sandbox environment.  From a restore perspective, vDP Advanced gives you the ability to restore an entire VM, an application, or a particular file.  An end user can restore an individual file using nothing more than a web browser.  Finally, you want to back up vCenter Server with vDP but are concerned with having to restore vCenter Server… without vCenter Server.  vDP allows you to restore directly to a host without the need for vCenter Server.

Replication

I mentioned replication earlier; however, it goes beyond simply replicating from vDP appliance to vDP appliance.  Since vDP utilizes EMC Avamar technology, you can replicate from a vDP appliance to a physical EMC Avamar grid.  Think of utilizing a service provider running EMC Avamar as the target for your backup data.  From a topology standpoint, vDP supports one-to-one, one-to-many and many-to-one.  This could be useful if you have remote offices that need backup and recovery services along with the need to replicate the backup data to a single site used for disaster recovery.  And since we’re using EMC Avamar, vDP uses changed-block tracking: only changes to the VM are backed up daily (after the initial backup), and only the changes are replicated to a secondary site, helping save on the bandwidth needed between locations.  Finally, vDP also provides the ability to utilize Data Domain as a backup data target.  Why is this important?  First, you can point multiple vDP appliances at the Data Domain, and deduplication will take place across all vDP appliances instead of being limited to the data backed up by a single appliance.  Throw in Data Domain Boost and you can reduce the amount of data transferred over the network significantly, as only unique bits are sent across the network to the Data Domain appliance.
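To see why this pays off so well in capacity and bandwidth, here is a toy sketch of the content-hashing idea that underlies Avamar-style deduplication. It is an illustration only: it uses fixed 4 KB chunks on a synthetic file, whereas real Avamar uses variable-length chunking and a global hash index.

```shell
#!/bin/sh
# Toy fixed-block dedup: chunk a file, hash each chunk, count unique hashes.
# Chunks with identical content hash to the same value, so they only need
# storing (or sending over the wire) once.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Build a 40 KB synthetic "disk image": ten 4 KB blocks, only two distinct patterns.
for i in 1 2 3 4 5; do
  head -c 4096 /dev/zero >> disk.img            # pattern A (all zeroes)
  printf 'B%.0s' $(seq 1 4096) >> disk.img      # pattern B (all 'B's)
done

# Chunk the image, hash every chunk, and compare total vs unique counts.
split -b 4096 disk.img chunk.
total=$(ls chunk.* | wc -l | tr -d ' ')
unique=$(sha256sum chunk.* | awk '{print $1}' | sort -u | wc -l | tr -d ' ')
echo "chunks: $total, unique: $unique"          # 10 chunks, but only 2 worth storing

cd / && rm -rf "$workdir"
```

Scale the same idea up to hundreds of mostly-identical guest OS images and you can see where the dramatic dedup ratios come from.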

Scalability

From a scalability perspective, you can deploy up to 10 vDP appliances per vCenter Server.  The deduplicated backup capacity of a single vDP Advanced appliance is 8TB, and the maximum number of virtual machines that can be backed up to a single vDP Advanced appliance is 400.  Most customers will exhaust the backup capacity before reaching the maximum number of VMs.

I hope you enjoyed my first article on Virtual Insanity.  Feel free to leave a comment if you have any questions.  Below are a few useful links including a helpful answer to the question, what about backup to tape?

vSphere Data Protection Backup to Tape

vSphere Data Protection Advanced Product Page