Archive for the ‘Aaron Sweemer’ Category
When I started VirtualInsanity in 2008, I never anticipated what it would become. Instead of just a place where I would post random thoughts every so often, it has become a place that many new and part-time bloggers have come to call their virtual home. By this measure alone, I think VirtualInsanity can be deemed a “success.”
The one challenge I personally have with VirtualInsanity is that the content of our bloggers is very much virtualization and infrastructure heavy. That is by no means a bad thing. Not at all. But my focus the past few years has been on automation and orchestration, application development, and an overall trend/movement known as DevOps. Can I create content in my newer areas of focus here on VirtualInsanity? Sure, but I don’t think it resonates very well with the typical VirtualInsanity reader.
Therefore I’ve decided it’s time for me to create a new blog, ActualClouds, which will be a site dedicated to the non-infrastructure and non-virtualization pieces of cloud computing. But let me also be clear about one thing … VirtualInsanity is going nowhere. I plan to transfer ownership of the blog to Scott Sauer, one of my original co-authors, and he and the other bloggers will continue to post here (as will I from time to time).
So, wish me luck. ActualClouds is live and I posted my first entry this morning, BladeLogic Integration via vCO and SOAP. Please stop by and check it out. And if you like what you see, please help me get the word out about ActualClouds.
–Aaron Sweemer (Principal Systems Engineer @ VMware)
I remember many years ago when I was studying at the University of Maryland, one of my professors listed WordPerfect as one of the requirements for his class. So I went down to the campus bookstore and bought a copy. The clerk handed me a big bulky box and I remember thinking “gee, this packaging is a bit overkill.” That is, until I got back to my dorm room and found 20+ 3.5” floppy disks in the box! It must’ve taken me over an hour to install that darn thing, and I knew there had to be a better way. Well sure enough, things have certainly progressed nicely since the days of the floppy disks. We’ve since moved on through CDs, then DVDs, and finally, where the bulk of software distribution is done today, direct Internet download.
It’s hard to imagine what could come next, or how we could improve upon software distribution via direct Internet download. As Internet pipes get ever faster and bigger, what other medium could be a sufficient replacement? Well, frankly, I don’t know that there ever will be. But the future of software distribution lies not in the distribution vehicle, rather it lies in the metadata of the software being distributed.
As applications grow more complex, they are typically broken down into smaller pieces based on functionality. Let’s use Microsoft Exchange as an example. When Microsoft Exchange was first released, it was a pretty basic messaging software product that could be easily set up on a single server. Over the years, as new features and functionality have been added, Microsoft Exchange has evolved into an extremely complex beast. For larger implementations of Exchange, quite often there are large teams of very bright engineers, consultants/contractors, and project managers required to get the product set up and running properly. And the entire process is rarely, if ever, smooth. More often than not, the process is littered with many “bumps in the road.” A complete installation can take many months, and in some extreme cases, years. Could there be a better way? Absolutely. Let me introduce you to the concept of an Application Blueprint.
An Application Blueprint is, well, very much what it sounds like. It is a model of what an application should look like. Like a blueprint for a building, which is a complete plan of all the building components from the foundation to the roof, an application blueprint spells out all of the application components, everything from the VMs and the OSs, all the way up to the nitty gritty application configuration parameters, and everything in between. But unlike blueprints for physical buildings, which are generally printed on large rolled up pieces of blue engineering paper, an application blueprint is something that can be modeled in software and something that can be saved and passed around or downloaded electronically.
In addition – and here’s the really important part – VMware’s vFabric Application Director leverages these blueprints to automate and orchestrate the installation and configuration of a complex application. Now let’s think about that for just a minute, because this stuff is revolutionary and extremely powerful. As we’ve already discussed, applications are getting harder and harder to setup and configure. One could make the argument that certain types of really advanced and complex software solutions, such as Microsoft Exchange, are approaching the point of being almost too complex to be implemented by humans. There are just too many variables to manage and too many super advanced skill sets across numerous disciplines that are required in order to successfully implement these kinds of large, enterprise software solutions. But now there is a way to not only model what these software solutions should look like in a given environment, but also a way to take that model and programmatically “make it so.”
This is a very powerful new approach because it eliminates the complexity and removes the error prone “human element” from the implementation equation. All the technical pieces that must be thought out, accounted for, and detailed in an implementation project plan can now be modeled in an application blueprint. Everything from the size of the virtual disks and the IP addresses of the VMs, all the way up to the location of the software installation bits, the installation directory and the TCP port numbers will be defined in the blueprint. Once an application has been completely modeled in the blueprint, vFabric Application Director will take that blueprint and make an Execution Plan, which is an installation plan that will include all the tweaks and configuration changes necessary for the application to run in your environment. Then, according to the plan, Application Director will build the VMs, install the OSs, configure the networks, download the application bits, install the application components and “wire everything together,” so to speak.
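To make the blueprint idea concrete, here is a rough Python sketch of how a blueprint might be modeled and turned into an execution plan. The data model, component names, and configuration keys are all hypothetical for illustration; this is not the actual vFabric Application Director blueprint format.

```python
# Hypothetical blueprint model: each component declares its dependencies
# and its configuration. An "execution plan" is then just the components
# ordered so every dependency is provisioned first.
from graphlib import TopologicalSorter

blueprint = {
    "db-vm":   {"depends_on": [],          "config": {"disk_gb": 100, "port": 1433}},
    "mailbox": {"depends_on": ["db-vm"],   "config": {"install_dir": "/opt/exchange"}},
    "cas":     {"depends_on": ["mailbox"], "config": {"port": 443}},
}

def execution_plan(bp):
    """Order components so that dependencies are deployed first."""
    ts = TopologicalSorter({name: spec["depends_on"] for name, spec in bp.items()})
    return list(ts.static_order())

for step in execution_plan(blueprint):
    print(f"deploy {step} with {blueprint[step]['config']}")
```

The dependency ordering is the essence of the execution plan: the database VM comes up before the mailbox role, which comes up before the client access role, with each component’s configuration carried along in the model.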
You might be thinking, “OK great, but how exactly is this the future of software distribution?” Which is a great question, because what I’ve talked about so far is how application blueprints will greatly improve application installations, which is something different than software distribution. But now that we understand the value of application blueprints, wouldn’t it be great if we could save application blueprints and pass them around? Wouldn’t it be great if, for example, I could go to an online store and find/test/buy an application blueprint for Microsoft Exchange? Yeah, that would be awesome. Guess what? You can. In fact, if you’re interested, here is a blueprint for Exchange. If you go to that link and click the “try” button, you’ll be taken to a page that will help you import the MS Exchange blueprint into your vFabric Application Director.
Keep in mind just how different this is from how things are generally done today. If I bought MS Exchange today, I would get a download link to the actual software installation bits as well as a ton of “how to” documentation. But with this new model, I’m not downloading any software or documentation; rather I’m downloading a software plan. It’s a software plan that can be understood by vFabric Application Director. And once the plan has been imported into Application Director, you don’t need to do hours/days/months of planning and researching, you simply click “Go.” Once Application Director has done its job, you will have MS Exchange up and running in your environment, just the way you want/need it to be running.
There are a couple of other important benefits to this new approach. First, let’s think about the subject of patching and updating. Sorry, I didn’t mean to make you gag. Yes, it’s awful. Painful. You show me a person who says they enjoy patching and updating applications, and I’ll show you someone who secretly dresses in full body latex and has Helga the “Pain is Love Goddess” on speed dial. But the good news here is that with Application Blueprints, patching and updating becomes so much easier. Remember, everything about the application is modeled in the blueprint, including (potentially) how to update that application. So in the future, updating an application should be a simple matter of receiving an updated blueprint from the ISV and again, simply clicking “Go.”
And finally, another really cool benefit to this approach lies within the performance of an application and the integration with other applications. Pretty soon, other important application tools will also “understand” Application Blueprints. Why is this important? Well, if a performance monitoring application (for example) can understand Application Blueprints, it can now intelligently spin up additional DBs, or web servers, or application servers, or whatever corrective actions it needs to take in order to solve the performance problem. Pretty nifty, eh?
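As a small illustration of what “understanding” a blueprint could enable, here is a Python sketch of a monitoring tool mapping a performance symptom back to the tier it should scale out. The function, tier names, and threshold are made up for illustration; no real VMware API is implied.

```python
# Hypothetical sketch: a monitoring tool that "understands" a blueprint
# can map a per-tier latency symptom to a corrective action on that tier.
def corrective_action(blueprint, latencies_ms, max_latency_ms=200):
    """Return the scale-out actions for any known tier over the threshold."""
    actions = []
    for tier, latency in latencies_ms.items():
        if tier in blueprint and latency > max_latency_ms:
            actions.append(f"add {tier} instance")
    return actions

tiers = {"web": {}, "app": {}, "db": {}}
print(corrective_action(tiers, {"web": 150, "db": 450}))  # ['add db instance']
```

The point is not the toy logic but the linkage: because the blueprint names the tiers and how they relate, a tool can act on the right component instead of just raising an alert.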
Yes indeed, software distribution has come a long way over the years. Fifteen years ago we were buying all of our software on floppy disks. Ten years ago everything moved to CDs and DVDs. Now everything is directly downloaded over the Internet. Shoot, the new Apple Macintosh laptops don’t even come with internal CDROMs anymore. And why should they? Who needs them? So what’s next for software distribution? Well, I believe the future of software distribution lies not in the distribution vehicle, but rather in the metadata of the software being distributed, i.e. the Application Blueprint. Go to the VMware Solution Exchange. Check out an Application Blueprint and see if you agree with me.
Back in June I was asked to build and captain the vFabric SQLFire lab at VMworld. Now, I’ve been a big champion of the vFabric technologies since the beginning, so I was certainly excited to be given the opportunity. But I don’t think I fully realized just how fortunate I was at the time. Because not only did the experience force me to go deep into the technology, but it also forced me to focus on a layer where I previously had little direct involvement in my career. Let me explain.
We often like to think of the cloud in three distinct layers: Infrastructure (IaaS), Platform (PaaS) and Software (SaaS). But in my opinion, these categories may be a bit too broad because there are definitely layers between the layers. For example, where does the data live? Infrastructure guys often mentally push the data up into the Platform layer, while application guys often mentally push the data down towards the Infrastructure layer. Obviously, regardless of which side of the infrastructure-platform fence you live on, we all “touch” the data all the time. And of course we all recognize how vitally important the data is. But unless you’re a DBA, the data usually ends up being a problem for someone on the other side of that fence.
So now I mentally place data in its own layer in between the Infrastructure and the Platform, where it in many ways (at least in my mind) serves as the “glue” which binds the two. You might be asking yourself, “What’s his point?” or even “Who the heck cares?” Well, I want to make the distinction for a couple reasons.
First, I believe the way we categorize and compartmentalize things in our mind has a dramatic effect on our focus and behavior. Mentally misplacing important concepts in the wrong compartment usually leads to confusion and misunderstanding, and we can miss tremendous opportunities. However the correct mental placement will bring clarity, focus and potentially open up a whole new world to us. So now for me, instead of just thinking of data as something that lived in a file somewhere or in a database that some DBA was responsible for, a whole new world has been unlocked. I’ll come back to this point in a bit.
Second, lots of Infrastructure folks are a bit concerned about their future due to the high degree of automation and integration happening in the Infrastructure layer right now. We’re starting to hear whispers of things like “infrastructure guys need to move up the stack or they’ll be left behind.” Scott Lowe’s recent post The End of the Infrastructure Engineer? not only articulates the concern well, but he also suggests the concern may be unwarranted. I’m not sure I fully agree with Scott, but I’m simply trying to highlight that the concern is out there and it’s growing. Shoot, I know I’ve certainly implied numerous times here on this blog that we all need to start moving towards the application/development space (here, here and here). But would I actually go so far as to say that everyone needs to stop what they’re doing and go buy the latest copy of Programming for Dummies? Probably not.
What I do believe, however, is this new data layer (not that the data layer is actually new, of course) may be a way for infrastructure engineers to stay relevant as the world moves towards application centric clouds. It may be a way for us to “move up the stack” by taking a few steps, rather than a career changing leap of faith. After all, building skills in this layer isn’t a shock to the system because, like I said in the beginning, we’ve all indirectly worked with the data layer our entire careers. Whereas application development is a completely different world for an infrastructure engineer (and vice versa), data lives much closer to home. Infrastructure and data are like “kissing cousins” … kind of awkward, but not completely taboo either.
So, if you couldn’t tell by now, I’ve been thinking a lot about data. Databases, data grids, data fabrics, data warehouses, data in the cloud, moving data, securing data, big data, data data data data data data. Most of the hard problems (not all, of course) with cloud computing are with data. It’s always the data that seems to trip us up …
User: “Why can’t I VMotion my server from here to China?”
Admin: “Well, aside from the fact that we don’t have a data center in China, your application has a terabyte of data and it would take a month to get there.”
User: “Why can’t I use dropbox anymore?”
Admin: “Because you put sensitive company data on it, which was compromised and now we’re being sued. By the way, your boss is looking for you.”
User: “The performance of my application you put on our hybrid cloud is pitiful. You suck.”
Admin: “You told us there would be no more than 100 users, and now there are 50k users trying to access the same database at the same time. You’re an idiot.”
Granted, in all of these situations, it’s not just the data that’s the issue. Well, actually, come to think of it, it’s not really the data at all, now is it? It’s all the things we must do in order to deliver/secure/migrate/manage/scale the data in the cloud that becomes the issue. So data is often the root of our problems, but never really the problem itself. Instead it’s data handling in the cloud that’s the big challenge. Yes, that’s it! And it would appear someone really smart from JPMorgan Chase would agree with me …
Whew! Validation gives me warm fuzzies. Anyway, circling back to a point I made earlier, since I’ve been focused on the data layer a whole big crazy world has opened up for me. Much like what vSphere did for servers, there is a ton of activity happening at the data layer to transform the way we handle data at scale in the cloud. And again, what’s so cool about this layer is that it is all too familiar. When studying up on how data grids can make data globally available via their own internal WAN replication techniques, or when learning about how a new breed of load balancers are enabling linear scalability for databases, or when exploring how in-memory databases can dramatically improve application performance … the concepts/language/lingo are easily understood and relatable to things I already know.
Now in the midst of all this learnin’ it occurred to me, everyone has been talking about the Platform as the next big thing (myself included) … but I would think the data problems need to be solved first, don’t they? Sure, I know things won’t happen serially here; lots of smart people and cool new companies are working to solve cloudy problems at both layers in parallel. But we all know that where there are problems, there are opportunities. And it would appear to me that the more immediate problems that need to be solved are in data handling. So could it be that the next big thing is actually in the layer between the layers? And could this really be the place where developers and engineers finally meet and give each other a great big awkward hug?
Which brings me back to the very beginning of this blog post (the next big thing, not the awkward hug). After digging pretty deep into SQLFire, I’ve found it’s a radically new kind of database that addresses many of the issues with data handling in the cloud. It’s a database built differently from the ground up because it is built on amazing, disruptive data grid technology, yet presents itself to an application as a regular old database. It can unobtrusively slide in between applications and their existing databases to solve performance problems, or it can stand on its own as a complete database solution. It can instantly scale linearly, it can make your data extremely fault tolerant, and it can make your data available globally, all with very little effort and/or overhead. Pretty amazing stuff. You should check it out and let me know what you think. And even if you don’t take a look at SQLFire, what do you think about the “layer between the layers?” The next big thing?
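The “regular old database” point is worth dwelling on: application code speaks plain SQL and never needs to know a data grid sits underneath. Here is a minimal Python sketch using the standard DB-API, with sqlite3 standing in for the database (SQLFire itself is typically reached over JDBC, so the connection call below is illustrative, not a SQLFire driver):

```python
import sqlite3

# Any DB-API-style connection works here; swapping the connect() call
# for a grid-backed database would leave the application logic untouched,
# which is exactly the appeal of a grid that presents itself as plain SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("INSERT INTO orders (total) VALUES (?)", (42.50,))
(count,) = conn.execute("SELECT COUNT(*) FROM orders").fetchone()
print(count)  # 1
```

Because the interface stays ordinary SQL, sliding such a layer “in between applications and their existing databases” requires no application rewrite, only a connection change.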
Ever since the acquisition of SpringSource nearly two years ago, VMware has been generating a lot of excitement in the application development space. That excitement was kicked into high gear a few weeks ago when VMware announced the industry’s first implementation of open PaaS, CLOUD FOUNDRY.
But I have a feeling much of that excitement is not felt or even understood by the average reader of this blog. The reason largely has to do with the fact that most of us have an IT infrastructure/operations background. We are really good at troubleshooting low-level infrastructure stuff, we can rattle off the differences between RAID5 and RAID10, and we can debate iSCSI vs NFS until we are blue in the face. However, while we may be able to go crazy, Einstein deep into infrastructure technologies, there are very few of us who would have a single clue about things like MVC software architecture, Object/Relational Mapping, or Dependency Injection.
Sure, some of us (and probably not many of us) may have the ability to create useful automation scripts in PowerShell or PERL, but that’s a far cry from being able to create a full-blown application for end user consumption. And I’m here to tell you the application development world is, now more than ever, something we all need to embrace. Because worlds are colliding and CLOUD FOUNDRY is a glimpse of things to come.
What is CLOUD FOUNDRY?
Well you already know that Cloud Foundry is a PaaS, which means that at a very high level, you can think of Cloud Foundry as something on-par with Microsoft’s Azure, or Google’s AppEngine, or Salesforce’s Force.com, or Engine Yard. Not familiar with those services? Or not 100% clear on what a PaaS is? OK, then for now, let’s think of Cloud Foundry as a Hypervisor for cloud based applications. To be clear, I am NOT saying Cloud Foundry is a Hypervisor (because it is not); but let’s just start there.
So today, what do we do when we want to deploy an application in our virtual datacenters? First, we start with a VM or a collection of VMs, and we either deploy them from a template, or we start from scratch and install an Operating System. Then, after some routine IT processes (patching, updating, configuration management, etc.) we either install and configure the application, or we hand it off to an application team to do the rest. The key point I want to make here is you start with an Operating System and build up from there. Meaning, the primary point of abstraction, the place upon which we begin to build, is the Hypervisor.
How does this translate to Cloud Foundry? Well, Cloud Foundry allows us to start building applications directly on Cloud Foundry. There is no need to install an Operating System, nor is there a need to patch it, apply configurations, and install application components. That’s all taken care of behind the scenes. So Cloud Foundry becomes the main point of abstraction, the place upon which we directly build our new cloudy applications.
Another way to look at it would be, the Hypervisor switches our focus from managing hardware to managing VMs. Similarly, Cloud Foundry switches our focus from managing VMs to managing applications. In the former case, the hardware doesn’t go away and in the latter case, the VMs won’t go away either. But the way we interact with, manage and even think about hardware has fundamentally changed … and so it will be with VMs and Cloud Foundry.
Again, as a point of clarification, is Cloud Foundry the textbook definition of a Hypervisor? Nope. But if we allow ourselves to loosely define a Hypervisor as the point of abstraction between layers of the compute stack (Hardware – Hypervisor – Operating System – Hypervisor – Application), then Cloud Foundry certainly fits the bill.
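To see the shift in focus, consider what a developer actually hands to a platform like this: just the application. The sketch below is a bare WSGI “hello world” in Python, purely illustrative (early Cloud Foundry targeted frameworks such as Java/Spring, Ruby on Rails, and Node.js rather than raw WSGI):

```python
# Illustrative only: the developer writes just the application below and
# pushes it to the platform. No OS install, no patching, no middleware
# setup -- the platform supplies all of that behind the scenes.
def app(environ, start_response):
    """The entire deliverable from the developer's point of view."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from the platform\n"]
```

Contrast that with the VM-first workflow above: everything between the hardware and these few lines becomes the platform’s problem, not yours.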
How is CLOUD FOUNDRY different from other PaaS offerings?
Now that we understand a bit about what Cloud Foundry is, I’m sure you’re wondering what makes Cloud Foundry any different than the other PaaS offerings out there. The biggest differentiator can be summed up with one word: choice.
Prior to Cloud Foundry, PaaS meant limited choices and ultimately PaaS meant vendor lock-in. Writing an application for Microsoft’s Azure, as an example, means you will only be able to run your application on Azure. I suppose that’s not a big deal if you’re 100% committed to Microsoft’s Azure solution and you’re OK with an off premise only option (Azure is not available for on premise consumption). So there is definitely an element of vendor lock-in there. And this is true for any PaaS offering out there today. Whichever PaaS you go with, you are either limited in terms of the developer frameworks and application services the PaaS makes available to you, or you are limited in your deployment options (i.e. public vs. behind-my-firewall), or both. For the customers I talk to, this is a very big deal.
But the good news is Cloud Foundry brings a big change to all of this. Cloud Foundry has been designed to eliminate vendor lock-in by offering:
- choice of developer frameworks
- choice of application services, and
- choice of deployment (internal vs. external cloud).
Of course you might be thinking, “that sounds great, but ultimately we’ll have to run Cloud Foundry on top of vSphere, so we’ll still be locked in to VMware.” Well, you would be wrong. Yes, Cloud Foundry does run on vSphere, but it can run on non-VMware Infrastructure clouds as well.
Choice. It’s super attractive, and it makes Cloud Foundry unique.
Why should you care?
But for you, the reader of this blog – someone who is probably focused on IT operations, not software development – why should you care? Here’s one great reason …
If your IT shop does not like dumping the company Web app on Engine Yard but the dev team is threatening mutiny over working in a stone-age traditional Java production lifecycle (“that’s so 2005, man”), Cloud Foundry can basically become the in-house option.
– Carl Brooks, Senior Technology Writer for SearchCloudComputing.com
Another reason? Whether we like it or not, PaaS is coming like a freight train and we need to get in front of it now. We need to embrace it. We need to be the first to understand the Hypervisor 2.0, and all the moving parts around it. We need to figure out how to offer it to our internal developers before they go consume it externally on their own. Because ultimately, we will either add value to our employer and serve our users, or someone else will. I know I’m not going to get run over by the train … are you?
What should you do next?
Here is my recommendation … be the first person in your organization to embrace and understand Cloud Foundry. Believe me when I tell you this will pay handsome dividends for you far into the future. To start you on your journey, here is some recommended reading …
- The Cloud Foundry blog already has some fantastic “under the hood” info.
- Steve Herrod has a great blog titled Cloud Foundry – Delivering on VMware’s “Open PaaS” Strategy
- John Stame (a former VMware employee) has a nice blog titled You Bet Your PAAS it’s Open!
- Of course be sure to check out the Cloud Foundry FAQ
- And for anyone feeling super adventurous, try learning some basic coding skills and then give Cloud Foundry a test drive … 4 Free Ways to Learn to Code Online
If you’re a VMware fan, you have probably already seen the graphic above, or some variation thereof. And you’re also probably already pretty familiar with the blue layer, or the Infrastructure layer of the cloud computing “stack.” In addition, you’re probably well versed in the orange layer, or the End User Computing layer. But what about that green layer?
That green layer is commonly referred to as cloud middleware, or the vFabric Cloud Application Platform. It’s the ooey-gooey middle layer that leaves most of us in IT scratching our heads. It’s where software developers live and breathe, but for rest of us, it’s the layer we have traditionally avoided like Charlie Sheen avoids sanity.
I’ll be talking more about vFabric in future posts, but today I’d like to focus on WaveMaker, because it’s an exciting piece for those of us that aren’t software developers. It may be just the tool that gets us to dip our toes into that ooey-gooey green layer.
OK, so what is WaveMaker and how will it fit into that graphic above? First, the official news blurb …
VMware closed its acquisition of WaveMaker on Friday March 4, 2011. WaveMaker is a widely used graphical tool that enables non-expert developers to build web applications quickly. This acquisition furthers VMware’s cloud application platform strategy by empowering additional developers to build and run modern applications that share information with underlying infrastructure to maximize performance, quality of service and infrastructure utilization.
Great, soooooo what does that mean for readers of this blog? WaveMaker is a tool built just for us! It is the tool that will enable us to build web applications very quickly and deploy them to the cloud (that ooey-gooey green layer of the cloud) with a single mouse click. WaveMaker claims it can eliminate 98% of code, cut the web development learning curve by 92% and reduce software maintenance by 75%. Here are a couple of other bullet points you’ll find interesting …
- WaveMaker eliminates Java coding for building Web 2.0 applications
- WaveMaker Studio generates standard Java apps
- One-click deployment eliminates the complexity of deploying web apps to enterprise or cloud based hosting.
For more information, be sure to check out Rod Johnson’s blog post VMware acquires WaveMaker. And of course make sure you visit the WaveMaker website. While you’re there, download the software and give it a test drive! After you do, be sure to let me know what you think.
If you couldn’t attend VMworld 2010 which occurred last week, here is what you missed (in no particular order) …
Breaking Records … VMworld 2010 by the Numbers
How big was VMworld this year? Bigger than ever. Here are some statistics you might find interesting.
- 17,021: The number of registered attendees. This is up from 12,500 last year, and up from 1,400 at the first VMworld in 2004!
- 85: The number of countries represented by attendees.
- 55: People who have attended every VMworld (I know one of them, and it’s not me … who are the others?)
- 15,344: The number of labs delivered via the VMworld Lab Cloud. This is up from 4,500 last year!
- 145,097: The number of VMs deployed to support the 15,344 labs.
- 4,000: The number of VMs running per hour (on average) in the labs.
The VMworld 2010 Lab Cloud – Real Cloud Computing in Action
All VMworld labs were delivered via the Lab Cloud and let me tell you, it was beyond impressive. Imagine walking into a massive room filled with nearly 500 lab stations, sitting down at one of those lab stations, selecting from a catalog of close to 30 labs, and having your lab deployed for you on demand via a hybrid cloud. Think about that for a second. Many companies still believe cloud computing is nothing more than a marketing term. And even more believe that true cloud computing is still a few years (or more) out. But VMware really flexed its muscles by delivering 15k+ labs and 145k+ VMs via a true cloud computing solution, powered by VMware software and running on both Verizon and Terremark cloud offerings. Duncan Epping gives us more details with his blog post, VMworld Labs the Aftermath.
Micro$oft and Citrix Shenanigans
You know, it really wouldn’t feel like a VMworld if both Citrix and Micro$oft weren’t prancing around with a variety of childish marketing tactics. And this year, they certainly didn’t disappoint!
- Citrix – As they have in previous years, Citrix littered every billboard and taxi cab top in a five block radius (probably greater) of Moscone Center. But this year, they really went the extra mile and completely wrapped a whole bunch of taxis in advertising, and then hired them to drive around the event. But here’s the kicker … they weren’t allowed to take passengers! You ask, what’s the big deal? Well, don’t you find it somewhat ironic that a company that promotes “Green IT” (i.e. saving CO2 emissions) via their software, decided to advertise by a means that does nothing more than waste CO2? I mean, if the taxis could actually take passengers, then at least you could make an argument that the CO2 was well used. Or at the very least, it was used for something other than FUD.
I made a comment via Twitter that pointed out their use of non-passenger taking taxis for advertising during the event. In response, both @CitrixPR and @simoncrosby said my claims were untrue. So to prove I am not a liar (and they are), I tried to get a ride with one of the taxis. Here’s what happened …
- Micro$oft – I think I’ll file this next one under the category, “Hey Pot, this is Kettle, what color am I again?” That’s right ladies and gentlemen, Micro$oft, the absolute king of “vendor lock-in” took out a full page ad in USA Today warning VMware customers about multi-year license agreements. Don’t believe me? Here’s a photo taken by @ssauer.
- Why in the world would I highlight these shenanigans here? Because with these actions, both Citrix and Micro$oft have done two things. First, they continue to validate our direction and clear leadership in this space. And second, they are showing their desperation and inability to keep up. This may be a pretty bold statement, but let’s face facts, FUD is the weapon of followers, not leaders. (And by the way, I LOVE the fact that Micro$oft opened their letter with “Dear VMware Customers.” Ummm, aren’t all VMware customers also Micro$oft customers?)
New Product Announcements and Technology Previews
- View 4.5 – VMware’s next big release of their VDI product will be GA this month. VMware View is a product that has been around for a while, but this release is packed with a ton of new Enterprise features and it appears that analysts are finally calling it ready for prime time, as noted by the highly respected Chris Wolf (Research VP at Gartner) in his blog post, VMware View 4.5: Ready for the Large Enterprise.
- vCloud Director – Next to View 4.5, this is probably the most exciting product announcement at VMworld. You may know of the product as “Redwood,” the not-so-secret internal code name for the product. But what is it? It’s a product that will provide the interface, automation and management required by enterprises and service providers to build private and public clouds. Duncan Epping is very familiar with the product and gives us a great overview with his blog post, VMware vCloud Director (vCD).
- vCloud Datacenter Services – vCloud Datacenter Services deliver globally consistent enterprise-class cloud computing infrastructure services. From the VMware website …
Offered by VMware-certified service providers (Verizon, Terremark, Bluelock, SingTel and Colt are the first five), vCloud Datacenter Services provide the business agility and cost effectiveness of public clouds without compromising on the portability, compatibility, security and control demanded by enterprise IT organizations.
- vShield App, vShield Edge, vShield Endpoint – What is the biggest concern executives have when it comes to cloud computing? Opinions vary, but no matter who you talk to, everyone would put security on their top-three list of concerns. And most would put security in the number one spot. So to help address this, VMware announced three new products that are aimed directly at solving security issues in the cloud: vShield App, vShield Edge and vShield Endpoint.
- Project Horizon – Steve Herrod gave us a preview of a new product in development at VMware, currently called Project Horizon. What is it? It’s kind of hard to describe, but think of an Apple-like App Store for the enterprise (not that an App Store is 100% descriptive, but I would say it’s close). I can tell you it generated a lot of buzz and chatter on Twitter. One particular tweet during the keynote that caught my eye came from Chris Wolf …
SaaS, thin apps, virt desktops provisioning, plus Single Sign On for SaaS – exactly why Horizon is game changer.
During Steve Herrod’s keynote speech on Tuesday, he announced the following two acquisitions.
- Integrien delivers real-time infrastructure monitoring, analysis and alerting capabilities. More details will be revealed in time, but it’s obvious their products align nicely with VMware’s cloud computing vision.
- TriCipher brings technology that will provide a layer of security to existing VMware products. TriCipher delivers identity-based security, which will integrate a hybrid of different clouds and enable access to SaaS applications from a variety of end points.
Best of VMworld 2010 Awards
| Category | Winner |
| --- | --- |
| Business Continuity / Data Protection | Symantec for NetBackup 7 |
| Security | VMware for vShield |
| Management | VKernel for Capacity Management Suite |
| Hardware Virtualization | Cisco Systems for Cisco Nexus 7000 Overlay Transport Virtualization |
| Desktop | Kaviza for Kaviza VDI in-a-box 3.0 |
| Private Cloud Computing | newScale for newScale 9 |
| Public/Hybrid Cloud Computing | Terremark for Enterprise Cloud |
| New Technology | Veeam Software for Veeam Backup & Replication 5.0 Enterprise Edition |
| Best of Show | Veeam Software for Veeam Backup & Replication 5.0 Enterprise Edition |
And that’s what you missed at VMworld 2010. See you next year!
If you have read Easy vSphere Web Apps with Grails and the VI Java API or Easy VMware Development with VI Java API and Groovy then you know I’m a big fan of Steve Jin’s vSphere Java API. And I recently received the following email …
We are running a survey to better understand the needs of the greater community using the vSphere / VI Java API. I am referring to http://vijava.sourceforge.net.
The survey takes less than 5 minutes and it would help us understand what you need from VMware. Does your organization require formal support, indemnification, or training, or does the current model work for you?
This survey will be confidential; we will not disclose your name or who you work for. We just want to understand how you are using the vSphere / VI Java API and what you need from VMware.
Link to survey: http://www.surveymethods.com/EndUser.aspx?C4E08C96C4859496C2
Thanks for your time, and we hope to see you at our Developer Days @ VMworld 2010.
vSphere SDK Product Marketing
So if you’re a developer using the vSphere Java API – or if you would like to use it but can’t for some reason (e.g. lack of official support) – then I would ask that you take the survey. I’ve already taken the survey, and I can assure you it’s quick and painless. Thanks for your help!
You may have noticed over the past few years there have been a number of authors here at Virtual Insanity. Some have contributed only once or twice. Others contribute (or plan to contribute) on a regular basis. In fact, there are now seven authors who call Virtual Insanity home (ordered by number of posts) …
- Aaron Sweemer (me) – Senior Systems Engineer at VMware
- Scott Sauer – Systems Engineer at VMware
- Rick Westrate – Director of Cloud Services at Eastern Computer
- John Blessing – Technical Consultant at EMC
- Chris Everett – Senior Systems Engineer at VMware
- Thomas Mackay (new author!) – Staff Systems Engineer at VMware
- Jeff Szastak (new author!) – Senior Systems Engineer and Tier 1 Applications Specialist at VMware
I was having a conversation yesterday with a few of my co-authors about how we could improve the blog, make it more valuable for our readers, and make it a powerful and highly recognizable industry resource. We set some pretty aggressive goals, but to achieve them, we’ll need to produce quality content across a broad range of topics on a much more frequent basis. Unfortunately, the main obstacle each of us faces is simply the lack of time to sit down and write meaningful blog posts with any sort of regular frequency.
A few weeks ago I wrote a post with a very similar title, “Easy VMware Development with VI Java API and Groovy.” Today I want to expand on that a little bit and show you a cool way to quickly stand up web apps for VMware vSphere using Grails. What is Grails? If you’re familiar with the popular Ruby on Rails web application framework, then you can think of Grails as the Java (well, Groovy actually) equivalent of Rails. From the Grails official website …
Grails is an advanced and innovative open source web application platform that delivers new levels of developer productivity by applying principles like Convention over Configuration. Grails helps development teams embrace agile methodologies, deliver quality applications in reduced amounts of time, and focus on what really matters: creating high quality, easy to use applications that delight users.
What does all this mean? The short and sweet answer is that Grails will take care of all the pain-in-the-a$$ “stuff” required to get a web app up and running. A good analogy would be cake mix: the ingredients are already measured and combined for you, so you can skip straight to the baking.
Have you ever wanted to write a script or an application that automates your VMware VI3.x / vSphere environment, but lack the development skills to do so? Or, maybe you have development skills, but you’re looking for ways to simplify your code and improve your productivity? In either case, I’ve stumbled across something you’ll definitely want to check out.
Before we start, I should probably clarify something. If you have zero development experience, then the title of this post could be a little misleading. An absolute beginner probably wouldn’t consider this “easy.” There are certainly easier ways to develop VMware scripts which are targeted at VMware Administrators, such as the vSphere PowerCLI. And if you want to do some VMware scripting without learning a programming language and/or acquiring some development skills, then you should stop reading now and go check out the vSphere PowerCLI. However, if you’re a little adventurous and want a “fast track” for creating VMware applications, then by all means, read on.
I’ve got to tell you, I’m pretty darn excited right now. Why? I’m typing this to you from 30,000 feet on a Delta flight from Cincinnati to Las Vegas (for VMware Partner Exchange). And why is that so special? Because, as the title suggests, I’m typing this on my VDI image which resides hundreds of miles away and thousands of feet below me.
Delta has a fairly new service from gogo called “gogo inflight … wi-fi with wings.” This is my first time using the service because on the past few flights I’ve taken, I’ve either not had the need to connect or the aircraft I happened to be on did not yet have the service. But this time I have some work to do (i.e. my next “confessions” article for VSM), so I figured I’d give it a whirl. And, being a glutton for punishment, I decided to see if I could push the limits of PCoIP. After a quick sign-up form (gogo isn’t free) and firing off a VPN connection back to my home office, I launched the View client and crossed my fingers.
And I can tell you that I am thoroughly impressed! The windows are snappy, Flash is decent and low-end multimedia is adequate. I was watching a youtube.com video with full sound and, while the picture was a little blurry and sound/video sync was slightly off, it was totally watchable. And furthermore, it didn’t cripple my session. Not bad, considering my latency is between 150ms and 250ms, with an average of about 200ms.
Is this a glimpse of things to come? Right now it may seem pretty far fetched. After all, the process to connect to my desktop image was fairly painful. I had to …
- Boot into my local OS
- Connect to the gogo inflight wireless access point
- Launch my Firefox browser and walk through the gogo signup form
- Dig through my briefcase for my wallet and pay for the service
- Fire off my OpenVPN client to my home VPN server
- Launch the VMware View Client
Not exactly what I’d call a seamless user experience. And I believe that conquering this experience – that is, the mobile user experience – will be the coup de grace for traditional desktop infrastructure. Until then, virtual desktop infrastructure will certainly happen in pockets, but massive, wide-scale adoption will continue to elude us. So what has to happen here? In my mind, the following things need to happen …
True ubiquity of wireless Internet
This means two things. First, the Internet has to be everywhere at all times. I’m a true mobile user and I need to know that no matter where I am – whether it be on a puddle jumper, or in a remote country hotel – that when I power on my laptop, I will have access.
And second, this also means the connection to the Internet has to be completely integrated and transparent. I don’t want to have to dig for my credit card every time. But even more than that, I want the connection to happen for me automatically, in the background, as part of the boot process. My software client should auto detect the available wireless networks, connect, and debit my account. Will I have a single unified account that works across all providers? Or will I have multiple accounts that my software client will handle? Or will it be a single wireless / satellite provider that can reach me anytime, anywhere? I don’t know and I don’t really care. The point is, I don’t want to deal with it. I want to press power and, after a short boot (maybe even zero boot?), have access. Period.
A purpose built Thin OS
Booting into a local OS just to launch a client and connect to a remote OS isn’t going to cut it. The boot process needs to be fast and do nothing more than present me with a login GUI. If I’m remote, the VPN connection (and any necessary login parameters) needs to be part of the login process. There’s no need for a full-blown local OS if our goal is to do little more than connect to our primary desktop environment. Sure, we hardcore tech weenies will almost always want some sort of backdoor access to the local OS. But 99% of the users out there don’t care and just want a seamless desktop experience. In fact, if done correctly, they shouldn’t even know there is a local OS and that their desktop is actually running in a remote datacenter.
Does this actually exist yet? Sort of. ThinClients typically deliver this kind of user experience. But for the most part, ThinClients aren’t mobile devices. I’ve seen a ThinClient laptop model before, but I don’t know a single person actually using one. I’ve actually seen far more cases of customers converting PCs and laptops to ThinClients. Theron Conrey gives us a great example with his blog post VMware View Linux Live CD How-to. And there are enterprise solutions for converting PCs to ThinClients from both Wyse and DevonIT. So, we’re pretty darn close on this front, but still not 100%.
A rich user experience in low bandwidth, high latency environments
Like I stated earlier, my current PCoIP experience is pretty darn impressive. It is, by far, the best remote desktop experience I’ve witnessed. But I’m not sure the average in-flight user would be ecstatic about it. Sure, all things considered, you can’t beat it. But I recognize all the variables working against me right now. The typical user will not know or even care. They just want it to work. The good news is that PCoIP will continue to improve and holds the promise of delivering a rich user experience, whether at 30k feet or a single switch port away.
So, I ask again, is this a true glimpse of the not-too-distant future? Ten years ago, I was the only one of my friends and family to have a cell phone. Five years ago, mainstream virtualization in the datacenter was laughed at. And a few short months ago, typing this blog post on my VMware View image was impossible. So, you tell me.
Installing and/or upgrading VMware Tools has always been a bit more complicated for Linux guests than for Windows guests. After the installation of the package binaries, the vmware-config-tools.pl script must be run to configure the tools for your environment. This script has to be run from the console, which is a pain when you’ve got more than just one or two Linux VMs. And may the good Lord help you if the modules aren’t suitable for your running kernel and you don’t have a compiler (or the C header files for your running kernel) already installed.
When VMware added the Automatic Tools Upgrade …
The situation certainly improved, but it is by no means a foolproof solution. In my experience, it doesn’t work 100% of the time for Linux guests (though this *could* be due to the heavy modification I’ve done in my distro). And furthermore, what if you want to automatically upgrade hundreds of Linux guests, not just one? Or what if you’ve already got a deployment tool that you’d like to use to push the tools out? (Kind of tough when the script needs to be run directly in the console.)
So, I looked to see if there was a way to improve the situation. First, I needed to find a way to run vmware-config-tools.pl remotely in an automated fashion. And by the way, it’s not that you can’t run this script remotely via SSH, because you can. The problem is that when you do so, you immediately get the following question …
It looks like you are trying to run this program in a remote session. This program will temporarily shut down your network connection, so you should only run it from a local console session. Are you SURE you want to continue?
Unfortunately, to run vmware-config-tools.pl remotely, we need to include the -d flag so that the script will automatically select the default answers to all of the questions for us. And the problem is, the default answer to this question is “no.”
So I looked through vmware-config-tools.pl and found that it’s really only checking to see if the SSH_CONNECTION environment variable is set. Well, that’s easy … launching vmware-config-tools.pl from a shell where that variable isn’t set (a fresh login shell, for example) lets us sidestep the check.
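As a quick illustration of the sidestep (a minimal sketch; the exact prompt logic inside vmware-config-tools.pl may vary by tools version), scrubbing SSH_CONNECTION from the environment before launching the script is enough to avoid the remote-session question:

```shell
# Sketch: vmware-config-tools.pl decides it's a remote session by looking
# for the SSH_CONNECTION environment variable. env -u removes it from the
# child's environment, so the check never trips.
env -u SSH_CONNECTION sh -c 'echo "SSH_CONNECTION is: ${SSH_CONNECTION:-unset}"'

# On a real guest, the equivalent invocation would be something like:
# env -u SSH_CONNECTION /usr/bin/vmware-config-tools.pl -d
```

The second command is commented out here since it only makes sense on a guest with the tools package installed.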
Next I just created a simple bash script that gets pushed out to the /tmp directory along with the VMware Tools installation package (also pushed to the /tmp directory) and gets executed remotely by my deployment tools (which for me are just more bash scripts, but this should work with any enterprise deployment tool). Here’s the simple script I used for my guests …
#!/bin/sh
# Find the VMware Tools RPM that was pushed out to /tmp
RPM=$(ls /tmp | grep VMwareTools)

# Remove the old package and install the new one
rpm -e VMwareTools
echo "Old VMwareTools removed" > /tmp/vmware_tools_upgrade.log
rpm -i "/tmp/$RPM"
echo "$RPM installed" >> /tmp/vmware_tools_upgrade.log

# Run the config script from a fresh login shell (so the SSH_CONNECTION
# check doesn't trip), accepting all default answers with -d
su -l root -c "/usr/bin/vmware-config-tools.pl -d"
echo "vmware-config-tools.pl -d executed" >> /tmp/vmware_tools_upgrade.log

service vmware-tools restart
echo "vmware-tools restarted" >> /tmp/vmware_tools_upgrade.log
service network restart
echo "network restarted" >> /tmp/vmware_tools_upgrade.log
This is obviously a very basic script and could easily be enhanced with better logging and error handling. Also, for Debian distros, such as Ubuntu, you’d need to modify this script to handle the tar.gz installation package … unless, of course, you’ve modified your distro to handle RPMs (as I have).
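For the Debian/tar.gz case, a hedged sketch of that modification might look like the following. The tarball naming pattern and the installer’s --default flag are assumptions based on the stock tar.gz bundle, so verify them against your tools version before deploying:

```shell
# Hedged sketch for Debian-based guests (e.g. Ubuntu): install from the
# tar.gz bundle instead of an RPM. Tarball name pattern and the --default
# flag are assumptions -- check them against your VMware Tools version.
TARBALL=$(ls /tmp | grep 'VMwareTools.*\.tar\.gz' | head -n 1)
if [ -n "$TARBALL" ]; then
    tar -xzf "/tmp/$TARBALL" -C /tmp
    # The bundle unpacks to vmware-tools-distrib; the installer runs the
    # config step itself when given the default-answers flag
    /tmp/vmware-tools-distrib/vmware-install.pl --default
else
    echo "No VMwareTools tarball found in /tmp"
fi
```

The same logging and service-restart lines from the RPM script above would slot in around this block unchanged.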
The good news is that, at least for my environment:
- This works 100% of the time and a restart of the VMs is not necessary.
- I no longer have to upgrade many guests by hand.
However, keep in mind that there is still a network outage during the upgrade (usually just a minute or two), so be sure to continue using a maintenance window for your upgrades.
First things first
Thanks to Scott Sauer (@ssauer) and John Blessing (@vTrooper) for holding down the fort here at Virtual Insanity while I’ve been finishing up some unfinished projects and preparing for the VCDX Design Exam (which I take later this month). One of Scott’s posts actually won a vSphere blog contest. Nice work Scott! These two guys are becoming pretty good friends of mine here in the Cincinnati area, so hopefully I can convince them to keep the content flowin’.
An itch I couldn’t scratch
I’ve mentioned here on this blog, at least once or twice, that I “eat the dog food” and actually run my primary XP desktop as a VMware View image. Since the conversion almost a year ago, everything has been running pretty well with only a few minor bumps along the way. And with the recent addition of PCoIP, I can’t imagine ever going back.
But there was one little recurring problem for which I couldn’t seem to find an answer. It wasn’t a show stopper of an issue, but it was just an “itch I couldn’t scratch,” if you know what I mean. And the problem went something like this …
- Inside my desktop VM I have a Cisco VPN client, necessary for a secure connection back to corporate HQ in Palo Alto, CA.
- When connecting to my desktop with the VPN client inside the VM inactive, I had no issue.
- However, if I disconnected from my desktop while the VPN session was active, I couldn’t reconnect to my desktop via VMware View.
The reason? The broker was sending me the new IP address of the Cisco VPN Adapter, which is an IP address on the VPN, and an IP address my local computer didn’t know about.
Now, if I were to log off instead of disconnect from my desktop, this would terminate the VPN session and therefore wouldn’t be a problem. But who wants to log off every time? More often than not, I have things open on my desktop (e.g. half written emails, documents, browsers with many many open tabs, etc.) that I don’t want to bother saving and closing every time I step away from the computer. And really the bigger issue is with unintentional disconnects that result from local power/network/OS issues.
I tried all sorts of things to fix this. Among other things, I tried …
- Reordering the NICs, hoping the broker was just grabbing the first NIC.
- Poking around the broker and agent install files, hoping to find a way to force the IP address.
- I even tried uninstalling and reinstalling the View agent and the Cisco client, hoping the order of installation might do the trick (admittedly, this was a random shot in the dark)
But nothing seemed to work. So until recently, to reconnect I would have to connect directly to my desktop via RDP, or connect to the console via the VMware Infrastructure Client, then disconnect the Cisco VPN and then reconnect via the View client.
See what I mean? Not a show stopper, but man what a pain in the butt!
Well, I found a way around this with a handy new addition to the Command Line Tool in View 4. Check out page 12 of the Command Line Tool for View Manager document, in the section titled “Override IP Address.” On the broker, from a DOS prompt in the c:\Program Files\VMware\VMware View\Server\bin directory, execute the following …
vdmadmin.exe -A -d <desktop name> -m <machine name> -override -i <hostname>
The “desktop name” is the name of the VM in the broker. The “machine name” is the name of the VM in vCenter. It’s likely they’ll be the same, but they don’t have to be and in fact, in my case they weren’t the same. The “hostname” can be either a FQDN or an IP address. Oh, and I can tell you that all parameters must be present or the command won’t execute.
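To make that concrete, here’s a hedged example; the pool name, VM name, and address below are made up, so substitute your own. The sketch just assembles the command and prints it, so you can sanity-check the arguments before running the real thing on the broker:

```shell
# Hypothetical values -- replace with your desktop (pool) name as it
# appears in the broker, the VM name from vCenter, and an address your
# client can actually reach (FQDN or IP).
DESKTOP="XP-Desktops"
MACHINE="AARON-XP01"
OVERRIDE_IP="192.168.1.50"

# All four parameters are required, or vdmadmin refuses to execute.
echo "vdmadmin.exe -A -d $DESKTOP -m $MACHINE -override -i $OVERRIDE_IP"
```

Running the printed command on the broker (not this sketch) is what actually applies the override.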
But that was all there was to it. Now I can disconnect and reconnect to my desktop, regardless of the state of my VPN client.
A few months ago I posted an update, with a section near the bottom titled “RoR (Ruby on Rails) and other next generation frameworks.” And in that section I made the statement …
About two years ago I was introduced to Ruby on Rails and since then, most of my development work has been with RoR. Thus far, however, I haven’t posted anything on this blog about RoR. Why? Two reasons. The apps I’ve written to date have absolutely nothing to do with VMware. And second, like I said, I’m an amateur. Anyone looking for RoR help and advice can probably find better info on actual RoR blogs. … But I’ve decided that this is about to change.
But since that post, I have yet to write anything about RoR. Why? Believe it or not, I’ve got a really good reason. You may have recently heard that VMware has acquired a company called SpringSource. And SpringSource offers support for a similar type of language and framework, which has deep roots in Java, called Groovy and Grails (in addition to a slew of other Java-related products and services). From the SpringSource website …
Grails is an advanced and innovative open source web application platform that delivers new levels of developer productivity by applying principles like Convention over Configuration.
Groovy is the leading open source dynamic language for the Java Virtual Machine that offers a flexible Java-like syntax that most Java developers can learn in a matter of hours.
Once I learned about the acquisition, I had to make a decision. Do I continue down the RoR path? Or do I switch gears and go in the direction that VMware’s going? Not that it’s impossible to be good at both — or even difficult for a true developer — but I’m not a developer by trade and I’ve got too much going on in my life and with VMware to focus on more than one language and framework at a time.
Now, this may sound like an easy decision, as it would naturally make sense to follow my employer’s lead. But while I’ve played with many different languages in my past (e.g. C++, Visual Basic, Perl and Ruby) and even become fairly proficient in one or two, the one language I’ve avoided has been Java. Frankly, Java just isn’t fun for the amateur developer, in my humble opinion. But after doing my homework and reading numerous blog posts such as Bye bye Ruby, hello Groovy, I decided to make the switch.
And so far I’m pretty happy with my decision. Groovy may have Java-like syntax, but it is a dynamic language that is a lot of fun to code in and it’s pretty darn powerful. I’ve already finished my first web app written in Groovy and Grails (a reporting and graphing tool that the local OHV reps and SEs will use) and it’s about to go live. So right now, I’m feeling pretty Groovy.
Actually, as I type this, I’m at the New Orleans airport after three days at SpringOne 2GX, where I’ve been immersed in all things Groovy, Grails, Spring, etc. It’s been a great event, where I sat in on many fantastic sessions and got to meet super crazy smart people. That’s always fun for me. But right now I’m in information overload. I need to compile my notes (which I had to take by hand because my laptop battery decided to reduce its charge life to about 5 minutes. Grrrrr.) and put them into something meaningful for the audience of this blog, which for the most part is not developers.
Until next time, check out Groovy and Grails and read the good tutorials out there like …
- Mastering Grails by Scott Davis
- The Official Grails Quick Start Guide
- Grails and Google AppEngine Beginners Guide by Morten Nielsen
- The Groovy Getting Started Guide
I’m in New Orleans. This is arguably one of the funnest cities on earth (or so I’m told, this is my first time here). But despite the crazy night life and all the energy outside, I type this blog post sitting in a dark hotel room with a stiff drink in my hand. I’m not in the mood to venture outside right now. The vodka tonic helps blur a reality I keep avoiding.
Just a few short hours ago, while sitting in one of the sessions at the SpringOne 2GX event, I received an email from one of my oldest and dearest friends. His mother, Martha Speck, lost her battle with cancer. A battle that began a mere two weeks ago and frankly, a battle that I didn’t even know was being fought until today. While reading his message, I found myself trying not to break down in tears in front of 50 of my peers in the session. And my emotional response came as somewhat of a surprise to me because it had been a number of years since I’ve spoken to Mrs. Speck. But when someone touches your heart and life, time is irrelevant.
Do you have a teacher that you identify as the one person who really inspired you? I do. It was Mrs. Speck. She was my English and Creative Writing teacher in high school. Anyone who had Mrs. Speck for Creative Writing at Liberty High School knows what an incredible teacher she was. Steve and I actually had her class together at the same time. I’m sure that must’ve been a bit strange for her. Can you imagine? Her son and her son’s best friend sitting in the back of the classroom thinking they could get away with murder. Oh and we tried! We were young and stupid and tried to take advantage of the situation with all sorts of crazy nonsense. But she handled our antics with the perfect blend of class, humor and discipline. We got away with more than we probably should have, but not nearly as much as we wanted to! And somehow, through it all, I came out of her class with a passion for writing … something I certainly didn’t have going in to her class. She exposed and cultivated a latent passion within me and therefore she, in no small way, had a hand in shaping my future. If you think about it, this very blog is the result of her inspirational teaching.
But her inspirational teaching, in and of itself, wouldn’t make me well up with tears. You see, Mrs. Speck was once known to me as my “second mom.” During my high school years, her son Steve and I were the best of friends and we shared most of our free time together. This, of course, means that I had the fortuitous opportunity to spend a lot of time with Steve’s family. His mother, father and sister became my second family as I shared countless evenings and weekends with them. It was a great time in my life filled with so many wonderful memories.
But despite the wonderful memories I have of my time with Steve and his family, right now as I sit here in this dark hotel room, I am overwhelmed with deep sadness and regret. Deep sadness because an amazing, brilliant woman who played a significant role in shaping my future, and more importantly, a woman who I once called “mom,” is gone. And I feel deep regret because as my life has taken me all over the world, it has been years since I’ve seen or spoken to her … something I will never be able to rectify.
At the end of the day, I believe the only thing we can hope for (as far as this earthly life is concerned) is to leave the world a little bit better than we found it. In fact, I believe there is no higher compliment than to simply say, “my life is better because of you.” This is a compliment I would pay to more than one person in my life for sure, but also to be sure, the list would be extremely short. So let me say, with a tear on my cheek and with all the sincerity and love in my heart …
Farewell Mrs. Speck. My life is better because of you.