I’m Feeling Groovy


A few months ago I posted an update with a section near the bottom titled "RoR (Ruby on Rails) and other next-generation frameworks."  And in that section I made the statement …


About two years ago I was introduced to Ruby on Rails and since then, most of my development work has been with RoR.  Thus far, however, I haven’t posted anything on this blog about RoR.  Why?  Two reasons.  The apps I’ve written to date have absolutely nothing to do with VMware.  And second, like I said, I’m an amateur.  Anyone looking for RoR help and advice can probably find better info on actual RoR blogs. … But I’ve decided that this is about to change.


But since that post, I have yet to write anything about RoR. Why? Believe it or not, I've got a really good reason.  You may have recently heard that VMware has acquired a company called SpringSource.  And SpringSource offers support for a similar language and framework with deep roots in Java, called Groovy and Grails (in addition to a slew of other Java-related products and services).  From the SpringSource website …


Grails is an advanced and innovative open source web application platform that delivers new levels of developer productivity by applying principles like Convention over Configuration.


Groovy is the leading open source dynamic language for the Java Virtual Machine that offers a flexible Java-like syntax that most Java developers can learn in a matter of hours.


Once I learned about the acquisition, I had to make a decision. Do I continue down the RoR path? Or do I switch gears and go in the direction that VMware's going? Not that it's impossible to be good at both — or even difficult for a true developer — but I'm not a developer by trade and I've got too much going on in my life and with VMware to focus on more than one language and framework at a time.

Now, this may sound like an easy decision, as it would naturally make sense to follow my employer's lead. But while I've played with many different languages in my past (e.g. C++, Visual Basic, Perl and Ruby) and even become fairly proficient in one or two, the one language I've avoided has been Java. Frankly, Java just isn't fun for the amateur developer, in my humble opinion. But after doing my homework and reading numerous blog posts such as Bye bye Ruby, hello Groovy, I decided to make the switch.

And so far I'm pretty happy with my decision. Groovy may have Java-like syntax, but it is a dynamic language that is a lot of fun to code in, and it's pretty darn powerful. I've already finished my first web app written in Groovy and Grails (a reporting and graphing tool that the local OHV reps and SEs will use) and it's about to go live.  So right now, I'm feeling pretty Groovy.

Actually, as I type this, I'm at the New Orleans airport after three days at SpringOne 2GX, where I've been immersed in all things Groovy, Grails, Spring, etc. It's been a great event, where I sat in on many fantastic sessions and got to meet super crazy smart people. That's always fun for me. But right now I'm in information overload. I need to compile my notes (which I had to take by hand because my laptop battery decided to reduce its charge life to about 5 min.  Grrrrr.) and put them into something meaningful for the audience of this blog, which for the most part is not developers.

Until next time, check out Groovy and Grails and read the good tutorials out there like …

    Farewell Mrs. Speck. My life is better because of you.

    I’m in New Orleans.  This is arguably one of the funnest cities on earth (or so I’m told, this is my first time here).  But despite the crazy night life and all the energy outside, I type this blog post sitting in a dark hotel room with a stiff drink in my hand.  I’m not in the mood to venture outside right now.  The vodka tonic helps blur a reality I keep avoiding.

    Just a few short hours ago, while sitting in one of the sessions at the SpringOne 2GX event, I received an email from one of my oldest and dearest friends.  His mother, Martha Speck, lost her battle with cancer.  A battle that began a mere two weeks ago and frankly, a battle that I didn’t even know was being fought until today.  While reading his message, I found myself trying not to break down in tears in front of 50 of my peers in the session.  And my emotional response came as somewhat of a surprise to me because it had been a number of years since I’ve spoken to Mrs. Speck.  But when someone touches your heart and life, time is irrelevant.

    Do you have a teacher that you identify as the one person who really inspired you?  I do.  It was Mrs. Speck.  She was my English and Creative Writing teacher in high school.  Anyone who had Mrs. Speck for Creative Writing at Liberty High School knows what an incredible teacher she was.  Steve and I actually had her class together at the same time.  I’m sure that must’ve been a bit strange for her.  Can you imagine?  Her son and her son’s best friend sitting in the back of the classroom thinking they could get away with murder.   Oh and we tried!  We were young and stupid and tried to take advantage of the situation with all sorts of crazy nonsense.  But she handled our antics with the perfect blend of class, humor and discipline.  We got away with more than we probably should have, but not nearly as much as we wanted to!  And somehow, through it all, I came out of her class with a passion for writing … something I certainly didn’t have going in to her class.  She exposed and cultivated a latent passion within me and therefore she, in no small way, had a hand in shaping my future.  If you think about it, this very blog is the result of her inspirational teaching.

    But her inspirational teaching, in and of itself, wouldn’t make me well up with tears.  You see, Mrs. Speck was once known to me as my “second mom.”   During my high school years, her son Steve and I were the best of friends and we shared most of our free time together.  This, of course, means that I had the fortuitous opportunity to spend a lot of time with Steve’s family.  His mother, father and sister became my second family as I shared countless evenings and weekends with them.   It was a great time in my life filled with so many wonderful memories.

    But despite the wonderful memories I have of my time with Steve and his family, right now as I sit here in this dark hotel room, I am overwhelmed with deep sadness and regret.  Deep sadness because an amazing, brilliant woman who played a significant role in shaping my future, and more importantly, a woman who I once called “mom,” is gone.  And I feel deep regret because as my life has taken me all over the world, it has been years since I’ve seen or spoken to her … something I will never be able to rectify.

    At the end of the day, I believe the only thing we can hope for (as far as this earthly life is concerned) is to leave the world a little bit better than we found it.  In fact, I believe there is no higher compliment than to simply say, “my life is better because of you.”  This is a compliment I would pay to more than one person in my life for sure, but also to be sure, the list would be extremely short.  So let me say, with a tear on my cheek and with all the sincerity and love in my heart …

    Farewell Mrs. Speck.  My life is better because of you.

    Capacity Conundrum Part Deux


    – The vTrooper Report –


This is a continuation of the Capacity Conundrum; if you missed the first part, start here.

    $ per Compute VM

So let's cut to the chase.  In the case of the compute tiles of our Quad, we have a price per vCPU and a price per GB of RAM to settle.  Keeping our example 2U server in play, we could expect to spend approximately $15,000 for a 2U fully loaded with 4GB DIMMs.  Unfortunately, part of that $15K is consumed by I/O cards and maintenance, which needs to be pulled out to get the compute number.  For our argument we will use $10K for the compute system without the I/O cards and maintenance costs; this is the CAPEX we will offset in our per-vCPU and per-GB values.

    vCOMPUTE – FIREPOWAH!

We know how a CPU works, right? Move the process into memory, execute CPU cycles, churn, churn some more, hand it back to the I/O guys, rinse and repeat.  Basically, this is where the hardware container happens in our data centers.  I say container because it's easy to show it as a box; it's hard to define what it will always be in physical form.  1U, 2U, 4U, half blade, full blade, appliance, PC; you name it, it is probably in someone's "datacenter."  The lowest common denominator I have been able to settle on for a common form factor is cores per RAM.  Grouping per socket fits because you are measuring the memory that is close to the CPU socket.  The NUMA architectures of AMD and Intel, with memory controllers on board and transports to the memory DIMMs without going through the I/O controllers (e.g. the Northbridge), help define the grouping.

TECHNOTE:  Every core has associated memory banks it will use, and every container (physical server) has a series of sockets that it controls.  A hypervisor has a limit to how well it can tie the associated memory space to the nearest vCPU.  Generally the hypervisor will schedule available vCPUs from the same socket and keep the corresponding memory for those processes in the memory banks of that socket.  It does this for efficiency on the x86 architectures.  It can move the VM to another socket and readdress the memory, but there is a 'cost' associated with such a move.  The path of least resistance is to stay in the same socket.

If you create a 4 vCPU VM and run it on a 1-core processor, it gets bogged down.  If you run the same VM on a two-socket quad-core host (8 cores), the four cores utilized by the VM are likely to be on either socket 1 or socket 2.  The cost to the scheduler of splitting the vCPUs between the two physical sockets is greater than running them in the same socket.  AMD delivered this earlier than Intel and sustained higher levels of virtualization consolidation per host than similar-class Intel systems could provide through the Northbridge.  Core i7 is a new game for Intel, and the Nehalem results show the improvements.

For more in-depth information, here is a good read:  CPU Scheduler in VMware ESX

We have a host with a $10K CAPEX charge that has two sockets at a 4-core/45GB socket ratio, with approximately $5K spent on each socket.  Looking at our hardware invoice, the CPU cores are about 25% of the cost of a socket, so we can assume that our per-socket cost breaks down into 25% cores and 75% memory.  So our socket ratio yields a $1,250 cost for 4 cores and $3,750 for 45GB of memory:

Per Core CAPEX = $312.50;  Per GB RAM CAPEX = $83.33

That gives us a bare-metal cost without a hypervisor charge on top, but we need a hypervisor to get a VM running.  Adding in the cost of a per-socket license of ESX Enterprise Plus (worst case), you can add $3,500 per socket.

ESX Lic. Cost per socket CAPEX = $3,500

The raw burn rate of the host would be $8,500 per socket if we never loaded a VM on it.  Well, we bought it for a reason, so let's get our money back. If we target the standard allocation for this host (4-core/45GB socket ratio), we get our target VM count of 16 per socket (45GB / 2.8GB ≈ 16 VMs of 1 vCPU / 2.8 GB RAM, or 4 vCPUs per core).  Also, keep in mind that we broke the socket cost down 25% to CPU and 75% to memory, so we will keep that same split here.  If we don't do the split, then any VM deployed to the socket will bear the same cost regardless of its size.

ESX Lic. Cost per VM = $218.75  ($3,500 / 16)

    -Or-

Split it by the 25/75% we used previously for the cost of CPU and memory, and you get a slightly different calculation.

$3,500 × .25 = $875; $875 / 16 VMs ≈ $55    and    $3,500 × .75 = $2,625; $2,625 / 45 GB ≈ $58

per vCPU = $55

per vMEM = $58

    Adding it up with our target ratios in tow we get the burn rate of the $ per Compute on a VM basis.

($312.50 / 4, at a 4:1 vCPU-to-core ratio) + ($83.33 × 2.8 GB) + {($55 × 1 vCPU) + ($58 × 2.8 GB)} = $530

Or summarized: (vCPU = $78) + (vMEM = $233) + (Hyper$ = $219) = (vCOMPUTE = $530)

Assuming 8,760 hours (1 year), this VM would cost $0.06/hr in vCOMPUTE.
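If you want to replay the arithmetic with your own ratios, here is a rough back-of-the-envelope sketch in PowerShell.  To be clear, this is not the calculator mentioned later in this post; the function name, parameters and defaults are my own illustration, lifted from the numbers above.

************************************************************************

# Back-of-the-envelope vCOMPUTE burn rate (illustrative only; names and defaults are assumptions)
function Get-VComputeCost {
    param(
        [double]$SocketHwCost   = 5000,   # hardware CAPEX per socket ($10K host / 2 sockets)
        [double]$SocketLicCost  = 3500,   # ESX Enterprise Plus license per socket
        [int]   $CoresPerSocket = 4,
        [double]$RamPerSocketGB = 45,
        [double]$VCpuPerCore    = 4,      # target consolidation ratio (16 x 1 vCPU VMs on 4 cores)
        [double]$VmVCpu         = 1,      # size of the VM being costed
        [double]$VmRamGB        = 2.8
    )

    # Split each socket 25% CPU / 75% memory, per the hardware invoice
    $PerCore    = ($SocketHwCost  * 0.25) / $CoresPerSocket                    # ~$312.50
    $PerGB      = ($SocketHwCost  * 0.75) / $RamPerSocketGB                    # ~$83.33
    $LicPerVcpu = ($SocketLicCost * 0.25) / ($CoresPerSocket * $VCpuPerCore)   # ~$55
    $LicPerGB   = ($SocketLicCost * 0.75) / $RamPerSocketGB                    # ~$58

    $VCompute = (
        (($PerCore / $VCpuPerCore) * $VmVCpu) +
        ($PerGB * $VmRamGB) +
        (($LicPerVcpu * $VmVCpu) + ($LicPerGB * $VmRamGB))
    )

    "vCOMPUTE per year: {0:C2}" -f $VCompute
    "vCOMPUTE per hour: {0:C4}" -f ($VCompute / 8760)
}

Get-VComputeCost    # roughly $530/yr and $0.06/hr for the 1 vCPU / 2.8 GB example above

***********************************************************************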

Let's apply that to some other VM systems and see if it sticks.  If we plan for the following VM deployment on our socket:

[Image: vmGrid (planned VM deployment on the socket)]

    The costs would spit out as such:

[Image: vCompute (per-VM vCOMPUTE cost breakdown)]

    Or slice it up into a per hour number:

[Image: perhour (per-hour cost for each VM)]

So based on this analysis, some of my VMs probably only cost $0.05 per hour for vCompute.  Interesting. What is more interesting is the fact that the memory cost associated with a VM scales more accurately with consumption.  You can have as much memory as you like for your new 4 and 8 GB aspirations (e.g. memory leaks); you just need to pay for it accordingly.

Too bad that only pays for the top part of my total cost model.  That said, the benefit here is that this model can span hypervisors, and any market hypervisor can be split up to show the cost of a VM consumed on a Xen, KVM, Virtual Iron, Parallels or Hyper-V infrastructure.

I will be working on a few PowerShell scripts and Excel calculators that one can use to make this model more repeatable. At the very least, it is a model that I will use to evaluate CapacityIQ and third-party products like the offering from VKernel, and the output they measure, especially if they add costs on a per-socket basis, which I can now calculate as overhead.

Alas, there is more to consider. Stay tuned for Part III – "the I/O that binds"

    Get Thin Provisioning working for you in vSphere

    Going Thin and not looking back.

Yes, I am slowly losing my hair like many other aging men out there, but it wouldn't be virtual insanity if I were blogging about my personal male pattern baldness issues.  With the latest release of VMware vSphere comes a lot of new features and functionality that can be leveraged to make our lives easier.  One of these features, which I personally have been looking forward to for a while, is Thin Provisioning.  If you aren't familiar with this technology, jump over to Gestalt IT for a great explanation of what it is and how it works.

One of the exciting promises of thin provisioning is getting more "bang for your buck" out of the expensive enterprise storage you have been investing in for your ESX environment.  But, as Bret Michaels once said, "Every rose has its thorn," and there are some things to look out for and considerations to make before implementing thin disk technologies.

Efficiencies are great if they work right and don't overcomplicate the environment.

Do your homework and make sure you understand the characteristics of the virtual machine that you are considering migrating to a thin disk configuration.  The last thing you want is to convert every VM to thin disk and, four months down the road, have all of your datastores filling up while you scramble for storage CAPEX.  Some people are of the opinion that you should do thin provisioning either on the host side (VMware) or on the storage array side, but not both.  Take a gander at Chad Sakac's blog, which discusses thin on thin and some thoughts around each of these approaches.  I'm not going to go into all of the pluses and minuses of thin provisioning but rather focus on how to make it work for you.

    Coffee Talk


So now that we have some of the basics out of the way, I wanted to share my thoughts on thin provisioning.  Like many organizations, we get requests from our customers that err on the side of caution.  They want to plan for the worst case and ensure that their project and/or application isn't set up for failure.  I don't blame them really; I do it myself all the time when I make coffee at home.  I always end up making more coffee than I typically drink, just in case I might need that extra charge.  The best way to do that is to pad it: request more than you might really need, just in case something comes up down the road.  Virtual machine disk storage in some cases fits this same profile.  If my coffee maker granted me access to hot coffee on demand, I would stop making extra coffee.  Thin disks can give your end users that capacity on demand, so you can gain control of the padding effect that typically takes place in most corporate organizations.

    Take it back…

So now you have done your research, and you're starting to get a feel for what this thin stuff is and how it might play out in your shop.  It's go time.  If you're a smaller VMware customer, you probably already have an idea of which disks are good targets to convert.  If you run a larger environment, it might be a little more difficult to gauge where the bloated pigs are hiding.

I worked at GE for a couple of years and was exposed to some of the Six Sigma methodologies they preach as well as practice.  Sounds boring, right?  Not really.  You can really leverage DMAIC for a lot of IT-related problems, issues and projects.  You don't have to take it to the extreme; use the framework to help guide you on your quest:

    DMAIC

    The DMAIC project methodology has five phases:

    • Define high-level project goals and the current process.
    • Measure key aspects of the current process and collect relevant data.
    • Analyze the data to verify cause-and-effect relationships. Determine what the relationships are and attempt to ensure that all factors have been considered.
    • Improve or optimize the process based upon data analysis using techniques like Design of experiments.
    • Control to ensure that any deviations from target are corrected before they result in defects. Set up pilot runs to establish process capability, move on to production, set up control mechanisms and continuously monitor the process.

We have already defined our project goals and what we are trying to accomplish.  We need a good "Measure" tool to really find where we might benefit from thin provisioning.  PowerShell is a great tool that most VMware administrators use, or have at least heard of, so this was the first place I turned to for assistance.

Alan Renouf of "Virtu-AL" http://www.virtu-al.net/ gave me a hand in writing the PowerShell script needed (thanks again, Alan!).  Alan already had a one-liner script to produce a list of VMs, their assigned disks, and how much data each disk was consuming.  I needed the ability to see this data outside a PowerShell window and be able to analyze it in a better format.  We have a decent-sized VMware environment, and exporting this out to a .csv for analysis is extremely helpful.  Here is the script!

    ************************************************************************

# Set the filename for the exported data
$Filename = "C:\VMDisks.csv"

Connect-VIServer MYVIServer

# Grab every VM and add a NumDisks property so we can sort by it.
# Sorting descending puts the VM with the most disks first, which makes
# Export-Csv emit enough DiskN columns to hold every VM in the list.
$AllVMs = Get-View -ViewType VirtualMachine
$SortedVMs = $AllVMs | Select *, @{N="NumDisks";E={@($_.Guest.Disk).Length}} | Sort NumDisks -Descending

$VMDisks = @()
ForEach ($VM in $SortedVMs) {
    $Details = New-Object PSObject
    $Details | Add-Member -Name Name -Value $VM.Name -MemberType NoteProperty
    $DiskNum = 0
    ForEach ($Disk in $VM.Guest.Disk) {
        # One set of columns per guest disk: path, capacity and free space in MB
        $Details | Add-Member -Name "Disk$($DiskNum)Path" -MemberType NoteProperty -Value $Disk.DiskPath
        $Details | Add-Member -Name "Disk$($DiskNum)Capacity(MB)" -MemberType NoteProperty -Value ([math]::Round($Disk.Capacity / 1MB))
        $Details | Add-Member -Name "Disk$($DiskNum)FreeSpace(MB)" -MemberType NoteProperty -Value ([math]::Round($Disk.FreeSpace / 1MB))
        $DiskNum++
    }
    $VMDisks += $Details
    Remove-Variable Details
}
$VMDisks | Export-Csv -NoTypeInformation $Filename

    ***********************************************************************

So now that you have this great spreadsheet, you can do all sorts of crazy sorting and reporting within Excel.  Take some time on phase 3 and "Analyze" what you're seeing.  Talk to your VM stakeholders to see how things might be changing from their perspective.  Try to plan for the surprises and position yourself accordingly.
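If you want a quick ranking of candidates before you even open Excel, a few extra lines of PowerShell will total the free space per VM straight from the same CSV.  This is only a rough sketch: guest free space is just an approximation of what a thin conversion will actually reclaim, and the column names assume the script above was used unchanged.

************************************************************************

# Rank VMs by total free guest disk space, using the CSV produced above
$Report = Import-Csv "C:\VMDisks.csv"

$Report | ForEach-Object {
    $Row  = $_
    $Free = 0
    # Sum every DiskN FreeSpace(MB) column that has a value for this VM
    $Row.PSObject.Properties |
        Where-Object { $_.Name -like "Disk*FreeSpace(MB)" -and $_.Value } |
        ForEach-Object { $Free += [int]$_.Value }
    New-Object PSObject -Property @{ Name = $Row.Name; FreeMB = $Free }
} | Sort-Object FreeMB -Descending | Select-Object -First 10

***********************************************************************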

Next is the "Improve" phase of DMAIC (see, it's easy!).  This is the part where you actually do the work.  It's time to start leveraging the Storage VMotion APIs and reclaim some of that unused disk.  You can click through it in the VC client, or script it with PowerCLI (a sketch follows the steps below).

    1. Select the target VM in the VC client.
    2. Right click on the VM and select the option “Migrate”.
    3. Select the option “Change Datastore”.
    4. Select the destination, or click advanced if you are targeting one particular disk.
    5. Select “Thin provisioned format”.
    6. Select Finish.

    Rinse and Repeat for the rest of that spreadsheet you have worked so hard on.
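If clicking through those six steps for every row of the spreadsheet sounds tedious, the same migration can be scripted.  Here is a minimal PowerCLI sketch; the VM and datastore names are placeholders, and it assumes a PowerCLI release in which Move-VM supports the -DiskStorageFormat parameter (check Get-Help Move-VM in your version):

************************************************************************

# Storage VMotion a VM to another datastore and convert its disks to thin
# (server, VM and datastore names below are placeholders)
Connect-VIServer MYVIServer

$VM          = Get-VM -Name "FatVM01"
$Destination = Get-Datastore -Name "Datastore02"

# -DiskStorageFormat Thin converts the VMDKs during the move
Move-VM -VM $VM -Datastore $Destination -DiskStorageFormat Thin

***********************************************************************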

The last phase of DMAIC is "Control".  This is one of the most important pieces of thin provisioning, in my opinion.  At a minimum, you need to set up Virtual Center alerts to monitor when your datastores are approaching critical levels.  You can't implement thin disks in your vSphere environment and walk away.  The smart people over at VMware have given us the ability to monitor datastore disk space usage and over-allocation with the latest release of Virtual Center.  Set up your monitors so you are e-mailed when some of these thin disks begin to grow and you need to take action.
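To supplement the vCenter alarms, a short scheduled PowerCLI check along these lines can also nag you about datastores that are getting tight.  Again, just a sketch; the 20% threshold is an arbitrary example and the server name is a placeholder:

************************************************************************

# Flag datastores with less than 20% free space (threshold is an arbitrary example)
Connect-VIServer MYVIServer

Get-Datastore | ForEach-Object {
    $PctFree = ($_.FreeSpaceMB / $_.CapacityMB) * 100
    if ($PctFree -lt 20) {
        "{0}: only {1:N1}% free ({2:N0} of {3:N0} MB)" -f $_.Name, $PctFree, $_.FreeSpaceMB, $_.CapacityMB
    }
}

***********************************************************************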

[Image: vCenter datastore usage and over-allocation alarm settings]

Eric Gray of VMware takes this to the next level; check out his blog post on utilizing PowerShell to prevent datastore emergencies.  My personal approach to this concept is to set up a "hot spare" datastore for your environment: try to reclaim enough storage from your thin disk migrations to free up a spare datastore you can fall back on.  Implementing an automated recovery solution like Eric's will help you sleep easier at night.  Worried about what might happen if your script doesn't work, or you hit the perfect storm and end up with a full VMFS volume?  Intelligence has been built into vSphere to automatically pause the virtual machines. Impressive.  Check out Eric's video:

    Wrapping it all up

Thin disk provisioning is a great feature that you should consider leveraging in your environment.  With some forward thinking and best practices, you can achieve higher ROI on your ESX storage.  VMware vSphere offers the ability to migrate from thick to thin with no downtime, so you can begin reclaiming storage on the fly.  Keep it simple: start out with a high-level analysis of your infrastructure.  Identify the candidates that are a good fit and worth focusing on.  Set up your alerts on the datastores as soon as you migrate your first virtual machine so you are protected from problems down the road.  Consider taking automated actions if your datastores reach critical thresholds.

    I hope you found this article helpful, good luck!

    Scott Sauer

    Apple Mac OSX in the cloud?

A colleague of mine, Mark Medovich, turned me on to an interesting solution for Mac users like me. I am a Windows user recently converted to the Mac (for a little over a year now). I have to admit, I love my Mac. I still use many Windows applications. I like Outlook better than Entourage, and a PowerPoint presentation still works better in the version in which it was created. Hurray for Fusion… But I digress…. This solution is about running the Mac OS on top of vSphere. What? Really? I thought that wasn't allowed? Actually, the great community over at DiscCloud has created a way to do so without actually "running" the Mac OS on vSphere. http://disccloud.ning.com/

You do still have to run the Mac OS on your Mac, and you can't run the Mac OS on non-Mac hardware. Pretty cool stuff. The way this works is that it mounts the Mac OS instance as another instance on your Mac client. Everything you work on is stored and secured on the vSphere server. Backups can happen in the cloud.

    All of the FAQs are here, including how it doesn’t affect the Apple EULA. http://disccloud.ning.com/page/disccloud-faq

This is a real-world example of how the cloud can benefit the average Mac user, like me. This example brings home how the cloud will eventually be a natural way for all of us to do our computing in the future. Apple, as we know, has had a resurgence in popularity in the last several years, with the iPhone being one of the major drivers. But those of us with MacBook Pros have also helped. This is all consumer/end-user-driven growth, not enterprise-driven growth. Actually, the Mac causes traditional enterprise IT shops difficulty. This is not unlike how VMware got its momentum, through individual server administrators who were tired of doing work at 2am on Sunday morning. And now DiscCloud, a group of passionate Mac users and developers, has developed a very innovative solution using vSphere and a cloud computing mindset. Here's to the community, the individual and innovation. Now it's up to companies like VMware to take those innovations and enhance their ability to be managed within the enterprise.