Looking for Radically Simple Storage?

Hopefully by now you have heard of Virtual SAN, part of VMware's Software-Defined Datacenter strategy. Over 10,000 customers have registered for the beta so far, and it has become a frequent subject in many conversations. So where does it fit in your environment? Do you have a need for lower-cost Test/Dev, VDI, or remote office environments? Those are great places to get started. Even though it has people's interest, I hear all the time that it is "only a version 1," but did you know it has been in development for a number of years? Download the beta and give it a try.

If you have been waiting anxiously because you like what you have seen, sign up for the upcoming online event:

Virtual SAN Event March 6
http://www.vmware.com/now

Date: Thursday, March 6, 2014
Time: 10:00 a.m. – 11:00 a.m. PST


VSAN is fully integrated with vSphere, and provisioning storage literally takes two clicks (try the hands-on lab to see how easy it is). Of course, there are some requirements you need to meet first, so take a look at them below:

vSphere Requirements

vCenter Server

VSAN requires at a minimum that the VMware vCenter Server™ version is 5.5. Both the Microsoft Windows version of vCenter Server and the VMware vCenter Server Appliance™ can manage VSAN. VSAN is configured and monitored via the VMware vSphere Web Client, and this also requires version 5.5.

vSphere

VSAN requires at least three vSphere hosts (each with local storage) to form a supported VSAN cluster. This enables the cluster to meet its minimum availability requirement of tolerating at least one host, disk, or network failure. The vSphere hosts require vSphere version 5.5 at a minimum.

Storage Requirements

Disk Controllers

Each vSphere host participating in the VSAN cluster requires a disk controller. This can be a SAS/SATA host bus adapter (HBA) or a RAID controller. However, the RAID controller must function in what is commonly referred to as pass-through mode or HBA mode. In other words, it must be able to pass up the underlying hard disk drives (HDDs) and solid-state drives (SSDs) as individual disk drives without a layer of RAID sitting on top. This is necessary because VSAN will manage any RAID configuration when policy attributes such as availability and performance for virtual machines are defined. The VSAN Hardware Compatibility List (HCL) will call out the controllers that have passed the testing phase.

Each vSphere host in the cluster that contributes its local storage to VSAN must have at least one HDD and at least one SSD.
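If you would rather work from the ESXi shell than the Web Client, the disk group pairing of an SSD and one or more HDDs can also be claimed with esxcli. This is only a minimal sketch; the naa.* device identifiers below are placeholders for your own devices, and on a cluster configured for automatic disk claiming this step is not needed:

  # List local devices; SSDs are reported with "Is SSD: true"
  esxcli storage core device list

  # Claim one SSD (-s) and one HDD (-d) into a VSAN disk group on this host
  esxcli vsan storage add -s naa.5000000000000001 -d naa.5000000000000002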

Hard Disk Drives

Each vSphere host must have at least one HDD when participating in the VSAN cluster. HDDs make up the storage capacity of the VSAN datastore. Additional HDDs increase capacity but might also improve virtual machine performance. This is because virtual machine storage objects might be striped across multiple spindles.

This is covered in far greater detail when VM Storage Policies are discussed later in this paper.

Solid-State Disks

Each vSphere host must have at least one SSD when participating in the VSAN cluster. The SSD provides both a write buffer and a read cache. The more SSD capacity the host has, the greater the performance, because more I/O can be cached.

NOTE: The SSDs do not contribute to the overall size of the distributed VSAN datastore.

Network Requirements

Network Interface Cards

Each vSphere host must have at least one network interface card (NIC). The NIC must be 1Gb capable; however, as a best practice, VMware recommends 10Gb NICs. For redundancy, a team of NICs can be configured on a per-host basis. VMware considers this a best practice but does not deem it necessary when building a fully functional VSAN cluster.

Supported Virtual Switch Types

VSAN is supported on both the VMware vSphere Distributed Switch™ (VDS) and the vSphere Standard Switch (VSS). No other virtual switch types are supported in the initial release.

VMkernel Network

On each vSphere host, a VMkernel port for VSAN communication must be created. The VMkernel port is labeled Virtual SAN. This port is used for inter-node communication within the cluster and also for reads and writes when one of the vSphere hosts in the cluster owns a particular virtual machine but the actual data blocks making up the virtual machine files are located on a different vSphere host in the cluster. In this case, I/O must traverse the network configured between the hosts in the cluster.
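In the Web Client this is just a checkbox on the VMkernel adapter, but the equivalent can be scripted from the ESXi shell. A minimal sketch, assuming a standard vSwitch port group named "VSAN" and an unused vmk2 interface (the names and addresses are illustrative, not required values):

  # Create a VMkernel interface on the VSAN port group and give it an address
  esxcli network ip interface add --interface-name=vmk2 --portgroup-name=VSAN
  esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static

  # Tag the interface for Virtual SAN traffic
  esxcli vsan network ipv4 add --interface-name=vmk2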

Before you decide to go out and build your own, take a look at the "VMware Virtual SAN Design & Sizing Guide" to get an idea of the size and number of components you will need.

Additional resources:

VSAN Product page:
http://www.vmware.com/products/virtual-san/

VMware Storage Blog:
http://blogs.vmware.com/vsphere/storage

VSAN Product walk thru:
http://vmwarewalkthroughs.com/VSAN/

VMworld 2013 Session STO4798 Software Defined Storage:
http://www.youtube.com/watch?v=92PThRfKGQw

HOL-SDC-1308 Virtual SAN (VSAN) and Virtual Storage Solutions
http://labs.hol.vmware.com/HOL/#lab/562

Cormac Hogan has a great blog with a number of resources on VSAN:

http://cormachogan.com/vsan/

Duncan Epping also has some great info on his blog:

http://www.yellow-bricks.com/virtual-san/

 

Lastly, voting is open for Top vBlog 2014, so get signed up and vote.

Up Close and Personal With IBM PureApplication PaaS

The converged infrastructure value proposition is, by now, pretty evident to everyone in the industry. Whether that proposition can be realized is highly dependent on your particular organization and specific use case.

Over the past several months, I have had an opportunity to be involved with a very high-profile pilot with immovable, over-the-top deadlines. In addition, the security requirements were downright oppressive and necessitated a completely isolated, separate environment. Multi-tenancy was not an option.

With all this in mind, a pre-built converged infrastructure package became the obvious choice. Since the solution would be built upon a suite of IBM software, IBM pitched its new PureApplication system. My first reaction was to look at it as an obvious IBM competitor to the venerable vBlock, but I quickly dismissed that as I learned more.

The PureApplication platform is quite a bit more than a vBlock competitor. It leverages IBM's services expertise to provide a giant catalog of pre-configured, multi-tiered applications that have essentially been captured and turned into what IBM calls a "pattern." The simplest way I can describe a pattern is as something like the application blueprint Aaron Sweemer was talking about a few months back. A pattern consists of all tiers of an application, which are deployed and configured simultaneously, on demand.

As an example, if one needs a message broker app, there's a pattern for it. After it is deployed (usually within 20-30 minutes), what's sitting there is a DataPower appliance, web services, a message broker, and a database, all configured and ready to run. Once you load up your specific BAR files and configure the specifics of how inbound connections and messages will be handled, you can patternize all of that with script packages, so that the next time you deploy, you're ready to process messages in 20 minutes. If you want to create your own patterns, there's a pretty simple drag-and-drop interface for doing so.


I know what you're thinking: there are plenty of other ways to capture images, vApps, etc. to make application deployment fast. But what PureApp brings to the table is the (and I hate using this phrase) best practices from IBM's years of consulting and building these solutions for thousands of customers. There's no ground-up installation of each tier, with the tedious hours of configuration and the cost associated with those hours. That's what you are paying for when you buy PureApp.

Don't have anyone in-house with years of experience deploying SugarCRM, Business Intelligence, Message Broker, SAP, or BPM from the ground up? No problem. There are patterns for all of them. There are hundreds of patterns so far, and many more are in the pipeline from a growing list of global partners.

The PureApplication platform uses IBM blades, IBM switching, and IBM V7000 storage. The hypervisor is VMware, and they even run vCenter. The problem is, you can't access vCenter or install any add-on features. They've written their own algorithms for HA and some of the other things that you'd expect vCenter to handle. The reasoning for this, ostensibly, is so they can support other hypervisors in the future.

For someone accustomed to running VMware and vCenter, it can be quite difficult to get your head around having NO access to the hosts or to vCenter to do any troubleshooting, monitoring, or configuration. But the IBM answer is that this is supposed to be a cloud in a box, and the underlying infrastructure is irrelevant. Still, going from a provider mentality to an infrastructure consumer one is a difficult transition, and one that I am still struggling with personally.

Licensing on this system is handled simply: you can use as many licenses for Message Broker, DB2, Red Hat, and the other IBM software pieces as you can possibly consume with the box. It's a smart way to implement licensing. You're never going to be able to run more licenses than you "pay for" with the finite resources included with each system, and it's extremely convenient for the end user, as there is no need to keep up with licensing for the patternized software.

Access to the PureApp platform is via the PureApp console or CLI. It's a good interface, but it's also definitely a 1.x interface. There is very extensive scripting support for adding to patterns and to individual virtual machines. There are also multi-tenancy capabilities, achieved by creating multiple "cloud groups" to carve up resources. There are things that need to be improved, like refresh and access to more in-depth monitoring of the system. Having said that, even in the past six months the improvements made have been quite significant. IBM is obviously throwing incredible amounts of resources at this platform. Deploying patterns is quite easy, and there is an IBM Image Capture pattern that will hook into existing ESXi hosts to pull off VMs to use in Pure and prepare them for patternization.


Having used the platform for a while now, I like it more every day. A couple of weeks ago, we were able to press a single button and upgrade firmware on the switches, blades, ESXi, and the V7000 storage with no further input from us. My biggest complaint so far is that I have no access to vCenter to install things like vShield, backup software, monitoring software, etc. But again, it's just getting used to a new paradigm that's hard for me. IBM does have a monitoring pattern that deploys Tivoli, which helps with monitoring, but it's one more thing to learn and administer. That said, I do understand why they don't want people looking into the guts on a true PaaS.

Overall, I can say that I am impressed with the amount of work that has gone into building the PureApplication platform, and am looking forward to the features they have in the pipeline. The support has been great so far as well, but I do hope the support organization can keep up with the exponential sales growth. I have a feeling there will be plenty more growth in 2014.

Tintri VMstore upgrade process made simple

Customers asked for it and we delivered it. You can now upgrade your Tintri VMstore via the management UI. I created a video to show how easy this upgrade process is for our customers. I recall being a customer not that long ago and having to engage my storage vendor to have a technical resource dispatched to perform this same task because the process was too "complicated for customers." At Tintri we believe that storage should be easy to install, configure, and manage, hence the "Zero Management Storage" messaging that you have probably noticed.

-Enjoy!

Tintri Syslog Configuration with VMware Log Insight


Tintri T600 series

Some of you might have missed the recent big announcement from Tintri: we launched a new product line to expand our rock-solid platform. The T600 series (pictured above) was launched shortly after VMworld this year. Our customers love Tintri and how we help them manage their virtual environments, and they are screaming for more. Our flash-first file system gives them the feel of an all-flash array but at a fraction of the cost. This platform not only brings new hardware models to our customers, so they can be very prescriptive about their storage requirements, but also brings a few exciting new software features to the table.

 

Tintri OS 2.1

The new Tintri OS (where much of our intellectual property exists) continues to get better and better, offering more of the features that our customers have been asking for. The 2.1 version of the code offers several new features:

  • Snapshot enhancements
  • SNMP support (published MIB)
  • LACP support for advanced network configuration
  • Software upgrades from the UI
  • Syslog Integration

I thought I would dive into the syslog feature since I just had a customer ask about configuring this the other day.

 

Setting up Syslog configuration in Tintri

If you are an existing Tintri customer, you will notice that the menu list under Settings now looks a bit different. Notice the "more" tab in the image below on the left-hand side. This is where some of the new features, such as LACP and upgrading from the UI, now live.

 

[Screenshot: Tintri Settings menu]

 

To configure the syslog integration, select the "Alerts" link about halfway down the menu options. You will be presented with a screen that should look similar to the image below. Most likely your email alerting will already be configured if you are an existing T540 customer who upgraded to 2.1.x.

 

[Screenshot: Tintri Alerts settings with the syslog Remote Server field]

 

The syslog configuration setting is the new field titled "Remote Server." This is where you enter the DNS hostname or IP address of your syslog server so we can forward messages to your instance of VMware Log Insight. Once you enter the correct values for your environment, select the "Test forwarding" option to ensure that communication is working correctly between the Tintri datastore and Log Insight.

 

Validate Log Insight is getting data

VMware Log Insight is designed to accept incoming syslog messages by default, so no configuration is needed to enable syslog support. So, it's time to check the Log Insight server for our test data! Log in to your own instance of Log Insight and select the "Interactive Analytics" option at the top of the screen. In the search field, enter the value "test" to search for the test message we just sent from the Tintri datastore.
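If you want an independent sanity check that the Log Insight listener is reachable before involving the array, you can push a throwaway syslog message from any Linux host. This is just an illustrative sketch; "loginsight.example.com" is a placeholder for your Log Insight server, and it assumes the default UDP port 514:

  # Send a one-off RFC3164-style test message to Log Insight over UDP 514
  echo "<14>$(date '+%b %d %H:%M:%S') $(hostname) syslog-test: hello from the lab" | nc -u -w1 loginsight.example.com 514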

 

[Screenshot: Log Insight Interactive Analytics showing the test messages]

 

You can see in the example above that we are receiving the test messages from the datastores I have configured for syslog monitoring. You can now begin to create saved queries for events that you are interested in, such as cloning, system health metrics, and hardware-related issues.

Currently there is no Tintri content pack listed in the Solution Exchange, but that is something I am planning on changing in the not-so-distant future!

-Scott

A new blog for Aaron


When I started VirtualInsanity in 2008, I never anticipated what it would become. Instead of just a place where I would post random thoughts every so often, it has become a place that many new and part-time bloggers have come to call their virtual home. By this measure alone, I think VirtualInsanity can be deemed a "success."

 

The one challenge I personally have with VirtualInsanity is that the content from our bloggers is very much virtualization and infrastructure heavy. That is by no means a bad thing. Not at all. But my focus over the past few years has been on automation and orchestration, application development, and an overall trend/movement known as DevOps. Can I create content in my newer areas of focus here on VirtualInsanity? Sure, but I don't think it resonates very well with the typical VirtualInsanity reader.

 

 

Therefore I've decided it's time for me to create a new blog, ActualClouds, which will be a site dedicated to the non-infrastructure and non-virtualization pieces of cloud computing. But let me also be clear about one thing: VirtualInsanity is going nowhere. I plan to transfer ownership of the blog to Scott Sauer, one of my original co-authors, and he and the other bloggers will continue to post here (as will I from time to time).

 

So, wish me luck.  ActualClouds is live and I posted my first entry this morning, BladeLogic Integration via vCO and SOAP.  Please stop by and check it out.  And if you like what you see, please help me get the word out about ActualClouds.

 

–Aaron Sweemer (Principal Systems Engineer @ VMware)

Have you discovered the elephant in the room?

 

Over the past couple of years, Big Data has been growing in popularity. Companies are trying to figure out how to better utilize data from across their organizations and social media sites. They want to use this data to:

 

· Develop search engines and improve accuracy

· Develop patterns to better understand customers

· Make better predictions about customer needs

· Target marketing to customers

 

and many more ways that I am sure we don't know about (is the NSA listening?). You may not even know whether your company is running this or starting to look at this technology, and that is why it is important to understand what it is and how you can help. Typically, the infrastructure guys don't find out about applications until there is an issue. They just need a server, right?

Infrastructure IT has to align with the business and understand the business requirements as virtualization grows in our organizations. We can't just hand over a server and move on anymore. As automation continues to make its way into the environment, this becomes even more important for us to understand. We need to start designing for requirements and scaling appropriately. So, do you know whether Hadoop is being deployed or discussed in your organization?

If yes, then as a VMware administrator you can provide value to the business in those efforts. In the vSphere 5.5 release, Big Data Extensions (BDE) was announced as part of the vSphere Enterprise and Enterprise Plus editions. This new tool helps you deploy and manage Hadoop clusters running on vSphere. Its features include:

 

Quickly Deploy, Manage, and Scale Hadoop Clusters. Big Data Extensions enables the rapid deployment of Hadoop clusters on VMware vSphere. You can quickly deploy, manage, and scale Hadoop nodes using the virtual machine as a simple and elegant container. Big Data Extensions provides a simple deployment toolkit that can be accessed through VMware vCenter Server to deploy a highly available Hadoop cluster in minutes using the Big Data Extensions user interface.

Support for Major Hadoop Distributions. Big Data Extensions includes support for Apache Hadoop, Cloudera, Greenplum, Hortonworks, MapR, Pivotal and (coming soon) Intel. HBase, Pig, and Hive are also supported. The Big Data Extensions virtual appliance includes Apache Hadoop 1.2. Customers can easily upload distributions of their choice and configure Big Data Extensions to deploy their preferred distributions.

Graphical User Interface Simplifies Management Tasks. The Big Data Extensions plug-in, a graphical user interface integrated with vSphere Web Client, lets you easily perform common Hadoop infrastructure and cluster management administrative tasks.

Elastic Scaling Lets You Optimize Cluster Performance and Resource Utilization. Elasticity-enabled clusters start and stop virtual machines automatically and dynamically to optimize resource consumption. Elasticity is ideal in a mixed workload environment to ensure that high priority jobs are assigned sufficient resources. Elasticity adjusts the number of active compute virtual machines based on configuration settings you specify.

 


 

That's great, but what is Hadoop? Apache Hadoop is an open-source, large-scale, distributed batch-processing infrastructure. Got all of that? Well, the easiest way to explain it is that we want to collect data and figure out how to make it useful. The goal is to take large amounts of data and break it into smaller, easier data sets that can be processed at the same time. This also allows the data to be crawled for interesting tidbits about you and what you did last night on Twitter. An example might be to collect data from your company website and/or other social media sites and look for trends in what people are saying about your products and where they are located, so marketing can be focused on a particular area. As infrastructure folks we need to pay attention to the scale, because Hadoop can run a lot of nodes and process a lot of data. It can run on local disk, since it has its own file system (HDFS), or it can integrate with storage systems like EMC Isilon that already speak HDFS (check out the EMC Starter Kit below if you have Isilon).
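To make "smaller data sets processed at the same time" concrete, here is roughly what the classic WordCount example looks like on a running cluster. This is only a hedged sketch: the paths are made up, and the examples jar name assumes the Apache Hadoop 1.2 release that ships in the BDE virtual appliance (it varies by distribution):

  # Copy a local log file into HDFS, where it is split into blocks across the data nodes
  hadoop fs -mkdir /user/demo/input
  hadoop fs -put access.log /user/demo/input/

  # Run the bundled WordCount job; each mapper counts words in its own block,
  # and the reducers merge the partial counts in parallel
  hadoop jar $HADOOP_HOME/hadoop-examples-1.2.1.jar wordcount /user/demo/input /user/demo/output

  # Read back the merged results
  hadoop fs -cat /user/demo/output/part-* | head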

That said, if your company is just getting started with Hadoop, this is the perfect time to look at Big Data Extensions. It can help you accelerate your deployment as well as take advantage of everything you already get with vSphere. Are you interested in taking a look at the demo? Interested in learning more about BDE? If so, you can visit these sites:

 

VMware Apache Hadoop on vSphere

Try/Download vSphere BDE

Benchmarking Case Study Hadoop Performance on vSphere

EMC Starter Kit

 

Don't forget to get some hands-on training at the VMware Hands-on Labs; HOL-SDC-1309 is the lab for Big Data Extensions. Other supported distributions of Hadoop offer downloads of their products for you to try out:

 

Pivotal

Cloudera

Hortonworks

MapR

 

I wanted to say thank you to Kevin Leong and Sarah Korah for spending some time with our customers and educating us all on the great stuff VMware is doing. If you or anyone you know is going to be at Hadoop World/Strata later this month in NYC (October 28th-30th), stop by, say hello to the VMware BDE team, and join the Hadoop virtualization action.

Home Lab Series: Brief Status Update

 

This is a slight sidestep from my 'ESXi 5.5, VSAN, and Mac Mini' series, which I am still very much working on.  I am presently testing a custom ISO I built that should replace the need for the workaround outlined in William Lam's post covering the Mac Mini Thunderbolt adapter caveat.  The issue at its core is that the Thunderbolt Ethernet adapter's device ID is missing from the driver map file, so the driver is not loaded for it by the kernel at boot.  The immediate workaround can be run manually and/or added to an ESXi host's /etc/rc.local.d/local.sh file to run automatically at startup.

The issue I have experienced with this workaround is that, when a host is rebooted, the existing binding for the Thunderbolt adapter is lost and needs to be reconfigured.  An additional reboot/reload of vmkdevmgr is required to clear out the old adapter before it can be re-added.

This is not a showstopper; it simply adds the task of gracefully removing the adapter from its vSwitch/VDS prior to performing maintenance on a host, which I have successfully done without impact to my VMs, even when running on VSAN.

This is where the custom ESXi installation ISO I'm working on comes into play.  It includes a modified driver map file with the device ID for the Apple Thunderbolt adapter included by default.  (I already covered some statements on supportability in my first entry on this topic, and this certainly falls into that category.)  I will include the ISO and the steps to build one yourself in part two as soon as I validate it!  Until then, here is a little of what I have been doing with my home lab.

 

(Slightly off-topic and a bit dated at almost a year old, but check out this post on a company that took it upon itself to leverage 160 Mac Mini servers to replace Apple's retired Xserve platform.)

I underestimated just how much I would nerd out after getting this lab running.  In my nested environment I was constrained by resources and availability (too noisy to leave on), which prevented me from getting too carried away.  That's a bloated apology for taking my time with this second update, but as you'll see in the screenshot to the right, I was far from idle.

Does standing up a virtual load balancer to test external NAT to my VPN and Ventrilo servers, in the midst of other tasks, qualify as a symptom of ADD?

To assist in benchmarking this new environment, I prioritized deploying an evaluation version of vCenter Operations Manager.  Establishing baselines and understanding workloads is the best way to maximize a home lab investment.  You may have also noticed the 'MacOSX' VM, which plays host to my family's Plex Media Server.  This system has a stringent SLA, which my wife monitors closely, and I dare not risk breaching it.  (She holds a large stake in the budgeting process.)

Apart from playing with F5's BIG-IP LTM VE and the OpenVPN appliance, I have flashed my ASUS RT-N16 router firmware, replacing it with DD-WRT to test out the OpenVPN integration and other features, all of course with some future projects and home lab scenarios in mind.

In short, this lab is already paying dividends by providing a place for me to learn, work, and play.  The uptime of my first few VMs is nearing the two-week mark, power outages and all.  Check back soon or follow me on Twitter for more details about my next update.


VMware and Puppet Labs

One of the most popular partner labs at VMworld this year was Puppet Labs' HOL-PRT-1307 (Automate vSphere Provisioning and Management). If you haven't heard of these guys, you should take a look at what they are doing. Puppet allows you to manage your infrastructure through the lifecycle of a deployment; that means provisioning and configuration. And of course they are working with VMware to create integrations that will benefit your infrastructure.

If you are interested in checking out some of the integrations that are coming, take a look at the presentation that Becky Smith did at PuppetConf. Have you heard of Project Zombie? If not, Nick Weaver from VMware talked about VMware Hybrid Services and how we are leveraging Puppet to manage those environments. This should give you some insight into the benefits the products provide together. It is pretty typical to hear the Linux team in an organization talk about using Puppet or something similar, and recently I have heard a lot of development teams talking about how they can leverage it. It has a lot of uses in other parts of the infrastructure as well, as you can see in the presentations from PuppetConf. Take a look at Puppet Forge and you will see modules not only for operating systems (including Windows) but also for middleware, applications, networking, and storage.
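If you have never seen Puppet's declarative style, a one-liner gives a feel for it. This is just an illustrative sketch; the package and service names are examples, not anything specific to the VMware integrations:

  # Describe the desired state; Puppet works out the platform-specific steps to get there
  puppet apply -e 'package { "ntp": ensure => installed } service { "ntpd": ensure => running, enable => true }'

Run it twice and the second run changes nothing, which is the idempotency that makes Puppet attractive for keeping large fleets of VMs in a known state.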

Interested in getting started with Puppet and seeing how it could help you manage your vSphere environment? Start by reading some great articles already put together by:

Nick Weaver

William Lam

Nan Liu

Also, there is some training on the Puppet Labs website that you can get started with: https://puppetlabs.com/learn. And don't forget the integration with Application Director and vCAC; training is available for those as well:

vCloud Automation Center

http://mylearn.vmware.com/mgrReg/plan.cfm?plan=39561&ui=www_edu
Application Director
http://mylearn.vmware.com/mgrreg/courses.cfm?ui=www_edu&a=det&id_course=157650

Stay tuned for additional info and how to articles.

Building a Home Lab with ESXi 5.5, VSAN, and Mac Mini Server (6,2) (Part 1 of 3)

 

This will be a three-part series about my experience building a lab environment using Apple's Mac Mini with ESXi 5.5 and VMware VSAN.  This first post will focus on my choice of hardware components and on supportability topics.  In part two I will provide a detailed account of the steps taken to run ESXi 5.5 on the Mini, the configuration of VSAN, and the creation of a VM Storage Profile.  Lastly, in part three, I will focus on the performance of my VSAN datastore using esxtop and on testing failure/maintenance scenarios in the environment.  Please don't hesitate to reach out to me on Twitter (@initDave) with any questions, comments, or critiques you might have.

Having a place to experiment with software and hardware without the fear of impacting the work or services of others is a beautiful thing.  You can make and break configurations all day long without any apprehension.  Nesting an environment, that is, running VMs within VMs, is also a great way to get your hands dirty.  This is the method I used for the last several years on a Dell PowerEdge 2950 server, and it has served me very well.
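For anyone curious how nesting works in practice: on vSphere 5.1 and later it mostly comes down to a couple of settings in the nested ESXi VM's .vmx file, made while the VM is powered off. This is only a sketch of the common approach, not an officially supported configuration:

  vhv.enable = "TRUE"
  guestOS = "vmkernel5"

The first line exposes hardware-assisted virtualization to the guest, and the second identifies the guest as ESXi 5.x.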

With that, I am excited to announce this server's retirement and to usher in my Home Lab v2.0, stepping away from nesting while simultaneously reducing my footprint.  (Sort of; the Synology NAS and Cisco 200 Series switch may challenge that argument.)  For the last three weeks I have been running vSphere 5.5 with the VSAN beta, and it has been working great.  I'd say that I'm sad to see my rackmount server go, but I'd be lying.

 

Home Lab v1.0 (old)

  • Dell PowerEdge 2950
  • 2x Quad-Core Xeon E5500
  • 32GB RAM
  • 6x 450GB 15k SAS (LSI RAID controller)
  • 2x 1Gb NIC

Home Lab v2.0 (new)

  • 3x Mac Mini Server (each: 2.3GHz Quad-Core Intel Core i7, 16GB RAM, 128GB Samsung 840 SSD, 1TB 7200RPM HDD)
  • Synology DS1813+ NAS (8x 2TB WD Red)
  • Cisco SG200-26 24-Port GigE Switch (LACP and static VLANs)

 

What drew me to the idea of running this lab on the Mac Mini was the novelty, the challenge, and the size of it.  I referenced William Lam's work over at virtuallyghetto before and after purchasing my lab, and I highly recommend you survey the waters prior to committing to any one build. Chris Wahl, on his website, details his own lab while providing a plethora of links to HCL- and non-HCL-compliant builds from others.

An important consideration when designing this home infrastructure setup was supportability.  The Mac Mini hardware is not on the official VMware HCL, nor is its AHCI controller as it pertains to VSAN.  With this project being a personal investment, it is important that you understand and are comfortable with that.  However, while still not 'official', the VMware community is filled with a rock-solid group of enthusiasts and professionals who impress me more and more every day.  I utilized some stellar blogs in this little project of mine, and I will provide a consolidated reference list of all the articles I leveraged at the end of this series.  If you aren't already reading or following the likes of William Lam, Duncan Epping, Scott Lowe, and Cormac Hogan, I highly suggest you do!

Let's get into the details of the hardware.

Compute

An additional benefit of the Mac as an ESXi host is the ability to run OS X virtualized without any special tomfoolery.  If you follow me on Twitter you may have seen that I run a 2007 Mac Pro as a Plex Media Server, and I felt it was time to take this thing virtual.  The Mini was intriguing to me in conjunction with VSAN because of how small the footprint of the environment is.  My cluster capacity stands at 28GHz across 12 physical Hyper-Threading-enabled cores, 48GB of memory, 2.73TB of data storage, and 300GB of SSD read cache.  All of that, and you can barely even tell they are powered on; compared to the full rackmount server, well, you can't compare.

Storage

Another key design element was a highly redundant form of storage for family photos, videos, and general backups, hence the Synology DS1813+.  The folks over at Synology caught my eye on the VMworld floor with their iSCSI storage replication/failover features, scalability, Time Machine drive emulation, and VAAI support.  A great alternative would be the DS1513+, which is a 5-bay NAS and is cheaper.  I was also interested in a storage solution that would function independently of my vSphere environment, which I fully intend to destroy and rebuild at least 10x in the next month or two.

Networking

For the network side I chose the Cisco SG200 series because I couldn't afford a GigE Catalyst switch, and this was a cheap alternative that fit my needs.  The 200 series supports LACP (up to four groups) and VLANs with static routing, which covers everything I was looking for in my attempt to simulate a segmented physical network.  I would have liked the additional features a full IOS or NX-OS implementation would bring, but my basement is not in that market segment, unfortunately.

Look for part two in the next couple of days, where I dive into the technical bits of configuring ESXi 5.5 and VSAN on the Mac Mini.  I'll leave you with a shopping list of the components inside this apparatus and links to some great immediate resources.

Apple Mac Mini Server (2012)

Corsair 16GB (2x8GB) DDR3 1600Mhz

Samsung 840 Pro Series 128GB SSD

SanDisk Cruzer Fit 8GB (For ESXi installation)

Apple Thunderbolt to GigE Adapter

Synology DS1813+ iSCSI NAS

2TB Western Digital Red NAS Hard Drive

Cisco SG200-26 24 Port GigE Smart Switch

 

Duncan Epping – http://www.yellow-bricks.com

William Lam – http://www.virtuallyghetto.com

Cormac Hogan – http://cormachogan.com