An update to the home lab post I wrote earlier in the year >> https://virtualisedreality.com/2010/02/14/home-lab/
I have been running my Home Lab stage 2 since February this year and it has been a fantastic asset for my continual education; I have completed a number of betas and familiarised myself with a lot of new software using it. Shortly after the initial build I found I was suffering from the performance of the SATA disks, which caused the system to slow down and feel like sludge when I had a large number of virtual machines running. At that time I looked into the cost of SSDs but was unable to afford the upgrade, so I purchased three cheap SATA disks and spread the VMs amongst them. This solved the problem temporarily, but I still wasn’t very happy with the performance.
With the launch of vSphere 4.1 it was time to crank the lab up, and after a bit of reading I decided it was time to invest in an SSD.
Below are some of the blog posts I read when deciding whether an SSD was the right purchase for the home lab:
After reading these posts, particularly the one on vinf.net, I decided an SSD was the right thing to do. Price and capacity were more important than performance for me, as I was pretty sure any SSD would be better than my current SATA setup. In the end I went for the same drive as Simon Gallagher on vinf.net: the Kingston SSDNow V Series 128GB SSD.
Not only were the price and capacity right on this drive, but as it came with a desktop mounting kit I didn’t have to think about buying any converters or cradles.
The drive arrived the very next day, as I had paid for Saturday delivery, and I was quick to unbox it and get it installed.
The SSD was soon unboxed; it comes complete with a power and SATA adapter to use with a desktop PC (or my ML115), a cradle to mount it in a 3.5″ enclosure, and a data transfer CD (this was binned).
The drive was quickly fitted to the cradle, ready to be mounted in the ML115 and installed in my server.
I took this opportunity to upgrade the lab to ESXi 4.1 GA. The plan was to run ESXi 4.1 installed onto one of the SATA drives, with a number of infrastructure machines from this instance of ESXi running on top of the SSD. As well as these infrastructure machines, such as a DC and vCenter, I would create several nested ESXi servers and use a virtual appliance for some shared storage.
I decided on this setup as, for day-to-day usage, I would run the one ESXi server with my DC, vCenter and so on, which would allow me to test new releases and third-party software. When a cluster was required I could boot the VSA and the nested ESXi hosts; a cluster-in-a-box solution means no more expense on additional hosts. I decided to use the HP Lefthand VSA for the storage as it’s a product I am familiar with. The key to making this work on the relatively small capacity was to thin provision absolutely everything.
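The boot-on-demand design above implies an ordering constraint: the DC has to be up before vCenter, and the VSA has to be presenting its LUN before the nested ESXi hosts can see their shared storage. A minimal sketch of that dependency chain (the VM names are illustrative stand-ins; in the real lab these are powered on by hand or via the host startup settings):

```python
# Sketch of the on-demand cluster boot order: each VM lists the VMs it
# depends on, and we emit a start order that respects those dependencies.
# VM names are hypothetical stand-ins for the lab machines described above.

def boot_order(deps):
    """Return a start order in which every VM comes after its dependencies."""
    order, seen = [], set()

    def visit(vm):
        if vm in seen:
            return
        seen.add(vm)
        for dep in deps.get(vm, []):
            visit(dep)
        order.append(vm)

    for vm in deps:
        visit(vm)
    return order

lab = {
    "DC1": [],                   # domain controller first (AD/DNS)
    "vCenter": ["DC1"],          # vCenter needs the DC
    "LefthandVSA": ["vCenter"],  # VSA presents the shared LUN
    "ESX2": ["LefthandVSA"],     # nested hosts need the shared storage
    "ESX3": ["LefthandVSA"],
}

print(boot_order(lab))
```

The same ordering is what the Virtual Machine Startup and Shutdown settings on the physical host enforce, just expressed as data.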
The following volumes are now available in my home lab:
VMFS1-3 are SATA disks connected directly to the ML115 motherboard; note the onboard RAID controller isn’t recognised by vSphere. VMFS4-SSD is the new SSD drive, and SharedVMFS1-SSD is a LUN presented by the Lefthand VSA, which itself lives on VMFS4-SSD.
A 50GB thin provisioned LUN has been configured on the Lefthand VSA as follows:
These are the virtual machines in my environment at present; note that ESX2 and ESX3 in the host cluster are the nested ESXi hosts. These are currently powered off, hence the hosts in the production cluster are not visible.
With over 300GB of provisioned virtual machine disks and 10GB of ISOs on my 128GB SSD, I still have 40GB of free space thanks to thin provisioning.
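As a quick sanity check on those numbers (treating the figures above as round numbers and ignoring the GB/GiB difference in how the drive is formatted):

```python
# Rough overcommit arithmetic for the thin-provisioned SSD datastore.
# The figures are the round numbers from the post, not exact byte counts.
capacity_gb    = 128   # Kingston SSDNow V Series SSD
provisioned_gb = 300   # total provisioned virtual machine disk
isos_gb        = 10    # ISO library on the same datastore
free_gb        = 40    # reported free space

used_gb = capacity_gb - free_gb                  # blocks actually consumed
overcommit = (provisioned_gb + isos_gb) / capacity_gb

print(f"actually used: {used_gb} GB")            # -> 88 GB
print(f"overcommit ratio: {overcommit:.1f}x")    # -> 2.4x provisioned vs physical
```

So roughly 310GB of provisioned storage is sitting on 88GB of real blocks, a ~2.4x overcommit — which is exactly why thin provisioning everything was the key to the design.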
I could also relocate some of the VMs or individual disks back to the SATA disks for a more tiered storage system once the SSD starts to fill up.
Now for the main reason for purchasing the SSD: performance. I am pleased to report that I am amazed. I am able to run all these virtual machines with no issues whatsoever. I haven’t done any number crunching, but the fact that I can run what I need with ease makes me very happy. Previously, when the VMs were running on SATA, I would often see significant lag when trying to interact with them.
The final, very small change is that I’m now running a small piece of Wake-on-LAN software on my Mac, so whenever I want I can wake my home lab up without visiting the server itself: ultimate laziness when sat on the sofa! All my VMs then start in the required order using the Virtual Machine Startup and Shutdown options in the configuration of the physical ESXi host.
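For anyone curious what that Wake-on-LAN software actually sends: the "magic packet" is just six 0xFF bytes followed by the target NIC's MAC address repeated sixteen times, broadcast over UDP. A minimal sketch of the idea (the MAC address below is made up; any WoL tool does the equivalent):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the LAN (UDP ports 7 and 9 are customary)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

packet = magic_packet("00:1e:0b:aa:bb:cc")  # made-up MAC for illustration
print(len(packet))                          # 102 bytes: 6 + 6 * 16
```

The NIC has to have Wake-on-LAN enabled in the BIOS/firmware for the packet to actually power the box on, of course.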
Technically you are still running on SATA. 😉
Your research has helped me out a lot. Thanks for that. I am in the process of building my home lab and have just been researching the crap out of it. I read a blog from Duncan at yellow-bricks.com and he recommended a 40GB SSD. I was wondering, do you install the ESXi host on the SSD, or is it just used as a datastore? I plan to purchase VMware Workstation and run my ESXi 4.1 host virtually. I am just curious how you went about doing that?
Personally I only run the VMs on the SSD; my ESXi is installed on USB on one host and a SATA HD on another.