It was just over a year ago that vSphere was launched, and now it is time for the first major update to the product. On Tuesday 13th July, vSphere 4.1 was released to the public; my apologies for the timing of this blog post, but as always work has taken priority. 4.1 is much more than a patch / security update and is more akin to the VI 3.0.x to 3.5 jump we saw a couple of years ago. One of the first big changes, mentioned in recent release notes, is that vCenter is now 64-bit only, so for all of you running vCenter on 32-bit servers and 32-bit Windows, it's time to move to a 64-bit platform.
So what new features can you expect to see in vSphere 4.1? As always, the release notes are a good place to start >> http://www.vmware.com/support/vsphere4/doc/vsp_41_new_feat.html
Below are some of the major features, updates and announcements that stood out for me:
VMware ESX. VMware vSphere 4.1 and its subsequent update and patch releases are the last releases to include both ESX and ESXi hypervisor architectures. Future major releases of VMware vSphere will include only the VMware ESXi architecture.
- VMware recommends that customers start transitioning to the ESXi architecture when deploying VMware vSphere 4.1.
This has been on the cards for a long time, with much speculation that the first vSphere release was going to be ESXi only. It's time to start getting used to ESXi and the vMA. It's also time for a lot of third-party software manufacturers to get their ESXi support to 100%.
Scripted Install for ESXi. Scripted installation of ESXi to local and remote disks allows rapid deployment of ESXi to many machines. You can start the scripted installation with a CD-ROM drive or over the network by using PXE booting. You cannot use scripted installation to install ESXi to a USB device.
At last we can script ESXi installations and install by using a PXE boot. @Vinf_net pointed me in the direction of this very useful post: http://communities.vmware.com/blogs/vmwareinsmb/2010/07/13/esxi-41-scripted-installation-via-pxe-and-kickstart
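To give a rough idea of what a scripted install looks like, below is a minimal ESXi 4.1 kickstart sketch. The password, install URL and NIC name are placeholders, and the exact set of supported directives varies by build, so check the ESXi Installable and vCenter Server Setup Guide (and the post linked above) before using anything like this:

```
# Accept the EULA and set the root password (placeholder value)
vmaccepteula
rootpw MySecretPassword
# Wipe the first local disk and create a VMFS datastore on it
autopart --firstdisk --overwritevmfs
# Pull the install media over the network (hypothetical URL)
install url http://deploy.example.local/esxi41
# DHCP on the first NIC for the management network
network --bootproto=dhcp --device=vmnic0
```

Point your PXE boot configuration at this file and the host installs itself unattended.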
vSphere Client Removal from ESX/ESXi Builds. For ESX and ESXi, the vSphere Client is available for download from the VMware Web site. It is no longer packaged with builds of ESX and ESXi. After installing ESX and ESXi, users are directed to the download page on the VMware Web site to get the compatible vSphere Client for that release. The vSphere Client is still packaged with builds of vCenter Server.
Not really groundbreaking news, but it is one that could easily catch you out. If you are working on your first new installation of ESX 4.1 at a site with limited or no internet connection, you must remember to either a) download the vSphere Client beforehand, or b) install the vSphere Client from the vCenter DVD or, if vCenter is already installed, from its web interface.
Hardware Acceleration with vStorage APIs for Array Integration (VAAI). ESX can offload specific storage operations to compliant storage hardware. With storage hardware assistance, ESX performs these operations faster and consumes less CPU, memory, and storage fabric bandwidth.
This is a huge feature: when doing storage tasks like clones, Storage vMotion etc., vSphere can offload the work to your storage array, meaning the task completes faster and with less pressure on your ESX hosts. Remember, to be able to use functionality like VAAI you will need at least VMware vSphere Enterprise licensing http://www.vmware.com/files/pdf/vsphere_pricing.pdf
Storage Performance Statistics. vSphere 4.1 offers enhanced visibility into storage throughput and latency of hosts and virtual machines, and aids in troubleshooting storage performance issues. NFS statistics are now available in vCenter Server performance charts, as well as esxtop. New VMDK and datastore statistics are included. All statistics are available through the vSphere SDK.
Storage I/O Control. This feature provides quality-of-service capabilities for storage I/O in the form of I/O shares and limits that are enforced across all virtual machines accessing a datastore, regardless of which host they are running on. Using Storage I/O Control, vSphere administrators can ensure that the most important virtual machines get adequate I/O resources even in times of congestion.
We now have greater visibility of storage performance and performance issues, not only through esxtop but also the performance charts, and we can now control access to storage in the way we have been used to managing memory and CPU.
New Chart Options
Enable Storage I/O Control
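To make the shares idea concrete, here is a tiny Python sketch of how a congested datastore's I/O budget gets divided in proportion to per-VM shares. This is my own illustration, not VMware code; the VM names and numbers are made up:

```python
def allocate_iops(total_iops, vm_shares):
    """Split a congested datastore's IOPS budget in proportion to shares.

    vm_shares maps VM name -> share value (e.g. Normal = 1000, High = 2000).
    """
    total_shares = sum(vm_shares.values())
    return {vm: total_iops * shares / total_shares
            for vm, shares in vm_shares.items()}

# Three VMs contending for 1000 IOPS; the SQL server holds High shares,
# the web servers Normal. The SQL server gets half the budget.
print(allocate_iops(1000, {"sql01": 2000, "web01": 1000, "web02": 1000}))
```

When the datastore is not congested, shares don't bite; the arithmetic above only matters in times of contention, which is exactly the point of the feature.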
Network I/O Control. Traffic-management controls allow flexible partitioning of physical NIC bandwidth between different traffic types, including virtual machine, vMotion, FT, and IP storage traffic (vNetwork Distributed Switch only).
With Network I/O Control, 2 x 10Gb network connections per ESX host is a real possibility: by controlling the bandwidth with either shares or limits, we can allocate chunks of the 10Gb connection to each function inside our ESX host, e.g. management traffic, iSCSI etc. This is much like the way we control bandwidth with the HP Flex-10s. I can't wait for 10Gb prices to come down a little so we can start implementing this method. Going from 8 to 10 network cables per host to 2 would be a big improvement.
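The same shares arithmetic applies to carving up a 10GbE link, with the addition of optional hard limits. Here is a hedged Python sketch of the idea; the pool names, share values and limits are illustrative, not VMware's actual traffic classes or API:

```python
def partition_bandwidth(link_gbps, pools):
    """Partition a physical link among traffic pools under full load.

    pools maps pool name -> (shares, limit_gbps or None).
    Each pool gets its shares-proportional slice, capped at its limit.
    """
    total_shares = sum(shares for shares, _ in pools.values())
    allocation = {}
    for name, (shares, limit) in pools.items():
        fair_slice = link_gbps * shares / total_shares
        allocation[name] = min(fair_slice, limit) if limit is not None else fair_slice
    return allocation

# One 10Gb uplink shared between VM traffic, vMotion (capped at 4Gb) and
# management, using made-up share values.
print(partition_bandwidth(10, {"vm": (100, None),
                               "vmotion": (50, 4),
                               "mgmt": (50, None)}))
```

Unused headroom from a capped pool would in practice be redistributed; this sketch only shows the basic shares-plus-limit split.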
VMware HA Scalability Improvements. VMware HA has the same limits for virtual machines per host, hosts per cluster, and virtual machines per cluster as vSphere. See Configuration Maximums for VMware vSphere 4.1 for details about the limitations for this release.
VMware HA Healthcheck and Operational Status. The VMware HA dashboard in the vSphere Client provides a new detailed window called Cluster Operational Status. This window displays more information about the current VMware HA operational status, including the specific status and errors for each host in the VMware HA cluster. See the vSphere Availability Guide.
There have been a number of enhancements to the availability functionality; for full information it is worth reading the following white paper >> http://www.vmware.com/files/pdf/techpaper/VMW-Whats-New-vSphere41-HA.pdf. Amongst these features, HA has been improved with healthcheck functionality that lets you check the health of your HA cluster.
vCenter Converter Hyper-V Import. vCenter Converter allows users to point to a Hyper-V machine. Converter displays the virtual machines running on the Hyper-V system, and users can select a powered-off virtual machine to import to a VMware destination. See the vCenter Converter Installation and Administration Guide.
Always useful for those Hyper-V to VMware migrations 😉
DRS Virtual Machine Host Affinity Rules. DRS provides the ability to set constraints that restrict placement of a virtual machine to a subset of hosts in a cluster. This feature is useful for enforcing host-based ISV licensing models, as well as keeping sets of virtual machines on different racks or blade systems for availability reasons.
This will come in very useful when trying to split machines over multiple blade chassis, or when trying to ensure a certain VM lives on newer, better-performing hosts.
Memory Compression. Compressed memory is a new level of the memory hierarchy, between RAM and disk. Slower than memory, but much faster than disk, compressed memory improves the performance of virtual machines when memory is under contention, because less virtual memory is swapped to disk.
VMware have certainly done it again with this one: first we had transparent page sharing, and now we have memory compression for when memory is contended on a host. Memory compression is enabled by default, and when a memory page needs to be swapped ESX will first attempt to compress the page. You are able to disable or fine-tune this feature with the Mem advanced settings. By default, the compression cache is sized at 10% of the VM's allocated memory.
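The decision ESX makes for each page can be sketched in a few lines of Python. This is purely my illustration of the documented behaviour (a 4 KB page is kept in the compression cache only if it shrinks to half size or less, otherwise it is swapped), not VMware's implementation, and zlib here just stands in for whatever compressor the hypervisor uses:

```python
import os
import zlib

PAGE_SIZE = 4096  # a standard 4 KB guest memory page

def reclaim_page(page: bytes):
    """Try compressing a page before swapping it to disk.

    Keep the page in the in-memory compression cache only if it
    compresses to half size or less; otherwise fall back to swap.
    """
    compressed = zlib.compress(page)
    if len(compressed) <= PAGE_SIZE // 2:
        return ("compression_cache", compressed)
    return ("swap_to_disk", page)

# A zero-filled page compresses easily and stays in RAM;
# random data usually does not and goes to disk.
print(reclaim_page(b"\x00" * PAGE_SIZE)[0])
print(reclaim_page(os.urandom(PAGE_SIZE))[0])
```

The win is exactly what the blurb describes: decompressing from the cache is far cheaper than a disk swap-in, so contended VMs degrade much more gracefully.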
vMotion Enhancements. In vSphere 4.1, vMotion enhancements significantly reduce the overall time for host evacuations, with support for more simultaneous virtual machine migrations and faster individual virtual machine migrations. The result is a performance improvement of up to 8x for an individual virtual machine migration, and support for four to eight simultaneous vMotion migrations per host, depending on the vMotion network adapter (1GbE or 10GbE respectively).
It's always nice when there is a performance increase in existing functionality, and being able to vMotion more VMs at once will definitely help when it comes to putting a host into maintenance mode.
ESX/ESXi Active Directory Integration. Integration with Microsoft Active Directory allows seamless user authentication for ESX/ESXi. You can maintain users and groups in Active Directory for centralized user management and you can assign privileges to users or groups on ESX/ESXi hosts. In vSphere 4.1, integration with Active Directory allows you to roll out permission rules to hosts by using Host Profiles.
This will definitely help in larger teams, meaning you can now manage all VMware vSphere authentication, from host to vCenter, with AD.
Configuring USB Device Passthrough from an ESX/ESXi Host to a Virtual Machine. You can configure a virtual machine to use USB devices that are connected to an ESX/ESXi host where the virtual machine is running. The connection is maintained even if you migrate the virtual machine using vMotion.
At last we can use our USB licence dongles with our ESX guest servers. I'm sure there are also 101 other uses people have been waiting to implement but haven't been able to because of the lack of USB support.
That's really just scratching the surface of the new features and new possibilities this release brings to our virtualised environments; below are a few links to help you learn more.
vChat with TrainSignal’s David Davis and Simon Seagrave from TechHead – New vSphere 4.1 and Top 15 killer features
VMware 4.1 documentation
Eric Sloof has put together some fantastic articles about the new functionality in 4.1, check out his blog