Archives For vSphere

vSphere iPad Client

March 18, 2011 — Leave a comment

Today VMware have launched the much anticipated vSphere Client for the iPad, originally announced at VMworld in September 2010. The vSphere iPad client isn't intended to replace or mimic the functionality of the Windows client, but rather to give you access to the most common tasks when you need them.

Since Apple released the original iPad in April 2010 the tablet market has gone from strength to strength, with companies such as Forrester predicting that by 2015 a third of all internet users will be tablet based. With these trends and predictions it's clear why VMware have started creating tablet versions of their tools. Building a tablet version of an application isn't just a simple process of converting the existing application to work on an iPad; it calls for a completely touch-compatible UI and user experience.


When speaking to the team behind the vSphere iPad client, they made clear this is definitely a 1.0 product and that they are looking forward to as much feedback as possible to help improve functionality and usability in future versions. The current product uses the vCMA (vCenter Mobile Access) fling as the backend to the application; the vCMA acts as the middle man between your vSphere environment and your iPad. Because the vCMA is effectively the brains of the operation, applications for other tablet platforms can easily be created. The future direction is for the vCMA functionality to be integrated into vCenter, and they are also looking to bring applications to other tablet platforms such as Android.


VMware have released a couple of videos to assist with configuring the vCMA

and with configuring and using the iPad vSphere Client

VMware have also announced a new community to assist with support and to collect feedback, which can be found here >>


As previously mentioned, the purpose of the iPad client isn't to offer the complete functionality of the Windows client, but access to the 80% of tasks that are most commonly used. The functionality of this 1.0 product is as follows.

  • Search for vSphere hosts and virtual machines
  • Monitor the performance of vSphere hosts and virtual machines
  • Manage virtual machines with the ability to start, stop and suspend
  • View and restore virtual machines’ snapshots
  • Reboot vSphere hosts or put them into maintenance mode
  • Diagnose vSphere hosts and virtual machines using built-in ping and traceroute tools


Cost and Availability

The application is available from the iTunes store now, and best of all it is free. I understand there were thoughts about charging for this application at one point, but I think they have done the right thing to get the most users.

My Thoughts

I will start by admitting I am a gadget freak, and particularly an Apple freak, so anything I can do on my iPad is good to me. A lot of thought has gone into the user interface and it makes the app really easy to use. The integration of the ping and traceroute tools is a nice addition when trying to diagnose problems remotely. I am finding the performance monitoring section a little lacking at the moment and would really like to be able to see CPU ready, disk latency, memory swapping, ballooning and so on. The application isn't currently cluster aware and only looks at each server individually. I am really looking forward to future releases and more functionality, but they have done a fantastic job for a 1.0 release.

I think the use cases for it at the moment, certainly in enterprises, will be somewhat limited, as I am not sure how many larger companies will allow admins to connect in externally using an iPad for security reasons. We are also seeing VMware View being adopted by companies to allow users to utilise iPads, so integration with the View Security Server may be a good way to bridge the external connection in the future. However, for companies that do allow iPads to connect in remotely, or that have wireless in the datacenter, I think we are seeing the start of a tool that will become part of the admin's toolkit.

What would I like to see in the future?

Increased performance data such as CPU Ready, Disk Latency, Memory Swapping etc

Customisable timeframes for performance data

The ability to start an RDP session to a server

Cluster and vMotion aware

The ability to be able to customise VM hardware specification

Alerts in the form of notifications?

As you may already know, with the introduction of vSphere 4.1 vCenter now requires a 64-bit OS. This may add complexity to your upgrade to vSphere 4.1 compared to previous updates.

VMware provide a very comprehensive document on the procedure, which can be found here >> I highly recommend you read through this document. I am not going to copy its contents here, but rather point out items of interest and guide you in the direction of some KB articles that may assist you along the way.

  • Ensure you pay specific attention to the prerequisites listed on page 22 of the document above.
  • If you are moving from a version prior to vCenter 4, note that the supported databases have changed: SQL Server 2000 and Oracle 9i are no longer supported.
  • Ensure you or your DBA has full access to the database, ensure you know the passwords for the account used to connect to the database and ensure a backup is taken of the vCenter database before proceeding.
  • If you are using a Microsoft SQL database ensure you install the SQL Native Client driver on your new vCenter server, it can be downloaded here >>
  • For SQL databases ensure that your database login has DBO rights for the vCenter database and the MSDB database, the MSDB database permission is only required during the upgrade.

Before starting the upgrade you may wish to run the vCenter agent pre-upgrade check tool; details of this tool can be found on page 28 of the above document. It will create a report with links to KB articles for problems you may encounter when the vCenter agent is upgraded on the hosts.

VMware have included a data migration tool on the vCenter 4.1 installation media, which will take a lot of the pain out of the migration to the new 64-bit host. Details of this tool can be found on page 33. Not only will this tool migrate all the settings, certificates and licences for your vCenter server, but if you are using the SQL Express database for a smaller installation it will also move the database itself. This goes for Update Manager as well as vCenter.

The data migration tool can be found in the datamigration folder of the vCenter 4.1 installation media.


You will need to extract the zip file held within and follow the instructions in the upgrade guide to complete the backup of data. Before beginning, ensure your vCenter and Update Manager services are stopped on the server. Once completed, move the folder to the destination vCenter server, ensure you have created a 64-bit DSN if you are using an external DB, and then run install.bat to move your configuration and at the same time upgrade to 4.1.

KB articles to be aware of

I have found the KB articles below of assistance during 4.1 upgrades; it may be worth familiarising yourself with these potential problems prior to your upgrade.

Problem: Upgrading to vCenter Server 4.1 fails when the installer upgrades the database. You get an error that reads “Exception thrown while executing SQL script” whilst performing the upgrade. Check the VCDatabaseUpgrade.log in %temp% directory for more specific information of the error.


Problem: Installing VMware Converter 4.2 fails with the error: Error 29454 Setup failed to register VMware vCenter Converter extension


Problem: Installing VMware Update Manager 4.1 fails with the error: Error 25085


Problem: Moving the vCenter SQL Database


Problem: Migrating vCenter Server to a different host machine


VMware vSphere 4.1

July 18, 2010 — Leave a comment

It was just over a year ago that vSphere was launched, and now it is time for the first major update to the product. On Tuesday 13th vSphere 4.1 was released to the public; my apologies for the timing of this blog post, but as always work has taken priority. 4.1 is much more than a patch or security update and is more akin to the VI 3.0x to 3.5 jump we saw a couple of years ago. One of the first big changes, which had been mentioned in recent release notes, is that vCenter is now 64-bit only, so for all of you using 32-bit servers and 32-bit Windows for your vCenter, it's time to move to a 64-bit platform.

So what new features can you expect to see in vSphere 4.1? As always, the release notes are a good place to start >>

Below are some of the major features, updates and announcements for me:

VMware ESX. VMware vSphere 4.1 and its subsequent update and patch releases are the last releases to include both ESX and ESXi hypervisor architectures. Future major releases of VMware vSphere will include only the VMware ESXi architecture.

  • VMware recommends that customers start transitioning to the ESXi architecture when deploying VMware vSphere 4.1.

This has been on the cards for a long time, with much speculation that the first vSphere release was going to be ESXi only. It's time to start getting used to ESXi and the vMA. It's also time for a lot of third party software manufacturers to get their ESXi support to 100%.


Scripted Install for ESXi. Scripted installation of ESXi to local and remote disks allows rapid deployment of ESXi to many machines. You can start the scripted installation with a CD-ROM drive or over the network by using PXE booting. You cannot use scripted installation to install ESXi to a USB device.

At last we can now script ESXi installations and install by using a PXE boot, @Vinf_net pointed me in the direction of this very useful post

vSphere Client Removal from ESX/ESXi Builds. For ESX and ESXi, the vSphere Client is available for download from the VMware Web site. It is no longer packaged with builds of ESX and ESXi. After installing ESX and ESXi, users are directed to the download page on the VMware Web site to get the compatible vSphere Client for that release. The vSphere Client is still packaged with builds of vCenter Server

Not really groundbreaking news, but it is one that could easily catch you out. If you are working on your first new installation of ESX 4.1 at a site with limited or no internet connection, you must remember to either (a) download the vSphere Client beforehand, or (b) install the vSphere Client from the vCenter DVD or, if vCenter is already installed, from its web interface.

Hardware Acceleration with vStorage APIs for Array Integration (VAAI). ESX can offload specific storage operations to compliant storage hardware. With storage hardware assistance, ESX performs these operations faster and consumes less CPU, memory, and storage fabric bandwidth

This is a huge feature: when doing storage tasks like clones, Storage vMotion and so on, vSphere can offload all the work to your storage array, meaning the task is completed faster and with less pressure put on your ESX hosts. Remember that to be able to use functionality like VAAI you will need at least VMware vSphere Enterprise licensing.

Storage Performance Statistics. vSphere 4.1 offers enhanced visibility into storage throughput and latency of hosts and virtual machines, and aids in troubleshooting storage performance issues. NFS statistics are now available in vCenter Server performance charts, as well as esxtop. New VMDK and datastore statistics are included. All statistics are available through the vSphere SDK.


Storage I/O Control. This feature provides quality-of-service capabilities for storage I/O in the form of I/O shares and limits that are enforced across all virtual machines accessing a datastore, regardless of which host they are running on. Using Storage I/O Control, vSphere administrators can ensure that the most important virtual machines get adequate I/O resources even in times of congestion

We now have greater visibility of storage performance and performance issues, not only through esxtop but also the performance charts, and we can now control access to storage in the way we have been used to managing memory and CPU.

New Chart Options


Enable Storage I/O Control


Network I/O Control. Traffic-management controls allow flexible partitioning of physical NIC bandwidth between different traffic types, including virtual machine, vMotion, FT, and IP storage traffic (vNetwork Distributed Switch only).

With Network I/O Control, 2 x 10Gb network connections per ESX host become a real possibility: by being able to control the bandwidth by either shares or limits, we can allocate chunks of the 10Gb connection to each function inside our ESX host, e.g. management traffic, iSCSI etc. This is much like the way we control bandwidth with the HP Flex-10s. I can't wait for 10Gb prices to come down a little so we can start implementing this method. Going from 8 to 10 network cables per host down to 2 would be a big improvement.
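To make the share idea concrete, here is a small sketch of how share-based partitioning of a 10Gb uplink works under contention. The traffic types and share values below are my own illustrative assumptions, not VMware defaults:

```python
# Illustrative sketch of proportional, share-based bandwidth partitioning
# in the spirit of Network I/O Control. Share values are invented.

def partition_bandwidth(link_gbps, shares):
    """Split link capacity proportionally to each traffic type's shares."""
    total = sum(shares.values())
    return {name: round(link_gbps * s / total, 2) for name, s in shares.items()}

shares = {"management": 10, "vmotion": 20, "iscsi": 30, "vm_traffic": 40}
alloc = partition_bandwidth(10, shares)
print(alloc)  # vm_traffic gets 4.0 of the 10Gb link
```

The point of shares rather than hard limits is that the proportions only bite during congestion; an idle traffic type's bandwidth is available to the others.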

  • VMware HA Scalability Improvements. VMware HA has the same limits for virtual machines per host, hosts per cluster, and virtual machines per cluster as vSphere. See Configuration Maximums for VMware vSphere 4.1 for details about the limitations for this release.
  • VMware HA Healthcheck and Operational Status. The VMware HA dashboard in the vSphere Client provides a new detailed window called Cluster Operational Status. This window displays more information about the current VMware HA operational status, including the specific status and errors for each host in the VMware HA cluster. See the vSphere Availability Guide.
  • There have been a number of enhancements to the availability functionality; for full information it is worth reading the following white paper >> Amongst these features, HA has been improved with healthcheck functionality that lets you check the health of your HA cluster.


    vCenter Converter Hyper-V Import. vCenter Converter allows users to point to a Hyper-V machine. Converter displays the virtual machines running on the Hyper-V system, and users can select a powered-off virtual machine to import to a VMware destination. See the vCenter Converter Installation and Administration Guide

    Always useful for those Hyper-V to VMware migrations 😉

    DRS Virtual Machine Host Affinity Rules. DRS provides the ability to set constraints that restrict placement of a virtual machine to a subset of hosts in a cluster. This feature is useful for enforcing host-based ISV licensing models, as well as keeping sets of virtual machines on different racks or blade systems for availability reasons.

    This will come in very useful when trying to split machines over multiple blade chassis, or trying to ensure a certain VM lives on newer better performing hosts etc.


    Memory Compression. Compressed memory is a new level of the memory hierarchy, between RAM and disk. Slower than memory, but much faster than disk, compressed memory improves the performance of virtual machines when memory is under contention, because less virtual memory is swapped to disk.

    VMware have certainly done it again with this one: first we had transparent page sharing, and we now have memory compression for when memory is contended on a host. Memory compression is enabled by default, and when a memory page needs to be swapped ESX will first attempt to compress the page. You are able to disable or fine tune this feature with the Mem advanced settings. By default the compression cache is sized at 10% of the VM's allocated memory.


    vMotion Enhancements. In vSphere 4.1, vMotion enhancements significantly reduce the overall time for host evacuations, with support for more simultaneous virtual machine migrations and faster individual virtual machine migrations. The result is a performance improvement of up to 8x for an individual virtual machine migration, and support for four to eight simultaneous vMotion migrations per host, depending on the vMotion network adapter (1GbE or 10GbE respectively).

    Always nice when there is a performance increase in existing functionality, and being able to vMotion more VMs at once will definitely help when it comes to putting a host into maintenance mode.

    ESX/ESXi Active Directory Integration. Integration with Microsoft Active Directory allows seamless user authentication for ESX/ESXi. You can maintain users and groups in Active Directory for centralized user management and you can assign privileges to users or groups on ESX/ESXi hosts. In vSphere 4.1, integration with Active Directory allows you to roll out permission rules to hosts by using Host Profiles

    This will definitely help in larger teams, meaning you can now manage all VMware vSphere authentication, from host to vCenter, with AD.

    Configuring USB Device Passthrough from an ESX/ESXi Host to a Virtual Machine. You can configure a virtual machine to use USB devices that are connected to an ESX/ESXi host where the virtual machine is running. The connection is maintained even if you migrate the virtual machine using vMotion.

    At last we can use our USB licence dongles with our ESX guest servers. I'm sure there are also 101 other uses people have been waiting to implement but haven't been able to because of the lack of USB support.

    That's really just scratching the surface of the new features and new possibilities this release brings to our virtualised environments; below are a few links to information to help you learn more.

    vChat with TrainSignal’s David Davis and Simon Seagrave from TechHead – New vSphere 4.1 and Top 15 killer features


    VMware 4.1 documentation


    Eric Sloof has put together some fantastic articles about the new functionality in 4.1, check out his blog


    This is more of a quick one for reference, as there are numerous articles on this in the blogosphere. Enabling Cisco CDP on your vSwitches is fantastic for troubleshooting NIC problems and for documentation. As CDP is enabled by default on most Cisco devices, all you need to do from the ESX side is the following.

    From the command line of your ESX host run the following command for each vSwitch

    esxcfg-vswitch -B both vSwitch0

    Now when you view the vSwitch configuration on your ESX host you will be able to see more information about the physical switch the host is connected to, including the switch name and, most importantly, the switch port number.


    If you log in to your Cisco switch and run the following command

    show cdp entry *

    You will then be able to see the reverse of this information from the Cisco switch itself, very useful when diagnosing networking or cabling issues.


    My next step is to write a PowerShell script that will grab the information from ESX and document it. Watch this space.
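Whatever language that script ends up in, the documentation step is mostly just flattening per-vmnic CDP details into a table. A rough sketch of that step in Python; the sample data and field names below are invented for illustration, and in practice they would come from the host itself:

```python
# Hypothetical sketch: turn per-vmnic CDP details into CSV rows.
# sample_cdp is made-up data standing in for what a host query would return.
import csv
import io

sample_cdp = {
    "vmnic0": {"switch": "core-sw-01", "port": "Gi1/0/12"},
    "vmnic1": {"switch": "core-sw-02", "port": "Gi1/0/12"},
}

def cdp_to_csv(host, cdp_info):
    """Flatten a host's CDP info into CSV text, one row per vmnic."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["host", "vmnic", "switch", "port"])
    for nic, info in sorted(cdp_info.items()):
        writer.writerow([host, nic, info["switch"], info["port"]])
    return buf.getvalue()

print(cdp_to_csv("esx01", sample_cdp))
```

Run once per host and concatenate the rows, and you have a cabling document that can be diffed after any recabling work.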

    vSphere Design Workshop

    April 16, 2010 — 1 Comment

    As those who follow me on twitter will know this week I attended the vSphere Design Workshop course, as many were asking me my thoughts on the course I thought I would do a short blog post.

    I, like many other consultants working for VMware partners, am required to attend this course for our partner accreditation levels, which sounds good to me: I'm always interested in more training! Prior to going on the course I had heard some negative feedback regarding the course content, but despite this I attended with an open mind.

    I was hoping to bring back from the course some new ideas and understanding of the design process, learn how others were tackling design scenarios, and refresh my mind on some of the design elements within vSphere. The course is made up of 55% lecture and 45% lab exercises, and it is recommended that you are already at VCP level before attending.

    I attended the course at Megirus in the UK. We had a nicely sized class of 9 attendees, all from different backgrounds, and we were lucky enough that 2 attendees were VMware trainers looking at the class content for the first time.

    I'm not going to go into detail about the course content, as that is what the course is for, but I personally found the information extremely useful. The main takeaway information was brought about by the discussions within the class, prompted by the class material and the trainer. I was lucky enough to be in a class where everyone would openly talk about their experiences, problems and findings within their designs and installations. If you were unlucky enough to attend a class where people weren't keen to take part in the discussions openly, I believe you could end up a bit disappointed.

    I found the later labs in the course a little dry, and thought that if the lab data that was mentioned had actually been shown in Capacity Planner it would have brought a bit more depth and relevance to these labs. Apart from this I was very pleased with what I learnt and will now be taking it forward into future design engagements.

    So if you are involved with designing vSphere infrastructures and have the opportunity to attend, I would recommend the course. Don't expect it to be an all-out technical course, don't expect it to set you up ready for a VCDX or give you the ability to design an infrastructure if you're not already a VCP, and start the course prepared to participate in the discussions.

    After recently working with the HP Virtual Connect Flex-10s in an HP C7000 blade chassis, I have decided to blog about a few of the things I have learnt.

    What is a Virtual Connect Flex10?

    OK, I'm going to cheat on this bit: below I have included the HP video on VC Flex-10s, which does a very good job of explaining exactly what a VC Flex-10 is.

    So what this means for us is that each Flex-10 connection in our blade server offers us up to 4 network connections (FlexNICs) that can be fine tuned between 100Mb and 10Gb. With two Flex-10s we can then ensure our networking is fully redundant. We are then able to uplink our Virtual Connect Flex-10s with as few as two 10Gb connections to our core network infrastructure.

    With the number of NICs required in your average ESX build, this makes Flex-10s very attractive when deploying HP blade chassis.

    HP Virtual Connect Flex-10 10Gb Ethernet Module


    HP 1/10Gb-F Virtual Connect Ethernet Module



    The first thing you will need to do when using the Virtual Connects is upgrade the firmware; after a bit of searching on the internet I found there were numerous issues reported with older firmware.

    Firmware for the Virtual Connect Flex-10s can be found here >>

    I found the descriptions of the downloads very misleading and in the end found out I had to use the FC firmware even though I had an Ethernet module!

    I found the easiest and most reliable way of doing this was to use the c-Class Virtual Connect Support Utility, which is available from the HP download site


    Once you have downloaded and installed the utility you will be able to use it to connect to your blade chassis and perform several tasks on your virtual connects.


    If you have a current configuration on your Virtual Connects, ensure you run a backup of the configuration first in case you run into any issues.


    Once you have backed up the configuration you are able to apply the firmware update


    I also found the Bay command useful when I had to update only one module, a replacement fitted after an initial failure.

    My Scenario

    In my scenario I was working with an HP C7000 blade chassis, numerous HP BL460c G6 server blades, two HP Virtual Connect Flex-10 10Gb Ethernet Modules and two non-stacking Cisco switches. I was also working with iSCSI based storage. This is by no means a recommendation of hardware, but simply what I had to work with in my scenario.

    The HP BL460c G6 server blade comes with 2 x LOM (LAN on Motherboard) connections. This means we are able to provision 8 FlexNICs in total, spread over the two Virtual Connects located in interconnect bays 1 and 2.

    Virtual Connect Configuration

    In my scenario I would be using a single CX4 connection from each Virtual Connect to uplink to each of my core switches. As these two switches are not stacked together, only uplinked, the Virtual Connects would be configured in an active/passive configuration for failover.

    To accomplish this I created a single Virtual Connect domain with both Virtual Connects as members. A single Shared Uplink Set was then configured, which both CX4s were part of. The relevant VLANs in my network were then created inside the Virtual Connect networks configuration. A server profile was then created for each ESX host, with 8 FlexNICs configured per host; as I like to tag the traffic on the ESX side, I configured the links to allow multiple connections and selected the relevant VLANs to be assigned to these NICs. The only exception was the vMotion NICs: these were configured with no uplink and were assigned directly to the vMotion network that had been configured. As all the ESX hosts were located in the blade chassis, this meant all the vMotion traffic could travel across the Virtual Connects.

    One point to consider when designing your Virtual Connect environment is that a VLAN may only connect to one FlexNIC per LOM. This means that you can run into difficulty if your service console is on your production network, as you wouldn't be able to assign your production VLAN to both your Service Console portgroup and your Virtual Machine Network portgroup. To avoid this you can move your Service Console network onto a management VLAN and route as necessary between this VLAN and your production VLAN.

    When assigning the FlexNICS they will be assigned in the following order.


    All the FlexNICs configured from LOM:1 will connect through the Virtual Connect in bay 1 and all the FlexNICs configured from LOM:2 will connect through bay 2. This means that in a redundant configuration you can use every other FlexNIC to be a redundant adapter in your configuration.


    ESX Configuration

    Conveniently the Flex NIC configuration mentioned above matches up with the vmnics as follows

    LOM:1-a vmnic0
    LOM:2-a vmnic1
    LOM:1-b vmnic2
    LOM:2-b vmnic3
    LOM:1-c vmnic4
    LOM:2-c vmnic5
    LOM:1-d vmnic6
    LOM:2-d vmnic7

    This meant I could easily split the FlexNics up into 4 Virtual Switches as follows.

    vSwitch0 – Service Console Network – vmnic0, vmnic1 – Management VLAN

    vSwitch1 – Virtual Machine Networks – vmnic6, vmnic7 – All LAN Traffic VLANs

    vSwitch2 – vMotion Network – vmnic4, vmnic5 – vMotion VLAN

    vSwitch3 – iSCSI Network – vmnic2, vmnic3 – iSCSI VLAN
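The ordering behind the mapping above is regular enough to generate. A small sketch of my own (purely illustrative, not an HP tool) showing why pairing every other vmnic gives each vSwitch one path through each Virtual Connect:

```python
# Sketch of the LOM-to-vmnic ordering: FlexNICs are enumerated a-d,
# alternating between LOM 1 and LOM 2, so adjacent vmnics always sit
# on different Virtual Connect modules.

def flexnic_to_vmnic():
    """Return the FlexNIC -> vmnic name mapping for a 2-LOM blade."""
    mapping = {}
    vmnic = 0
    for flexnic in "abcd":        # FlexNIC a..d on each LOM
        for lom in (1, 2):        # LOM 1 -> VC bay 1, LOM 2 -> VC bay 2
            mapping[f"LOM:{lom}-{flexnic}"] = f"vmnic{vmnic}"
            vmnic += 1
    return mapping

m = flexnic_to_vmnic()
# Pairing (vmnic0, vmnic1), (vmnic2, vmnic3), ... gives each vSwitch
# one uplink through bay 1 and one through bay 2.
print(m["LOM:1-a"], m["LOM:2-a"])  # vmnic0 vmnic1
```

This is why each vSwitch above uses an even/odd vmnic pair: losing either Virtual Connect module still leaves every vSwitch with a live uplink.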

    The real benefit of the Flex-10 is the ability to customise the amount of bandwidth you assign to each FlexNIC. I followed the Virtual Connect with vSphere white paper for guidelines on amounts of bandwidth, and they were assigned as follows through Virtual Connect Manager.

    Service Console – 500Mb

    Virtual Machines Networks – 3Gb

    vMotion – 2.5Gb

    iSCSI – 4Gb
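A quick sanity check of my own (not from the white paper): each LOM is a single 10Gb physical link shared by its four FlexNICs, so the per-function allocations must fit within it:

```python
# Verify the four FlexNIC allocations per LOM fit the 10Gb physical link.
allocations_gb = {
    "service_console": 0.5,
    "vm_networks": 3.0,
    "vmotion": 2.5,
    "iscsi": 4.0,
}

total = sum(allocations_gb.values())
assert total <= 10, f"Over-subscribed: {total}Gb on a 10Gb LOM"
print(f"{total}Gb of 10Gb allocated per LOM")
```

In this case the allocations use the link exactly, so any increase to one FlexNIC has to come out of another.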

    The final point is to make sure you are running the very latest driver for the FlexNICs within ESX, as previous versions of the driver had issues with reporting and acting on uplink failures.

    Cisco Switch Configuration

    As per the Virtual Connect and Cisco whitepaper the switch ports were configured as follows

    interface TenGigabitEthernet0/1
    description “VC1 Uplink 1, Po1”
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 2,3,4,5,6,7
    switchport mode trunk
    spanning-tree portfast trunk

    Portfast is configured to allow quick failover between Virtual Connects in the event of a failure.


    I have put together the following diagram to help illustrate the configuration; unfortunately my Visio skills aren't quite up to Hany Michaels' over at !

    Download Diagram >>

    Blade to VC


    Extra redundancy could of course have been achieved by installing an additional mezzanine card into the blades, although an additional 2 Virtual Connects would also have been required.

    If stacking switches had been used, an active/active configuration using LACP / EtherChannel trunks could have been configured.


    Frank Denneman's fantastic blog – Flex10 Lessons Learned Go
    Again, Frank Denneman's fantastic blog – Flex10 Lessons Learned Go
    HP Virtual Connect Flex-10 and VMware vSphere 4.0 Go
    HP Virtual Connect and Cisco Administration Go
    Kenneth van Ditmarsch has a large amount of Flex10 information which is really worth looking at Go

    Whilst working on a customer's site I discovered an issue trying to customise a Windows 2008 Datacenter x64 template; whilst trying to discover whether this was an issue with just this OS, I found the following.

    The below section is taken from vCenter 4 administration guide (Page 176)


    This is the same section from vCenter 4 Update 1 administration guide (Page 176)


    Note that Windows 2008 support has been removed from the update 1 administration guide.

    I would be interested to know if anyone has been able to customise Windows 2008 VMs using Update 1; if you have, what version of 2008 was it, and was your vCenter a fresh install of Update 1 or an upgrade?

    @vmwareKB on twitter is looking into this for me at the moment; I will update you with more findings.


    A number of people in the comments and on twitter have told me they have been able to customise 2008 Standard and Enterprise with Update 1. I have reinstalled VMware Tools in my Datacenter template and have now successfully been able to customise my VM, so it looks like a section has simply been missed from the documentation. I will let you know when I get an official update from VMware on 2008 customisations.

    After a number of issues were found with Update 1 in relation to the HP management agents, VMware have re-released a fixed version of the update, called Update 1a.

    The symptoms of the problem are described in the KB article as

    When attempting to upgrade ESX 4.0 to ESX 4.0 Update 1 (U1), you may experience these symptoms:

    • Upgrade operation may fail or hang and can result in an incomplete installation
    • Upon reboot, the host that was being upgraded may be left in an inconsistent state and may display a purple diagnostic screen with the following error:
      COS Panic: Int3 @ mp_register_ioapic

    During the ESX 4.0 Update installation a process checks for running agents and stops them before proceeding.


    Full information about this issue can be found in the KB here >> 

    Update 1a is available from update manager and the download site now

    vSphere Update 1 Released

    November 20, 2009 — 1 Comment

    After many rumours that Update 1 for vSphere would be released on the 19th of November, we were all sorely disappointed, with new rumours it would be Monday 23rd. Much to my surprise, this morning it has been released.


    The biggest new features are View 4 support, Windows 7 vSphere Client support, and Windows 7 and Windows 2008 R2 as guest VMs. There has also been an update to the configuration maximums: HA clusters can now support 160 virtual machines per host in HA clusters of 8 hosts or less. The maximum number of virtual machines per host in cluster sizes of 9 hosts and above is still 40, allowing a maximum of 1280 virtual machines per HA cluster.
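The two-tier limit quoted above can be expressed as a couple of lines, which makes it easy to see where the 1280 cap actually bites (my own sketch of the stated rules, not a VMware formula):

```python
# Update 1 HA limits as stated: 160 VMs/host for clusters of up to 8
# hosts, 40 VMs/host for 9 or more, capped at 1280 VMs per cluster.

def max_vms_per_ha_cluster(hosts):
    per_host = 160 if hosts <= 8 else 40
    return min(hosts * per_host, 1280)

print(max_vms_per_ha_cluster(8))   # 8 * 160 = 1280, the cap exactly
print(max_vms_per_ha_cluster(9))   # drops to 9 * 40 = 360
```

Note the cliff between 8 and 9 hosts: adding a ninth host to a full 8-host cluster would cut the supported VM count sharply, so cluster sizing is worth thinking about up front.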

    The following comment in the release notes is very interesting, so it looks like VMware are making moves towards making vCenter a 64-bit application.

    Future releases of VMware vCenter Server might not support installation on 32-bit Windows operating systems. VMware recommends installing vCenter Server on a 64-bit Windows operating system.

Here is a copy of the what’s new section from the release notes, which can be found here >> 

    What’s New in ESX

    The following information provides highlights of some of the enhancements available in this release of VMware ESX:

VMware View 4.0 support — This release adds support for VMware View 4.0, a solution built specifically for delivering desktops as a managed service from the protocol to the platform.

Windows 7 and Windows 2008 R2 support — This release adds support for 32-bit and 64-bit versions of Windows 7 as well as 64-bit Windows 2008 R2 as guest OS platforms. In addition, the vSphere Client is now supported and can be installed on a Windows 7 platform. For a complete list of supported guest operating systems with this release, see the VMware Compatibility Guide.

Enhanced Clustering Support for Microsoft Windows — Microsoft Cluster Server (MSCS) for Windows 2000 and 2003 and Windows Server 2008 Failover Clustering is now supported on a VMware High Availability (HA) and Distributed Resource Scheduler (DRS) cluster in a limited configuration. HA and DRS functionality can be effectively disabled for individual MSCS virtual machines, as opposed to disabling HA and DRS on the entire ESX/ESXi host. Refer to the Setup for Failover Clustering and Microsoft Cluster Service guide for additional configuration guidelines.

Enhanced VMware Paravirtualized SCSI Support — Support for boot disk devices attached to a Paravirtualized SCSI (PVSCSI) adapter has been added for Windows 2003 and 2008 guest operating systems. Floppy disk images containing the driver are also available for use during Windows installation, by selecting F6 to install additional drivers during setup. Floppy images can be found in the /vmimages/floppies/ folder.

Improved vNetwork Distributed Switch Performance — Several performance and usability issues have been resolved, resulting in the following:

    • Improved performance when making configuration changes to a vNetwork Distributed Switch (vDS) instance when the ESX/ESXi host is under a heavy load
    • Improved performance when adding or removing an ESX/ESXi host to or from a vDS instance

Increase in vCPU per Core Limit — The limit on vCPUs per core has been increased from 20 to 25. This change raises the supported limit only; it does not include any additional performance optimizations. Raising the limit allows users more flexibility to configure systems based on specific workloads and to get the most advantage from increasingly faster processors. The achievable number of vCPUs per core depends on the workload and specifics of the hardware. For more information, see the Performance Best Practices for VMware vSphere 4.0 guide.

Enablement of Intel Xeon Processor 3400 Series — Support for the Xeon processor 3400 series has been added. For a complete list of supported third-party hardware and devices, see the VMware Compatibility Guide.

Resolved Issues — In addition, this release delivers a number of bug fixes that have been documented in the Resolved Issues section.

    What’s New in vCenter

    This update release of VMware vCenter Server 4.0 Update 1 offers the following improvements:

    • IBM DB2 Database Support for vCenter Server — This release adds support for IBM DB2 9.5 as the backend database platform for VMware vCenter Server 4.0. The following editions of IBM DB2 are supported:
      • IBM DB2 Enterprise 9.5
      • IBM DB2 Workgroup 9.5
      • IBM DB2 Express 9.5
      • IBM DB2 Express-C 9.5
    • VMware View 4.0 support — This release adds support for VMware View 4.0, a solution built specifically for delivering desktops as a managed service from the protocol to the platform.
    • Windows 7 and Windows 2008 R2 support — This release adds support for 32-bit and 64-bit versions of Windows 7 as well as 64-bit Windows 2008 R2 as guest operating system platforms. In addition, the vSphere Client is now supported and can be installed on a Windows 7 platform.
    • Pre-Upgrade Checker Tool — A standalone pre-upgrade checker tool is now available as part of the vCenter Server installation media that proactively checks ESX hosts for any potential issues that you might encounter while upgrading vCenter agents on these hosts as part of the vCenter Server upgrade process. You can run this tool independently prior to upgrading an existing vCenter Server instance. The tool can help identify any configuration, networking, disk space or other ESX host-related issues that could prevent ESX hosts from being managed by vCenter Server after a successful vCenter Server upgrade.
• HA Cluster Configuration Maximum — HA clusters can now support 160 virtual machines per host in HA clusters of 8 hosts or fewer. The maximum number of virtual machines per host in cluster sizes of 9 hosts and above is still 40, allowing a maximum of 1280 virtual machines per HA cluster.
    • Bug fixes described in Resolved Issues.

    More information and the download can be found on VMware’s website here >>

“A CPU of the host is incompatible” error appears and VMotion stops working after upgrading to vSphere 4.0

I just experienced this issue and found the resolution in VMware’s KB 1011294 >> posted here for reference in case anyone else runs into the problem.

    VMotion fails after upgrading from ESX 3.x to ESX 4.0
    You receive an error similar to:

    Unable to migrate from to : The CPU of the host is incompatible with the CPU feature requirements of the virtual machine; problem detected at CPUID level.


Host CPU is incompatible with the virtual machine’s requirements at CPUID level 0x1 register ‘ecx’
    Host bits: 0000:0100:0000:1000:0010:0010:0000:0001
    Required: 1000:0100:0000:100x:xxx0:0x1x:xxx0:x001
    Mismatch detected for these features:
    *General incompatibilities; refer to KB article 1993 for possible solutions.
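In the “Required” mask above, an x is a wildcard bit, while a 1 or 0 must match the host exactly; the mismatch is visible immediately, since the required mask demands a leading 1 where the host reports 0. A minimal sketch of that bit-by-bit comparison (the helper name is mine, not from the KB):

```python
def cpuid_mask_matches(host_bits, required):
    """Compare a host CPUID register string against a required mask.
    '1' and '0' in the mask must match the host bit exactly;
    'x' matches anything. Colons are group separators only."""
    host = host_bits.replace(":", "")
    mask = required.replace(":", "")
    return all(m == "x" or m == h for h, m in zip(host, mask))

host = "0000:0100:0000:1000:0010:0010:0000:0001"
required = "1000:0100:0000:100x:xxx0:0x1x:xxx0:x001"
```

Here `cpuid_mask_matches(host, required)` is False, which is exactly the incompatibility the error message reports.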

• This issue occurs after upgrading the virtual hardware in the virtual machines
• A new virtual machine created on vSphere 4.0 migrates successfully
• The upgraded virtual machines may have some CPU masks applied which are causing the migration difficulties

To ensure that VMotion is successful:
1. Power down the virtual machine.
2. Click the link to Edit Settings of the virtual machine.
3. Click the Options tab.
4. Select CPUID Mask under Advanced.
5. Click Advanced.
6. Click Reset All to Default.
7. Click OK.
8. Click OK again.
9. Power on the virtual machine and migrate.
    Note: If the issue still exists after trying the steps in this article, file a support request with VMware Support and note this KB article ID in the problem description. For more information, see How to submit a Support Request. For further contact options, see