Archives For Storage

2014 has been a busy year for me for many reasons, but I thought I would briefly summarise some of the highlights of the year, as well as some musings with regard to the future of the industry.


I have been lucky enough to attend a number of events this year, including BriForum, vForum and IPExpo in London, vForum in Manchester, the Dell Enterprise Forum in Frankfurt, VMworld in Barcelona, as well as a number of VMware User Group events. These events offer a great opportunity to meet individuals from the communities, and the technical deep-dive sessions really offer a valuable chance to get a better understanding of particular subjects from industry experts. I am looking forward to many events in the coming year, hopefully including BriForum and VMworld again; I would also like to get a better understanding of Microsoft's, Amazon's and Google's direction in the industry.

End User Computing

This year has been a year of improvement and maturity for end user computing: we have seen VMware acquire AirWatch for $1.54 billion, the acquisition of CloudVolumes, as well as the release of Horizon 6. The subject of end user computing is becoming ever more defined and mature; we should no longer be awaiting the year of VDI, and the focus should be firmly around the user. There is no single right answer to end user computing. We should be concentrating on the users, their use cases and needs: what can we do to make our users more productive? This will be a hybrid mix of many technologies, from desktop PCs to VDI, mobiles, tablets and more. From a user perspective we need to ensure they can easily access their applications and data on whatever platform and wherever they are. From an administrator perspective we need to ensure this can be done in a secure way that will meet the user's needs; it needs to be easy to manage, monitor and upgrade. I like to practice what I preach, and my business processes and personal life are spread between a mix of devices and operating systems: I use a MacBook Pro as my main business device but also use an iPad mini, a Samsung Galaxy Note Pro 12.2 and a Windows 8.1 VDI desktop. The device should no longer matter, and it doesn't, but it is imperative that the applications and data are where I need them when I need them.


We are starting to see the ever-growing importance of applications within the IT infrastructure. Whilst they have always been important, the focus of IT administrators and consultants hasn't always been purely on the applications but on the infrastructure used to run them. During 2014 it has become increasingly obvious that this is where the future of the IT industry lies: focusing not on keeping the cogs turning but on ensuring our applications are meeting our business needs. Integration and automation, not necessarily between infrastructure components but between applications, will be key in the software-defined world; how are you going to get SaaS application A talking to SaaS application B? With the focus on the applications we are seeing growth in the areas that support them, like Docker and OpenStack, and DevOps is key.

Hybrid Cloud

2014 for me was the year of the hybrid cloud. We saw VMware launch their first and second UK datacenters, as well as a number of datacenters across the globe. From a customer perspective, vCloud Air offers an easy way to understand how cloud will work within their business, with data residency guarantees to suit their needs, the ability to use the same tools they use to manage their existing private cloud, and the ability to move workloads between private and public clouds whenever required. We have seen customers trial vCloud Air and start to move production workloads to the cloud.
For me the future of the hybrid cloud is more than simply your private and public infrastructures; SaaS will make up a big part of your infrastructure, and moving forward that share will only increase. We are seeing Office 365 becoming the norm for many Exchange upgrades, and new software installations will focus on SaaS first. Until we are able to replace all of our applications with SaaS alternatives, infrastructure is still going to be a key requirement, and this is where vCloud Air offers the flexibility that businesses need.

I think it is going to be interesting to see what the next Server OS from Microsoft is going to bring. You would assume that cloud integration will be baked in as standard: when deploying new roles you will get the choice of whether to deploy on premise or in Azure. We will have to wait and see. I think there is particularly going to be a lot of power in a Dropbox alternative baked directly into the Windows OS. Imagine the simplicity of being able to access all your business shares that are on your Windows file servers from any device, anywhere, without a VPN or similar technology; the emphasis, though, will have to be on data security.

Shared Storage Choice

As ever, a focus this year has been on shared storage. No matter which way the industry goes there is always going to be a growing demand for storage; whilst at present that is largely on premise, in the future cloud storage options are going to become ever more prevalent and important to our businesses.

We have seen the growth of many next-generation storage vendors such as Nimble and Pure Storage, and we have seen the hyper-converged market mature with Nutanix and SimpliVity, alongside the launch of VMware EVO:RAIL and the announcement of EVO:RACK.

For me Nimble Storage has been a real standout, and we have seen some great reactions from customers when deployed in their infrastructures; it brings together simplicity and high performance with large capacity at a suitable price. Next year I am going to be interested to see how the adoption of hyper-converged infrastructures grows amongst my customers, particularly with Nutanix and EVO:RAIL / VSAN solutions.

Data Protection

As ever, we have seen Veeam build upon their fantastic backup and recovery product with the release of v8, which sees improved methods of recovery and replication amongst other new features. Next year I would love to see them offer a product that allows you to back up your VMs whether they are on premise or in the cloud with vCloud Air, Azure or Amazon EC2. But for me the biggest challenge moving into a SaaS world is data protection. Many people seem to forget about data protection when moving their applications and data to the cloud, but is this correct? Should we be trusting these important assets to one provider, whoever they may be, or is having three copies of your data ever more important? I think the challenge of data protection in the cloud era is having a platform that will allow you to back up, protect and recover your data from one set of resources to a different set of resources. Let's say you are storing important business information with SaaS provider A: what happens if they go bust, or have a massive data breach or business continuity issue? Maybe you are taking a regular dump of data to a CSV file or similar, but what use is this to your business unless you can convert and recover your data to SaaS provider B? Without global standards between similar providers, protecting SaaS applications will become difficult and, in my opinion, a big challenge for our industry. Maybe until this is solved, outside of the main players like Microsoft and Google, companies will choose to turn to IaaS solutions and protect their data in a more traditional way; or will they just take the risk and trust the providers?

Personal Achievements

I have really enjoyed taking part in a number of industry interviews this year; I love sharing my thoughts and visions for the industry, as well as getting to discuss these subjects with others. I have presented at a number of events including the UKVMUG and my company's own events, with a record number attending our most recent VMware event, which is growing year on year. The biggest challenge for me this year has been working on a second book, this time with co-author Peter von Oven; we are nearing the end now and are hoping that our book, Mastering Horizon 6, will be published prior to April by Packt Publishing. My biggest achievement was to be made a director of the company I work for. I will be concentrating on pre-sales and operations for my business, and this gives me a great opportunity to continue learning and evangelising about technology, as well as getting involved with the internal processes and procedures within the business and understanding how modern applications will help it. I am looking forward to helping the business grow and become better known within the technology industry, as well as working on some exciting projects.

That's all for now; there are so many more areas I could talk about, and 2015 is going to be an exciting year for many reasons. I hope to be able to catch up with many of you in the new year.

Happy new year.


This is a blog post that has been long overdue. I have blogged about Nimble Storage a couple of times when at VMworld, and Devin Hamilton (Director of Storage Architecture and Nimble's first ever customer-facing engineer) was also on one of the HandsOnVirtualization podcasts that we recorded in the past. I sat down with a good friend of mine, Nick Dyer, around 6 months ago; Nick at the time had been with Nimble for only a few months, after previously being at Xsigo and Dell EqualLogic. We discussed who Nimble are and what makes them different to everyone else in the marketplace, and Nick also gave me a tour of the product's features and functionality.

Very recently Nimble announced Nimble OS 2.0, which this walkthrough is based on; big thanks to Nick for helping me update it from 1.x to 2.x.

Home Screen

The home screen shows you a good overview of what is happening within your storage array. On the left we can see a breakdown of the storage usage including snapshots; below this we can see our space saving, utilising the in-line compression technology for both primary and snapshot data. In the middle we have a breakdown of throughput in MB/sec and IOPS, broken down by reads and writes. Finally, on the right we have a breakdown of events over the last 24 hours.

Array Management

Prior to Nimble OS 2.0, the architecture worked with a frame / scale-up based design where you start with a head unit that contains 2 controllers, 12 high-capacity spinning disks and a number of high-performance SSDs. You can then increase capacity by attaching up to a further 3 shelves of high-capacity drives using the SAS connectors on the controllers, or you can scale performance by upgrading the controllers or swapping for larger SSDs. What is different about Nimble is that the architecture is not based on drive spindles to deliver performance like traditional storage arrays, instead using multiple Intel Xeon processors to drive IOPS from the array. Nimble have now released version 2.0 of their software, meaning that scale-out is now available as a third scaling method for any Nimble array. This now forms part of a Nimble Storage array “Group”. Today Nimble supports up to 4 arrays in a group, each array supporting 3 shelves of additional disk. The theoretical maximums are thus ~280,000 IOPS and 508TB usable storage in a scale-out cluster!
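A quick back-of-the-envelope check of those group maximums. The per-array figures below are my own assumptions, simply dividing the quoted group totals by the 4 arrays; they are not published Nimble specifications:

```python
# Sanity-check the quoted scale-out group maximums.
# Per-array figures are assumptions inferred from the group totals.
ARRAYS_PER_GROUP = 4
IOPS_PER_ARRAY = 70_000       # assumed: 280,000 group IOPS / 4 arrays
USABLE_TB_PER_ARRAY = 127     # assumed: 508 TB group capacity / 4 arrays

group_iops = ARRAYS_PER_GROUP * IOPS_PER_ARRAY
group_capacity_tb = ARRAYS_PER_GROUP * USABLE_TB_PER_ARRAY

print(group_iops)             # 280000
print(group_capacity_tb)      # 508
```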

We can see in the screenshot below, on a different system, that there are a number of shelves configured and that we have active and hot-standby controllers configured.

Nimble use an architecture called CASL (Cache Accelerated Sequential Layout). This is made up of a number of components: SSDs are utilised as a random read cache for hot blocks in use within the system, while random writes are coalesced through NVRAM, compressed, and written sequentially to the RAID 6 near-line SAS spinning disks, resulting in write operations that Nimble claim can be up to 100x faster than traditional disk alone.
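The write path above can be sketched in a few lines. This is a conceptual illustration only, not Nimble's implementation: the buffer size and the use of zlib as the compressor are my own assumptions standing in for NVRAM and CASL's real codec.

```python
import zlib

# Conceptual sketch: incoming random writes are acknowledged from the
# write buffer ("NVRAM"), then coalesced, compressed and flushed to disk
# as a single sequential stripe. Sizes and codec are illustrative.
class WriteBuffer:
    def __init__(self, flush_threshold=8):
        self.pending = []
        self.flush_threshold = flush_threshold
        self.disk_log = []            # stands in for sequential RAID-6 stripes

    def write(self, block: bytes):
        self.pending.append(block)    # landed in "NVRAM", acknowledged now
        if len(self.pending) >= self.flush_threshold:
            self.flush()

    def flush(self):
        stripe = zlib.compress(b"".join(self.pending))
        self.disk_log.append(stripe)  # one large sequential write
        self.pending = []

buf = WriteBuffer()
for i in range(8):
    buf.write(bytes([i]) * 4096)      # eight random 4 KB writes
print(len(buf.disk_log))              # 1: all eight landed as one stripe
```

The point of the sketch is the ratio: eight small random writes reach the spinning disks as one compressed sequential operation, which is why spindle count stops being the performance bottleneck.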

The compression within the Nimble storage array happens inline with no performance loss, and can offer between a 30 and 75 percent saving depending on the workload.

Check out the following page for more information on CASL –

One of the nice features in the GUI is that when you hover over a port on the array screen it will highlight which physical port it corresponds to on the array and display the IP address and status on screen.

When configuring the array with your ESXi or Windows servers you will use the target IP address shown below to configure your storage connectivity.

The network configuration on the array is easily configured. Nimble now has a dedicated “Networking” tab available in the administration menu, where the Group or individual arrays can be changed. From here we can also configure a new technology Nimble call “Virtual Target IP addresses”, as well as creating “Network Zones” to stop multipath configurations traversing and saturating inter-switch links. Both of these topics are an individual blog post on their own! It is now also possible to create multiple routes on the array, to allow for replication traffic for example.

Any individual port can be configured to either be on the management, data or both networks.

It’s now also possible to save your network changes as “Draft”, but also to revert your network settings back to your previously applied configuration – very handy in case something went wrong!

Adding Arrays to Group

To deploy a Nimble array into the group, it is now as simple as clicking a button in the Nimble GUI under the “Arrays” page. We did this on a pair of fully-functional Nimble VMs.

The Group will then detect the unconfigured Nimble array (which must be on the same layer 2 broadcast domain). It is also possible to merge two Groups together from this screen!

From here all that's required are the physical network IP addresses for the new array's data ports. It will inherit all other configuration from the Group (i.e. replication, email alerts, Autosupport, performance policies, initiator groups and more). This is a non-disruptive process, too!

Once IP addresses are configured, the new array is provisioned in the Group in the “default” storage pool.

Initiator Groups

Initiator groups are used to manage access to the volumes. You start off by creating an initiator group for servers that will require access to the same volumes, in this example ESXi hosts, and you then map your volume to the initiator group.

Performance Policies

Performance policies are used to set the cache, compression and block size for a volume, tuning these metrics to suit the use case. Out of the box there are a number already configured for the most frequent use cases; however, it is entirely possible to create your own with your own requirements (i.e. creating a volume which will never be cache-worthy, which is very useful for backup-to-disk volumes). This is useful, as traditional storage arrays which utilise flash as a tier or cache very rarely have the intelligence to keep dirty data away from these very expensive resources.


Volume Collections

Volume collections are utilised for replicating and snapshotting volumes. A volume collection may contain multiple volumes, allowing you to synchronise snapshots and/or replication over multiple linked volumes; this may be useful for VMFS volumes that contain multiple related VMs, for example, or for your SQL log and DB volumes.

Snapshots and replicas are able to be made fully consistent with the use of VSS integration direct from the array without the need to install additional software.

As Nimble uses a variable block size of 4/8/16/32KB, snapshots and replication are generally very space efficient when compared to other arrays utilising larger block sizes. All snapshots also use compressed blocks, and thus it is not uncommon to see snapshots taken and retained for longer than 30 days on the array.
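Why block size matters for snapshot space can be shown with some simple arithmetic. This is an illustration under assumed conditions (redirect-on-write snapshots, a made-up change pattern), not measured Nimble figures:

```python
# Illustrative arithmetic: a snapshot retains roughly
# (number of changed blocks) x (block size). Smaller blocks mean each
# small change drags along less untouched neighbouring data.
changed_offsets = [0, 100_000, 500_000]   # three small scattered changes (byte offsets)

def snapshot_overhead(block_size: int) -> int:
    # Each change dirties the whole block containing it.
    dirty_blocks = {off // block_size for off in changed_offsets}
    return len(dirty_blocks) * block_size

print(snapshot_overhead(4 * 1024))    # 12288  bytes (3 x 4 KB blocks)
print(snapshot_overhead(64 * 1024))   # 196608 bytes (3 x 64 KB blocks)
```

The same three small writes cost sixteen times more snapshot space on a hypothetical 64 KB-block array than on a 4 KB-block one, which is the efficiency argument the paragraph above is making.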

As snapshots are so granular and incur no performance overhead, the current limits are 10,000 snapshots per array group and 1,000 per volume.

The image below shows the average snapshot change rate as a daily percentage that Nimble customers see for key use cases.


Within the volumes view under the Manage menu you are able to see at a glance the performance and compression of each volume over the last 5 minutes by selecting the Performance tab.

Individual Volume Breakdown

By selecting an individual volume in this view you get a more detailed breakdown of the configuration and performance utilisation of that volume. We are also able to edit the volume, set it offline or delete it from this same screen.

Individual Volume Snapshot Tab

By selecting the Snapshot or Replication tabs in the individual volume view you get a detailed breakdown of the usage, including the date and name of the snapshot / replica, its origin and schedule, but also information regarding how much new data is kept within the snapshot and what compression ratio was achieved.

Replication Partner

Replication partners are easily configured via a simple wizard accessed under Manage > Protection > Replication Partners. Nimble also gives you the ability to decide where your replication traffic gets presented: it can take place over either the management or data networks you have, to give you flexibility.

What I really liked about the replication configuration was the built-in quality of service that allows you to tune the replication; this could be extremely important for a small business utilising a single line for replication and other business traffic.

After configuring the replication you get a very clear view of the policies configured and the volume collections replicating; you can also see what the lag is between production and DR.


InfoSight

The Nimble arrays contain a dial-home support functionality called InfoSight. Each array contains 30 million sensors; when enabled, every 5 minutes the results from those sensors are rolled up into a log bundle and transmitted to Nimble support. Nimble's systems are then able to detect issues and failures and automatically raise cases, in many instances before the customer even knows. Today over 90% of all support cases raised by Nimble are automatically generated and resolved, according to Nick.

Firmware updates are easily handled within the array itself, allowing you to check version information, download the latest firmware and upgrade the unit.

By default all volume and snapshot space on the Nimble array is thin provisioned; this can be customised for new volumes by configuring the volume's reserve, seen above.


There are a number of monitoring options within the Nimble array; these can all be found under the Monitor tab on the top menu. The example shows the performance across the array; you can customise this view to see performance across a time period from the last 5 minutes to the last 90 days as standard, and also focus on an individual volume.


That's it for my Nimble array walkthrough; I intend on delving a little deeper when possible in the future. I really like what Nimble are doing in this space, as they appear to be doing something different to most, and when digging deeper all the technical design decisions certainly make a lot of sense; based on the results I am hearing, customers seem to be very happy. Of course there are a huge number of ways to deliver the IO for your infrastructure, but Nimble are certainly cementing their place as a validated disruptive technology in this arena.

Something that interests me greatly is its use cases in VDI. Speaking to Devin, the arrays even love mixed VDI and server workloads: due to the way the writes are coalesced through the NVRAM, random workloads aren't a problem.

Next week I shall be attending the Dell Storage Forum, taking place at the Marriott Rive Gauche Hotel & Conference Center in Paris. This is the second Storage Forum to take place in Europe, after the London event in January.

Marriott Rive Gauche Exterior

For those who haven't been to a Storage Forum before, it offers Dell's customers and partners the chance to meet the Dell storage execs, hear about the product line and strategy, attend technical sessions to learn how to get the most out of your environment, and finally sit the hands-on labs for the Dell storage range.

If you will be attending the Storage Forum make sure you say hi; it offers a great opportunity to meet other users from across different industries, make contacts and discuss your use and love of the technology.

Full details regarding the event can be found on the Dell Storage Forum website here >>

Also, if you are in Paris on Wednesday or attending the forum, be sure to join us for Storage Beers, taking place just a short walk away from the conference at Havane Café (Boulevard Auguste Blanqui, 75013 Paris, France).


I look forward to meeting you there, and for those of you who won't be able to attend I will be blogging and taking lots of photos throughout the conference.

Today was the second and final day of BriForum taking place in London; today's main focus was the breakout sessions. Again I sat in on a number of sessions and spent a lot of time catching up with VMware and other vendors in the main hall.

My best session was that by Ruben Spruijt for a second day; this time Ruben was joined by Login VSI, who jointly contribute to the site and whitepapers of Project VRC. They were presenting the initial findings of a year-long piece of work to understand the impact of various antivirus software on VDI and SBC saturation. If you haven't checked out Project VRC I highly recommend you do; those that have will know that the analysis so far has concentrated on the saturation point of an HP DL360 G6 with various VDI and SBC workloads. The initial findings that were presented cover Trend, McAfee, Symantec and Microsoft Forefront. One of the biggest findings was the impact of not completing a full scan prior to rolling out your VDI desktops from the golden image: without this initial scan the saturation point of the server was reached almost instantly. The surprising winner at this time appears to be Microsoft Forefront, with the least impact on the saturation point. Many, including the presenters, pointed out however that this may be in line with the features offered by the products, as no comparison of features was carried out. They are currently completing this analysis and you should see the white paper on their site soon.

Another session I enjoyed was that of Nutanix, who were launching in EMEA today. As mentioned yesterday, Nutanix is a building-block approach to delivering compute and storage; the CEO Dheeraj Pandey was presenting around the subject of big data and how their product takes from the learnings of the Google approach by adding storage locally alongside compute. Their technology uses a number of new technologies to ensure the storage is highly available (appearing as a NAS to vSphere) whilst ensuring the storage is kept local to the VM where possible. They do this not in a traditional VSA way, but by using their controller VM on each host to directly interact with the local Fusion-io, SSD and SATA storage using RDMs and DirectPath I/O. They went on to talk about their unique distributed metadata service, Medusa, and distributed disk maintenance service, Curator, which help allow them to scale whilst keeping availability and performance at the forefront of the solution. I was also lucky enough to have a hands-on demo with Rob Tribe, the new Regional SE Manager EMEA; I was pleased to see that not only is Nutanix addressing a need I have seen for some time, particularly for VDI, but they are doing it with a very simple user interface. Watch my blog for a dedicated post on this subject as I dig further and get access to their lab.


The day was rounded off with Brian Madden thanking the attendees and announcing that they were looking into doing a BriForum in Australia in November (to the surprise of Gabe and the TechTarget staff; as I understand, it's only in the investigation phase at present). A large number of attendees then went over to the Nutanix launch party at The Grand Union Bar, where the announcements were met with beer and food. It was good to catch up with fellow vExperts Darren Woollard and Greg Robertson, as well as chatting with Brian Madden about everything other than work, including gin and curry amongst other things.

All in all I have had a very good couple of days: I picked up some good tips, made some good contacts and caught up with some familiar faces, and I'm looking forward to watching the sessions I was unable to attend. All I would hope for next year is that the vendor sessions could be tweaked; I know these guys sponsored the event, but a number of the vendor-sponsored sessions were over an hour of sales presentation rather than the subject that was in the show guide.

If you get the chance to go, there is another BriForum in July in Chicago, and Brian suggested that they would be back in London again next year.


Finally, the 4th HandsOnVirtualization podcast is now available to download from the links below. The podcast was recorded in two parts in November / December. In the first half Jonathan and I sum up our collective experiences at VMworld US and Europe, discuss the latest releases from VMware and Dell EqualLogic, and Jonathan discusses his recent attendance of Tech Field Day and some of the technologies that he saw whilst attending. Jonathan and I talk about the Dell Storage Forum that for the first time is coming to the UK, and Jonathan gives us an idea of what we can expect to see. In the second half of the podcast Jonathan and I chat to Devin Hamilton from Nimble Storage regarding their products and technologies.

The podcast can be downloaded from iTunes here >>

Or for non-iOS devices here >>


Nimble Storage

Dell Storage Forum

VMworld Official Site

Barry’s VMworld Experience

Tech Field Day 8

Jonathan’s VMworld Review



If you have any comments or suggestions, or would like to be on the podcast in the new year, please catch Barry on Twitter @virtualisedreal or email barry {[a t]} virtualisedreality dot com

Also keep an eye out in the new year for a blog post dedicated to Nimble Storage.

I have been working on this script on and off for a couple of weeks; the plan was to get my head around the new EqualLogic PowerShell tools and create a configuration dump that could be used to assist with documenting / analysing an EqualLogic configuration. I also used this as a chance to have a play with SAPIEN Technologies PrimalForms (Community Edition).

I have created a script which is linked to below. Please be aware this is currently a beta and I am looking for as much feedback on this as possible. The script will display a GUI to allow you to type in the connectivity information.


The script will check for the existence of the HIT Kit 3.5 or higher, if it is not present it will alert you.


The script checks for the existence of a folder called EqlReports on the root of the C: drive; if it doesn't exist it will create it. The report, once created, is named GroupName-Report-Date, so if you are testing the same SAN on the same day you will need to rename the files.

The final report is an HTML based report and will look something like this.


It includes group, replication, member, volume, volume ACL, snapshot and schedule information. Currently the IPs are displayed in IPv6 format, but in v2 I will add the IPv4 conversion.

The main PowerShell one-liners that are used are as below.

Group Configuration

Get-EqlGroupConfiguration | select groupname, groupaddress, grouptimezone,
groupdescription, smtpservers, grouplocation, ntpservers |
ConvertTo-Html -pre "<h1>Group Configuration</h1> " -Fragment

Replication Partner

Get-EqlReplicationPartner | select partnername, partneripaddress, 
partnerdescription, primarygroup, delegatedspacemb, replicationstatus |
 ConvertTo-Html -pre "<h1>Replication Partner</h1> "  -Fragment

Member Information

get-eqlmember | select membername, memberdescription, firmwareversion, 
defaultgateway, storagepoolname, raidtype, raidstatus, totalspacemb, 
freespacemb | ConvertTo-Html -pre "<h1>Member Information</h1> " -Fragment

Volume Information

get-eqlvolume | select volumename, volumesizeMB, StoragePoolName,
ThinProvision, onlinestatus, volumedescription, snapshotreservepercent,
replicareserveinuseMB, snapshotcount |
ConvertTo-Html -pre "<h1>Volume Information</h1> " -Fragment

Volume ACL Information

get-eqlvolume | get-eqlvolumeacl | select volumename, initiatorname,
username, initiatoripaddress, acltargettype |
ConvertTo-Html -pre "<h1>Volume ACL Information</h1> " -Fragment

Snapshot Information

get-eqlvolume | get-eqlsnapshot | select volumename, snapshotname,
snapshotsizemb, onlinestatus, creationtimestamp |
ConvertTo-Html -pre "<h1>Snapshot Information</h1> " -Fragment


Finally, a bit more in-depth was the schedule information; I only wanted the data for volumes with schedules, otherwise you got nasty blank entries with only titles. Big thanks must go to @virtualportal Steve Bryenn for assisting me with this code.

$sched = @()
$volumes = get-eqlvolume
foreach ($volume in $volumes) {
    # Only include volumes that actually have schedules
    $schedules = $volume | get-eqlschedule
    if ($schedules.volumename -ne $null) {
        $sched_report = $schedules | select volumename, schedulename,
            schedulestatus, scheduletype, startdate, starttime, enddate, endtime,
            repeatfactor, timefrequency, keepcount, accesstype, onlinestatus
        $sched += $sched_report
    }
}


You can download the script here; it is a .doc file to allow me to upload it here, so please rename it as a .ps1 file. I will host it in an alternative location soon.

EqualLogic HIT Kit 3.5

December 17, 2010

The EqualLogic HIT Kit (Host Integration Toolkit) 3.5 was recently released for early adopters, and as part of this there are now PowerShell tools available!

I am currently working on a number of scripts, specifically for EqualLogic health checks, automated configuration, and automating the process of adding new ESX hosts to VMFS volumes / VMFS volumes to hosts.

The cmdlets that are available are as follows:


The HIT kit and the user guides etc can be downloaded from

Today I was pleased to see the announcement of the Iomega ix12-300r. Moving on from the success of the ix4-200d and r, Iomega's latest device supports up to 12 disks, has 4 GbE connections and an Intel Core 2 Duo processor.


The key features of the device are as below:

  • Capacity: Starting at 4TB (4 x 1TB SATA-II Hard Disk Drives) with a maximum capacity of 24TB (12 x 2TB SATA-II Hard Disk Drives); 4-drive disk packs enable expansion to 8 or 12 drive configurations
  • RAID Support: On-disk data protection and organization configurable as RAID 0, 1, 5, 6 and 10, with automatic RAID rebuild and hot-swap drives
  • iSCSI Support: Provides block-level access for the most efficient storage utilization, especially for database and email application performance
  • VMware® Certified: HCL-certified NAS (NFS) and iSCSI storage for VMware ESX (vSphere)
  • Networking: Multiple network interface cards (NICs) can be configured for I/O failover
  • Expandability: Add storage capacity by simply adding drives in the unpopulated drive bays. Added drives can be brought into an existing Storage Pool or made into a new Storage Pool. Additionally, USB-connected Hard Disk Drives can be attached and used as shares. The StorCenter ix12 supports read and write on FAT32, NTFS, or ext2/ext3 formatted hard disks; read-only for external HFS+ formatted drives.
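To put the capacity figures above in context, usable space depends heavily on the RAID level chosen. The function below is a rough sketch under standard RAID arithmetic; real figures will vary with filesystem overhead and any hot spares, and are not Iomega's published numbers:

```python
# Rough usable-capacity estimates for the quoted drive configurations.
# Standard RAID arithmetic only; ignores filesystem overhead and spares.
def raid_usable(drives: int, drive_tb: float, level: str) -> float:
    if level == "0":
        return drives * drive_tb            # striping, no redundancy
    if level == "10":
        return drives * drive_tb / 2        # mirrored pairs
    if level == "5":
        return (drives - 1) * drive_tb      # one drive of parity
    if level == "6":
        return (drives - 2) * drive_tb      # two drives of parity
    raise ValueError(f"unknown RAID level: {level}")

print(raid_usable(12, 2, "6"))   # 20.0 TB usable from the 24 TB maximum config
print(raid_usable(4, 1, "5"))    # 3.0 TB usable from the 4 TB entry config
```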



Virtualisation Use Cases


Backups

As with the ix4-200d / r, the ix12-300r would offer a fantastic staging area for virtualised environment backups. Paired with a good backup product like Veeam, an affordable storage device like this means you can keep many restore points online for quick restoration. With a larger number of disks comes greater IOPS and the potential to speed up backups further.

Test and Development

The ix12-300r would be an ideal device for test and dev, saving you space on a costly production SAN. Whilst the rack-mount form factor wouldn't really be suitable for many home labs, I can see this new device being a great ISO / template store, as well as a staging area for new virtual machines in a business environment.

SMB Production Storage

Using a single-controller device in a production environment always makes me a little uneasy; I am a great fan of building as much redundancy into your virtual environment as possible. But in very small environments (e.g. two ESX hosts) or branch offices, the ix12-300r could be the very device you are looking for. Teamed up with a good backup strategy (maybe Veeam and an ix4-200r) and an understanding of the limitations / risk, this could be the device you need.


It looks like Iomega, with EMC's support, has done it again: the ix4-200d changed the look of many people's VMware home labs and SMB backups, and it looks like the ix12-300r could do the same for small office and branch office environments and test and dev / 3rd-line storage. I look forward to getting my hands on one and seeing how it performs. In the meantime I would love to hear your views and use cases for this new product.

More Information

Chad Sakac’s Blog EMC –

Chuck Hollis’s Blog EMC –

Iomega Product Page –

EMC Press Release –