
Yesterday was day 2 of the Dell Enterprise Forum EMEA in Frankfurt, Germany. Due to attending a number of NDA sessions there wasn't as much tweeting and blogging from me, but it was a very productive and insightful day. 

Every year what stands out to me at these events is not just the content and labs, but the access to the execs and great technical people at Dell. Speaking as a Dell Partner and a previous customer, the opportunity you get at these events is really unrivalled. It really does show you the levels of effort and skill behind the scenes, which I feel are sometimes masked by the time things reach partners and customers. 


My day started off with a session discussing storage in a VDI world, in particular Compellent configuration options. This was really insightful, and whilst the subject matter is something I feel very comfortable with, the insight into the Compellent specifics was a great help. The session also led to me speaking to some great technical people at Dell who specialise in VDI storage and configuration options. I am hoping to follow up with these guys in the future and will blog what I can. 


As mentioned, all my other sessions during the day were NDA, and whilst I can't blog about these, once again I want to highlight how beneficial it is to have access to these sessions and the subsequent access to the execs afterwards. 

Outside of the sessions I did sit the Compellent vSphere integration lab, which was good for seeing how the integration between the two works, particularly the web client integration. I also had some further time speaking to vendors and looking around the show floor. 


Random shot of hardware but I liked it! 🙂

During the evening reception yesterday at the Dell Enterprise Forum I spent some time speaking to some of the vendors and Dell staff on the stands on the show floor. One of the stands that grabbed my attention was the desktop virtualisation stand, with Dell Wyse Thin Clients and NVIDIA GRID technology on display. The Dell Wyse Cloud Connect is a pocket-sized thin client dongle; I had read about it when it was released but hadn't had a chance to see it in action.

The Cloud Connect was demonstrated on a lovely Dell 27" touchscreen display (I really need to find out more about this). The Cloud Connect runs an Android OS and, as well as supporting remote connectivity to all the usual VDI and session host providers, it also supports local Android applications and web browsing. The unit itself has built-in Wi-Fi and Bluetooth, a limited amount of storage with room to add more, and USB host ports for external devices. 

With its mobile form factor and the fact that you can manage these devices via a cloud portal, this could make a very interesting device for remote workers and education environments, amongst others. I work with a number of companies that wish to enable staff to use their home PCs to connect into VDI environments, but due to legislation that dictates only company-owned devices may connect to the VDI environment, this isn't possible. A cost-effective device like this could potentially solve that situation.

I need to spend some time with one of these units to figure out how feasible it actually is to use as a regular thin client; when I manage to get my hands on one I will post my results. If you are at Dell EF be sure to check it out on their stand. 

Dell Wyse Cloud Connect

This is a blog post that has been long overdue. I have blogged about Nimble Storage a couple of times while at VMworld, and Devin Hamilton (Director of Storage Architecture and Nimble's first ever customer-facing engineer) was also on one of the HandsOnVirtualization podcasts we recorded in the past. I sat down with a good friend of mine, Nick Dyer, around 6 months ago; at the time Nick had been with Nimble for only a few months, after previously being at Xsigo and Dell EqualLogic. We discussed who Nimble are and what makes them different to everyone else in the market place, and Nick also gave me a tour of the product's features and functionality.

Very recently Nimble announced Nimble OS 2.0, which this walkthrough is based on. Big thanks to Nick for helping me update this from 1.x to 2.x.

Home Screen

The home screen shows you a good overview of what is happening within your storage array. On the left we can see a breakdown of the storage usage, including snapshots; below this we can see our space saving, utilising the in-line compression technology for both primary and snapshot data. In the middle we have a breakdown of throughput in MB/sec and IOPS, broken down by reads and writes. Finally, on the right we have a breakdown of events over the last 24 hours.

Array Management

Prior to Nimble OS 2.0, the architecture worked with a frame / scale-up based design where you start with a head unit that contains 2 controllers, 12 high-capacity spinning disks and a number of high-performance SSDs. You can then increase capacity by attaching up to a further 3 shelves of high-capacity drives using the SAS connectors on the controllers, or scale performance by upgrading the controllers or swapping in larger SSDs. What is different about Nimble is that the architecture does not rely on drive spindle counts to deliver performance like traditional storage arrays, instead using multiple Intel Xeon processors to drive IOPS from the array. Nimble have now released version 2.0 of their software, meaning that scale-out is available as a third scaling method for any Nimble array. This now forms part of a Nimble Storage array "Group". Today Nimble supports up to 4 arrays in a group, each array supporting 3 shelves of additional disk. The theoretical maximums are thus ~280,000 IOPS and 508TB of usable storage in a scale-out cluster!
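The arithmetic behind those quoted maximums can be sketched as follows; note the per-array figures are my assumption, derived simply by dividing the group totals by the four supported arrays:

```python
# Rough sketch of the scale-out maths. Per-array numbers below are
# assumptions (group totals / 4), not published Nimble specs.
ARRAYS_PER_GROUP = 4
IOPS_PER_ARRAY = 70_000      # assumed: ~280,000 group IOPS / 4 arrays
USABLE_TB_PER_ARRAY = 127    # assumed: ~508 TB group capacity / 4 arrays

group_iops = ARRAYS_PER_GROUP * IOPS_PER_ARRAY
group_capacity_tb = ARRAYS_PER_GROUP * USABLE_TB_PER_ARRAY
print(group_iops, group_capacity_tb)  # 280000 508
```

The point is that both performance and capacity scale linearly with array count, unlike the shelf-only scaling of the pre-2.0 design.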

In the screenshot below, from a different system, we can see that a number of shelves are configured and that we have active and hot-standby controllers.

Nimble use an architecture called CASL (Cache Accelerated Sequential Layout), which is made up of a number of components. SSDs are utilised as a random read cache for hot blocks within the system, while random writes are coalesced through NVRAM, compressed and written sequentially to the RAID 6 nearline SAS spinning disks, resulting in write operations that Nimble claim can be up to 100x faster than traditional disk alone.
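The coalescing idea can be illustrated with a toy sketch (this is purely conceptual, not Nimble's actual implementation): scattered writes are buffered as NVRAM would buffer them, then compressed and flushed as one sequential stripe instead of many random disk writes.

```python
import zlib

# Toy model of write coalescing: buffer random writes (standing in for
# NVRAM), then compress and append them as a single sequential stripe.
class CoalescingLog:
    def __init__(self, stripe_size=8):
        self.buffer = []          # pending writes, as NVRAM would hold them
        self.disk = []            # sequential compressed stripes on disk
        self.stripe_size = stripe_size

    def write(self, lba, data):
        self.buffer.append((lba, data))
        if len(self.buffer) >= self.stripe_size:
            self.flush()

    def flush(self):
        if not self.buffer:
            return
        payload = b"".join(d for _, d in self.buffer)
        self.disk.append(zlib.compress(payload))  # one sequential write
        self.buffer.clear()

log = CoalescingLog()
for lba in [900, 17, 4023, 2, 555, 78, 1200, 64]:  # scattered LBAs
    log.write(lba, b"A" * 4096)
print(len(log.disk))  # eight random writes landed as 1 sequential stripe
```

Spinning disks handle one large sequential write far better than eight scattered ones, which is where the claimed write acceleration comes from.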

The compression within the Nimble storage array happens inline with no performance loss, and can offer between 30 and 75 percent savings depending on the workload.

Check out the following page for more information on CASL –

One of the nice features in the GUI is that when you hover over a port in the array screen, it highlights the corresponding physical port on the array and displays its IP address and status on screen.

When configuring the array with your ESXi or Windows servers you will use the target IP address shown below to configure your storage connectivity.

The network configuration on the array is easily managed. Nimble now has a dedicated "Networking" tab in the administration menu where the Group or individual arrays can be changed. From here we can also configure a new technology Nimble call "Virtual Target IP addresses", as well as create "Network Zones" to stop multipath configurations traversing and saturating inter-switch links. Both of these topics deserve a blog post of their own! It is now also possible to create multiple routes on the array, to allow for replication traffic for example.

Any individual port can be configured to either be on the management, data or both networks.

It's now also possible to save your network changes as a "Draft", and to revert your network settings back to the previously applied configuration – very handy in case something goes wrong!

Adding Arrays to Group

To deploy a Nimble array into the group, it is now as simple as clicking a button on the "Arrays" page in the Nimble GUI. We did this on a pair of fully-functional Nimble VMs.

The Group will then detect the unconfigured Nimble array (which must be on the same layer 2 broadcast domain). It is also possible to merge two Groups together from this screen!

From here all that's required are the physical network IP addresses for the new array's data ports. It will inherit all other configuration from the Group (e.g. replication, email alerts, autosupport, performance policies, initiator groups and more). This is a non-disruptive process, too!

Once IP addresses are configured, the new array is provisioned in the Group in the “default” storage pool.

Initiator Groups

Initiator groups are used to manage access to volumes. You start off by creating an initiator group for the servers that will require access to the same volumes (in this example, ESXi hosts), and then map your volume to the initiator group.
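As a minimal sketch of that workflow (my illustration; the group name and IQNs are hypothetical): initiators are grouped once, and volumes are mapped to the group rather than to individual hosts.

```python
# Hypothetical model of initiator-group access control: volumes map to
# a group, and any initiator in that group gains access.
initiator_groups = {
    "esxi-cluster": [                         # hypothetical host IQNs
        "iqn.1998-01.com.vmware:esxi01",
        "iqn.1998-01.com.vmware:esxi02",
    ],
}
volume_acl = {"vmfs-datastore-01": "esxi-cluster"}  # volume -> group

def can_access(iqn, volume):
    group = volume_acl.get(volume)
    return group is not None and iqn in initiator_groups.get(group, [])

print(can_access("iqn.1998-01.com.vmware:esxi01", "vmfs-datastore-01"))  # True
print(can_access("iqn.1998-01.com.vmware:rogue", "vmfs-datastore-01"))   # False
```

The benefit of the indirection is that adding a new host to the cluster means one change to the group, not one change per volume.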

Performance Policies

Performance policies are used to control the caching, compression and block size for volumes, tuning these settings to suit the use case. Out of the box there are a number already configured for the most frequent use cases, however it is entirely possible to create your own with your own requirements (e.g. creating a volume which will never be cache-worthy, which is very useful for backup-to-disk volumes). This is useful, as traditional storage arrays which utilise flash as a tier or cache very rarely have the intelligence to keep dirty data away from these very expensive resources.
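A performance policy is essentially a small bundle of per-volume settings; the sketch below models the shape of one (the field and policy names are my invention, mirroring the options described above, not Nimble's actual schema):

```python
from dataclasses import dataclass

# Hypothetical shape of a performance policy: block size, caching and
# compression tuned per use case. Names here are illustrative only.
@dataclass
class PerformancePolicy:
    name: str
    block_size_kb: int
    caching: bool        # is this volume allowed into the SSD read cache?
    compression: bool

policies = [
    PerformancePolicy("VMware ESX", block_size_kb=4, caching=True, compression=True),
    PerformancePolicy("Backup to disk", block_size_kb=32, caching=False, compression=True),
]

# A backup volume marked cache-unworthy never evicts hot VM blocks:
print([p.name for p in policies if not p.caching])  # ['Backup to disk']
```

This captures the key idea: keeping low-value sequential workloads like backup streams out of the expensive flash cache is a per-volume policy decision, not a global one.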


Volume collections are utilised for replicating and snapshotting volumes. A volume collection may contain multiple volumes, allowing you to synchronise snapshots and/or replication across multiple linked volumes. This may be useful for VMFS volumes that contain multiple related VMs, for example, or for your SQL log and database volumes.

Snapshots and replicas are able to be made fully consistent with the use of VSS integration direct from the array without the need to install additional software.

As Nimble uses a variable block size of 4/8/16/32KB, snapshots and replication are generally very space-efficient when compared to other arrays utilising larger block sizes. Also, all snapshots use compressed blocks, so it is not uncommon to see snapshots taken and retained for longer than 30 days on the array.
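A quick illustration of why granularity matters (my arithmetic, using generic copy-on-write behaviour rather than Nimble-specific internals): a snapshot only retains the blocks that changed, and a change must be rounded up to whole blocks.

```python
# Space a snapshot must retain for a given change, at a given block
# size: copy-on-write rounds the change up to whole blocks.
def snapshot_overhead(change_bytes, block_size):
    blocks = -(-change_bytes // block_size)  # ceiling division
    return blocks * block_size

change = 4 * 1024  # one 4 KB database page updated after the snapshot
print(snapshot_overhead(change, 4 * 1024))   # 4096 bytes retained
print(snapshot_overhead(change, 64 * 1024))  # 65536 bytes retained
```

The same 4 KB update costs 16x more snapshot space on an array tracking 64 KB blocks, which is why small, variable block sizes make long snapshot retention practical.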

As snapshots are so granular and carry no performance overhead, the limits are generous: currently 10,000 snapshots per array group and 1,000 per volume.

The image below shows the average snapshot change rate as a daily percentage that Nimble customers see for key use cases.


Within the volumes view under the Manage menu, you are able to see at a glance the performance and compression of each volume over the last 5 minutes by selecting the performance tab.

Individual Volume Breakdown

By selecting an individual volume in this view you get a more detailed breakdown of the configuration and performance utilisation of that individual volume. We are also able to edit the volume, set it offline and delete the volume from this same screen.

Individual Volume Snapshot Tab

By selecting the snapshot or replication tabs in the individual volume view you get a detailed breakdown of the usage, including the date and name of the snapshot / replica, its origin and schedule, as well as how much new data is kept within the snapshot and what compression ratio was achieved.

Replication Partner

Replication Partners are easily configured via a simple wizard accessed under Manage > Protection > Replication Partners. Replication can take place over either the data or management network, giving you the flexibility to decide where your replication traffic gets presented.

What I really liked about the replication configuration was the built-in quality of service that allows you to throttle replication; this could be extremely important for a small business utilising a single line for both replication and other business traffic.

After configuring the replication you get a very clear view of the policies configured and the volume collections replicating, you also see what the lag is between production and DR.


The Nimble arrays contain a dial-home support functionality called InfoSight. Each array contains 30 million sensors; when enabled, every 5 minutes the results from those sensors are rolled up into a log bundle and transmitted to Nimble support. Nimble's systems are then able to detect issues and failures and automatically raise cases, in many instances before the customer is even aware. Today over 90% of all support cases raised by Nimble are automatically generated and resolved, according to Nick.

Firmware updates are easily handled within the array itself, allowing you to check version information, download the latest firmware and upgrade the unit.

By default all volume and snapshot space on the Nimble array is thin provisioned; this can be customised for new volumes by configuring the volume's reserve, as seen above.


There are a number of monitoring options within the Nimble array, all found under the Monitor tab on the top menu. The example shows performance across the array; you can customise this view to show performance over a period from the last 5 minutes to the last 90 days as standard, and also focus on individual volumes.


That's it for my Nimble array walkthrough; I intend on delving a little deeper when possible in the future. I really like what Nimble are doing in this space, as they appear to be doing something different to most, and when digging deeper all the technical design decisions certainly make a lot of sense; based on the results I am hearing, customers seem to be very happy. Of course there are a huge number of ways to deliver the IO for your infrastructure, but Nimble are certainly cementing their place as a validated disruptive technology in this arena.

Something that interests me greatly is its use cases in VDI. Speaking to Devin, the arrays even love mixed VDI and server workloads; because writes are coalesced through the NVRAM, random workloads aren't a problem.

Dell Acquires AppAssure

February 26, 2012

On Friday Dell announced that they were to acquire backup vendor AppAssure; more information on the acquisition can be found here >>


In the announcement above Dell gave some clues to their intentions for AppAssure, with tight integration into the Dell storage portfolio to further their Fluid Data vision.

Dell will extend the benefit of AppAssure across our enterprise solutions and services portfolio. Initially, it will be a software-only solution, and then over time we will offer additional data protection solutions tightly integrated in our Fluid Data architecture as we’ve done with our other acquired IP, including EqualLogic, Compellent and the Fluid File System. Customers will be able to manage data end-to-end, not in silos of servers and storage, or islands of sites.

Other than a base understanding of the AppAssure product I haven't got any previous experience with using it, so I will leave those thoughts for another day. However, you can see why Dell would want a backup product in the fold: the likes of HP, EMC and NetApp are all able to offer a backup product with their storage solutions, and quite commonly I am asked what the best way is to back up VMs from a replica or snapshot, which isn't easily done without a third-party product at present.

However, Dell have always had pretty good relationships with backup vendors, particularly Symantec and increasingly Commvault; personally, if it had been Commvault that was acquired I wouldn't have been surprised at all.

Dell and Commvault

Dell and Symantec

My personal hope is that the acquisition of AppAssure doesn't cause Dell to put their guard up against other backup vendors on the market; as a hardware vendor I believe it is important for them to maintain the flexibility to allow end users to use their choice of backup products. I look forward to seeing how AppAssure is integrated into the storage family, and to personally learning what AppAssure has to offer.

Finally, the 4th HandsOnVirtualization podcast is now available to download from the links below. The podcast was recorded in two parts in November / December. In the first half Jonathan and I sum up our collective experiences at VMworld US and Europe, discuss the latest releases from VMware and Dell EqualLogic, and Jonathan discusses his recent attendance at Tech Field Day and some of the technologies he saw there. Jonathan and I also talk about the Dell Storage Forum, which for the first time is coming to the UK, and Jonathan gives us an idea of what we can expect to see. In the second half of the podcast Jonathan and I chat to Devin Hamilton from Nimble Storage regarding their products and technologies.

The podcast can be downloaded from iTunes here >>

Or for non i devices here >>


Nimble Storage








Dell Storage Forum

VMworld Official Site

Barry’s VMworld Experience

Tech Field Day 8

Jonathan’s VMworld Review



If you have any comments, suggestions or would like to be on the podcast in the new year please catch Barry on twitter @virtualisedreal or email barry {[a t]} virtualisedreality dot com

Also keep an eye out in the new year for a blog post dedicated to Nimble Storage.

On Tuesday I visited Dell Tech Camp 2011, taking place at the Roundhouse in London. The purpose of the Dell Tech Camp is to allow Dell to showcase its products to customers and partners. This event isn't really a conference, as there was only one formal presentation at the beginning; the introduction was led by Stephen Murdoch, Vice President and General Manager of Large Enterprise at Dell EMEA. The introduction concentrated on what Dell had been doing, and it was clear that social networking had hit Dell big time. Beyond the hashtag being advertised everywhere, Stephen explained how they had embraced social networking internally and are using a solution called IdeaStorm to assist with product development ideas. Stephen went on to explain how Dell practice what they preach, with 7,000 servers virtualised internally, gaining a 55 million dollar power saving at their Austin HQ, and that they had now reached their goal of becoming carbon neutral.

(I apologise for the poor quality photos, they were taken on my iPhone, I will take my DSLR next time!)

Stephen Murdoch

















Microsoft were next on stage, with Terry Smith talking about their special relationship with Dell and the cloud. Terry talked about how Microsoft is embracing a different world, with software-driven experiences and continuous cloud services for everyone: they already have over 400 million users on the cloud, including half of the Fortune 500. Microsoft Bing now has over 30% of the market share in the US, there are 30 million paying users of Xbox Live, and roughly 80% of Microsoft's developers are currently concentrating on the cloud.

After the introduction presentation we moved on to the central hall of the Roundhouse, which was separated into different areas showcasing Dell's various products.

Show Floor













My first port of call was to find the Dell storage stand and discuss the announcements made at the Dell Storage Forum (US) a few hours earlier.

EqualLogic and Compellent

















I was pleased to see on show an EqualLogic PS6000 series SAN, a Compellent SAN and the recently announced Dell EqualLogic FS7500, which will add NAS functionality to the EqualLogic product line. The FS7500 is powered by Dell's Scalable File System, which comes from their acquisition of Exanet, and will give clustered NFS and CIFS functionality to the EqualLogic range of SANs. It works by being added into the EqualLogic group for centralised management. It comes as two clustered server nodes and a backup power supply to protect the 24GB of cache within the unit. For full details of the EqualLogic FS7500 be sure to check out the product page here >>

The other announcement from EqualLogic was the new 5.1 firmware, which will introduce new load balancing methods. For those that don't know, EqualLogic virtualises the storage, which means your volumes get spread (currently) over up to 3 members; these volumes are then moved between the different types of storage based on the workload of the volume. With the new firmware this will now happen at the block level, meaning hotspots within a volume can be moved to faster disk, and this will kick in once you have at least 2 members in your group. Similar technology has already been used within EqualLogic's SAS / SSD XVS models. It has been reported that in the Dell labs this has meant an increase in performance of up to 3x on the same hardware! The new firmware will also support DCB (Data Center Bridging) to further enhance the stability of performance and QoS for iSCSI when using 10Gb NICs; watch out for more to come on this.
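The shift from volume-level to block-level balancing can be sketched conceptually like this (my illustration, not EqualLogic's actual algorithm): rather than placing a whole volume on one tier, per-block access counts decide which blocks get promoted to fast disk.

```python
from collections import Counter

# Conceptual sketch of block-level tiering: track per-block access
# counts within a volume and promote only the hottest blocks to SSD.
io_counts = Counter()

def record_io(block_id):
    io_counts[block_id] += 1

def hot_blocks(ssd_slots):
    """Pick the most-accessed blocks for the fast tier."""
    return [b for b, _ in io_counts.most_common(ssd_slots)]

for block in [1, 7, 7, 7, 3, 7, 3, 9, 3, 7]:  # a skewed IO stream
    record_io(block)

print(hot_blocks(2))  # [7, 3] move to fast disk; the rest stay on SAS
```

Under volume-level balancing, the cold blocks of a volume ride along with its hot ones onto expensive disk; per-block promotion is what makes the same hardware deliver more IOPS.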

I also used this opportunity to start understanding the Compellent product line. I will leave the details for another post when I know a little bit more, but Compellent now sits as EqualLogic's big brother in the Dell storage family and is based on a frame-based architecture supporting both iSCSI and FC.

Next for me was the Dell SMB stand, where my focus once again turned to storage and the new PowerVault MD3600f / MD3620f. This is Dell's first FC SAN in the PowerVault range and, to be honest, I was quite surprised to learn about it. Prior to the Compellent purchase Dell had been completely iSCSI across its EqualLogic and PowerVault products, and this seemed to be a set standard for them. I wonder if we will see FC reach the EqualLogic product line in the future? My gut feeling is that we won't, but then I didn't think we would see it with PowerVault either. Speaking to the representative on the stand, they explained this wasn't a shift in direction, just meeting demand; they acknowledge that some people have already invested in FC and some have a set mind that they want to use FC.

Photo 1













I visited the Dell user experience area and was shown the processes they go through when designing a new product. I was quite surprised to see the level of detail they go to when designing a business object such as a server. The representative explained that they always start on paper with the server design from an aesthetic point of view, but they like to get to a foam model as quickly as possible (as shown in the picture below, the middle server). Dell heavily involves user groups in the design process; the representative went on to explain how, when making the removable disk caddies, Dell had initially designed them to be gloss black in colour. The feedback was that this made them look like they were made from plastic rather than metal, so the final result was that this part was painted in a grey powder coat for a more industrial look.














I spent several hours walking around, looking at what was on offer and chatting to various Dell representatives. There were a number of Alienware laptops on display, including a model with a 3D display. I had a go at a first-person shoot-'em-up style game; it was very impressive, but personally I found wearing the glasses to be more of a distraction than an addition to the game.























There were also a number of solutions for specialist areas such as hospitals, the military and the police. The mobile datacenter below was displayed on the crime scene investigation stand and comprised 6 servers and an EqualLogic PS6010.

Photo 8






















This was my first Dell Tech Camp and I thoroughly enjoyed it; it was good to network with the Dell employees and see the technology in action. From my point of view I would have loved to have got some hands-on time with the technology and learnt the more in-depth aspects of the products. Hopefully this is something we can see added in the future; they certainly had all the technology there on site to do some form of hands-on breakouts.

I was recently lucky enough to have a conference call with some of the product managers at Quest Software (Vizioncore) about a number of their current and upcoming products. This first post is about their new product, announced at VMworld this week: vFoglight Storage. Brad Adamske (Product Manager) and Steve Paravola (Product Marketing Manager) took a few minutes of their busy days to explain the product to me.

vFoglight Storage is based on software produced by a company called Monosphere, which Quest purchased in December 2008; the product was then known as Storage Horizon.

vFoglight Storage provides a top down and bottom up view of your storage in your virtual environment. It not only displays and allows you to track capacity and performance statistics but also allows you to track the topology in real-time.

At release the product will work with all of the NetApp filers and EMC CLARiiON CX3s and CX4s; in the future HP EVA and Dell storage, amongst others, will be added to the product range.

Not only will it talk to the arrays but also to the storage switches, monitoring IO, throughput and latency; Cisco and Brocade are supported at the moment.


The product is completely agentless, working with the different manufacturers' APIs.

With vFoglight Storage you are able to track back and see how your performance differs from any time in the past. This is very useful for comparing against baseline figures when you believe you have a performance issue, or for seeing how the latest rollout of virtual machines has affected your storage.

Features and Benefits from Product Data Sheet

Performance Monitoring – Detailed metrics help you quickly identify performance problems that impact the virtual environment.

Capacity Monitoring – Detailed capacity monitoring helps you identify capacity issues, including how much storage each datastore is using.

Topology Views – On-the-fly graphical views show relationships between virtual and physical storage infrastructure, helping you determine if your storage setup best meets your organization’s needs.

Out-of-the-Box Alerts – Detailed alarms highlight deviations from industry best practices, allowing you to identify and resolve problems faster.

Reporting – Out-of-the-box reports on performance, capacity and alarms make it easy to communicate storage performance and capacity information to key stakeholders.


Initially vFoglight Storage will not plug in to the conventional vFoglight product, but this integration is on the roadmap for future releases.

I really can't wait to have a look at the technology, and I'm looking forward to the Dell storage integration as a large number of our customers run EqualLogic SANs.


If you are lucky enough to be at VMworld this week stop by booth #1113 to see a live demo for yourself.

We have just finished recording, editing and uploading the second podcast, with special guest Doug Hazelman of Veeam. Expect more of the usual virtualisation news and discussion, the latest from Dell Storage, and a chat with Doug about the recent announcement of Hyper-V support in Veeam.

If you are an iTunes user the podcast can be downloaded from here >>

If you aren’t an iTunes user you can download it from our Feedburner page here >>

A big thanks to Anton Le Char, who has given us our first review on iTunes! It's really good to get feedback and I would love to hear more about what people think and what they would like to hear in future episodes. Also, if you would like to be our special guest please ping us on Twitter @VirtualisedReal or @S1xth, or alternatively email me at barry(at)virtualisedreality(dot)com


Show Notes

Jonathan catch up

Dell Storage Forum 2011 in Orlando 1 week away
Dell Storage Forum 2011 – My Session – Virtualisation Case Study

Barry catch up

London VMUG (Cloud Day)
VMware HIT Kit ME


Topics of Discussion

1. Reminder of the Dell Storage Forum 2011
2. VMware purchase of Shavlik, SlideRocket, Mozy
3. Veeam announcement of HyperV support (What we would like to see in Veeam 6)
4. VMware Patch releases (May)
5. EQL Mem 1.0.1
6. EQL Firmware 5.0.5 release
7. VMware Horizon App Manager release
8. EMC World Iomega PX6-300D SSD – 100VMs Boot

Special Guest

Doug Hazelman – Veeam



I have been wanting to do a podcast for some time; I love working with virtualisation and the community, and any opportunity to discuss virtualisation is fantastic. What has typically stopped me is not having the right person to team up with to get it off the ground. Around a year ago I purchased the domain (yeah, I know virtualization is spelt the American way, but I was aiming this at a global audience). I had a number of ideas around what I wanted to do with this domain but never got around to it.

I revisited my ideas just over a month ago and decided it was time to get started with a podcast. The topic was easy to choose, with virtualisation as the primary subject, but with my love and knowledge of EqualLogic storage I also wanted to make that a key aspect of the podcast. Once this was decided I knew Jonathan Franconi (@S1xth) would be the man to approach, as we have been involved in many discussions on Twitter around these subjects. To my pleasure Jonathan was really up for it and had been wanting to do something similar for some time. Without any further ado we started planning, chatting on Skype, and our first episode has been recorded, edited and accepted by iTunes. Submitting the podcast to iTunes was a learning curve as I had never really looked into it before; unfortunately not all my metadata has gone across yet, but I am hoping that as of the next episode there will be full descriptions and a more relevant logo.

Episode 1

Jonathan has managed to secure Will Urban from Dell Storage as our first special guest; with the recent release of the EqualLogic VMware Host Integration Toolkit, it made for a very relevant and interesting subject for us to discuss.

Also in the podcast you will hear introductions from Jonathan and myself, discussion on our current challenges and work with virtualisation and Dell Storage.

Latest news from the industry including :-

The new EVGA PD02 PCoIP thin client

VMware View 4.6 including the View Security Server

VMware View and vSphere iPad applications

Latest News on the Dell Storage Conference

Equallogic 5.04 Firmware

Equallogic VMware Host Integration Toolkit

VMware vSphere PowerCLI Reference

The podcast can be downloaded direct from iTunes >>


Direct from TalkShoe where the podcast is hosted >>

For more information on the podcasts and blog posts regarding the subjects we discuss, please visit the site; it is currently a work in progress but hopefully we will have it up to speed within the next week.

For Jonathan’s blog please visit

Jonathan and I are hoping to make this podcast a monthly thing, if not more often when relevant. We would love to hear your feedback, and anybody who is interested in becoming a guest please get in touch: barry(at)virtualisedreality(dot)com. Even though this first podcast has a heavy slant towards EqualLogic / Dell Storage, we would love to broaden our horizons and cover other VMware-friendly storage solutions amongst other products.

2014 has been a busy year for me for many reasons, but I thought I would briefly summarise some of the highlights of the year, as well as some musings with regard to the future of the industry.


I have been lucky enough to attend a number of events this year, including BriForum, vForum and IPExpo in London, vForum in Manchester, the Dell Enterprise Forum in Frankfurt, VMworld in Barcelona, as well as a number of VMware User Group events. These events offer a great opportunity to meet individuals from the communities, and the technical deep-dive sessions really offer a valuable chance to get a better understanding of particular subjects from industry experts. I am looking forward to many events in the coming year, hopefully including BriForum and VMworld again, and I would also like to get a better understanding of the direction Microsoft, Amazon and Google are taking in the industry.

End User Computing

This year has been a year of improvement and maturity for end user computing: we have seen VMware acquire AirWatch for $1.54 billion and acquire CloudVolumes, as well as release Horizon 6. The subject of end user computing is becoming ever more defined and mature; we should no longer be awaiting the year of VDI, and the focus should be firmly around the user. There is no single right answer to end user computing. We should be concentrating on the users, their use cases and needs: what can we do to make our users more productive? This will be a hybrid mix of many technologies, from desktop PCs to VDI, mobiles, tablets and more. From a user perspective we need to ensure they can easily access their applications and data on whatever platform and wherever they are. From an administrator perspective we need to ensure this can be done in a secure way that will meet the user’s needs, and that it is easy to manage, monitor and upgrade. I like to practice what I preach, and my business processes and personal life are spread between a mix of devices and operating systems: I use a MacBook Pro as my main business device but also use an iPad mini, a Samsung Galaxy Note Pro 12.2 and a Windows 8.1 VDI desktop. For me the device should no longer matter, and it doesn’t, but it is imperative that the applications and data are where I need them when I need them.


We are starting to see the ever-growing importance of applications within the IT infrastructure. Whilst they have always been important, the attention of IT administrators and consultants maybe hasn’t always been purely on the applications but rather on the infrastructure used to run them. During 2014 it has become increasingly obvious that this is where the future of the IT industry lies: focusing not on keeping the cogs turning but on ensuring our applications are meeting our business needs. Integration and automation, not necessarily between infrastructure components but between applications, will be key in the software-defined world: how are you going to get SaaS application A talking to SaaS application B? With the focus on applications we are seeing growth in areas like Docker and OpenStack, and DevOps is key.

Hybrid Cloud

2014 for me was the year of the hybrid cloud. We saw VMware launch their first and second UK datacenters as well as a number of datacenters across the globe. From a customer perspective vCloud Air offers an easy way to understand how cloud will work within their business, with data residency guarantees that will suit their business needs, the ability to use the same tools they use to manage their existing private cloud, and the ability to move workloads between private and public clouds whenever required. We have seen customers trial and start to move production workloads to the cloud using vCloud Air.
For me the future of the hybrid cloud is more than simply your private and public infrastructures; SaaS will make up a big part of your infrastructure and, moving forward, will be ever increasing. We are seeing Office 365 becoming the norm for many Exchange upgrades, and new software installations will focus on SaaS first. Until we are able to replace all of our applications with SaaS alternatives, infrastructure is still going to be a key requirement, and this is where vCloud Air offers the flexibility that businesses need.

I think it is going to be interesting to see what the next Server OS from Microsoft is going to bring. You would assume that cloud integration will be baked in as standard, so that when deploying new roles you will get the choice of whether to deploy on premise or in Azure. We will have to wait and see. I think there is particularly going to be a lot of power in a Dropbox alternative baked directly into the Windows OS; imagine the simplicity of being able to access all the business shares on your Windows file servers from any device, anywhere, without a VPN or similar technology. The power, though, will have to be in data security.

Shared Storage Choice

As ever, a focus this year has been on shared storage. No matter which way the industry goes there is always going to be a growing demand for storage; whilst at present that is largely on premise, in the future cloud storage options are going to be ever more prevalent and important to our businesses.

We have seen the growth of many next-generation storage vendors such as Nimble and Pure Storage, and we have seen the hyper-converged market mature with Nutanix and SimpliVity alongside the launch of VMware EVO:RAIL and the announcement of EVO:RACK.

For me Nimble Storage has been a real standout, and we have seen some great reactions from customers when it is deployed in their infrastructures; it brings together simplicity and high performance with large capacity at a suitable price. Next year I am going to be interested to see how the adoption of hyper-converged infrastructures grows amongst my customers, particularly with Nutanix and EVO:RAIL / VSAN solutions.

Data Protection

As ever we have seen Veeam build upon their fantastic backup and recovery product with the release of v8, which sees improved methods of recovery and replication amongst other new features. Next year I would love to see them offer a product that allows you to back up your VMs whether they are on premise or in the cloud with vCloud Air, Azure or Amazon EC2. But for me the biggest challenge moving into a SaaS world is data protection. Many people seem to forget about data protection when moving their applications and data to the cloud, but is this correct? Should we be trusting these important assets to one provider, whoever they may be, or is having three copies of your data ever more important? I think the challenge of data protection in the cloud era is having a platform that will allow you to back up, protect and recover your data from a variety of resources to a different set of resources. Let’s say you are storing important business information with SaaS provider A; what happens if they go bust, or have a massive data breach or business continuity issue? Maybe you are taking a regular dump of data to a CSV file or similar, but what use is this to your business unless you can convert and recover your data to SaaS provider B? Without global standards between similar providers, this is where protecting SaaS applications will become difficult, and in my opinion it is a big challenge for our industry. Maybe, until this is solved outside of the main players like Microsoft and Google, companies will choose to turn to IaaS solutions and protect their data in a more traditional way. Or will they just take the risk and trust the providers?

Personal Achievements

I have really enjoyed taking part in a number of industry interview opportunities this year; I love sharing my thoughts and visions for the industry as well as getting to discuss these subjects with others. I have presented at a number of events, including the UKVMUG and my company’s own events, with a record number attending our most recent VMware event, which is growing year on year. The biggest challenge for me this year has been working on a second book, this time with co-author Peter von Oven; we are nearing the end now and are hoping that our book Mastering Horizon 6 will be published prior to April by Packt Publishing. My biggest achievement was to be made a director of the company I work for. I will be concentrating on pre-sales and operations for my business, and this gives me a great opportunity to continue learning and evangelising about technology, as well as getting involved with the internal processes and procedures within the business and understanding how modern applications will help it. I am looking forward to helping the business grow and become better known within the technology industry, as well as working on some exciting projects.

That’s all for now; there are so many more areas I could talk about, and 2015 is going to be an exciting year for many reasons. I hope to be able to catch up with many of you in the new year.

Happy new year.