Archives For VMware

2014 has been a busy year for me for many reasons, but I thought I would briefly summarise some of my highlights from the year, as well as some musings on the future of the industry.


I have been lucky enough to attend a number of events this year, including BriForum, vForum and IPExpo in London, vForum in Manchester, the Dell Enterprise Forum in Frankfurt and VMworld in Barcelona, as well as a number of VMware User Group events. For me these events offer a great opportunity to meet individuals from the communities, and the technical deep-dive sessions are a valuable chance to get a better understanding of particular subjects from industry experts. I am looking forward to many events in the coming year, hopefully including BriForum and VMworld again; I would also like to get a better understanding of Microsoft's, Amazon's and Google's direction in the industry.

End User Computing

This year has been one of improvement and maturity for end user computing: we have seen VMware acquire AirWatch for $1.54 billion, the acquisition of CloudVolumes, and the release of Horizon 6. The subject of end user computing is becoming ever more defined and mature; we should no longer be awaiting the year of VDI, and the focus should be firmly on the user. There is no single right answer to end user computing. We should be concentrating on the users, their use cases and their needs: what can we do to make our users more productive? The answer will be a hybrid mix of many technologies, from desktop PCs to VDI, mobiles, tablets and more. From a user perspective we need to ensure they can easily access their applications and data on whatever platform, wherever they are. From an administrator perspective we need to ensure this can be done in a secure way that meets the user's needs, and that it is easy to manage, monitor and upgrade. I like to practice what I preach, and my business processes and personal life are spread across a mix of devices and operating systems: I use a MacBook Pro as my main business device, but also an iPad mini, a Samsung Galaxy Note Pro 12.2 and a Windows 8.1 VDI desktop. The device should no longer matter, and for me it doesn't, but it is imperative that the applications and data are where I need them, when I need them.


We are starting to see the ever-growing importance of applications within the IT infrastructure. Whilst they have always been important, the focus of IT administrators and consultants hasn't always been purely on the applications, but rather on the infrastructure used to run them. During 2014 it became increasingly obvious that this is where the future of the IT industry lies: not keeping the cogs turning, but ensuring our applications are meeting our business needs. Integration and automation, not necessarily between infrastructure components but between applications, will be key in the software-defined world; how are you going to get SaaS application A talking to SaaS application B? With the focus on applications we are seeing growth in areas like Docker and OpenStack, and DevOps is key.

Hybrid Cloud

2014 for me was the year of the hybrid cloud. We saw VMware launch their first and second UK datacenters, as well as a number of datacenters across the globe. From a customer perspective vCloud Air offers an easy way to understand how cloud will work within their business: data residency guarantees to suit their needs, the ability to use the same tools they use to manage their existing private cloud, and the ability to move workloads between private and public clouds whenever required. We have seen customers trial vCloud Air and start to move production workloads to it.
For me the future of the hybrid cloud is more than simply your private and public infrastructures; SaaS will make up a big part of your infrastructure, and an ever-increasing one. We are seeing Office 365 become the norm for many Exchange upgrades, and new software installations will focus on SaaS first. Until we are able to replace all of our applications with SaaS alternatives, infrastructure is still going to be a key requirement, and this is where vCloud Air offers the flexibility that businesses need.

I think it is going to be interesting to see what the next Server OS from Microsoft brings; you would assume that cloud integration will be baked in as standard, so that when deploying new roles you get the choice of deploying on premise or in Azure. We will have to wait and see. I think there is particularly going to be a lot of power in a Dropbox alternative baked directly into the Windows OS: imagine the simplicity of being able to access all the business shares on your Windows file servers from any device, anywhere, without a VPN or similar technology. The power, though, will have to be in data security.

Shared Storage Choice

As ever, a focus this year has been on shared storage. No matter which way the industry goes there is always going to be growing demand for storage, and whilst at present that is largely on premise, in the future cloud storage options will become ever more important to our businesses.

We have seen the growth of many next-generation storage vendors such as Nimble and Pure Storage, and we have seen the hyper-converged market become ever more mature with Nutanix and SimpliVity, alongside the launch of VMware EVO:RAIL and the announcement of EVO:RACK.

For me Nimble Storage has been a real standout, and we have seen some great reactions from customers when it is deployed in their infrastructures; it brings together simplicity and high performance with large capacity at a suitable price. Next year I am going to be interested to see how the adoption of hyper-converged infrastructures grows among my customers, particularly with Nutanix and EVO:RAIL / VSAN solutions.

Data Protection

As ever, we have seen Veeam build upon their fantastic backup and recovery product with the release of v8, which sees improved methods of recovery and replication amongst other new features. Next year I would love to see them offer a product that allows you to back up your VMs whether they are on premise or in the cloud with vCloud Air, Azure or Amazon EC2.

But for me the biggest challenge moving into a SaaS world is data protection. Many people seem to forget about data protection when moving their applications and data to the cloud, but is this correct? Should we be trusting these important assets to one provider, whoever they may be, or is having three copies of your data ever more important? I think the challenge of data protection in the cloud era is having a platform that will allow you to back up, protect and recover your data from a variety of resources to a different set of resources. Let's say you are storing important business information with SaaS provider A: what happens if they go bust, or have a massive data breach or business continuity issue? Maybe you are taking a regular dump of data to a CSV file or similar, but what use is this to your business unless you can convert and recover that data to SaaS provider B? Without global standards between similar providers, protecting SaaS applications will become difficult, and in my opinion it is a big challenge for our industry. Until this is solved, perhaps companies outside of the main players like Microsoft and Google will choose to turn to IaaS solutions and protect their data in a more traditional way, or will they just take the risk and trust the providers?
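To make the portability point concrete, here is a minimal sketch of the kind of mapping layer you would need between two providers' export and import formats; the field names and both providers are entirely hypothetical:

```python
import csv
import io

# Hypothetical mapping from SaaS provider A's export columns
# to SaaS provider B's import columns.
FIELD_MAP = {"CustomerName": "name", "EmailAddr": "email", "CreatedOn": "signup_date"}

def convert(export_a: str) -> str:
    """Re-key a provider-A CSV dump into provider B's expected headers."""
    reader = csv.DictReader(io.StringIO(export_a))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(FIELD_MAP.values()))
    writer.writeheader()
    for row in reader:
        # Drop any columns provider B doesn't understand, rename the rest.
        writer.writerow({FIELD_MAP[k]: v for k, v in row.items() if k in FIELD_MAP})
    return out.getvalue()

sample = "CustomerName,EmailAddr,CreatedOn\nAcme Ltd,it@acme.example,2014-02-01\n"
print(convert(sample))
```

The re-keying is the easy part; the real challenge is the semantics of the data, which is exactly why common standards between similar providers would matter so much.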

Personal Achievements

I have really enjoyed taking part in a number of industry interview opportunities this year; I love sharing my thoughts and visions for the industry, as well as getting to discuss these subjects with others. I have presented at a number of events, including the UKVMUG and my company's own events, with a record number attending our most recent VMware event, which is growing year on year. The biggest challenge for me this year has been working on a second book, this time with co-author Peter von Oven; we are nearing the end now and are hoping that our book, Mastering Horizon 6, will be published by Packt Publishing prior to April. My biggest achievement was to be made a director of the company I work for. I will be concentrating on pre-sales and operations for the business, which gives me a great opportunity to continue learning and evangelising about technology, as well as getting involved with the internal processes and procedures within the business and understanding how modern applications will help it. I am looking forward to helping the business grow and become better known within the technology industry, as well as working on some exciting projects.

That’s all for now; there are so many more areas I could talk about. 2015 is going to be an exciting year for many reasons, and I hope to catch up with many of you in the new year.

Happy new year.


On Thursday this week I joined Ian Wells (Vice President, Veeam NEMEA) and Aseem Anwar (System Engineer, Veeam) to discuss the modern datacenter, protection and availability in a new-style live show.

We discussed how the landscape of the datacenter has changed and continues to change, and how this, coupled with the growing demands of our business users and customers, affects the requirements for availability and data protection.

The recording can be seen below and for more information check out this link >>

Watch out for more of these Veeam live shows and also live whiteboard sessions that the UK team are putting together, the first whiteboard session is happening next week >>

BitDefender with vShield Endpoint Architecture Doodle

Just a quick doodle depicting the architecture of Bitdefender Gravity Zone when utilised with vShield Endpoint.

When Bitdefender is used in conjunction with vShield Endpoint it allows you to offload AV and malware scanning from your VMs to the Security Virtual Appliances; this also allows deduplication of the data to be scanned.

Management and policies are handled via the BitDefender Security Console; updates to the AV signatures are also downloaded from here.

This can obviously be of assistance in a server environment, but if you are considering VDI, an AV solution that works with vShield Endpoint can sometimes allow you to configure fewer resources per VM and limits the risk of an AV storm.

This morning I was lucky enough to attend the VMware vCHS EMEA launch that took place at the amazing skyline bar and restaurant Paramount in the Centre Point Building, Oxford Street London.


The event was a press-focused event that fellow blogger Michael Poore and I were lucky enough to be invited along to. There were presentations by VMware CEO Pat Gelsinger, David Parry Jones (Regional Director, UK and Ireland), Bill Fathers (Senior VP and General Manager, vCHS) and Gavin Jackson (VP and General Manager, vCloud Services EMEA). Alongside VMware were customers from Betfair and Cancer Research UK, as well as VMware partners Softcat, Computacenter and The Internet Group.


Pat Gelsinger opened the presentation, discussing how the era we are in now is the most disruptive time in IT ever, with a tectonic shift in the way we consume and deliver IT services. We have seen minor tremors of this change so far, with the likes of Dell going private and IBM selling its x86 business for only $2.3bn. We are moving to the era of the mobile-cloud, which VMware is positioning itself to lead, with vCHS as a major component. Pat Gelsinger spoke about the closure of the AirWatch acquisition and joked about the experience of writing a $1.5bn cheque.


Pat Gelsinger went on to talk about vCHS and its importance to VMware and its customers, stating that off-premise IT is by far the largest area of growth today and explaining how current alternatives from competitors deliver locked-in models that differ greatly from what customers are deploying in their own datacentres. It is the hybrid nature out of the box that makes vCHS so relevant and easy to adopt; the customers in the room spoke about the ease of using vCloud Connector to connect their existing VMware virtual infrastructures to vCHS during the beta, and of extending their L2 and L3 networks to vCHS with ease.

With today's announcement we are seeing vCHS come to EMEA, and specifically the UK, for the first time, after its launch in the States in September of last year. VMware spoke about the importance of data sovereignty to many organisations: knowing where your data is, and which laws legislate over it, is critical to many businesses. As such VMware sees vCHS as a global effort and is planning further expansion in the future. At this stage vCHS EMEA is available from one datacentre in Slough, but there are already plans for a second to come online in the UK offering DR, as well as further expansion into Europe in general.

Prior to the launch of vCHS in EMEA there was a beta that was oversubscribed by 10x. The most common use case tested during the beta was test and dev, and 100% of beta participants were interested in the DR opportunities that vCHS could offer; these represent the two largest areas we have been speaking to customers about when it comes to cloud adoption. Speaking to Softcat's Solutions Director Sam Routledge, he also sees DR as an easy first step for customers, and once trust in the platform exists they will start trusting it with more of their production workloads. DR functionality isn't included out of the box in today's GA, but as I understand it there are plans to follow on with this service as soon as possible.

In my opinion vCHS will certainly offer the easiest way for my customers to start to move workloads and adopt the cloud: it is compatible with all versions of vSphere, and via vCloud Connector you are able to move VMs to and from the cloud from your existing vCenter Server with ease. Speaking to Sam Routledge and Ed Doleman (Head of Channel, VMware UK and Ireland), vCHS also offers partners many opportunities beyond simply providing their customers with access to it, with the chance to create bespoke services on the platform as well as helping customers to automate, orchestrate and manage their hybrid cloud platforms. Michael Bischoff, CIO of Betfair, spoke about the importance of automation in making vCHS work for them in their environment during the beta testing.

I am looking forward to achieving the partner accreditations over the coming weeks and starting to talk to my customers about the opportunities vCHS provides. I will also be looking to blog my experiences.

If you want to learn more, a good place to start is here >>

In this video we will restore an individual email and a whole inbox using Veeam Backup & Replication v7 and the Exchange Explorer component.

[Be sure to up the quality using the settings button and watch at full screen for the best viewing experience]

This week VMware held their annual US event in San Francisco and, as usual, there has been a mass of updates. In this post I will focus on vSphere 5.5 and what's new, but we have also seen a number of other releases and announcements in key areas.

To read the full list of what's new, check out the What's New in VMware vSphere 5.5 Platform document from VMware here >>

The updates to vSphere are separated into five key areas: ESXi Hypervisor Enhancements, Virtual Machine Enhancements, vCenter Server Enhancements, vStorage Enhancements and vSphere Networking Enhancements. Having a quick look through the list, there are some updates that I know a number of my customers and colleagues will be very happy about, including support for VMDKs up to 62TB in size and enhancements to Single Sign-On, amongst others.

vSphere ESXi Hypervisor Enhancements

There are three key enhancements to the hypervisor. The first is the ability to hot add or hot remove PCIe-based SSD devices such as Fusion-io cards; traditionally this may have been seen as a disadvantage of this type of SSD compared to SAS or SATA based hard disks. Second, the ESXi hypervisor is now able to make use of the CPU's Reliable Memory Technology to ensure key processes such as hostd and the watchdog run in the most reliable areas of memory, minimising issues from memory errors. Finally, the balanced policy for power management is now aware of the deep processor power state known as the C-state; previously it was only aware of the performance state (P-state). With this increased awareness it will introduce additional power savings, and may also increase performance due to the nature of turbo mode frequencies in Intel chipsets.

Virtual Machine Enhancements

Normally every year we hear how the monster VM can be bigger and better than last year's monster VM; this year is no different, but with some introductions that have been long awaited by some.

vSphere 5.5 brings another new virtual machine hardware version, version 10. Included in this version is a new virtual SATA controller allowing up to 30 devices per controller, so with a maximum of 4 controllers per VM we can now support double the number of disk devices, from 60 to 120 per VM. As to what the use case would be for this number of disks, I'm not sure, but if you have one let me know!

In vSphere 5.1 we saw the introduction of support for hardware-based GPUs, but it was limited to NVIDIA; with 5.5 we are now able to use both AMD and Intel based GPUs. There are three supported rendering modes (automatic, hardware and software), and vMotion can still be leveraged even across hosts with GPUs from different vendors. Check out the document linked above for more detail on this. For the first time we are also seeing GPU acceleration for Linux in this release.

vCenter Enhancements


One of the biggest improvements, which I know the engineers I work with are going to love, is the fact that SSO has been re-built from the ground up; this was an area of much frustration since the release of 5.1. With 5.5 there is an improved multi-master architecture with built-in replication and site awareness. On top of this there is now no database required, and a much-simplified single deployment model for all scenarios.

When installing you will now be presented with three options:

  • vCenter Single Sign On for first or only vCenter server
  • vCenter Single Sign On for an additional vCenter server in the same site
  • vCenter Single Sign On for an additional vCenter Server in a new site (Multisite)

VMware are now also publishing simplified recommendations for vCenter deployment options as follows.

Single vCenter Design Recommendation


Multiple Remote vCenter Server Design Recommendation


Mac Support

Another enhancement that I know will be popular with the community is the fact that the web client is now fully supported on Mac OS X, meaning you now have remote console support as well as the ability to mount CD-ROMs, etc. The usability of the web client has also been improved, with support for drag and drop, additional filter support and a new recent items navigation view.

vCenter Appliance

The embedded database within the appliance, previously aimed at small environments, has been re-engineered to allow up to 500 hosts and 5,000 virtual machines to be managed, meaning this limitation is no longer a barrier to adoption. As I understand it, though, you will still need a Windows VM for the Update Manager component, which for a smaller environment does limit the desire to implement the Linux-based appliance.

vSphere App HA

Whilst vSphere Application HA has been around for some time, it has always relied on third-party technologies to actually monitor your applications; with 5.5 that has changed. With the new vSphere App HA feature it is possible to monitor and detect an issue with an application service; upon detection the service will be restarted, and if that fails to resolve the issue the VM will be rebooted. This is also fully integrated with vSphere alerting to ensure you are aware of any resolved or unresolved issues. To deploy App HA you are required to deploy the AppHA and Hyperic appliances: the AppHA appliance stores and manages the vSphere App HA policies, and the Hyperic appliance monitors and enforces them. Once the appliances have been deployed, a Hyperic agent is installed in the virtual machines whose applications will be protected by AppHA.


The supported services listed in the beta documentation were as follows.


It is good to see IIS, MSSQL and Apache on this list, and it would be good to see MySQL supported in the future. I would also query the possibility of adding Domino and Exchange. Whilst these applications, like SQL, have many ways of protecting themselves, the ease and simplicity of this solution would be particularly useful for protecting your email services in a smaller environment.


vStorage Enhancements

Probably the most asked-for feature for me, especially since Hyper-V started supporting larger disks, was the ability to create virtual machine hard disks bigger than 2TB; with vSphere 5.5 we now have a vDisk and virtual mode RDM limit of 62TB. Whilst I believe there are usually better ways of storing large data for organisational and protection purposes, there are still a lot of people that need disks a lot bigger than 2TB.

There are also a number of improvements to enable the use of MSCS in virtualised environments; again this has been a sticking point for some of my customers in the past. With 5.5 the following configurations are now supported:

Microsoft Windows Server 2012

Round-robin path policy for shared storage

iSCSI Protocol for Shared storage

FCoE for Shared Storage

There is now true end-to-end support for 16Gb FC.

vSphere Replication has been enhanced to allow greater interoperability with Storage vMotion and Storage DRS, as well as introducing multi-point-in-time snapshot retention, meaning we can keep historical recovery points at the DR site to allow multiple different recovery options. My biggest gripe with vSphere Replication is the fact it doesn't allow you to test failover like SRM; whilst I can understand why VMware don't want to introduce this, it still makes the feature unusable for me. Your DR plan is only as good as your last test!

vSphere Flash Read Cache

With vSphere 5.5 a new feature called Flash Read Cache has been introduced, allowing performance enhancements for read-intensive applications by pooling multiple locally attached flash-based devices into a single vSphere Flash Resource, which is consumed in the same way as CPU and memory are today.


I will be blogging about this feature in more detail as soon as I can.


vSphere Networking Enhancements

There are a number of updates to networking for the distributed vSwitch; check out the document above for more detail.


Whilst many people may have been expecting to see vSphere 6 this year, I don't think the features in vSphere 5.5 will disappoint. I have not seen any updated licensing documents yet, so I don't fully understand where the new features will sit, but I would expect many of them, like Flash Read Cache, to appear at the top end. I will be digging a little deeper into these features as soon as I can, as well as the new VSAN beta!


I have recently worked with one of my colleagues (Josh Herbert) to set up demonstrations of every component of the Horizon Suite for a recent seminar I presented at. We presented the seminar alongside Peter von Oven, Senior VMware End User Computing Specialist, and wanted to ensure that customers not only understood what the Horizon Suite could do for them but also saw it for themselves in scenario-based and hands-on demos. The main points we wanted to get across were that VMware do much more than just VDI, and that a number of restrictions that have traditionally been around VDI no longer exist.

Regular readers of my blog will have seen the 8U mobile rack that I use for seminars before.


For the end user computing seminar the rack was upgraded to incorporate a Dell PowerEdge R720 so we could install the NVIDIA GRID K1 GPU for the 3D demos utilising VMware Virtual Shared Graphics Acceleration (vSGA). We use a mobile rack to allow us to take our seminars on tour to various regions. Also installed in the rack are an R620, Dell EqualLogic storage and Force10 switching.


We put together a number of demonstrations, including a Dell Wyse P45 Teradici zero client with 4 screens running 3D CAD demos utilising the NVIDIA GRID K1 card and vSGA, seen in the short video below.

We also had the Dell Wyse P25, the baby brother of the P45; it is also a Teradici zero client, with the ability to run two displays. The demo showed HD video, but also the functionality of Horizon Workspace, including Horizon Data and Horizon Application management over multiple devices; on display on the day were iOS and Android devices, as well as a Surface RT used for web-based demos of Horizon Workspace and the tech preview for View.


One problem we have been seeing is that many customers either have, or are looking at investing in, Microsoft Lync at present. Traditionally, using Lync for video and voice inside a VDI desktop, whilst possible, has been unsupported and the results were mixed. With Lync 2013, the Microsoft Lync Plugin for VDI, View 5.2 and a Windows-based thin client these restrictions can be lifted. To show this we had a live demo between two Wyse WES7 thin clients connected to View desktops taking part in live video calls.


Amongst the other demos we had a live demo of a Windows XP laptop being migrated to a Windows 7 laptop utilising VMware Horizon Mirage, with all user data and settings remaining intact. Using the same environment we were also able to demonstrate application layering, fixing a broken application in the base image and recovering user data, all using Mirage.


I am intending to record a number of demos around Mirage and Horizon Workspace and will place a copy of these online when I have done so.

Below you will see a short video we put together showing the demo environment that we used.

Within my lab environment I largely use a wildcard certificate for all my external services. This certificate was originally created on the Exchange server within my environment. Using this certificate on other Windows servers is generally an easy task of exporting the certificate with the private key and applying it to the new server; however, using it with Horizon Workspace was a little different.

Firstly the certificate needs to be configured on the Horizon Configurator appliance through the following URL >> https://horizonconfigname/cfg

You then select SSL Certificate from the left hand menu.


I re-downloaded the certificate from my certificate provider, opened it in Notepad and was able to import it into the SSL box. Ensure that you also copy the intermediary certificates into this box, immediately after your certificate; these were supplied by Go Daddy in a gd_bundle.cert file.


Exporting the private key from the Exchange server was a little more complex. First I exported the certificate as follows.

From an MMC console add the Certificates snap-in, ensuring you select Computer Account, Local Computer.


Browse to your wildcard certificate, right click and select export


The certificate export wizard will appear


Ensure you choose yes, export the private key.


Choose to export the certificate as Personal Information Exchange Format.


Finally you will need to input a password and choose where to save the certificate to.

Next we need to extract the private key from the certificate; the way I achieved this was with an application called OpenSSL.

Download the installer from here 

I chose the Win32 OpenSSL v1.0.1e Light variant, once downloaded I ran a simple Next, Next, Next installation. This installed the application to C:\OpenSSL-Win32

From a command prompt you will now need to run the following:

openssl pkcs12 -in [location of *.pfx file] -nocerts -out key.pem


You will be prompted to enter the export password, and then asked to create a pass phrase for the PEM file that is to be created.

Once this is done we are left with an encrypted private key file; the next step is to remove the passphrase encryption.
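The command I used for this step is shown below. The demo key generation is included only so the snippet can be run standalone; in the real workflow, skip that line and point `openssl rsa` at the key.pem produced by the pkcs12 command above.

```shell
# Stand-in for the encrypted key.pem from the previous step
# (in the real workflow, use your exported key instead).
openssl genrsa -aes256 -passout pass:demopass -out key.pem 2048

# Strip the passphrase so the key can be pasted into Horizon Workspace.
openssl rsa -in key.pem -passin pass:demopass -out key_nopass.pem
```

key_nopass.pem is then the unencrypted private key; treat it carefully and delete any temporary copies once it has been pasted into the Configurator.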


We now have a file that we are able to open in notepad and paste its contents into the Horizon Configurator.


Once we have pasted the key into the Private Key box we are able to select save.


We will now need to repeat this process on the Horizon Connector.


You should now be in a position to test Horizon in a browser to ensure the certificate is valid.


If you receive the following error ensure that you have pasted your intermediary certificates after your certificate in the SSL Certificate boxes shown above.

Request failed: PKIX path building failed: unable to find valid certification path to requested target
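A quick way to check what the appliance is actually serving is to count the certificates in the presented chain with OpenSSL from any client machine; the hostname here is a placeholder for your own Horizon gateway FQDN.

```shell
# Each certificate the server presents appears as a PEM block; with a
# wildcard certificate plus intermediaries the count should be 2 or more.
# horizon.example.com is a placeholder, not a real endpoint.
echo | openssl s_client -connect horizon.example.com:443 -showcerts 2>/dev/null \
  | grep -c "BEGIN CERTIFICATE"
```

If the count is 1, only the server certificate is being presented and the intermediaries are missing, which is consistent with the PKIX path-building error.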


The most complex thing about Horizon Workspace is remembering all the administration URLs. I hope in future versions we will see a more combined admin interface between components; in the meantime, here is a list of all the admin URLs, taken from the user guide.


Administrator Web interface (Active Directory user)

Manage the Catalog, users and groups, entitlements, reports, etc. (Log in as an Active Directory user with the administrator role.)


Administrator Web interface (non-Active Directory user)

Use this URL if you cannot login as the Active Directory user with the administrator role. (Log in as an administrator using the username admin and the password you set during configuration.)


Web Client (end user)

Manage files, launch applications, or launch View pools. (Log in as an Active Directory user or virtual user.)


Connector Web interface

Configure additional ThinApp settings, View pool settings, check directory sync status, or alerts. (Log in as an administrator using the password you set during configuration.)


Configurator Web interface

See system information, check modules, set license key, or set admin password. (Log in as an administrator using the password you set during configuration.)


I have recently been given the opportunity to take a deeper look at the Dell DR4000 backup appliance; as it now fully supports Veeam it was of particular interest to me. The DR4000 is a server appliance based upon the de-duplication technology Dell acquired from Ocarina.

Getting from out of the box to up and running was very quick and easy, with a text-based wizard guiding me through the initial steps before moving on to the web-based user interface. The initial configuration took no more than 10-15 minutes of my time before I had access to the user interface.


Once the initial configuration was completed I logged into the user interface; the default username is administrator and the password is St0r@ge!. Once you are logged in you get a dashboard view of the appliance; the screenshot below was taken after a number of backups with Veeam. As you can see, I managed to get a total de-duplication saving of 63% across the Veeam backup jobs that I wrote to the appliance. I only had the opportunity to write a small number of backups, but presumably the more backups written, the greater the potential savings. To break it down, Veeam backed up 1.9TB of VMs to a 391GB full backup file, a 46GB reverse incremental and a further 7GB backup file; after de-duplication and compression on the DR4000 these files were down to 168GB.
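As a quick sanity check on those figures, the savings percentage is simply one minus stored-over-written; a couple of lines of Python using the sizes quoted above lands within a rounding point of the dashboard's 63%.

```python
# Sizes Veeam wrote to the share, in GB: the full backup, the reverse
# incremental and the further incremental quoted above.
written_gb = 391 + 46 + 7
stored_gb = 168  # what the DR4000 actually kept after dedupe and compression

savings = 1 - stored_gb / written_gb
print(f"{savings:.0%}")  # prints 62%
```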


One thing to note: in my haste I only plugged in one PSU and one NIC rather than the full four available, and they show as errors on the hardware health page.


The box comes configured out of the box with a backup container ready for you to write your Veeam, CommVault, AppAssure etc. backups to. You are also able to run through a very simple wizard to create other containers.

New Container

You are also able to configure the compression level across the appliance: you can favour maximum compression, though there could be a performance hit, or fast compression, to get a happy medium.


One of the elements I really like about the DR4000 is the ability to use it to take your backups offsite; you are able to quickly and easily configure one box as a target and one as a source, and a single DR4000 can act as a target for up to 5 source boxes.



Once you have your backups writing to the appliance there is a good range of stats and usage statistics to tell you what is going on.



The only configuration needed from a Veeam perspective was to set the job's dedupe-friendly compression option and add a new repository pointing at the CIFS share.



For me, simplicity and effectiveness are key when it comes to backups: they are one of the most important elements of IT for your business, but they aren't something you want to have sleepless nights worrying about, and this is one of the reasons I like Veeam so much. For me the DR4000 adds to these elements, offering further compression and the ability to simply and effectively move your backups offsite.

I have a video demo of this whole setup available; if anyone is interested please let me know.