What is the best way to protect your virtual environments from getting deleted?

After more than a decade in the market, a new Gartner study concludes that virtualisation technology has finally reached maturity. Although the worldwide virtualisation market has again grown by almost 6% to reach 5.6 billion USD, the analysts expect little further business growth in the near future.

They found that almost every large company has already virtualised its servers to a high degree – many organisations report virtualised server rates in excess of 75%. Virtualisation is no longer a niche technology; it has become the mainstream norm.

With so many companies using virtualisation in their IT environments it is no wonder that over the last few years, data recovery experts have experienced a significant rise in requests to recover data stored in virtual environments and machines.

Virtualisation, in combination with other advanced server and storage technologies such as deduplication, hyper-converged storage or RAID, often results in additional layers of data that recovery experts have to dig through in order to successfully recover the lost data.

To help you avoid preventable virtual machine data loss, here are six tips on how to protect your virtual files from being permanently lost:

1.   Use the right backup software for your virtual environment

There are several backup software solutions on the market that can be used for virtualised environments. Some are compatible with both VMware and Hyper-V, but compatibility is actually not the most important factor to consider when selecting a product. The most critical question is how long it takes to restore the virtual machines (VMs), as well as the VM templates, from a backup. Additionally, bear in mind that good VM backup solutions can mount the backup directly, so the machine is usable while its files are transferred back to the main host system.

2.   A snapshot is not a backup

Always create backups if you want to make sure that, should a failure occur, everything can still be recovered in full. Snapshots remain useful when your changes are important and you don’t want to risk losing any data: they can cover the time frame between the last and the next backup. If you use snapshots, don’t chain them on top of each other. Some “experts” create six or seven layered snapshots, which not only reduces performance but also makes the setup far more error-prone. And damage to a VMFS volume or the failure of a physical server cannot be fixed with a snapshot.
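To make the chaining risk concrete, here is a minimal Python sketch. The snapshot names and the parent map are invented for illustration – real hypervisors expose this chain through their own management APIs – but the idea is the same: measure how many snapshot layers sit above the base disk and flag chains that have grown too deep.

```python
# Hypothetical illustration: snapshots modelled as child -> parent links,
# the way hypervisors track snapshot chains internally.
MAX_CHAIN_DEPTH = 2  # keep chains short; every extra layer costs I/O performance

def chain_depth(snapshot, parents):
    """Count how many snapshot layers sit between a snapshot and the base disk."""
    depth = 0
    while snapshot in parents:
        snapshot = parents[snapshot]
        depth += 1
    return depth

# Example chain: base <- snap1 <- snap2 <- snap3
parents = {"snap1": "base", "snap2": "snap1", "snap3": "snap2"}

for snap in parents:
    d = chain_depth(snap, parents)
    if d > MAX_CHAIN_DEPTH:
        print(f"WARNING: {snap} sits {d} layers above the base disk")
```

Running this flags only `snap3`, which sits three layers above the base disk – exactly the kind of chain you should consolidate rather than extend.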

3.   Don’t save backups and running VMs in the same place

If you save your backups on the same hard disk or storage space where your active VMs are located, you risk a total data loss. If a backup job fails while the VM is running, the active VM can easily overwrite the backup. To prevent this from happening, always keep backups and active VMs in separate locations. You should also make multiple backups and store them on a different server or hard disk, in the cloud and on tape. Having at least two additional backup copies protects you from permanent data loss.
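As an illustration of this rule, the following Python sketch checks that a backup plan keeps enough copies away from the storage hosting the running VM. All datastore names, media labels and the `placement_ok` helper are hypothetical; the point is the policy, not any particular product's API.

```python
# Hypothetical sketch: verify that backup copies of a VM do not all live
# on the same storage as the running VM itself.

def placement_ok(vm_datastore, backups, min_copies=2):
    """A plan passes only if at least `min_copies` backup copies sit outside
    the datastore hosting the running VM, on at least two kinds of media."""
    off_host = [b for b in backups if b["location"] != vm_datastore]
    distinct_media = {b["medium"] for b in off_host}
    return len(off_host) >= min_copies and len(distinct_media) >= 2

vm_datastore = "san01"
backups = [
    {"location": "san01", "medium": "disk"},    # same SAN as the VM: does not count
    {"location": "nas02", "medium": "disk"},    # second server
    {"location": "offsite", "medium": "tape"},  # offline copy
]
print(placement_ok(vm_datastore, backups))  # → True: two copies off the VM's SAN
```

A copy on the VM's own SAN contributes nothing here – only the copies on the second server and on tape count towards the minimum.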

4.  Don’t mix virtualisation solutions

If you use VMware and Hyper-V virtualisation solutions in the same environment, you may at some point see unexpected behaviour that can lead to data loss. If, for example, a VMware system on one SAN is expanded onto a second SAN used by Hyper-V, the second SAN may suffer severe data loss. The layers that different virtualisation tools create behave completely differently when they are linked and/or expanded. We therefore recommend keeping your virtualisation simple and sticking to one solution. Too many layers of complexity can crash your system and will make data recovery more complex and expensive – if it is possible at all!

5.   Choose wisely what technologies you use with your virtualisation solution

There are several technologies that can impact a virtualised environment, such as thin provisioning. Thin provisioning in its simplest form means that only the storage space actually needed is allocated at any given time. When additional space is required, it is allocated from whatever free blocks happen to be available.

This poses a dangerous risk: when data on a virtual system is lost, the system should be stopped immediately. If it continues to operate, other virtual disks running on the same physical storage may write their data into the “free” space that still holds your lost files.

Bearing this in mind, it is a good idea to consider if you really need to use such a complex technology and, when possible, always opt for a simpler approach. Should you ever experience a data loss, the recovery process would then be quicker and less expensive.
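The overwrite risk described above can be shown with a toy simulation of a thin-provisioned pool in Python. This is a deliberately simplified model, not any vendor's actual allocator: blocks freed by one virtual disk go straight back into the pool and are handed to the next disk that grows, destroying the old contents.

```python
# Toy model of a thin-provisioned pool: deleted blocks return to the free
# pool and can be handed to ANY virtual disk, overwriting the old data.

class ThinPool:
    def __init__(self, size):
        self.free = list(range(size))   # block numbers available for allocation
        self.owner = {}                 # block -> virtual disk currently using it

    def allocate(self, disk, n):
        blocks = [self.free.pop(0) for _ in range(n)]
        for b in blocks:
            self.owner[b] = disk
        return blocks

    def delete(self, blocks):
        for b in blocks:
            del self.owner[b]
        self.free = sorted(self.free + blocks)  # freed blocks are reusable at once

pool = ThinPool(size=4)
vm_a = pool.allocate("vm_a", 2)   # vm_a's data lives in blocks 0 and 1
pool.delete(vm_a)                 # the "lost" data: blocks 0 and 1 rejoin the pool
vm_b = pool.allocate("vm_b", 3)   # vm_b keeps running and is handed those blocks
print(set(vm_a) & set(vm_b))      # → {0, 1}: vm_a's old data is overwritten
```

The overlap is exactly why a system with lost data must be stopped immediately: every write by the surviving VMs shrinks what a recovery specialist can still get back.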

6.  Plan ahead before you use VMs

Always keep in mind that virtualisation and virtual machines are not error-free and are as likely to fail as any other technology. Before you create a virtual environment for critical applications, consider whether it is really the best option. Some applications have a high input/output (I/O) rate and are therefore better suited to physical server environments.

Planning ahead regarding virtualisation is key to preventing data loss. A problem we have encountered repeatedly in virtualisation data loss cases is an inadequately planned setup of the virtualised server and storage.

In addition, missing documentation of the VMs, the virtual servers and their connections to the applications (which use business-critical data) makes data recovery a costlier and more time-consuming task.

Beware! Even if you implement these basic tips, you may still be at risk of experiencing a data loss involving virtual environments and machines.

Every IT environment is unique and has its own advantages and disadvantages. It pays for the administrator in charge to have detailed knowledge of how the system works. Ideally, they should also know how to react in case of a failure or data loss.

However, if the administrator is not sure what to do, they should be wary of risking further data loss by tinkering with the VMs. Sometimes one small bad decision can make a data loss worse or even permanent. It is better to consult an experienced specialist directly – it pays off in the end!

Picture copyright: Martina Taylor  / pixelio.de