The $64 000 question

Finding the best solution for your business’s disaster recovery requirements means taking a fresh look at what is available and applying common sense.

Our last blog alluded to a key principle of disaster recovery: make sure you are in control. This doesn’t mean that you can’t outsource to a specialist provider of disaster recovery services (indeed, for many companies this makes good business sense) but it has to be done intentionally and with adequate planning. It can’t just be presumed to be a sort of byproduct of, for example, cloud computing.

Another point to make about the cloud is that clients are usually not allowed to run their own virtualised environment on top of a cloud provider’s virtual infrastructure; in most cases this is considered a violation of the terms of service. Typically, a cloud provider offers a virtual operating system on which clients install their applications. This means the client has basic control of the operating system and full control of what runs on it, while the provider controls all the underlying functions.

The reality is that using the cloud for disaster recovery means less control over the finer details like adequate data protection.

By contrast, virtualisation means the client controls the whole stack, from tin to app. As we indicated in the first blog in this series, we believe that new technologies hold the key to developing a highly practical and affordable IT disaster recovery solution. Companies wanting to manage their own disaster recovery arrangements should first of all look at the open source virtualisation solutions that currently exist. It’s possible to buy a licence for around R400 to R500 per CPU from a company that offers support, making this option highly affordable.

As part of this virtualisation strategy, one could rent a server in a local data centre and another in a data centre in another area—for example, one in Johannesburg and one in Cape Town—and then replicate apps between the two sites. It’s actually not a very complex procedure at all.
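
To make this concrete, below is a minimal sketch of what such a replication job could look like, written in Python and assuming two rented Linux servers reachable over SSH. The hostnames, paths and 15-minute interval are purely illustrative, and in practice most teams would rely on the replication features built into their chosen virtualisation platform rather than a hand-rolled script.

# Minimal sketch of a scheduled replication job between two rented servers.
# The hostnames, paths and interval below are illustrative assumptions,
# not details taken from the article.

import subprocess
import time

PRODUCTION_DATA = "/srv/app/data/"  # data to protect on the production server
DR_TARGET = "dr-user@cpt-dr.example.net:/srv/app/data/"  # standby server at the second site
INTERVAL_SECONDS = 15 * 60  # replicate every 15 minutes

def replicate_once() -> None:
    """Push changed files to the disaster recovery site over SSH."""
    subprocess.run(
        ["rsync", "-az", "--delete", "-e", "ssh", PRODUCTION_DATA, DR_TARGET],
        check=True,
    )

if __name__ == "__main__":
    while True:
        replicate_once()
        time.sleep(INTERVAL_SECONDS)

Run on a schedule like this, the standby site never lags the production site by more than one replication interval, which is the figure that determines how much data could be lost in a disaster.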

These data centres should have audited, transparent diesel reserves or other contingency plans to cope with extended power outages. In fact, given the availability of affordable and plentiful bandwidth, some companies might even consider a third server overseas, in a country where power supplies are stable. This might put one on the wrong side of some data-privacy legislation, but if the risks seem to justify such a step, it might be worth investigating this option.

Such a structure would be relatively simple to set up. It would have the great advantage of allowing the company to switch between the disaster recovery and production environments. Because the production environment is mirrored on the disaster recovery sites, losing power at the production site would entail hardly any downtime for the IT systems, and little or no loss of data, depending on how often changes are replicated. For example, if changes are replicated every 15 minutes, at most roughly 15 minutes of updates could be lost.
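
As an illustration of how the need to switch might be detected, here is another small Python sketch, again using hypothetical hostnames and thresholds, that simply watches the production site and raises the alarm when it becomes unreachable. The actual switch-over (repointing DNS or starting the standby virtual machines) would depend on the tools in use.

# Illustrative sketch only: a simple monitor that watches the production site
# and flags when a switch to the disaster recovery environment may be needed.
# The hostname, port, thresholds and the alert step are assumptions for this example.

import socket
import time

PRODUCTION_HOST = "jhb-prod.example.net"  # hypothetical production server
CHECK_PORT = 443                          # service port to probe
FAILURES_BEFORE_ALERT = 3                 # consecutive failures before acting
CHECK_INTERVAL_SECONDS = 60

def production_is_up() -> bool:
    """Return True if the production service port accepts a TCP connection."""
    try:
        with socket.create_connection((PRODUCTION_HOST, CHECK_PORT), timeout=5):
            return True
    except OSError:
        return False

def main() -> None:
    failures = 0
    while True:
        if production_is_up():
            failures = 0
        else:
            failures += 1
            if failures >= FAILURES_BEFORE_ALERT:
                # In practice this is where DNS would be repointed or the DR
                # virtual machines started; here we simply raise the alarm.
                print("Production unreachable - consider switching to the DR site")
        time.sleep(CHECK_INTERVAL_SECONDS)

if __name__ == "__main__":
    main()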

Of course, work-area recovery for the staff at the production site would also have to be considered, as they too would be affected by an extended power outage.

Next time, to conclude this series for Business Continuity Awareness Week, we will share a few thoughts on testing and on when disaster recovery makes no business sense.