Last time, we argued that tape backups have been superseded by newer, more reliable technologies, and that companies should be surveying the range of virtualisation vendors to get the best deal. Virtualised environments can be easily cloned to the disaster recovery site, and proper testing becomes practical (more on testing in a later blog).
Despite all the reasons for making the move, many companies still cling to tape backups. One reason for the inertia could be that auditors still look for evidence that tapes have been made; the auditing industry clearly needs some education on this point. Companies that do make the change should plan to explain the benefits to their own auditors and make sure they understand the difference between cloning and replication.
Another reason could be simply humanity’s natural conservatism—it always seems better to stick with the familiar. Again, a proactive approach is required, and adequate training of IT staff on the new technology will pay dividends.
The next red flag in the typical IT disaster recovery plan relates directly to the power problems mentioned in the previous blog. Most IT departments already have backup generators in place, but are they adequate? Many companies have not purchased “industrial-strength” generators, so theirs cannot run for longer than a few hours. As noted earlier, while we don’t think a national blackout is likely, a regional one is a distinct possibility. Does your company have a generator that could run for longer than a day at a time, and enough fuel to do so?
One point worth making here is that if the disaster recovery environment is virtual, it is entirely possible to switch over to the disaster recovery site at short notice, before the diesel runs out or the generator’s tolerance levels are reached, presuming the alternative site has some form of power as well. Playing ping-pong between sites in this way would, in theory, let a household-type generator ride out a longer outage. Obviously, a proper DNS and routing design would be needed to avoid reconfiguring end-user desktops and system interconnections every time you switch.
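To make the DNS point above concrete, here is a minimal sketch of the monitoring side of such a design: a service name carries a short TTL (say, 60 seconds), failover is done by repointing that name, and a check confirms which site is currently live. The addresses are RFC 5737 documentation IPs and the hostname is invented for illustration; none of this comes from a specific product.

```python
import socket

# Hypothetical site addresses -- substitute your own (RFC 5737 documentation IPs).
PRIMARY_IP = "203.0.113.10"
DR_IP = "198.51.100.10"

def active_site(resolved_ip: str) -> str:
    """Given the IP a short-TTL service name currently resolves to,
    report which site is live."""
    if resolved_ip == PRIMARY_IP:
        return "primary"
    if resolved_ip == DR_IP:
        return "dr"
    return "unknown"

# In practice you would feed this the result of a DNS lookup after a switch,
# e.g. socket.gethostbyname("app.example.com") -- hostname invented here.
print(active_site("198.51.100.10"))  # dr
```

The design choice that matters is the short TTL: because clients resolve the service name rather than a hard-coded address, they follow the repointed record within a minute or so of the switch, and no desktop or interconnection needs reconfiguring.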
Clearly, it’s better to purchase the right generator!
In any event, it seems wise to ensure a three-day supply of diesel to allow for delays in sourcing it when one is competing with other businesses in the affected area.
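Sizing that three-day reserve is simple arithmetic, sketched below. The burn rate is a hypothetical figure, not from the original; take the real number from your generator’s data sheet at the load you expect it to carry.

```python
def diesel_reserve_litres(burn_rate_l_per_hour: int, days: int) -> int:
    """Fuel needed to run a generator continuously for the given number of days."""
    return burn_rate_l_per_hour * 24 * days

# Example: a generator burning an assumed 20 litres/hour, held for three days.
print(diesel_reserve_litres(20, 3))  # 1440 litres
```

Even at this modest assumed burn rate the tank is well over a thousand litres, which is why the reserve needs to be planned and stored in advance rather than sourced mid-outage.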
Many seem to hold the view that the cloud offers a trouble-free way to insure against disaster. Next time, some thoughts about why that might be an over-optimistic viewpoint, especially when power systems are unstable.