4 Major Challenges with Traditional Disaster Recovery

Every company knows that you need tested backups of your data, and those backups must be kept offsite. What most companies don't know, however, is that new data-protection and cloud technologies take disaster recovery to the next level. In this chapter, you'll learn about those new technologies, how they work, and how they will modernize your disaster recovery systems to ensure that your enterprise is prepared when unforeseen disasters, small or large, hit the data center.

Challenges with Traditional Disaster Recovery

While most companies have a disaster recovery (DR) plan of some type, there are still many companies that don’t have any plan at all. The sad thing about this is that, according to the National Archives and Records Administration (NARA) in Washington, D.C., 93% of companies that lost their data center for ten or more days due to a disaster filed for bankruptcy within one year of the disaster. Additionally, 60% of companies that lose their data shut down within six months of the disaster.

Hopefully you aren’t employed by one of the 48% of companies we surveyed that don’t have a plan; if you are, it’s time to make DR a priority. After all, disasters do happen, and many of them aren’t large-scale catastrophes like fire, flood, tornado, or hurricane. Most disasters are much smaller in scope: human error, unexpected data corruption (e.g., due to a firmware upgrade on the SAN), or a ransomware attack.

For companies that do have a DR plan, many of those plans are out of date, rely on outdated technologies, or both.

Let’s review the top four challenges that enterprise companies face with traditional disaster recovery.

Challenge #1: Tape and Offsite Storage

The tried-and-true tape backup has been around forever. The great thing about tape storage is that it’s very reliable. The not-so-great thing about tape storage is that tapes are difficult to inventory, they can be easily lost or stolen, and they are time-consuming to test. Additionally, if your company needs fast restoration of applications and data after a disaster, tape storage isn’t going to be able to provide that; tapes must be recalled and restored—all of which takes a significant amount of time.

While offsite tape storage is still a reliable and affordable option for long-term offsite archival and data protection, it’s not the best option available today for disaster recovery, because businesses need better RPOs and RTOs than tape can provide. Speaking of RPO and RTO, what are they, and what are the challenges around them?

Challenge #2: Meeting RPO / RTO

Recovery point objective (RPO) is the maximum amount of data, measured in time, that is acceptable to lose in the event of a disaster. Recovery time objective (RTO) is the maximum amount of time that is acceptable for recovering applications in the event of a disaster.

For most companies using tape backup, their RPO is 24 hours (because they do a backup each night) and the RTO might be 48 hours, because that’s how long it would take them to recall the tapes, recover the data, and bring the applications back up.

While that timeframe and amount of data loss might be fine for a small business, it’s not going to be acceptable at medium and large enterprises. With thousands of employees working every day, the thought of losing a full day’s worth of data (a 24-hour RPO), and the cost of trying to re-create it, is unacceptable. With a 48-hour RTO, the company could be down for as much as two days before all applications are restored. Again, for most companies, that amount of downtime is going to be unacceptable.
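
To make the numbers concrete, here is a minimal sketch (Python, with illustrative figures and hypothetical function names, not drawn from any particular backup product) of how you might compare the RPO and RTO a nightly tape workflow can realistically deliver against the targets the business actually needs:

```python
from datetime import timedelta

def worst_case_rpo(backup_interval: timedelta) -> timedelta:
    """With periodic backups, the worst case is losing everything
    written since the last completed backup."""
    return backup_interval

def estimated_rto(tape_recall: timedelta,
                  data_restore: timedelta,
                  app_restart: timedelta) -> timedelta:
    """Traditional tape recovery is sequential: recall the tapes from
    offsite storage, restore the data, then restart the applications."""
    return tape_recall + data_restore + app_restart

# Illustrative numbers for a nightly tape backup (hypothetical, not measured).
rpo = worst_case_rpo(timedelta(hours=24))
rto = estimated_rto(tape_recall=timedelta(hours=12),
                    data_restore=timedelta(hours=24),
                    app_restart=timedelta(hours=12))

# Targets a mid-size or large enterprise might set (also hypothetical).
rpo_target, rto_target = timedelta(hours=1), timedelta(hours=4)

print(f"Worst-case RPO {rpo} vs. target {rpo_target}: met={rpo <= rpo_target}")
print(f"Estimated RTO  {rto} vs. target {rto_target}: met={rto <= rto_target}")
```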

Certainly, RPOs and RTOs can be reduced, but that leads to high costs...

Challenge #3: High Costs

In the past, it was widely accepted that “the shorter the RPO and the faster the RTO, the greater the cost of the disaster recovery solution” (in terms of hardware, software, and data transmission); however, that is changing with new DR solutions (which we’ll talk about later in this blog series).

With traditional DR solutions, to obtain short RPOs and fast RTOs, you had to use a SAN with synchronous replication and have dedicated wide area network (WAN) circuits between sites. In most DR designs, this required you to have your own secondary data center to send the replicated data to and to run your secondary servers and storage in the event of a disaster. Many DR replication solutions were designed for just one specific application; when you wanted replication for another application, you had to purchase another replication solution for that app. All of this semi-custom disaster recovery technology, plus the monthly bandwidth to support the movement of the data, resulted in a very high cost for a high-quality DR solution. Unfortunately, this put disaster recovery out of financial reach for many companies.
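
As a rough illustration of why the bandwidth alone gets expensive, the sketch below (Python, using hypothetical change rates purely for illustration) estimates the sustained WAN throughput needed to ship each day’s changed data within a given replication window:

```python
def required_wan_mbps(daily_change_gb: float, replication_window_hours: float) -> float:
    """Sustained throughput needed to ship one day's changed data within the
    replication window (ignores compression, dedupe, protocol overhead, bursts)."""
    bits = daily_change_gb * 8 * 1000**3            # GB -> bits (decimal units)
    seconds = replication_window_hours * 3600
    return bits / seconds / 1_000_000               # bits per second -> Mbps

# Hypothetical example: 500 GB of changed data per day.
print(required_wan_mbps(500, 24))   # spread across the whole day: roughly 46 Mbps
print(required_wan_mbps(500, 8))    # catch up overnight: roughly 139 Mbps
```

Dedicated circuits sized for those sustained rates, on top of a duplicate data center with its own servers and storage, are a large part of that cost.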

Challenge #4: Maintaining the DR Plan / Runbook

Another challenge associated with traditional disaster recovery solutions is maintaining the DR plan itself. This plan, commonly called the runbook, documents the steps administrators would take in the event of a real disaster. The runbook must include a plan for every application, covering its associated data, its user connectivity, and the sequential steps to recover it. With applications changing and moving constantly in the modern data center, the task of maintaining the disaster recovery runbook has become overwhelming for most companies. The result is that their DR runbook is out of date and, should a real disaster occur, they would be unable to meet the recovery time objective, and perhaps unable to recover at all, because the runbook no longer provides the information required to get the applications back up and running.
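
One way to keep that content from drifting is to treat the runbook as structured data rather than a static document. The sketch below (Python, with hypothetical field names and an invented example application) shows the kind of per-application record a runbook entry needs to capture:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RunbookEntry:
    """One application's recovery record: its targets, where its data lives,
    how users reach it, and the ordered steps to bring it back."""
    application: str
    owner: str
    rpo_hours: int
    rto_hours: int
    data_sources: List[str] = field(default_factory=list)
    user_access: str = ""
    recovery_steps: List[str] = field(default_factory=list)

# Hypothetical entry; real values come from the application and infrastructure teams.
order_entry = RunbookEntry(
    application="order-entry",
    owner="app-team@example.com",
    rpo_hours=1,
    rto_hours=4,
    data_sources=["orders database (SAN LUN 42)", "shared NFS export /orders"],
    user_access="Repoint the load-balancer VIP and VPN profiles to the DR site",
    recovery_steps=[
        "Restore or fail over the orders database",
        "Bring up the application servers",
        "Validate with a test transaction",
        "Redirect user traffic",
    ],
)
```

However it is stored, an entry like this is only useful if it is reviewed every time the application or its infrastructure changes, which is exactly the discipline most teams struggle to sustain.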

So, what's the solution to these challenges?

Find out with this free solution brief, "Discover the Next Phase of Disaster Recovery"