Some may wax nostalgic for the old days when, in the event of a disaster, business owners simply had to grab their Rolodex and run. Today’s businesses operate within a complex web of digital data, so companies must make sure it is both intact and accessible after a disaster.
That is where data replication comes in as an important subset of your larger disaster recovery efforts. By copying applications and data from a server at the primary site to another located outside the business (likely in another geographic region), typically in real time or near real time, a business can stay up and running in the event of everything from hurricanes to hackers.
However, data replication doesn’t just consist of copying and moving all of your data all of the time. Replicating all of an organization’s data is prohibitively expensive, so a key component of a data replication strategy is making sure that essential applications, processes and data sit highest on the priority list. For example, e-mail, CRM and financial systems are core applications that typically can’t be down for more than a few hours.
Nor is the data replication itself the only part of an overall data replication strategy to take seriously. In addition to identifying your must-have data, you have to figure out how you will access that data. When disasters occur, you might think, “It’s replicated. No problem!” But how will you get to that data once it’s in a failover state, and how will you ensure your strategy works when you need it? Do you need an MPLS circuit at your DR site, or will VPN tunnels suffice? Larger organizations typically have a private network that connects their different offices around the country. If their offices use that private network to talk to their server infrastructure in the production data center, they will need to have the DR data center connected to that same MPLS network in order for their users to communicate with it. If an organization needs a more cost-effective way of talking to the DR data center, VPN tunnels over the Internet are another option.
Here are three important best practices to keep in mind to make sure your data gets up and running efficiently and effectively:
- Tier data sets in order of importance. First, you must understand what applications you have and rank them in order of importance to optimize your budget. Due to high costs, data replication is typically reserved for essential applications and processes. Defining recovery time objectives (RTOs) and recovery point objectives (RPOs) is essential: you need to know how long you can go without your most critical applications and how much data loss your business can handle.
- Determine the optimal pace and sequence for bringing resources back up. In the event of a disaster, replicated data must be brought back online in a carefully determined sequence and at a deliberate pace. Certain applications depend on others to start; if you replicated 50 different servers, you can’t simply start them all up at once.
- Test your data replication and DR plan. An often-ignored aspect of DR is testing your plan. If your strategy includes replicating entire VMs, test that DR environment to make sure you have addressed infrastructure changes that may have occurred throughout the year. Suppose you have 50 systems and replicate 20 of them. When you fire everything up after declaring a disaster, what if you realize you forgot to replicate one core system that all of the other 20 depend on?
For example, you need to have your Exchange email server running all the time, even during a disaster, so it should be Tier 0, the mission-critical tier. Tier 1 applications might include your billing or order entry system, so you can take orders even though your production environment is down.
Keep in mind, you also need to make sure you understand where the files for each tier live so they can be backed up and replicated correctly. Are they in database files that need to go on a particular server? Are they already properly set up?
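As a rough sketch, the tiering exercise above can be captured in a simple application inventory. The application names, tier assignments and RTO/RPO targets below are illustrative assumptions only; every business must set its own.

```python
# Illustrative sketch of a tiered application inventory with RTO/RPO
# targets. Names and numbers are hypothetical examples, not guidance.
from dataclasses import dataclass

@dataclass
class App:
    name: str
    tier: int         # 0 = mission-critical; higher numbers = less critical
    rto_hours: float  # recovery time objective: tolerable downtime
    rpo_hours: float  # recovery point objective: tolerable data loss

inventory = [
    App("Exchange email", tier=0, rto_hours=1,  rpo_hours=0.25),
    App("Order entry",    tier=1, rto_hours=4,  rpo_hours=1),
    App("CRM",            tier=1, rto_hours=4,  rpo_hours=1),
    App("Reporting",      tier=2, rto_hours=24, rpo_hours=12),
]

# Replicate only the tiers the budget allows, most critical first.
replicated = [a.name for a in sorted(inventory, key=lambda a: a.tier)
              if a.tier <= 1]
print(replicated)  # ['Exchange email', 'Order entry', 'CRM']
```

Listing applications this way makes the budget trade-off explicit: everything below the replication cutoff is a conscious decision, not an oversight.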
Rather than starting everything at once, bringing resources back up in a recovery cloud scenario is almost like a dance, a slow waltz, as applications come back online carefully, step by step. For instance, domain controllers have to come up first, so authentication is available early. Then an Exchange email server that houses email might come up next. Finally, ancillary systems can get in line.
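That startup dance is, in effect, a dependency ordering problem. A minimal sketch, using hypothetical server names and dependency edges, could derive a safe startup sequence with a topological sort:

```python
# Sketch: derive a startup order from declared dependencies using a
# topological sort. Server names and edges are hypothetical examples.
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Map each system to the systems it depends on (its predecessors).
depends_on = {
    "domain-controller": set(),
    "dns":               {"domain-controller"},
    "exchange":          {"domain-controller", "dns"},
    "crm":               {"domain-controller", "dns"},
    "reporting":         {"crm"},
}

# static_order() yields each system only after all of its dependencies.
startup_order = list(TopologicalSorter(depends_on).static_order())
print(startup_order)
```

Encoding the dependencies once also catches circular dependencies up front: `static_order()` raises `CycleError` if two systems each wait on the other, which is far better discovered in a test than during a declared disaster.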
To give data replication the best chance of success, you need to go beyond testing the validity of the data. Also test the order of operations to ensure all systems communicate properly, and access the replicated files regularly to make sure they are not corrupted.
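One simple way to check replicas for corruption is to compare checksums of the source and the replicated copy. The sketch below uses in-memory stand-ins for file contents; in practice you would stream the real files from both sites.

```python
# Sketch: detect replica corruption by comparing SHA-256 checksums.
# The byte strings below are stand-ins for real file contents.
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_replica(source: bytes, replica: bytes) -> bool:
    """Return True when the replica matches the source byte for byte."""
    return sha256_of(source) == sha256_of(replica)

print(verify_replica(b"orders.db contents", b"orders.db contents"))   # True
print(verify_replica(b"orders.db contents", b"orders.db corrupted!"))  # False
```

A scheduled job running a comparison like this against a sample of critical files turns "access the replicated files often" from a good intention into a routine, auditable check.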
A cloud recovery solution such as Flexential’s DRaaS can help you identify your RTO and RPO, as well as make sure all of your data replication and DR plans are properly and regularly tested. Flexential helps all DR-subscribed customers with their testing twice a year. Contact one of our experts today at www.flexential.com.