Preventing a Disaster: My Methodology–Part 3
Welcome back to part 3 of my four-part series on Preventing a Disaster. In Part 1, I discussed the importance of database integrity checks and possible ways to run them against VLDBs (very large databases). The second article covered backups in general.
In this entry, I would like to discuss the concept of “Off-Site Locations”.
It is my belief that to truly say you have an effective disaster recovery plan, your plan needs to include how to get your SQL backups off-site.
There are a couple of options that I prefer, and then there are less desirable options that are still technically effective.
Option 1 – Cloud Storage

Several cloud vendors provide storage containers, i.e. hard drive space. Of course, Microsoft SQL Server works best with Microsoft Azure. SQL Server 2012 SP1 CU2 and SQL Server 2014 added the ability to back up directly to an Azure storage container. This pretty much combines steps 2 and 3 into one efficient step. In April of 2014, Microsoft released a secondary tool that allows previous versions of SQL Server to back up directly to Azure as well.
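As a rough sketch of what backup to URL looks like, here is the general T-SQL pattern. The credential name, storage account, container URL, and database name below are all placeholders you would replace with your own values:

```sql
-- One-time setup: store the Azure storage account name and access key.
-- 'mystorageaccount' and the secret are placeholders.
CREATE CREDENTIAL AzureBackupCredential
    WITH IDENTITY = 'mystorageaccount',         -- storage account name
    SECRET = '<storage account access key>';

-- Back up directly to a blob in an Azure storage container.
BACKUP DATABASE SalesDB
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB_Full.bak'
WITH CREDENTIAL = 'AzureBackupCredential',
     COMPRESSION,      -- shrink the backup before it crosses the wire
     STATS = 10;       -- progress message every 10 percent
```

Note that the backup is written straight to blob storage, so there is no intermediate local file to copy and move afterward.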
However, there are some downsides in my opinion. Your server obviously needs an internet connection to the outside world, and you have to have purchased an Azure account with the appropriate storage size. And as that storage blob grows, so does your monthly bill.
The benefits of using Azure storage include compression, encryption, and seamless integration with SQL Server.
Option 2 – SAN Replication
One option that I have seen to be successful is what I am calling SAN replication. If your company has a backup datacenter in a different location, then chances are you have a SAN storage array there.
In this configuration, you would write native compressed SQL backups to a local SAN, other than your data SAN of course! Then that SAN is replicated to your secondary datacenter's SAN, using either SAN snapshots or true block-by-block replication.
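The SQL side of this setup is just a normal native backup pointed at the backup SAN; the replication happens below SQL Server and is invisible to it. A minimal sketch, where the UNC path and database name are placeholders:

```sql
-- Native compressed backup written to a share on the local backup SAN.
-- The SAN layer then replicates this volume to the secondary datacenter.
BACKUP DATABASE SalesDB
TO DISK = N'\\backup-san\sqlbackups\SalesDB_Full.bak'
WITH COMPRESSION,   -- smaller file means less data for the SAN to replicate
     CHECKSUM,      -- validate page checksums while writing the backup
     STATS = 10;    -- progress message every 10 percent
```

Compression matters here for more than disk space: the smaller the backup file, the less block-level data the SAN has to push to the remote site.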
This method can be very effective in getting your data off-site. It may take a little longer, but it usually gets the job done. The major downside is cost: running a second datacenter and a second comparable SAN is enormously expensive. That is one of the reasons cloud storage is becoming a more viable option as time goes by.
Option 3 – Personal Relocation
This is personally my favorite! (Just kidding.) In this method, you would take a native compressed SQL backup targeted to an external drive, and then at the end of the day personally carry that external drive off-site.
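The backup itself is the same native command, just aimed at the removable drive, and it is worth verifying the file before the drive leaves the building. A sketch, with the drive letter, path, and database name as placeholders:

```sql
-- Compressed backup to a USB external drive mounted as E:\ (placeholder).
BACKUP DATABASE SalesDB
TO DISK = N'E:\SQLBackups\SalesDB_Friday.bak'
WITH COMPRESSION,
     CHECKSUM,
     INIT;   -- overwrite the file left on this drive from its last rotation

-- Sanity check: confirm the backup is readable before it goes to the vault.
RESTORE VERIFYONLY
FROM DISK = N'E:\SQLBackups\SalesDB_Friday.bak';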
Now, I am sure some of you are laughing at this method, but with the cost of consumer hard drives rapidly decreasing, this is a very viable option for some smaller companies. I actually knew a company that did this every Friday: the IT manager relocated the USB external drive to a bank vault. The company purchased five drives large enough to hold a week's worth of backup files and rotated them out weekly. This method allowed them to keep 30 days' worth of backups off-site at all times.
This method is probably the cheapest; however, it is not necessarily the safest.
I am sure there are many other effective ways of getting SQL backups off-site; these are simply the ones that I have seen work successfully in the real world.
The important thing to remember, and the takeaway from this post, is to get your backups off-site. In the event your primary datacenter crashes, you need to be able to get your data restored ASAP. And if your most recent backup is on a server or SAN in that same datacenter, your recovery time has just increased dramatically.
Your backup plan is not complete until your backups are off-site!