Here are the good, bad, and ugly scenarios for backups.
- Ugly – Not having a backup. When the application or database crashes, you are screwed.
- Bad – Taking regular backups, but storing them on the same server. When the application or database crashes, you can restore from the backup located on the same server. But when the whole server crashes and you lose the backup along with it, you are screwed.
- Good – Taking regular backups, and storing them in multiple locations.
If you are in the ugly scenario of not having a backup, stop reading this article and immediately take a backup of your system. You can take a backup of your system using rsync, dd, or rsnapshot.
If you are in the bad scenario of taking backups but storing them only on the same server, consider storing them in multiple locations. The following are four different locations where you can store your backups.
1. Backups on the same server
This is probably the most straightforward approach: taking a backup of your critical information (applications, databases, configuration files, etc.) and storing it on a disk on the same server. Even if you’ve mounted a remote dedicated backup filesystem on the local server using NFS, I still consider that storing the backup on the same server. The disadvantage of this method is that when the whole system crashes, or if you do an accidental rm -rf / and erase everything on the system, you’ve lost your backup too.
Taking a backup and storing it on the same server is a good starting point. In addition, you should consider storing your backups in one of the following locations.
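A same-server backup typically looks like the sketch below: bundle critical files into a dated tarball on a local backup directory. All paths here are hypothetical demo locations.

```shell
# Sketch: archive critical files into a dated tarball on a local
# backup directory. Paths are hypothetical demo locations.
BACKUP_DIR=/tmp/demo-local-backups
SRC=/tmp/demo-config          # stand-in for /etc or your app directory
mkdir -p "$BACKUP_DIR" "$SRC"
printf 'port=8080\n' > "$SRC/app.conf"
STAMP=$(date +%Y%m%d)
tar -czf "$BACKUP_DIR/config-$STAMP.tar.gz" -C "$SRC" .
ls "$BACKUP_DIR"
```

Dated filenames let you keep several generations on disk and prune the oldest ones with a simple find command.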
2. Backups on a different server
Once you’ve taken the backup on the local server, copy it to a remote server. If you have a qa-server, take a backup of your production system and restore it on the qa-server. Ideally, you should assign a dedicated server with plenty of space to store backups. When you have a dedicated backup server, you can even initiate the backup from that remote server, and you don’t have to keep a copy of the backup on the local server.
For database backups, I prefer to take the backup on the local server and then copy it to a remote server. This way, the database backup is located at two different places; if you lose one copy, you still have the other. Also, when the database crashes on the local server, it is quick and easy to restore it from the backup on the same server, instead of first copying the backup from the remote server to the local server and then restoring it.
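The keep-local-and-copy-remote approach can be sketched as below. The host "backuphost" is a hypothetical remote server; to keep the sketch runnable without SSH access, a local directory stands in for the remote target.

```shell
# Sketch: keep the dump locally AND push a copy to a second machine.
DUMP=/tmp/demo-db-dump.sql.gz
printf 'CREATE TABLE t (id INT);\n' | gzip > "$DUMP"   # stand-in dump
# A real remote copy over SSH would look like (backuphost is hypothetical):
#   rsync -a "$DUMP" backup@backuphost:/backups/db/
# Local directory standing in for the remote target, so this runs anywhere:
REMOTE_STANDIN=/tmp/demo-remote-backups
mkdir -p "$REMOTE_STANDIN"
cp -p "$DUMP" "$REMOTE_STANDIN"/
ls "$REMOTE_STANDIN"
```

After this, the same dump exists in two places, which is exactly the two-location property described above.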
Note: Use mysqldump or mysqlhotcopy for MySQL database backups, pg_dump or pg_dumpall for PostgreSQL database backups, and RMAN for Oracle database backups.
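In practice these dumps are usually scheduled. A hypothetical crontab sketch is shown below; the users, paths, and credential handling are assumptions you must adapt, and note that % must be escaped as \% inside a crontab entry.

```shell
# Hypothetical crontab entries -- adjust users, paths, and credentials.
# Nightly MySQL dump at 01:30 (credentials read from ~/.my.cnf):
30 1 * * * mysqldump --single-transaction --all-databases | gzip > /backup/mysql-$(date +\%F).sql.gz
# Nightly PostgreSQL dump at 02:30:
30 2 * * * pg_dump -U postgres mydb | gzip > /backup/mydb-$(date +\%F).sql.gz
```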
3. Tape backup
If you don’t have a dedicated backup server to store copies of all your backups, implement a tape backup solution and store your backups on tape. Tape backups are slow, so take the backup on the local server first and copy it to tape during off-peak hours or weekends. The advantage of tape backups is portability: you can move the backups anywhere you want.
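Writing an archive to tape with tar can be sketched as below. On Linux a real tape device is typically /dev/st0 (rewinding) or /dev/nst0 (non-rewinding); a plain file stands in here so the example runs without tape hardware.

```shell
# Sketch: writing an archive to "tape" with tar. A file stands in for
# the device so this runs anywhere; use /dev/st0 on real hardware.
TAPE="${TAPE:-/tmp/fake-tape.tar}"
SRC=/tmp/demo-tape-src
mkdir -p "$SRC"
printf 'payload\n' > "$SRC/data.txt"
tar -cf "$TAPE" -C "$SRC" .
tar -tf "$TAPE"   # list what landed on the "tape"
```

On a real drive you would also use mt(1) to rewind and position the tape between archives.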
4. Off-site backups
You can do all of the above and still get into trouble when disaster strikes. If the local server, backup server, and tape backups are all located at the same physical location, you might lose all your data in a disaster. So it is important to store your backups off-site.
You can either have a redundant datacenter, where all the critical applications in your primary datacenter are synced to the disaster-recovery datacenter, or, at a bare minimum, keep a copy of the backup tapes at an off-site location. Don’t just physically rotate the tapes and keep them in the same datacenter; that is useless in a disaster recovery scenario.
Comments on this entry are closed.
I’m sorry, tape? These days? Really?
@Ramesh what type of tape drive do you suggest ?
Worst: you have a backup, deploy it to several locations, have it done daily, weekly, and monthly, have space to store backups for the last 5 years but… never tested it. When you finally need it, you try to use it and then you discover that your backup was done wrong. Then you are really screwed! 🙂
This is practically a nice article to read.
Now what a lot of TGS fans will need is practical exercises for those dummies who want to learn more, perhaps more links to practical examples.
Regards
Justice
Backup never tested is not a backup in my opinion!
Looks like an RMAN for Oracle database backup article should be on its way from geekstuff
😉
Then you have all three bases covered….
what are the advantages of RMAN over datapump?
🙂
Webmin is a handy tool for conf file backups and pushing them via ftp/ sftp to remote locations.
Having a copy of your backups at a remote site is _very_ important. Many businesses cannot justify a DR site (they can afford to be down during a disaster recovery). But without data, there will be no recovery. Protection from fires and natural disasters demands that you have a copy of reasonably current data stored offsite.
Particularly with tape- or optical-based remote backups, it may be convenient to use a remote storage site which doesn’t have particularly high physical security (such as a branch office). In that case, the backups should be encrypted. The keys should be stored remotely as well, but they must be in a secure location.
I agree with: Backup never tested is not a backup in my opinion!