Restore, Backup’s Ugly Duckling

By now it’s obvious you’re doing backups. You have your 3-2-1 backup scheme humming in the background and take comfort knowing that your data is safe by duplication.

You probably sleep better than most, knowing your information is sheltered from faulty hardware, robberies and the like… but there’s something nagging at the back of your mind. When disaster strikes, will the restore actually work?

[Photo: content duplication comes in many shapes. Taken a couple of years ago.]

Truth be told, the vast majority of people who do backups only ever attempt a restore when they actually need it. And though that “all-hands-on-deck”, disaster-has-hit approach will work most of the time, it’s definitely not recommended.

Schedule a quarterly or biannual event and do a restore. You’ll thank me.

A myriad of scenarios where things aren’t working exactly as expected is bound to surface. The most probable one is that no backup was ever being made! I’ve seen it happen first hand. Somewhere along the way, over the years, something changed and all this time you were treading without a safety net.

But all of the above alludes to personal files, where recovery isn’t time-sensitive. What about backups pertaining to operational deployments? Full pipelines that exist to allow your company to function? If a support server inside your firm dies, what’s the process? What would it halt? How long would it take to bring it back up? In Business Continuity this is called the RTO, or Recovery Time Objective: the lower the time to have something back online, the better.

And that is one of the reasons why you should not only do scheduled restores but also have an automated recovery script. Luckily for us, what was an excruciating thing to set up a couple of years ago is now, thanks to Docker, something you can spin up on your own personal machine for almost anything you can dream of.

[Photo: thanks to Sean Kenney you can even spin a LEGO Docker whale onto your desk.]

For my pet projects I have an Atlassian Bitbucket installation running on an HP ProLiant MicroServer. Though I’m the solo developer on this instance, let’s imagine it’s the deployment of a small-sized company and we must implement a restore pipeline that, should disaster strike, will have a provisional copy running on an employee’s machine as soon as possible.

On the ProLiant MicroServer, Bitbucket data is stored both in a MySQL database and on the filesystem under /home/bitbucket. For simplicity’s sake, let’s forget about the zipping, unzipping and storage of such a backup. So after unpacking the backup we have:

  • a database dump at Desktop\restore\bitbucket.sql;
  • the Bitbucket home file structure under Desktop\restore\home\bitbucket.

What we need to do is:

1) Launch a MySQL container with our bitbucket.sql database

2) Launch a Bitbucket container with the filesystem mapped to the local Desktop\restore\home\bitbucket

3) We also need to download MySQL’s JDBC driver, which Bitbucket doesn’t natively provide

4) Your Bitbucket installation will probably be running under HTTPS or behind a reverse proxy, so we also have to strip all of that from the config

The script below, prep_for_local_docker_restore.sh, does steps 3 (MySQL JDBC driver) and 4 (change the properties file).
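
Since every installation is a bit different, here’s a minimal sketch of what that script could look like, assuming it’s run from inside the unpacked restore folder; the Connector/J version, the download URL and the bitbucket.properties keys are assumptions you should adapt to your own setup.

#!/usr/bin/env bash
# prep_for_local_docker_restore.sh
# Run from inside the unpacked restore folder. Driver version, download URL
# and property names below are assumptions; adjust them to your installation.
set -euo pipefail

BITBUCKET_HOME="./home/bitbucket"          # restored Bitbucket home from the backup
CONNECTOR_VERSION="5.1.48"                 # assumed MySQL Connector/J version

# Step 3: fetch the MySQL JDBC driver Bitbucket doesn't ship with and drop it
# into the home folder's lib directory, where Bitbucket picks it up on start.
curl -L -o /tmp/connector.tar.gz \
  "https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-${CONNECTOR_VERSION}.tar.gz"
tar -xzf /tmp/connector.tar.gz -C /tmp
mkdir -p "${BITBUCKET_HOME}/lib"
cp "/tmp/mysql-connector-java-${CONNECTOR_VERSION}/mysql-connector-java-${CONNECTOR_VERSION}.jar" \
  "${BITBUCKET_HOME}/lib/"

# Step 4: strip the HTTPS / reverse-proxy settings so the restored instance
# answers on plain http://localhost:7990, and point the datasource at the
# MySQL container defined in docker-compose.yml (service name "db").
PROPERTIES="${BITBUCKET_HOME}/shared/bitbucket.properties"
sed -i.bak \
  -e '/^server\.proxy-name=/d' \
  -e '/^server\.proxy-port=/d' \
  -e '/^server\.scheme=/d' \
  -e '/^server\.secure=/d' \
  "${PROPERTIES}"
sed -i.bak \
  -e 's|^jdbc\.url=.*|jdbc.url=jdbc:mysql://db:3306/bitbucket?useSSL=false|' \
  "${PROPERTIES}"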

The bash commands are pretty self-explanatory: curl to download things from the web and sed to do regex replacements. If you have doubts about those, drop me a message.

We then have a docker-compose.yml file that takes care of spinning up both the MySQL and the Bitbucket containers.
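
Again as a minimal sketch, assuming the compose file sits next to the unpacked bitbucket.sql dump and home folder and uses throwaway credentials, it could look something like this:

version: "2"

services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: root          # assumed throwaway credentials
      MYSQL_DATABASE: bitbucket
      MYSQL_USER: bitbucket
      MYSQL_PASSWORD: bitbucket
    volumes:
      # the dump is imported automatically on the container's first start
      - ./bitbucket.sql:/docker-entrypoint-initdb.d/bitbucket.sql

  bitbucket:
    image: atlassian/bitbucket-server    # pin a tag matching your backed-up version
    depends_on:
      - db
    ports:
      - "7990:7990"                      # web UI
      - "7999:7999"                      # git over SSH
    volumes:
      # map the restored home folder (repositories, config, JDBC driver)
      - ./home/bitbucket:/var/atlassian/application-data/bitbucket

Note that the MySQL image only imports the dump into an empty data directory, so each restore test should start from fresh containers (docker-compose down -v between runs), and the credentials above must match what the restored bitbucket.properties expects.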

Given this, restoring the full Bitbucket server to a computer running Docker would be as simple as opening up your shell and writing:

./prep_for_local_docker_restore.sh
docker-compose up

And in a matter of minutes you’d be able to point your browser to localhost:7990 and keep on going while waiting for the ProLiant server’s new disk.
