By now you’re doing backups. You have your 3-2-1 backup scheme humming in the background and take comfort knowing that your data is safe by duplication.

You are probably sleeping better than most knowing that your information is sheltered from faulty hardware, theft, and the like, but there’s something in the back of your mind pestering you: when disaster hits, will the restore actually work?

Content duplication comes in many shapes (photo taken a couple of years ago).

Truth be told, the vast majority of people who do backups only attempt a restore when they actually need one. And though that “all-hands-on-deck”, disaster-has-hit approach will work most of the time, it’s definitely not recommended.

Schedule a quarterly/biannual event and do a restore. You’ll thank me.
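If a recurring calendar invite feels too easy to dismiss, you can even let cron nag you. The path and schedule below are just an example, and `restore_drill.sh` is a hypothetical script of your own:

```shell
# Example crontab entry: kick off a restore drill at 09:00 on the
# first day of January, April, July and October (i.e. quarterly)
0 9 1 1,4,7,10 * /home/you/bin/restore_drill.sh
```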

A myriad of scenarios where things may not be functioning exactly as expected is bound to surface. The most probable one is that no backup was ever being made! I’ve seen it happen first hand. Somewhere along the way, over the years, something changed and all this time you were treading without a safety net.

But all of the above alludes to personal files, where recovery is not time-sensitive. What about backups pertaining to operational deployments? Full pipelines that exist to allow your company to function? If a support server inside your firm dies, what’s the process? What would it halt? How long would it take to bring it back up? In Business Continuity this is called the RTO, or Recovery Time Objective: the lower the time to have something back online, the better.

And that is one of the reasons why you should not only do scheduled restores but also have an automated recovery script. Luckily for us, what was excruciating to set up just a couple of years ago is now, by way of Docker, something you can do on your own personal machine: spin up almost anything you can dream of.

Thanks to Sean Kenney you can even spin a Docker LEGO Whale onto your desk.

For my pet projects I have an Atlassian Bitbucket installation running on an HP ProLiant MicroServer. Though I’m the solo developer on this instance, let’s imagine it’s the deployment of a small-sized company and we must implement a restore pipeline that, should disaster strike, can have a provisional copy running on an employee’s machine as soon as possible.

On the ProLiant MicroServer, Bitbucket data is stored both in a MySQL database and on the filesystem under /home/bitbucket. For simplicity’s sake, let’s forget about the zipping, unzipping and storage of the backup itself. After unpacking the backup we have:

  • a Desktop\restore\bitbucket.sql database dump.
  • a Desktop\restore\home\bitbucket folder with the file structure.

What we need to do is:

1) Launch a MySQL container with our bitbucket.sql database

2) Launch a Bitbucket container with the filesystem mapped to the local Desktop\restore\home\bitbucket

3) Download MySQL’s JDBC driver, which Bitbucket doesn’t ship natively

4) Your Bitbucket installation will probably be running under HTTPS or behind a reverse proxy, so we also have to strip all of that from the config

The script below, prep_for_local_docker_restore.sh, handles step 3 (MySQL JDBC driver) and step 4 (changing the properties files).

#!/bin/bash

# Read variables (MYSQL_USER and MYSQL_PASSWORD) from the .env file
if [ -f .env ]; then
    export $(grep -v '^#' .env | awk '/=/ {print $1}')
fi

# download the MySQL JDBC connector (Bitbucket and Jira both need it)
if [ ! -f mysql-connector-java-5.1.47.zip ]; then
    curl -O https://cdn.mysql.com/archives/mysql-connector-java-5.1/mysql-connector-java-5.1.47.zip
    unzip mysql-connector-java-5.1.47.zip
    mkdir mysql_driver
    cp mysql-connector-java-5.1.47/mysql-connector-java-5.1.47.jar ./mysql_driver/
fi
 
# move sql dumps to the mysql directory, which docker-compose mounts
# as MySQL's init directory (Jira and Bitbucket)
if [ ! -d mysql ]; then
    mkdir mysql
    mv *.sql mysql
fi
 
# change all server specific settings (like https, proxy, etc)
sed -i -e 's/^server/#server/g' home/bitbucket/shared/bitbucket.properties
sed -i -E "s/(jdbc.url=jdbc:mysql:\/\/)(.*)(:3306)/\1bitbucket_db\3/g" home/bitbucket/shared/bitbucket.properties
sed -i -e "s/^jdbc\.user=.*/jdbc.user=${MYSQL_USER}/g" home/bitbucket/shared/bitbucket.properties
sed -i -e "s/^jdbc\.password=.*/jdbc.password=${MYSQL_PASSWORD}/g" home/bitbucket/shared/bitbucket.properties
 
# put mysql in correct TLS mode as per this answer https://stackoverflow.com/a/67234641/67945
sed -i -e "s/^jdbc.url=jdbc:mysql:.*/&\&enabledTLSProtocols=TLSv1.2/g" home/bitbucket/shared/bitbucket.properties
 
# remove specific config information from server.xml 
sed -i -e 's/^ *redirectPort.*$//g' home/bitbucket/shared/server.xml
sed -i -e 's/^ *secure.*$//g' home/bitbucket/shared/server.xml
sed -i -e 's/^ *scheme.*$//g' home/bitbucket/shared/server.xml
sed -i -e 's/^ *proxyName.*$//g' home/bitbucket/shared/server.xml
sed -i -e 's/^ *proxyPort.*$//g' home/bitbucket/shared/server.xml

The bash commands are pretty self-explanatory: curl downloads files from the web and sed does regex replacements. If you have doubts about those, drop me a message.
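To make the sed substitutions concrete, here is the same set of replacements run against a toy bitbucket.properties. All values are made up, and literal credentials stand in for the .env variables:

```shell
#!/bin/bash
# Build a toy properties file with server-specific settings (values are invented)
cat > demo.properties <<'EOF'
server.secure=true
jdbc.url=jdbc:mysql://192.168.1.10:3306/bitbucket?characterEncoding=utf8
jdbc.user=olduser
jdbc.password=oldpass
EOF

# Same substitutions as in prep_for_local_docker_restore.sh
sed -i -e 's/^server/#server/g' demo.properties
sed -i -E "s/(jdbc.url=jdbc:mysql:\/\/)(.*)(:3306)/\1bitbucket_db\3/g" demo.properties
sed -i -e "s/^jdbc\.user=.*/jdbc.user=newuser/g" demo.properties
sed -i -e "s/^jdbc\.password=.*/jdbc.password=newpass/g" demo.properties

# The server.* line is now commented out and jdbc.url points at
# the bitbucket_db container instead of the old host
cat demo.properties
```

Note that these use GNU sed; on macOS, `sed -i` requires an explicit backup suffix (e.g. `sed -i ''`).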

We then have a docker-compose.yml file that takes care of spinning up the MySQL container as well as the Bitbucket and Jira ones.

# docker-compose.yml
version: "3"
services: 
  bitbucket_db:
    container_name: bitbucket_db
    image: "mysql:5.7.29"
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - "./mysql:/docker-entrypoint-initdb.d"
  bitbucket_server:
    container_name: bitbucket_server
    image: "atlassian/bitbucket-server:7.0.0-jdk8"
    ports:
      - "7990:7990"
      - "7999:7999"
    volumes:
      - "./home/bitbucket:/var/atlassian/application-data/bitbucket"
    depends_on:
      - "bitbucket_db"
  jira_server:
    container_name: jira_server
    image: "atlassian/jira-software:8.6.1-jdk8"
    ports:
      - "8080:8080"
    environment:
      ATL_JDBC_URL: "jdbc:mysql://bitbucket_db:3306/${MYSQL_JIRA_DATABASE}?autoReconnect=true&useSSL=false&characterEncoding=utf8"
      ATL_JDBC_USER: ${MYSQL_USER}
      ATL_JDBC_PASSWORD: ${MYSQL_PASSWORD}
      ATL_DB_DRIVER: "com.mysql.jdbc.Driver"
      ATL_DB_TYPE: "mysql"
      ATL_DB_SCHEMA_NAME: ${MYSQL_JIRA_DATABASE}
    depends_on:
      - "bitbucket_db"

You must then create a file named .env which will store some local variables:

# Environment Variables
MYSQL_USER=yourUsername
MYSQL_PASSWORD=yourPassword
MYSQL_JIRA_DATABASE=jira_jira  # whatever your Jira DB is named

Given this, restoring the full Bitbucket and Jira servers to a computer running Docker is as simple as opening a shell and typing:

./prep_for_local_docker_restore.sh
docker-compose up

# Jira needs the MySQL JDBC connector, so copy it into the container
docker cp mysql-connector-java-5.1.47/mysql-connector-java-5.1.47.jar jira_server:/opt/atlassian/jira/lib

# Both Jira and Bitbucket need MySQL to be up when they start,
# so if for some reason they start first you'll have to restart them
docker restart jira_server
docker restart bitbucket_server
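The two restarts above are a manual fix for a startup race. A small retry helper (my own sketch, not part of the original script) can wait for MySQL instead; `mysqladmin ping` exits 0 whenever the server is up, even if authentication fails, which makes it a decent liveness probe:

```shell
#!/bin/bash
# retry N cmd...: run cmd up to N times, pausing between attempts,
# and return 0 on the first success
retry() {
    local attempts=$1; shift
    local i
    for i in $(seq 1 "$attempts"); do
        if "$@"; then
            return 0
        fi
        sleep 1
    done
    return 1
}

# Usage with the container names from docker-compose.yml above:
#   retry 30 docker exec bitbucket_db mysqladmin ping --silent \
#       && docker restart jira_server bitbucket_server
```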

And in a matter of minutes you’d be able to point your browser to localhost:7990 and localhost:8080 and keep the lights on while waiting for the ProLiant’s new disk.