Managed Website

As I mentioned previously, I recently wiped the host machine for my website. Go ahead, check out the Wayback Machine for it. From beginning to end it has been rather static while consuming an entire remote server all to itself. During this “reset” I wanted to utilize my server more fully. I wanted to do this with Docker, isolating any state (the database, which holds the article you are reading right now) in a second storage container so that the main container is easy to manage in the event things go sideways. This article explains how I did this using Docker and GitLab’s Continuous Integration framework.

Plan the Work

From the start I knew I wanted a WordPress website. WordPress uses MariaDB (a community-developed fork of MySQL) for storage. So, reasonably, I’d want a container for WordPress and a container for MariaDB to separate container responsibilities. Handily enough, images for both already exist. So, we are going to keep the whole orchestration simple by using a single GitLab CI script. To keep things even simpler, there is only one step: deploy. This step handles everything required to deploy the website as a complete solution. This entails:

  1. Create the Storage Container
  2. Create the WordPress Container
  3. Create a Backup Container with scheduled backup job
  4. Create a Dropbox Container with scheduled upload job

Trust me, it sounds way more complicated than it is. We are simply creating four containers and three volumes, then doing some periodic shuffling of data.
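Before diving into the individual commands, here is a rough sketch of what the single-step .gitlab-ci.yml might look like. The job and tag names here are placeholders; values such as STORAGE_CONTAINER and DB_PASSWORD live in the project’s CI/CD variables rather than in the file itself.

  # .gitlab-ci.yml - minimal single-stage sketch; job and tag names are placeholders
  stages:
    - deploy

  deploy:
    stage: deploy
    tags:
      - shell          # route the job to the shell-based runner on the Docker host
    only:
      - master         # assumption: deploy on pushes to the default branch
    script:
      # the docker commands from the sections below go here, in order:
      # storage, WordPress, backup, and Dropbox containers
      - docker pull mariadb:latest
      # ... remaining commands from each section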

Work the Plan

Now that we know all that we need to do, let’s automate it! Our GitLab Runner is a shell-based Runner, meaning the commands run directly on the host machine rather than inside a Docker-in-Docker container. This lets us create and manipulate containers on the Docker host. Pretty handy for automation! Let’s focus on the script block…
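For reference, the relevant part of the runner’s config.toml looks roughly like this; the name and token below are placeholders, and the important bit is executor = "shell":

  # /etc/gitlab-runner/config.toml (excerpt) - name and token are placeholders
  [[runners]]
    name = "docker-host-shell-runner"
    url = "https://gitlab.com/"
    token = "REDACTED"
    executor = "shell"   # jobs run directly on the host, so docker commands hit the host daemon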

Create the Storage Container

The first step in launching any container is to create it. We create the Storage Container in 5 lines:

  - docker pull mariadb:latest
  - docker stop -t 0 ${STORAGE_CONTAINER} || true
  - docker rm ${STORAGE_CONTAINER} || true
  - docker volume create ${STORAGE_VOLUME}
  - docker run -d --name ${STORAGE_CONTAINER} -v ${STORAGE_VOLUME}:/var/lib/mysql -e "MYSQL_ROOT_PASSWORD=${DB_ROOT_PASSWORD}" -e "MYSQL_DATABASE=${DB_NAME}" -e "MYSQL_USER=${DB_USER}" -e "MYSQL_PASSWORD=${DB_PASSWORD}" --restart always mariadb:latest

On lines 1 – 3 we update the MariaDB image, stop any existing storage container, then remove the stopped container. This sets up the Docker host for creating a brand spanking new container hot off the updated MariaDB image in the Docker repository. In the next step (line 4) we create a new named volume; if this volume already exists, nothing happens. Finally, on line 5, we start a new container with the storage volume mounted to it.
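If you want to sanity-check the result by hand (outside of the CI script), a few standard Docker commands are enough:

  - docker ps --filter "name=${STORAGE_CONTAINER}"   # the container should show as Up
  - docker volume inspect ${STORAGE_VOLUME}          # the named volume should exist
  - docker logs ${STORAGE_CONTAINER}                 # MariaDB should log "ready for connections"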

Create the WordPress Container

In a very similar process, we create the WordPress container. This container will simply host the WordPress installation and be configured to talk to the MariaDB database.

  - docker pull wordpress:latest
  - docker stop -t 0 ${WEBSITE_CONTAINER} || true
  - docker rm ${WEBSITE_CONTAINER} || true
  - docker volume create ${WEBSITE_VOLUME}
  - docker run -d -P --link ${STORAGE_CONTAINER}:mysql --name ${WEBSITE_CONTAINER} -v ${WEBSITE_VOLUME}:/var/www/html/wp-content -e WORDPRESS_DB_USER=${DB_USER} -e "WORDPRESS_DB_PASSWORD=${DB_PASSWORD}" -e WORDPRESS_DB_HOST=mysql -e WORDPRESS_DB_NAME=${DB_NAME} -e LETSENCRYPT_EMAIL=${EMAIL} -e "LETSENCRYPT_HOST=${HOSTS}" -e "VIRTUAL_HOST=${HOSTS}" --restart always wordpress:latest

Once again, lines 1 – 3 update the WordPress image from the Docker repository, stop any existing container, then remove the stopped container. On line 4 we create a new named volume before finally launching a new WordPress container. Line 5 is a little different, though. Here, we actually link the Storage Container to the WordPress container. This creates a private network between the two for their traffic, keeping the Storage Container inaccessible from the Internet at large while still allowing the WordPress container to communicate with the database it hosts. The volume we mount to this container stores plugins and uploads and has nothing to do with the database. Oh, and the container starts with configuration to automatically integrate with our NGINX proxy service that handles automatic SSL setup (maybe a post later on that one)!
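For context, the VIRTUAL_HOST and LETSENCRYPT_* variables follow the convention used by the nginx-proxy image and its Let’s Encrypt companion. A rough sketch of that pair (not part of this pipeline, and your flags and image tags may differ) looks something like this:

  - docker run -d --name nginx-proxy -p 80:80 -p 443:443 -v /var/run/docker.sock:/tmp/docker.sock:ro -v certs:/etc/nginx/certs --restart always jwilder/nginx-proxy
  - docker run -d --name nginx-proxy-letsencrypt --volumes-from nginx-proxy -v /var/run/docker.sock:/var/run/docker.sock:ro --restart always jrcs/letsencrypt-nginx-proxy-companion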

Create a Backup Container with scheduled backup job

What’s a website if it’s not backed up? Risk. So, let’s reduce that! Since we have modular storage for each of the containers we are running, we can mount those volumes to other containers as well. This, admittedly, is a bit risky, and I don’t recommend it for very busy sites for the simple reason that the data being backed up could be written to while it’s being read for the backup. That can corrupt a backup or fail a web request.

  - docker pull aveltens/wordpress-backup:latest
  - docker stop -t 0 ${BACKUP_CONTAINER} || true
  - docker rm ${BACKUP_CONTAINER} || true
  - docker run --name ${BACKUP_CONTAINER} -v ${BACKUP_VOLUME}:/backups --volumes-from=${WEBSITE_CONTAINER} --link=${STORAGE_CONTAINER}:mysql -e "BACKUP_TIME=0 5 * * *" -d aveltens/wordpress-backup:latest

Lines 1 – 3 update the container image, stop the running container, then clean up. Line 4 starts a new Backup container with the Backup Volume mounted. It also mounts every volume attached to the WordPress container and links the Backup container to the MariaDB container. This lets the Backup container copy files from the WordPress container’s volumes and gives it access to the MariaDB database. The image we use automatically compresses files from the mounted volumes and performs a database dump to a text file before compressing that as well. This all runs on a configurable schedule defined by the BACKUP_TIME environment variable. At each scheduled run (in this case daily at 5:00 AM) the backup files are created and copied to the backups directory, to which we conveniently mounted the Backup Volume.
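If you ever want to confirm that backups are actually landing in the Backup Volume, a throwaway container can list its contents (a manual check, not part of the pipeline):

  - docker run --rm -v ${BACKUP_VOLUME}:/backups alpine ls -lh /backups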

Create a Dropbox Container with scheduled upload job

Now that we have a Backup Volume containing all our important data, we need to store that somewhere less volatile. I picked Dropbox since it has free storage and I don’t intend to keep every backup I make. Thankfully, there’s a Docker image for that!

  - docker pull janeczku/dropbox:latest
  - docker stop -t 0 ${DROPBOX_CONTAINER} || true
  - docker rm ${DROPBOX_CONTAINER} || true
  - docker run -d --restart=always --name=${DROPBOX_CONTAINER} -v ${BACKUP_VOLUME}:/dbox/Dropbox janeczku/dropbox

Once again, lines 1 – 3 are all about updating, stopping, and cleaning up. Line 4 starts the container using the Dropbox image and mounts the Backup Volume to the Dropbox directory. The Dropbox directory is linked with my Dropbox account, and anything placed in it is automatically uploaded to the cloud. Voilà! We have automated backups! Since backups older than 90 days are automatically removed, I don’t have to worry for a while about hitting the storage limit of Dropbox’s free tier.
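Linking the container to a Dropbox account is a one-time manual step: the image prints an account-linking URL when it first starts, so pull it from the container logs and, assuming the image bundles the dropbox CLI wrapper, check sync status the same way:

  - docker logs ${DROPBOX_CONTAINER}                 # prints the one-time account-linking URL on first start
  - docker exec ${DROPBOX_CONTAINER} dropbox status  # assumption: the image includes the dropbox CLI helper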

Pulling It All Together

We have several stages of the build being orchestrated: the website, the database, the backup, and the upload. These are all handled in about 20 lines of code. While this isn’t the smoothest of processes, I haven’t had any problems, and I have been able to replicate it for the other websites I host. This gives me a managed method of automating deployments. It’s pretty cheap to put together, and I’m sure it has room for improvement. If you have any ideas on how to improve this solution, share them with the world! Leave a comment below and I will most likely see if I can incorporate your idea into the next iteration.
