Hacktoberfest: Minecraft Server

Hacktoberfest is upon us! This month I'm hacking on small projects each week and sharing them.

Previously…

A few weeks ago we found a problem with our GitLab Runner and fixed it. This week, we attempted to make a persistent Minecraft Server using a Dockerfile and the new GitLab Runner to deploy it. We hope to get backups running on the Minecraft Server.

The Minecraft Server we managed to get working was running great… until we realized it didn't have any backups. We tried looking for something akin to the WordPress Backup container solution. This didn't quite pan out, as it required a bit of container-to-container communication. I'd like to scale Minecraft hosting out, so while this is a solution, it isn't a very clean one. Plus, I don't really want to rely on a third party to update the Dockerfile. So, here we are.

Redefined Requirements

Knowing what we want is half the battle. Figuring out how to do it is the actual hard part. So, we kicked back, grabbed some cookies, and started to think. What do we really want in a perfect Minecraft Server?

  1. We want maximum uptime. If there’s an update, rebooting should pick it up. Done!
  2. We want security. If we need to ban someone or whitelist someone this should persist across reboots. TODO
  3. We want safety. Rebooting should reload the existing world. If something corrupts it, we should be able to recover from a previous backup. TODO

Safety First

For this week, we focused on safety. We want to save our hard work building amazing things so we don't lose it unexpectedly. To do this, we will need to safely stop the auto save, manually save the world state, back up all of the world files, then turn the auto save back on. This is ideally some sort of scheduled task that kicks off every day (or hour). To have the server interact with Minecraft, we will need some sort of RCON utility. So, we leveraged our new-fangled GitLab Runner to help us out.

Getting an RCON utility into a Docker image seemed rather straight-forward. Go get it, make it available to the build context, then copy it to the image, giving it executable permissions. Seems easy enough; we can even use GitLab artifacting since it's in the same pipeline!

rcon-setup:
  stage: stage
  image: golang:latest
  script:
    - "go get github.com/SeerUK/minecraft-rcon/..."
    - "go install github.com/SeerUK/minecraft-rcon/..."
    - "mkdir bin"
    - "cp $GOPATH/bin/minecraft-rcon ./bin"
  artifacts:
    paths:
      - bin/

Here we have a job in a stage conveniently called stage (I know, so creative!). It runs on the latest Go container and simply pulls the source code to the local Go source path, then compiles and installs the binary to the Go binary path. We copy it to the bin directory and artifact it! Now the artifact is in GitLab and is available to downstream jobs. Let's build the Docker image!

build:
  stage: build
  image: docker:latest
  # dependencies reference job names (not stage names)
  dependencies:
    - rcon-setup
  services:
    - docker:dind
  before_script:
    - "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY"
  script:
    - "docker build -t ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME} --pull ."
    - "docker push ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME}"
  after_script:
    - "docker logout ${CI_REGISTRY}"
  tags:
    - docker

Simple stuff here. Let's take a look at the Dockerfile itself.

FROM alpine:latest
ARG MC_VERSION=1.13.1
ARG MC_JAR_SHA1=fe123682e9cb30031eae351764f653500b7396c9
ARG JAR_URL=https://launcher.mojang.com/mc/game/${MC_VERSION}/server/${MC_JAR_SHA1}/server.jar
ARG MIN_MEMORY='256M'
ARG MAX_MEMORY='1024M'
ARG MC_CLIENT="c2technology"
ENV CLIENT=${MC_CLIENT}
ENV _JAVA_OPTIONS="-Xms${MIN_MEMORY} -Xmx${MAX_MEMORY}"
# /opt/backups is where the backup script writes its archives
RUN mkdir -pv /opt/minecraft /etc/minecraft /opt/backups
RUN adduser -DHs /sbin/nologin minecraft
COPY bin/minecraft-rcon /usr/bin/minecraft-rcon
COPY backup /usr/bin
COPY entrypoint.sh /etc/minecraft
RUN apk add --update ca-certificates openjdk8-jre-base tzdata wget \
    && wget -O /opt/minecraft/minecraft_server.jar ${JAR_URL} \
    && apk del --purge wget \
    && rm -rf /var/cache/apk/* \
    && chown -R minecraft:minecraft /etc/minecraft /opt/minecraft /opt/backups \
    && chmod +x /etc/minecraft/entrypoint.sh /usr/bin/backup
EXPOSE 25565
USER minecraft
WORKDIR /etc/minecraft
ENTRYPOINT ["./entrypoint.sh"]

Starting with a minimal Alpine Linux container, we set some arguments for the Dockerfile. These can be overridden by arguments passed to the docker build command, but they must be declared in the Dockerfile for the override to work. We have some reasonably safe defaults here. We set some environment variables in the resulting container, make the directories, add a user, then copy over the RCON Go binary (from the artifacts pulled into the Docker build context by GitLab's artifact system) along with the backup script we wrote. Then we install some dependencies, expose the Minecraft server port, switch to the minecraft user, set the working directory, and run the entrypoint.sh script.
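For example, overriding those defaults at build time might look like this (the values below are placeholders, not from a real build):

# Hypothetical build with overridden arguments; values are placeholders
docker build \
  --build-arg MC_VERSION=<version> \
  --build-arg MC_JAR_SHA1=<sha1-of-that-server.jar> \
  --build-arg MAX_MEMORY=2048M \
  -t minecraft-docker:local .

Let's take a look at that entrypoint.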

#!/bin/sh
echo 'eula=true' > /etc/minecraft/eula.txt
# Register the backup job, then start the cron daemon so the job actually fires
crontab -l | { cat; echo "0 */6 * * * backup"; } | crontab -
crond
java -jar /opt/minecraft/minecraft_server.jar nogui

Not too complicated. This auto-accepts the EULA (Minecraft requires this to run), sets up a cron job that runs every 6 hours to execute a backup command, starts the cron daemon, and finally runs the Minecraft server. This is what we wanted to be able to do in the first place: back things up on a schedule. We could make the backup interval configurable, which we will most likely do after we get this thing working (this is Hacktoberfest, after all).
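If we do make the interval configurable, a minimal sketch could read the schedule from an environment variable (BACKUP_CRON here is an assumed variable, not something the image currently defines):

# Sketch: BACKUP_CRON is a hypothetical env var; fall back to every 6 hours
BACKUP_CRON="${BACKUP_CRON:-0 */6 * * *}"
crontab -l | { cat; echo "${BACKUP_CRON} backup"; } | crontab -

So… let's take a look at that backup script.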

#!/bin/sh
minecraft-rcon save-off
minecraft-rcon save-all
tar czf /opt/backups/$(date +%Y-%m-%d)-mc-${CLIENT}.tar.gz /opt/minecraft/
minecraft-rcon save-on

Easy peasy! Using that new minecraft-rcon binary, we turn off automatic saving of the Minecraft world so we can access it without it changing on us (and corrupting the backup). We make one final save, tar it all up, then turn automatic saving back on. This seems to be the right way to avoid corrupting the world or saving a corrupted version. We'll see if this actually works when we get it running. If not, this is the file we can update to get it working correctly, even if it means stopping the Minecraft service and restarting it.
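Restoring should then be roughly the reverse; here's a rough sketch (the archive name is illustrative, and the paths assume the layout above):

# Hypothetical restore: stop the server, pull an archive out, unpack it,
# push the world files back, then start the server again
docker stop minecraft
docker cp minecraft:/opt/backups/2018-10-01-mc-c2technology.tar.gz .
tar xzf 2018-10-01-mc-c2technology.tar.gz
docker cp opt/minecraft/. minecraft:/opt/minecraft/
docker start minecraft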

Now that we have the Docker image published to our registry, we can update the existing Minecraft Server YAML to use it!

deploy:
  script:
    - docker pull minecraft-docker:latest
    # Back up the existing world first, if the container is already running
    - docker exec minecraft backup || true
    - docker stop minecraft || true
    - docker rm minecraft || true
    - docker run -d --name minecraft -p 25565:25565 \
        -v minecraft-world:/opt/minecraft/data/world \
        -v minecraft-config:/opt/minecraft/config \
        -v minecraft-mods:/opt/minecraft/mods \
        -v minecraft-plugins:/opt/minecraft/plugins \
        --restart always minecraft-docker:latest
    # docker cp does not expand wildcards, so copy directory contents with /.
    - docker cp ./config/. minecraft:/opt/minecraft/config/
    - docker cp ./data/. minecraft:/opt/minecraft/data/
    - docker cp ./mods/. minecraft:/opt/minecraft/mods/
    - docker cp ./plugins/. minecraft:/opt/minecraft/plugins/
    - docker exec minecraft backup
    - docker restart minecraft

We kick things off by pulling the latest minecraft-docker image. This pulls the private registry image we just published into the local Docker-in-Docker container that's running this build. Then we back up the existing world, if it exists, before stopping the current Minecraft server. After that, we remove it and create a new container with various mounts. We then copy over the configurations and anything else we have version controlled before backing everything up once again and restarting the server. We back up so many times right now because we're not sure whether this will corrupt the world data. Once we know what happens, we will come back and clean this up a bit.
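A quick sanity check after a deploy might look like this (the names come from the run command above):

# Confirm the named volumes exist and the world files survived the redeploy
docker volume ls | grep minecraft
docker exec minecraft ls /opt/minecraft/data/world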

Conclusion

Ultimately, we didn't hit our goal of getting this working in a week. However, we will continue to work on it so our world can be saved (if only it were that easy)! If you have any tips or thoughts on this, please comment below! I'd love to hear about your solutions, or for you to share your experience if you've done something similar.

Hacktoberfest: GitLab Artifacts

Hacktoberfest is upon us! This month I'm hacking on small projects each week and sharing them.

Background

GitLab is a great alternative to GitHub. One of the main features for me is unlimited private repositories for free. This lets me work on things without exposing them publicly until I'm ready. In addition to private repositories, it also has a private Docker registry you can store your Docker images in. GitLab also has other built-in CI/CD capabilities, like secrets that are passed to the CI/CD orchestration file. GitHub has CI/CD capabilities too, but GitLab seems less involved to set up.

All of my CI/CD jobs are orchestrated in a gitlab-ci.yml file that sits in the repository. Couple this with a self-hosted GitLab Runner with Docker installed and I have a true CI/CD solution where tagging master triggers an automatic build of the tagged code, publishes the Docker image built during the job, and deploys that Docker image to a container (replacing the existing one if present). While this does require some thought about how to persist data across deployments, it makes automatic deployments very easy. In the event something does go wrong (and it will), you can re-run any previous build, making rollbacks a simple one-click affair. Repeatability for the win!
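As a rough sketch (job names and scripts here are illustrative, not from a real project), that pipeline has this shape:

stages:
  - build
  - release
  - deploy

# Illustrative skeleton: each job runs only for tags, so tagging master
# kicks off the whole build -> publish -> deploy chain
build-binary:
  stage: build
  only:
    - tags
  script:
    - ./build.sh
publish-image:
  stage: release
  only:
    - tags
  script:
    - ./publish.sh
deploy-image:
  stage: deploy
  only:
    - tags
  script:
    - ./deploy.sh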

Artifacts

So, this week I was attempting to learn the Artifact system of the GitLab API. Instead of mounting a directory to pass files between jobs, GitLab has an Artifacts API that allows GitLab to store (permanently or temporarily) any number of artifacts defined in a successful pipeline execution. These artifacts are available via the web and the Artifacts API. I have several Go projects that could benefit from cross-compiling the binaries. Why not store these compilations so they are easy to grab whenever I need them for a specific environment? As an added benefit, after compiling once, I could deploy to various environments using these artifacts in downstream jobs. So, I jumped at this chance and found it less than ideal.

The Job

There is a Job API document defining how to create artifacts within a pipeline. It looks as simple as defining artifacts in your gitlab-ci.yml file:

artifacts:
  paths:
    - dist/

Creating artifacts is the easiest bit I came across. This automatically uploads the contents of the dist folder to the orchestration service, which makes it available on the GitLab site for that specific job. Easy peasy!
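A couple of optional keys are worth knowing about while you're here; for instance (values are illustrative):

artifacts:
  # Name the archive after the job and ref for easier browsing
  name: "$CI_JOB_NAME-$CI_COMMIT_REF_NAME"
  paths:
    - dist/
  # Let GitLab clean up old artifacts automatically
  expire_in: 1 week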

Getting those artifacts to a downstream job is pretty easy as well if you keep in mind the concept of a build context and have navigated GitLab’s various API documents. Thankfully, I’ve done that boring part and will explain (with examples) how to get artifacts working in your GitLab project!

The Breakdown

The API documentation that shows how to upload artifacts also shows how to download artifacts. How convenient! Unfortunately, this is not within the framework of the gitlab-ci.yml file. The only API documentation you should need for multi-job, single-pipeline artifact sharing is the YAML API. For other uses (cross-pipeline or scripted artifact downloads) you can see the Jobs API (warning: cross-pipeline artifacts are a premium-only feature at the time of writing).

Looking at the Dependencies section, the dependencies definition should be used in conjunction with artifacts. Defining a dependent job causes ordered execution, and any artifacts on the dependency are downloaded and extracted into the current build context. Here's an example (job-name is a placeholder):

dependencies:
  - job-name

So, what’s a build context? It’s the thing that’s sent to Docker when a build is triggered. If the artifacts aren’t part of the build context, they won’t be available for a Dockerfile to access (COPY, etc).

Here’s the gitlab-ci.yml example:

stages:
  - build
  - release

build-binary:
  stage: build
  image: golang:latest
  variables:
    PROJECT_DIR: "/go/src/gitlab.com/c2technology"
  before_script:
    - mkdir -p ${PROJECT_DIR}
    - cp -r $CI_PROJECT_DIR ${PROJECT_DIR}/${CI_PROJECT_NAME}
    - go get github.com/tools/godep
    - go install github.com/tools/godep
    - cd ${PROJECT_DIR}/${CI_PROJECT_NAME}
    - godep restore ./...
  script:
    - ./crosscompile.sh
  after_script:
    - cp -r ${PROJECT_DIR}/${CI_PROJECT_NAME}/dist $CI_PROJECT_DIR
  tags:
    - docker
  artifacts:
    paths:
      - dist/
    expire_in: 2 hours

publish-image:
  stage: release
  image: docker:latest
  services:
    - docker:dind
  only:
    - "master"
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
  before_script:
    - "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY"
  script:
    - docker build -t ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME} --pull $CI_PROJECT_DIR
    - docker push ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME}
  after_script:
    - "docker logout ${CI_REGISTRY}"
  tags:
    - docker
  dependencies:
    - build-binary

And the crosscompile.sh:

#!/bin/sh
echo "Cross compiling alexa-bot..."
for GOOS in darwin linux windows; do
  for GOARCH in 386 amd64; do
    echo "Building $GOOS-$GOARCH"
    export GOOS=$GOOS
    export GOARCH=$GOARCH
    go build -o dist/alexa-bot-$GOOS-$GOARCH
  done
done
echo "Complete!"

In this example, we cross-compile a Go binary and artifact it in the build-binary job before executing the publish-image job (which is dependent on build-binary). This downloads and extracts the artifacts from build-binary and sends the project directory (including the downloaded artifacts) as the build context to Docker when building the Dockerfile. Let's look at the Dockerfile:

FROM alpine:latest
RUN apk add --no-cache --update ca-certificates
COPY dist/alexa-bot-linux-amd64 /bin/alexa-bot
RUN chmod +x /bin/alexa-bot
EXPOSE 443
# Run the binary with all the options; the shell form of ENTRYPOINT is used
# so the environment variables expand when the container starts
ENTRYPOINT alexa-bot -c "$CLIENT_ID" -s "$CLIENT_SECRET" -x "$SERVER_SECRET" -t "$PRODUCT_TYPE_ID" -n 123 -v "$VERSION" -r "$REDIRECT_URL" -z "$TOKEN_URL"

You can see here that the Dockerfile starts with Alpine Linux as the base image, updates the CA certificates, then copies the dist/alexa-bot-linux-amd64 binary from the Docker build context into the image and gives it executable permissions. The rest of the file sets up the port the binary listens on and passes the configuration to the binary at startup.

Once this Docker image is pushed to GitLab's private registry, it is available to run (provided some environment configuration)!
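For instance, running it might look something like this (every value below is a placeholder, and the image path assumes GitLab's registry naming):

# Hypothetical run command; all -e values are placeholders
docker run -d --name alexa-bot -p 443:443 \
  -e CLIENT_ID=... -e CLIENT_SECRET=... -e SERVER_SECRET=... \
  -e PRODUCT_TYPE_ID=... -e VERSION=... -e REDIRECT_URL=... -e TOKEN_URL=... \
  registry.gitlab.com/c2technology/alexa-bot:master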

Conclusion

All in all, this seems to work out great for single-pipeline builds. Multi-pipeline builds get even trickier. I found that the artifacts system didn't quite meet my requirements, so I opted to condense builds and not artifact the compiled binaries. I could, however, trigger a pipeline that cross-compiles and artifacts the binaries, then run a second pipeline that also cross-compiles them (duplicated work) before creating an image. Ultimately, I didn't really care about the artifacts; my purpose was always to create the Docker images. As usual, your mileage may vary!

Hacktoberfest: Use Alexa without talking!

Hacktoberfest is upon us! This month I'm hacking on small projects each week and sharing them.

Backstory

When we sleep at night, we like to have some background white noise to help with sleeping. Since we have an Echo in our kitchen and an Echo Dot in our bedroom, we ask it to "play rain sounds", which starts a lovely little piece of audio that puts us right to sleep. Unfortunately, it does not loop, and after an hour it stops. The sudden silence sometimes wakes me up, and I can't restart it without talking to Alexa and potentially waking up my wife, and nobody wants that! So, I started researching how to talk to Alexa without talking to her and I came across this article on the DolphinAttack. This got me thinking about a device which could leverage this vulnerability, but I quickly gave up as it involved more hardware than I wanted and there was a lot of setup. So, I kept researching and came across this forum which talked about a workaround! This seemed more promising and closer to what I wanted.

The Goal

My goal for this week's hackathon was to create a Slack Bot for talking to Alexa. The idea is that the Slack App receives a text command to send to Alexa, converts the text to a speech audio file, then sends it to the device I want to receive it. This would let me send a text message to Alexa in the middle of the night without ruining my wife's sleep!

The Backend

Thankfully, there is already an open source GitLab project for handling the audio file push, and an article that shows how to use it! I started by manually proving the concept before moving forward. Once the proof-of-concept seemed like it would work out, I started on a Dockerfile to set this baby up!

FROM alpine:latest
ARG CLIENT_ID
ARG CLIENT_SECRET
ARG PRODUCT_TYPE_ID
ENV CLIENT_ID $CLIENT_ID
ENV CLIENT_SECRET $CLIENT_SECRET
ENV PRODUCT_TYPE_ID $PRODUCT_TYPE_ID
RUN mkdir -pv /opt/alexa
WORKDIR /opt/alexa
COPY *.sh ./
RUN chmod +x *.sh
RUN apk add --update ca-certificates \
    espeak \
    curl
RUN (crontab -l ; echo "*/20 * * * * /opt/alexa/refresh_token.sh") | crontab -
# Keep the container alive and let the scheduled token refresh fire
CMD ["crond", "-f"]
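
The refresh_token.sh script isn't shown in this post. As a sketch, it presumably exchanges a stored refresh token for a fresh access token against Amazon's Login with Amazon endpoint (REFRESH_TOKEN and the output path below are my assumptions):

#!/bin/sh
# Hypothetical sketch of refresh_token.sh: trade the long-lived refresh token
# for a fresh access token and stash it for the push script to use
curl -s https://api.amazon.com/auth/o2/token \
  -d "grant_type=refresh_token" \
  -d "refresh_token=${REFRESH_TOKEN}" \
  -d "client_id=${CLIENT_ID}" \
  -d "client_secret=${CLIENT_SECRET}" \
  -o /opt/alexa/token.json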

And a GitLab YAML file to auto-deploy it:

stages:
  - build
  - deploy
build-image:
  stage: build
  image: docker:latest
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
  services:
    - docker:dind
  only:
    - "master"
  before_script:
    - "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY"
  script:
    - "docker build -t ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME} -t ${CI_REGISTRY_IMAGE}:latest --pull ."
    - "docker push ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME}"
    - "docker push ${CI_REGISTRY_IMAGE}:latest"
  after_script:
    - "docker logout ${CI_REGISTRY}"
  tags:
    - docker
deploy-image:
  stage: deploy
  image: docker:latest
  only:
    - "master"
  variables:
    C_NAME: "alexa-bot"
  before_script:
    - "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY"
  script:
    - "docker pull ${CI_REGISTRY_IMAGE}:latest"
    - "docker container stop -t 0 ${C_NAME} || true"
    - "docker container rm ${C_NAME} || true"
    - "docker run -d -P --name ${C_NAME} -e CLIENT_ID=${CLIENT_ID} -e CLIENT_SECRET=${CLIENT_SECRET} -e PRODUCT_TYPE_ID=${PRODUCT_TYPE_ID} --restart always ${CI_REGISTRY_IMAGE}:latest"
  after_script:
    - "docker logout ${CI_REGISTRY}"
  tags:
    - deploy

TODO

Now that the image is published and the container running, we need to set up some sort of front-end. My major concern at this point is security, so I still need to figure that part out. We don't want random people accessing this API and submitting audio we can't hear to control our device. It'll probably be some client-server shared secret. I'll go through some work on that part and make another post when it's finished. Hackathons are meant to be down and dirty for an MVP. I at least have a container I can SSH into to issue commands now, so that'll work for a little while to accomplish my goal. Ease of use via Slack would be the logical next step. Until next time!

Hacktoberfest: GitLab Runner

Hacktoberfest is upon us! This month I'm hacking on small projects each week and sharing them.

We Continue…

Last week, my son and I ventured out to start a Minecraft server as part of Hacktoberfest. This week, we planned on automating that server a bit more. A few days after playing on the server, we restarted it to test things out. The restart itself worked and the server came back up, but the world data was not saved and we had no backups. Thinking out loud, we brainstormed both why this happened and how we could prevent it in the future. We certainly don't want to build an awesome structure and then lose it when the server restarts! Backups were definitely in the back of my mind, as were persisted, version-controlled configurations for the server (currently, after restarting, all the settings reset to the defaults). So we set about trying to find a backup solution.

Sounds Simple

We definitely wanted to automate backups. After reading a lot about how Minecraft saves files, we knew the backup was rather simple:

save-off
save-all
tar czf /opt/backups/$(date +%Y-%m-%d)-mc-${CLIENT}.tar.gz /opt/minecraft/
save-on

Sounds simple enough, right? Wrong! These commands are all sent to the Minecraft server from an RCON client, and a quick search finds a lot of options. So far, so good. The Docker image we were using for Minecraft even included an RCON client… but it didn't really do backups. At least, not the way I wanted. So, we decided to create our own Dockerfile for our Minecraft server!

DIY

After much searching, we found a very simple and very compact Minecraft RCON client written in Go! Why not? There are no dependencies on a VM or additional libraries, so it should be dead simple to include in our Minecraft Server Dockerfile. Of course, as with many things that seem simple, it was not as simple as we originally thought. In our build automation, we included a stage called "setup" running on the latest Golang image. Then we simply run a script to go get and go install the Go application. We used the artifacts instruction to share the compiled binary with the next stage, which copies it into our Minecraft Server Dockerfile. Now we have an RCON client!

We wanted backups to happen automatically, so we added a backup script. This script runs via cron to periodically back up the Minecraft Server files and copy them to a backups directory. The idea here is to mount a volume at this directory so it can be used by another container (the one that will ultimately move the archives to Dropbox). Something like the sketch below.
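Roughly this (the volume name is illustrative):

# Illustrative: expose the backups directory as a named volume so a future
# sync container can read the archives
docker run -d --name minecraft -v minecraft-backups:/opt/backups minecraft-docker:latest

Once we got that set up, we ran the pipeline and…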

It failed… The custom GitLab Runner image I had hobbled together a few months ago wasn't working as expected. It wouldn't pull the Golang image, so we couldn't get the Go binary. I had been experiencing many problems with this image and was sure I had probably made it wrong. At the very least, I didn't make it in a repeatable fashion, and I definitely didn't have it running in a container… So, we decided to shift our focus to making a new GitLab Runner image the way I had intended: in a repeatable fashion, preferably containerized. We spun up a new repository to build this container…

With automated deploys in mind, this should be relatively easy. We created a new .gitlab-ci.yml file and had the job run in a docker:latest container (i.e., Docker-in-Docker). All this job is going to do is pull the latest GitLab Runner image, configure it, then run it. Let's see how this works:

# Top-level script blocks aren't valid gitlab-ci; the commands need a job
deploy-runner:
  image: docker:latest
  variables:
    DOCKER_DRIVER: "overlay2"
  script:
    - "docker pull gitlab/gitlab-runner:latest"
    - "docker run -d --name gitlab-runner \
      --restart always \
      -v ${GITLAB_VOLUME}:/etc/gitlab-runner \
      -v /var/run/docker.sock:/var/run/docker.sock \
      gitlab/gitlab-runner:latest"
  tags:
    - docker

We push and… wait a second… before we push, let's walk through this. We want GitLab to automatically deploy a GitLab Runner… This isn't repeatable. If I were to tear down the current GitLab Runner, this would never run! This can't be the solution… let's take a step back.

We know we want this automated, but the first time we run it there won't be any automation at all! We can't use the YAML file; we must use a shell script (or some other script). Let's take a look at what we need to do:

#!/bin/bash
# If docker is not installed, go get it
command -v docker >/dev/null 2>&1 || {
  curl -sSL https://get.docker.com/ | sh
}
# Set defaults
VERSION="latest"
function usage() {
  echo "This script automatically installs Docker and runs a GitLab Runner container."
  echo " ./run.sh -t [options]"
  echo " -t, --token : (Required) The GitLab Token of the container"
  echo " -n, --name : (Required) The name of the container"
  echo " --alpine : (Optional) Use the Alpine version of the gitlab-runner container"
  exit 1
}
#Parse command line options
while [[ $# -gt 0 ]]; do
  case $1 in
    -n | --name)
      NAME="$2"
      shift # past argument
      shift # past value
    ;;
    --alpine)
      VERSION="alpine"
      shift # past argument
    ;;
    -t | --token)
      TOKEN="$2"
      shift # past argument
      shift # past value
    ;;
    *)
      usage
    ;;
  esac
done
if [ -z ${TOKEN} ]; then
      echo "Token is required!"
      usage
fi
if [ -z ${NAME} ]; then
      echo "Name is required!"
      usage
fi
# Replace any existing runner container
docker stop -t 0 ${NAME} || true
docker rm ${NAME} || true
docker run -d --name ${NAME} \
  --restart always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:${VERSION}
#Register with GitLab
docker exec ${NAME} gitlab-runner register \
  --non-interactive \
  --executor "docker" \
  --docker-image alpine:latest \
  --url "https://gitlab.com/" \
  --registration-token "${TOKEN}" \
  --name "${NAME}" \
  --docker-privileged \
  --tag-list "docker" \
  --run-untagged \
  --locked="false"

There. Now we have a script that takes a container name and your GitLab Runner token, then spawns a new privileged Docker container. I decided not to mount a config volume, as it would continuously add new runner configs when the service restarted. There is a bit of cleanup to do in GitLab under the CI/CD runners section when a runner is restarted, but that seems not to be a big deal at the moment. Now that we have a new, repeatable GitLab Runner up, we can try to get that Minecraft Dockerfile working next time!
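Usage, for reference (the token value is a placeholder):

# Installs Docker if needed, then starts and registers a privileged runner
./run.sh --name gitlab-runner --token <your-registration-token>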