Hacktoberfest: GitLab Artifacts

Hacktoberfest is upon us! This month I’m hacking on small projects each week and sharing them.

Background

GitLab is a great alternative to GitHub. One of its main draws for me is unlimited private repositories for free. This lets me work on things without exposing them publicly until I’m ready. In addition to private repositories, GitLab has a private Docker Registry you can store your Docker images in, plus other built-in CI/CD capabilities like secrets that are passed to the CI/CD orchestration file. GitHub has CI/CD capabilities too, but setting it all up feels less involved on GitLab.

All of my CI/CD jobs are orchestrated in a gitlab-ci.yml file that sits in the repository. Couple this with a self-hosted GitLab Runner with Docker installed and I have a true CI/CD solution: tagging master triggers a build of the tagged code, publishes the Docker image built during that job, and deploys that image to a container (replacing the existing one if present). While this requires some thought about persisting data across deployments, it makes automatic deployments very easy. In the event something does go wrong (and it will), you can easily re-run any previous build, making rollbacks a simple one-click affair. Repeatability for the win!

Artifacts

So, this week I was attempting to learn the Artifact system of the GitLab API. Instead of mounting a directory to pass files between jobs, GitLab has an Artifacts API that allows GitLab to store (permanently or temporarily) any number of artifacts defined in a successful pipeline execution. These artifacts are available via the web and the Artifacts API. I have several Go projects that could benefit from cross-compiling the binaries. Why not store these compilations so they are easy to grab whenever I need them for a specific environment? As an added benefit, after compiling once, I could deploy to various environments using these artifacts in downstream jobs. So, I jumped at this chance and found it less than ideal.

The Job

There is a Job API document defining how to create artifacts within a pipeline. It looks as simple as defining artifacts in your gitlab-ci.yml file:

artifacts:
  paths:
    - dist/

Creating artifacts is the easiest bit I came across. This automatically uploads the contents of the dist folder to the orchestration service, which makes it available on the GitLab site for that specific job. Easy peasy!

Getting those artifacts to a downstream job is pretty easy as well if you keep in mind the concept of a build context and have navigated GitLab’s various API documents. Thankfully, I’ve done that boring part and will explain (with examples) how to get artifacts working in your GitLab project!

The Breakdown

The API documentation that shows how to upload artifacts also shows how to download them. How convenient! Unfortunately, that part is not within the framework of the gitlab-ci.yml file. The only API documentation you should need for multi-job, single-pipeline artifact sharing is here: YAML API. For other uses (cross-pipeline sharing or scripting artifact downloads) you can see the Jobs API (warning: cross-pipeline artifacts are a Premium-only feature at the time of writing).
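
For the scripted case, grabbing the newest artifacts from a job boils down to one authenticated GET against the Jobs API. A minimal sketch (the project ID and token are placeholders; build-binary is the job name used in the example below):

# Download the latest dist/ artifacts produced by the build-binary job on master
curl --location --output artifacts.zip \
  --header "PRIVATE-TOKEN: your-access-token" \
  "https://gitlab.com/api/v4/projects/12345/jobs/artifacts/master/download?job=build-binary"
unzip artifacts.zip   # the archive contains the dist/ folder listed under artifacts:paths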

Looking at the Dependencies section, the dependencies definition should be used in conjunction with artifacts. Defining a dependent job causes an ordered execution, and any artifacts from the dependent job will be downloaded and extracted into the current build context. Here’s an example:

dependencies:
  - job-name   # name of the upstream job whose artifacts you want

So, what’s a build context? It’s the thing that’s sent to Docker when a build is triggered. If the artifacts aren’t part of the build context, they won’t be available for a Dockerfile to access (COPY, etc).
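
In other words, by the time the downstream job runs, the extracted dist/ folder has to live somewhere under the directory handed to docker build. A tiny sketch of the idea (the image tag is a placeholder):

# Only files inside the build context are visible to COPY instructions.
# The extracted artifacts land in $CI_PROJECT_DIR, so we use it as the context:
docker build -t example-image "$CI_PROJECT_DIR"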

Here’s the gitlab-ci.yml example:

stages:
  - build
  - release

build-binary:
  stage: build
  image: golang:latest
  variables:
    PROJECT_DIR: "/go/src/gitlab.com/c2technology"
  before_script:
    - mkdir -p ${PROJECT_DIR}
    - cp -r $CI_PROJECT_DIR ${PROJECT_DIR}/${CI_PROJECT_NAME}
    - go get github.com/tools/godep
    - go install github.com/tools/godep
    - cd ${PROJECT_DIR}/${CI_PROJECT_NAME}
    - godep restore ./...
  script:
    - ./crosscompile.sh
  after_script:
    - cp -r ${PROJECT_DIR}/${CI_PROJECT_NAME}/dist $CI_PROJECT_DIR
  tags:
    - docker
  artifacts:
    paths:
      - dist/
    expire_in: 2 hours

publish-image:
  stage: release
  image: docker:latest
  services:
    - docker:dind
  only:
    - "master"
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
  before_script:
    - "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY"
  script:
    - docker build -t ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME} --pull $CI_PROJECT_DIR
    - docker push ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME}
  after_script:
    - "docker logout ${CI_REGISTRY}"
  tags:
    - docker
  dependencies:
    - build-binary

And the crosscompile.sh

#!/bin/sh
echo "Cross compiling alexa-bot..."
for GOOS in darwin linux windows; do
  for GOARCH in 386 amd64; do
    echo "Building $GOOS-$GOARCH"
    export GOOS=$GOOS
    export GOARCH=$GOARCH
    go build -o dist/alexa-bot-$GOOS-$GOARCH
  done
done
echo "Complete!"

In this example, we cross-compile a Go binary and artifact it in the build-binary job before executing the publish-image job (which is dependent on build-binary). This downloads and extracts the artifacts from build-binary and sends the project directory (including the downloaded artifacts) as the build context to Docker when building the Dockerfile. Let’s look at the Dockerfile:

FROM alpine:latest
RUN apk add --no-cache --update ca-certificates
COPY dist/alexa-bot-linux-amd64 /bin/alexa-bot
RUN chmod +x /bin/alexa-bot
EXPOSE 443
# Run the binary with all the options (shell form so the environment variables are expanded at runtime)
ENTRYPOINT alexa-bot -c "$CLIENT_ID" -s "$CLIENT_SECRET" -x "$SERVER_SECRET" -t "$PRODUCT_TYPE_ID" -n 123 -v "$VERSION" -r "$REDIRECT_URL" -z "$TOKEN_URL"

You can see here that the Dockerfile starts with Alpine Linux as the base image, installs up-to-date CA certificates, copies the dist/alexa-bot-linux-amd64 binary from the Docker build context into the image, and gives it executable permissions. The rest of the file exposes the port the binary will listen on and passes the configuration to the binary.

Once this Docker image is pushed to GitLab’s private registry, it is available to run (given some environment configuration)!
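
Running it yourself then looks something like this sketch; the image path and every value below are placeholders, so substitute your own $CI_REGISTRY_IMAGE path and secrets:

docker login registry.gitlab.com
docker run -d --name alexa-bot -p 443:443 \
  -e CLIENT_ID="your-client-id" \
  -e CLIENT_SECRET="your-client-secret" \
  -e SERVER_SECRET="your-server-secret" \
  -e PRODUCT_TYPE_ID="your-product-type-id" \
  -e VERSION="your-version" \
  -e REDIRECT_URL="https://example.com/callback" \
  -e TOKEN_URL="https://example.com/token" \
  registry.gitlab.com/your-group/your-project:master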

Conclusion

All in all, this seems to work out great for single pipeline builds. When you get into multi-pipeline builds things get even trickier. I found that the artifacts system didn’t quite meet my requirements and I opted for condensing builds and not artifacting the compiled binaries. I could, however, trigger a pipeline that cross-compiles and artifacts the compiled binaries and run a second pipeline that also cross-compiles the binaries (duplicated work) then creates an image. Ultimately, I didn’t really care about the artifacts as my purpose was always to create the Docker images. As usual, your mileage may vary!

Hacktoberfest: Use Alexa without talking!

Hacktoberfest is upon us! This month I’m hacking on small projects each week and sharing them.

Backstory

When we sleep at night, we like to have some background white noise to help with sleeping. Since we have an Echo in our kitchen and an Echo Dot in our bedroom, we ask it to “play rain sounds,” which starts a lovely little piece of audio that puts us right to sleep. Unfortunately, it does not loop, and after an hour it stops. The sudden silence sometimes wakes me up, and I can’t restart it without talking to Alexa and potentially waking up my wife, and nobody wants that! So, I started researching how to talk to Alexa without talking to her and came across this article on the DolphinAttack. That got me thinking about a device that could leverage this vulnerability, but I quickly gave up as it involved more hardware than I wanted and a lot of setup. So, I kept researching and came across this forum post that talked about a workaround! This seemed more promising and closer to what I wanted.

The Goal

My goal for this week’s hackathon was to create a Slack bot for talking to Alexa. The idea is that the Slack app receives a text command, converts the text to a speech audio file, then sends it to the Alexa device I want to receive it. This would let me send a text message to Alexa in the middle of the night without ruining my wife’s sleep!

The Backend

Thankfully, there is already an open source GitLab project for handling the audio file push and an article that shows how to use it! I started by manually proving the concept before moving forward. After this proof-of-concept seemed like it would work out, I started on a Dockerfile to set this baby up!

FROM alpine:latest
ARG CLIENT_ID
ARG CLIENT_SECRET
ARG PRODUCT_TYPE_ID
ENV CLIENT_ID $CLIENT_ID
ENV CLIENT_SECRET $CLIENT_SECRET
ENV PRODUCT_TYPE_ID $PRODUCT_TYPE_ID
RUN mkdir -pv /opt/alexa
WORKDIR /opt/alexa
COPY *.sh ./
RUN chmod +x *.sh
RUN apk add --update ca-certificates \
    espeak \
    curl
RUN (crontab -l ; echo "*/20 * * * * /opt/alexa/refresh_token.sh") | crontab -
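
Building it locally for a quick test is just a docker build with the build arguments filled in (the values here are placeholders):

# ARG values are supplied at build time; the ENV lines above carry them into the image
docker build \
  --build-arg CLIENT_ID="your-client-id" \
  --build-arg CLIENT_SECRET="your-client-secret" \
  --build-arg PRODUCT_TYPE_ID="your-product-type-id" \
  -t alexa-bot .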

And a GitLab YAML file to auto-deploy it:

stages:
  - build
  - deploy
build-image:
  stage: build
  image: docker:latest
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
  services:
    - docker:dind
  only:
    - "master"
  before_script:
    - "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY"
  script:
    - "docker build -t ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME} -t ${CI_REGISTRY_IMAGE}:latest --pull ."
    - "docker push ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME}"
    - "docker push ${CI_REGISTRY_IMAGE}:latest"
  after_script:
    - "docker logout ${CI_REGISTRY}"
  tags:
    - docker
deploy-image:
  stage: deploy
  image: docker:latest
  only:
    - "master"
  variables:
    C_NAME: "alexa-bot"
  before_script:
    - "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY"
  script:
    - "docker pull ${CI_REGISTRY_IMAGE}:latest"
    - "docker container stop -t 0 ${C_NAME} || true"
    - "docker container rm ${C_NAME} || true"
    - "docker run -d -P --name ${C_NAME} -e CLIENT_ID=${CLIENT_ID} -e CLIENT_SECRET=${CLIENT_SECRET} -e PRODUCT_TYPE_ID=${PRODUCT_TYPE_ID} --restart always ${CI_REGISTRY_IMAGE}:latest"
  after_script:
    - "docker logout ${CI_REGISTRY}"
  tags:
    - deploy

TODO

Now that the image is published and the container is running, we need to set up some sort of front end. My major concern at this point is security, so I still need to figure that part out. We don’t want random people accessing this API and submitting audio we can’t hear to control our device. It’ll probably be some client-server shared secret. I’ll work through that part and make another post when it’s finished. Hackathons are meant to be down and dirty for an MVP. For now, I at least have a container I can SSH into to issue commands, so that’ll work for a little while to accomplish my goal. Ease of use via Slack would be the logical next step. Until next time!
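
For reference, a docker exec from the Docker host gets you that shell in the running container:

# Open an interactive shell inside the alexa-bot container
docker exec -it alexa-bot /bin/sh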

Hacktoberfest: GitLab Runner

Hacktoberfest is upon us! This month I’m hacking on small projects each week and sharing them.

We Continue…

Last week, my son and I ventured out to start a Minecraft server as part of Hacktoberfest. This week we planned on automating that server a bit more. A few days after playing on the server, we restarted it to test things out. The server did come back up, but the world data was not saved and we had no backups. Thinking out loud, we brainstormed both why this happened and how we could prevent it in the future. We certainly don’t want to build an awesome structure and then lose it when the server restarts! Backups were definitely on my mind, as were persisted, version-controlled configurations for the server (currently, after a restart, all the settings reset to the defaults). So we set about trying to find a backup solution.

Sounds Simple

We definitely wanted to automate backups. After reading a lot about how Minecraft saves files, we knew the backup was rather simple:

save-off
save-all
tar czf /opt/backups/$(date +%Y-%m-%d)-mc-${CLIENT}.tar.gz /opt/minecraft/
save-on

Sounds simple enough, right? Wrong! These commands are all sent to the Minecraft server from an RCON client. A quick search finds a lot of options! So far, so good. The Docker image we were using for Minecraft even included an RCON client… but it didn’t really do backups. At least, not the way I wanted. So, we decided to create our own Dockerfile for our Minecraft server!
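
Strung together, the backup boils down to something like this sketch, where rcon stands in for whichever RCON client you end up using (its exact arguments will differ):

#!/bin/sh
# Sketch only: "rcon" is a placeholder for a real RCON client invoked as
#   rcon <host> <port> <password> <command>
# RCON_PASSWORD is expected in the environment; CLIENT is a placeholder name.
CLIENT="my-server"
rcon localhost 25575 "$RCON_PASSWORD" save-off      # pause world saves
rcon localhost 25575 "$RCON_PASSWORD" save-all      # flush everything to disk
tar czf /opt/backups/$(date +%Y-%m-%d)-mc-${CLIENT}.tar.gz /opt/minecraft/
rcon localhost 25575 "$RCON_PASSWORD" save-on       # resume normal saving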

DIY

After much searching, we found a very simple and very compact Minecraft RCON client written in Go! Why not? There are no dependencies on a VM or additional libraries, so it should be dead simple to include in our Minecraft server Dockerfile. Of course, as with many things that seem simple, it was not as simple as we originally thought. In our build automation, we included a stage called “setup” running on the latest Golang image. Then we simply run a script to go get and go install the Go application. We used the artifacts instruction to share the compiled binary with the next stage, which copies it into our Minecraft server image. Now we have an RCON client!

We wanted backups to happen automatically, so we added a backup script. This script runs using Cron to periodically backup the Minecraft Server files and copy them to a backups directory. The idea here is to mount a volume to this directory so it can be used in another container (the one that will ultimately move it to Dropbox). Once we got that set up we ran the pipeline and…

It failed… The custom GitLab Runner image I had hobbled together a few months ago wasn’t working as expected. It wouldn’t pull the Golang image, so we couldn’t get the Go binary. I had been experiencing many problems with this image and was sure I had probably made it wrong. I at least hadn’t made it in a repeatable fashion, and I definitely didn’t have it running in a container… So, we decided to shift our focus to making a new GitLab Runner image the way I had intended: repeatable and, preferably, containerized. So we spun up a new repository to build this container…

With automated deploys in mind, this should be relatively easy. We created a new .gitlab-ci.yml file and had the job run in a docker:latest container (i.e., Docker-in-Docker). All this job is going to do is pull the latest GitLab Runner image, configure it, then run it. Let’s see how this works:

image: docker:latest
variables:
  DOCKER_DRIVER: "overlay2"
deploy-runner:   # a named job is required for a valid .gitlab-ci.yml
  script:
    - docker pull gitlab/gitlab-runner:latest
    - >
      docker run -d --name gitlab-runner
      --restart always
      -v ${GITLAB_VOLUME}:/etc/gitlab-runner
      -v /var/run/docker.sock:/var/run/docker.sock
      gitlab/gitlab-runner:latest
  tags:
    - docker

We push and.. wait a second.. before we push let’s walk through this… We want GitLab to automatically deploy a GitLab Runner… This isn’t repeatable. If I were to tear down the current GitLab Runner, this would never run! This can’t be the solution.. let’s take a step back…

We know we want this automated, but the first time we run it, there won’t be any automation at all! We can’t use the YAML file! We must use a shell script (or some other script)! Let’s take a look at what we need to do:

#!/bin/bash
# If docker is not installed, go get it
command -v docker >/dev/null 2>&1 || {
  curl -sSL https://get.docker.com/ | sh
}
# Set defaults
VERSION="latest"
function usage() {
  echo "This script automatically installs Docker and runs a GitLab Runner container."
  echo " ./run.sh -t [options]"
  echo " -t, --token : (Required) The GitLab Token of the container"
  echo " -n, --name : (Required) The name of the container"
  echo " --alpine : (Optional) Use the Alpine version of the gitlab-runner container"
  exit 1
}
#Parse command line options
while [[ $# -gt 0 ]]; do
  case $1 in
    -n | --name)
      NAME="$2"
      shift # past argument
      shift # past value
    ;;
    --alpine)
      VERSION="alpine"
      shift # past argument
    ;;
    -t | --token)
      TOKEN="$2"
      shift # past argument
      shift # past value
    ;;
    *)
      usage
    ;;
  esac
done
if [ -z ${TOKEN} ]; then
      echo "Token is required!"
      usage
fi
if [ -z ${NAME} ]; then
      echo "Name is required!"
      usage
fi
# Replace any existing runner container with the same name
docker stop -t 0 ${NAME} || true
docker rm ${NAME} || true
docker run -d --name ${NAME} \
  --restart always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:${VERSION}
#Register with GitLab
docker exec ${NAME} gitlab-runner register \
  --non-interactive \
  --executor "docker" \
  --docker-image alpine:latest \
  --url "https://gitlab.com/" \
  --registration-token "${TOKEN}" \
  --name "${NAME}" \
  --docker-privileged \
  --tag-list "docker" \
  --run-untagged \
  --locked="false"
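
Running it then looks like this (the token is a placeholder for the registration token from your project or group’s CI/CD settings):

# Spin up a privileged, Docker-enabled runner named builder-01
./run.sh --token your-registration-token --name builder-01 --alpine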

There. Now we have a script that takes a container name and your GitLab Runner registration token, then spawns a new privileged Docker container. I decided not to mount a config volume, as it would continuously add new runner configs whenever the service restarted. There is a bit of cleanup to do in GitLab under the CI/CD runners section when a runner is restarted, but that doesn’t seem to be a big deal at the moment. Now that we have a new, repeatable GitLab Runner up, we can try to get that Minecraft Dockerfile working next time!

Hacktoberfest!

Hacktoberfest is upon us! This month I’m hacking on small projects each week and sharing them.

Minecraft

This week my son and I hacked together an auto-deployed Docker based Minecraft server that is also automatically backed up and uses Git for managing server configurations. We haven’t finished the auto-backup to Dropbox portion yet, but that’s something we can always work on later! Here’s what we did…

Continuous Integration & Continuous Deployment

I knew I wanted this server to be a “set it and forget it” type of server. If anything went wrong, restarting it should fix things without losing world data. With this in mind, my first thought was using GitLab‘s CI/CD process and a .gitlab-ci.yml file. Looking through the Docker registry, I found a Minecraft server image that appears to stay up to date: itzg/minecraft-server. We simply used that and mounted a few volumes for version-controlled configs and the world data directory. The .gitlab-ci.yml file is a lot to take in if this is your first time seeing one, so I’ll walk through each line and explain what it does. Here it is:

cache:
  key: "$CI_REGISTRY_IMAGE-$CI_COMMIT_SHA"
  untracked: true
stages:
  - deploy
deploy-prod:
  stage: deploy
  image: docker:latest
  only:
    - "master"
  variables:
    DOCKER_DRIVER: "overlay2"
  script:
    - docker pull itzg/minecraft-server:latest
    - docker stop minecraft || true
    - docker rm minecraft || true
    - docker run -d --name minecraft -p 25565:25565 -v minecraft-world:/data/world -v minecraft-config:/config -v minecraft-mods:/mods -v minecraft-plugins:/plugins -e EULA=TRUE --restart always  itzg/minecraft-server
    - docker cp ./config/* minecraft:/config/ || true
    - docker cp ./data/* minecraft:/data/ || true
    - docker cp ./mods/* minecraft:/mods/ || true
    - docker cp ./plugins/* minecraft:/plugins/ || true
    - docker restart minecraft || true
    - docker pull janeczku/dropbox:latest
    - docker stop -t 0 minecraft-backup || true
    - docker rm minecraft-backup || true
    - docker run -d --restart=always --name=minecraft-backup -v minecraft-world:/dbox/Dropbox/minecraft-server/data -v minecraft-config:/dbox/Dropbox/minecraft-server/config -v minecraft-mods:/dbox/Dropbox/minecraft-server/mods -v minecraft-plugins:/dbox/Dropbox/minecraft-server/plugins janeczku/dropbox
  tags:
    - docker

Devil is in the Details

cache:
  key: "$CI_REGISTRY_IMAGE-$CI_COMMIT_SHA"
  untracked: true

Lines 1-3 relate to GitLab job caching. This caches untracked files from the workspace in the GitLab Runner cache under the key on line 2, which is useful for downstream builds. I copied this from another project (Hacktoberfest!), so I’m not sure it’s even needed here since no other job uses this cache.

stages:
  - deploy

Line 4 defines the stages that will be used during this pipeline and line 5 is that stage. We define only the “deploy” stage as we aren’t performing any testing or other delivery stages. Defining it is useful for organization within this file and lets you introduce more stages later. Again, this may not be necessary since only a single stage is involved. I did copy this from another project to reduce the amount of time spent recreating it so… Hacktoberfest!

deploy-prod:
  stage: deploy
  image: docker:latest
  only:
    - "master"
  variables:
    DOCKER_DRIVER: "overlay2"
  script:
    - docker pull itzg/minecraft-server:latest
    - docker stop minecraft || true
    - docker rm minecraft || true
    - docker run -d --name minecraft -p 25565:25565 -v minecraft-world:/data/world -v minecraft-config:/config -v minecraft-mods:/mods -v minecraft-plugins:/plugins -e EULA=TRUE --restart always  itzg/minecraft-server
    - docker cp ./config/* minecraft:/config/ || true
    - docker cp ./data/* minecraft:/data/ || true
    - docker cp ./mods/* minecraft:/mods/ || true
    - docker cp ./plugins/* minecraft:/plugins/ || true
    - docker restart minecraft || true
    - docker pull janeczku/dropbox:latest
    - docker stop -t 0 minecraft-backup || true
    - docker rm minecraft-backup || true
    - docker run -d --restart=always --name=minecraft-backup -v minecraft-world:/dbox/Dropbox/minecraft-server/data -v minecraft-config:/dbox/Dropbox/minecraft-server/config -v minecraft-mods:/dbox/Dropbox/minecraft-server/mods -v minecraft-plugins:/dbox/Dropbox/minecraft-server/plugins janeczku/dropbox

Line 6 defines a new job called deploy-prod, which runs during the deploy stage we just defined and only for the master branch (lines 9 and 10). It spins up a new Docker container using the latest docker image (line 8) from the Docker registry. Once spun up, line 11 defines environment variables available to the container and line 12 sets the DOCKER_DRIVER. This driver is supposed to be more efficient; again, this was copied from another project and I haven’t had any problems, so I leave it alone. Lines 13-26 are the meat and potatoes: the script section does the heavy lifting.

  script:
    - docker pull itzg/minecraft-server:latest
    - docker stop minecraft || true
    - docker rm minecraft || true
    - docker run -d --name minecraft -p 25565:25565 -v minecraft-world:/data/world -v minecraft-config:/config -v minecraft-mods:/mods -v minecraft-plugins:/plugins -e EULA=TRUE --restart always  itzg/minecraft-server
    - docker cp ./config/* minecraft:/config/ || true
    - docker cp ./data/* minecraft:/data/ || true
    - docker cp ./mods/* minecraft:/mods/ || true
    - docker cp ./plugins/* minecraft:/plugins/ || true
    - docker restart minecraft || true
    - docker pull janeczku/dropbox:latest
    - docker stop -t 0 minecraft-backup || true
    - docker rm minecraft-backup || true
    - docker run -d --restart=always --name=minecraft-backup -v minecraft-world:/dbox/Dropbox/minecraft-server/data -v minecraft-config:/dbox/Dropbox/minecraft-server/config -v minecraft-mods:/dbox/Dropbox/minecraft-server/mods -v minecraft-plugins:/dbox/Dropbox/minecraft-server/plugins janeczku/dropbox

Line 13 defines what will run on the GitLab Runner. This is the reason we use the latest docker image on line 8: the commands on lines 14 through 26 leverage Docker on the GitLab Runner to manipulate containers. We start off on line 14 by pulling the itzg/minecraft-server image (in case a new Minecraft server version has been released). This updates the image Docker can use but doesn’t touch the running container. After pulling the latest image, we stop (line 15) and remove (line 16) the currently running Minecraft server container. The || true guarantees the command won’t return an error, which would otherwise fail the job if the container isn’t running or doesn’t exist. Line 15 doesn’t force a shutdown timeout, so the server gets time to clean up and save the world before going down, which helps prevent data corruption. Line 16 then removes the stopped container so we can re-deploy with the latest version.
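
A quick way to see why the || true matters: stopping a container that does not exist normally fails the whole job, but with the fallback the step always exits cleanly.

docker stop no-such-container           # non-zero exit code: the job would abort here
docker stop no-such-container || true   # exit code 0: the pipeline keeps going
echo $?                                 # prints 0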

    - docker run -d --name minecraft -p 25565:25565 -v minecraft-world:/data/world -v minecraft-config:/config -v minecraft-mods:/mods -v minecraft-plugins:/plugins -e EULA=TRUE --restart always  itzg/minecraft-server

Line 17 does a lot. It runs a container named minecraft (--name minecraft) as a daemon (-d) and binds host port 25565 to container port 25565 (-p 25565:25565). Since Minecraft clients check for this specific port, I opted to bind the publicly exposed port to the same container port. We then mount several Docker volumes with the -v flag. I like to think of a Docker volume as a flash drive: it’s just a storage device you can plug into containers, except you can plug it into multiple containers simultaneously and mount it to a specific folder. Here, we mount the minecraft-world volume to the /data/world container directory. We also mount minecraft-config to /config, minecraft-mods to /mods, and minecraft-plugins to /plugins. Once these mounts are in place, we set the container’s EULA environment variable to true, set the container to always restart if it goes down, and finally tell Docker what image to use. This runs the actual container, and the whole server spins up, performing any startup tasks before it is live and available for connections!
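
If you want to poke at these “flash drives” on the Docker host, the volume subcommands make that easy:

docker volume ls                        # list every named volume on this host
docker volume inspect minecraft-world   # shows the Mountpoint where the data actually lives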

    - docker cp ./config/* minecraft:/config/ || true
    - docker cp ./data/* minecraft:/data/ || true
    - docker cp ./mods/* minecraft:/mods/ || true
    - docker cp ./plugins/* minecraft:/plugins/ || true
    - docker restart minecraft || true

Once this container is running, we copy version-controlled files (if any) from the checked-out repository into the mounted volumes on the Minecraft server container (lines 18-21). This lets us version control configurations, plugins, and mods and auto-deploy them to the server. Once these are in place, we restart the container one more time (line 22) for them to take effect. The files are copied this way so we don’t have to worry about whether this is the first time the container is run. If we tried to copy the files in before starting the container, the container wouldn’t exist on the first run and we’d need two deployments for changes to take effect.

    - docker pull janeczku/dropbox:latest
    - docker stop -t 0 minecraft-backup || true
    - docker rm minecraft-backup || true
    - docker run -d --restart=always --name=minecraft-backup -v minecraft-world:/dbox/Dropbox/minecraft-server/data -v minecraft-config:/dbox/Dropbox/minecraft-server/config -v minecraft-mods:/dbox/Dropbox/minecraft-server/mods -v minecraft-plugins:/dbox/Dropbox/minecraft-server/plugins janeczku/dropbox

Lines 23-26 are backup related. Line 23 updates the local version of the janeczku/dropbox:latest image. Lines 24 and 25 should look familiar. These stop and remove the existing container while guaranteeing success if the container is already stopped or doesn’t exist. Line 26 should also look familiar. Here, we start another container as a daemon (-d) that restarts whenever it stops (--restart=always) named minecraft-backup (--name=minecraft-backup) with a few volumes mounted to the container. These volume mounts should also look familiar! We mount the same volumes here as we do the Minecraft server so this container can periodically back up the contents of the volumes to Dropbox. We are still troubleshooting why this isn’t quite working and hope to have this resolved next time.

  tags:
    - docker

Finally, lines 27 and 28 tell GitLab which GitLab Runner to run this job on. Each GitLab Runner may have tags, and jobs can use those tags to target specific runners. This particular job requires a Docker-enabled GitLab Runner. Tagging the job doesn’t guarantee such a runner exists, but in my case I have already set one up, and adding this tag forces the job to use it.

That’s it! Now we have a Minecraft server! Even without the Dropbox backups, this project was fun. It was an awesome moment when my son wanted to stand up a Minecraft server and actually stayed interested through the build. He got to see behind the scenes how the game is set up. I don’t think he fully understands how it all works yet, but maybe one day!

We will continue tinkering with this script as we add configuration files and get the Dropbox backups working. I will open source it when we have finished working on it. Until then, feel free to use this as a template for your next Minecraft server; I will keep updating this post as I update the files.

Reset

Begin Anew

I recently wiped my hosting server since I was closing down my computer repair business. It always bothered me that I was only hosting one site on a rather large server and couldn’t do much with it without risking downtime (not that it really mattered given so little traffic). When I shut it down I had a clean slate… why not try out hosting with Docker? Not knowing much about it, other than some basic commands, a general idea of what it is, and some experience at work, I dove in.

Containment

At work I deploy a whole bunch of microservices, each in their own container, to a local Docker host on our development machines for testing and developing. There’s a whole Continuous Integration/Continuous Delivery (CI/CD) pipeline that each GitHub repo is hooked into. It builds a Docker image configured to run the application, runs unit tests, then publishes the image to the private Docker Registry. The CI/CD system then deploys it to various environments. If any of these deployments fail, the deploy stops and the team is notified in Slack. Pretty sweet setup, since I don’t have to do anything to actually deploy any changes: just merging a branch to master kicks off an automatic deployment. They are running some sort of cluster that provides fail-over and load balancing… but I don’t quite think I need that… yet. So, knowing what I already knew about Docker, I set out to deploy this website as a container! Being a lazy developer, I leveraged existing Docker images on the public Docker Registry. Since I am familiar with WordPress, and there is a Docker image with WordPress already set up, I went with that. For storage, WordPress needs a MySQL database, which also exists on the Docker Registry. I use those and we’re all done here, right?

Configurations

Not so fast. We still need to actually set up the MySQL and WordPress containers to talk to each other. By default, each container runs in isolation: nothing is exposed, so nothing can access the container. This is great since we’re not quite ready for internet traffic. So I wrote a Dockerfile that builds the vanilla WordPress container and links it to the MySQL container on a private network. Now that WordPress and MySQL can talk to each other, we still need to configure MySQL and WordPress at the application level. We’re done with the containers, so it’s time to hit the configs! MySQL setup is rather easy: log into the MySQL container, create a username and password, and you’re done! For WordPress, there’s a lovely little wizard the first time you hit your website that lets you set it up through a convenient user interface. After creating the username and password in MySQL, I hit the site and walked through the WordPress setup wizard. Once I finished the wizard, the website was set up!
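
It isn’t my exact Dockerfile, but the idea boils down to something like this sketch using the stock images and a user-defined network (names and passwords are placeholders):

# Private network so the two containers can talk without exposing MySQL publicly
docker network create wp-net
docker run -d --name wp-db --network wp-net \
  -e MYSQL_ROOT_PASSWORD="change-me" \
  -e MYSQL_DATABASE="wordpress" \
  -e MYSQL_USER="wp" \
  -e MYSQL_PASSWORD="also-change-me" \
  mysql:5.7
docker run -d --name wp --network wp-net -p 80:80 \
  -e WORDPRESS_DB_HOST="wp-db" \
  -e WORDPRESS_DB_NAME="wordpress" \
  -e WORDPRESS_DB_USER="wp" \
  -e WORDPRESS_DB_PASSWORD="also-change-me" \
  wordpress:latest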

Wrap-up

In another post, I’ll explain the CI/CD portion of how I made this repeatable. I did end up automating the MySQL setup so it auto-generates a username and password (of my choosing) when the Docker container is created. Additionally, I made the whole process repeatable and connected it to the GitLab CI/CD system so it’s automated. My #1 goal was to make this repeatable and automated so that if I ever need to upgrade my server or move hosts, I can regenerate the site in its current state with the push of a button (or at least a few simple steps).

Rules

We’ve come so far since the last “game dev” session! We have an actual rule book! This came about after sifting through notes we had taken in previous sessions while play testing and running into conflicting information. It was time to consolidate and create one rule set to rule them all! The rule book isn’t entirely complete, but it will serve as the golden standard going forward. If a rule changes, it changes in the rule book. If an idea occurs, it’s tested before being added to the rule book.

Things are a’ Changin’

We ended up changing the actions on a turn. Now, instead of placing a tile, drawing back up to 3 cards, moving, fighting, moving zombies, then discarding, players still place a tile but then choose whether to move, search, trade, or use an item outside of combat. The rest of the turn is just zombie movement. We tried this new turn order and it worked! Sort of…

We had already decided that players started with only 3 heart tokens. This proved devastatingly hard and we generally didn’t survive the first zombie encounter. Adding the three bullet tokens back didn’t seem to increase the odds past 2 encounters. So, you get to draw 2 cards from the item deck to start as well. This seems to be a great balance as you are essentially meeting up with another player at the start of it all with whatever you managed to grab on the way out the door.

We did clean up the combat rules to indicate that using a Weapon requires a bullet token. Without a weapon, or bullet tokens for the weapon, you instead must make a combat roll under the original rules. We also established basic zombie movement rules: the closest zombie to a player moves towards that player. In the event of a tie, the player chooses which one moves. This prevents the players from just making zombies move away from them or the area they want to go to.

We also implemented a limited inventory and item sharing. Players can now hold up to 3 items at a time whether or not they are broken. A player may discard an item at the end of their turn or when they pick up a 4th item. We also implemented sharing. This still has some kinks to work out as the time it takes for sharing may outweigh the benefits.

When will it end?

We did find out that members of your team will eventually lose all their heart tokens. We discussed it, then decided to test that when this happens, the player joins the zombie team, which gains full control over the zombie horde surrounding their former partners. This means that zombies move at the zombie team’s discretion instead of only the closest zombie moving towards a player. This lets the zombie team become strategic in isolating players from their group.

This, of course, eliminates the win condition of collecting 25 zombies per player. We’re not sure if this is a good idea or not, but I think with a few more play tests and a bit more tweaking it can be a beneficial mechanic for interesting play.

On that note…

We did come up with some new ideas like additional armor, weapons, and items. Barricades and backpacks were some specifics. Barricades would block a street or building entrance so zombies (and players) couldn’t pass. Backpacks would add 1 armor token and 3 inventory spaces as long as it wasn’t broken. When it breaks, you have to discard down to 3 items. We will most likely change the tiles to be more advantageous to the new mechanics and play styles we are introducing. Ideas like parking lots, shortcuts, and alleys all came up. A difficulty system based on starting heart tokens was also discussed. Who knows? Maybe next session will see some of these fleshed out.

Play Testing!

It’s been a short while and Connor and I have had another “game dev” session! He brought his journal with him and it had some ideas! So, we went about play testing some of the ideas from the first session and some of the ideas in his journal.

The Story… So Far…

For this particular game, we already have a platform with a story. It’s a rather simple story without all the backstory that enriches characters. The story is essentially: “Escape the city on the helicopter, or capture enough zombies that surviving won’t be an issue.” Pretty simple and straightforward. We could always expand this later with, perhaps, an actual story? Maybe a few named characters? Who knows!

Goooooooooooooal!

So we have a story; now we need goals. Goals tell players how to win. The original goal is “escape,” but in actual play it typically comes down to “prevent your opponents from escaping.” This becomes boring and ends in a long slog through the game until someone flips a table or everybody just gives up.

We didn’t want to play a game like that. We want a fun, albeit challenging, game to play with friends that resolves in a reasonable amount of time. Those are our goals. With them in mind, we couldn’t see any way to win other than “escape.” So we made the game goal “escape with your friends” to set the tone that this game is cooperative!

But escaping is hard, and the original had somewhat of a fail-safe: collect 25 zombies. That sounds reasonable to keep, maybe “25 zombies for each player playing the game”? That seemed better so we went with that!

Mechanics Schmechanics

So if the story is the who, what, when, and where, and the goal is the why, mechanics are the how. This is how a game is played. In Monopoly, you move around the board buying properties and building houses and hotels so you can collect money. Build and buy enough and your opponents can’t pay the bills and go bankrupt. These (dice, properties, rent, houses, hotels, mortgaging, passing Go, Chance, Community Chest) are all the mechanics of how you play Monopoly. They also fit with the story of being a real estate mogul and the goal of being the last one in the game. This synergy is hard to hit with mechanics unless the story and the goal are related and aligned.

Card Change Up

Most of the cards in the standard deck revolve around messing with other players. We knew we needed to replace those. In our previous brainstorming session we talked about getting players to work together instead of against each other, so we thought about how new cards could help that happen. As you can probably imagine, this led to extremely overpowered cards like “Capture all Zombies.” So we talked about the different aspects of games, like the story/plot, player goals, and game mechanics, and how they all work together.

Search and Replace

We struggled with the current card mechanics and how to change them. There was definitely something off with the standard game because it focused more on sending zombies at the opposing players than on surviving. What zombie apocalypse has people fighting against the world and not teaming up? So we removed all the “opposition” cards that hurt your new teammates and thought up some alternate cards. We came up with a few replacements and wrote them down on slips of paper. I broke out the card protection sleeves and slipped an original card into each one with the slip of paper in front of it. Boom: low-budget prototyping.

After adding some risky cards and some beneficial cards, it quickly became clear that we needed to change up the deck mechanic and introduce a search mechanic. So we split the deck into “Items” and “Search” decks. We added some proxy cards to the Search deck for drawing an Item (currently the only way to get an item). This seemed great! We got more weapons, and there are risks to searching (like finding a zombie or misplacing/breaking a weapon). This mechanic helped a lot. It got more weapons and items into play, and it organically made us stick together for protection. It was a great step toward the feel we imagined for this game.

Weaponized

We found the standard weapon mechanic too limiting. It requires you to go to a specific building after finding a weapon to actually use the weapon. This seemed backwards to us. Once you find a weapon you should be able to use it. So, we lifted that restriction and found that they suddenly became way too powerful! We talked about balance (which is a rather difficult concept to understand). So I offered up a “durability” check when you use a weapon to balance this mechanic. This introduced some risk to using a weapon since it can break. It worked great but weapons were still too powerful. The original game has a concept of increasing your combat roll with a bullet token. We opted to keep that mechanic as more of a general representation of “effort” when you are in combat without a weapon — you may or may not need it. But, once you have a weapon, you aren’t making combat rolls. So this seemed to be a little uneven. We decided to have using a weapon cost a bullet token as ammunition in addition to the “durability” roll. This ended up working out great so far! You have the safety of a weapon but if it breaks, you’re low on effort/energy since a disheartening event just occurred and your risk just increased.

Life or Death

We only kept the basic weapons in the game. Some of them remained one-time use (grenade and Molotov) and we buffed some of them (shotgun has a range of 1). We still didn’t end up searching as much as we hoped. While the search and item decks seemed to have complementing risk and reward, opting to search seemed too much of a risk in itself. We hadn’t changed any of the main mechanics so that you were more likely to want to search. So, we took out heart tokens from building spawns and added Armor to the items deck. Other than First Aid Kits the only way to get life now is through Armor. But, there are limited armor items in the deck (need to still play test that one with more people). So, we talked about repairing items. If a weapon or armor breaks, we just tap it (turn it sideways) to mark it as broken. Broken items can be fixed with a Repair Kit.

Sharing is Caring

This is going great! We have weapons to survive, a reason to search for things, and a way to fix the things we found. What about helping others, though? We talked about Pandemic and how researchers can share their research. What if players could share their inventory items? We tried it out. Players have to be on the same square, and the active player can forgo moving to give 1 item to a player on their square. This seems balanced. It requires communication on trades, planning for meeting up, and both players essentially sacrificing their movement (the receiving player has to use their movement to get to the trading player). This happened almost immediately with armor when Connor found 2 pieces. It sparked the talk on armor tokens remaining on their respective armor cards and how they travel with the armor.

Conclusion

This session was awesome! We have a semi-playable version of our modified Zombies!!! co-operative game. Brainstorming is one thing, but seeing your ideas in action and how they play out is so much more informative. We almost immediately saw flaws in our ideas, discussed them, did a quick brainstorming session on how to fix them, then tried out the new way. These iterations helped Connor understand how players interact with different mechanics and how they all play together. He’s still got his journal and he’s still writing down ideas. We are both looking forward to the next “game dev” session!

Braaaaaainsssstorming…

The other day I was playing a tabletop game with my son. It was Zombies!!! If you haven’t played it, it’s a tile-based tabletop game where you play as a human trying to escape a city filled with zombies. The world starts as a 9×9 gridded square. Each turn a new tile is played, growing the city larger and larger while filling it with zombies. In this game you would rather feed your friend to the zombies to get away than help him out of a jam. So, yes, it’s hyper competitive. The game has its downs, though. The only way to win is by either capturing 25 zombies or escaping on the helicopter (the last tile in the deck). You can slow your opponents down by dropping zombies on them. As the game progresses, the mechanics break down and the game becomes “how fast can you send players back to start.” The game has a ton of potential, though. So we decided to change up some mechanics to see if we could turn this into a multiplayer cooperative game (akin to Pandemic). What better way to start working on a zombie game than brainstorming!

Brainstorming

So, my son just turned 10. He’s excited (really excited) about Fortnite and YouTube and Twitch! So, I brought up tabletop games. Why not make a game? So, I showed him different types of games. He already knew about card games, and I’ve been showing him other board games like Ticket to Ride First Journey, Pandemic, Life on the Farm, and Monopoly of course! He hadn’t played a tile-based game, so I thought Zombies!!! would be a great gateway into thinking outside the box (or board) and into a more open-format game. As we quickly became bored with this game in its current condition, we asked ourselves: how could we make this fun? Connor was the first to answer with “Why aren’t we working together to escape?” which is a great idea! It’s the zombie apocalypse! When have we ever seen a zombie apocalypse that doesn’t have a band of humans (good or bad) working together for survival? Seems like a great idea! Added it to the list. We went down different ideas and talked about how each would affect the game and whether it would make it too difficult or not fun. We added them to the list and came up with a bunch of ideas. Here are some general ideas of where this brainstorming led:

  1. Players work together to escape or capture enough zombies
  2. Different decks to search from
  3. Change the layout of the tiles
  4. Some sort of inventory system
  5. Some better weapons system where they aren’t one-time use

Connor is so excited to continue that he’s keeping a journal of ideas to share next session! I’ll keep updating this series as we progress on our game! This activity was definitely more fun than I expected, and I’m looking forward to regular “game dev” nights and hopefully a better game to play and share with friends!

GopherCon 2018

So, this was my first ever development convention. I gotta say.. it was pretty awesome and I’m not sure why I didn’t go to one earlier. In this post I’m going to try to explain what GopherCon is, why you should go next year, and share some travel tips that may help you.

So.. What is GopherCon?

It’s a convention! More importantly, it’s a convention of Gophers who are all passionate about coding in Go. Many are open source contributors, all are open source consumers. If you are interested in meeting people in the community this is the place to be. If you couldn’t make it this year, see if you can next year, it’s definitely worth it. Heck, maybe you can get your employer to foot the bill because you will learn a lot! I almost immediately started applying things I learned to my codebases to learn more about them!

If you couldn’t make it to the convention or to all of the talks you wanted to attend, don’t fret! SourceGraph did live blogging and all of the talks were recorded [links to come as they get posted]. So, if you were freaking out because you weren’t able to keep up taking notes, or you really wanted to attend Kat Zien’s “How Do You Structure Your Go Apps?” and Filippo Valsorda’s “Asynchronous Networking Patterns” but they happened at the same time (totally the dilemma I was in; I picked Filippo’s), the talks were all recorded and will eventually be available online! Kelsey Hightower had a great talk on Going Serverless, which I am very interested in implementing in my side projects.

All work and no play makes a sad Gopher. There are parties on the schedule each evening. Monday’s was at the Denver Performing Arts Complex and had free food from some awesome food trucks, pool tables, foosball tables, bocce ball, corn hole, and Jenga towers. There was also the one and only live GopherCon band. If you are musically inclined (or think you are and want to get up on stage), anyone can be part of the band! They played some awesome cover music and a bit of live karaoke. The ending was a smash hit!

Why Should You Go?

But why should you go if everything is put online after the convention? The short answer is that the experience and the immersion in the convention culture will gain you friends and expand your network. You don’t stop talking about a talk when it ends; you find out other people you’re getting together with for a meal also went to that talk, and you get differing viewpoints on the topic of discussion. These alternate viewpoints help drive home the purpose of the talk: to get a discussion started. The topic of a talk isn’t the answer in itself. It’s a starting point; a springboard to innovation and collaboration around an idea.

If you are new to Go, this convention is a welcome wagon of sorts. It’s an intro course for ideas you may be familiar with and a deep dive into topics you’ve never thought about. It exposes you to the online Gopher community and culture of inclusion that I’ve rarely seen in online communities. Everyone wants to help everyone else. Everyone wants to contribute to everyone’s ideas.

There’s also a Gopher Slack (go, click and join right now… I’ll wait). This community is awesome! It’s a global community of Gophers who all want to make Go better and be better at Go. This community is the hub I go to now for all things Go related. Got a question? Ask! Got a problem? Discuss! Got a solution? Share! Maybe you can overcome that huge problem you’re having and present it next year at GopherCon’s Community Day or as a keynote!

During the convention you have the opportunity to meet various vendors and the speakers to get to know them, learn about what they are doing and make some new friends! You also get to meet the faces behind the users in the Gopher Slack community (you joined right?). There is so much I learned about various products, what other companies are doing, and more about Go than I ever expected to learn here!

SO. MUCH. SWAG. Aside from awesome stickers, shirts and other swag, there was a raffle this week for a Microsoft Surface Pro, Oculus Rift, and an iPad! Sadly, I did not win any of these. But, someone at my table did win the Oculus Rift. The positivity at the convention was alluring.

Sadly, since this was my first GopherCon and I didn’t know how it worked, I missed Community Day. But this day is for the community, with lightning talks, hackathons, and meetups. There was so much going on that I wished I could delay my flight and book another night just to keep hanging out with these awesome people doing cool things I’ve never done before, while also continuing to meet new people and learn new things.

Travel Tips

So, this was my first GopherCon. It was my first convention too. But it wasn’t my first time flying out somewhere for a long period of time. If you travel a lot, sign up for TSA PreCheck; it’ll save you a lot of time getting through security.

All the points! Book where you get points and fly for points. I am a Starwood Preferred Guest member. They recently merged with Marriott, so you get benefits across 29 or so hotel brands and have a ton of hotels to choose from! I generally fly Delta, so I’m a Delta SkyMiles member. They recently teamed up with Lyft (referral link) to offer SkyMiles when you link your Lyft and SkyMiles accounts!

Lastly, leave some room for swag! I got lucky and was a smidge below the 50 lb weight limit on checked bags (although my bag is also ridiculously heavy). If it is over when you go to check it, you might be able to shift some things to your carry-on to get it under the 50 lb (23 kg) limit. Pro-tip: rolling your clothes saves room in your bag.

Hello world!

So, after much consideration, I’ve retired from computer repair and decided to re-launch a site to focus more on my passion: developing solutions to interesting problems. My goal is to document my forays into the unknown (to me at least) for later reflection. Maybe it’ll help someone facing similar issues.

I have a few projects lined up and I’m considering resurrecting older projects to make them more modern. I’m currently working on a service for No Man’s Sky that allows a user to determine what they can build given resources they have (or could gather). More on this in another post.

I recently completed a Discord bot for Tom Clancy’s: The Division that leverages the very nice and clean APIs at Ruben Alamina’s site to easily search all the vendors when they reset their inventories. This too I plan on writing up in a later post.

So, this site is (finally) running and I plan on writing up how I made it all automated, repeatable, version controlled, secured, and backed up. I haven’t worked much with Docker before, but I do understand the concepts. This site runs on Docker! It’s in its own little container with a mounted Docker volume for plugin files. It’s also fronted by Nginx or, more specifically, an nginx-proxy container with a Let’s Encrypt companion container, letsencrypt-nginx-companion (go ahead, check the SSL certificate), that auto-magically creates, renews, and configures SSL certificates from Let’s Encrypt for any new publicly exposed containers. The site is backed by a MariaDB Docker container with a mounted data Docker volume. The website volume is mounted read-only to a WordPress Backup container that is also linked to the MariaDB container. This WordPress Backup container periodically copies and compresses the WordPress files in the mounted volume and creates a dump of the WordPress database. Both are stored on the mounted backup volume. That backup volume also happens to be mounted to a Dropbox container, so all of these backup files go straight to Dropbox! If this server goes kaboom, all I have to do is relaunch the containers on another host and restore the backups from Dropbox!
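
Here’s roughly the shape of the proxy half of that setup, sketched with the stock images and placeholder hostnames (the real thing is all wired up through GitLab CI):

# Reverse proxy that watches the Docker socket for new containers
docker run -d --name nginx-proxy -p 80:80 -p 443:443 \
  -v certs:/etc/nginx/certs \
  -v vhost:/etc/nginx/vhost.d \
  -v html:/usr/share/nginx/html \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy
# Companion container that requests and renews the Let's Encrypt certificates
docker run -d --name letsencrypt-companion \
  --volumes-from nginx-proxy \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  jrcs/letsencrypt-nginx-proxy-companion
# Any container started with these variables gets proxied and certified automatically
docker run -d --name blog \
  -e VIRTUAL_HOST=example.com \
  -e LETSENCRYPT_HOST=example.com \
  -e LETSENCRYPT_EMAIL=admin@example.com \
  wordpress:latest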

That’s it for the teaser. It was a fun project and I’ll definitely be diving into the details when I get back from GopherCon next week. I’m sure I’ll have some goodies from that as well (the No Man’s Sky and the Division projects are both written in Go).

I hope you enjoy the project write-ups I’ll be adding here. It’s not all programming; I’m also developing a game with my son. I plan on writing up other things as well (we have a Raspberry Pi, and home automation is something I’ve dabbled in a wee bit).

What are you interested in? What projects would you like to see? I’d love to hear about them so leave a note in the comments below!