Hacktoberfest: GitLab Artifacts

Hacktoberfest is upon us! This month I’m hacking on small projects each week and sharing them.

Background

GitLab is a great alternative to GitHub. One of its main draws for me is unlimited private repositories for free. This lets me work on things without exposing them publicly until I'm ready. In addition to private repositories, it has a private Docker registry where you can store your Docker images. GitLab also has other built-in CI/CD capabilities, like secrets that are passed to the CI/CD orchestration file. GitHub has CI/CD capabilities too, but GitLab seems less involved to set up.

All of my CI/CD jobs are orchestrated in a .gitlab-ci.yml file that sits in the repository. Couple this with a self-hosted GitLab Runner with Docker installed and I have a true CI/CD solution: tagging master triggers an automatic build of the tagged code, publishes the Docker image built during that job, and deploys the image to a container (replacing the existing one if present). While this requires some thought about how to persist data across deployments, it makes automatic deployments very easy. In the event something goes wrong (and it will), you can re-run any previous build, making rollbacks a simple one-click affair. Repeatability for the win!

Artifacts

So, this week I was attempting to learn the Artifact system of the GitLab API. Instead of mounting a directory to pass files between jobs, GitLab has an Artifacts API that allows GitLab to store (permanently or temporarily) any number of artifacts defined in a successful pipeline execution. These artifacts are available via the web and the Artifacts API. I have several Go projects that could benefit from cross-compiling the binaries. Why not store these compilations so they are easy to grab whenever I need them for a specific environment? As an added benefit, after compiling once, I could deploy to various environments using these artifacts in downstream jobs. So, I jumped at this chance and found it less than ideal.

The Job

There is a Jobs API document defining how to create artifacts within a pipeline. It looks as simple as defining artifacts in your .gitlab-ci.yml file:

artifacts:
  paths:
    - dist/

Creating artifacts is the easiest bit I came across. This will automatically upload the contents of the dist folder to the orchestration service, which makes it available on the GitLab site for that specific job. Easy peasy!

Getting those artifacts to a downstream job is pretty easy as well if you keep in mind the concept of a build context and have navigated GitLab’s various API documents. Thankfully, I’ve done that boring part and will explain (with examples) how to get artifacts working in your GitLab project!

The Breakdown

The API documentation that shows how to upload artifacts also shows how to download artifacts. How convenient! Unfortunately, this is not within the framework of the .gitlab-ci.yml file. The only API documentation you should need for multi-job, single-pipeline artifact sharing is here: YAML API. For other uses (cross-pipeline or scripted artifact downloads) you can see the Jobs API (warning: cross-pipeline artifacts are a premium-only feature at the time of writing).

Looking at the Dependencies section, the dependencies definition should be used in conjunction with artifacts. Defining a dependent job causes ordered execution, and any artifacts from the dependent job will be downloaded and extracted into the current build context. Here's an example:

dependencies:
  - job-name

So, what’s a build context? It’s the thing that’s sent to Docker when a build is triggered. If the artifacts aren’t part of the build context, they won’t be available for a Dockerfile to access (COPY, etc).
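To make that concrete, here's a minimal sketch (all paths below are throwaway examples, not part of the real project): the Docker client bundles the context directory into a tar archive and ships it to the daemon, so anything outside that directory never arrives.

```shell
# Throwaway demo paths; not part of the real project layout.
mkdir -p /tmp/ctx-demo/dist
echo "fake-binary" > /tmp/ctx-demo/dist/alexa-bot-linux-amd64

# Roughly what `docker build /tmp/ctx-demo` does before any Dockerfile
# instruction runs: bundle the context directory and send it to the daemon.
tar -C /tmp/ctx-demo -cf /tmp/ctx-demo.tar .

# Only files inside the context made it into the archive, so only these
# are visible to COPY/ADD.
tar -tf /tmp/ctx-demo.tar
```

This is why the artifacts have to be extracted under $CI_PROJECT_DIR: a file sitting anywhere else on the runner simply never reaches the Docker daemon.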

Here's the .gitlab-ci.yml example:

stages:
  - build
  - release

build-binary:
  stage: build
  image: golang:latest
  variables:
    PROJECT_DIR: "/go/src/gitlab.com/c2technology"
  before_script:
    - mkdir -p ${PROJECT_DIR}
    - cp -r $CI_PROJECT_DIR ${PROJECT_DIR}/${CI_PROJECT_NAME}
    - go get github.com/tools/godep
    - go install github.com/tools/godep
    - cd ${PROJECT_DIR}/${CI_PROJECT_NAME}
    - godep restore ./...
  script:
    - ./crosscompile.sh
  after_script:
    - cp -r ${PROJECT_DIR}/${CI_PROJECT_NAME}/dist $CI_PROJECT_DIR
  tags:
    - docker
  artifacts:
    paths:
      - dist/
    expire_in: 2 hours

publish-image:
  stage: release
  image: docker:latest
  services:
    - docker:dind
  only:
    - "master"
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
  before_script:
    - "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY"
  script:
    - docker build -t ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME} --pull $CI_PROJECT_DIR
    - docker push ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME}
  after_script:
    - "docker logout ${CI_REGISTRY}"
  tags:
    - docker
  dependencies:
    - build-binary

And here is crosscompile.sh:

#!/bin/sh
echo "Cross compiling alexa-bot..."
for GOOS in darwin linux windows; do
  for GOARCH in 386 amd64; do
    echo "Building $GOOS-$GOARCH"
    export GOOS=$GOOS
    export GOARCH=$GOARCH
    go build -o dist/alexa-bot-$GOOS-$GOARCH
  done
done
echo "Complete!"

In this example, we cross-compile a Go binary and artifact it in the build-binary job before executing the publish-image job (which depends on build-binary). That job downloads and extracts the artifacts from build-binary and sends the project directory (including the downloaded artifacts) as the build context to Docker when building the Dockerfile. Let's look at the Dockerfile:

FROM alpine:latest
RUN apk add --no-cache --update ca-certificates
COPY dist/alexa-bot-linux-amd64 /bin/alexa-bot
RUN chmod +x /bin/alexa-bot
EXPOSE 443
# Run the binary with all the options (shell form, so the environment
# variables are expanded when the container starts)
ENTRYPOINT alexa-bot -c $CLIENT_ID -s $CLIENT_SECRET -x $SERVER_SECRET -t $PRODUCT_TYPE_ID -n 123 -v $VERSION -r $REDIRECT_URL -z $TOKEN_URL

You can see here that the Dockerfile starts with Alpine Linux as the base image, installs up-to-date CA certificates, copies the dist/alexa-bot-linux-amd64 binary from the Docker build context into the image, and gives it executable permissions. The rest of the file sets up the port the binary will listen on and passes the configuration to the image.

Once this Docker image is pushed to GitLab's private registry, it is available to run (given some environment configuration)!

Conclusion

All in all, this works out great for single-pipeline builds. When you get into multi-pipeline builds, things get trickier. I found that the artifacts system didn't quite meet my requirements, so I opted to condense builds and not store the compiled binaries as artifacts. I could, however, trigger a pipeline that cross-compiles and stores the binaries as artifacts, then run a second pipeline that also cross-compiles the binaries (duplicated work) before creating an image. Ultimately, I didn't really care about the artifacts, as my purpose was always to create the Docker images. As usual, your mileage may vary!

Hacktoberfest: Use Alexa without talking!

Hacktoberfest is upon us! This month I’m hacking on small projects each week and sharing them.

Backstory

When we sleep at night, we like to have some background white noise to help with sleeping. Since we have an Echo in our kitchen and an Echo Dot in our bedroom, we ask it to "play rain sounds," which starts a lovely little piece of audio that puts us right to sleep. Unfortunately, it does not loop, and after an hour it stops. The sudden silence sometimes wakes me up, and I can't restart it without talking to Alexa and potentially waking my wife, and nobody wants that! So I started researching how to talk to Alexa without talking to her, and I came across this article on the DolphinAttack. This got me thinking about a device that could leverage the vulnerability, but I quickly gave up, as it involved more hardware than I wanted and a lot of setup. I kept researching and came across this forum, which talked about a workaround! That seemed more promising and closer to what I wanted.

The Goal

My goal for this week's hackathon was to create a Slack bot for talking to Alexa. The idea is that the Slack app receives a text command, converts the text to a speech audio file, then sends it to the device I want to receive it. This would let me send a text message to Alexa in the middle of the night without ruining my wife's sleep!

The Backend

Thankfully, there is already an open source GitLab project for handling the audio file push and an article that shows how to use it! I started by manually proving the concept before moving forward. After this proof-of-concept seemed like it would work out, I started on a Dockerfile to set this baby up!

FROM alpine:latest
ARG CLIENT_ID
ARG CLIENT_SECRET
ARG PRODUCT_TYPE_ID
ENV CLIENT_ID $CLIENT_ID
ENV CLIENT_SECRET $CLIENT_SECRET
ENV PRODUCT_TYPE_ID $PRODUCT_TYPE_ID
RUN mkdir -pv /opt/alexa
WORKDIR /opt/alexa
COPY *.sh ./
RUN chmod +x *.sh
RUN apk add --update ca-certificates \
    espeak \
    curl
RUN (crontab -l ; echo "*/20 * * * * /opt/alexa/refresh_token.sh") | crontab -
# Run cron in the foreground so the container stays up and the entry fires
CMD ["crond", "-f"]

And a GitLab YAML file to auto-deploy it:

stages:
  - build
  - deploy

build-image:
  stage: build
  image: docker:latest
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
  services:
    - docker:dind
  only:
    - "master"
  before_script:
    - "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY"
  script:
    - "docker build -t ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME} -t ${CI_REGISTRY_IMAGE}:latest --pull ."
    - "docker push ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME}"
    - "docker push ${CI_REGISTRY_IMAGE}:latest"
  after_script:
    - "docker logout ${CI_REGISTRY}"
  tags:
    - docker

deploy-image:
  stage: deploy
  image: docker:latest
  only:
    - "master"
  variables:
    C_NAME: "alexa-bot"
  before_script:
    - "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY"
  script:
    - "docker pull ${CI_REGISTRY_IMAGE}:latest"
    - "docker container stop -t 0 ${C_NAME} || true"
    - "docker container rm ${C_NAME} || true"
    - "docker run -d -P --name ${C_NAME} -e CLIENT_ID=${CLIENT_ID} -e CLIENT_SECRET=${CLIENT_SECRET} -e PRODUCT_TYPE_ID=${PRODUCT_TYPE_ID} --restart always ${CI_REGISTRY_IMAGE}:latest"
  after_script:
    - "docker logout ${CI_REGISTRY}"
  tags:
    - deploy

TODO

Now that the image is published and the container is running, we need some sort of front end. My major concern at this point is security: we don't want random people accessing this API and submitting audio we can't hear to control our device. It'll probably be some client-server shared secret. I'll work on that part and make another post when it's finished. Hackathons are meant to be down and dirty for an MVP, and I at least have a container I can SSH into to issue commands, so that'll work for a little while to accomplish my goal. Ease of use via Slack would be the logical next step. Until next time!

Hacktoberfest: GitLab Runner

Hacktoberfest is upon us! This month I’m hacking on small projects each week and sharing them.

We Continue…

Last week, my son and I ventured out to start a Minecraft server as part of Hacktoberfest. This week we planned to automate that server a bit more. A few days after playing on the server, we restarted it to test things out. The server did come back up, but the world data was not saved and we had no backups. Thinking out loud, we brainstormed both why this happened and how we could prevent it in the future. We certainly don't want to build an awesome structure and then lose it when the server restarts! Backups were definitely on my mind, as were persisted, version-controlled configurations for the server (currently, restarting resets all the settings to the defaults). So we set about finding a backup solution.

Sounds Simple

We definitely wanted to automate backups. After reading a lot about how Minecraft saves files, we knew the backup was rather simple:

save-off
save-all
tar czf /opt/backups/$(date +%Y-%m-%d)-mc-${CLIENT}.tar.gz /opt/minecraft/
save-on

Sounds simple enough, right? Wrong! These commands are all sent to the Minecraft server from an RCON client. A quick search finds a lot of options! So far, so good. The Docker image we were using for Minecraft included an RCON client… but it didn't really do backups. At least, not the way I wanted it to. So, we decided to create our own Dockerfile for our Minecraft server!
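Setting the RCON plumbing aside, the archive step can be tried on its own. Here's a sketch with throwaway paths (the real script targets /opt/minecraft and /opt/backups, and CLIENT is whatever you call the server):

```shell
# Throwaway paths for illustration; the real backup targets /opt/minecraft.
CLIENT="survival"
mkdir -p /tmp/mc-demo/world /tmp/mc-backups
echo "level-data" > /tmp/mc-demo/world/level.dat

# Date-stamped archive, named like 2018-10-20-mc-survival.tar.gz
tar czf "/tmp/mc-backups/$(date +%Y-%m-%d)-mc-${CLIENT}.tar.gz" -C /tmp/mc-demo .

# Confirm the world data landed in the archive
tar -tzf /tmp/mc-backups/*-mc-${CLIENT}.tar.gz
```

The save-off/save-all pair around the real tar matters: it flushes pending chunks and stops the server from writing mid-archive, so the snapshot is consistent; save-on turns autosaving back on afterwards.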

DIY

After much searching, we found a very simple and very compact Minecraft RCON client written in Go! Why not? It has no dependencies on a VM or additional libraries, so it should be dead simple to include in our Minecraft server Dockerfile. Of course, as with many things that seem simple, it was not as simple as we originally thought. In our build automation, we included a stage called "setup" running on the latest Golang image, which simply runs a script to go get and go install the application. We used the artifacts instruction to share the compiled binary with the next stage, which copies it into our Minecraft server image. Now we have an RCON client!

We wanted backups to happen automatically, so we added a backup script. This script runs via cron to periodically back up the Minecraft server files and copy them to a backups directory. The idea is to mount a volume at this directory so it can be used by another container (the one that will ultimately move the backups to Dropbox). Once we got that set up, we ran the pipeline and…

It failed… The custom GitLab Runner image I had hobbled together a few months ago wasn't working as expected. It wouldn't pull the Golang image, so we couldn't get the Go binary. I had been experiencing many problems with this image and was sure I had probably made it wrong; I at least didn't make it in a repeatable fashion, and I definitely didn't have it running in a container. So, we decided to shift our focus to making a new GitLab Runner image the way I had intended: repeatable and preferably containerized. We spun up a new repository to build this container…

With automated deploys in mind, this should be relatively easy. We created a new .gitlab-ci.yml file and had the job run in a docker:latest container (i.e., Docker-in-Docker). All this job is going to do is pull the latest GitLab Runner image, configure it, then run it. Let's see how this works:

image: docker:latest
variables:
  DOCKER_DRIVER: "overlay2"
deploy-runner:
  script:
    - "docker pull gitlab/gitlab-runner:latest"
    - "docker run -d --name gitlab-runner \
      --restart always \
      -v ${GITLAB_VOLUME}:/etc/gitlab-runner \
      -v /var/run/docker.sock:/var/run/docker.sock \
      gitlab/gitlab-runner:latest"
  tags:
    - docker

We push and… wait a second… before we push, let's walk through this… We want GitLab to automatically deploy a GitLab Runner… This isn't repeatable. If I were to tear down the current GitLab Runner, this would never run! This can't be the solution… let's take a step back…

We know we want this automated. But the first time we run it, there won't be any automation at all! We can't use the YAML file; we must use a shell script (or some other script)! Let's take a look at what we need to do:

#!/bin/bash
# If docker is not installed, go get it
command -v docker >/dev/null 2>&1 || {
  curl -sSL https://get.docker.com/ | sh
}
# Set defaults
VERSION="latest"
function usage() {
  echo "This script automatically installs Docker and runs a GitLab Runner container."
  echo " ./run.sh -t [options]"
  echo " -t, --token : (Required) The GitLab Token of the container"
  echo " -n, --name : (Required) The name of the container"
  echo " --alpine : (Optional) Use the Alpine version of the gitlab-runner container"
  exit 1
}
#Parse command line options
while [[ $# -gt 0 ]]; do
  case $1 in
    -n | --name)
      NAME="$2"
      shift # past argument
      shift # past value
    ;;
    --alpine)
      VERSION="alpine"
      shift # past argument
    ;;
    -t | --token)
      TOKEN="$2"
      shift # past argument
      shift # past value
    ;;
    *)
      usage
    ;;
  esac
done
if [ -z ${TOKEN} ]; then
      echo "Token is required!"
      usage
fi
if [ -z ${NAME} ]; then
      echo "Name is required!"
      usage
fi
# Replace any existing runner container
docker stop -t 0 ${NAME} || true
docker rm ${NAME} || true
docker run -d --name ${NAME} \
  --restart always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:${VERSION}
#Register with GitLab
docker exec ${NAME} gitlab-runner register \
  --non-interactive \
  --executor "docker" \
  --docker-image alpine:latest \
  --url "https://gitlab.com/" \
  --registration-token "${TOKEN}" \
  --name "${NAME}" \
  --docker-privileged \
  --tag-list "docker" \
  --run-untagged \
  --locked="false"

There. Now we have a script that takes a container name and your GitLab Runner token, then spawns a new privileged Docker container. I decided not to mount a config volume, as it would continuously add new runner configs when the service restarted. There is a bit of cleanup to do in GitLab under the CI/CD runners section whenever a runner is restarted, but that doesn't seem like a big deal at the moment. Now that we have a new repeatable GitLab Runner up, we can try to get that Minecraft Dockerfile working next time!
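Since the option parsing is plain POSIX sh, it can be sanity-checked outside the script. Here's a standalone sketch that replays a fake command line through the same case statement (the name and token values are obviously placeholders):

```shell
# Standalone replay of run.sh's flag parsing, fed a fake command line.
set -- --name my-runner --token abc123 --alpine

VERSION="latest"
while [ $# -gt 0 ]; do
  case $1 in
    -n | --name)  NAME="$2";  shift 2 ;;
    -t | --token) TOKEN="$2"; shift 2 ;;
    --alpine)     VERSION="alpine"; shift ;;
    *) echo "unknown option: $1"; exit 1 ;;
  esac
done

echo "name=${NAME} version=${VERSION}"
```

With that input, the loop leaves NAME=my-runner, TOKEN=abc123, and VERSION=alpine, exactly what the docker run and register calls consume.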

Reset

Begin Anew

I recently wiped my hosting server, since I was closing down my computer repair business. It always bothered me that I was only hosting one site on a rather large server and couldn't do much with it without risking downtime (not that it really mattered given so little traffic). When I shut it down, I had a clean slate… why not try out hosting with Docker? Not knowing much about it, other than some basic commands, a general idea of what it is, and some experience at work, I dove in.

Containment

At work, I deploy a whole bunch of microservices, each in its own container, to a local Docker host on our development machines for testing and developing. There's a whole Continuous Integration/Continuous Delivery (CI/CD) pipeline that each GitHub repo is hooked into: it builds a Docker container configured to run the application, runs unit tests, then publishes the container to the private Docker Registry. The CI/CD system is triggered after this to actually deploy it to various environments. If any of these deployments fail, the deploy stops and the team is notified in Slack. Pretty sweet setup, since I don't have to do anything to actually deploy changes; just merging a branch to master kicks off an automatic deployment. They run some sort of cluster that provides fail-over and load balancing… but I don't quite think I need that… yet.

So, knowing what I already knew about Docker, I set out to deploy this website as a container! Being a lazy developer, I leveraged existing Docker images on the public Docker Registry. Since I am familiar with WordPress, and there is a Docker image with WordPress already set up, I went with that. For storage, WordPress needs a MySQL database, and a MySQL image also exists on the Docker Registry. I use those and we're all done here, right?

Configurations

Not so fast. We still need to actually set up the MySQL and WordPress containers to talk to each other. By default, each container runs in isolation; nothing is exposed, so nothing can access the container. This is great, since we're not quite ready for internet traffic. So I wrote a Dockerfile that builds on the vanilla WordPress container and links the MySQL container on a private network. Now that WordPress and MySQL can talk to each other, we still need to configure them at the application level. We're done with the containers, so it's time to hit the configs! MySQL setup is rather easy: log into the MySQL container, create a username and password, and you're done! For WordPress, there's a lovely little wizard the first time you hit your website that lets you set it up through a convenient user interface. After creating the username and password in MySQL, I hit the site and walked through the WordPress setup wizard. Once I finished the wizard, I was done setting up the website!
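My exact setup isn't shown here, but the same wiring can be sketched with a user-defined network and the stock images. Everything below (container names, credentials, the host port) is a placeholder, not my actual configuration:

```shell
# Placeholder names and credentials; a sketch of linking the stock
# wordpress and mysql images over a private Docker network.
docker network create wp-net

# Database container: only reachable from other containers on wp-net
docker run -d --name wp-db --network wp-net \
  -e MYSQL_ROOT_PASSWORD=change-me \
  -e MYSQL_DATABASE=wordpress \
  -e MYSQL_USER=wp \
  -e MYSQL_PASSWORD=also-change-me \
  mysql:5.7

# WordPress container: finds the database by container name on the network
docker run -d --name wp --network wp-net -p 8080:80 \
  -e WORDPRESS_DB_HOST=wp-db \
  -e WORDPRESS_DB_NAME=wordpress \
  -e WORDPRESS_DB_USER=wp \
  -e WORDPRESS_DB_PASSWORD=also-change-me \
  wordpress:latest
```

On a user-defined network, containers resolve each other by name, so WordPress reaches MySQL at the hostname wp-db while only port 8080 is exposed to the host, which is where the setup wizard appears.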

Wrap-up

In another post, I'll explain the CI/CD portion of how I made this repeatable. I did end up automating the MySQL setup so it auto-generates a username and password (of my choosing) when the Docker container is created. Additionally, I made the whole process repeatable and connected it to the GitLab CI/CD system to automate it. My #1 goal was to make this repeatable and automated, so if I ever need to upgrade my server or move hosts, I can regenerate the site in its current state with the push of a button (or at least a few simple steps).