Hacktoberfest: Use Alexa without talking!

Hacktoberfest is upon us! This month I’m hacking on small projects each week and sharing them.

Backstory

When we sleep at night, we like to have some background white noise to help with sleeping. Since we have an Echo in our kitchen and an Echo Dot in our bedroom, we ask Alexa to “play rain sounds,” which starts a lovely little piece of audio that puts us right to sleep. Unfortunately, it does not loop, and after an hour it stops. The sudden silence sometimes wakes me up, and I can’t restart it without talking to Alexa and potentially waking up my wife, and nobody wants that! So, I started researching how to talk to Alexa without actually speaking, and I came across this article on the DolphinAttack. That got me thinking about a device that could leverage the vulnerability, but I quickly gave up because it involved more hardware and setup than I wanted. I kept researching and came across this forum post describing a workaround, which seemed more promising and much closer to what I wanted.

The Goal

My goal for this week’s hackathon was to create a Slack bot for talking to Alexa. The idea: the Slack app receives a text command, converts the text to a speech audio file, then pushes that audio to the Echo device of my choosing. This would let me send a text message to Alexa in the middle of the night without ruining my wife’s sleep!
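As a quick, hedged sketch of the text-to-speech step: espeak (the same TTS tool the Dockerfile below installs) can turn a text command into a WAV file. The phrase and output path here are arbitrary examples, not part of the real bot.

```shell
#!/bin/sh
# Sketch: turn a text command into a speech WAV with espeak.
# The phrase and output path are illustrative placeholders.
TEXT="play rain sounds"
OUT="/tmp/command.wav"
if command -v espeak >/dev/null 2>&1; then
  printf '%s' "${TEXT}" | espeak --stdin -w "${OUT}"
  echo "wrote ${OUT}"
else
  echo "espeak not installed; skipping"
fi
```

From there, the resulting audio file is what gets handed to the push-to-device step.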

The Backend

Thankfully, there is already an open source GitLab project for handling the audio file push and an article that shows how to use it! I started by manually proving the concept before moving forward. After this proof-of-concept seemed like it would work out, I started on a Dockerfile to set this baby up!

FROM alpine:latest
ARG CLIENT_ID
ARG CLIENT_SECRET
ARG PRODUCT_TYPE_ID
ENV CLIENT_ID $CLIENT_ID
ENV CLIENT_SECRET $CLIENT_SECRET
ENV PRODUCT_TYPE_ID $PRODUCT_TYPE_ID
RUN mkdir -pv /opt/alexa
WORKDIR /opt/alexa
COPY *.sh ./
RUN chmod +x *.sh
RUN apk add --update ca-certificates \
    espeak \
    curl
RUN (crontab -l ; echo "*/20 * * * * /opt/alexa/refresh_token.sh") | crontab -

And a GitLab CI YAML file to auto-deploy it:

stages:
  - build
  - deploy
build-image:
  stage: build
  image: docker:latest
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
  services:
    - docker:dind
  only:
    - "master"
  before_script:
    - "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY"
  script:
    - "docker build -t ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME} -t ${CI_REGISTRY_IMAGE}:latest --pull ."
    - "docker push ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME}"
    - "docker push ${CI_REGISTRY_IMAGE}:latest"
  after_script:
    - "docker logout ${CI_REGISTRY}"
  tags:
    - docker
deploy-image:
  stage: deploy
  image: docker:latest
  only:
    - "master"
  variables:
    C_NAME: "alexa-bot"
  before_script:
    - "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY"
  script:
    - "docker pull ${CI_REGISTRY_IMAGE}:latest"
    - "docker container stop -t 0 ${C_NAME} || true"
    - "docker container rm ${C_NAME} || true"
    - "docker run -d -P --name ${C_NAME} -e CLIENT_ID=${CLIENT_ID} -e CLIENT_SECRET=${CLIENT_SECRET} -e PRODUCT_TYPE_ID=${PRODUCT_TYPE_ID} --restart always ${CI_REGISTRY_IMAGE}:latest"
  after_script:
    - "docker logout ${CI_REGISTRY}"
  tags:
    - deploy

TODO

Now that the image is published and the container is running, we need to set up some sort of front end. My major concern at this point is security, and I still need to figure that part out. We don’t want random people accessing this API and submitting audio we can’t hear to control our device. It’ll probably be some client-server shared secret; I’ll work through that part and make another post when it’s finished. Hackathons are meant to be down and dirty for an MVP, and I at least have a container I can SSH into to issue commands, so that’ll accomplish my goal for a little while. Ease of use via Slack is the logical next step. Until next time!
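For flavor, here is one hypothetical shape the shared-secret idea could take (not a decided design, and the secret and command below are placeholders): the client signs each command with an HMAC, and the server recomputes the signature before accepting it.

```shell
#!/bin/sh
# Hypothetical: sign each command with an HMAC over a shared secret.
# A server holding the same secret recomputes and compares signatures.
SECRET="example-shared-secret"   # placeholder, not a real credential
CMD="play rain sounds"
SIG=$(printf '%s' "${CMD}" | openssl dgst -sha256 -hmac "${SECRET}" | awk '{print $NF}')
echo "${CMD} -> ${SIG}"
```

Anyone without the secret can still see the API, but can’t produce a valid signature for a new command.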

Hacktoberfest: GitLab Runner

Hacktoberfest is upon us! This month I’m hacking on small projects each week and sharing them.

We Continue…

Last week, my son and I stood up a Minecraft server as part of Hacktoberfest. This week we planned to automate that server a bit more. A few days after playing on the server, we restarted it to test things out. The server came back up, but the world data was not saved and we had no backups. Thinking out loud, we brainstormed both why this happened and how we could prevent it in the future. We certainly don’t want to build an awesome structure and then lose it when the server restarts! Backups were definitely in the back of my mind, as were persisted, version-controlled configurations for the server (currently, restarting resets all the settings to the defaults). So we set about finding a backup solution.

Sounds Simple

We definitely wanted to automate backups. After reading a lot about how Minecraft saves files, we knew the backup was rather simple:

save-off
save-all
tar czf /opt/backups/$(date +%Y-%m-%d)-mc-${CLIENT}.tar.gz /opt/minecraft/
save-on

Sounds simple enough, right? Wrong! The save commands are sent to the Minecraft server through an RCON client (the tar itself runs alongside the server). A quick search finds a lot of options! So far, so good. The Docker image we were using for Minecraft even included an RCON client… but it didn’t really do backups. At least, not the way I wanted. So, we decided to create our own Dockerfile for our Minecraft server!
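To make the flow concrete, here is a self-contained sketch of the backup sequence. The `rcon` client is stubbed out so the sketch runs anywhere, and the server name and paths are placeholders.

```shell
#!/bin/sh
# Sketch of the backup flow; rcon is stubbed so this runs anywhere.
rcon() { echo "rcon> $*"; }       # stand-in for a real RCON client
CLIENT="family"                   # placeholder server name
SRC="/tmp/minecraft-demo"         # placeholder world directory
mkdir -p "${SRC}" && echo "demo" > "${SRC}/level.dat"
ARCHIVE="/tmp/$(date +%Y-%m-%d)-mc-${CLIENT}.tar.gz"
rcon save-off                     # pause autosaving during the copy
rcon save-all                     # flush world data to disk
tar czf "${ARCHIVE}" -C "$(dirname "${SRC}")" "$(basename "${SRC}")"
rcon save-on                      # resume autosaving
echo "backup at ${ARCHIVE}"
```

Swapping the stub for a real RCON client and pointing `SRC` at the server directory gives the four-command sequence above.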

DIY

After much searching, we found a very simple and very compact Minecraft RCON client written in Go! Why not? There are no dependencies on a VM or additional libraries, so it should be dead simple to include in our Minecraft server Dockerfile. Of course, like many things that seem simple, it was not as simple as we originally thought. In our build automation, we included a stage called “setup” running on the latest Golang image, which runs a script to go get and go install the Go application. We used the artifacts instruction to share the compiled binary with the next stage, which copies it into our Minecraft server image. Now we have an RCON client!
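The two-stage pattern looks roughly like this; the module path below is a placeholder, not the client we actually used:

```yaml
stages:
  - setup
  - build
compile-rcon:
  stage: setup
  image: golang:latest
  script:
    # placeholder module path; substitute the RCON client of your choice
    - go install example.com/rcon-client@latest
    - cp /go/bin/rcon-client .
  artifacts:
    paths:
      - rcon-client   # handed to the next stage's docker build
```

The `artifacts` block is what carries the compiled binary from the `setup` stage into the job that builds the server image.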

We wanted backups to happen automatically, so we added a backup script that cron runs periodically to back up the Minecraft server files into a backups directory. The idea is to mount a volume at this directory so another container (the one that will ultimately move the archives to Dropbox) can use it. Once we got that set up, we ran the pipeline and…

It failed… The custom GitLab Runner image I had cobbled together a few months ago wasn’t working as expected. It wouldn’t pull the Golang image, so we couldn’t build the Go binary. I had been having many problems with this image and was sure I had made it wrong; I certainly hadn’t made it in a repeatable fashion, and I definitely didn’t have it running in a container. So, we shifted our focus to making a new GitLab Runner image the way I had intended: repeatable and, preferably, containerized. We spun up a new repository to build this container…

With automated deploys in mind, this should be relatively easy. We created a new .gitlab-ci.yml file and had the job run in a docker:latest container (i.e., Docker-in-Docker). All this job is going to do is pull the latest GitLab Runner image, configure it, then run it. Let’s see how this works:

deploy-runner:
  image: docker:latest
  variables:
    DOCKER_DRIVER: "overlay2"
  script:
    - "docker pull gitlab/gitlab-runner:latest"
    - "docker run -d --name gitlab-runner \
      --restart always \
      -v ${GITLAB_VOLUME}:/etc/gitlab-runner \
      -v /var/run/docker.sock:/var/run/docker.sock \
      gitlab/gitlab-runner:latest"
  tags:
    - docker

We push and… wait a second… before we push, let’s walk through this… We want GitLab to automatically deploy a GitLab Runner… This isn’t repeatable. If I were to tear down the current GitLab Runner, this job would never run! This can’t be the solution… let’s take a step back…

We know we want this automated. The first time we run it, there won’t be any automation at all! We can’t use the YAML file! We must use a shell script (or some other script)! Let’s take a look at what we need to do:

#!/bin/bash
# If docker is not installed, go get it
command -v docker >/dev/null 2>&1 || {
  curl -sSL https://get.docker.com/ | sh
}
# Set defaults
VERSION="latest"
function usage() {
  echo "This script automatically installs Docker and runs a GitLab Runner container."
  echo " ./run.sh -t [options]"
  echo " -t, --token : (Required) The GitLab Token of the container"
  echo " -n, --name : (Required) The name of the container"
  echo " --alpine : (Optional) Use the Alpine version of the gitlab-runner container"
  exit 1
}
#Parse command line options
while [[ $# -gt 0 ]]; do
  case $1 in
    -n | --name)
      NAME="$2"
      shift # past argument
      shift # past value
    ;;
    --alpine)
      VERSION="alpine"
      shift # past argument
    ;;
    -t | --token)
      TOKEN="$2"
      shift # past argument
      shift # past value
    ;;
    *)
      usage
    ;;
  esac
done
if [ -z "${TOKEN}" ]; then
      echo "Token is required!"
      usage
fi
if [ -z "${NAME}" ]; then
      echo "Name is required!"
      usage
fi
# Replace any existing container with this name
docker stop -t 0 ${NAME} || true
docker rm ${NAME} || true
docker run -d --name ${NAME} \
  --restart always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:${VERSION}
#Register with GitLab
docker exec ${NAME} gitlab-runner register \
  --non-interactive \
  --executor "docker" \
  --docker-image alpine:latest \
  --url "https://gitlab.com/" \
  --registration-token "${TOKEN}" \
  --name "${NAME}" \
  --docker-privileged \
  --tag-list "docker" \
  --run-untagged \
  --locked="false"

There. Now we have a script that takes a container name and your GitLab registration token, then spawns a new privileged runner container. I decided not to mount a config volume, as it would keep adding new runner configs every time the service restarted. There is a bit of cleanup to do in GitLab under the CI/CD runners section whenever a runner is recreated, but that’s not a big deal at the moment. Now that we have a new, repeatable GitLab Runner up, we can try to get that Minecraft Dockerfile working next time!
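The flag loop in the script is the standard shift-based pattern; here is a compact, self-contained version of the same idea that can be exercised directly (the flag values are just examples):

```shell
#!/bin/sh
# Same shift-based flag-parsing pattern as run.sh, wrapped in a function.
parse() {
  VERSION="latest"
  while [ $# -gt 0 ]; do
    case $1 in
      -n|--name)  NAME="$2";  shift 2 ;;
      -t|--token) TOKEN="$2"; shift 2 ;;
      --alpine)   VERSION="alpine"; shift ;;
      *) echo "unknown flag: $1" >&2; return 1 ;;
    esac
  done
}
parse --token abc123 --name runner-01 --alpine
echo "${NAME} ${TOKEN} ${VERSION}"
```

Running it prints `runner-01 abc123 alpine`: each two-argument flag consumes its value with `shift 2`, while bare flags shift once.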

Hacktoberfest!

Hacktoberfest is upon us! This month I’m hacking on small projects each week and sharing them.

Minecraft

This week my son and I hacked together an auto-deployed, Docker-based Minecraft server that is automatically backed up and uses Git for managing server configurations. We haven’t finished the auto-backup to Dropbox portion yet, but that’s something we can always work on later! Here’s what we did…

Continuous Integration & Continuous Deployment

I knew I wanted this server to be a “set it and forget it” type of server. If anything went wrong, restarting it would fix it without losing world data. With this in mind, my first thought was using GitLab’s CI/CD process and a .gitlab-ci.yml file. Looking through the Docker registry, I found a Minecraft server image that appears to stay up to date: itzg/minecraft-server. We simply used that and mounted a few volumes for version-controlled configs and the world data directory. The .gitlab-ci.yml file is a lot to take in if this is your first time seeing this type of file, so I’ll walk through each line and explain what it does. Here it is:

cache:
  key: "$CI_REGISTRY_IMAGE-$CI_COMMIT_SHA"
  untracked: true
stages:
  - deploy
deploy-prod:
  stage: deploy
  image: docker:latest
  only:
    - "master"
  variables:
    DOCKER_DRIVER: "overlay2"
  script:
    - docker pull itzg/minecraft-server:latest
    - docker stop minecraft || true
    - docker rm minecraft || true
    - docker run -d --name minecraft -p 25565:25565 -v minecraft-world:/data/world -v minecraft-config:/config -v minecraft-mods:/mods -v minecraft-plugins:/plugins -e EULA=TRUE --restart always  itzg/minecraft-server
    - docker cp ./config/* minecraft:/config/ || true
    - docker cp ./data/* minecraft:/data/ || true
    - docker cp ./mods/* minecraft:/mods/ || true
    - docker cp ./plugins/* minecraft:/plugins/ || true
    - docker restart minecraft || true
    - docker pull janeczku/dropbox:latest
    - docker stop -t 0 minecraft-backup || true
    - docker rm minecraft-backup || true
    - docker run -d --restart=always --name=minecraft-backup -v minecraft-world:/dbox/Dropbox/minecraft-server/data -v minecraft-config:/dbox/Dropbox/minecraft-server/config -v minecraft-mods:/dbox/Dropbox/minecraft-server/mods -v minecraft-plugins:/dbox/Dropbox/minecraft-server/plugins janeczku/dropbox
  tags:
    - docker

Devil is in the Details

cache:
  key: "$CI_REGISTRY_IMAGE-$CI_COMMIT_SHA"
  untracked: true

Lines 1-3 configure GitLab’s job caching. The job’s untracked files are cached under the key on line 2, which is useful for downstream builds. I copied this from another project (Hacktoberfest!), so I’m not sure it’s needed here since no other job uses this cache.

stages:
  - deploy

Line 4 defines the stages used during this pipeline, and line 5 is that stage. We define only the “deploy” stage since we aren’t performing any testing or other delivery stages. Defining it is still useful for organization within this file, and it lets you introduce more pipeline stages later. Again, this may not be necessary since there’s only one stage; I copied it from another project to reduce the time spent recreating it, so… Hacktoberfest!

deploy-prod:
  stage: deploy
  image: docker:latest
  only:
    - "master"
  variables:
    DOCKER_DRIVER: "overlay2"
  script:
    - docker pull itzg/minecraft-server:latest
    - docker stop minecraft || true
    - docker rm minecraft || true
    - docker run -d --name minecraft -p 25565:25565 -v minecraft-world:/data/world -v minecraft-config:/config -v minecraft-mods:/mods -v minecraft-plugins:/plugins -e EULA=TRUE --restart always  itzg/minecraft-server
    - docker cp ./config/* minecraft:/config/ || true
    - docker cp ./data/* minecraft:/data/ || true
    - docker cp ./mods/* minecraft:/mods/ || true
    - docker cp ./plugins/* minecraft:/plugins/ || true
    - docker restart minecraft || true
    - docker pull janeczku/dropbox:latest
    - docker stop -t 0 minecraft-backup || true
    - docker rm minecraft-backup || true
    - docker run -d --restart=always --name=minecraft-backup -v minecraft-world:/dbox/Dropbox/minecraft-server/data -v minecraft-config:/dbox/Dropbox/minecraft-server/config -v minecraft-mods:/dbox/Dropbox/minecraft-server/mods -v minecraft-plugins:/dbox/Dropbox/minecraft-server/plugins janeczku/dropbox

Line 6 defines a new job called deploy-prod that runs during the deploy stage we just defined and only for the master branch (lines 9 and 10). It spins up a new container using the latest docker image (line 8) from the Docker registry. Line 11 defines environment variables available to the container, and line 12 sets DOCKER_DRIVER to overlay2, which is supposed to be more efficient. Again, this was copied from another project and I haven’t had any problems, so I leave it alone. Lines 13-26 are the meat and potatoes: the script section does the heavy lifting.

  script:
    - docker pull itzg/minecraft-server:latest
    - docker stop minecraft || true
    - docker rm minecraft || true
    - docker run -d --name minecraft -p 25565:25565 -v minecraft-world:/data/world -v minecraft-config:/config -v minecraft-mods:/mods -v minecraft-plugins:/plugins -e EULA=TRUE --restart always  itzg/minecraft-server
    - docker cp ./config/* minecraft:/config/ || true
    - docker cp ./data/* minecraft:/data/ || true
    - docker cp ./mods/* minecraft:/mods/ || true
    - docker cp ./plugins/* minecraft:/plugins/ || true
    - docker restart minecraft || true
    - docker pull janeczku/dropbox:latest
    - docker stop -t 0 minecraft-backup || true
    - docker rm minecraft-backup || true
    - docker run -d --restart=always --name=minecraft-backup -v minecraft-world:/dbox/Dropbox/minecraft-server/data -v minecraft-config:/dbox/Dropbox/minecraft-server/config -v minecraft-mods:/dbox/Dropbox/minecraft-server/mods -v minecraft-plugins:/dbox/Dropbox/minecraft-server/plugins janeczku/dropbox

Line 13 defines what will run on the GitLab Runner; this is why we use the latest docker image on line 8. The commands on lines 14 through 26 leverage Docker on the GitLab Runner to manipulate containers. We start on line 14 by pulling the itzg/minecraft-server image (in case the Minecraft server has released an update). This updates the image Docker can use but doesn’t touch the running container. After pulling the latest image, we stop (line 15) and remove (line 16) the currently running Minecraft server container. The || true guarantees the command won’t return an error (which would fail the job) if the container isn’t running or doesn’t exist. Line 15 doesn’t force a shutdown timeout, so the server gets time to clean up and save the world before going down, which helps prevent data corruption. Line 16 removes the stopped container so we can re-deploy with the latest version.
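The `|| true` trick deserves a tiny demonstration of its own, since CI scripts typically abort on the first failing command:

```shell
#!/bin/sh
set -e        # abort on any failing command, like a CI script step
false || true # the failure is masked, so execution continues
echo "pipeline continues"
```

Without the `|| true`, the `false` would terminate the script under `set -e`; with it, the line always reports success.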

    - docker run -d --name minecraft -p 25565:25565 -v minecraft-world:/data/world -v minecraft-config:/config -v minecraft-mods:/mods -v minecraft-plugins:/plugins -e EULA=TRUE --restart always  itzg/minecraft-server

Line 17 does a lot. It runs a container named minecraft (--name minecraft) as a daemon (-d) and binds host port 25565 to container port 25565 (-p 25565:25565). Since Minecraft clients look for this specific port, I opted to bind the publicly exposed port to the same container port. We then mount several Docker volumes with the -v flag. I like to think of a Docker volume as a flash drive: a storage device you can plug into containers, except you can plug it into multiple containers simultaneously and mount it at a specific folder. Here, we mount the minecraft-world volume at the /data/world container directory. We also mount minecraft-config at /config, minecraft-mods at /mods, and minecraft-plugins at /plugins. With the mounts in place, we set the container’s EULA environment variable to true, tell the container to always restart if it goes down, and finally tell Docker which image to use. This runs the actual container, and the whole server spins up, performing its startup tasks before going live for connections!

    - docker cp ./config/* minecraft:/config/ || true
    - docker cp ./data/* minecraft:/data/ || true
    - docker cp ./mods/* minecraft:/mods/ || true
    - docker cp ./plugins/* minecraft:/plugins/ || true
    - docker restart minecraft || true

Once the container is running, we copy any version-controlled files from the checked-out repository into the mounted volumes on the Minecraft server container (lines 18-21). This lets us version control configurations, plugins, and mods and auto-deploy them to the server. Once these are in place, we restart the container one more time (line 22) for them to take effect. We copy the files this way so we don’t have to worry about whether this is the first time the container has run; if we tried to populate the volumes before starting the container, they wouldn’t exist on the first run and it would take two deployments for changes to take effect.

    - docker pull janeczku/dropbox:latest
    - docker stop -t 0 minecraft-backup || true
    - docker rm minecraft-backup || true
    - docker run -d --restart=always --name=minecraft-backup -v minecraft-world:/dbox/Dropbox/minecraft-server/data -v minecraft-config:/dbox/Dropbox/minecraft-server/config -v minecraft-mods:/dbox/Dropbox/minecraft-server/mods -v minecraft-plugins:/dbox/Dropbox/minecraft-server/plugins janeczku/dropbox

Lines 23-26 are backup related. Line 23 updates the local version of the janeczku/dropbox:latest image. Lines 24 and 25 should look familiar. These stop and remove the existing container while guaranteeing success if the container is already stopped or doesn’t exist. Line 26 should also look familiar. Here, we start another container as a daemon (-d) that restarts whenever it stops (--restart=always) named minecraft-backup (--name=minecraft-backup) with a few volumes mounted to the container. These volume mounts should also look familiar! We mount the same volumes here as we do the Minecraft server so this container can periodically back up the contents of the volumes to Dropbox. We are still troubleshooting why this isn’t quite working and hope to have this resolved next time.

  tags:
    - docker

Finally, lines 27 and 28 tell GitLab which GitLab Runner to run this job on. Each GitLab Runner may have tags that jobs can use to target specific runners, and this particular job requires a Docker-enabled one. The tag doesn’t guarantee such a runner exists, but in my case I have already set up a Docker-enabled GitLab Runner, and adding this tag forces jobs to use it.

That’s it! Now we have a Minecraft server! Even without the Dropbox backups, this project was fun. It was an awesome moment when my son wanted to stand up a Minecraft server and actually stayed interested through the build. He got to see behind the scenes how the game is set up; I don’t think he fully understands how it all works yet, but maybe one day!

We will continue tinkering with this script as we add configuration files and get the Dropbox backups working. I will be open sourcing this when it’s complete. Until then, feel free to use this as a template for your next Minecraft server; I will update this post as the files evolve.

Reset

Begin Anew

I recently wiped my hosting server since I was closing down my computer repair business. It always bothered me that I was hosting only one site on a rather large server and couldn’t do much with it without risking downtime (not that it really mattered given so little traffic). Shutting it down left me a clean slate… why not try hosting with Docker? Not knowing much about it beyond some basic commands, a general idea of what it is, and some experience at work, I dove in.

Containment

At work, I deploy a whole bunch of microservices, each in its own container, to a local Docker host on our development machines for testing and developing. There’s a whole Continuous Integration/Continuous Delivery (CI/CD) pipeline each GitHub repo is hooked into: it builds a Docker container configured to run the application, then runs unit tests before publishing the container to a private Docker registry. The CI/CD system then deploys it to various environments. If any deployment fails, the deploy stops and the team is notified in Slack. Pretty sweet setup, since I don’t have to do anything to actually deploy changes; merging a branch to master kicks off an automatic deployment. They run some sort of cluster that provides fail-over and load balancing… but I don’t think I need that… yet. So, knowing what I already knew about Docker, I set out to deploy this website as a container! Being a lazy developer, I leveraged existing images on the public Docker registry. Since I am familiar with WordPress, and there is a Docker image with WordPress already set up, I went with that. For storage, WordPress needs a MySQL database, which also exists on the Docker registry. Use those and we’re all done here, right?

Configurations

Not so fast. We still need to set up the MySQL and WordPress containers to talk to each other. By default, each container runs in isolation; nothing is exposed, so nothing can access the container. That’s great, since we’re not quite ready for internet traffic. So I wrote a Dockerfile that builds on the vanilla WordPress image and links the MySQL container over a private network. Now that WordPress and MySQL can talk to each other, we still need to configure them at the application level. We’re done with the containers, so it’s time to hit the configs! MySQL setup is rather easy: log into the MySQL container, create a username and password, and you’re done! For WordPress, there’s a lovely little wizard the first time you hit your website that lets you set everything up through a convenient user interface. After creating the username and password in MySQL, I hit the site, walked through the WordPress setup wizard, and the website was up!
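A rough sketch of the wiring, assuming the stock wordpress and mysql images with placeholder credentials (my actual Dockerfile differed): on a user-defined Docker network, the containers resolve each other by name, so WordPress can reach the database at the hostname wp-db.

```shell
#!/bin/sh
# Hedged sketch: wire WordPress to MySQL over a private Docker network.
# Image tags and passwords are placeholders. Skips if Docker is absent.
if ! command -v docker >/dev/null 2>&1; then
  echo "docker not installed; skipping"
  exit 0
fi
docker network create wp-net 2>/dev/null || true
docker run -d --name wp-db --network wp-net \
  -e MYSQL_ROOT_PASSWORD=changeme -e MYSQL_DATABASE=wordpress \
  mysql:5.7 || true
docker run -d --name wp --network wp-net -p 8080:80 \
  -e WORDPRESS_DB_HOST=wp-db -e WORDPRESS_DB_USER=root \
  -e WORDPRESS_DB_PASSWORD=changeme \
  wordpress:latest || true
```

Because neither container publishes the database port, only WordPress (on the same network) can reach MySQL; only port 8080 faces the host.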

Wrap-up

In another post, I’ll explain how I made all of this repeatable through CI/CD. I ended up automating the MySQL setup so it auto-generates a username and password (of my choosing) when the Docker container is created. I then connected the whole process to the GitLab CI/CD system to make it automated. My #1 goal was repeatability and automation, so if I ever need to upgrade my server or move hosts, I can regenerate the site in its current state with the push of a button (or at least a few simple steps).