New Job, New Infrastructure

Thursday, May 11, 2017

I started a new job a few months ago as a full-time DevOps engineer for a non-profit. After doing the large corporate thing for a while, I grew tired of the bureaucracy and the (frankly) bullshit, and decided it was time to move back to a smaller team that was more focused on the technology and on providing a good service than on ROI for stockholders. So here I am; it’s been a few months and I couldn’t be happier! I am about to embark on a very large infrastructure project for this new company, and figured it was time to also update my personal blog so I could chronicle what I learn in the process. Well, updating for me really means starting fresh: all new hosting, all new content.

Way back, like most people, I started off on WordPress. It was good for a while and offered some cool customizations and plugins, but in the end it was WAY too much. I tired of having a full backend, and the constant security updates and issues were daunting to keep up with. From there I went smaller, opting to run Ghost. It was light, it was fast, and it was easier to run. I also very much liked that everything was written in Markdown, which was easy to learn and carried over to my everyday note-taking as well. I spun up an infrastructure I was happy with: Nginx with Let’s Encrypt, proxying connections over the Docker overlay network to Ghost, both running happily in Docker containers. This was good for a while too, but now I want to go even lighter and smaller: enter Hugo.

Hugo is described as “A Fast & Modern Static Website Engine”. What this means is that I write Markdown in Vim, and Hugo then translates (or compiles, if you will) my website into static HTML files. The result is fast, static content that can be served almost anywhere. Although you can easily serve content from S3 or GitHub Pages, being an infrastructure guy at heart, I am using my own infrastructure. I have opted to use Caddy as my webserver for this new venture. Caddy is fast, speaks HTTP/2, and has built-in support for Let’s Encrypt. Combine all this with Docker (because I love containers) and you get one portable, light, and fast infrastructure!
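The Caddy config for a setup like this is tiny. Here’s a minimal sketch of what mine looks like (the domain, email, and paths here are stand-ins, not my actual values):

```
example.com {
    root /srv/www           # Hugo's generated public/ directory
    gzip                    # compress responses
    tls user@example.com    # email for Let's Encrypt registration
}
```

That `tls` line is all it takes for Caddy to obtain and renew certificates on its own; no certbot cron jobs, no manual renewals.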

So how is it actually deployed?

Well, generally speaking the deployment would be pretty simple: build a container, push the container, start the container on the host, and done. But, much like everything I do at work, I wanted it to be as automated as possible. I’m lazy; I don’t want to have to push containers, pull containers, stop containers, restart, blah blah blah. So instead, I decided to spin up a local Jenkins instance to do most of the heavy lifting for me. I fired up a VM in libvirt, provisioned it with Ansible (including installing both Docker and Jenkins), and then started working on my pipeline.
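The provisioning playbook isn’t anything fancy. A stripped-down sketch of it might look like this (task names and the Debian-style package name are assumptions, and the Jenkins repo setup is omitted):

```yaml
# tasks from a hypothetical jenkins.yml play, run against the new VM
- name: Install Docker
  apt:
    name: docker.io
    state: present
    update_cache: yes

- name: Ensure Docker and Jenkins are running and enabled
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  with_items:
    - docker
    - jenkins
```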

The pipeline itself is pretty simple, and honestly not even really a pipeline. I’m using the Docker Build and Push plugin to build the container and push it up to my repository once a change hits the GitLab repository. Then I have a Pipeline action that triggers once that build completes. It connects to the remote instance and runs a very simple shell script that stops and removes the current container, pulls down the newest image, and runs the new container with the same settings.

Here is the Pipeline action:

echo 'Deploying to server'
node {
  sshagent (credentials: ['d3b8f44f-2c3d-41a7-a6a2-2d06dbe545fb']) {
    sh 'ssh -o StrictHostKeyChecking=no -l user /bin/bash /home/user/'
  }
}

And the (very simple) shell script it runs:


docker stop binaryweb
docker rm binaryweb
docker pull hub/container
docker run -d --name binaryweb -v /home/user/letsencrypt:/root/.caddy -p 80:80 -p 443:443 hub/container
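For completeness, the post doesn’t show how the image itself is built, but a minimal Dockerfile might look something like this (abiosoft/caddy was a popular community Caddy image at the time; the base image and paths here are assumptions, not my actual build):

```
# hypothetical Dockerfile: Caddy serving Hugo's static output
FROM abiosoft/caddy:latest

# Caddy config (root, gzip, tls, etc.)
COPY Caddyfile /etc/Caddyfile

# the static site Hugo generated into public/
COPY public/ /srv/www/
```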

I know there are better ways to do this, but it works for now. I’ll probably modify it later on when I get time.

All in all I’m happy with this new infrastructure and technology. It’s easy to manage and can be run pretty much anywhere.
