Google is still good for some things

Friday, November 3, 2017

Everyone loves free things, hence the proliferation of services like GMail and the rest of the Google suite. The problem is, as my old boss used to say, “If you aren’t paying for the product, then you are the product”. Nothing in life is free, and Google has to make money somewhere. It’s no secret that I have a love/hate relationship with Google … I love their services but hate their continuous privacy violations and tracking behaviors (hence my move over to services like ProtonMail and self-hosting). However, this doesn’t make me turn a blind eye to them entirely. I’ve been watching the Google Cloud Platform (GCP) product suite for some time and have recently started playing around with it. It’s nice, compatible with all my tooling (Terraform, Vagrant, etc.), and in most cases cheaper than competing AWS services. Even better is their free trial which, unlike AWS’s, is much more robust: it comes with $300 in credit to use on their services for one year from sign-up, which makes it a lot less limited than the AWS “Free Tier”.

One way to take advantage of that credit is to use the Google Container Registry (GCR) for storage of private images. AWS ECR gives you 12 months free of 500 MB of container storage. While I agree that you should keep your containers as small as possible, 500 MB is very limiting. I recently wrote a Selenium script that queries a webpage, and putting it into a container came to around 300 MB after all the required libraries and such. By contrast, with Google’s offer of $300 in credits, storing 1 GB for a month only costs you around $0.02. You will also pay egress rates of $0.12 per GB, so our 1 GB private image, with storage and, say, five pulls, comes to roughly $0.02 + (5 × $0.12) = $0.62, still less than $1/month. Multiply this out, and suddenly you can store quite a bit in GCR for a year for free.

It took a little digging, but I was able to use CircleCI to build and push my images up to GCR. To do so, you first need to create a service account and then download the JSON credentials for that service account (this is done through the IAM web console). A little-known fact is that an environment variable created within the CircleCI UI can contain the entirety of a JSON file, so you can reference said variable (JSON and all) in the configuration file. For GCR the username is simply _json_key, with the password being the JSON credentials:


       - deploy:
           name: Push application Docker image
           command: |
             docker login -u _json_key -p "$JSON_AUTH" https://us.gcr.io
             docker push us.gcr.io/project-name/container-name

Although I’m using this to log into GCR to push my completed build, the same approach works if you are trying to pull down a private image for use in tests or builds on CircleCI.
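
If you would rather script that setup than click through the IAM console, gcloud can do the same thing. A quick sketch, with the service account name as a hypothetical placeholder and project-name matching the image path above (roles/storage.admin covers both push and pull, since GCR stores images in GCS buckets):


# Create a service account for CI to use (the name here is a placeholder)
gcloud iam service-accounts create circleci-gcr --display-name "CircleCI GCR access"

# Grant it access to the storage buckets backing GCR
gcloud projects add-iam-policy-binding project-name \
  --member serviceAccount:circleci-gcr@project-name.iam.gserviceaccount.com \
  --role roles/storage.admin

# Download the JSON key, then paste its contents into the CircleCI environment variable
gcloud iam service-accounts keys create credentials.json \
  --iam-account circleci-gcr@project-name.iam.gserviceaccount.com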

This same way of logging into GCR works on the console of the Docker hosts as well. In this case I needed to use Ansible to log into the GCR repository to pull the same image. I set a variable for my JSON file called gcloud_token_file so I don’t have to change the path in every task if it ever moves. Important here is the space between the opening quote and the first curly bracket: apparently without it Ansible tries to interpret the entire JSON as a dictionary, which is not what we want in this case:


  - name: Log into GCR
    docker_login:
      username: "_json_key"
      password: " {{lookup('file', gcloud_token_file)}}"
      registry: "us.gcr.io"
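
For reference, the Ansible task is doing nothing more than the login you would run by hand on the host console (the key path below is a placeholder for wherever gcloud_token_file points):


docker login -u _json_key -p "$(cat /path/to/credentials.json)" https://us.gcr.io
docker pull us.gcr.io/project-name/container-name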

Another way to take advantage of GCP is to use it for spinning up Vagrant boxes. Although definitely not as fast as running local images, this has some advantages. First is battery life. I’m mobile a lot these days, and being able to remotely spin up images to test with saves my system from having to work any harder than it has to. Although I get pretty good battery life to begin with, spinning up four images for testing cluster configurations will definitely kill it. Second is being able to spin up much larger machines for testing. It’s nice to be able to spin up a dedicated 4 CPU / 16 GB machine to test some beefy stacks I’m working on. That, and being able to spin up 3-4 of said boxes for testing cluster configurations, which wouldn’t be possible on my 16 GB laptop. Cost-wise this can be expensive, but the point of Vagrant is to spin machines up and down for testing, not really to keep them running. I’ve been doing this for about a month now and have only spent about $30 in instance costs.
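
Getting Vagrant talking to GCP means installing the vagrant-google plugin and its dummy box, per the plugin’s README:


vagrant plugin install vagrant-google
vagrant box add google/gce https://github.com/mitchellh/vagrant-google/raw/master/google.box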

It took a little reading to get Vagrant working with GCP and multiple machines; below is the Vagrantfile I am currently using:


Vagrant.configure("2") do |config|

  config.vm.box = "google/gce"

#  config.vm.synced_folder "/home/user/some_directory", "/home/vagrant/data"
#
#  config.vm.provision "shell", inline: <<-SHELL
#    yum install -y epel-release 
#    yum install -y vim wget python-pip git python34 python34-pip python34-requests firefox xorg-x11-server-Xvfb
#    wget -qO- https://get.docker.com/ | sh
#    systemctl start docker.service
#    pip install docker-compose
#    pip3 install selenium
#  SHELL

  config.vm.provision "ansible" do |ansible|
    ansible.groups = {
      "all" => ["default"]
    }
  
    ansible.playbook = "/home/user/git/ansible/site.yml"
    ansible.sudo = true
  end

  config.vm.define :master do |master|
    master.vm.provider :google do |google, override|
      google.google_project_id = ""
      google.google_client_email = ""
      google.google_json_key_location = "./credentials.json"
      google.machine_type = "n1-standard-2"
      google.disk_size = 20
      google.name = "master"
      google.image = "centos-7-v20171003"
     #google.image = "debian-9-stretch-v20170918"
     #google.image = "debian-8-jessie-v20170918"
     #google.image = "rhel-7-v20171002"

      override.ssh.username = "user"
      override.ssh.private_key_path = "/home/user/.ssh/id_rsa"

    end
  end


  config.vm.define :node1 do |node1|
    node1.vm.provider :google do |google, override|
      google.google_project_id = ""
      google.google_client_email = ""
      google.google_json_key_location = "./credentials.json"
      google.machine_type = "n1-standard-2"
      google.disk_size = 20
      google.name = "node1"
      google.image = "centos-7-v20171003"
     #google.image = "debian-9-stretch-v20170918"
     #google.image = "debian-8-jessie-v20170918"
     #google.image = "rhel-7-v20171002"

      override.ssh.username = "user"
      override.ssh.private_key_path = "/home/user/.ssh/id_rsa"

    end
  end

end
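
From there it is standard Vagrant usage, with the provider flag telling Vagrant to build in GCP instead of locally:


vagrant up --provider=google    # spins up both master and node1
vagrant ssh master              # work on the boxes as usual
vagrant destroy -f              # tear everything down when finished so the billing stops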
