Using Ansible Vault with environment variables

This is a common pattern. You’ve been using Ansible to provision your infrastructure for some time and, all of a sudden, you have a couple of secrets to manage: usually SSL/SSH private keys, API credentials, passwords, etc. Because you don’t want these secrets stored “in the clear” in your git repository, you declare them as variables inside YAML files and then use Ansible Vault to encrypt those files with an AES symmetric key. You can then run ansible-playbook with --ask-vault-pass, so the YAML var files get decrypted on the fly when the playbook runs.

Sometimes I use Ansible together with other tools in the same repository. For example, I prefer to provision AWS infrastructure with Terraform and then call Ansible as a provisioner to customize an EC2 instance, with Cloudflare to update the DNS record. Or I use Packer to bake an AMI, with Ansible as a local provisioner. In these cases, it is common practice to pass the secrets used by Terraform providers (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, etc.) as environment variables instead of storing them directly in .tf files that then get pushed to GitHub. What if I also want an easy way to keep these ENV variables encrypted in a file? Why not use Ansible Vault with Terraform? I just discovered that Ansible Vault encrypts any kind of text file, not only YAML. We just have to create a secrets.txt file with one ENV variable per line:

AWS_ACCESS_KEY_ID=<secret>
AWS_SECRET_ACCESS_KEY=<secret>
CLOUDFLARE_EMAIL=<email>
CLOUDFLARE_TOKEN=<secret>

And then encrypt secrets.txt with Ansible Vault:

ansible-vault encrypt secrets.txt

If all goes well, secrets.txt should now begin with “$ANSIBLE_VAULT;1.1;AES256” followed by the encrypted text. Your secrets.txt can be safely added to the repository. Now, before you run terraform plan, you can easily export your secrets as ENV variables by doing:

for i in `ansible-vault view secrets.txt` ; do export $i ; done
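The one-liner above splits on any whitespace, so it breaks if a value contains spaces. A slightly more robust sketch reads one variable per line instead; here a plain demo file stands in for the decrypted vault output so the snippet is self-contained — in real use, swap `cat /tmp/secrets.demo` for `ansible-vault view secrets.txt`:

```shell
# Demo file standing in for the decrypted vault output
printf '%s\n' 'FOO=hello world' 'BAR=42' > /tmp/secrets.demo

# Export one variable per line, preserving spaces in values
while IFS= read -r line; do
  export "$line"
done < <(cat /tmp/secrets.demo)   # real use: ansible-vault view secrets.txt

echo "$FOO|$BAR"
```

Note the process substitution (`< <(...)`) requires bash: it keeps the loop in the current shell, so the exported variables survive after the loop ends (a plain pipe would export them inside a subshell and lose them).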

Ansible Vault will ask you for the password so it can decrypt secrets.txt, and will print the contents so we can use them with export. If you’re using a Makefile, a “make export-secrets” target can make this even easier. This is just a quick way to store credentials used by Terraform or other tools in a file encrypted with a shared secret. If you have a big infrastructure team working in the same repository and you don’t want to use Ansible, there are tools like StackExchange’s Blackbox that allow you to easily encrypt files using GPG, making use of a team keyring. Also, I assume there will be a human executing Terraform. If you’re running Ansible/Terraform within a CI/CD environment, there are better ways to handle credentials (job- or role-assigned tokens and something like HashiCorp Vault). To be explored.
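For the Makefile route, something like this would do — a sketch under my own conventions (the target name and the eval wrapper are not standard, and it assumes values without spaces):

```make
# Print export statements for the decrypted secrets.
# Use from your shell as:  eval "$(make -s export-secrets)"
# (make runs recipes in a subshell, so the eval is what actually
# gets the variables into your current shell)
export-secrets:
	@ansible-vault view secrets.txt | sed 's/^/export /'
```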

Upstart and resolvconf cache

I recently ran into this while trying to fix a nameserver config issue with resolvconf on Ubuntu. When resolvconf populates /etc/resolv.conf, it reads what we have configured in /etc/resolvconf/resolv.conf.d (head, base, tail, etc.) and also any dns-nameservers declared in /etc/network/interfaces. I had a conflict between something I was populating in the head file (with Puppet) and something configured under /etc/network/interfaces. So I removed the conflicting nameserver declaration from the interfaces file and ran “resolvconf -u” to update the config. To my surprise, the “deleted” nameservers from /etc/network/interfaces were still included in /etc/resolv.conf. After some debugging, I noticed that resolvconf’s Upstart script now keeps a cache file under /run/resolvconf/interface that is a copy of the previous /etc/network/interfaces. You need to delete this file and restart resolvconf to make it work: “stop resolvconf ; start resolvconf”.

marques.cx -> fmarques.org

Not long ago I decided to move to a new personal domain and registered fmarques.org. I am in the process of moving everything from marques.cx to the new domain; the old one will cease to exist in a few months. If you are one of the brave souls still keeping an eye on my feed, I advise you to switch to the new domain before the redirect expires.

Discovering jemalloc and debugging native Java memory leaks

I joined ThoughtWorks last August (awesome!) and I’ve been working with the tech team on everything related to infrastructure automation, code deployment and all things “DevOps” for GOV.UK Verify (part of the Government Digital Service). The last few months have been very rewarding as I got exposed to a lot of different technologies, although I do tend to work with Puppet most of the time and don’t often get the chance to look at things “from the other side”. Working with the dev team on a Java memory leak issue was a great way to dig into something I was already familiar with, while getting to understand a little more about JVM memory allocation, Linux kernel memory management, and discovering great tools like jemalloc and the excellent jeprof profiler. We lost a lot of time playing the guessing game and using the wrong tools before we found this excellent post by Evan Jones from Twitter. It led us to the discovery of jemalloc and I highly recommend having a look at it. It’s really worth it. We (Ozz) also wrote up our story on GOV.UK Verify and we hope it can help others when dealing with similar native Java memory leaks.
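For the curious, the basic setup looks roughly like this — a sketch, not our exact configuration: the library path is an example, and it assumes a jemalloc built with profiling support (--enable-prof). Preloading jemalloc replaces the JVM’s native allocator so its sampling profiler can see where off-heap memory goes:

```shell
# Preload jemalloc under the JVM and turn on its allocation profiler.
# lg_prof_interval:30 = dump a profile every 2^30 bytes allocated,
# lg_prof_sample:17   = sample one allocation per 2^17 bytes.
export LD_PRELOAD=/usr/local/lib/libjemalloc.so
export MALLOC_CONF=prof:true,lg_prof_interval:30,lg_prof_sample:17
java -jar your-app.jar

# jemalloc periodically writes jeprof.*.heap dump files; inspect
# them against the java binary with the jeprof tool:
jeprof --show_bytes "$(which java)" jeprof.*.heap
```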

Poor man’s ssh launcher (CLI)

Problem: I just wanted an easy way to add my hosts to the ssh config file and connect to each host in the easiest way possible from a normal bash command line.

Solution: configure your .ssh/config like you normally would, with the following:

Host myapache
Hostname myapache.host.com
User fred

Host myapache2
Hostname myapache2.host.com
User fred

Add the following to your .bashrc or .bash_profile (Mac OS X):

shosts=$(grep 'Host ' ~/.ssh/config | awk '{print $2}')
for h in $shosts ; do alias $h="ssh $h" ; done
alias ssh-hosts='echo $shosts | tr " " "\n"'

And voilà, if you want to connect to any host, just type the name of the host, for example ‘myapache’. If you want to get a list of ssh hosts, type ‘ssh-hosts’. Keep it simple, stupid.

My first computer

Nostalgia time. While reading a few Wikipedia articles, I remembered that my first computer was a Timex 2068, which was in fact a clone of the ZX Spectrum 48k. I got it in 1987, but I was used to playing with other computers (friends’, school’s, etc.) like the Commodore 64, Atari ST, Philips MSX, etc. The Timex 2068 had a cartridge system, albeit with limited support. I had a few cartridges (a word processor and a Spectrum 48k emulator cartridge to load games using the cassette player). My first language was, of course, BASIC. Those were the days. I remember the exact day I first used a computer. I was a kid in 1984, living in Brazil at the time. I went to an exhibition where there was a TK 82C (a Brazilian clone of the ZX81) connected to a big, ugly green monitor playing the Game of Life (very popular at that time as a demo BASIC program). I immediately fell in love with computers.

OpenDedup Virtual Appliance (based on SDFS)

OpenDedup just released a greatly enhanced Virtual Appliance based on SDFS. The OpenDedup Virtual NAS Appliance is designed for simple setup and management of SDFS volumes in virtual environments. The appliance includes capabilities to create, mount, delete, and export SDFS volumes via NFS from a web-based interface. It also includes VMware storage API integration that allows quick datastore creation and cloning of virtual machines located on SDFS volumes.

Interesting. Video.