New GPG Key

Date: 22 June 2014

For a number of reasons[0], I've recently set up a new OpenPGP key,
and will be transitioning away from my old one.

The old key will continue to be valid for some time, but I prefer all
future correspondence to come to the new one. I would also like this
new key to be re-integrated into the web of trust. This message is
signed by both keys to certify the transition.

The old key was:

sec 1024D/0x8CC387DA097F5468 2004-07-14
Key fingerprint = 0FAC 6A6C D9D5 134C C87E 4FF3 8CC3 87DA 097F 5468

And the new key is:

sec 4096R/0xD08FC082B8E46E8E 2014-06-22 [expires: 2019-06-21]
Key fingerprint = F744 94B0 7042 6B14 BB90 D283 D08F C082 B8E4 6E8E

To fetch the full key from a public key server, you can simply do:

gpg --keyserver <keyserver> --recv-key 0xD08FC082B8E46E8E

If you already know my old key, you can now verify that the new key is
signed by the old one:

gpg --check-sigs 0xD08FC082B8E46E8E

If you don't already know my old key, or you just want to be double
extra paranoid, you can check the fingerprint against the one above:

gpg --fingerprint 0xD08FC082B8E46E8E

If you are satisfied that you've got the right key, and the UIDs match
what you expect, I'd appreciate it if you would sign my key. You can
do that by issuing the following command:

NOTE: if you have previously signed my key but did a local-only
signature (lsign), you will not want to issue the following; instead
you will want to use --lsign-key, and not send the signatures to the
keyserver.

gpg --sign-key 0xD08FC082B8E46E8E

I'd like to receive your signatures on my key. You can send me an
e-mail with the new signatures (if you have a functional MTA on
your system):

gpg --export 0xD08FC082B8E46E8E | gpg --encrypt -r '$your_fingerprint' --armor | mail -s 'OpenPGP Signatures'

Additionally, I highly recommend that you implement a mechanism to keep your key
material up to date, so that you obtain the latest revocations and other updates
in a timely manner. You can do regular key updates by using parcimonie to
refresh your keyring. Parcimonie is a daemon that slowly refreshes your keyring
from a keyserver over Tor. It uses a randomized sleep and fresh Tor circuits
for each key, to make it hard for an attacker to correlate the key updates with
your keyring.

I also highly recommend checking out the excellent Riseup GPG best
practices doc, from which I stole most of the text for this transition
message ;-)

Please let me know if you have any questions, or problems, and sorry
for the inconvenience.

If you have a Keybase account and are into that sort of thing, you can also
check my Keybase page[1].

Serge van Ginderachter



Packt Publishing Ansible Configuration Management review

Around late November 2013 I – too – got contacted by Packt Publishing, asking me to review Ansible Configuration Management. I was a bit surprised, as I had declined their offer to write that very book exactly two months earlier. Two months seemed like a short period of time to write a book and get it published.

Either way, I kind of agreed, got the book as a PDF, printed it out, started reading, lent it to a colleague (we use Ansible extensively at work), and just recently got it back so I could finish looking through it.

“Ansible Configuration Management” is an introductory book for beginners. I won’t introduce Ansible here; there are a lot of good resources on that, just duck it. Ansible, being relatively new, has evolved quite a bit in the past year, releasing 1.4 by the end of November. The current development cycle focuses more on bug fixes and under-the-hood work, and less on new syntax – quite the opposite of the earlier cycles that ran from 0.9 through 1.2 and up to 1.3, which was current at the time.

Knowing what major changes would land in 1.3 was easy if you followed the project. One of the major changes is the syntax for variables and templates: don’t use $myvar or ${othervar} any more, only {{ anicevar }}. If you know Ansible, you know this is an important change. I was very disappointed to notice the author didn’t stress this. Whilst most examples use the new syntax, at one point all syntaxes are presented as equally valid – which was correct for the then-latest 1.3, but it was well known at the time that the old forms would be deprecated.

Of course, a tech book on a rapidly evolving Open Source tool will always be somewhat outdated by the time it gets published. But this should be expected, and a good book on such a subject should of course focus on the most recent possible release, while also trying to mention the newer features that are to be expected. Especially for a publisher that focuses on Open Source.

A quirk shows up when code snippets are discussed. Some of the longer snippets are printed across more than one page, and the book refers to specific line numbers – which is confusing, even unreadable, when the snippets don’t carry line numbers. Later in the book line numbers are sometimes used, but not in a very standard way:


[screenshots: code snippets with non-standard line numbering]

Whilst most of this book has a clear layout at first sight, things like this don’t feel very professional.

This book gives a broad overview and discusses several basic things in Ansible. It goes from basic syntax, over inventory, small playbooks and extended playbooks, and also mentions custom code. It gives lots of examples, and discusses special variables, modules, plugins… and many more. Not all of them, but that is not needed, given the very good documentation the project publishes. This book is an introduction to Ansible, so focusing on the big principles is more important at this point than having a full inventory of all features. As it’s a relatively short book (around 75 pages), it’s small enough to be appealing as a quick introduction.

It’s a pity the publisher and the author didn’t pay more attention to detail. The less critical reader, with little to no previous Ansible experience, will however get a good enough introduction from this book, with more hand-holding and overview than what can easily be found freely on-line.

Git and Github: keeping a feature branch updated with upstream?

Git and github, you gotta love them for managing and contributing to (FLOSS) projects.

Contributing to a Github hosted project becomes very easy. Fork the project to your personal Github account, clone your fork locally, create a feature branch, make some patch, commit, push back to your personal Github account, and issue a pull request from your feature branch to the upstream (master) branch.

git clone -o svg
cd ansible
git remote add upstream git://
git checkout -b user-non-unique
vi library/user
git add library/user
git commit -m "Add nonunique option to user module, translating to the -o/--non-unique option to useradd and usermod."
git push --set-upstream svg user-non-unique
[go to github and issue the pull request]

Now, imagine upstream (1) doesn’t approve your commit and asks for a further tweak and (2) you need to pull in newer changes (upstream changes that were committed after you created your feature branch.)

How do we keep this feature branch up to date? Merging the newest upstream commits is easy, but you want to avoid creating a merge commit, as that won’t be appreciated when pushed to upstream: you are then effectively re-committing upstream changes, and those upstream commits will get a new hash (as they get a new parent). This is especially important, as those merged commits would be reflected in your Github pull request when you push those updates to your personal github feature branch (even if you do that after you issued the pull request.)

That’s why we need to rebase instead of merging:

git checkout devel   # devel is ansible's HEAD aka "master" branch
git pull --rebase upstream devel
git checkout user-non-unique
git rebase devel

Both the rebase option and the rebase command to git will keep your tree clean
and avoid merge commits. But keep in mind that it is your first commits (the
ones behind your original pull request) that are being rebased: they now have
new commit hashes, different from the original hashes still present in your
remote Github repo branch.

Now, pushing those updates out to your personal Github feature branch will fail here, as both branches differ: the local branch tree and the remote branch tree are “out of sync”, because of those different commit hashes. Git will tell you to first git pull --rebase, then push again, but this won’t be a simple fast-forward push, as your history got rewritten. Don’t do that!

The problem is that you would again fetch your first commits as they were originally, and they would get merged on top of your local branch. Because of the out-of-sync state, this pull does not apply cleanly, and you’ll get a b0rken history where your commits appear twice. If you then push all of this to your Github feature branch, those changes will get reflected in the original pull request, which will get very, very ugly.

AFAIK, there is actually no totally clean solution to this. The best solution I found is to force push your local branch to your Github branch (actually forcing a non-fast-forward update):

As per git-push(1):

Update the origin repository’s remote branch with local branch, allowing non-fast-forward updates. This can leave unreferenced commits dangling in the origin repository.

So don’t pull, just force push like this:

git push svg +user-non-unique

This will plainly overwrite your remote branch with everything in your local branch. The commits that were in the remote branch (and caused the failure) will remain there, but as dangling commits, which will eventually be deleted by git-gc(1). No big deal.

As I said, this is AFAICS the cleanest solution. The downside is that your pull request will be updated with those newest commits, which carry a later date and could appear out of order in the comment history of the PR. No big problem, but potentially confusing.
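The whole cycle above can be rehearsed end-to-end with throwaway repositories. All names here (upstream.git, fork.git, feature, the demo identity) are illustrative, not the real Ansible remotes; "fork" plays the role of the personal Github remote and "origin" the upstream project:

```shell
# Throwaway demo of the rebase + forced-push cycle, in a temp directory.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q --bare upstream.git
git init -q --bare fork.git

git clone -q upstream.git work; cd work
git config user.email demo@example.com
git config user.name Demo
base=$(git symbolic-ref --short HEAD)    # upstream's default branch

echo one >file; git add file; git commit -qm one
git push -q origin "$base"
git remote add fork ../fork.git

git checkout -qb feature                 # the pull-request branch
echo two >two.txt; git add two.txt; git commit -qm two
git push -q fork feature                 # the branch behind the original PR

git checkout -q "$base"                  # upstream moves on meanwhile
echo three >three.txt; git add three.txt; git commit -qm three

git checkout -q feature
git rebase -q "$base"                    # replay "two" on the new tip

# A plain "git push fork feature" would now be rejected (non-fast-forward),
# so force the update with the leading "+" on the refspec:
git push -q fork +feature
```

After the forced push, the remote feature branch matches the rebased local one, and the old pre-rebase commit on the fork is left dangling, just as described above.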

bash redirection target gets funky

Can anybody explain to me how this funky behaviour in bash works?

find /root  >output 2>error 3

Yes, that’s just “error” followed by a space followed by “3”.

serge@goldorak:~/tmp$ ls -l
total 8
-rw-rw-r-- 1 serge serge 71 Aug 30 13:57 error 3
-rw-rw-r-- 1 serge serge 6 Aug 30 13:57 output

Let’s create a file with a space in its name:

serge@goldorak:~/tmp$ touch "test 1"
serge@goldorak:~/tmp$ ls -l
total 8
-rw-rw-r-- 1 serge serge 71 Aug 30 13:57 error 3
-rw-rw-r-- 1 serge serge 6 Aug 30 13:57 output
-rw-rw-r-- 1 serge serge 0 Aug 30 13:58 test 1

Using bash completion I get:

serge@goldorak:~/tmp$ ls -l test 1
-rw-rw-r-- 1 serge serge 0 Aug 30 13:58 test 1
serge@goldorak:~/tmp$ ls -l error 3
-rw-rw-r-- 1 serge serge 71 Aug 30 13:57 error 3

It seems the space in “error 3” is not a space but some other character?
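For comparison – and without claiming this explains the odd filename above – here is what stock bash tokenization does with such a line: the word right after 2> is the redirection target, and any following bare word is passed to the command as an ordinary argument. A minimal sketch in a scratch directory:

```shell
# Scratch-directory sketch of how bash normally parses "2>error 3".
cd "$(mktemp -d)"

# "2>error" redirects stderr to a file named plain "error";
# the trailing "3" becomes an argument to find (a path that doesn't exist).
find . 3 >output 2>error || true

ls -l   # shows "output" and "error" -- no file named "error 3"
```

So with default parsing you would expect two files, "output" and "error", with find's complaint about the nonexistent path "3" inside "error".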

The Linux-Training Project: Linux Training v2 released

As announced in February new versions of the Linux training courses were being (re-)written by Paul.

I’m pleased to announce that v2 has been merged into the master branch on Github.

If you want to test it or just check it out:

git clone git://
cd lt
git submodule init
git submodule update
./ build fundamentals

Feedback is welcome, by mail at or via a github issue.

If you prefer to just download the latest books in PDF format, check out the download page. These are nightly builds from the master branch.

The Linux-Training Project: New Books

You may have noticed that since February 2011 there have been no updates to the free PDFs. That is because we have been redesigning the docbook source code to accommodate smaller books (we call them minibooks). This mandates a rewrite of the build system and a refitting of the .xml files and directories. This is all being done in the experimental git branch.

We are also taking the minibook redesign as an opportunity to rewrite and improve the content of each subject, which makes this a time-consuming process. The good news is that some minibooks should be coming online soon (say July 2011). We plan to finish at least the Linux Fundamentals (big) book by the end of July (it will consist of six or seven minibooks).


Modern times revisited

You have to picture the music industry as a bunch of companies that have been sitting on a pile of money for years, where plenty of managers had it very good, and where the unions had nothing to complain about. Now those companies suddenly all have to start working extra hard for their money, and a reorganisation is impossible, or at least extremely difficult, for reasons inherent to the parable of the ten little monkeys and to human nature's resistance to change.

And there are plenty of companies like that. Truvo, a.k.a. De Gouden Gids, is another good example.