
Feb 23, 2017 | 5 min read


Git-Merge: Part 1 - Workshops

Edouard Siegel

Head of Engineering

Edouard Siegel, Head of Engineering, was in Brussels on the 2nd and 3rd of February for Git Merge. Here is his feedback on the first day of workshops.

Git Merge 2017 was a two-day, Git-focused conference that took place in Brussels earlier this month. It was hosted at The Egg, near the Brussels-Midi train station, and both the venue and the staff felt on point. The event started with a day of workshops followed by a day of talks; we will cover both in this two-part article.

What follows is a detailed review of the six workshops that filled the first day. As a disclaimer, these workshops were intended for « Git users of all levels wanting to deep-dive into a variety of topics with some of the best Git trainers in the world ».

Git and the Terrible, Horrible, No Good, Very Bad Day

The first workshop was given by Hector Alfaro, a trainer at GitHub. Its content was very accessible: it covered the different means at your disposal to spot and fix an error-inducing commit, namely git bisect, revert, reset, and finally git filter-branch. Note that if you want to use the latter to erase sensitive information (like accidentally committed credentials), you also need to scrub your reflog and any kind of cache your remote server may keep. Better to consider the credentials leaked and rotate them, to be on the safe side.

Submodules vs. Subtrees: The Battle for Sub-premacy

Hector was then joined by Kyle Macey, also at GitHub but as a Services Engineer. This workshop focused on the similarities and differences between submodules and subtrees. They took turns integrating code from a subproject using the two strategies, successively adding, updating, and pushing the subproject code back to the remote repository to present a basic workflow. Here at Applidium we happen to use submodules for specific projects and tend to avoid subtrees, which we find a bit messier, but nothing beats a good dependency manager. This is why we use CocoaPods, Gradle, or Bundler in all our projects, depending on the platform.
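The two "add the subproject" steps can be sketched side by side. Below, a local repository stands in for the shared library (all names are invented): the submodule version records only a pointer to a commit, while the subtree version copies the files into the superproject's own history. Note that `git subtree` ships in git's contrib directory, so it may be absent on minimal installs.

```shell
set -e
root=$(mktemp -d); cd "$root"

# A stand-in for the shared subproject.
git init -q lib && cd lib
git config user.email ci@example.com && git config user.name ci
echo 'hello' > lib.txt
git add lib.txt && git commit -qm 'lib: initial'
libbranch=$(git symbolic-ref --short HEAD)
cd "$root"

# Strategy 1: submodule -- the superproject stores a commit SHA plus .gitmodules.
git init -q app-submodule && cd app-submodule
git config user.email ci@example.com && git config user.name ci
git commit -q --allow-empty -m 'app: initial'
git -c protocol.file.allow=always submodule add "$root/lib" vendor/lib >/dev/null 2>&1
git commit -qm 'add lib as submodule'
cd "$root"

# Strategy 2: subtree -- the subproject's files land directly in the history.
git init -q app-subtree && cd app-subtree
git config user.email ci@example.com && git config user.name ci
git commit -q --allow-empty -m 'app: initial'
git subtree add --prefix=vendor/lib -m 'add lib as subtree' "$root/lib" "$libbranch" >/dev/null 2>&1
```

After this, `app-submodule` contains a `.gitmodules` file and a checked-out `vendor/lib`, while `app-subtree` simply contains `vendor/lib/lib.txt` as regular tracked files, which is exactly the trade-off the workshop explored.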

Jedi Mind Tricks for Git

Next came the workshop I loved most of all. Johan Abildskov and Jan Krag, a consultant and an engineer from Praqma, explored some of the possibilities offered by hooks, Git attributes, and custom drivers. We started with hooks, where they showed how to use them to support your workflow. With the right scripts you can enforce rules such as never committing onto master or always referencing an issue from the tracker. The most interesting part was the idea of running your tests locally and, in case of failure, leaving everything as-is on a specific branch for you to check out later and investigate.
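As an illustration of the "never commit onto master" rule, here is a minimal pre-commit hook sketch (the policy and branch names are hypothetical, not what Praqma showed verbatim). The hook simply inspects the current branch and refuses the commit if it is a protected one.

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q
git config user.email ci@example.com && git config user.name ci

# A pre-commit hook that refuses direct commits on master/main.
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
branch=$(git symbolic-ref --short HEAD)
if [ "$branch" = "master" ] || [ "$branch" = "main" ]; then
  echo "pre-commit: direct commits to $branch are not allowed" >&2
  exit 1
fi
EOF
chmod +x .git/hooks/pre-commit

echo a > a.txt && git add a.txt
if git commit -qm 'on default branch' 2>/dev/null; then on_default=allowed; else on_default=blocked; fi

git checkout -qb feature/demo
if git commit -qm 'on feature branch'; then on_feature=allowed; else on_feature=blocked; fi
echo "$on_default / $on_feature"
```

Because hooks live in `.git/hooks` they are per-clone and unversioned, which is why teams often pair them with server-side checks for rules that must actually hold.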

We then quickly turned to Git attributes and custom drivers. Jan showed us how, by combining the two, to get rid of the pesky, useless diffs on files Git views as binary. For instance, you define a custom driver that knows how to convert .docx files to Markdown, flag that such files should use this driver, and voilà: a decent, useful diff. He then showed us almost a dozen custom drivers, for PDFs, images, and even zip files. The best part of the workshops, in my mind.
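The mechanism is a `diff` attribute in `.gitattributes` plus a `textconv` command in the config. The sketch below uses `tr` as a toy stand-in for a real converter such as docx-to-Markdown, and an invented `*.bin` extension and driver name, just to show the plumbing.

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q
git config user.email ci@example.com && git config user.name ci

# Route *.bin files through a custom diff driver named "totext".
echo '*.bin diff=totext' > .gitattributes

# The driver's textconv command converts the file to readable text before
# diffing; git appends the file path after the command, hence the trailing "<".
git config diff.totext.textconv 'tr "[:lower:]" "[:upper:]" <'

printf 'version one\n' > doc.bin
git add .gitattributes doc.bin && git commit -qm 'v1'
printf 'version two\n' > doc.bin

# Instead of "Binary files differ", the diff now shows the converted text.
diff_out=$(git diff -- doc.bin)
echo "$diff_out"
```

Swap `tr` for a docx converter (pandoc, say) and `*.bin` for `*.docx`, and you have the trick Jan demonstrated.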

Greatest Hits from Ask-Git-Core

After a quick lunch break, Patrick McKenna, a Data Scientist at GitHub, took us on a tour of what was supposed to be some gems from their internal Slack channel dedicated to tricky Git support questions, where you can converse with some Git core maintainers. It looked promising, but left me wanting more.

We started the session with a script to visualize the newly pulled commits on any given branch, before quickly turning to the issue of ignored files, comparing .gitignore to .git/info/exclude. One use case for the latter is excluding some local test data while still being able to easily list that data to pass it to a test script (using check-ignore, even though the command has limitations, such as no recursive option for wildcard searches).
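A quick sketch of that use case, with invented file names: the exclude rule lives in `.git/info/exclude` (so it is never committed, unlike `.gitignore`), and `check-ignore` reports which of the candidate paths the rules match. Note that we enumerate the candidates ourselves, since check-ignore does not recurse for us.

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q

# Local-only exclusion: this file is per-clone and never committed.
printf 'local-fixtures/\n' >> .git/info/exclude

mkdir local-fixtures
touch local-fixtures/a.json local-fixtures/b.json kept.txt

# check-ignore prints only the paths that the exclude rules match.
ignored=$(git check-ignore local-fixtures/a.json local-fixtures/b.json kept.txt || true)
echo "$ignored"
```

The printed list (`local-fixtures/a.json` and `local-fixtures/b.json`, but not `kept.txt`) is exactly what you would feed to a test script.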

The last issue Patrick tackled was how to migrate code between repositories while maintaining history. This can be solved either with a subtree or with a merge that allows unrelated histories, once the interesting part of the history has been filtered out.
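One way to do the "filter, then merge" variant is sketched below (directory and repository names are invented). A throwaway clone is rewritten with `--subdirectory-filter` so that only the history touching `tools/` survives, and that filtered history is then merged into the destination with `--allow-unrelated-histories`.

```shell
set -e
root=$(mktemp -d); cd "$root"

# Source repository with a tools/ directory we want to extract.
git init -q src && cd src
git config user.email ci@example.com && git config user.name ci
mkdir tools && echo 'x' > tools/util.sh
git add . && git commit -qm 'add tools/util.sh'
echo 'y' > other.txt && git add . && git commit -qm 'unrelated work'
cd "$root"

# Filter on a throwaway clone: filter-branch rewrites history destructively.
git clone -q src filtered && cd filtered
git config user.email ci@example.com && git config user.name ci
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch -f \
  --subdirectory-filter tools -- --all >/dev/null 2>&1
cd "$root"

# Merge the filtered history into the destination repository.
git init -q dst && cd dst
git config user.email ci@example.com && git config user.name ci
git commit -q --allow-empty -m 'dst: initial'
git fetch -q "$root/filtered" HEAD
git merge -q --allow-unrelated-histories -m 'import tools history' FETCH_HEAD
```

After the merge, `dst` contains `util.sh` at its root along with the commit that originally introduced it, history intact.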

Repo 911

The title is a tribute to Reno 911!, a fifteen-year-old comedy television series. For the penultimate workshop of the day we met up with Kyle Macey once more, who showed us how to strip an absurdly large repository down to the bare minimum, filtering out build artifacts, huge assets, and commit-message noise. This made extensive use of BFG Repo-Cleaner, a Scala tool reputedly easier to use than git filter-branch. Nonetheless, we still used the latter to filter out message noise, as well as Git LFS for hosting large assets.

The workshop was a three-step process. We started with BFG Repo-Cleaner to clear 900 MB of build artifacts out of the repository, then used it again to move some Photoshop files to Git LFS. Finally, a filter-branch removed some original commit-message noise along with extra information BFG Repo-Cleaner had added that we did not need in this example. Overall this was interesting, as it really showed how to use these tools, especially since I had never heard of BFG before.
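Since BFG needs a JVM, here is just the message-cleaning leg of that third step, as a runnable sketch with an invented noise pattern: `--msg-filter` receives each commit message on stdin and must print the rewritten message.

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q
git config user.email ci@example.com && git config user.name ci

echo a > a.txt && git add a.txt
git commit -qm 'add feature' -m '[auto-generated noise line]'

# Rewrite every commit message, dropping the noise lines.
# ("|| true" keeps the filter happy if a message becomes empty.)
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch -f \
  --msg-filter "grep -v '^\[auto-generated' || true" -- --all >/dev/null 2>&1

cleaned=$(git log -1 --format=%B)
echo "$cleaned"
```

The equivalent BFG steps from the workshop would look like `java -jar bfg.jar --delete-folders build` and `java -jar bfg.jar --convert-to-git-lfs '*.psd'` against a bare clone (invocations quoted from memory, so treat them as approximate).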

Git Simple: Writing primary Git functionalities in Ruby

The final workshop of the day was given by Matt Duff, another Services Engineer at GitHub. It was the total opposite of what I was expecting: we dove rather deep into Git, but mostly as an exercise focused on how it works under the hood. After a quick reminder of the Git object types and of what gets created when you run git init followed by a simple commit, Matt replicated every step and command we had used, by hand, in Ruby. He created files and directories, computed SHA-1s, and constructed the tree and the commit incrementally. This had no practical benefit beyond a refresher on Git internals and how everything meshes together.
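The workshop itself was in Ruby, but the core trick, recomputing an object id by hand, fits in a few lines of shell: a blob's id is the SHA-1 of the string `"blob <size>\0<content>"`, which we can verify against `git hash-object`.

```shell
set -e
# Recompute a blob id by hand and compare with git's own answer.
content='hello, git'
size=${#content}   # byte length (content is ASCII here)

by_hand=$(printf 'blob %s\0%s' "$size" "$content" | sha1sum | cut -d' ' -f1)
by_git=$(printf '%s' "$content" | git hash-object --stdin)
echo "$by_hand"
echo "$by_git"
```

The two hashes match, which is exactly the "no magic, just content-addressed storage" point the workshop was making; trees and commits are built the same way, just with structured payloads.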


This first day felt good overall. The subjects were indeed for « Git users of all levels », starting rather slow and easy before getting to the better stuff. It did feel slightly lacking in reusable content (the stories were nice, though…), but there was still a full day of conference talks to get one's fill. Hold on for part two!

