This blog finally goes git-annex!

Posted on June 11, 2025.

git-annex is cool! But also way more complex than I thought…

A long, long time ago…

I have a few pictures on this blog, mostly from the earlier years, because even with small pictures, the git repository soon grew to 80MiB. That is not much in absolute terms, but the actual Markdown/Haskell/CSS/HTML content is tiny compared to the pictures, PDFs, and fonts. I realised probably about ten years ago that I needed a better solution, and that I should investigate git-annex. Then time passed, and I heard about git-lfs, so I thought that was the way forward.

Now, I recently became interested again in doing something about this repository, and started researching.

Detour: git-lfs

I was sure that git-lfs, being supported by the large providers, would be the modern solution. But to my surprise, git-lfs is very server-centric, which in hindsight makes sense, but for a home setup it's not a good fit. Maybe I misunderstood, but git-lfs is more a protocol/method for a forge to store files than an end-user solution. And then you need to back up those files separately (together with the rest of the forge), or implement another way of safeguarding them.

Further details, such as the fact that it keeps two copies of the files (one in the actual checked-out tree, one in internal storage), mean it's not a good solution in general (though for my blog it would have worked). Then horror stories on Reddit (people being locked out of GitHub due to quota, for example) and this Stack Overflow post about git-lfs constraining how one uses git convinced me that's not what I want. To each their own, but not for me: I might want to push this blog's repo to GitHub, but in that case I definitely wouldn't want to pay for GitHub storage for my blog images (which are copies, not originals). And yes, even in 2025, those quotas are real (GitHub limits), and I agree with GitHub: storage and large bandwidth can't be free.

Back to the future: git-annex

So back to git-annex. I thought it was going to be a simple thing, but oh boy, was I wrong. It took me half a week of continuous (well, in my free time) reading and discussions with LLMs to understand a bit how it works. Honestly, I think it's a bit too complex, which is why the workflows page lists seven (!) levels of workflow complexity, from fully managed to fully manual. Respect to the author for the awesome tool, but IMHO, if you need a web app to help you manage git, that hints the tool is too complex.

I made the mistake of running git annex sync once, only to realise that it actually starts pushing to my upstream repo and creating new branches and whatnot, so after enough reading, I settled on workflow 6/7, since I don't want another tool to manage my git history. Maybe I'm an outlier here, but everything “automatic” is a bit too much for me.

Once you have figured out how git-annex works (on the surface, at least), it is a pretty cool thing. It uses a git-annex branch to store metainformation, which is relatively clean. If you do run git annex sync, it creates some extra branches, which I don't like, but meh.
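Concretely, the sync-free manual workflow looks roughly like this; a sketch only, with the remote and file names as placeholders:

```shell
# Add a large file to the annex instead of plain git
git annex add images/photo.jpg
git commit -m "Add photo"

# Push git history yourself, without 'git annex sync';
# the git-annex branch carries the location metadata
git push origin master git-annex

# Copy the actual file contents to a storage remote explicitly
git annex copy --to=backup images/photo.jpg
```

This keeps git history entirely under your own control; git-annex only touches its own branch and the file contents.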

Trick question: what is a remote?

One of the most confusing things about git-annex was understanding its “remote” concept. I thought a “remote” was a place where you replicate your data. But no, that's a special remote. A normal remote is a git remote, but one that is expected to be git/ssh, with command-line access. So if you have a git+ssh remote, git-annex will not only try to push its above-mentioned branch, but also copy the files. If such a remote is on a forge that doesn't support git-annex, it will complain and get confused.

Of course, if you read the extensive docs, you just do git config remote.<name>.annex-ignore true, and it will understand that it should not “sync” to it.

But aside from this case, git-annex expects that all checkouts and clones of the repository hold both metadata and data. And if you run any annex commands in them, all other clones will know about them! This can be unexpected, and you find people complaining about it, but nowadays there's a solution:

git clone … dir && cd dir
git config annex.private true
git annex init "temp copy"

This is important. Any “leaf” git clone must be followed by that annex.private true config, especially on CI/CD machines. Honestly, I don’t understand why by default clones should be official data stores, but it is what it is.

I settled on not making any of my checkouts “stable”, only the actual storage places. Except those are not git repositories, but just git-annex storage things, i.e., special remotes.

Is it confusing enough yet? 😄

Special remotes

The special remotes, as said, are what I expected normal git-annex remotes to be, i.e. places where the data is stored. They do exist, and while I'm only using a couple of simple ones, there is a large number of them. Among the interesting ones: git-lfs; a remote that also allows storing the git repository itself (git-remote-annex), although I'm a bit confused about this one; and most of the common storage providers via the rclone remote.
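The simplest special remote is probably the “directory” type, which just stores annexed contents under a local path. A minimal sketch, with the remote name and path as placeholders:

```shell
# Create a special remote backed by a plain directory
# (encryption must be specified explicitly, here disabled)
git annex initremote mydisk type=directory directory=/mnt/backup encryption=none

# On another clone of the same repository, enable the existing remote
git annex enableremote mydisk
```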

Plus, all of the special remotes support encryption, so this is a really neat way to store your files across a large number of places, and handle replication, number of copies, which copy to retrieve from, etc. as you wish.
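As a hedged sketch of what that looks like in practice, here is an encrypted special remote on top of rclone (via the external git-annex-remote-rclone helper), plus a replication requirement; the remote name, rclone target, and prefix are placeholders:

```shell
# Encrypted special remote layered on an existing rclone target
git annex initremote cloud type=external externaltype=rclone \
  target=myrclone prefix=blog-annex encryption=shared

# Require at least two copies of every annexed file
git annex numcopies 2

# Check that the required number of copies actually exists
git annex fsck
```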

And many other features

git-annex has tons of other features, so to some extent, the sky's the limit. Automatic selection of what to add to the annex vs. plain git, encryption handling, number of copies, clusters, computed files, etc. etc. etc. I still think it's cool but too complex, though!

Uses

Aside from this blog, of course.

I’ve seen blog posts/comments about people using git-annex to track/store their photo collection, and I can see very well how that would work with remote encrypted repos: any of the services supported by rclone could hold an N+2 copy or so. For me, tracking photos would be a bit too tedious, but it could maybe work after more research.

A more practical thing would probably be replicating my local movie collection (all legal, to be clear) better than “just run rsync from time to time”, by tracking the large files in it via git-annex. That's an exercise for another day, though, once I get more mileage with it - my blog pictures are copies, so I don't care much if they get lost, but the movies are primary online copies, and I don't want to re-dump the discs. Anyway, for later.

Migrating to git-annex

Migrating here means ending up in a state where all large files are in git-annex, and the plain git repo is small. Just moving the files to git-annex at the current head doesn't remove them from history, so the git repository stays large; it won't grow in the future, but it remains at its old size (and contains the large files in its history).

In my mind, a nice migration would be: run a custom command, and all the history is migrated to git-annex, so I can go back in time and still use git-annex. I naïvely expected this would be easy and already available, only to find comments on the git-annex site with unsure git-filter-branch calls and some web discussions. This is the discussion on the git-annex website, but it didn't make me confident it would do the right thing.

But that discussion is now 8 years old. Surely in 2025, with git-filter-repo, it's easier? Maybe I'm missing something, but it is not. Not from the point of view of plain git (that part is easy), but because git-annex stores its data in git itself, so doing this properly across successive steps of a repo (when replaying the commits) is, I think, not well-defined behaviour.

So I was stuck here for a few days, until I had an epiphany: as I'm going to rewrite the repository, of course I'm keeping a copy of it from before git-annex. If so, I don't need the history to be correct in the sense of being able to retrieve the binary files too. It just needs to be correct from the point of view of the actual Markdown and Haskell files that represent the “meat” of the blog.

This simplified the problem a lot. At first, I wanted to just skip these files, but that could also drop commits (git-filter-repo, by default, drops commits that become empty), and removing the files loses information - when they were added, what the paths were, etc. So instead I came up with a rather clever idea, if I may say so: since git-annex replaces files with symlinks anyway, just replace the files with symlinks in the whole history, except these symlinks are dangling (to represent the fact that the files are missing). One could also use empty files, but empty files are more “valid” in a sense than dangling symlinks, which is why I settled on the latter.
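The dangling-symlink idea can be sanity-checked outside of git entirely. A small sketch (the placeholder target matches the one in the filter script below; the helper function is mine, not part of any library):

```python
import os
import tempfile

# A dangling symlink keeps the path (and thus its history) in the
# tree while signalling that the content is intentionally gone.
TARGET = "/none/binary-file-removed-from-git-history"

def is_dangling_symlink(path: str) -> bool:
    """True if path is a symlink whose target does not exist."""
    return os.path.islink(path) and not os.path.exists(path)

with tempfile.TemporaryDirectory() as d:
    link = os.path.join(d, "photo.jpg")
    os.symlink(TARGET, link)
    assert is_dangling_symlink(link)
    # Renames still work naturally, since mv ignores file contents.
    renamed = os.path.join(d, "renamed.jpg")
    os.rename(link, renamed)
    assert is_dangling_symlink(renamed)
```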

Doing this with git-filter-repo is easy in newer versions, with the new --file-info-callback. Here is the simple code I used:

import os
import os.path
import pathlib

SKIP_EXTENSIONS = {'jpg', 'jpeg', 'png', 'pdf', 'woff', 'woff2'}
FILE_MODES = {b"100644", b"100755"}
SYMLINK_MODE = b"120000"

# This is the body of the --file-info-callback; git-filter-repo
# provides 'filename', 'mode', 'blob_id' and 'value' in scope.
fas_string = filename.decode()
path = pathlib.PurePosixPath(fas_string)
ext = path.suffix.removeprefix('.')

if ext not in SKIP_EXTENSIONS:
  return (filename, mode, blob_id)

if mode not in FILE_MODES:
  return (filename, mode, blob_id)

print(f"Replacing '{fas_string}' (extension '.{ext}') in {os.getcwd()}")

symlink_target = '/none/binary-file-removed-from-git-history'.encode()
new_blob_id = value.insert_file_with_contents(symlink_target)
return (filename, SYMLINK_MODE, new_blob_id)

This replaces the files with a symlink to nowhere, where the symlink target should explain why it's dangling. Later renames or moves of the files then work “naturally”, as the rename/mv doesn't care about file contents. Then, when the filtering is done via:

git-filter-repo --file-info-callback <(cat ~/filter-big.py) --force

it is easy to onboard to git-annex:

  • remove all dangling symlinks
  • copy the (binary) files from the original repository
  • since they’re named the same, and in the same places, git sees a type change
  • then simply run git annex add on those files
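The steps above can be sketched as follows; the paths are placeholders, and -xtype l (find broken symlinks) assumes GNU find:

```shell
# 1. Remove all dangling symlinks left over from the history rewrite
find . -xtype l -delete

# 2. Copy the binary files back from the pre-rewrite repository;
#    same names and paths, so git sees a type change (symlink -> file)
cp -r ../old-repo/images ./images

# 3. Add the files to the annex instead of plain git
git annex add images/
git commit -m "Move large files into git-annex"
```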

For me it was easy, as all such files were in a few directories, so it was just copying those directories back, a few git annex add commands, and done.

Of course, then adding a few rsync special remotes and running git annex copy --to, and the repository was ready.
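An rsync special remote like those can be set up roughly like this; the remote name and URL are placeholders:

```shell
# A plain rsync-over-ssh special remote for annexed content
git annex initremote mirror type=rsync rsyncurl=server:/srv/annex \
  encryption=shared

# Push all annexed file contents there
git annex copy --to=mirror --all
```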

Well, I also found a bug in my own Hakyll setup: on a fresh clone, when the large files are just dangling symlinks, the builder doesn't complain; it just silently ignores the images. I will have to fix that.

Other resources

This is a blog post that I read at the beginning, and I found it very useful as an intro: https://switowski.com/blog/git-annex/. It didn't help me understand how git-annex works under the covers, but it is well written. The author does use the sync command, though, which is too magic for me, but he also agrees about the complexity 😅

The proof is in the pudding

And now, for the actual first image added here that never lived in the old plain-git repository. It's not full-res/full-size; it's cropped a bit at the bottom.

Earlier in the year, I went to Paris for a very brief work trip, and I walked around a bit; it was more beautiful than what I remembered from way, way back. So, a somewhat random choice of picture, but here it is:

Un bateau sur la Seine

Enjoy!