Buildablog v2
I Finally Did It
I finally solved a problem I mentioned in a previous post. There, I had left things halfway: I had only installed some infrastructure, in the form of Cgit, that was partially suggestive of a solution.
Here, I document how I finally leveraged that piece as part of a full solution to the problem.
Previously, the steps for updating my blog's content had been:
- Commit all changes.
- Push the changes to GitHub.
- Log in via SSH into my VPS.
- Perform a `cd` into the `brandons_blog` directory, and run `git pull`.
This has now been reduced to two steps:
- Commit all changes.
- Push to `https://git.brandonirizarry.xyz/brandons_blog`.
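For reference, the new publish cycle looks something like this (the remote name `vps` is my own choice here; substitute whatever you call yours):

```shell
# One-time setup: register the VPS repository as a remote
git remote add vps https://git.brandonirizarry.xyz/brandons_blog

# The new two-step publish cycle
git commit -am "Add new post"
git push vps main
```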
How it Works
Previous versions of Buildablog would read posts directly from the local filesystem. Starting out, this was an entirely intuitive and sensible thing to do. However, I ran into a wall: I couldn't push to a non-bare remote, and Buildablog needs to see actual files in order to publish them.
I also wanted to keep things conceptually simple and avoid something
like a separate call to scp or rsync — I would only rely on
Git. After all, this is how Git forges themselves work: push, and
everything is just there, present. Not just present, but presumably
usable in some form or another. I wanted my application to take
advantage of this intuitive simplicity.
Luckily, go-git comes to the rescue here. At first I tried to implement Git-based reads alongside conventional filesystem reads, but couldn't figure out how to make these two methods play nicely in the same codebase. So I decided to throw out the latter, relying solely on reading from a Git repo.
The `allArticles` function encapsulates this logic. It reads all
articles from the blog repo. This is what it currently looks like:
```go
func allArticles[F types.Frontmatter](repo string) ([]types.Article[F], error) {
	fs := memfs.New()
	genre := (*new(F)).Genre()

	_, err := git.Clone(memory.NewStorage(), fs, &git.CloneOptions{
		URL: repo,
	})
	if err != nil {
		return nil, fmt.Errorf("can't clone repository %s: %w", repo, err)
	}

	log.Printf("Successfully cloned repository %s", repo)

	entries, err := fs.ReadDir("./" + genre)
	if err != nil {
		return nil, err
	}

	log.Printf("Successfully fetched genre entries for '%s'", genre)

	articles, err := entriesToArticles[F](fs, genre, entries)
	if err != nil {
		return nil, err
	}

	return articles, nil
}
```
There are five pivotal steps that can be outlined here:

- Create the in-memory filesystem: `fs := memfs.New()`
- Clone the blog repo worktree into this filesystem: `git.Clone(memory.NewStorage(), fs, &git.CloneOptions{...})`
- Read the given genre from within the in-memory worktree: `entries, err := fs.ReadDir("./" + genre)`. I go into more detail on genres in the Buildablog README.
- Run some code that marshals each Markdown entry under the genre folder into an article struct that later on gets used inside a Go template: `articles, err := entriesToArticles[F](fs, genre, entries)`
- Return these articles, along with an error, to the REST endpoint handler call site.
Flexibility
The blog repo itself is configurable via the BLOGDIR environment
variable. This name is a throwback from when it was using the local
filesystem directly; now, it can also be set to an https remote
repo. On my VPS, I have it set to /var/git/brandons_blog, which
indeed was my endgame all along; the only admin-type thing I had to do
was mark it as a safe repo using git config.
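Concretely, the setup on the VPS amounts to something like the following (the `--global` scope for the safe-directory setting is one reasonable choice; a system-wide scope also works):

```shell
# Point Buildablog at the blog repo; either form works now
export BLOGDIR=/var/git/brandons_blog
# export BLOGDIR=https://git.brandonirizarry.xyz/brandons_blog

# One-time: tell Git the server-owned repo is safe to read
git config --global --add safe.directory /var/git/brandons_blog
```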
The one drawback to all this is that I've gotten used to seeing immediate feedback once I edit my content. Now, I have to remember to commit changes first when testing locally.
Anyway, really good stuff.