004 - Automating the blog (with GitHub Actions)
Context#
Just to give some background on how this blog is run: the website is hosted in a docker container with nginx. Every time I updated the blog I had to manually move the site data to my docker server, build the image, delete the old container and run the new one. This was quite a tedious process, especially since I didn’t update the blog often and would forget how to solve the small bugs each time.
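For reference, the manual routine looked roughly like this (the host name and paths here are illustrative, not my exact setup):

# copy the site data over to the docker host
scp -r ./wizard-cat user@docker-host:~/blog/

# then, on the docker host: rebuild the image and swap the container by hand
docker build -t wizardcatblog:latest ~/blog/
docker stop wizardcatblog && docker rm wizardcatblog
docker run -d --name wizardcatblog -p 80:80 wizardcatblog:latest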
So I decided to write this post to document how I created a pipeline that automates this boring process, utilising a private GitHub repository to keep track of my blog’s uncompiled source and to kick off the builds on my CI/CD server.
1. Starting the Repository#
I keep the source of the site (Hugo content, theme, config) in a private GitHub repo and let CI do the compiling. Here’s the minimal layout I use:
.
├─ wizard-cat/ # Hugo project (content/, layouts/, theme, config)
│ └─ ...
├─ Dockerfile # serves ./wizard-cat/public with nginx
└─ .github/
└─ workflows/
└─ deploy.yaml # the GitHub Actions pipeline
I decided to have two branches in this repository, main and beta: the former is the stable, complete version of the site, and the latter is where I upload changes first for previewing.
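If you are setting this up from scratch, creating the private repo and the beta branch only takes a couple of commands. This assumes the GitHub CLI, and the repo name is just an example:

# create the private repo from the local folder and push it (gh CLI assumed)
gh repo create blog-pipeline --private --source=. --push

# create the beta branch used for previewing changes
git checkout -b beta
git push -u origin beta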
2. Dockerization#
As stated before, I intend to host my blog in a docker container, and the best way I see to do this is to build a docker image on the CI/CD server and send it over to the docker server to be deployed.
This is the Dockerfile being used to create that image:
# 1) Build the site with Hugo (extended edition)
FROM klakegg/hugo:ext-alpine AS builder
WORKDIR /src
COPY wizard-cat/ ./wizard-cat/
# build to /out so we can copy in the next stage
ARG BASE_URL="/"
RUN hugo --minify -t terminal --source ./wizard-cat --destination /out --baseURL "${BASE_URL}"
# 2) Serve the static site with Nginx
FROM nginx:1.27-alpine
COPY --from=builder /out/ /usr/share/nginx/html/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
And just to check that the image works, I built and ran it locally using the docker build and docker run commands.
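Something along these lines works as a local sanity check (the image tag matches what the pipeline uses later; the host port and the cleanup are just for the test):

# build the image from the repo root
docker build -t wizardcatblog:latest .

# run it and check the site in a browser at http://localhost:8080
docker run -d --name wizardcatblog -p 8080:80 wizardcatblog:latest

# clean up the test container afterwards
docker stop wizardcatblog && docker rm wizardcatblog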
3. Finally, the automation#
On every push to the repo, we:
- Build the site with Hugo
- Build a docker image
- Ship the image over to the docker server
- Start a new container with the new image
This workflow file (the deploy.yaml from the layout above) does exactly that:
name: compile docker image

on:
  push:
    branches: ["beta"]

jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4

      - name: Build site with Hugo
        shell: bash
        working-directory: ./wizard-cat
        run: |
          hugo --minify -t terminal --baseURL="/"

      - name: Build Docker image
        shell: bash
        run: |
          docker build -t wizardcatblog:latest .

      - name: Deploy to server (stream image + restart container)
        shell: bash
        env:
          SSH_HOST: ${{ secrets.SSH_HOST }}
          SSH_USER: ${{ secrets.SSH_USER }}
        run: |
          set -euo pipefail

          # 1) Stream the image to the server and load it into Docker
          docker save wizardcatblog:latest | gzip | \
            ssh -T -o BatchMode=yes "${SSH_USER}@${SSH_HOST}" '
              # Uncomment if you use rootless Docker:
              # export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
              gunzip | docker load
            '

          # 2) Restart the container
          ssh -T -o BatchMode=yes "${SSH_USER}@${SSH_HOST}" '
            set -euo pipefail
            docker stop wizardcatblog || true
            docker rm wizardcatblog || true
            # For rootless Docker, bind a high port (e.g., 8080:80)
            docker run -d --name wizardcatblog -p 80:80 wizardcatblog:latest
            docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Ports}}"
          '
The repo is currently private, but I’m not sure whether I’ll share it publicly in the future, so I decided to use GitHub’s repository secrets:
- SSH_HOST - The docker server’s IP address
- SSH_USER - The user used to connect to the docker server

My runner is self-hosted and already has SSH keys configured, so the workflow doesn’t need to handle keys.
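Both secrets can be set from a terminal with the GitHub CLI; something like this, with placeholder values:

# set the deployment secrets (gh CLI assumed; values are examples)
gh secret set SSH_HOST --body "192.0.2.10"
gh secret set SSH_USER --body "deploy"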
I also have a .bat file stored on my machine that automates the whole process of switching branches, staging, committing and pushing the changes.
echo Updating local data...
robocopy "C:\Users\MaskedTitan\Documents\Obsidian Vault\wizard-cat-blog\posts" "C:\Users\MaskedTitan\Documents\blog-pipeline\wizard-cat\content\posts" /mir
python3 C:\Users\MaskedTitan\Documents\blog-pipeline\wizard-cat\images.py
echo Updated local data
set /p commitName=Enter commit name:
echo pushing update...
git checkout beta
git add .
git commit -m "%commitName%"
git push
echo pushed update
Staging and production versions#
I want to be able to preview the changes I’ve made, so I decided to run another instance of the site that is updated only when I’m satisfied with the changes. This is where having two branches comes in handy: one holds the “production” code (the main branch) and the other holds the “development” code (the beta branch). I have a modified version of the GitHub Actions workflow file that only runs when there is a change on the main branch and deploys the container to a different host port.
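The main-branch workflow is essentially the same file with the trigger changed to the main branch and the final docker run line adjusted so the two instances don’t collide on the server. Conceptually it comes down to this (the container name, tag and port for the second instance are illustrative, not necessarily what I use):

# beta workflow (shown above): staging instance
docker run -d --name wizardcatblog -p 80:80 wizardcatblog:latest

# main workflow: production instance under a different name and host port
docker run -d --name wizardcatblog-prod -p 8081:80 wizardcatblog-prod:latest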
How I use it#
- I make a change in Obsidian
- I run update.bat to see the changes on the staging site
- I then merge the beta branch into the main branch to promote the changes to production (see the sketch below)
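The promotion step is just a merge from my machine; something like this, assuming the local clone already tracks both branches:

# promote the previewed changes from beta to main
git checkout main
git merge beta
git push

# go back to the working branch
git checkout beta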
Conclusion#
What used to be a clunky checklist—copy files, rebuild, stop, remove, run—became a single action: push to beta to preview, merge to main to go live. GitHub Actions compiles the Hugo site, builds the Docker image, streams it to the server over SSH, and (re)starts the container. No secrets for keys in CI, no fiddling with paths on the server, and far fewer opportunities for “wait, how did I do this last time?” bugs.
A few improvements I may add next:
- Health checks & smoke tests: fail the deploy if the container isn’t healthy or a simple HTTP check doesn’t pass (see the sketch below).
- Blue/green (zero-downtime): start a new container on a different port, verify, then flip the reverse proxy.
- Registry pull flow: push to GHCR and docker pull on the server for faster, cacheable deploys.
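For the health-check idea, a few lines appended to the end of the deploy step would probably be enough, reusing the SSH_HOST value already in the environment (the sleep duration and path are placeholders):

# smoke test: fail the job if the fresh container doesn't answer
sleep 5   # give nginx a moment to come up
curl --fail --silent --show-error "http://${SSH_HOST}/" > /dev/null \
  || { echo "smoke test failed"; exit 1; }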
For now, this pipeline is exactly what I needed: repeatable, fast, and boring, in the best possible way. Future me, if you forget: edit on beta to preview; merge to main when happy.