Monitoring My Hugo Blog with Prometheus and Grafana#
Like a lot of Hugo users, I love how fast and simple a static site can be. But “static” doesn’t mean I don’t care about uptime, SSL expiry, or response times. I wanted an easy way to keep an eye on my blog (blog.wizard.cat) and get notified if something breaks.
Here’s how I set up Prometheus, the Blackbox Exporter, and Grafana to build my own monitoring stack — all running inside LXC containers on Proxmox.
Why Prometheus + Blackbox?#
Hugo sites don’t expose application-level metrics the way dynamic apps do. Instead, the best way to monitor them is “outside-in”:
- HTTP probes: does the homepage return 200 OK?
- Content checks: does the page include expected text?
- TLS expiry: how many days until my certificate runs out?
- ICMP ping: is the host itself reachable?
Blackbox Exporter runs these probes, Prometheus scrapes the results, and Grafana visualises them.
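The TLS check is less magic than it sounds: Blackbox exposes the certificate’s expiry as a Unix timestamp in the `probe_ssl_earliest_cert_expiry` metric, so “days until my certificate runs out” is just arithmetic. A quick sketch:

```python
import time

def days_until_expiry(expiry_epoch, now=None):
    """Convert probe_ssl_earliest_cert_expiry (a Unix timestamp)
    into days remaining on the certificate."""
    if now is None:
        now = time.time()
    return (expiry_epoch - now) / 86400

# A cert expiring 30 days after a fixed "now":
print(days_until_expiry(1_700_000_000 + 30 * 86400, now=1_700_000_000))  # → 30.0
```

In PromQL the same calculation is `(probe_ssl_earliest_cert_expiry - time()) / 86400`, which is exactly what the Grafana panel ends up graphing.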
Installing Blackbox Exporter#
On my Prometheus LXC I installed the exporter as a systemd service:
```bash
curl -LO https://github.com/prometheus/blackbox_exporter/releases/download/v0.25.0/blackbox_exporter-0.25.0.linux-amd64.tar.gz
tar -xzf blackbox_exporter-0.25.0.linux-amd64.tar.gz
mv blackbox_exporter-0.25.0.linux-amd64/blackbox_exporter /usr/local/bin/
```
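The unit file is roughly the following sketch; the `blackbox` service user and the config path are my setup (create the user with `useradd -r` or run as whatever suits your container):

```ini
[Unit]
Description=Blackbox Exporter
After=network-online.target
Wants=network-online.target

[Service]
User=blackbox
ExecStart=/usr/local/bin/blackbox_exporter --config.file=/etc/blackbox.yml
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After a `systemctl daemon-reload` and `systemctl enable --now blackbox_exporter`, the exporter listens on port 9115 by default.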
Minimal config (/etc/blackbox.yml) — the exporter requires the modules to live under a top-level `modules:` key:

```yaml
modules:
  http_2xx:
    prober: http
    timeout: 10s
  http_content_ok:
    prober: http
    timeout: 15s
    http:
      fail_if_body_not_matches_regexp:
        - "WizardCat Blog"
  tls_only:
    prober: tcp
    timeout: 10s
    tcp:
      tls: true
  icmp_ping:
    prober: icmp
    timeout: 5s
```
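Before wiring up Prometheus, you can smoke-test a module by hand with something like `curl 'http://127.0.0.1:9115/probe?target=https://blog.wizard.cat/&module=http_2xx'`. The response is plain Prometheus text exposition format; as a sketch, here is how you might pull one metric out of a trimmed, illustrative response:

```python
# Trimmed, illustrative /probe response (real output has many more metrics).
SAMPLE = """\
probe_http_status_code 200
probe_duration_seconds 0.213
probe_success 1
"""

def metric_value(body, name):
    """Find a 'name value' line in Prometheus text exposition format."""
    for line in body.splitlines():
        if line.startswith(name + " "):
            return float(line.split()[1])
    return None

print(metric_value(SAMPLE, "probe_success"))  # → 1.0
```

`probe_success` is the metric everything else hangs off: 1 means the probe passed, 0 means it failed.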
Prometheus scrape jobs#
Then I pointed Prometheus at the exporter (prometheus.yml):

```yaml
scrape_configs:
  - job_name: 'blackbox-http'
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
      - targets:
          - 'https://blog.wizard.cat/'
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9115

  - job_name: 'blackbox-content'
    metrics_path: /probe
    params:
      module: [http_content_ok]
    static_configs:
      - targets: ['https://blog.wizard.cat/']
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9115

  - job_name: 'blackbox-tls'
    metrics_path: /probe
    params:
      module: [tls_only]
    static_configs:
      - targets: ['blog.wizard.cat:443']
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9115

  - job_name: 'blackbox-ping'
    metrics_path: /probe
    params:
      module: [icmp_ping]
    static_configs:
      - targets: ['blog.wizard.cat']
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9115
```
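The `relabel_configs` block looks arcane but does one simple thing: the real target is moved into a `?target=` query parameter, and `__address__` is rewritten so Prometheus scrapes the exporter itself. In effect, every scrape hits a URL like the one this (hypothetical) helper builds:

```python
from urllib.parse import urlencode

def probe_url(target, module, exporter="127.0.0.1:9115"):
    """Rebuild the URL Prometheus ends up scraping after relabeling:
    __param_target becomes ?target=..., and __address__ is replaced
    by the exporter's own address."""
    return f"http://{exporter}/probe?{urlencode({'target': target, 'module': module})}"

print(probe_url("https://blog.wizard.cat/", "http_2xx"))
# → http://127.0.0.1:9115/probe?target=https%3A%2F%2Fblog.wizard.cat%2F&module=http_2xx
```

The `instance` relabel is just cosmetic: it keeps the human-readable target as the label on every metric instead of `127.0.0.1:9115`.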
Visualising in Grafana#
Grafana was already running in another LXC. I added Prometheus as a data source and started building panels.
Example queries#
- Uptime (last 24h):

```promql
avg_over_time(probe_success{instance="https://blog.wizard.cat/"}[24h]) * 100
```

- Latency (P90 over 5m) — Blackbox exposes probe durations as gauges rather than histograms, so the quantile comes from `quantile_over_time`:

```promql
quantile_over_time(0.9, probe_duration_seconds{instance="https://blog.wizard.cat/"}[5m])
```

- Ping round trip:

```promql
probe_icmp_duration_seconds{instance="blog.wizard.cat", phase="rtt"}
```
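The same expressions can drive alerts, which is how I close the “get notified if something breaks” loop. I won’t cover Alertmanager here, but rules along these lines (the names are mine, thresholds to taste) are the natural next step:

```yaml
groups:
  - name: blog-probes
    rules:
      - alert: BlogDown
        expr: probe_success{instance="https://blog.wizard.cat/"} == 0
        for: 5m
      - alert: CertExpiresSoon
        expr: probe_ssl_earliest_cert_expiry{instance="blog.wizard.cat:443"} - time() < 14 * 86400
        for: 1h
```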
Visualisations#
- Stat panels for up/down.
- Time series for latency trends.
- State timeline for probe success (green = up, red = down).
Results#
Now I have a dashboard showing:
✅ Uptime timeline
🚦 Response time percentiles
📡 Ping round trip
Whenever my Hugo blog has an issue, I know about it before readers do (assuming anyone reads it).
Closing thoughts#
Monitoring a static Hugo site is simpler than a dynamic app, but still worthwhile. With Prometheus + Blackbox Exporter + Grafana, I’ve got a free, flexible, self-hosted solution that tells me whether blog.wizard.cat is healthy — from DNS resolution, to HTTPS, to page content.