GetHook — Production Deployment Guide
Architecture overview
Two binaries, both containerized:
- api — HTTP server (port 8080), stateless
- worker — delivery poller, Postgres-backed queue
Both connect to a single Postgres instance. No Redis, no external queue.
Horizontal scaling
API instances — scale freely
The API server is completely stateless. Every request reads from Postgres, no in-memory state, no sticky sessions. Run as many api instances as needed behind a load balancer.
[Load Balancer]
├── api instance 1 (:8080)
├── api instance 2 (:8080)
└── api instance N (:8080)
    └── [Postgres]

Worker instances — safe to scale
The worker uses FOR UPDATE SKIP LOCKED in PollDue (internal/events/store.go). This is a Postgres-native distributed lock — multiple worker instances each grab a non-overlapping batch of events atomically. No event is processed twice.
[Worker 1] \
[Worker 2] ──► Postgres events table (SKIP LOCKED = no double-processing)
[Worker N] /

2–10 worker replicas is a reasonable range. Tune WORKER_MAX_CONCURRENCY per instance.
Migration race condition (fix before scaling API replicas)
migrate.go does a SELECT COUNT check then applies — not atomic. Two API replicas starting simultaneously can both see count = 0 and both try to apply the same migration.
Option A — Postgres advisory lock (minimal change)
Wrap the entire Migrate() body:
if _, err := db.ExecContext(ctx, "SELECT pg_advisory_lock(1234567890)"); err != nil {
	return err
}
defer db.ExecContext(ctx, "SELECT pg_advisory_unlock(1234567890)")

Option B — Separate migration binary (recommended for production)
Add cmd/migrate/main.go, run it as a pre-deploy job or Kubernetes init container. API replicas start only after it exits 0. This is the production-standard pattern.
Recommended platforms
| Platform | Fit | Notes |
|---|---|---|
| Fly.io | Great | Run api + worker as separate apps, managed Postgres add-on |
| Railway | Great | Multi-service per project, easy env config |
| AWS ECS + RDS | Production-grade | ECS task for api (autoscale), separate task for worker |
| GCP Cloud Run | Good for API | Workers need always-on — use Cloud Run jobs or GKE instead |
| Kubernetes (any) | Full control | Deployment for api, Deployment for worker, Job for migrations |
Environment variables
| Variable | Required | Description |
|---|---|---|
| DATABASE_URL | Yes | Postgres connection string |
| ENCRYPTION_KEY | Yes | 64 hex chars (32 bytes AES-256). Process refuses to start without it. |
| PORT | No | API listen port (default 8080) |
| BASE_URL | No | Public base URL of the API |
| MIGRATIONS_DIR | No | Path to migrations folder (default /app/migrations) |
| CORS_ORIGIN | No | Allowed CORS origin (default *) |
| WORKER_POLL_INTERVAL | No | How often worker polls for due events (default 5s) |
| WORKER_MAX_CONCURRENCY | No | Max concurrent deliveries per worker instance (default 10) |
| DELIVERY_TIMEOUT | No | HTTP timeout for outbound delivery (default 30s) |
Kubernetes example (minimal)
# Migration job — runs before API/worker start
apiVersion: batch/v1
kind: Job
metadata:
  name: gethook-migrate
spec:
  template:
    spec:
      containers:
        - name: migrate
          image: your-registry/gethook-migrate:latest
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef: { name: gethook, key: database_url }
            - name: ENCRYPTION_KEY
              valueFrom:
                secretKeyRef: { name: gethook, key: encryption_key }
      restartPolicy: OnFailure
---
# API deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gethook-api
spec:
  replicas: 3
  selector:
    matchLabels: { app: gethook-api }
  template:
    metadata:
      labels: { app: gethook-api }
    spec:
      containers:
        - name: api
          image: your-registry/gethook:api
          ports:
            - containerPort: 8080
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef: { name: gethook, key: database_url }
            - name: ENCRYPTION_KEY
              valueFrom:
                secretKeyRef: { name: gethook, key: encryption_key }
---
# Worker deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gethook-worker
spec:
  replicas: 3
  selector:
    matchLabels: { app: gethook-worker }
  template:
    metadata:
      labels: { app: gethook-worker }
    spec:
      containers:
        - name: worker
          image: your-registry/gethook:worker
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef: { name: gethook, key: database_url }
            - name: ENCRYPTION_KEY
              valueFrom:
                secretKeyRef: { name: gethook, key: encryption_key }
            - name: WORKER_MAX_CONCURRENCY
              value: "10"