Black Square
initialsdb is a public bulletin board (message store) implemented in Go + React + PostgreSQL. It uses proof of work and rate limiting to fight bots.
This repo: code + infrastructure. The latter (Makefile, Dockerfile, docker-compose.yml) automates three environments:
- debug: Postgres runs inside a dev mode container. Go is debuggable in VS Code, React in the browser with F12 and the React Dev Tools addon.
- dev: almost prod, to test containers locally; no Caddy, no debugging.
- prod: make commands update everything on the VPS; for the initial setup, follow this README below.
Technology stack:
- Browser → Caddy: HTTPS (443).
- Caddy → app: HTTP over the Docker network (external).
- App → Postgres: TCP inside the Docker network (internal).
- Volumes: Postgres data and Caddy TLS state.
- Containers use debian:bookworm-slim (Alpine lacks debugging tools).
- No building inside containers. Cross-compilation to ARM64 happens on dev.
- Go compiles to x86 and arm64 (no build packages hogging the VPS).
- React addresses accessibility/EU compliance via shadcn/ui and Radix UI.
- PostgreSQL: web-ready (write many, read many), psql CLI.
- 12-factor inspired.
- initialsdb runs its own Postgres and data volume. Caddy is global per VPS.
Deployment requires setting up a VPS and running the Makefiles, e.g.
Dev:
cd ~/opt/initialsdb/src
make build
cd ~/opt/initialsdb/deploy
make copy
make copy-backup-script
make install-backup-cron
VPS:
cd /opt/initialsdb
make up
Read below for details on how to set this all up, stop/redeploy, clone, and debug.
Dev: Ubuntu 22.04 x86, VPS: Ubuntu 22.04 arm64 (Hetzner CAX11). On the VPS: create a user deploy with passwordless sudo, ensure login with SSH keys and a passphrase, and disable password logins. Optionally set the ufw rules, see ufw.sh.
Create /opt (root owned):
VPS:
sudo mkdir -p /opt
sudo chown root:root /opt
sudo chmod 755 /opt
and then caddy and the app folder (initialsdb), all owned by deploy:
sudo mkdir -p /opt/caddy
sudo mkdir -p /opt/initialsdb
sudo chown -R deploy:deploy /opt/caddy
sudo chown -R deploy:deploy /opt/initialsdb
Set up DNS records with www and the wildcards @ and *.
Create Caddyfile with your domain name and point to the app container:
initials.dev, www.initials.dev {
	reverse_proxy initialsdb-app:8080
	encode gzip
}
Clone this repo to dev. On dev:
cd ~/opt/initialsdb/src
make build
This compiles and distributes the binaries and their assets (JS) to bin and web, respectively, on dev and prod.
cd ~/opt/initialsdb/dev
make up
This will create and run all the containers on dev:
[+] up 5/5
✔ Image initialsdb-dev
✔ Network initialsdb_dev_app_net
✔ Volume initialsdb_postgres_dev
✔ Container initialsdb-dev-db
✔ Container initialsdb-dev-app
Go to http://localhost:8080/, the app should work now.
To simply stop/restart all the containers (data intact), use
make down
make up
This is enough for code updates: make down, rebuild, make up.
If Dockerfile changes (rarely):
make soft-reset
make up
If you are done testing and do not care about any Postgres data (!), nuke it all:
make hard-reset
prod (VPS) is almost identical to dev, except that prod:
- adds a reverse proxy (Caddy),
- must install and run the Postgres backup,
- must be more careful about .secrets (though everything is the same routine).
On dev, inside /prod, add new passwords to .secrets with
openssl rand -base64 32
Adjust the VPS if it already has a Makefile and an older instance running.
VPS:
cd /opt/initialsdb
make down
This will stop the containers and also remove them. Left intact: images (build time), volumes (DB data), networks (unless orphaned).
If the prod Dockerfile got updated, remove the image but keep the DB volume intact:
VPS:
cd /opt/initialsdb
make soft-reset
To nuke the whole old app (including data!):
VPS:
cd /opt/initialsdb
make hard-reset
To nuke the Caddy container and the Docker container network edge_net:
cd /opt/caddy
make clean
make net-remove
On dev:
cd ~/opt/initialsdb/deploy
make copy
make copy-backup-script
make install-backup-cron
VPS (if never run before or after make net-remove):
cd /opt/caddy
make net
VPS (if Caddyfile updated, skip otherwise):
cd /opt/caddy
make restart
VPS:
cd /opt/initialsdb
make up
This should output:
[+] up 5/5
✔ Image initialsdb-prod
✔ Network initialsdb_app_net
✔ Volume initialsdb_postgres_prod
✔ Container initialsdb-db
✔ Container initialsdb-app
Some commands are destructive. They are:
make hard-reset
make nuke-db
docker volume rm initialsdb_postgres_prod
docker compose down --volumes
Use them only if you explicitly want to destroy all data and start everything from scratch!
For normal updates, use make down and make up, or make soft-reset if the prod Dockerfile is updated.
Also, I do not use binds to regular files outside the containers, but if for some reason one does that (see dev/docker-compose.yml.volume-bind), then removing the bind destroys data, i.e.
sudo rm -rf ./volumes/postgres
Initially, I had these binds in dev, but removed them as they turned the data volume (mounted into containers) into a kind of pointer. Removing the volume would not destroy data, but removing the bind would. I did not need any of that.
The Postgres container initialsdb-db does not own the data; it owns the Postgres process and its file system.
The container can be removed entirely:
VPS:
docker stop initialsdb-db
docker rm initialsdb-db
The data will remain intact. Postgres stores data in /var/lib/postgresql/data; Docker mounts this volume into the container initialsdb-db. The containers can be stopped, removed, rebuilt with new images, the whole VPS can reboot, and the data volume survives.
Inside .gitignore put these lines:
# --------------------------------------------------
# Never commit .secrets (.env fine as they are public)
# --------------------------------------------------
.secrets
.secrets.*
!.secrets.example
!.secrets.*.example
Rule order matters. Git applies ignores top to bottom:
- .secrets ignores .secrets.
- .secrets.* ignores everything starting with .secrets.
- !.secrets.example punches a hole for the base example.
- !.secrets.*.example punches holes for all variant examples.
So now git won't commit .secrets, .secrets.local, or .secrets.prod, should there be any later. It will commit .secrets.example and .secrets.local.example.
Just in case, for extra safety, especially if the dev machine ever gets busted, generate a backup SSH key, add it for use, and also stash it somewhere other than dev.
All this is very optional, and a bit of a hassle, but it might be useful to know how to deal with multiple SSH keys. Otherwise, one can also simply stash the main key. Hetzner also provides the VPS recovery with the account credentials.
On dev:
ssh-keygen -t ed25519 -a 100 \
-f ~/.ssh/id_ed25519_vps_backup \
-C "deploy@vps-backup"
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519_vps_backup
ssh-copy-id -f -i ~/.ssh/id_ed25519_vps_backup.pub deploy@vps
See my .ssh/config.example with both keys; the old ~/.ssh/config must be replaced manually.
VPS:
cat ~/.ssh/authorized_keys
It must show two keys.
On dev:
ssh -i ~/.ssh/id_ed25519_vps vps
ssh -i ~/.ssh/id_ed25519_vps_backup vps
ssh vps
All three should succeed.
To see the default key in use, log in to the VPS with -v:
ssh -v vps
and look for "Offering public key:".
To set only the specific key to use, such as id_ed25519_vps_backup:
ssh-add -D # remove all keys
ssh-add ~/.ssh/id_ed25519_vps_backup
ssh vps
VPS:
cd /opt/initialsdb
docker exec -it initialsdb-db psql -U initialsdb -d initialsdb
Inside psql (initialsdb=#):
-- list tables
\dt
-- describe the listings table
\d listings
-- see some rows
SELECT id, created_at, body, is_hidden
FROM listings
ORDER BY created_at DESC
LIMIT 10;
-- count visible listings
SELECT COUNT(*) FROM listings WHERE is_hidden = false;
Exit psql with \q.
If the DB credentials change, to quickly get the DB name, user, and password:
VPS:
cd /opt/initialsdb
docker exec -it initialsdb-db env | grep POSTGRES
A quick direct inspection without getting into the psql prompt:
cd /opt/initialsdb
docker exec -it initialsdb-db psql -U initialsdb -d initialsdb -c \
"SELECT id, created_at, body FROM listings ORDER BY created_at DESC LIMIT 5;"It changes nothing, tested! What actually happens on VPS reboot:
- Linux boots.
- systemd starts services.
- The Docker daemon starts automatically.
- Docker looks at the containers it knows about.
- Containers with a restart policy are handled.
- All the containers have the restart: unless-stopped policy in their docker-compose.yml files.
On dev:
cd ~/opt
make clone-app SRC=initialsdb DST=yournewapp
It will do two things:
- copy initialsdb while skipping mounts, binaries, .git, and .secrets, but not .secrets.example,
- search and replace every occurrence of initialsdb with yournewapp.
This looks archaic, but it is simple and reliable.
Avoid variable interpolation, nonlocal ../ environments, and aliases inside docker-compose.yml.
Docker removes tight coupling with the host OS, but why Docker Compose?
Instead of prod/docker-compose.yml and prod/Makefile we could have one slightly more complicated prod/Makefile:
SHELL := /bin/bash
.SHELLFLAGS := -e -o pipefail -c

APP_NAME := initialsdb
APP_IMAGE := initialsdb-prod
APP_CONTAINER := initialsdb-app
DB_CONTAINER := initialsdb-db
APP_NET := initialsdb_app_net
EDGE_NET := edge_net
DB_VOLUME := initialsdb_postgres_prod
ENV_FILES := --env-file .env --env-file .secrets

.PHONY: up down build app db networks volumes clean logs ps

# ---------------------------
# Top-level lifecycle
# ---------------------------
up: networks volumes build db app
	@echo "✅ initialsdb is up"

down:
	docker stop $(APP_CONTAINER) $(DB_CONTAINER) 2>/dev/null || true
	docker rm $(APP_CONTAINER) $(DB_CONTAINER) 2>/dev/null || true
	@echo "🛑 Containers stopped and removed"

clean: down
	docker image rm $(APP_IMAGE) 2>/dev/null || true
	@echo "🧹 Images cleaned"

logs:
	docker logs -f $(APP_CONTAINER)

ps:
	docker ps --filter name=$(APP_NAME)

# ---------------------------
# Infra primitives
# ---------------------------
networks:
	@docker network inspect $(APP_NET) >/dev/null 2>&1 || \
		docker network create --internal $(APP_NET)
	@docker network inspect $(EDGE_NET) >/dev/null 2>&1 || \
		docker network create $(EDGE_NET)
	@echo "🌐 Networks ready"

volumes:
	@docker volume inspect $(DB_VOLUME) >/dev/null 2>&1 || \
		docker volume create $(DB_VOLUME)
	@echo "💾 Volume ready"

# ---------------------------
# Build & run
# ---------------------------
build:
	docker build -t $(APP_IMAGE) .

db:
	docker run -d \
		--name $(DB_CONTAINER) \
		--restart unless-stopped \
		$(ENV_FILES) \
		-v $(DB_VOLUME):/var/lib/postgresql/data \
		--network $(APP_NET) \
		postgres:16-bookworm
	@echo "⏳ Waiting for Postgres to be ready..."
	@until docker exec $(DB_CONTAINER) pg_isready -U initialsdb -d initialsdb >/dev/null 2>&1; do sleep 1; done
	@echo "🐘 Postgres ready"

app:
	docker run -d \
		--name $(APP_CONTAINER) \
		--restart unless-stopped \
		$(ENV_FILES) \
		--network $(APP_NET) \
		--network $(EDGE_NET) \
		-p 8080:8080 \
		$(APP_IMAGE)
This is more verbose than docker-compose.yml, yet it guarantees fewer bugs with environment loading and variable expansion. One tool less as well. *.yml files are tiny, but their debugging time is not.
This git repo also includes a complete running application called initialsdb, which is Go with sqlc and net/http (no frameworks). Go also serves a React SPA, a JS artifact produced as vite + React output in web.
The best way to understand the system is to extend it, e.g. add a global counter that displays the total number of stored messages on the landing page above the search bar.
To add the global counter, first add the SQL query to db/queries.sql:
-- name: CountVisibleListings :one
SELECT COUNT(*)::bigint
FROM listings
WHERE is_hidden = FALSE;
followed by
cd ~/opt/initialsdb/src/backend
sqlc generate
It will create the code inside db/queries.sql.go:
func (q *Queries) CountVisibleListings(ctx context.Context) (int64, error) {
	row := q.db.QueryRowContext(ctx, countVisibleListings)
	var column_1 int64
	err := row.Scan(&column_1)
	return column_1, err
}
This is the brilliance of sqlc: it is a static generator. Ask AI to write SQL queries and save on tokens, as the Go code will be generated by sqlc. No ORMs, no SQL strings inside Go.
Add the endpoint listings/count.go:
package listings

import (
	"context"
	"database/sql"
	"net/http"
	"time"

	"app.root/db"
	"app.root/guards"
	"app.root/httpjson"
)

type CountHandler struct {
	DB     *sql.DB
	Guards []guards.Guard
}

func (h *CountHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodGet {
		httpjson.WriteError(w, http.StatusMethodNotAllowed, "INVALID_INPUT", "method not allowed")
		return
	}
	for _, g := range h.Guards {
		if !g.Check(r) {
			httpjson.Forbidden(w, "RATE_LIMITED", "request blocked")
			return
		}
	}
	/*
		Add a timeout to prevent the request goroutine from blocking
		indefinitely if the DB stalls or the network hiccups for some
		strange reason:
		- the query auto-cancels,
		- the DB receives the cancel signal,
		- the goroutine is freed,
		- the client gets a 500.
	*/
	ctx, cancel := context.WithTimeout(r.Context(), 3*time.Second)
	defer cancel()
	q := db.New(h.DB)
	n, err := q.CountVisibleListings(ctx)
	if err != nil {
		httpjson.InternalError(w, "count failed")
		return
	}
	httpjson.WriteOK(w, map[string]int64{
		"count": n,
	})
}
along with its corresponding setup and call inside routes/routes.go:
// ────────────────────────────────────────
// Listings: count (GET)
// ────────────────────────────────────────
mux.Handle("/api/listings/count",
	&listings.CountHandler{
		DB:     db,
		Guards: guardsCommon,
	},
)
This is a bit of a hassle, but it has its pros.
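The httpjson helpers used above are this repo's own thin wrappers. As a rough sketch of what WriteOK and WriteError plausibly look like (assumed shapes, not the actual package):
package httpjson

import (
	"encoding/json"
	"net/http"
)

// WriteOK sketches a 200 JSON response (assumed shape, not the actual package).
func WriteOK(w http.ResponseWriter, v any) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusOK)
	_ = json.NewEncoder(w).Encode(v)
}

// WriteError sketches the status + code + message shape used in count.go.
func WriteError(w http.ResponseWriter, status int, code, msg string) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(status)
	_ = json.NewEncoder(w).Encode(map[string]string{"error": code, "message": msg})
}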
Guards are manually applied per handler; there is no middleware pattern. A guard simply outputs true or false, which is checked inside a handler's loop over the guards. They are opt-in.
A set of guards per handler/route is hard-coded in routes.go, but the guards can be disabled via their boolean flags inside .env.
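As a sketch of the contract (the actual guards package may differ), a guard is plausibly a one-method interface, with the .env flag wrapped around it:
package guards

import "net/http"

// Guard is the opt-in contract each handler loops over: no middleware
// chain, just a boolean verdict per request. (Sketch; the real package may differ.)
type Guard interface {
	Check(r *http.Request) bool
}

// Flagged wraps another guard behind a boolean flag from .env, matching
// the "disabled via flags" behavior described above. (Hypothetical name.)
type Flagged struct {
	Enabled bool
	Inner   Guard
}

func (g Flagged) Check(r *http.Request) bool {
	if !g.Enabled {
		return true // a disabled guard always passes
	}
	return g.Inner.Check(r)
}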
The paradigm is:
Guards protect scarce resources, not endpoints.
CreateHandler handles POST, parses the body, and mutates the DB. Therefore it is maximally protected with proof of work (PoW), rate limit, and body size guards.
SearchHandler and CountHandler do not use PoW.
PoW is a computation imposed on the browser, see this line
const nonce = await solvePoW(...)
inside App.tsx. At the moment it blocks the UI, ignores AbortController, and cannot be interrupted. If the user navigates away, the computation continues until solved.
It is a concrete barrier tied to:
challenge + expiry + IP + UserAgent.
- A solved token cannot be reused from another IP (a verification sketch follows this list).
- It cannot be replayed multiple times.
- It cannot be farmed centrally and distributed.
- It cannot be shared between bot workers.
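A hedged sketch of how such binding can be enforced (hypothetical names, not the repo's actual guard code): derive the challenge from expiry, IP, and UA with an HMAC, and recompute it from the current request at verification time.
package pow

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"time"
)

var serverSecret = []byte("replace-me") // hypothetical secret

// makeChallenge derives the challenge from expiry, IP, and UA, so a solved
// token is only valid for this exact client.
func makeChallenge(expiry time.Time, ip, ua string) string {
	mac := hmac.New(sha256.New, serverSecret)
	fmt.Fprintf(mac, "%d|%s|%s", expiry.Unix(), ip, ua)
	return hex.EncodeToString(mac.Sum(nil))
}

// checkChallenge recomputes the HMAC from the *current* request: a token
// farmed on one IP or shared between workers fails verification.
func checkChallenge(challenge string, expiry time.Time, ip, ua string, now time.Time) bool {
	if now.After(expiry) {
		return false // expired: a token cannot outlive its TTL
	}
	return hmac.Equal([]byte(challenge), []byte(makeChallenge(expiry, ip, ua)))
}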
Potential future optimizations:
- replay store as a time-bucket wheel,
- shard PoWGuard by IP,
- move replay tracking to Redis for multi-instance scale.
PoW has two parameters: the difficulty level and the TTL. The TTL cannot be too small, or a slower device won't be able to complete the challenge. It cannot be too big, or an attacker can solve the challenge quickly and then bombard the endpoint with the solved challenge for the remaining TTL. The recommendation is 2-3x the time a slow computer needs to solve it. For difficulty level 21, the TTL is set to 100s.
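For illustration, assuming the common "leading zero bits" convention (the repo's exact encoding may differ), difficulty 21 means the browser grinds nonces until SHA-256(challenge|nonce) starts with 21 zero bits, about 2^21 hashes of expected work:
package pow

import (
	"crypto/sha256"
	"math/bits"
	"strconv"
)

// leadingZeroBits counts zero bits from the start of the digest.
func leadingZeroBits(sum [32]byte) int {
	n := 0
	for _, b := range sum {
		if b == 0 {
			n += 8
			continue
		}
		n += bits.LeadingZeros8(b)
		break
	}
	return n
}

// solved reports whether nonce solves the challenge at the given difficulty.
// At difficulty 21 the expected work is ~2^21 hashes, hence the generous TTL.
func solved(challenge string, nonce uint64, difficulty int) bool {
	sum := sha256.Sum256([]byte(challenge + "|" + strconv.FormatUint(nonce, 10)))
	return leadingZeroBits(sum) >= difficulty
}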
The rate limiter's first version leaked memory; the second was a simple fixed window. The third variant is a lot of things, and supposedly fixes the vulnerability to synchronized abuse (not tested); a sliding-window sketch follows the list:
- Proper X-Forwarded-For parsing (first IP only).
- IPv6 normalization.
- Sliding window per IP (not a global reset bucket).
- Per-IP expiry cleanup.
- No global synchronized window resets.
- Works cleanly behind Caddy.
- Memory bounded by natural expiry.
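A minimal sketch of such a per-IP sliding window (assumed parameters; the real guard differs in details):
package ratelimit

import (
	"sync"
	"time"
)

type slidingLimiter struct {
	mu     sync.Mutex
	window time.Duration
	limit  int
	hits   map[string][]time.Time // IP -> timestamps within the window
}

func newSlidingLimiter(window time.Duration, limit int) *slidingLimiter {
	return &slidingLimiter{window: window, limit: limit, hits: map[string][]time.Time{}}
}

// Allow prunes expired timestamps for this IP, then checks the limit.
// There is no global reset: memory stays bounded because every entry
// naturally expires after the window.
func (l *slidingLimiter) Allow(ip string, now time.Time) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	cutoff := now.Add(-l.window)
	kept := l.hits[ip][:0]
	for _, t := range l.hits[ip] {
		if t.After(cutoff) {
			kept = append(kept, t)
		}
	}
	if len(kept) >= l.limit {
		l.hits[ip] = kept
		return false
	}
	l.hits[ip] = append(kept, now)
	return true
}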
Timeouts are protection against DB misbehavior, to prevent goroutine pile-up.
Each request handler has a timeout: read/lightweight endpoints get 3s, CreateHandler gets 5s. If the DB stalls or there is a network hiccup, the context is canceled: the driver sends a cancellation signal to PostgreSQL or aborts the TCP connection, returns context deadline exceeded, and the handler stops waiting and returns a 500 or a timeout. Otherwise the goroutine would hang indefinitely.
Without a context, if a goroutine blocks forever, its connection pool slot remains busy; eventually the pool is exhausted, the entire app stalls, and we get a cascading failure.
Context cancellation works as a short-circuit:
HTTP request
→ handler
→ service layer
→ db
→ redis
→ external API
Once it activates, everything downstream stops.
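On the Go side, the short-circuit surfaces as context.DeadlineExceeded. A minimal sketch of inspecting it (the handlers here simply return a 500):
package dbcheck

import (
	"context"
	"database/sql"
	"errors"
	"log"
	"time"
)

// ping sketches the short-circuit: once the deadline fires, the driver
// cancels the in-flight query, the pool slot frees up, and the error is
// inspectable upstream.
func ping(conn *sql.DB) {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	var one int
	err := conn.QueryRowContext(ctx, "SELECT 1").Scan(&one)
	if errors.Is(err, context.DeadlineExceeded) {
		log.Println("query timed out; all downstream work was canceled")
	}
}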
See App.tsx. For correctness, fetch is supposed to be accompanied by an AbortController, but this might be an overcomplication.
async function fetchCount(signal?: AbortSignal): Promise<number> {
  const res = await fetch("/api/listings/count", { signal });
  if (!res.ok) throw new Error("count failed");
  const json = await res.json();
  return json.count as number;
}
The grand idea:
Effects must be written as if the component can disappear at any time.
useEffect(() => {
  const ac = new AbortController();
  fetchCount(ac.signal)
    .then(setTotalCount)
    .catch(() => {});
  return () => ac.abort();
}, []);
Note [] as the last argument, so the effect runs only when App mounts. However, the fetch can outlive App. If App unmounts while the fetch is in progress, ac.signal is aborted, which prevents then from executing; the code jumps into catch, which does nothing. We have achieved correct fetch abortion on App unmount, but was it necessary?!
What is annoying about correctness and error handling here is that there are only three API endpoints, and already 500 LOC of frontend, with 24 (!) instances of "abort". ChatGPT5 produces working code though.
The counter is stored and updated like everything else in React:
const [totalCount, setTotalCount] = useState<number | null>(null);
...
setTotalCount((n) => (n === null ? n : n + 1));
Cumbersome, but not as bad as abortion.
Feb 25, 2026 Update: Removed AbortController entirely as an unnecessary complication; this should not live in user space somehow. We have enough of that try/catch nonsense around the fetch API to begin with.
Containers and the web complicate debugging; one will focus more on logging. Still, it is great to debug with the React extensions in the browser and to develop with hot reload, and VS Code allows stepping through Go. So we add the debug folder for this purpose.
Docker with .env and .secrets is reused from the dev folder only to run Postgres.
Terminal 1 (containerized Postgres):
cd ~/opt/initialsdb/debug
make db-up
Terminal 2 (Go backend with full debug symbols):
cd ~/opt/initialsdb/debug
make backend-run
Terminal 3 (React frontend with hot reload, Vite dev):
cd ~/opt/initialsdb/debug
make frontend-run
To exit, run make db-down and press Ctrl+C twice (to kill the backend and the frontend).
In debug mode, the frontend talks to the backend directly due to these lines inside vite.config.ts:
server: {
  proxy: {
    "/api": {
      target: "http://localhost:8080",
      changeOrigin: true,
    },
    "/pow": {
      target: "http://localhost:8080",
      changeOrigin: true,
    },
  },
},
If you add more endpoints which are not under /api or /pow (routes.go), they should also appear here.
Adding an endpoint takes:
- Use of the Fetch and AbortController APIs in src/frontend/src/App.tsx.
- Guard and config param loading in src/backend/routes/routes.go.
- Business logic inside ServeHTTP(w http.ResponseWriter, r *http.Request), e.g. src/backend/listings/count.go.
- An entry inside vite.config.ts if the endpoint is not under /api or /pow.
No wonder people invent metaframeworks and add fetch automation layers, but these become the n+1 thing when one decides to add a mobile app later on and goes back to JSON APIs.
So this is manual and verbose, but also very standard, debuggable, and extendable.
After a while, npm starts barfing about vulnerable versions, severity, audits. The problem is devDependencies inside package.json. Linting/tooling must stay one major behind the bleeding edge (10.0.0):
"devDependencies": {
"@eslint/js": "^9.0.0",
}The only critical check to execute:
cd ~/opt/initialsdb/src/frontend
npm audit --omit=dev
found 0 vulnerabilities
The rest is npm noise, which is safe to ignore.
What is also critical is that Tailwind3 is applied, not Tailwind4:
"tailwindcss": "^3.4.19",If for some reason everything gets updated to the latest versions, Tailwind4 will break all the styling here, so keep it at 3.4.19, manually. Set the value inside package.json, and reinstall Node packages:
rm -rf node_modules package-lock.json
npm install
AIs do not have visual feedback, which demands adjusting style/appearance manually.
Uncomment the following lines in frontend/src/index.css:
/*
* {
outline: 1px solid red;
}
*/
This often reveals why some positioning does not work, as it becomes relative to some extra bounding box not visible in Tailwind.
Formatting depends on where one opens the editor. Open one VS Code instance from backend for Go editing, another from frontend for TS; the .vscode settings will be loaded automatically.
Ctrl + P - to quickly search and open a file.
Ctrl + Shift + F - search inside files per project.
Ctrl + Shift + P - reload window, restart TS server.
Always use an explicit content type before posting anything. initialsdb uses only "application/json", and this is guarded on both ends. Implicit ways are allowed everywhere and will lead to spectacular heisenbugs, especially with forms.
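For the JSON endpoints, that explicitness can be a small strict-decode helper, analogous to the ParseFormStrict shown below (a sketch, not the repo's exact code):
package httpjson

import (
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
)

// DecodeStrict sketches the "explicit content type" rule for JSON bodies:
// reject anything that is not application/json, cap the body size, and
// refuse unknown fields. (Assumed helper, not the repo's actual API.)
func DecodeStrict(w http.ResponseWriter, r *http.Request, maxBytes int64, dst any) error {
	ct := r.Header.Get("Content-Type")
	if !strings.HasPrefix(ct, "application/json") {
		return fmt.Errorf("unsupported content-type: %q", ct)
	}
	r.Body = http.MaxBytesReader(w, r.Body, maxBytes)
	dec := json.NewDecoder(r.Body)
	dec.DisallowUnknownFields()
	return dec.Decode(dst)
}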
Heisenbug 1
Case A: This JS/HTML code
<form method="POST" action="/register">
sends the form as
Content-Type: application/x-www-form-urlencoded
A Go server based on net/http expects to parse this with
r.ParseForm()
Case B: Let the browser send the HTML form data from JS with the fetch API:
const body = new FormData(form);
fetch(url, { method: "POST", body });The Fetch spec mandates this default:
Content-Type: multipart/form-data; boundary=...
The Go server needs to parse the data with
r.ParseMultipartForm()
Suppose now the browser uses Case B to send the form, but the server assumes Case A. The compiler does not catch anything. ParseForm() silently does nothing. FormValue() sometimes parses multipart (but only if r.Form == nil).
So one must be explicit and parse the body of the request r before the guards inside a handler:
if err := ParseFormStrict(r, 64<<10); err != nil {
	http.Redirect(w, r, "/register?error=invalid", http.StatusSeeOther)
	return
}
with
func ParseFormStrict(r *http.Request, maxMemory int64) error {
	ct := r.Header.Get("Content-Type")
	switch {
	case strings.HasPrefix(ct, "application/x-www-form-urlencoded"):
		return r.ParseForm()
	case strings.HasPrefix(ct, "multipart/form-data"):
		return r.ParseMultipartForm(maxMemory)
	default:
		return fmt.Errorf("unsupported content-type: %q", ct)
	}
}
and then extract data as
username := strings.TrimSpace(r.Form.Get("username"))
password := r.Form.Get("password")Avoid getting data directly with r.FormValue("username") and r.FormValue("password") as these are not simple getters. They read from stream, invoke parsing, and drain the stream with EOF. This reading needs to be done once at the correct place, preferably inside handler before guards as shown above.
See guards.go, which establishes rules about what is allowed and disallowed with HTTP bodies inside guards (middleware) when using net/http.
These issues do not appear in Python's higher-level FastAPI, but the same problem space exists in Starlette, which FastAPI uses underneath. So this is not about Go or Go's net/http, but about HTTP streaming. We are at a lower level here, which has pros and cons: verbosity and some problems upfront, but we know the system better when things go south.
Rate limiting is one area to watch out for. See my previous Go application, where parallel reads in SQLite needed to be artificially serialized with a fake DB write in order to avoid an edge case with IP counter overflow. This is not needed with Postgres.
Rate limiters here also use sync.Mutex, but this is not pervasive. It ensures that each request/goroutine updates a hashmap atomically, one at a time. Maps in Go are not thread-safe.
var counts = make(map[string]int)
var mu sync.Mutex

func allow(ip string) bool {
	mu.Lock()
	defer mu.Unlock()
	counts[ip]++
	return counts[ip] <= 10
}
If this runs inside a handler, it runs in the separate goroutine that serves each request. Without the lock, two goroutines may both read counts[ip] = 5 and one will overwrite the other; sync.Mutex prevents that.
sync.Mutex is also used during PoW to prevent replay attacks (to ensure a lock on a single IP).
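A replay store can be as small as a mutex-protected set of seen tokens with expiry (a sketch with hypothetical names):
package pow

import (
	"sync"
	"time"
)

// replayStore remembers solved tokens until their TTL passes, so the same
// solution cannot be submitted twice.
type replayStore struct {
	mu   sync.Mutex
	seen map[string]time.Time // token -> expiry
}

func newReplayStore() *replayStore {
	return &replayStore{seen: map[string]time.Time{}}
}

// firstUse returns true only the first time a token is presented; expired
// entries are pruned opportunistically, so memory stays bounded by the TTL.
func (s *replayStore) firstUse(token string, expiry, now time.Time) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	for t, exp := range s.seen {
		if now.After(exp) {
			delete(s.seen, t)
		}
	}
	if _, dup := s.seen[token]; dup {
		return false
	}
	s.seen[token] = expiry
	return true
}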
Interestingly, single-threaded does not mean race-free. If a limit on the IP count is checked inside each request before updating anything, both requests will see the same IP count! The first await switches to the other request, which hits its own await. Once either await completes, it updates the IP counter, bypassing the limit. This happens in sequence, one await at a time.
To fix these logical races (not memory races), async Python has asyncio.Lock(), and Node needs an external package like async-mutex. All this is like shared memory in Go: thread-safe, but already with races, which is very error-prone.
Go brings true parallelism within a single process: it will use all the CPU cores. We get shared memory (goroutines see the same heap), and mutexes work across all goroutines. If we added multiple containers and a load balancer, counting would break, as the counting structures would simply be duplicated. This must be solved with Redis or Postgres, or with smarter Go using channels and actors (a sketch follows).
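The channels-and-actors alternative could look like this sketch: one goroutine owns the map, so no mutex is needed, and all updates are serialized through a channel.
package ratelimit

import "time"

// allowReq is a message to the counter actor; the reply channel carries
// the verdict. (Sketch; hypothetical names.)
type allowReq struct {
	ip    string
	reply chan bool
}

// startCounter spawns the single goroutine that owns the counts map.
func startCounter(limit int, window time.Duration) chan<- allowReq {
	reqs := make(chan allowReq)
	go func() {
		counts := map[string]int{}
		reset := time.NewTicker(window)
		defer reset.Stop()
		for {
			select {
			case req := <-reqs:
				counts[req.ip]++
				req.reply <- counts[req.ip] <= limit
			case <-reset.C:
				counts = map[string]int{} // simple fixed-window reset
			}
		}
	}()
	return reqs
}

// Usage sketch:
//   reply := make(chan bool)
//   reqs <- allowReq{ip: ip, reply: reply}
//   allowed := <-reply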
Python sits somewhere between Go and Node. We get sync (WSGI) vs async (ASGI) runtimes. Django is classically WSGI, but also supports ASGI now. For CPU-bound requests one can go with Flask (sync) + gunicorn workers (async), or FastAPI (async) + multiple workers (sync). There is also the GIL vs non-GIL axis. The GIL makes CPython thread-safe(r), but it does not allow threads to execute in parallel. However, it yields on I/O waits just like single-threaded async runtimes (ASGI, Node). At some point the GIL will be removed, but that will be like a WSGI': one more runtime requiring different C extensions.
I begin to appreciate Go even more now, but rate limiters are better done with Redis/Postgres: "shared services" rather than "shared memory". This adds network and disk contention and expiring Redis keys, but at least there is no manual low-level syncing, as it gets delegated to DB transactions.
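A shared-services sketch in Go + Postgres, assuming a hypothetical table rate_limits(ip, window_start, hits): a single atomic upsert replaces the in-process mutex, and multiple app containers can share it.
package ratelimit

import (
	"context"
	"database/sql"
	"time"
)

// allow sketches a Postgres-backed limiter. The upsert is atomic, so no
// in-process locking is needed. Assumes a hypothetical table:
//   rate_limits(ip text, window_start timestamptz, hits int,
//               primary key (ip, window_start))
func allow(ctx context.Context, db *sql.DB, ip string, limit int) (bool, error) {
	window := time.Now().UTC().Truncate(time.Minute)
	var hits int
	err := db.QueryRowContext(ctx, `
		INSERT INTO rate_limits (ip, window_start, hits)
		VALUES ($1, $2, 1)
		ON CONFLICT (ip, window_start)
		DO UPDATE SET hits = rate_limits.hits + 1
		RETURNING hits`, ip, window).Scan(&hits)
	if err != nil {
		return false, err
	}
	return hits <= limit, nil
}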
