
SAVVA's protocol has always been permissionless on-chain — anyone can
publish, anyone can read, anyone can stand up a domain. The
implementation, in practice, was a different story: until now you
needed access to the source and a Go toolchain to build the backend,
and you needed to write a long YAML config from scratch and figure out
which of its dozens of fields you actually had to set. That's a
barrier we'd rather not put between people and their own sovereignty
over the content layer.
So we're shipping the backend as a public Docker image. One image,
one config file with a handful of values to fill in, and a single
docker compose up -d. If you have Docker installed and an hour on
a Saturday afternoon, you can run a SAVVA node on your own domain —
no source access required.
Why this matters
SAVVA is a multi-domain platform — the same protocol, the same
on-chain content registry, served from any number of independent
domains. Anyone can stand up a domain. Each domain is its own brand,
its own community, its own moderation policy. The protocol doesn't
care.
But a permissionless protocol with a hard-to-install reference
implementation is permissionless in theory only. The point of this
release is to make the practice match the theory: you should be able
to spin up your own SAVVA-protocol site for the cost of a small VPS,
and you shouldn't have to be a Go developer to do it.
What you'll need before you start
Five things. None of them require any SAVVA-specific knowledge:
- A Linux server or Mac with Docker (24+) and the Compose plugin.
  Any small VPS works — you don't need a beefy machine. You will,
  however, need disk space for the bundled IPFS node's datastore (see
  About IPFS storage below).
- A Postgres database the backend can reach. Any 14+. You can run it
  on the same machine, on a managed service (RDS, Supabase, Neon,
  etc.), or anywhere else.
- A blockchain RPC URL. SAVVA runs on Monad. The standard public
  mainnet RPC is https://rpc.monad.xyz and works out of the box — no
  signup, no API key, just paste it into .env.

  The catch: public RPCs are rate-limited and shared. They're fine
  for a small personal node or for trying out the install, but under
  any real traffic you'll hit throttling, intermittent timeouts, and
  slower block sync. For a node you intend to keep up — especially
  one that serves a public domain — plan to either:
  - Run your own Monad node (full control, no rate limits, but
    non-trivial sysadmin work and disk), or
  - Rent a private RPC from a provider (QuickNode, Alchemy, Ankr,
    etc. — they sell dedicated endpoints with much higher rate limits
    and better uptime guarantees than the public RPCs).

  Either way the URL goes into BLOCKCHAIN_RPC in .env. You can start
  with the public RPC and switch later — no migration is needed
  beyond editing one line. (A one-line curl check for the RPC is
  shown just after this list.)
- An admin wallet address. The wallet identity that will be allowed
  to administer the domain. (Optional, but recommended later: a
  separate processor wallet — a signing key the backend uses to
  handle paid / encrypted content. You can boot a node without one
  and add it whenever you're ready.)
- One — ideally two — IPFS pinning-service accounts. This is the one
  external sign-up you need. The bundled IPFS node holds content
  locally, but a single node is a single point of failure; a pin
  service replicates everything you pin to durable external storage
  and gives you a public gateway URL so anyone on the internet can
  fetch your content even when your own node is offline.

  We recommend Pinata as your primary. Most pinning services only let
  you ask them to fetch a CID off the public IPFS network after it's
  already been published — which means your content has to propagate
  through the swarm before it becomes durably pinned, sometimes
  minutes of unavailability for a freshly posted file. Pinata's API
  exposes a direct upload endpoint: the backend hands the file
  straight to Pinata at the same time it adds it locally, so content
  is durably pinned and reachable through the gateway immediately.

  A note on Pinata's gateway tiers. The free plan uses the shared
  gateway.pinata.cloud (rate-limited and shared across all free
  users — workable for low-traffic personal nodes, risky for anything
  public-facing). A dedicated gateway on a subdomain you control
  (yourname.mypinata.cloud) requires a paid Pinata plan. If your node
  will serve real traffic, plan to upgrade — it's the difference
  between "your content is reachable" and "your content is reachable
  as long as the shared infra holds up." Other services
  (web3.storage, Filebase, 4everland) have analogous shared/dedicated
  tier splits.

  And add a second service alongside Pinata. A single pin service is
  one company's uptime, billing relationship, and policy decisions
  away from total content loss. Two independent services effectively
  eliminate that risk. The bundle supports up to ten — set
  PIN_SERVICE_2_URL / _API_KEY / _GATEWAY (and _3_, _4_, etc.) in
  .env. A common pairing is Pinata as the fast/durable primary and
  web3.storage or Filebase as a second, lower-cost backstop.

  From each service you'll need three strings: the API endpoint URL,
  an API key (usually a JWT), and the service's public gateway URL.
  (A curl check for the key is shown just after this list.)

  The bundle ships its own IPFS node — you don't need to provide one
  separately. (If you already run an IPFS node and want to point at
  that, see the override note in step 1.)
That's it. No SAVVA-side registration, no API keys other than the pin
service.
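Both external dependencies answer to a plain curl, so you can
sanity-check them before touching Docker. This is an optional sketch:
eth_blockNumber is the standard EVM JSON-RPC call, the
testAuthentication endpoint comes from Pinata's public API docs, and
PINATA_JWT is just a stand-in for the key from your account; adjust
if your provider differs.
# Does the RPC answer? Expect {"jsonrpc":"2.0","id":1,"result":"0x..."}.
curl -s -X POST -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  https://rpc.monad.xyz

# Is the Pinata key valid? A success message means the JWT works.
curl -s -H "Authorization: Bearer $PINATA_JWT" \
  https://api.pinata.cloud/data/testAuthentication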
The five-minute install
1. Create the deploy directory and the two files
mkdir savva && cd savva
Create docker-compose.yml with this content:
services:
  ipfs:
    image: ipfs/kubo:latest
    container_name: savva-ipfs
    restart: unless-stopped
    environment:
      - IPFS_PROFILE=server
    volumes:
      # Override IPFS_DATA_PATH in .env to put the datastore on a
      # different disk. Default is ./ipfs-data alongside this file.
      - ${IPFS_DATA_PATH:-./ipfs-data}:/data/ipfs
    ports:
      # Swarm port — must be reachable from the public internet (or
      # at least NAT-traversable) for the node to participate in pin
      # replication. Bind both TCP and UDP.
      - "4001:4001"
      - "4001:4001/udp"
    healthcheck:
      test: ["CMD-SHELL", "ipfs --api=/ip4/127.0.0.1/tcp/5001 id >/dev/null 2>&1 || exit 1"]
      interval: 5s
      timeout: 3s
      retries: 12
      start_period: 5s

  savva-backend:
    image: ghcr.io/alexna-holdings/savva-backend:${SAVVA_VERSION:-latest}
    container_name: savva-backend
    restart: unless-stopped
    env_file: .env
    depends_on:
      ipfs:
        condition: service_healthy
    ports:
      - "${PORT:-8080}:8080"
    volumes:
      - ./data:/data
      # Optional: mount a private key file and set PROCESSOR_KEY_FILE in .env
      # to point at this path inside the container.
      - ./secrets:/run/secrets:ro
Create .env with this content (you'll fill in the values in
step 2):
# ----------------------------------------------------------------------
# REQUIRED — fill these in before `docker compose up`.
# ----------------------------------------------------------------------
# Public hostname this instance serves (no scheme, no path).
DOMAIN=mysavva.example.com
# Wallet address(es) that administer the domain (EIP-55 checksummed).
# To list multiple admins, separate with commas: 0xAaa...,0xBbb...
ADMIN_ADDRESS=0xYourAdminWalletAddress
# Postgres connection string. The DB must already exist; see step 3.
DB_CONNECTION_STRING=postgres://savva:yourpassword@your-db-host:5432/savva?sslmode=disable
# IPFS API endpoint. By default this points at the `ipfs` service
# bundled in docker-compose.yml above. Override only if you want to
# point at an IPFS node you run elsewhere.
# IPFS_URL=http://ipfs:5001
# Blockchain RPC URL. The Monad public mainnet RPC works out of the box;
# swap for a private endpoint if you need higher throughput / reliability.
BLOCKCHAIN_RPC=https://rpc.monad.xyz
# Primary IPFS pin service. Required — see step 5 in the prereqs.
# PIN_SERVICE_URL: the IPFS Pinning Service API endpoint
# PIN_SERVICE_API_KEY: the JWT / bearer token from your account
# PIN_SERVICE_GATEWAY: the service's public gateway URL
PIN_SERVICE_URL=https://api.pinata.cloud/psa
PIN_SERVICE_API_KEY=
PIN_SERVICE_GATEWAY=https://gateway.pinata.cloud/ipfs/
# Strongly recommended: a SECOND pin service for redundancy. The
# bundle supports up to ten (PIN_SERVICE_2_*, PIN_SERVICE_3_*, ...).
# PIN_SERVICE_2_URL=https://api.web3.storage/pins
# PIN_SERVICE_2_API_KEY=
# PIN_SERVICE_2_GATEWAY=https://w3s.link/ipfs/
# Processor signing key. OPTIONAL — leave empty to boot a node without
# processor capability. Set later when you want to handle paid /
# encrypted content. EITHER paste the raw hex key here, OR mount a
# file at ./secrets/processor.key and set PROCESSOR_KEY_FILE below.
PROCESSOR_KEY=
# PROCESSOR_KEY_FILE=/run/secrets/processor.key
# ----------------------------------------------------------------------
# OPTIONAL — sensible defaults are baked in. Uncomment to override.
# ----------------------------------------------------------------------
# On-chain Config contract. Default is Monad mainnet; change for other chains.
# CONFIG_CONTRACT=0xEeDf3fd85b8C955160CBee10FB45e02add055e39
# Where the bundled IPFS node stores its data on the host. Defaults to
# ./ipfs-data alongside this file. Point at a different disk for
# production deployments — the datastore grows with pinned content.
# IPFS_DATA_PATH=./ipfs-data
# Telegram bot for the domain (optional). Set both TOKEN and NAME to
# enable; leave either blank to disable. TOKEN comes from BotFather,
# NAME is the bot's @-username without the @. The bot ID is auto-
# derived from the token's "<id>:<secret>" prefix.
# TELEGRAM_BOT_TOKEN=123456789:ABCdef-the-rest-of-your-token
# TELEGRAM_BOT_NAME=YourSavvaBot
# Image version to pull (matches a release tag).
# SAVVA_VERSION=latest
# Host port exposed by docker compose. The container always listens
# on 8080 internally; this only changes the port your host binds to.
# PORT=8080
# Verbosity: trace, debug, info, warn, error.
# VERBOSITY=info
# Block to start indexing from on a fresh DB.
# INITIAL_BLOCK=0
# Size limits.
# MAX_FILE_SIZE=50MB
# MAX_POST_SIZE=10MB
# MAX_USER_DISK_SPACE=1GB
# Public website URL for the domain (defaults to https://${DOMAIN}).
# DOMAIN_WEBSITE=https://mysavva.example.com
That's the whole install bundle — two files in one directory.
2. Fill in .env
Open .env and replace the placeholder values. Seven fields are
required:
- DOMAIN, ADMIN_ADDRESS, DB_CONNECTION_STRING, BLOCKCHAIN_RPC
- PIN_SERVICE_URL, PIN_SERVICE_API_KEY, PIN_SERVICE_GATEWAY
  (from your pin service account — see prereq 5)
PROCESSOR_KEY is optional and can be added later, and IPFS_URL is
auto-defaulted to the bundled IPFS service. Everything below the
OPTIONAL divider has a sensible default and can stay commented out.
About the port. The container always listens on 8080 internally — that's hardcoded by the image. The compose mapping ${PORT:-8080}:8080 publishes that to the host on port 8080 by default, so curl http://localhost:8080/info works out of the box. Set PORT= in .env only if you want a different host port (e.g. PORT=9000 when 8080 is already taken on your machine). Your reverse proxy talks to the container's 8080 either way.
If you'd rather not paste a private key into a file, mount it as a
secret instead:
mkdir -p secrets
echo "0xYourProcessorPrivateKey" > secrets/processor.key
chmod 600 secrets/processor.key
…and in .env:
PROCESSOR_KEY=
PROCESSOR_KEY_FILE=/run/secrets/processor.key
The secrets/ folder is mounted read-only into the container by the
default docker-compose.yml. The container reads the key from disk at
startup; the value never appears in docker inspect or process
listings.
3. Bootstrap the database
You have two ways to populate the database. Restoring from a
snapshot is strongly recommended.
Option A (recommended) — restore from a public snapshot
SAVVA publishes daily Postgres snapshots at
savva.app/public_files/, one per
chain, named like:
savva-db-backup-monad-2026-05-03.sql.gz
savva-db-backup-pls-2026-05-03.sql.gz
Pick the chain you're indexing (monad is the default in this guide)
and the latest date. The dump is plain gzipped SQL — restore it with
psql:
# Pick the latest snapshot for your chain.
SNAP=https://savva.app/public_files/savva-db-backup-monad-2026-05-03.sql.gz
# Empty target database must already exist and match $DB_CONNECTION_STRING.
curl -L "$SNAP" | gunzip -c | psql "$DB_CONNECTION_STRING"
When the backend starts, it will pick up exactly where the snapshot
left off — usually a few hours behind tip — and finish syncing in
minutes rather than hours.
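If you want to confirm the restore landed before starting the
backend, a plain psql against the same connection string should list
the backend's tables (the exact table set depends on the snapshot):
psql "$DB_CONNECTION_STRING" -c '\dt'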
Option B — initialize an empty schema and resync from genesis
Useful if you're running on a custom chain, want independent
verification, or just want to watch the indexer work:
docker compose run --rm savva-backend -initdb
This creates every table the backend needs and sets the schema version.
The first docker compose up -d afterwards will start indexing from
the configured INITIAL_BLOCK forward — expect a long initial sync.
4. Start it
docker compose up -d
The container pulls (≈100 MB), reads .env, renders its own YAML config, and starts indexing the blockchain. Watch the logs:
docker compose logs -f savva-backend
A healthy startup looks something like:
INF Config: Blockchain RPC configured
INF Config: Processor key configured
INF Connected to DB
INF SAVVA Backend. v:1.0.25
…followed by lines about the blockchain listener catching up. If you
see errors instead, see "Troubleshooting" below.
5. Verify
The backend listens on port 8080. From the same machine:
curl http://localhost:8080/info
You should get a JSON response describing the system: contract
addresses, your domain, the version, IPFS gateways, and so on.
That's a working SAVVA node.
Putting it on the public internet
The image doesn't terminate TLS — that's deliberate. Different
operators want different things (Cloudflare, Caddy, nginx, Traefik,
Tailscale Funnel) and we'd rather not pick for you. The minimum is
something that:
- Listens on :443, terminates TLS, proxies to the container's :8080.
- Forwards the WebSocket upgrade for the /ws endpoint.
- Routes /api/* and the SEO discovery URLs (/robots.txt,
  /sitemap*.xml) into the backend.
Caddy with reverse_proxy 127.0.0.1:8080 is a reasonable two-line
choice if you don't already have a preference. For a full
production-grade nginx config, see the For administrators section
of the SEO announcement — it's the same
config you'd use for any SAVVA-platform site.
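As a concrete sketch of that two-line choice (not a hardened
production config), the following assumes the mysavva.example.com
placeholder from .env; Caddy obtains the TLS certificate itself and
reverse_proxy passes WebSocket upgrades through by default, which
covers the /ws requirement:
cat > Caddyfile <<'EOF'
mysavva.example.com {
    # Everything (/api/*, /ws, /robots.txt, /sitemap*.xml) goes to
    # the backend; Caddy terminates TLS and forwards the upgrade.
    reverse_proxy 127.0.0.1:8080
}
EOF

caddy run --config Caddyfile   # or `caddy start` / a systemd unit for a daemon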
Setting your domain assets (the UI bundle)
A SAVVA backend by itself doesn't ship a UI — it serves the API and
expects your reverse proxy to serve the SolidJS web client out of an
IPFS-hosted bundle. Once the backend is running:
- Build (or fork) the savva-ui-solidjs project, pin the build output
  to IPFS, and grab the resulting CID.
- From a SAVVA client signed in with your admin wallet, call the
  setDomainAssetsCID admin command with the CID. The backend
  downloads the bundle, stores it under data/domain_assets/, and
  serves it from there.
The CID is not part of the YAML config — it's set at runtime and
persisted in the database. That means you can swap UIs without ever
restarting the backend.
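If you don't already have an IPFS workflow for the first step, the
bundled Kubo node can produce the CID. A rough sketch, assuming the
UI build ended up in ./dist on the host (the path is hypothetical;
-r, -Q and --cid-version are standard Kubo flags):
# Copy the build into the running IPFS container and add it.
docker cp dist/ savva-ipfs:/tmp/ui-build
docker compose exec ipfs ipfs add -r -Q --cid-version=1 /tmp/ui-build
# The single line printed is the CID to hand to setDomainAssetsCID.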
Updating to a new version
Releases are published as tagged Docker images. To update:
# Pin a specific version (recommended for production):
echo "SAVVA_VERSION=1.0.26" >> .env
docker compose pull
docker compose up -d
# Or just track latest:
docker compose pull && docker compose up -d
Schema migrations are applied automatically on startup. Watch the
release notes for any version that bumps the schema in case there's a
manual step.
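To confirm the new version took, grep the startup banner shown in
step 4; the version string should match the tag you pulled:
docker compose logs savva-backend | grep "SAVVA Backend"
# e.g. INF SAVVA Backend. v:1.0.26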
Troubleshooting
ERROR: required env var X is not set — you missed a required
field in .env. The error names the variable.
dial tcp: connection refused on the DB — the container can't
reach Postgres. If your DB runs on the same host as Docker, use
host.docker.internal (Mac/Windows) or your machine's LAN IP, not
localhost. localhost inside the container means the container.
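For example (hypothetical credentials, adjust to your own), in .env:
DB_CONNECTION_STRING=postgres://savva:yourpassword@host.docker.internal:5432/savva?sslmode=disable
…and, on Linux only, add the host-gateway mapping to the
savva-backend service in docker-compose.yml (Mac and Windows resolve
the name out of the box):
    extra_hosts:
      - "host.docker.internal:host-gateway"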
http: server gave HTTP response to HTTPS client for the IPFS URL — you have https:// for an IPFS endpoint that actually speaks plain
HTTP, or vice versa. Check the scheme.
Logs say RPC error repeatedly — your RPC URL is wrong, rate-limited,
or the chain ID doesn't match. The default config contract address is
for Monad; if you're connecting to a different chain, set
CONFIG_CONTRACT in .env to the right address for that chain.
The container starts but nothing happens for a long time — that's
normal if you went with option B in step 3 (empty schema). The
backend is syncing blockchain history from INITIAL_BLOCK forward,
which can take hours on a chain with a long history. Watch
docker compose logs -f — you'll see block numbers climbing. If you
don't want to wait, stop the container, drop the database, and
restore from a public snapshot (option A) instead.
If you hit something that isn't covered here, reach out through the
SAVVA support channels with your docker compose logs output and your
sanitized .env (redact the processor key).
About IPFS storage
There are two layers of pinning at work in a SAVVA install:
- The bundled Kubo node (running in the ipfs: compose service) holds
  every uploaded file locally. This is fast, free, and immediately
  reachable — but it's a single point of failure. If that disk dies,
  the local copy goes with it.
- Your external pin service (configured via PIN_SERVICE_* in .env)
  takes a copy too. The backend asks the pin service to pin each new
  CID right after it's added to the local node, so your community's
  content is durably replicated and remains reachable through the
  service's public gateway even when your own node is offline.
The combination of "fast local + durable external" is why both halves
exist. Don't skip the external pin service unless you're spinning
up a throwaway test node — pin loss is irreversible.
Beyond that, the bundled IPFS datastore deserves the same treatment
as any other growing state directory. Unlike a Postgres database
(which is a fixed schema and only grows when you add domains), the
IPFS datastore grows in proportion to your community's content.
And because the bundle ships with process-all-domains: true set in
the rendered config, your node indexes and pins posts from every
domain on the network, not just yours — that's deliberate (it
keeps content available even when individual domain operators go
offline), but it does mean datastore growth tracks the whole
platform, not just your own community. Plan for it the way you'd
plan for any other pin-storage workload:
- Put the datastore on the disk you're willing to grow. The
  IPFS_DATA_PATH setting in .env controls the host path. Default is
  ./ipfs-data next to the compose file; for production, point it at a
  dedicated disk or volume (/mnt/data1/ipfs, an attached EBS volume,
  etc.).
- Monitor disk usage. No alarm bell rings if the disk fills. Watch
  du -sh ipfs-data/ (or wherever you pointed it) and set up a generic
  disk-usage alert.
- Back it up like any other state directory. Stopping the ipfs
  service and rsync'ing the data folder is the simplest path (see the
  sketch after this list).
- Open port 4001 (TCP and UDP). That's the IPFS swarm port. If it's
  firewalled off, content still pins locally but doesn't replicate to
  the wider IPFS network — the rest of the world will fetch your
  content directly through your node, slowly. Most cloud providers
  require you to open this in the security group / VPC firewall
  explicitly.
- Kubo defaults to no StorageMax cap. If you want a hard ceiling with
  automatic GC, edit ipfs-data/config after first start and set
  Datastore.StorageMax to a size like "100GB".
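A sketch of the last two bullets in shell; the backup destination is
a placeholder, and note that Kubo only garbage-collects against
StorageMax when the daemon runs with GC enabled, so confirm that for
your Kubo version before relying on the cap:
# Cold backup: stop the IPFS service, copy the datastore, start it again.
docker compose stop ipfs
rsync -a ipfs-data/ /mnt/backup/ipfs-data/
docker compose start ipfs

# Hard ceiling on the datastore (takes effect on the next restart):
docker compose exec ipfs ipfs config Datastore.StorageMax 100GB
docker compose restart ipfs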
If you already operate an IPFS node and would rather use that, set
IPFS_URL= in .env to point at it and remove the ipfs: service
block from docker-compose.yml. The backend doesn't care.
What's intentionally not in the image
The image itself runs only the backend. The compose stack adds the
IPFS service, but Postgres, TLS, and the web client are
still your responsibility:
- Postgres — operators have strong opinions about backups, replicas,
  and managed-vs-self-hosted. Bundling one would make all of those
  harder.
- TLS — the choice of reverse proxy is yours.
- The web client — distributed via IPFS and pinned by the admin, not
  baked into the backend image.
If you want an "everything in one box" install that also includes
Postgres, Caddy, and the UI, that's a separate compose file we may
publish later for casual / hobby use. The current bundle targets
people who'll run something they intend to keep up.
What you can do now
- Try it. Even if you don't plan to run a public domain, the
  five-minute install above gives you a working SAVVA node on
  localhost. It's a useful way to see what the platform actually does
  under the hood.
- Fork a domain. If you've ever wanted a community-specific SAVVA
  site — your DAO, your magazine, your friends — you now have a
  weekend project, not a monthlong one.
- Tell us what's missing. This is the first public release of the
  self-hosting bundle. If something in your environment doesn't work,
  we want to know.
The protocol was always permissionless. Now the implementation is
finally easy enough that the permissionlessness is real.