WhatIsUp.dev

Deploy

WhatIsUp.dev runs as a single Node process plus Postgres and Redis. This guide walks you through the Railway-flavoured production deploy, but the principles port to Fly, Render, or your own boxes.

Topology

| Service  | What it is                              | Sizing for v1                         |
|----------|-----------------------------------------|---------------------------------------|
| Backend  | Fastify + Baileys + webhook worker      | 1 instance, 512 MB → 1 GB             |
| Frontend | Next.js dashboard                       | 1 instance, 256 MB                    |
| Postgres | Customers, instances, deliveries, audit | Supabase free tier or 1 vCPU / 1 GB   |
| Redis    | BullMQ job board                        | Upstash free tier or 256 MB           |

The backend is the only stateful service (auth-state lives on a persistent volume). Everything else is "cattle" — kill and recreate at will.

Step 1 — Provision Postgres

Create a fresh Postgres. Supabase, Neon, or self-hosted — doesn't matter. Capture the DATABASE_URL. Make sure SSL is on (?sslmode=require if connecting from outside the provider's VPC).
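If your provider's URL lacks the parameter, a small helper can append it. A minimal sketch; `ensure_sslmode` is a name we made up, not part of the tooling:

```shell
# Append sslmode=require to a connection string if it isn't already present.
# ensure_sslmode is a hypothetical helper, not part of the migration tooling.
ensure_sslmode() {
  case "$1" in
    *sslmode=*) printf '%s\n' "$1" ;;                  # already set, leave as-is
    *\?*)       printf '%s&sslmode=require\n' "$1" ;;  # has other query params
    *)          printf '%s?sslmode=require\n' "$1" ;;  # bare URL
  esac
}

DATABASE_URL=$(ensure_sslmode "$DATABASE_URL")
```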

Run migrations from your laptop once:

```shell
DATABASE_URL=postgres://... \
pnpm --filter @whatisup/backend migrate:up
```

The runner takes a Postgres advisory lock, so concurrent migrate:up runs are safe.

Step 2 — Provision Redis

Upstash is the path of least resistance — they give you a REDIS_URL over TLS. Capture it.

Use a separate Redis instance per environment. Sharing one Redis between staging and prod is a recipe for cross-env queue pollution. The free tier on Upstash is fine for two instances.
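For reference, Upstash-style TLS URLs use the rediss:// scheme. The hosts and passwords below are placeholders; use the exact URLs Upstash shows you:

```shell
# Illustrative only. Note the rediss:// scheme (TLS), not redis://.
# staging
REDIS_URL=rediss://default:<password>@<staging-host>.upstash.io:6379
# production
REDIS_URL=rediss://default:<password>@<prod-host>.upstash.io:6379
```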

Step 3 — Generate secrets

Three secrets the backend needs at runtime:

```shell
# 32+ char random; rotate by setting SECRETS_KEY_PREVIOUS=<old> before swap
SECRETS_KEY=$(openssl rand -base64 32)

# Per-environment pepper for API key hashing
API_KEY_PEPPER=$(openssl rand -base64 32)

# Default fallback signing secret (per-endpoint secrets override this)
WEBHOOK_SIGNING_SECRET=$(openssl rand -base64 32)
```

These are per-environment. Don't reuse staging values in prod. If you do leak one, rotate via the SECRETS_KEY_PREVIOUS fallback (read-only support for the prior key).
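Rotation can be sketched as follows; the first key generated stands in for your current production key, and the decrypt-only fallback behaviour of SECRETS_KEY_PREVIOUS is as described above:

```shell
# Illustrative rotation: generate a stand-in "current" key, then rotate.
SECRETS_KEY="$(openssl rand -base64 32)"         # pretend this is the live key
export SECRETS_KEY_PREVIOUS="$SECRETS_KEY"       # old key: decrypt-only fallback
export SECRETS_KEY="$(openssl rand -base64 32)"  # new key for fresh writes
```

Redeploy with both set; once all rows have been re-encrypted, drop SECRETS_KEY_PREVIOUS.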

Step 4 — Push to Railway

Three Railway services on a single project:

```shell
railway service create backend
railway service create frontend
railway service create docs   # this site
```

Set env vars per service. The backend wants:

```shell
DATABASE_URL=...
REDIS_URL=...
SECRETS_KEY=...
API_KEY_PEPPER=...
WEBHOOK_SIGNING_SECRET=...
NODE_ENV=production
PORT=8080
CORS_ORIGINS=https://app.whatisup.dev,https://whatisup.dev
```

(CORS origins are scheme and host only; a path such as /en/docs never appears in an Origin header.)

The frontend wants:

```shell
NEXT_PUBLIC_API_URL=https://api.whatisup.dev
NEXT_PUBLIC_DISABLE_MOCKS=true
```

The CI workflow already has deploy-staging / deploy-production jobs that call railway up. Add RAILWAY_TOKEN and RAILWAY_PROJECT_ID as repo secrets, and set the per-env service-name vars (RAILWAY_PRODUCTION_SERVICE, RAILWAY_PRODUCTION_FRONTEND_SERVICE, etc.). Push to prod and the workflow does the rest.

Step 5 — Run health checks

Once the backend boots:

```shell
curl https://api.whatisup.dev/healthz   # should be 200 within ~1s
curl https://api.whatisup.dev/readyz    # 200 = DB + Redis reachable
```

/readyz runs the deep checks: opens a Postgres transaction, pings Redis, runs Baileys' health probe. Use it as your load-balancer's readiness probe; never /healthz (that one always 200s once the process is up — it's a liveness signal).

Step 6 — First customer

Use POST /v1/customers to seed your first customer (admin-only, gated by ADMIN_TOKEN). From there, every flow is API-key authenticated.
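The seed call might look like the following. The JSON body field (name) is an assumption on our part; check the API reference for the real schema:

```shell
# Hypothetical seed request. API_URL defaults to production; ADMIN_TOKEN must
# match the backend's env var. The JSON body shape is an assumption.
seed_customer() {
  curl -sS --fail -X POST "${API_URL:-https://api.whatisup.dev}/v1/customers" \
    -H "Authorization: Bearer ${ADMIN_TOKEN:?set ADMIN_TOKEN first}" \
    -H "Content-Type: application/json" \
    -d '{"name": "Acme Inc"}'
}
```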

Disaster recovery

  • Backend container dies — auth-state is on a persistent volume; the new container resumes sessions without re-pairing.
  • Postgres rolls back to a backup — webhook deliveries and instances both have stable IDs, so re-shipping the same event_id is safe (idempotent on your side, dedupe by event_id).
  • Redis is wiped — in-flight queue jobs are lost. Webhook deliveries already in the DB will not be re-attempted unless you have a backfill job. We can build one if you need it; today you'd query webhook_deliveries WHERE status='pending' and re-enqueue manually.
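Dedupe by event_id, as mentioned above, can be illustrated with a toy consumer. A real consumer would rely on a unique index on event_id rather than a file:

```shell
# Toy consumer-side dedupe: remember processed event_ids and skip repeats.
# In production, use a database unique constraint on event_id instead.
SEEN_FILE="${SEEN_FILE:-$(mktemp)}"
process_once() {
  event_id="$1"
  if grep -qxF "$event_id" "$SEEN_FILE" 2>/dev/null; then
    return 0                               # duplicate delivery: no-op
  fi
  echo "handling $event_id"                # stand-in for your real handler
  printf '%s\n' "$event_id" >> "$SEEN_FILE"
}
```

Re-shipping the same event after a restore then becomes a harmless no-op on the consumer side.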