Deployment
This guide covers deploying UniAuth to production using Docker, Vercel, or a traditional server. It also covers health checks, log monitoring, and backup strategies.
Docker Deployment
Docker is the recommended deployment method for most environments. It ensures consistent behavior across development, staging, and production.
Dockerfile
```dockerfile
FROM node:22-alpine AS base

# Install dependencies only when needed
FROM base AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# Build the application
FROM base AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production image
FROM base AS runner
WORKDIR /app
ENV NODE_ENV=production
ENV PORT=4000
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 4000
CMD ["node", "server.js"]
```

Docker Compose
Use Docker Compose to run UniAuth alongside PostgreSQL and Redis:
```yaml
services:
  app:
    build: .
    ports:
      - "4000:4000"
    environment:
      - DB_HOST=postgres
      - DB_PORT=5432
      - DB_USER=postgres
      - DB_PASSWORD=${DB_PASSWORD}
      - DB_NAME=uniauth_db
      - JWT_SECRET=${JWT_SECRET}
      - ENCRYPTION_KEY=${ENCRYPTION_KEY}
      - HOST=${HOST}
      - REDIS_URL=redis://redis:6379
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: unless-stopped
  postgres:
    image: postgres:16-alpine
    environment:
      - POSTGRES_DB=uniauth_db
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./scripts/pg-schema.sql:/docker-entrypoint-initdb.d/01-schema.sql
      - ./scripts/migration-crypto.sql:/docker-entrypoint-initdb.d/02-crypto.sql
      - ./scripts/migration-hardening.sql:/docker-entrypoint-initdb.d/03-hardening.sql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
    restart: unless-stopped
  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 5s
      retries: 5
    restart: unless-stopped
volumes:
  postgres_data:
  redis_data:
```

Create a `.env` file alongside your `docker-compose.yml` with your secrets, then start with:
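A minimal `.env` sketch for the variables referenced above. All values are placeholders — generate your own secrets (e.g., with `openssl rand -hex 32`), and check your UniAuth configuration for the expected key formats:

```bash
# .env — example values only, do not use in production
DB_PASSWORD=change-me
JWT_SECRET=replace-with-a-long-random-string
ENCRYPTION_KEY=replace-with-a-32-byte-hex-key
HOST=https://auth.example.com
```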
```bash
docker compose up -d
```

Vercel Deployment
UniAuth can be deployed to Vercel as a Next.js application with some considerations:
Setup
- Connect your repository to Vercel.
- Add all required environment variables in the Vercel dashboard (Settings > Environment Variables).
- Set `HOST` to your Vercel domain (e.g., `https://auth.your-domain.com`).
- Use a managed PostgreSQL provider (e.g., Vercel Postgres, Neon, Supabase) and set the `DB_*` variables accordingly.
- Deploy.
Serverless Considerations
Important Caveats
- In-memory rate limiting will not work across multiple serverless functions. Configure Redis (`REDIS_URL`) for distributed rate limiting.
- Cold starts may affect the first request after idle periods. The `instrumentation.ts` hook runs on each cold start to initialize PQC keys and OIDC keypairs.
- Database connections should use connection pooling (e.g., PgBouncer or your provider's built-in pooler) to avoid exhausting PostgreSQL connections.
- Scheduled cleanup (data retention) relies on a long-lived process. On Vercel, use a cron job or external scheduler to call the cleanup endpoint periodically.
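On Vercel, the scheduled cleanup can be driven by a Vercel Cron Job declared in `vercel.json`. A sketch — the endpoint path and schedule here are illustrative, so substitute the actual cleanup route your deployment exposes:

```json
{
  "crons": [
    {
      "path": "/api/admin/cleanup",
      "schedule": "0 3 * * *"
    }
  ]
}
```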
Health Checks
Use the auth endpoint as a lightweight health check:
```bash
# Returns 401 when healthy (no auth cookie = unauthorized, but the server is responding)
curl -s -o /dev/null -w "%{http_code}" https://auth.example.com/api/auth/me
# Expected: 401
```
For Docker health checks, note that `node:alpine` images do not ship `curl`, and `curl -f` would treat the expected 401 as a failure. Use Node's built-in `fetch` and accept 401 as healthy:

```yaml
healthcheck:
  test: ["CMD-SHELL", "node -e \"fetch('http://localhost:4000/api/auth/me').then(r => process.exit(r.status === 401 || r.status === 200 ? 0 : 1)).catch(() => process.exit(1))\""]
  interval: 30s
  timeout: 10s
  retries: 3
  start_period: 40s
```

A 401 response confirms the server is running, the database connection is active, and the auth layer is functional. A 500 or connection failure indicates a problem.
Log Monitoring
UniAuth outputs structured logs that can be captured and analyzed by log aggregation services:
- Activity logs are stored in the `activity_logs` database table and include user actions (login, logout, password change, 2FA setup, etc.) with timestamps, IP addresses, and user agents.
- Security events such as failed login attempts, account lockouts, and threat detection alerts are logged with relevant context.
- Admin audit trail tracks administrative actions including user management, role changes, and configuration updates.
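Simple aggregate queries over the activity table make a useful first-pass monitor. A sketch, assuming hypothetical column names (`action`, `ip_address`, `created_at`) — adjust to the actual `activity_logs` schema:

```sql
-- Failed logins per IP over the last hour (column names are assumptions)
SELECT ip_address, COUNT(*) AS failures
FROM activity_logs
WHERE action = 'login_failed'
  AND created_at > NOW() - INTERVAL '1 hour'
GROUP BY ip_address
ORDER BY failures DESC
LIMIT 20;
```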
For production deployments, consider forwarding logs to a centralized logging service (e.g., Datadog, Grafana Loki, ELK Stack) for alerting and analysis.
Backup Strategy
Regular backups are essential for disaster recovery. The two critical components to back up are the database and the encryption key.
Database Backups
```bash
# Full database dump
pg_dump -U postgres -d uniauth_db -Fc -f uniauth_backup_$(date +%Y%m%d).dump

# Restore from backup
pg_restore -U postgres -d uniauth_db -c uniauth_backup_20260101.dump

# Automated daily backup (add to crontab)
0 2 * * * pg_dump -U postgres -d uniauth_db -Fc -f /backups/uniauth_$(date +\%Y\%m\%d).dump
```

Encryption Key Backup
Critical: Back Up Your Encryption Key
The `ENCRYPTION_KEY` is used to encrypt TOTP secrets, OAuth tokens, and PQC keys at rest. If this key is lost, all encrypted data becomes permanently inaccessible. Store the key separately from your database backups in a secure location (e.g., a hardware security module, sealed envelope in a safe, or a secrets manager like HashiCorp Vault).
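If you use a secrets manager, storing and retrieving the key is a one-liner. For example, with HashiCorp Vault's KV v2 store (the secret path and field name here are illustrative):

```bash
# Write the key to Vault (KV v2), then read it back at deploy time
vault kv put secret/uniauth encryption_key="$ENCRYPTION_KEY"
vault kv get -field=encryption_key secret/uniauth
```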
Backup Retention
Recommended retention policy:
- Daily backups: retained for 7 days
- Weekly backups: retained for 4 weeks
- Monthly backups: retained for 12 months
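The tiers above can be encoded in a small rotation helper. A sketch in Python, assuming backups are taken daily, weeklies are the Sunday copies, and monthlies are the first-of-month copies (these anchor days are a convention chosen for the example, not a UniAuth requirement):

```python
from datetime import date, timedelta

def backups_to_keep(backup_dates, today):
    """Return the subset of backup dates to retain under a
    7-daily / 4-weekly / 12-monthly rotation policy."""
    keep = set()
    for d in backup_dates:
        age = (today - d).days
        if 0 <= age < 7:                        # daily tier: last 7 days
            keep.add(d)
        elif age < 28 and d.weekday() == 6:     # weekly tier: Sundays within 4 weeks
            keep.add(d)
        elif age < 365 and d.day == 1:          # monthly tier: 1st of month within 12 months
            keep.add(d)
    return keep

# Example: 60 days of daily backups, evaluated on 2026-01-15
today = date(2026, 1, 15)
backups = [today - timedelta(days=i) for i in range(60)]
kept = sorted(backups_to_keep(backups, today))
```

Anything not in `kept` is safe to delete; this pairs naturally with the date-stamped `pg_dump` filenames above.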
Test your restore process regularly. A backup that cannot be restored is not a backup.