Deployment and Operations Guide
This guide covers deploying and operating Alloy — from single-developer SQLite to team PostgreSQL with integrations.
SQLite vs PostgreSQL: Which Backend?
| Criteria | SQLite | PostgreSQL |
|---|---|---|
| Best for | Solo developer, small team, evaluation | Teams, multi-tenant, production |
| Setup | Zero — single file, auto-created | Requires running PostgreSQL server |
| Multi-tenancy | Single-tenant only | Full multi-tenant with RLS |
| Concurrency | Limited (single writer) | High concurrency |
| Backup | Copy the .db file | pg_dump / replication |
| Migration path | Export via API, re-import into PG | N/A |
Start with SQLite if you’re evaluating Alloy or running it for personal use. Use PostgreSQL when you need multi-tenant isolation, team access, or production reliability.
Migrating from SQLite to PostgreSQL
There is no built-in migration tool. The recommended approach:
- Stand up a PostgreSQL instance and start Alloy pointed at it (migrations run automatically)
- Export your data from the SQLite instance using the API (list projects, tickets, etc.)
- Import into PostgreSQL via the API
- Switch your `ALLOY_DATABASE_URL` to the PostgreSQL connection string
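The export/import round-trip above can be sketched with curl and jq. The endpoint path, the `SRC_*`/`DST_*` variables, and the function names below are illustrative assumptions, not a documented Alloy API surface; repeat the pattern for each resource type:

```bash
#!/usr/bin/env sh
# Sketch: copy projects from the old (SQLite-backed) instance to the new
# (PostgreSQL-backed) one. SRC_URL/DST_URL, the tokens, and the
# /api/v1/projects path are assumptions for illustration.
export_projects() {
  curl -s -H "Authorization: Bearer $SRC_TOKEN" \
    "$SRC_URL/api/v1/projects" > projects.json
}

import_projects() {
  # Replay each exported object as a POST against the new instance
  jq -c '.[]' projects.json | while IFS= read -r project; do
    curl -s -X POST \
      -H "Authorization: Bearer $DST_TOKEN" \
      -H "Content-Type: application/json" \
      -d "$project" \
      "$DST_URL/api/v1/projects"
  done
}

# export_projects && import_projects   # repeat per resource (tickets, etc.)
```

Server-assigned fields (IDs, timestamps) will be regenerated on import, so anything referencing old IDs needs remapping as you go.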
SQLite Deployment
The simplest deployment — a single binary with a single database file.
```bash
# Start with defaults (creates ./alloy.db automatically)
cargo run -p alloy-cli -- serve

# Or with a prebuilt binary
alloy serve
```
Data location: By default, `alloy.db` is created in the current working directory. Override with:

```bash
ALLOY_DATABASE_URL=sqlite:///var/data/alloy.db alloy serve
```
Backup: Just copy the `.db` file while the server is stopped (or use SQLite’s `.backup` command for online backup):

```bash
cp alloy.db alloy.db.backup
```
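If the server must stay up, SQLite’s backup API takes a consistent snapshot even while a writer holds the file. A minimal sketch, assuming the `sqlite3` CLI is installed:

```bash
# Take an online snapshot via SQLite's backup API (safe while Alloy runs)
sqlite3 alloy.db ".backup 'alloy.db.backup'"

# Sanity-check the snapshot; prints "ok" when the copy is consistent
sqlite3 alloy.db.backup "PRAGMA integrity_check;"
```

A plain `cp` of a live database can capture a torn write; `.backup` avoids that by copying through SQLite itself.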
PostgreSQL Deployment
Using Docker Compose (recommended for development)
The repository includes a docker-compose.yml that runs PostgreSQL 16 and MinIO (S3-compatible storage for attachments):
```bash
docker compose up -d
```
This starts:

- PostgreSQL on port 5432 (user: `postgres`, password: `postgres`, database: `alloy_dev`)
- MinIO on port 9000 (console on 9001, user: `minioadmin`, password: `minioadmin`)
- `minio-init`, which creates the `alloy-attachments` bucket automatically
Then start Alloy pointed at PostgreSQL:
```bash
ALLOY_DATABASE_URL=postgres://postgres:postgres@localhost:5432/alloy_dev alloy serve
```
Connection string format
```
postgres://USER:PASSWORD@HOST:PORT/DATABASE
```
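Special characters in the password (`@`, `:`, `/`, and friends) must be percent-encoded so the URL parses unambiguously. A small sketch using jq’s `@uri` filter; the credentials are placeholders:

```bash
# Percent-encode a password containing URL-reserved characters
PASS='p@ss:w/rd'
ENC=$(jq -rn --arg p "$PASS" '$p | @uri')

echo "postgres://alloy:${ENC}@localhost:5432/alloy_dev"
# postgres://alloy:p%40ss%3Aw%2Frd@localhost:5432/alloy_dev
```

PostgreSQL connection URIs decode percent-escapes, so the encoded form reaches the server as the original password.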
Multi-tenancy and RLS
PostgreSQL mode enables Row-Level Security (RLS) for full multi-tenant isolation:
- Each request sets `app.tenant_id` via `SET LOCAL` in the transaction
- RLS policies ensure tenants can only see their own data
- This is transparent to the application — queries return only tenant-scoped rows
- No data leaks between organizations, even if a bug skips application-level filtering
TLS / HTTPS
Alloy supports automatic TLS certificate provisioning via Let’s Encrypt using the ACME protocol. When TLS is enabled, the server listens over HTTPS with no reverse proxy required.
Enabling automatic TLS
Pass `--tls-domain` to `alloy serve` with the public domain name:

```bash
alloy serve --tls-domain api.example.com --tls-contact admin@example.com
```
This will:
- Automatically request a TLS certificate from Let’s Encrypt for `api.example.com`
- Cache the certificate on disk (default: `./acme_cache`)
- Serve HTTPS on the configured port (default 3000)
CLI flags
| Flag | Description |
|---|---|
| `--tls-domain` | Domain to provision a certificate for. Omit for plain HTTP. |
| `--tls-contact` | Email address for Let’s Encrypt notifications (recommended). |
| `--tls-staging` | Use the Let’s Encrypt staging environment (for testing — avoids rate limits). |
| `--tls-cache-dir` | Directory to cache certificates. Default: `./acme_cache`. |
Environment variables (TLS)
| Variable | Default | Description |
|---|---|---|
| `ALLOY_TLS_DOMAIN` | — | Domain for automatic TLS (alternative to `--tls-domain`). |
| `ALLOY_TLS_CONTACT` | — | Contact email for Let’s Encrypt (alternative to `--tls-contact`). |
| `ALLOY_TLS_STAGING` | `false` | Use staging environment (alternative to `--tls-staging`). |
| `ALLOY_TLS_CACHE_DIR` | `./acme_cache` | Certificate cache directory (alternative to `--tls-cache-dir`). |
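The same TLS settings can come from the environment instead of flags, which suits container and unit-file deployments. The domain and cache path here are placeholders:

```bash
# Environment-based TLS configuration (values are examples)
export ALLOY_TLS_DOMAIN=api.example.com
export ALLOY_TLS_CONTACT=ops@example.com
export ALLOY_TLS_CACHE_DIR=/var/lib/alloy/acme   # any persistent directory

# alloy serve   # picks the settings up from the environment
```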
Example: production HTTPS
```bash
alloy serve \
  --tls-domain api.example.com \
  --tls-contact ops@example.com \
  --port 443
```
Once running, verify with curl:
```bash
curl -s https://api.example.com/health
```

Expected response:

```json
{"status":"ok"}
```
Example: staging / testing
Use `--tls-staging` to test certificate provisioning without hitting production rate limits:

```bash
alloy serve \
  --tls-domain staging.example.com \
  --tls-contact ops@example.com \
  --tls-staging
```
Docker with TLS
Mount a volume for certificate persistence across container restarts:
```bash
docker run -d \
  --name alloy \
  -p 443:443 \
  -v alloy-data:/data \
  -v alloy-certs:/certs \
  -e ALLOY_DATABASE_URL=sqlite:///data/alloy.db \
  alloy \
  serve --tls-domain api.example.com --tls-contact ops@example.com --tls-cache-dir /certs --port 443
```
Notes
- DNS must resolve first: The domain must point to your server before ACME validation can succeed.
- Port 443: The ACME challenge endpoint must be reachable from the internet (HTTP-01 uses port 80; TLS-ALPN-01 uses port 443). If running behind a firewall, ensure the challenge port is open.
- Certificate renewal: Certificates are renewed automatically before expiration. Keep the cache directory persistent.
- Without TLS flags: The server runs plain HTTP exactly as before — no TLS overhead.
Docker Deployment
Building the image
```bash
docker build -t alloy .
```
The multi-stage Dockerfile produces a minimal Debian-based image with just the alloy binary.
Running with SQLite
```bash
docker run -d \
  --name alloy \
  -p 3000:3000 \
  -v alloy-data:/data \
  alloy
```
Data is stored at /data/alloy.db inside the container. The volume mount persists it across container restarts.
Running with PostgreSQL
```bash
docker run -d \
  --name alloy \
  -p 3000:3000 \
  -e ALLOY_DATABASE_URL=postgres://user:pass@db-host:5432/alloy \
  -e ALLOY_JWT_PRIVATE_KEY_FILE=/secrets/private.pem \
  -e ALLOY_JWT_PUBLIC_KEY_FILE=/secrets/public.pem \
  -v /path/to/secrets:/secrets:ro \
  alloy
```
Volume mounts
| Mount | Purpose |
|---|---|
| `/data` | SQLite database file (default `ALLOY_DATABASE_URL=sqlite:///data/alloy.db`) |
| `/secrets` | JWT key files (if using file-based keys) |
Environment Variables Reference
Core
| Variable | Default | Description |
|---|---|---|
| `ALLOY_DATABASE_URL` | `sqlite://alloy.db` | Database connection string. Use `sqlite://path` for SQLite or `postgres://...` for PostgreSQL. |
| `ALLOY_AUTO_MIGRATE` | `true` | Run database migrations automatically on startup. Set to `false` to skip. |
| `PORT` | `3000` | TCP port the HTTP server listens on. |
| `ALLOY_REGISTRATION` | `open` | Registration mode: `open` (anyone can register) or `invite` (invite-only). |
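Putting the core settings together, a production-leaning environment might look like this. Host name, credentials, and port are placeholders, not defaults:

```bash
# Example production configuration (illustrative values)
export ALLOY_DATABASE_URL=postgres://alloy:secret@db.internal:5432/alloy
export PORT=8080
export ALLOY_REGISTRATION=invite   # lock down signup to invites
export ALLOY_AUTO_MIGRATE=true     # apply embedded migrations on boot
```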
Authentication (JWT)
| Variable | Default | Description |
|---|---|---|
| `ALLOY_JWT_PRIVATE_KEY` | (required) | Ed25519/RSA private key (PEM string) for signing JWTs. |
| `ALLOY_JWT_PRIVATE_KEY_FILE` | — | Path to private key file (alternative to inline). |
| `ALLOY_JWT_PUBLIC_KEY` | (required) | Corresponding public key (PEM string) for verifying JWTs. |
| `ALLOY_JWT_PUBLIC_KEY_FILE` | — | Path to public key file (alternative to inline). |
| `ALLOY_JWT_ISSUER` | `alloy` | JWT `iss` claim value. |
| `ALLOY_JWT_AUDIENCE` | `alloy-api` | JWT `aud` claim value. |
| `ALLOY_JWT_TTL_SECONDS` | `3600` | JWT token lifetime in seconds. |
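The key pair can be generated with OpenSSL. A sketch for Ed25519 (one of the supported key types); the file names are arbitrary, and you can either point the `_FILE` variables at them or inline the PEM contents:

```bash
# Generate an Ed25519 signing key and derive its public half (PEM format)
openssl genpkey -algorithm ED25519 -out jwt_private.pem
openssl pkey -in jwt_private.pem -pubout -out jwt_public.pem

export ALLOY_JWT_PRIVATE_KEY_FILE=$PWD/jwt_private.pem
export ALLOY_JWT_PUBLIC_KEY_FILE=$PWD/jwt_public.pem
```

Keep the private key out of version control; only the public key is safe to distribute to verifiers.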
S3 / Object Storage (Attachments)
| Variable | Default | Description |
|---|---|---|
| `ALLOY_S3_ENDPOINT` | — | S3-compatible endpoint URL (e.g., `http://localhost:9000` for MinIO). |
| `ALLOY_S3_BUCKET` | `alloy-attachments` | Bucket name for file attachments. |
| `ALLOY_S3_REGION` | `us-east-1` | S3 region. |
| `ALLOY_S3_ACCESS_KEY_ID` | — | S3 access key. |
| `ALLOY_S3_SECRET_ACCESS_KEY` | — | S3 secret key. |
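For the Docker Compose setup described earlier, the matching attachment-storage configuration uses MinIO’s development defaults:

```bash
# Attachment storage pointed at the docker-compose MinIO instance
export ALLOY_S3_ENDPOINT=http://localhost:9000
export ALLOY_S3_BUCKET=alloy-attachments
export ALLOY_S3_ACCESS_KEY_ID=minioadmin
export ALLOY_S3_SECRET_ACCESS_KEY=minioadmin
```

In production, swap these for your real S3 (or S3-compatible) endpoint and scoped credentials.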
Security
| Variable | Default | Description |
|---|---|---|
| `ALLOY_HTTPS` | `false` | Set to `true` to mark cookies as Secure (requires TLS termination). |
| `ALLOY_CORS_ORIGINS` | (permissive) | Comma-separated list of allowed CORS origins. |
Rate Limiting
| Variable | Default | Description |
|---|---|---|
| `ALLOY_RATE_LIMIT_GLOBAL` | — | Global requests per minute limit. |
| `ALLOY_RATE_LIMIT_AUTH` | — | Authenticated endpoint requests per minute. |
| `ALLOY_RATE_LIMIT_LOGIN` | — | Login endpoint requests per minute. |
Slack Integration
| Variable | Default | Description |
|---|---|---|
| `ALLOY_SLACK_SIGNING_SECRET` | — | Slack app signing secret for verifying webhook requests. |
| `ALLOY_SLACK_BOT_TOKEN` | — | Slack bot OAuth token (starts with `xoxb-`). |
| `ALLOY_SLACK_NOTIFICATION_CHANNEL` | `general` | Default Slack channel for notifications. |
| `ALLOY_SLACK_DEFAULT_USER_ID` | — | Fallback Slack user ID when user mapping is unavailable. |
GitHub Integration
| Variable | Default | Description |
|---|---|---|
| `ALLOY_GITHUB_WEBHOOK_SECRET` | — | Secret for verifying GitHub webhook payloads (HMAC-SHA256). |
SCIM Provisioning
| Variable | Default | Description |
|---|---|---|
| `ALLOY_SCIM_BEARER_TOKEN` | — | Bearer token for authenticating SCIM provisioning requests. |
| `ALLOY_SCIM_ORG_ID` | — | Organization ID to provision SCIM users/groups into. |
MCP Server
| Variable | Default | Description |
|---|---|---|
| `ALLOY_API_URL` | (required) | Base URL of the Alloy API (e.g., `http://localhost:3000`). |
| `ALLOY_API_TOKEN` | (required) | API key for authenticating MCP requests (must start with `alloy_live_` or `alloy_test_`). |
TUI
| Variable | Default | Description |
|---|---|---|
| `ALLOY_BASE_URL` | `http://localhost:3000` | Alloy API URL for the TUI client. |
Auto-Migration Behavior
On startup, Alloy checks `ALLOY_AUTO_MIGRATE` (defaults to `true`):

- If `true`: runs all pending migrations from the embedded migration set. Migrations are compiled into the binary — no external SQL files needed at runtime.
- If `false`: skips migrations entirely. Use this in environments where you run migrations as a separate step.
Migrations are idempotent — re-running an already-applied migration is a no-op. The migration runner detects the database backend (SQLite or PostgreSQL) and applies the correct dialect automatically.
MCP Server
The MCP server (alloy-mcp) lets AI assistants like Claude interact with Alloy. For complete setup instructions, configuration examples, and troubleshooting, see the dedicated MCP Guide.
For a full reference of available tools, parameters, and response formats, see the MCP Tools Reference.
Integrations Setup
Slack
- Create a Slack App at api.slack.com/apps
- Enable Event Subscriptions and set the request URL to `https://your-alloy-host/api/v1/integrations/slack/events`
- Enable Slash Commands (e.g., `/alloy`) with the request URL `https://your-alloy-host/api/v1/integrations/slack/commands`
- Under OAuth & Permissions, add bot scopes: `chat:write`, `commands`
- Install the app to your workspace and copy the Bot User OAuth Token
- Set environment variables:

```bash
ALLOY_SLACK_SIGNING_SECRET=your_signing_secret
ALLOY_SLACK_BOT_TOKEN=xoxb-your-bot-token
ALLOY_SLACK_NOTIFICATION_CHANNEL=engineering  # optional
```
GitHub
- Create a GitHub App or use repository webhooks
- Set the webhook URL to `https://your-alloy-host/api/v1/integrations/github/webhook`
- Select events: `push`, `pull_request`, `issues`, etc.
- Generate a webhook secret and set:

```bash
ALLOY_GITHUB_WEBHOOK_SECRET=your_webhook_secret
```
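To exercise the endpoint locally you can compute the signature GitHub would send: GitHub signs the raw request body with HMAC-SHA256 and sends the hex digest in the `X-Hub-Signature-256` header. The secret and payload below are made-up values:

```bash
SECRET=your_webhook_secret
BODY='{"zen":"Design for failure."}'

# Hex HMAC-SHA256 of the exact body bytes, keyed with the webhook secret
SIG=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')

echo "X-Hub-Signature-256: sha256=$SIG"

# Then replay it against a running instance (commented; needs a live server):
# curl -X POST https://your-alloy-host/api/v1/integrations/github/webhook \
#   -H "X-Hub-Signature-256: sha256=$SIG" \
#   -H "Content-Type: application/json" \
#   -d "$BODY"
```

Note the signature is over the exact bytes sent; any re-serialization of the JSON invalidates it.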
Okta SSO (OIDC)
SSO is configured per-organization via the API (not environment variables). To set up OIDC:
- In Okta, create a new Web Application with:
  - Sign-in redirect URI: `https://your-alloy-host/api/v1/auth/sso/callback`
  - Sign-out redirect URI: `https://your-alloy-host`
- Note the Client ID, Client Secret, and Issuer URL (e.g., `https://your-org.okta.com/oauth2/default`)
- Register the identity provider via the Alloy API for your organization:

```bash
curl -X POST https://your-alloy-host/api/v1/orgs/{org_id}/identity-providers \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "provider_type": "oidc",
    "provider_name": "okta",
    "issuer_url": "https://your-org.okta.com/oauth2/default",
    "client_id": "your-client-id",
    "client_secret": "your-client-secret"
  }'
```

- Users can then sign in via `GET /api/v1/auth/sso/login?org={org_slug}`
Troubleshooting
Port conflicts
Symptom: `error binding to 0.0.0.0:3000: Address already in use`

Solution: Either stop the other process using port 3000, or set a different port:

```bash
PORT=3001 alloy serve
```
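To find which process holds the port before stopping it, on Linux (`ss` ships with iproute2):

```bash
# List the TCP listener on port 3000; shows the owning PID when permitted
ss -ltnp 'sport = :3000'
```

On macOS, `lsof -iTCP:3000 -sTCP:LISTEN` gives the same information.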
Migration failures
Symptom: `migration error` or `refinery` errors on startup
Solutions:
- Ensure the database is accessible (correct `ALLOY_DATABASE_URL`)
- For PostgreSQL, verify the user has `CREATE TABLE` / `ALTER TABLE` permissions
- If a migration was partially applied, check the `refinery_schema_history` table and fix manually
- Set `ALLOY_AUTO_MIGRATE=false` to skip auto-migration and run migrations separately
Authentication errors
Symptom: `401 Unauthorized` on API requests
Solutions:
- Verify JWT keys are set: both `ALLOY_JWT_PRIVATE_KEY` (or `_FILE`) and `ALLOY_JWT_PUBLIC_KEY` (or `_FILE`) are required
- For API key auth, ensure the key starts with `alloy_live_` or `alloy_test_`
- Check that `ALLOY_JWT_ISSUER` and `ALLOY_JWT_AUDIENCE` match between token issuer and verifier
- If using SSO, confirm the identity provider’s `issuer_url` is correct and reachable
Database connection issues (PostgreSQL)
Symptom: `error connecting to database` or connection timeouts
Solutions:
- Verify PostgreSQL is running: `pg_isready -h localhost -p 5432`
- Check the connection string format: `postgres://user:password@host:port/database`
- Ensure the database exists: `createdb alloy_dev`
- For Docker Compose: `docker compose up -d postgres` and wait for the health check
CORS errors
Symptom: Browser console shows CORS errors when calling the API
Solution: Set allowed origins:

```bash
ALLOY_CORS_ORIGINS=http://localhost:5173,https://your-app.com alloy serve
```
Slack integration not working
Symptom: Slash commands or events not reaching Alloy
Solutions:
- Verify `ALLOY_SLACK_SIGNING_SECRET` matches your Slack app’s signing secret
- Ensure webhook URLs are publicly reachable (use ngrok for local development)
- Check that bot scopes include `chat:write` and `commands`
MCP server connection issues
Symptom: Claude Desktop can’t connect to the MCP server
Solution: See the troubleshooting section in the MCP Guide for detailed solutions covering connection errors, authentication issues, and configuration problems.