chore(backend): upgrade deps and remove redis/rq

Abhimanyu Saharan
2026-02-10 16:05:49 +05:30
parent dcdc0a25b1
commit 5c25c4bb91
17 changed files with 538 additions and 439 deletions


@@ -8,7 +8,6 @@ At a high level:
 - The **frontend** is a Next.js app used by humans.
 - The **backend** is a FastAPI service that exposes REST endpoints under `/api/v1/*`.
 - **Postgres** stores core state (boards/tasks/agents/etc.).
-- **Redis** supports async/background primitives (RQ queue scaffolding exists).
 ## Components
@@ -20,7 +19,6 @@ flowchart LR
 FE -->|HTTP /api/v1/*| BE[FastAPI Backend :8000]
 BE -->|SQL| PG[(Postgres :5432)]
-BE -->|Redis protocol| R[(Redis :6379)]
 BE -->|WebSocket (optional integration)| GW[OpenClaw Gateway]
 GW --> OC[OpenClaw runtime]
@@ -50,8 +48,6 @@ flowchart LR
 - **Postgres**: persistence for boards/tasks/agents/approvals/etc.
   - Models: `backend/app/models/*`
   - Migrations: `backend/migrations/*`
-- **Redis**: used for background primitives.
-  - RQ helper: `backend/app/workers/queue.py`
 ### Gateway integration (optional)
 Mission Control can call into an OpenClaw Gateway over WebSockets.
@@ -64,7 +60,7 @@ Mission Control can call into an OpenClaw Gateway over WebSockets.
 ### UI → API
 1. Browser loads the Next.js frontend.
 2. Frontend calls backend endpoints under `/api/v1/*`.
-3. Backend reads/writes Postgres and may use Redis depending on the operation.
+3. Backend reads/writes Postgres.
 ### Auth (Clerk — required for now)
 - **Frontend** enables Clerk when a publishable key is present/valid.
@@ -76,11 +72,8 @@ Automation/agents can use the “agent” API surface:
 - Endpoints under `/api/v1/agent/*` (router: `backend/app/api/agent.py`).
 - Auth via `X-Agent-Token` (see `backend/app/core/agent_auth.py`, referenced from `backend/app/api/deps.py`).
-### Background jobs (RQ / Redis)
-The codebase includes RQ/Redis dependencies and a queue helper (`backend/app/workers/queue.py`).
-If/when background jobs are added, the expected shape is:
-- API enqueues work to Redis.
-- A separate RQ worker process executes queued jobs.
+### Background jobs
+There is currently no queue runtime configured in this repo.
 ## Key directories


@@ -4,14 +4,13 @@ This guide covers how to self-host **OpenClaw Mission Control** using the reposi
 > Scope
 > - This is a **dev-friendly self-host** setup intended for local or single-host deployments.
-> - For production hardening (TLS, backups, external Postgres/Redis, observability), see **Production notes** below.
+> - For production hardening (TLS, backups, external Postgres, observability), see **Production notes** below.
 ## What you get
 When running Compose, you get:
 - **Postgres** database (persistent volume)
-- **Redis** (persistent volume)
 - **Backend API** (FastAPI) on `http://localhost:${BACKEND_PORT:-8000}`
   - Health check: `GET /healthz`
 - **Frontend UI** (Next.js) on `http://localhost:${FRONTEND_PORT:-3000}`
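The `${BACKEND_PORT:-8000}` notation in these URLs is plain shell default expansion, not something Compose-specific; a quick sketch of how it behaves:

```bash
# ${VAR:-default} expands to $VAR if it is set, otherwise to the default.
# This is how the URLs above pick up overrides from .env.
unset BACKEND_PORT
echo "http://localhost:${BACKEND_PORT:-8000}/healthz"   # falls back to 8000
BACKEND_PORT=9000
echo "http://localhost:${BACKEND_PORT:-8000}/healthz"   # uses the override
```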
@@ -61,7 +60,6 @@ curl -I http://localhost:${FRONTEND_PORT:-3000}/
 `compose.yml` defines:
 - `db` (Postgres 16)
-- `redis` (Redis 7)
 - `backend` (FastAPI)
 - `frontend` (Next.js)
@@ -70,7 +68,6 @@ curl -I http://localhost:${FRONTEND_PORT:-3000}/
 By default:
 - Postgres: `5432` (`POSTGRES_PORT`)
-- Redis: `6379` (`REDIS_PORT`)
 - Backend: `8000` (`BACKEND_PORT`)
 - Frontend: `3000` (`FRONTEND_PORT`)
@@ -81,7 +78,6 @@ Ports are sourced from `.env` (passed via `--env-file .env`) and wired into `com
 Compose creates named volumes:
 - `postgres_data` → Postgres data directory
-- `redis_data` → Redis data directory
 These persist across `docker compose down`.
@@ -100,7 +96,7 @@ docker compose -f compose.yml --env-file .env ...
 ### Backend env
-The backend container loads `./backend/.env.example` via `env_file` and then overrides DB/Redis URLs for container networking.
+The backend container loads `./backend/.env.example` via `env_file` and then overrides the DB URL for container networking.
 If you need backend customization, prefer creating a real `backend/.env` and updating compose to use it (optional improvement).
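The `env_file`-plus-override pattern described above would look roughly like this in `compose.yml` (a sketch: the credentials, database name, and `db` service name are illustrative, not taken from the repo):

```yaml
services:
  backend:
    env_file:
      - ./backend/.env.example
    environment:
      # Override for container networking: point at the `db` service, not localhost.
      # User, password, and database name here are placeholders.
      DATABASE_URL: postgresql://postgres:postgres@db:5432/mission_control
```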
@@ -174,8 +170,6 @@ docker compose -f compose.yml --env-file .env logs -f --tail=200
   - If the repo doesn't have `frontend/public`, the Dockerfile should not `COPY public/`.
 - **Backend build fails looking for `uv.lock`**
   - If the backend build context is the repo root, the Dockerfile must copy `backend/uv.lock`, not `uv.lock`.
-- **Redis warning about `vm.overcommit_memory`**
-  - Usually non-fatal for dev; for stability under load, set `vm.overcommit_memory=1` on the host.
 ## Reset / start fresh
## Reset / start fresh
@@ -185,7 +179,7 @@ Safe (keeps volumes/data):
 docker compose -f compose.yml --env-file .env down
 ```
-Destructive (removes volumes; deletes Postgres/Redis data):
+Destructive (removes volumes; deletes Postgres data):
 ```bash
 docker compose -f compose.yml --env-file .env down -v
@@ -195,7 +189,7 @@ docker compose -f compose.yml --env-file .env down -v
 If you're running this beyond local dev, consider:
-- Run Postgres and Redis as managed services (or on separate hosts)
+- Run Postgres as a managed service (or on a separate host)
 - Add TLS termination (reverse proxy)
 - Configure backups for Postgres volume
 - Set explicit resource limits and healthchecks
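The last bullet can be sketched as Compose additions (a sketch only: the thresholds are illustrative, and the healthcheck assumes `curl` exists inside the backend image):

```yaml
services:
  backend:
    healthcheck:
      # Probes the /healthz endpoint the backend already exposes
      test: ["CMD-SHELL", "curl -fsS http://localhost:8000/healthz || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
```

Docker Compose v2 honors `deploy.resources.limits` outside Swarm, but verify against the Compose version you run.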


@@ -2,11 +2,11 @@
 This document describes **production-ish** deployment patterns for **OpenClaw Mission Control**.
-Mission Control is a web app (frontend) + API (backend) + Postgres + Redis. The simplest reliable
+Mission Control is a web app (frontend) + API (backend) + Postgres. The simplest reliable
 baseline is Docker Compose plus a reverse proxy with TLS.
 > This repo currently ships a developer-friendly `compose.yml`. For real production, you should:
-> - put Postgres/Redis on managed services or dedicated hosts when possible
+> - put Postgres on a managed service or dedicated host when possible
 > - terminate TLS at a reverse proxy
 > - set up backups + upgrades
 > - restrict network exposure (firewall)
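One minimal way to terminate TLS is a Caddy reverse proxy in front of both containers; a sketch, where the domain and upstream names are placeholders (Caddy provisions certificates automatically for a real domain):

```
app.example.com {
	# API traffic goes to the backend; everything else to the frontend
	reverse_proxy /api/* backend:8000
	reverse_proxy frontend:3000
}
```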
@@ -33,7 +33,6 @@ On one VM:
 - frontend container (internal port 3000)
 - backend container (internal port 8000)
 - Postgres container (internal 5432)
-- Redis container (internal 6379)
 ### Ports / firewall
@@ -44,7 +43,6 @@ Expose to the internet:
 Do **not** expose:
 - Postgres 5432
-- Redis 6379
 - backend 8000
 - frontend 3000
@@ -160,13 +158,13 @@ The main reason to split is reliability and blast-radius reduction.
 ### Option A: 2 hosts
 - Host 1: reverse proxy + frontend + backend
-- Host 2: Postgres + Redis (or managed)
+- Host 2: Postgres (or managed)
 ### Option B: 3 hosts
 - Host 1: reverse proxy + frontend
 - Host 2: backend
-- Host 3: Postgres + Redis (or managed)
+- Host 3: Postgres (or managed)
 ### Networking / security groups
@@ -175,14 +173,12 @@ Minimum rules:
 - Public internet → reverse proxy host: `80/443`
 - Reverse proxy host → backend host: `8000` (or whatever you publish internally)
 - Backend host → DB host: `5432`
-- Backend host → Redis host: `6379`
 Everything else: deny.
 ### Configuration considerations
 - `DATABASE_URL` must point to the DB host (not `localhost`).
-- `REDIS_URL` must point to the Redis host.
 - `CORS_ORIGINS` must include the public frontend URL.
 - `NEXT_PUBLIC_API_URL` should be the public API base URL.
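Put together, the environment for a split deployment might look like this sketch (hosts, domains, and credentials are all placeholders; only the variable names come from the docs):

```
# backend
DATABASE_URL=postgresql://mission:change-me@db.internal:5432/mission_control
CORS_ORIGINS=https://app.example.com

# frontend
NEXT_PUBLIC_API_URL=https://api.example.com
```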
@@ -197,7 +193,7 @@ The backend currently runs Alembic migrations on startup (see logs). In multi-ho
 - [ ] TLS is enabled, HTTP redirects to HTTPS
 - [ ] Only 80/443 exposed publicly
-- [ ] Postgres/Redis not publicly accessible
+- [ ] Postgres not publicly accessible
 - [ ] Backups tested (restore drill)
 - [ ] Log retention/rotation configured
 - [ ] Regular upgrade process (pull latest, rebuild, restart)
@@ -209,4 +205,3 @@ The backend currently runs Alembic migrations on startup (see logs). In multi-ho
 - `NEXT_PUBLIC_API_URL`
 - backend CORS settings (`CORS_ORIGINS`)
 - firewall rules between proxy ↔ backend