Merge pull request #261 from AndrewNLauder/feat/macos-support
feat: macOS support, run at boot (systemd/launchd), and auth token re-sync
@@ -50,6 +50,8 @@ Open:

- Frontend: `http://localhost:${FRONTEND_PORT:-3000}`
- Backend health: `http://localhost:${BACKEND_PORT:-8000}/healthz`

To have containers restart on failure and after host reboot, add `restart: unless-stopped` to the `db`, `redis`, `backend`, and `frontend` services in `compose.yml`, and ensure Docker itself is configured to start at boot.
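
A minimal sketch of that change (service names as named in this repo's `compose.yml`; trimmed to the relevant key):

```yaml
services:
  db:
    restart: unless-stopped
  redis:
    restart: unless-stopped
  backend:
    restart: unless-stopped
  frontend:
    restart: unless-stopped
```

Unlike `always`, `unless-stopped` will not restart a container that you stopped manually, which is usually what you want for maintenance.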

### 3) Verify

```bash
@@ -112,3 +114,65 @@ Typical setup (outline):

- Ensure the frontend can reach the backend over the configured `NEXT_PUBLIC_API_URL`

This section is intentionally minimal until we standardize a recommended proxy (Caddy/Nginx/Traefik).

## Run at boot (local install)

If you installed Mission Control **without Docker** (e.g. using `install.sh` in "local" mode, or inside a VM where Docker is not used), the installer does not configure run-at-boot by default (on Linux, `install.sh --install-service` can install systemd user units for you). Otherwise, you can start the stack manually after each reboot, or configure the OS to start it for you.

### Linux (systemd)

Use the example systemd units and instructions in [systemd/README.md](./systemd/README.md). In short:

1. Copy the unit files from `docs/deployment/systemd/` and replace `REPO_ROOT`, `BACKEND_PORT`, and `FRONTEND_PORT` with your paths and ports.
2. Install the units under `~/.config/systemd/user/` (user) or `/etc/systemd/system/` (system).
3. Enable and start the backend, frontend, and RQ worker services.

The RQ queue worker is required for gateway lifecycle (wake/check-in) and webhook delivery; run it as a separate unit.

### macOS (launchd)

LaunchAgents run at **user login**, not at machine boot. Use LaunchAgents so the backend, frontend, and worker run under your user and restart on failure. For true boot-time startup you would need LaunchDaemons or other configuration (not covered here).

1. Create a plist for each process under `~/Library/LaunchAgents/`, e.g. `com.openclaw.mission-control.backend.plist`:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.openclaw.mission-control.backend</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/bin/env</string>
    <string>uv</string>
    <string>run</string>
    <string>uvicorn</string>
    <string>app.main:app</string>
    <string>--host</string>
    <string>0.0.0.0</string>
    <string>--port</string>
    <string>8000</string>
  </array>
  <key>WorkingDirectory</key>
  <string>REPO_ROOT/backend</string>
  <key>EnvironmentVariables</key>
  <dict>
    <key>PATH</key>
    <string>/usr/local/bin:/opt/homebrew/bin:REPO_ROOT/backend/.venv/bin</string>
  </dict>
  <key>KeepAlive</key>
  <true/>
  <key>RunAtLoad</key>
  <true/>
</dict>
</plist>
```

Replace `REPO_ROOT` with the actual repo path. Ensure `uv` is on `PATH` (e.g. add `~/.local/bin` to the `PATH` in the plist). Load with:

```bash
launchctl load ~/Library/LaunchAgents/com.openclaw.mission-control.backend.plist
```

2. Add similar plists for the frontend (`npm run start -- --hostname 0.0.0.0 --port 3000` with `REPO_ROOT/frontend` as the working directory) and for the RQ worker (working directory `REPO_ROOT/backend`, with `ProgramArguments` entries `uv`, `run`, `python`, `../scripts/rq`, `worker`).
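
Since the per-service plists differ only in label, program arguments, and working directory, they can be generated from a small template. A sketch for the RQ worker (`REPO_ROOT` and the output directory are example values for illustration; write to `~/Library/LaunchAgents/` for a real install):

```shell
#!/usr/bin/env bash
# Generate a LaunchAgent plist for the RQ worker from shell variables.
# REPO_ROOT and OUT_DIR are placeholders -- substitute your own paths.
REPO_ROOT="$HOME/openclaw-mission-control"
OUT_DIR="$(mktemp -d)"   # demo output; use ~/Library/LaunchAgents/ for real installs
PLIST="$OUT_DIR/com.openclaw.mission-control.rq-worker.plist"

# The heredoc expands $REPO_ROOT, so the generated file contains the real path.
cat > "$PLIST" <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.openclaw.mission-control.rq-worker</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/bin/env</string>
    <string>uv</string>
    <string>run</string>
    <string>python</string>
    <string>../scripts/rq</string>
    <string>worker</string>
  </array>
  <key>WorkingDirectory</key>
  <string>$REPO_ROOT/backend</string>
  <key>KeepAlive</key>
  <true/>
  <key>RunAtLoad</key>
  <true/>
</dict>
</plist>
EOF

echo "wrote $PLIST"
```

After copying the generated file into `~/Library/LaunchAgents/`, load it with `launchctl load` as shown above.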

docs/deployment/systemd/README.md (new file, 60 lines)
@@ -0,0 +1,60 @@

# Systemd unit files (local install, run at boot)

Example systemd units for running Mission Control at boot when installed **without Docker** (e.g. a local install in a VM).

## Prerequisites

- **Backend**: `uv`, Python 3.12+, and `backend/.env` configured (including `DATABASE_URL`, and `RQ_REDIS_URL` if using the queue worker).
- **Frontend**: Node.js 22+ and `frontend/.env` (e.g. `NEXT_PUBLIC_API_URL`).
- **RQ worker**: Redis must be running and reachable; `backend/.env` must set `RQ_REDIS_URL` and `RQ_QUEUE_NAME` to match the backend API.

If you use Docker only for Postgres and/or Redis, start those services first (e.g. `docker compose up -d db`, and Redis if needed), or add `After=docker.service` and start the stack via a separate unit or script.

## Placeholders

Before installing, replace the following in each unit file:

- `REPO_ROOT` — absolute path to the Mission Control repo (e.g. `/home/user/openclaw-mission-control`). Must not contain spaces (systemd unit values do not support shell-style quoting).
- `BACKEND_PORT` — backend port (default `8000`).
- `FRONTEND_PORT` — frontend port (default `3000`).

Example (from the repo root):

```bash
REPO_ROOT="$(pwd)"
for f in docs/deployment/systemd/openclaw-mission-control-*.service; do
  sed -e "s|REPO_ROOT|$REPO_ROOT|g" -e "s|BACKEND_PORT|8000|g" -e "s|FRONTEND_PORT|3000|g" "$f" \
    > "$(basename "$f")"
done
# Then copy the generated .service files to ~/.config/systemd/user/ or /etc/systemd/system/
```

**User units** start at **user login** by default. To have services start at **machine boot** without logging in, enable lingering for your user: `loginctl enable-linger $USER`. Alternatively, use system-wide units in `/etc/systemd/system/` (see below).

## Install and enable

**User units** (recommended for single-user / VM installs):

```bash
cp openclaw-mission-control-backend.service openclaw-mission-control-frontend.service openclaw-mission-control-rq-worker.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable openclaw-mission-control-backend openclaw-mission-control-frontend openclaw-mission-control-rq-worker
systemctl --user start openclaw-mission-control-backend openclaw-mission-control-frontend openclaw-mission-control-rq-worker
```

**System-wide units** (e.g. under `/etc/systemd/system/`):

```bash
sudo cp openclaw-mission-control-*.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now openclaw-mission-control-backend openclaw-mission-control-frontend openclaw-mission-control-rq-worker
```

## Order

Start order is not strict between the backend, frontend, and worker; all use `After=network-online.target`. Ensure Postgres (and Redis, if used) are running before or alongside the backend and worker — e.g. start the Docker services first, or run Postgres/Redis as system units and make the Mission Control units depend on them.
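
For system-wide installs, that dependency can be made explicit with ordering directives. A minimal sketch, assuming your distribution's default unit names (`postgresql.service`, `redis.service` — verify yours); note that user units cannot order against system units, so this applies only to units under `/etc/systemd/system/`:

```ini
# Additions to the [Unit] section of the backend and worker units (system-wide only).
[Unit]
After=network-online.target postgresql.service redis.service
Wants=network-online.target
# Add Requires= as well if the service should fail when the database is absent:
# Requires=postgresql.service
```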

## Logs

- `journalctl --user -u openclaw-mission-control-backend -f` (or `sudo journalctl -u openclaw-mission-control-backend -f` for system units)
- Same for `openclaw-mission-control-frontend` and `openclaw-mission-control-rq-worker`.

@@ -0,0 +1,23 @@

# Mission Control backend (FastAPI) — example systemd unit for local install.
# Copy to ~/.config/systemd/user/ or /etc/systemd/system/, then:
#   sed -e 's|REPO_ROOT|/path/to/openclaw-mission-control|g' -e 's|BACKEND_PORT|8000|g' -i openclaw-mission-control-backend.service
#   systemctl --user daemon-reload            # or: sudo systemctl daemon-reload
#   systemctl --user enable --now openclaw-mission-control-backend   # or: sudo systemctl enable --now ...
#
# Requires: uv in PATH (e.g. ~/.local/bin), backend/.env present.

[Unit]
Description=Mission Control backend (FastAPI)
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
WorkingDirectory=REPO_ROOT/backend
EnvironmentFile=-REPO_ROOT/backend/.env
ExecStart=uv run uvicorn app.main:app --host 0.0.0.0 --port BACKEND_PORT
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target
@@ -0,0 +1,23 @@

# Mission Control frontend (Next.js) — example systemd unit for local install.
# Copy to ~/.config/systemd/user/ or /etc/systemd/system/, then:
#   sed -e 's|REPO_ROOT|/path/to/openclaw-mission-control|g' -e 's|FRONTEND_PORT|3000|g' -i openclaw-mission-control-frontend.service
#   systemctl --user daemon-reload            # or: sudo systemctl daemon-reload
#   systemctl --user enable --now openclaw-mission-control-frontend   # or: sudo systemctl enable --now ...
#
# Requires: Node.js/npm in PATH (e.g. from nvm or system install), frontend/.env present.

[Unit]
Description=Mission Control frontend (Next.js)
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
WorkingDirectory=REPO_ROOT/frontend
EnvironmentFile=-REPO_ROOT/frontend/.env
ExecStart=npm run start -- --hostname 0.0.0.0 --port FRONTEND_PORT
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target
@@ -0,0 +1,24 @@

# Mission Control RQ queue worker — example systemd unit for local install.
# Processes lifecycle and webhook queue tasks; required for gateway wake/check-in and webhooks.
# Copy to ~/.config/systemd/user/ or /etc/systemd/system/, then:
#   sed -e 's|REPO_ROOT|/path/to/openclaw-mission-control|g' -i openclaw-mission-control-rq-worker.service
#   systemctl --user daemon-reload            # or: sudo systemctl daemon-reload
#   systemctl --user enable --now openclaw-mission-control-rq-worker   # or: sudo systemctl enable --now ...
#
# Requires: uv in PATH, Redis reachable (RQ_REDIS_URL in backend/.env), backend/.env present.

[Unit]
Description=Mission Control RQ queue worker
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
WorkingDirectory=REPO_ROOT/backend
EnvironmentFile=-REPO_ROOT/backend/.env
ExecStart=uv run python ../scripts/rq worker
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target
@@ -104,3 +104,29 @@ Actions:

- gateway logs around bootstrap
- worker logs around lifecycle events
- agent `last_provision_error`, `wake_attempts`, `last_seen_at`

## Re-syncing auth tokens when Mission Control and OpenClaw have drifted

Mission Control stores a hash of each agent's token and provisions OpenClaw by writing templates (e.g. `TOOLS.md`) that include `AUTH_TOKEN`. If the token on the gateway and the hash on the backend drift apart (e.g. after a reinstall, token change, or manual edit), heartbeats can fail with 401 errors and the agent may appear offline.
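
To see why drift breaks auth, it helps to picture the check: the backend compares a digest of the presented token against the digest it stored at provision time. A hypothetical illustration only — Mission Control's actual hashing scheme is not shown here, and `sha256sum` plus the token strings are assumptions for the demo:

```shell
# Stored at provision time: a digest of the agent token, not the token itself.
stored_digest="$(printf '%s' "original-agent-token" | sha256sum | awk '{print $1}')"

# Later, the gateway presents a token with each heartbeat.
presented="original-agent-token"   # change this value to simulate drift

# Compare digest of the presented token with the stored digest.
if [ "$(printf '%s' "$presented" | sha256sum | awk '{print $1}')" = "$stored_digest" ]; then
  echo "token accepted"
else
  echo "401: token drifted, re-sync required"
fi
```

With `presented` changed to any other value the comparison fails, which is the 401 described above; rotating tokens rewrites both sides so they agree again.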

To re-sync:

1. Ensure Mission Control is running (API and queue worker).
2. Run **template sync with token rotation** so the backend issues new agent tokens and rewrites `AUTH_TOKEN` into the gateway's agent files.

**Via API (curl):**

```bash
curl -X POST "http://localhost:8000/api/v1/gateways/GATEWAY_ID/templates/sync?rotate_tokens=true" \
  -H "Authorization: Bearer YOUR_LOCAL_AUTH_TOKEN"
```

Replace `GATEWAY_ID` (from the Gateways list or the gateway URL in the UI) and `YOUR_LOCAL_AUTH_TOKEN` (your local auth token).

**Via CLI (from the repo root):**

```bash
cd backend && uv run python scripts/sync_gateway_templates.py --gateway-id GATEWAY_ID --rotate-tokens
```

After a successful sync, OpenClaw agents will have new `AUTH_TOKEN` values in their workspace files; the next heartbeat or bootstrap will use the new token. If the gateway was offline, trigger a wake/update from Mission Control so agents restart and pick up the new token.

install.sh (140 lines changed)
@@ -3,13 +3,7 @@

set -euo pipefail

SCRIPT_NAME="$(basename "$0")"
if [[ "$SCRIPT_NAME" == "bash" || "$SCRIPT_NAME" == "-bash" ]]; then
  SCRIPT_NAME="install.sh"
fi
REPO_ROOT=""
REPO_GIT_URL="${OPENCLAW_REPO_URL:-https://github.com/abhi1693/openclaw-mission-control.git}"
REPO_CLONE_REF="${OPENCLAW_REPO_REF:-}"
REPO_DIR_NAME="openclaw-mission-control"
REPO_ROOT="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd -P)"
STATE_DIR="${XDG_STATE_HOME:-$HOME/.local/state}"
LOG_DIR="$STATE_DIR/openclaw-mission-control-install"
@@ -30,6 +24,7 @@ FORCE_LOCAL_AUTH_TOKEN=""
FORCE_DB_MODE=""
FORCE_DATABASE_URL=""
FORCE_START_SERVICES=""
FORCE_INSTALL_SERVICE=""

if [[ -t 0 ]]; then
  INTERACTIVE=1
@@ -56,66 +51,6 @@ command_exists() {
  command -v "$1" >/dev/null 2>&1
}

repo_has_layout() {
  local dir="$1"
  [[ -f "$dir/Makefile" && -f "$dir/compose.yml" ]]
}

resolve_script_directory() {
  local script_source=""
  local script_dir=""

  if [[ -n "${BASH_SOURCE:-}" && -n "${BASH_SOURCE[0]:-}" ]]; then
    script_source="${BASH_SOURCE[0]}"
  elif [[ -n "${0:-}" && "${0:-}" != "bash" ]]; then
    script_source="$0"
  fi

  [[ -n "$script_source" ]] || return 1

  script_dir="$(cd -- "$(dirname -- "$script_source")" 2>/dev/null && pwd -P)" || return 1
  printf '%s\n' "$script_dir"
}

bootstrap_repo_checkout() {
  local target_dir="$PWD/$REPO_DIR_NAME"

  if ! command_exists git; then
    die "Git is required for one-line bootstrap installs. Install git and re-run."
  fi
  if [[ -e "$target_dir" ]]; then
    die "Cannot auto-clone into $target_dir because it already exists. Run ./install.sh from that repository or remove the directory."
  fi

  info "Repository checkout not found. Cloning into $target_dir ..."
  if [[ -n "$REPO_CLONE_REF" ]]; then
    git clone --depth 1 --branch "$REPO_CLONE_REF" "$REPO_GIT_URL" "$target_dir"
  else
    git clone --depth 1 "$REPO_GIT_URL" "$target_dir"
  fi

  REPO_ROOT="$target_dir"
  SCRIPT_NAME="install.sh"
}

resolve_repo_root() {
  local script_dir=""

  if script_dir="$(resolve_script_directory)"; then
    if repo_has_layout "$script_dir"; then
      REPO_ROOT="$script_dir"
      return
    fi
  fi

  if repo_has_layout "$PWD"; then
    REPO_ROOT="$PWD"
    return
  fi

  bootstrap_repo_checkout
}
usage() {
  cat <<EOF
Usage: $SCRIPT_NAME [options]

@@ -131,6 +66,7 @@ Options:
  --db-mode <docker|external>    Local mode only
  --database-url <url>           Required when --db-mode external
  --start-services <yes|no>      Local mode only
  --install-service              Local mode only: install systemd user units for run at boot (Linux)
  -h, --help

If an option is omitted, the script prompts in interactive mode and uses defaults in non-interactive mode.
@@ -220,6 +156,10 @@ parse_args() {
      FORCE_START_SERVICES="$2"
      shift 2
      ;;
    --install-service)
      FORCE_INSTALL_SERVICE="yes"
      shift
      ;;
    -h|--help)
      usage
      exit 0
@@ -733,9 +673,52 @@ start_local_services() {
  )
}

install_systemd_services() {
  local backend_port="$1"
  local frontend_port="$2"
  local systemd_user_dir
  systemd_user_dir="${XDG_CONFIG_HOME:-$HOME/.config}/systemd/user"
  local units_dir="$REPO_ROOT/docs/deployment/systemd"

  if [[ "$REPO_ROOT" == *" "* ]]; then
    warn "REPO_ROOT must not contain spaces (systemd unit paths do not support it): $REPO_ROOT"
    return 1
  fi
  if [[ "$PLATFORM" != "linux" ]]; then
    info "Skipping systemd install (not Linux). For macOS run-at-boot see docs/deployment/README.md (launchd)."
    return 0
  fi
  if [[ ! -d "$units_dir" ]]; then
    warn "Systemd units dir not found: $units_dir"
    return 1
  fi
  for name in openclaw-mission-control-backend openclaw-mission-control-frontend openclaw-mission-control-rq-worker; do
    if [[ ! -f "$units_dir/$name.service" ]]; then
      warn "Unit file not found: $units_dir/$name.service"
      return 1
    fi
  done

  mkdir -p "$systemd_user_dir"
  for name in openclaw-mission-control-backend openclaw-mission-control-frontend openclaw-mission-control-rq-worker; do
    sed -e "s|REPO_ROOT|$REPO_ROOT|g" \
        -e "s|BACKEND_PORT|$backend_port|g" \
        -e "s|FRONTEND_PORT|$frontend_port|g" \
        "$units_dir/$name.service" > "$systemd_user_dir/$name.service"
    info "Installed $systemd_user_dir/$name.service"
  done
  if command_exists systemctl; then
    systemctl --user daemon-reload
    systemctl --user enable openclaw-mission-control-backend openclaw-mission-control-frontend openclaw-mission-control-rq-worker
    info "Systemd user units enabled. Start with: systemctl --user start openclaw-mission-control-backend openclaw-mission-control-frontend openclaw-mission-control-rq-worker"
  else
    warn "systemctl not found; units were copied but not enabled."
  fi
}

ensure_repo_layout() {
  [[ -f "$REPO_ROOT/Makefile" ]] || die "Missing Makefile in expected repository root: $REPO_ROOT"
  [[ -f "$REPO_ROOT/compose.yml" ]] || die "Missing compose.yml in expected repository root: $REPO_ROOT"
  [[ -f "$REPO_ROOT/Makefile" ]] || die "Run $SCRIPT_NAME from repository root."
  [[ -f "$REPO_ROOT/compose.yml" ]] || die "Missing compose.yml in repository root."
}

main() {
@@ -750,7 +733,6 @@ main() {
  local database_url=""
  local start_services="yes"

  resolve_repo_root
  cd "$REPO_ROOT"
  ensure_repo_layout
  parse_args "$@"
@@ -879,14 +861,6 @@ main() {
  if [[ "$deployment_mode" == "docker" ]]; then
    ensure_file_from_example "$REPO_ROOT/backend/.env" "$REPO_ROOT/backend/.env.example"

    # Docker services load backend/.env; ensure required runtime values are populated.
    upsert_env_value "$REPO_ROOT/backend/.env" "ENVIRONMENT" "prod"
    upsert_env_value "$REPO_ROOT/backend/.env" "AUTH_MODE" "local"
    upsert_env_value "$REPO_ROOT/backend/.env" "LOCAL_AUTH_TOKEN" "$local_auth_token"
    upsert_env_value "$REPO_ROOT/backend/.env" "CORS_ORIGINS" "http://$public_host:$frontend_port"
    upsert_env_value "$REPO_ROOT/backend/.env" "BASE_URL" "http://$public_host:$backend_port"
    upsert_env_value "$REPO_ROOT/backend/.env" "DB_AUTO_MIGRATE" "true"

    upsert_env_value "$REPO_ROOT/.env" "DB_AUTO_MIGRATE" "true"

    info "Starting production-like Docker stack..."
@@ -954,6 +928,16 @@ SUMMARY
    wait_for_http "http://127.0.0.1:$frontend_port" "Frontend" 120 || true
  fi

  if [[ -n "$FORCE_INSTALL_SERVICE" ]]; then
    if ! install_systemd_services "$backend_port" "$frontend_port"; then
      warn "Systemd service install failed; see errors above."
      die "Cannot continue when --install-service was requested and install failed."
    fi
    if [[ "$PLATFORM" == "linux" ]]; then
      info "Run at boot: systemd user units were installed and enabled. Start with: systemctl --user start openclaw-mission-control-backend openclaw-mission-control-frontend openclaw-mission-control-rq-worker"
    fi
  fi

  cat <<SUMMARY

Bootstrap complete (Local mode).