
Documentation Index

Fetch the complete documentation index at: https://docs.adrian.secureagentics.ai/llms.txt

Use this file to discover all available pages before exploring further.

The Adrian backend is a single Go server that combines the WebSocket ingestion endpoint, the dashboard REST API, and the in-process classifier engine. It runs alongside a Next.js dashboard and a Llama.cpp container serving a local Gemma 4 model under Docker Compose. The full open-source release lives at github.com/secureagentics/Adrian. This page covers the configuration surface and the externally-visible endpoints; for step-by-step bring-up and operational walkthroughs, see the repository README.

Bring-up

git clone https://github.com/secureagentics/Adrian
cd Adrian

# One-shot bootstrap: creates data/adrian.db, applies migrations,
# generates an admin password, downloads Gemma 4 E4B (~5 GB) into ./models/.
docker compose --profile setup run --rm setup bootstrap

# Start backend + dashboard + classifier
docker compose --profile llm up -d

# Dashboard at http://localhost:3000
# WebSocket ingestion at ws://localhost:8080/ws
Requires Docker + Docker Compose v2 and an NVIDIA GPU with the NVIDIA Container Toolkit. Around 10 GB free disk for the bundled classifier model.

Configuration

Backend config is read from environment variables loaded via the .env file the bootstrap writes. Override values directly in .env, or use the setup set-model subcommand to update model-related settings.
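
For instance, overriding a couple of defaults in .env might look like the fragment below. Keys and values are the documented ones from the table that follows; any other lines the bootstrap wrote should be left intact:

```ini
# .env (excerpt) - override only what you need
ADRIAN_LLM_CTX_SIZE=16384        # documented default: 8192
ADRIAN_BACKEND_PORT=9090         # documented default: 8080
ADRIAN_PII_REDACT=true           # documented default: true
```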
| Variable | Default | Purpose |
| --- | --- | --- |
| ADRIAN_LLM_URL | http://adrian-llm:8081/v1/chat/completions | Classifier endpoint. POSTed to verbatim - no path appending. |
| ADRIAN_LLM_API_KEY | local-no-auth | Bearer token sent on classifier requests. The bundled llama.cpp ignores it. |
| ADRIAN_LLM_MODEL | local | Model id sent in the request body's model field. |
| ADRIAN_LLM_MODEL_PATH | /models/<gguf> | In-container path the llm service loads at start. Set by bootstrap --gguf <name>. |
| ADRIAN_LLM_CTX_SIZE | 8192 | Llama.cpp context window. Higher = more history, more VRAM. |
| ADRIAN_BACKEND_PORT | 8080 | Host-side port for the Go server (WebSocket + dashboard API). |
| ADRIAN_DASHBOARD_PORT | 3000 | Host-side port for the Next.js dashboard. |
| ADRIAN_PII_REDACT | true | Toggle the SDK-side PII regex sweep (default on). |
| ADRIAN_SLIDING_WINDOW_SIZE | 16 | Per-agent ring buffer of recent classified turns the prompt prepends for cross-step context. |
| ADRIAN_SLIDING_WINDOW_TTL_SECONDS | 86400 | TTL for inactive sliding-window entries before the in-memory cache evicts them. |
| ADRIAN_SESSION_SECRET | (generated) | Dashboard session-cookie secret. The bootstrap mints a fresh one on first run. |

Endpoints

The Go server exposes three externally-visible surfaces on the backend port (default 8080).

WebSocket ingestion

ws://localhost:8080/ws    (or wss:// behind TLS)
This is the endpoint the Adrian SDK connects to. Authentication is via Authorization: Bearer <api_key> on the upgrade. Frames are protobuf-encoded ClientFrame (login / paired_batch / mcp_inventory) and ServerFrame (login_ack / verdict).

Health probes

| Path | Purpose |
| --- | --- |
| /healthz | Liveness only. Returns ok as soon as the Go process is up. Does not say the backend can classify. |
| /readyz | Readiness. Returns {"ok": true, "checks": {"db": "ok", "classifier": "ok"}} once the database is reachable AND the classifier upstream has answered. Returns 503 with the failing subsystem named while the model is still loading or if the upstream is unreachable. |
Compose’s healthcheck for the backend service polls /readyz, so docker compose --profile llm ps reporting (healthy) is the canonical “stack is fully up” signal.

Dashboard API

REST endpoints under /api/ for the Next.js dashboard - authentication, agents, policies, events, verdicts, reviews, webhooks. These are internal to the bundled dashboard at v1 and are not part of a stable public API; see the repo source for current routes.

Architecture

The classifier is the bundled Llama.cpp container running Gemma 4 (E2B or E4B by default). The model is downloaded by the bootstrap step; swap variants via setup set-model --gguf <name>.

Operational tasks

Reset the admin password

docker compose --profile setup run --rm setup reset-password
Generates a new random password, updates the SQLite admin row, and prints the plaintext to stdout. The password is shown once and never persisted to disk; if you lose it, run reset-password again. Pass --password <plaintext> for a non-interactive flow.

Switch the local GGUF

docker compose --profile setup run --rm setup set-model \
    --gguf gemma-4-E2B-it-Q4_K_M.gguf --ctx-size 16384
Updates ADRIAN_LLM_MODEL_PATH in .env and the llm service picks up the new model on next restart. The GGUF must already be present under ./models/.