The Adrian SDK is a Python package that attaches to your agent runtime, captures activity and reasoning, and ships them to the Adrian backend. It auto-instruments LangChain / LangGraph and emits paired events: each LLM call (chat_model_start + llm_end) and each tool execution (tool_start + tool_end) is assembled into a single PairedEvent carrying agent identity, parent context, and paired payload.
Documentation Index
Fetch the complete documentation index at: https://docs.adrian.secureagentics.ai/llms.txt
Use this file to discover all available pages before exploring further.
Install
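A typical install, assuming the package is published to PyPI as `adrian-sdk` (the distribution name is not stated on this page; adjust if yours differs):

```shell
# Assumed distribution name - check your package index for the actual one.
pip install adrian-sdk
```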
Initialise
By default, events are sent to wss://adrian.secureagentics.ai/ws (the hosted Adrian backend). Override via ws_url= or ADRIAN_WS_URL to point at a self-hosted backend.
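A minimal initialisation sketch, assuming the package imports as `adrian` and using `langchain_openai.ChatOpenAI` as a stand-in chat model (both assumptions; any LangChain chat model works):

```python
import asyncio

import adrian  # assumed import name
from langchain_openai import ChatOpenAI  # illustrative model choice


async def main():
    # api_key / ws_url fall back to ADRIAN_API_KEY / ADRIAN_WS_URL if omitted.
    adrian.init(api_key="your-adrian-api-key")

    llm = ChatOpenAI(model="gpt-4o-mini")
    # Use ainvoke so the WebSocket transport can flush events on the loop.
    result = await llm.ainvoke("Hello")
    print(result.content)


asyncio.run(main())
```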
Use the async pattern (asyncio.run + await llm.ainvoke) rather than sync llm.invoke. The WebSocket transport runs on the asyncio loop, and sync llm.invoke returns before the loop has a chance to flush events.
Configuration
All parameters are optional; unset values fall back to env vars, then defaults.
| Parameter | Env var | Default | Purpose |
|---|---|---|---|
| api_key | ADRIAN_API_KEY | None | Required for the hosted / self-hosted backend. |
| ws_url | ADRIAN_WS_URL | wss://adrian.secureagentics.ai/ws | Adrian backend WebSocket URL. Override for self-hosted. |
| log_file | ADRIAN_LOG_FILE | events.jsonl | JSONL output path (when no handlers= override). |
| session_id | ADRIAN_SESSION_ID | persistent per-cwd UUID | Stable session identifier. Persisted across runs. |
| block_timeout | ADRIAN_BLOCK_TIMEOUT | 30.0 | Max wait for a verdict in MODE_BLOCK. Ignored in MODE_ALERT (no wait) and MODE_HITL (waits indefinitely). |
| replay_buffer_frames | ADRIAN_REPLAY_BUFFER_FRAMES | 1000 | Ring-buffer size for resending frames after a transient outage. |
| auto_instrument | - | True | Patch LangChain at init time. Set False to attach the handler manually via adrian.get_handler(). |
| log_level | - | None | Optional override for the adrian logger's level. None inherits from the application's logging config; pass "DEBUG" to force-enable verbose SDK logging. |
| handlers | - | None | Override default handlers with a custom list. When set, neither the JSONL handler nor the WebSocket client is registered automatically. |
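The fallback order (explicit parameter, then env var, then default) can be illustrated with a small helper; this is a sketch of the documented precedence, not the SDK's actual code:

```python
import os


def resolve(param, env_var, default):
    """Illustrative precedence: explicit parameter wins, then the
    environment variable, then the built-in default."""
    if param is not None:
        return param
    value = os.environ.get(env_var)
    return value if value is not None else default


# e.g. resolve(None, "ADRIAN_WS_URL", "wss://adrian.secureagentics.ai/ws")
```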
Callbacks
Register any of the following on adrian.init(...) to observe the event and verdict stream. Both sync and async callables are accepted.
| Callback | Fires on | Receives |
|---|---|---|
| on_event | Every PairedEvent emission | (event_type, data, run_id, parent_run_id, event_id) |
| on_verdict | Every verdict returned by the backend | VerdictContext |
| on_audit | NOTIFY-tier (M2) verdicts | VerdictContext |
| on_block | BLOCK-tier (M3 / M4) verdicts | VerdictContext - notification only; halt is policy-driven |
| on_mcp_server | An MCP server is registered or its details change | McpServer |
| on_disconnect | WebSocket loss | reason: str |
| on_reconnect | WebSocket re-established after a prior disconnect | - |
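For example, callbacks can be plain functions or coroutines. The signatures below are taken from the table above; the adrian.init call is shown as a comment since it requires the SDK at runtime:

```python
events = []


def on_event(event_type, data, run_id, parent_run_id, event_id):
    # Sync callback: record every PairedEvent emission.
    events.append((event_type, event_id))


async def on_block(ctx):
    # Async callback: ctx is a VerdictContext; notification only,
    # halting the agent is policy-driven.
    print(f"BLOCK verdict for event {ctx.event_id}")


def on_disconnect(reason: str):
    print(f"WebSocket lost: {reason}")


# adrian.init(api_key="...", on_event=on_event, on_block=on_block,
#             on_disconnect=on_disconnect)
```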
VerdictContext carries the event ID, session ID, original event type/data, run IDs, the classifier’s mad_code + escalate flag, the active policy snapshot, and a hitl field present only on dashboard-resolved verdicts.
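The fields listed above can be sketched as a dataclass. This is illustrative only: the field names below follow the prose, but the SDK's actual class definition may differ:

```python
from dataclasses import dataclass, field
from typing import Any, Optional


@dataclass
class VerdictContext:
    """Illustrative shape of a verdict context (assumed, not the SDK's
    actual definition)."""
    event_id: str
    session_id: str
    event_type: str                  # original event type
    data: dict                       # original event data
    run_id: str
    parent_run_id: Optional[str]
    mad_code: str                    # classifier output
    escalate: bool                   # classifier escalate flag
    policy: dict = field(default_factory=dict)  # active policy snapshot
    hitl: Optional[Any] = None       # only on dashboard-resolved verdicts
```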
PairedEvent shape
Each paired event is a dataclass serialised identically to JSONL and protobuf. Fields: LlmPairData carries the model name, full message list (chat_model_start input), output text, tool_calls, and token usage. ToolPairData carries the tool name, tool_call_id, input string, and output string.
Agent identity is derived from LangGraph’s langgraph_checkpoint_ns, producing stable paths like "reason", "director|team_lead|worker", "research_supervisor|supervisor_tools|1|researcher".
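The shapes described above can be sketched as dataclasses. Field names beyond those named in the prose are assumptions, not the SDK's actual definitions:

```python
from dataclasses import dataclass
from typing import Optional, Union


@dataclass
class LlmPairData:
    model: str                 # model name
    messages: list             # chat_model_start input (full message list)
    output_text: str
    tool_calls: list
    token_usage: dict


@dataclass
class ToolPairData:
    tool_name: str
    tool_call_id: str
    input: str
    output: str


@dataclass
class PairedEvent:
    event_type: str            # e.g. "llm" or "tool" (assumed values)
    agent_id: str              # e.g. "director|team_lead|worker"
    run_id: str
    parent_run_id: Optional[str]
    data: Union[LlmPairData, ToolPairData]
```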
What’s captured
- Activity events. Tool calls, tool outputs, model inputs and outputs.
- Reasoning traces. The agent’s chain of thought, where the underlying model exposes it. Capture is passive; Adrian does not modify your system prompt.
Supported frameworks
- LangChain. Pinned to langchain-core >= 1.2.19, < 2.0 at launch. The Adrian callback handler attaches via the standard LangChain callback interfaces.
Known limitations
- Hidden chain-of-thought. Some model families (notably OpenAI's o-series reasoning models) hide reasoning steps. Adrian captures whatever the framework exposes and works with or without model reasoning - reasoning simply improves detection accuracy.
- MCP visibility. MCP server names are captured via LangChain's mcp-adapters integration. Agents that bypass mcp-adapters and use lower-level MCP client APIs directly will not have MCP server names captured at v1.

