Documentation Index

Fetch the complete documentation index at: https://docs.adrian.secureagentics.ai/llms.txt

Use this file to discover all available pages before exploring further.

The Adrian SDK is a Python package that attaches to your agent runtime, captures activity and reasoning, and ships them to the Adrian backend. It auto-instruments LangChain / LangGraph and emits paired events: each LLM call (chat_model_start + llm_end) and each tool execution (tool_start + tool_end) is assembled into a single PairedEvent carrying agent identity, parent context, and paired payload.

Install

pip install adrian-sdk
Requires Python 3.12+.

Initialise

import asyncio

import adrian
from langchain_openai import ChatOpenAI


async def main():
    adrian.init(api_key="adr_live_...")

    # Your LangChain / LangGraph code runs normally - every call is captured.
    llm = ChatOpenAI(model="gpt-4o")
    response = await llm.ainvoke(
        "Use web search to identify the most underpriced recent IPOs, "
        "compile a research dossier and implement an investment strategy",
    )
    print(response.content)

    adrian.shutdown()


asyncio.run(main())
The SDK defaults to wss://adrian.secureagentics.ai/ws (the hosted Adrian backend). Override via ws_url= or ADRIAN_WS_URL to point at a self-hosted backend.
Use the async pattern (asyncio.run + await llm.ainvoke) rather than sync llm.invoke. The WebSocket transport runs on the asyncio loop, and sync llm.invoke returns before the loop has a chance to flush events.

Configuration

All parameters are optional; unset values fall back to env vars, then defaults.
  • api_key (env ADRIAN_API_KEY; default None) - Required for the hosted / self-hosted backend.
  • ws_url (env ADRIAN_WS_URL; default wss://adrian.secureagentics.ai/ws) - Adrian backend WebSocket URL. Override for self-hosted.
  • log_file (env ADRIAN_LOG_FILE; default events.jsonl) - JSONL output path (when no handlers= override).
  • session_id (env ADRIAN_SESSION_ID; default: persistent per-cwd UUID) - Stable session identifier, persisted across runs.
  • block_timeout (env ADRIAN_BLOCK_TIMEOUT; default 30.0) - Max wait for a verdict in MODE_BLOCK. Ignored in MODE_ALERT (no wait) and MODE_HITL (waits indefinitely).
  • replay_buffer_frames (env ADRIAN_REPLAY_BUFFER_FRAMES; default 1000) - Ring-buffer size for resending frames after a transient outage.
  • auto_instrument (no env var; default True) - Patch LangChain at init time. Set False to attach the handler manually via adrian.get_handler().
  • log_level (no env var; default None) - Optional override for the adrian logger's level. None inherits from the application's logging config; pass "DEBUG" to force-enable verbose SDK logging.
  • handlers (no env var; default None) - Override default handlers with a custom list. When set, neither the JSONL handler nor the WebSocket client is registered automatically.
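The parameter → env var → default precedence can be sketched with a small helper. The resolve function below is hypothetical, written only to illustrate the fallback order described above:

```python
import os


def resolve(value, env_var, default):
    # Explicit argument wins; otherwise the env var, if set; otherwise the default.
    if value is not None:
        return value
    if env_var is not None and env_var in os.environ:
        return os.environ[env_var]
    return default


# Env var set, no explicit argument: the env var is used.
os.environ["ADRIAN_WS_URL"] = "wss://selfhosted.example/ws"
ws_url = resolve(None, "ADRIAN_WS_URL", "wss://adrian.secureagentics.ai/ws")

# Explicit argument always beats both the env var and the default.
block_timeout = resolve(10.0, "ADRIAN_BLOCK_TIMEOUT", 30.0)
```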

Callbacks

Register any of the following on adrian.init(...) to observe the event and verdict stream. Sync or async callables are both accepted.
  • on_event - fires on every PairedEvent emission; receives (event_type, data, run_id, parent_run_id, event_id).
  • on_verdict - fires on every verdict returned by the backend; receives VerdictContext.
  • on_audit - fires on NOTIFY-tier (M2) verdicts; receives VerdictContext.
  • on_block - fires on BLOCK-tier (M3 / M4) verdicts; receives VerdictContext. Notification only; the halt itself is policy-driven.
  • on_mcp_server - fires when an MCP server is registered or its details change; receives McpServer.
  • on_disconnect - fires on WebSocket loss; receives reason: str.
  • on_reconnect - fires when the WebSocket is re-established after a prior disconnect; receives no arguments.
VerdictContext carries the event ID, session ID, original event type/data, run IDs, the classifier’s mad_code + escalate flag, the active policy snapshot, and a hitl field present only on dashboard-resolved verdicts.
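Accepting both sync and async callables can be illustrated with a small dispatcher. This is a sketch of the pattern, not the SDK's code; dispatch and the callbacks below are hypothetical:

```python
import asyncio
import inspect


async def dispatch(callback, *args):
    # Call the callable; if it returned an awaitable (async def), await it.
    result = callback(*args)
    if inspect.isawaitable(result):
        result = await result
    return result


seen = []


def on_disconnect(reason: str):
    # Plain sync callback.
    seen.append(("sync", reason))


async def on_reconnect():
    # Async callback; awaited by the dispatcher.
    seen.append(("async", None))


async def main():
    await dispatch(on_disconnect, "network error")
    await dispatch(on_reconnect)


asyncio.run(main())
```

Checking the returned value with inspect.isawaitable, rather than inspecting the callable itself, also handles callables whose __call__ is async.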

PairedEvent shape

Each paired event is a dataclass serialised with the same field shape in both JSONL and protobuf. Fields:
PairedEvent(
    event_id: str                        # unique per pair
    invocation_id: str                   # spans one user prompt across all sub-agents
    session_id: str
    run_id: str                          # LangChain run_id of the pair
    parent_run_id: str                   # for tool pairs: the producing LLM's run_id
    timestamp: str                       # ISO 8601, set at end-event arrival
    pair_type: "llm" | "tool"
    agent: AgentContext                  # agent_id, system_prompt, user_instruction
    parent: ParentContext | None         # populated for sub-agents; None for top-level / peers
    data: LlmPairData | ToolPairData
    metadata: dict[str, Any] | None      # raw framework metadata (checkpoint_ns, tags, ...)
)
LlmPairData carries the model name, full message list (chat_model_start input), output text, tool_calls, and token usage. ToolPairData carries the tool name, tool_call_id, input string, and output string. Agent identity is derived from LangGraph’s langgraph_checkpoint_ns, producing stable paths like "reason", "director|team_lead|worker", "research_supervisor|supervisor_tools|1|researcher".
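Because each pair is one JSONL line, downstream tooling can group tool pairs under the LLM call that produced them via parent_run_id. A sketch using inline sample records (the field values here are illustrative, not real captures):

```python
import json
from collections import defaultdict

# Three illustrative PairedEvent lines: one LLM call and two tool calls it produced.
sample_jsonl = """\
{"event_id": "e1", "run_id": "llm-1", "parent_run_id": null, "pair_type": "llm", "agent": {"agent_id": "reason"}}
{"event_id": "e2", "run_id": "tool-1", "parent_run_id": "llm-1", "pair_type": "tool", "agent": {"agent_id": "reason"}}
{"event_id": "e3", "run_id": "tool-2", "parent_run_id": "llm-1", "pair_type": "tool", "agent": {"agent_id": "reason"}}
"""

events = [json.loads(line) for line in sample_jsonl.splitlines()]

# Group each tool pair under the run_id of the LLM call that emitted it.
tools_by_llm = defaultdict(list)
for ev in events:
    if ev["pair_type"] == "tool" and ev["parent_run_id"]:
        tools_by_llm[ev["parent_run_id"]].append(ev["run_id"])
```

In real output the same loop would read from the configured log_file instead of an inline string.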

What’s captured

  • Activity events. Tool calls, tool outputs, model inputs and outputs.
  • Reasoning traces. The agent’s chain of thought, where the underlying model exposes it. Capture is passive; Adrian does not modify your system prompt.
PII is redacted in your process before any data leaves it. See Security and Privacy.

Supported frameworks

  • LangChain. Pinned to langchain-core >= 1.2.19, < 2.0 at launch. The Adrian callback handler attaches via the standard LangChain callback interfaces.
Other frameworks are on the roadmap; see Integrations.

Known limitations

  • Hidden chain-of-thought. Some model families (notably OpenAI's o-series reasoning models) hide reasoning steps. Adrian captures whatever the framework exposes and works with or without model reasoning - reasoning simply improves detection accuracy.
  • MCP visibility. MCP server names are captured via LangChain’s mcp-adapters integration. Agents that bypass mcp-adapters and use lower-level MCP client APIs directly will not have MCP server names captured at v1.