Documentation Index

Fetch the complete documentation index at: https://docs.adrian.secureagentics.ai/llms.txt

Use this file to discover all available pages before exploring further.

Secureagentics integrates with the AI frameworks and LLM providers you already use. Each integration works by instrumenting your agent code to send events — such as prompts, completions, tool calls, and errors — to Secureagentics in real time. Once events are flowing, Secureagentics evaluates them against your security policies, stores them in the audit log, and surfaces anomalies in the dashboard.
An “integration” means adding instrumentation to your agent code. Secureagentics does not proxy your LLM requests. Instead, your code sends event data directly to the Secureagentics API alongside your normal LLM calls. This keeps your agent’s latency characteristics intact while giving Secureagentics full observability.
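The direct-to-API pattern described above can be sketched as follows. This is a minimal illustration, not the documented client: the endpoint URL, event field names, and `Bearer` auth header are assumptions — consult the Secureagentics API reference for the real schema.

```python
import json
import time
import urllib.request
import uuid

# Hypothetical endpoint and event schema -- check the Secureagentics API
# reference for the real URL, field names, and auth header.
API_URL = "https://api.secureagentics.ai/v1/events"  # assumed
API_KEY = "sa_live_..."  # your Secureagentics API key

def build_event(agent_id: str, event_type: str, payload: dict) -> dict:
    """Assemble one event record (prompt, completion, tool call, or error)."""
    return {
        "id": str(uuid.uuid4()),
        "agent_id": agent_id,
        "type": event_type,   # e.g. "prompt" | "completion" | "tool_call" | "error"
        "timestamp": time.time(),
        "payload": payload,
    }

def send_event(event: dict) -> None:
    """POST the event alongside the normal LLM call; the LLM request itself
    is never proxied, so the agent's latency profile is unchanged."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(event).encode(),
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # in production: retry / queue asynchronously

# Example: record the prompt event just before making the LLM call.
event = build_event("support-bot", "prompt", {"text": "Summarize this ticket."})
# send_event(event)  # disabled in this sketch; requires a valid API key
```

In practice you would fire `send_event` asynchronously (or batch events) so instrumentation never blocks the agent's own request path.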

Available integrations

OpenAI

Instrument OpenAI-based agents to send prompt, completion, and tool call events for monitoring and policy enforcement.

LangChain

Add a callback handler to any LangChain agent to automatically forward events to Secureagentics.
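A duck-typed sketch of what such a handler looks like. A real integration would subclass LangChain's `BaseCallbackHandler`; the method names below mirror LangChain's callback hooks, and `forward_event` stands in for the Secureagentics client call (both the class name and the event shapes are illustrative assumptions, not the shipped handler):

```python
class SecureagenticsCallbackHandler:
    """Illustrative handler: LangChain invokes hooks like on_llm_start as the
    chain runs; each hook forwards a structured event to Secureagentics."""

    def __init__(self, agent_id: str, forward_event):
        self.agent_id = agent_id
        self.forward_event = forward_event  # e.g. an HTTP client wrapper

    def on_llm_start(self, serialized, prompts, **kwargs):
        self.forward_event({"agent_id": self.agent_id,
                            "type": "prompt", "prompts": prompts})

    def on_llm_end(self, response, **kwargs):
        self.forward_event({"agent_id": self.agent_id,
                            "type": "completion", "response": str(response)})

    def on_tool_start(self, serialized, input_str, **kwargs):
        self.forward_event({"agent_id": self.agent_id,
                            "type": "tool_call", "input": input_str})

    def on_llm_error(self, error, **kwargs):
        self.forward_event({"agent_id": self.agent_id,
                            "type": "error", "error": str(error)})

# Collected locally here for demonstration; in practice forward_event
# would POST each event to the Secureagentics API.
events = []
handler = SecureagenticsCallbackHandler("langchain-bot", events.append)
handler.on_llm_start({}, ["What is our refund policy?"])
handler.on_tool_start({}, "search_kb(refunds)")
```

In a real chain you would pass the handler via `callbacks=[handler]` so LangChain invokes the hooks automatically.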

Custom agents

Connect any agent — regardless of framework — using the Secureagentics REST API directly.

Integration comparison

| Integration   | Framework type   | Guide         |
|---------------|------------------|---------------|
| OpenAI        | LLM provider SDK | OpenAI        |
| LangChain     | Agent framework  | LangChain     |
| Custom agents | Any / REST API   | Custom agents |

Choosing an integration

  • Use the OpenAI integration if your agent calls the OpenAI API directly using the openai Python or Node.js SDK.
  • Use the LangChain integration if your agent is built with LangChain chains, agents, or tool-use workflows.
  • Use the Custom agents integration for any other framework, in-house agent runtime, or language not covered above.
If your stack uses multiple frameworks, you can register and instrument each component as a separate agent in Secureagentics.
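For the custom-agents path, instrumentation is just plain HTTP. A small buffered client like the sketch below is one way to structure it; the endpoint path, batch envelope, and batching behaviour are assumptions for illustration, not the documented API.

```python
import json
import urllib.request

class SecureagenticsClient:
    """Minimal sketch of a REST-integration client. Endpoint URL, payload
    shape, and batching are assumed, not taken from the API reference."""

    def __init__(self, api_key: str, agent_id: str,
                 url: str = "https://api.secureagentics.ai/v1/events",  # assumed
                 batch_size: int = 10):
        self.api_key = api_key
        self.agent_id = agent_id
        self.url = url
        self.batch_size = batch_size
        self.buffer = []

    def record(self, event_type: str, payload: dict) -> None:
        """Buffer one event; flush once the batch is full."""
        self.buffer.append({"agent_id": self.agent_id,
                            "type": event_type, "payload": payload})
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        """POST all buffered events in one request, then clear the buffer."""
        if not self.buffer:
            return
        req = urllib.request.Request(
            self.url,
            data=json.dumps({"events": self.buffer}).encode(),
            headers={"Authorization": f"Bearer {self.api_key}",
                     "Content-Type": "application/json"},
            method="POST",
        )
        # urllib.request.urlopen(req)  # disabled in this sketch
        self.buffer = []

# One client per registered agent; a multi-framework stack would create
# a separate client (and agent_id) for each component.
client = SecureagenticsClient("sa_live_...", "inhouse-agent", batch_size=2)
client.record("prompt", {"text": "hello"})
client.record("completion", {"text": "hi there"})  # batch full -> flush
```

Keeping one client per agent ID matches the advice above: each component of a multi-framework stack reports as its own agent in the dashboard.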