
AgenticAPI

Agent-native web framework with harness engineering for Python.

AgenticAPI lets you build web applications where endpoints accept natural language intents, dynamically generate code via LLMs, and execute it in a sandboxed, policy-controlled environment. Think of it as FastAPI for agent-powered APIs — with safety guardrails built in.

from agenticapi import AgenticApp, AgentResponse, Intent
from agenticapi.runtime.context import AgentContext

app = AgenticApp(title="Hello Agent")

@app.agent_endpoint(name="greeter", autonomy_level="auto")
async def greeter(intent: Intent, context: AgentContext) -> AgentResponse:
    return AgentResponse(
        result={"message": f"Hello! You said: {intent.raw}"},
        reasoning="Direct greeting response",
    )
Run the development server:

agenticapi dev --app myapp:app
# Swagger UI at http://127.0.0.1:8000/docs
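The typed-intent idea (Intent[T], covered under Highlights below) pairs a raw natural-language string with a validated, structured payload. Here is a minimal, library-independent sketch of that validation step — the function and field names are illustrative, not AgenticAPI's actual API:

```python
from dataclasses import dataclass, fields

@dataclass
class GreetPayload:
    # Structured fields an LLM (or parser) must fill in from the raw intent.
    name: str
    formal: bool = False

def parse_intent(raw: str, payload: dict, schema=GreetPayload):
    """Validate a structured payload against a dataclass schema."""
    allowed = {f.name for f in fields(schema)}
    unknown = set(payload) - allowed
    if unknown:
        raise ValueError(f"unexpected fields: {unknown}")
    typed = schema(**payload)  # raises TypeError if required fields are missing
    return {"raw": raw, "data": typed}

intent = parse_intent("Greet Ada politely", {"name": "Ada", "formal": True})
```

The real framework additionally publishes the schema to OpenAPI; this sketch only shows the parse-and-validate half of the contract.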

Highlights

  • Intent-based endpoints — Natural language in, structured results out
  • Typed intents — Schema-aware Intent[T] parsing with validation and OpenAPI publication
  • Dynamic code generation — LLMs generate Python code on the fly
  • Native function calling — ToolCall + tool_choice + finish_reason across all four backends (Anthropic, OpenAI, Gemini, Mock) with automatic retry
  • Harness engineering — Policies, static analysis, sandbox, audit trails
  • Persistent audit — In-memory for dev or SqliteAuditRecorder for production
  • Cost budgeting — BudgetPolicy and PricingRegistry primitives for LLM spend ceilings
  • Observability — OpenTelemetry spans, Prometheus metrics, W3C trace propagation — all graceful no-ops when unused
  • Dependency injection — FastAPI-style Depends() with generator-based teardown
  • Tool decorator — @tool turns plain functions into registered tools with auto-generated JSON schemas
  • Multi-LLM — Anthropic Claude, OpenAI GPT, Google Gemini, or your own
  • Authentication — API key, Bearer token, Basic auth — per-endpoint or app-wide
  • Custom responses — HTMLResult, PlainTextResult, FileResult, or any Starlette Response
  • HTMX support — HtmxHeaders auto-injection, partial page updates
  • File handling — Upload via multipart, download via FileResult, streaming
  • MCP support — Expose endpoints as MCP tools for Claude Desktop, Cursor, etc.
  • OpenAPI / Swagger / ReDoc — Auto-generated, like FastAPI
  • Background tasks — Post-response processing via AgentTasks
  • Approval workflows — Human-in-the-loop for sensitive operations
  • ASGI-native — Built on Starlette, runs on uvicorn

  • Multi-agent orchestration — AgentMesh with @mesh.role / @mesh.orchestrator, budget propagation, cycle detection

  • Agentic loop — Multi-turn ReAct pattern where the LLM autonomously calls tools and reasons to a final answer, all harness-governed
  • Workflow engine — Declarative multi-step workflows with typed state, conditional branching, parallel execution, checkpoints
  • Agent playground — Self-hosted debugger UI at /_playground for chatting with agents and inspecting execution traces
  • Trace inspector — Self-hosted trace inspection UI at /_trace with search, diff, cost analytics, and compliance export
  • Harness-governed MCP — HarnessMCPServer exposes @tool functions as MCP tools with full policy enforcement, audit, and budget tracking
  • Multi-turn tool conversations — LLMMessage carries tool_call_id and tool_calls for provider-native multi-turn format translation across Anthropic, OpenAI, and Gemini
  • Project scaffolding — agenticapi init generates a ready-to-run project with tools, harness, and eval set
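The @tool decorator highlighted above derives a JSON schema from a function's signature. One way such schema extraction can work, sketched with only the standard library — the decorator name and exact schema shape here are assumptions, not AgenticAPI's guaranteed output:

```python
import inspect
from typing import get_type_hints

_PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool(fn):
    """Register a plain function as a tool with a derived JSON schema."""
    hints = get_type_hints(fn)
    sig = inspect.signature(fn)
    props, required = {}, []
    for name, param in sig.parameters.items():
        props[name] = {"type": _PY_TO_JSON.get(hints.get(name, str), "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value => required parameter
    fn.tool_schema = {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": props, "required": required},
    }
    return fn

@tool
def add(a: int, b: int = 0) -> int:
    """Add two integers."""
    return a + b
```

The decorated function stays directly callable; only the attached schema changes, which is what lets the same function serve both normal code paths and LLM tool calls.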

Current scale: 141 Python modules, ~26,725 lines of code, 1,507 tests (+38 in extensions), 32 examples, 1 extension.

For the full shipped / active / deferred / superseded status matrix see ROADMAP.md at the repo root. For speculative forward tracks (Agent Mesh, Hardened Trust, Self-Improving Flywheel) see VISION.md at the repo root.
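The agentic loop listed under Highlights follows the multi-turn ReAct pattern: on each turn the model either requests a tool call or emits a final answer, and the harness executes the tool and feeds the result back. A minimal mock-backed sketch of that control flow — the mock LLM and message shapes are illustrative assumptions, not the framework's wire format:

```python
def mock_llm(messages):
    """Stand-in model: request a tool call once, then finalize."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "lookup", "args": {"city": "Paris"}}}
    return {"final": f"The answer is: {messages[-1]['content']}"}

TOOLS = {"lookup": lambda city: f"population of {city} is ~2.1M"}

def agentic_loop(question, llm=mock_llm, max_turns=5):
    messages = [{"role": "user", "content": question}]
    for _ in range(max_turns):  # a hard turn cap is the harness's safety ceiling
        reply = llm(messages)
        if "final" in reply:
            return reply["final"]
        call = reply["tool_call"]
        # In the real framework, policy checks and audit recording happen here.
        result = TOOLS[call["name"]](**call["args"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("turn budget exhausted")

answer = agentic_loop("How many people live in Paris?")
```

Swapping the mock for a real backend changes only the llm callable; the loop, turn cap, and tool dispatch stay the same, which is the point of keeping the loop harness-governed.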

Getting started

Core guides

For contributors

Reference