# Introduction

*A modular runtime for building AI-native applications*
Kairo is a production-grade, open-source TypeScript framework for building intelligent agentic systems. It provides a clean, headless AI orchestration core that teams can compose with any LLM provider, memory strategy, and tooling — without coupling product logic to any single vendor or technology.
## Why Kairo?
Most agent frameworks couple everything together: providers, memory, tools, and UI. Kairo's design goal is to keep the pipeline small and push everything else to typed, isolated extensions.
| Principle | What it means for you |
|---|---|
| Protocol-First | Extensions communicate over a versioned JSON-RPC protocol. Components built today are compatible with the core of tomorrow. |
| Provider Freedom | Switch between OpenAI, Azure, or any OpenAI-compatible model without rewriting application or extension logic. |
| Isolation by Default | Extensions run as separate processes (stdio), browser Workers, or remote microservices — safer, easier to restart, no shared-state pain. |
| Streaming-First | Every pipeline output is a ReadableStream. Extensions can intercept and transform token streams in real-time. |
| Composable Pipelines | Enable or disable enrichment, memory retrieval, tool injection, pruning, and refinement independently per run. |
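As an illustrative sketch of the "composable pipelines" idea (the option names below are invented for the example, not Kairo's actual API), per-run composition amounts to merging defaults with per-run overrides:

```typescript
// Hypothetical stage toggles; names are illustrative, not Kairo's real options.
type StageToggles = {
  enrichInput: boolean;
  retrieveMemory: boolean;
  pruneContext: boolean;
  injectTools: boolean;
  refineTurn: boolean;
};

const defaults: StageToggles = {
  enrichInput: true,
  retrieveMemory: true,
  pruneContext: true,
  injectTools: true,
  refineTurn: true,
};

// Merge per-run overrides onto the defaults
function togglesForRun(overrides: Partial<StageToggles>): StageToggles {
  return { ...defaults, ...overrides };
}

// Example: disable memory retrieval and refinement for a single run
const runToggles = togglesForRun({ retrieveMemory: false, refineTurn: false });
```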
## Architecture
Kairo separates concerns into three distinct layers:
- **Providers** (`@kairo/provider-*`): Talk to model APIs and expose `LanguageModel`s with declared capabilities.
- **Core pipeline** (`@kairo/core`): Orchestrates a single "run" — messages → model → streaming output → tool calls → final turn.
- **Extensions** (`extensions/*`): Plug into the pipeline at well-defined stages via the typed extension protocol.
The pipeline lifecycle, in order: `enrichInput` → `retrieveMemory` → `pruneContext` → `injectTools` → `configModel` → generation → `streamTransform` → `executeTool` → `refineTurn` → `persistContext`
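Conceptually, the lifecycle is a fold of async stages over a shared run context. The sketch below is illustrative only (Kairo's real pipeline streams tokens and handles tool calls); it just shows stages composing in the documented order:

```typescript
// Minimal stage runner illustrating the lifecycle order, not Kairo's internals.
type Ctx = { messages: string[]; trace: string[] };
type Stage = (ctx: Ctx) => Promise<Ctx>;

// Each stage here just records its name so we can observe the order
const stage = (name: string): Stage => async (ctx) => ({
  ...ctx,
  trace: [...ctx.trace, name],
});

const lifecycle: Stage[] = [
  stage("enrichInput"),
  stage("retrieveMemory"),
  stage("pruneContext"),
  stage("injectTools"),
  stage("configModel"),
  stage("generate"),
  stage("streamTransform"),
  stage("executeTool"),
  stage("refineTurn"),
  stage("persistContext"),
];

// Run the stages sequentially over the context
async function run(ctx: Ctx): Promise<Ctx> {
  for (const s of lifecycle) ctx = await s(ctx);
  return ctx;
}
```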
```
Your App
├─ chooses Provider + Model
├─ builds LmPipeline(messages, options)
├─ connects ExtensionClient(s) via a Transport
└─ streams pipeline output

LmPipeline (@kairo/core)
├─ enrichInput      (extensions)
├─ retrieveMemory   (extensions)
├─ pruneContext     (extensions)
├─ injectTools      (extensions)
├─ configModel      (extensions)
├─ generate         (provider → ReadableStream)
├─ streamTransform  (extensions)
├─ executeTool      (extension-driven or app-driven)
├─ refineTurn       (extensions)
└─ persistContext   (extensions)
```

## Extension Protocol
Extensions communicate over a small typed JSON message protocol:
- `request`/`response` — with IDs, timeouts, and schema validation (Zod)
- `notification` — fire-and-forget events
- `metadata` — shared envelope for routing and tracing
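For illustration, the three message kinds might be modeled like this in TypeScript. The field names are assumptions for the sketch; the real envelope is defined and Zod-validated in `@kairo/core`:

```typescript
// Illustrative message shapes; field names are assumptions, not the real protocol.
type Metadata = { traceId?: string; extension?: string };

type Request = {
  kind: "request";
  id: string;          // correlates a response back to this request
  method: string;
  params: unknown;
  metadata?: Metadata;
};

type Response = {
  kind: "response";
  id: string;          // matches the originating request's id
  result?: unknown;
  error?: { code: string; message: string };
  metadata?: Metadata;
};

type Notification = {
  kind: "notification"; // fire-and-forget: no id, no response expected
  method: string;
  params: unknown;
  metadata?: Metadata;
};

type Message = Request | Response | Notification;

// Route a decoded message by its kind
function describe(msg: Message): string {
  switch (msg.kind) {
    case "request": return `request ${msg.id}: ${msg.method}`;
    case "response": return `response ${msg.id}`;
    case "notification": return `notification: ${msg.method}`;
  }
}
```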
Transports wire both sides together:
- `@kairo/extension-transport-stdio` — spawn an extension process and talk over stdio (great for Node apps)
- `@kairo/extension-transport-http` — connect to remote extensions via HTTP + SSE
- `@kairo/extension-transport-worker` — connect to a browser `Worker` or Node.js `worker_threads`
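A transport's core job is to turn a raw byte stream into discrete protocol messages. As a sketch, newline-delimited JSON is one common framing for stdio transports; whether `@kairo/extension-transport-stdio` uses exactly this framing is an assumption:

```typescript
// Sketch of newline-delimited JSON framing over a byte stream.
// Handles messages split across chunks by buffering until a newline arrives.
function createDecoder(onMessage: (msg: unknown) => void) {
  let buffer = "";
  return (chunk: string) => {
    buffer += chunk;
    let idx: number;
    // Emit every complete line; keep the trailing partial line in the buffer
    while ((idx = buffer.indexOf("\n")) !== -1) {
      const line = buffer.slice(0, idx);
      buffer = buffer.slice(idx + 1);
      if (line.trim()) onMessage(JSON.parse(line));
    }
  };
}
```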
## Monorepo Layout
Kairo is a pnpm workspace organized by layer:
| Directory | Purpose |
|---|---|
| `packages/core` | Pipeline orchestration, provider interfaces, and extension protocol |
| `packages/provider-openai` | OpenAI provider implementation |
| `packages/provider-azure` | Azure OpenAI provider (built on the OpenAI surface) |
| `packages/extension-transport-stdio` | Stdio transport for out-of-process extensions |
| `packages/extension-transport-http` | HTTP/SSE transport for remote extensions |
| `packages/extension-transport-worker` | Web Worker transport for browser environments |
| `extensions/mcp` | Model Context Protocol integration — manages MCP servers and injects their tools |
| `extensions/rag-local` | Local embedding & retrieval for private RAG workflows |
| `frameworks/gui` | State management abstractions (e.g., React hooks for streaming chat state) |
| `frameworks/gui-components` | Reusable AI chat UI components |
| `applets/chat` | Reference chat application (under construction) |
## Getting Started

### Prerequisites
- Node.js (recent LTS recommended)
- pnpm (`pnpm@10.14.0`, see `package.json`)
### Install, Build, Test
```sh
pnpm install
pnpm build
pnpm test
pnpm lint
```

Useful workspace patterns:

```sh
# Run a script in a single package
pnpm -C packages/core test

# Run a script across all packages
pnpm -r build
```

The repo uses Biome for formatting/linting (`pnpm format`, `pnpm lint`) and Lefthook for Git hooks (installed on postinstall).
## Examples

### MCP Extension + OpenAI Provider (Node)
Connect a Kairo extension over stdio, inject MCP tools into the pipeline, and run a model that supports function calling:
```sh
cd extensions/mcp
echo "OPENAI_API_KEY=sk-..." > .env
pnpm tsx watch --env-file=./.env examples/index.ts
```

To inject tools, configure at least one MCP server in `kairo-mcp.config.json` in the working directory:
```json
{
  "servers": {
    "example": {
      "label": "Example MCP server",
      "active": true,
      "transport": "stdio",
      "command": "node",
      "arguments": ["path/to/your-mcp-server.js"],
      "env": { "PATH": "/usr/bin:/bin" }
    }
  }
}
```

### Local RAG Extractor (`rag-local`)
```sh
cd extensions/rag-local
pnpm tsx examples/index.ts
```

On first run this may download model files depending on cache state.
## Building Your Own Extension
1. Implement an `ExtensionServer`
2. Declare pipeline capabilities via `LMPipelineExtension.init(...)`
3. Connect from your app using `ExtensionClient` over a transport
```ts
import { ExtensionServer, LMPipelineExtension } from "@kairo/core";

class MyExtensionServer extends ExtensionServer {
  constructor() {
    super({ name: "my-extension", version: "0.0.1" });
    LMPipelineExtension.init(this, { supportsInputEnrichment: true }).build(() => ({
      enrichInput: async ({ params }) => ({
        result: { messages: params.enrichedMessages },
      }),
    }));
  }
}
```

See Extensions for the full builder pattern and long-running tool execution guide.
## Roadmap
- **Tool execution E2E** — Complete async tool execution with progress polling and cancellation across all transports
- **Extension starter template** — A `create-kairo-extension` scaffold and end-to-end worked examples
- **API stabilization** — Audit and lock public package APIs; clarify the boundary between `@kairo/core` and the extension protocol
## Contributing
Contributions are welcome. The highest-impact areas are:
- **New extensions** — RAG strategies, new tool integrations, or specialized agent capabilities
- **Provider implementations** — OpenAI-compatible or fully custom `LanguageModel` integrations
- **Examples and documentation** — Real-world usage patterns and tutorials
- **Protocol hardening** — Observability, tracing, richer error taxonomies
- **Design discussions** — RFC-style proposals for new pipeline stages or extension capabilities
Workflow:
```sh
git clone https://github.com/orano-labs/kairo
cd kairo
pnpm install && pnpm build && pnpm test
```