Kairo

Applications & Frameworks (applets/ & frameworks/)

Kairo is fundamentally designed as a Headless AI Engine. The @kairo/core package provides the raw reasoning pipeline, but it is intentionally decoupled from any specific visual presentation.

To bridge the gap between the headless core and final user-facing applications, the Kairo repository provides reusable, abstracted Frameworks and reference implementations called Applets.

The UI Architecture (Onion Model)

When building an end-to-end application with Kairo, you compose layers from the inside out: the headless @kairo/core reasoning engine sits at the center, the reusable Frameworks wrap it with platform-specific state management and UI, and an Applet or your own custom application forms the outermost layer.

Frameworks Directory

Frameworks provide platform-specific state management, browser adapters, and UI components. They save you from reinventing the wheel when turning a raw ReadableStream into a reactive chat interface.

  • frameworks/gui: General GUI state abstractions (e.g., React hooks that manage the complex state of a streaming conversation, tool invocation statuses, and message history arrays)
  • frameworks/gui-components: A library of "dumb", reusable visual UI components specifically designed for AI chat interfaces (e.g., Markdown-rendered message bubbles, tool execution progress cards)
  • frameworks/web: Abstractions optimized for building web-based AI applications, such as browser-compatible Transports
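
To illustrate the "smart state / dumb component" split these packages encourage, here is a minimal, framework-free sketch of a streaming-chat state reducer. All names here are illustrative stand-ins, not the actual frameworks/gui API:

```typescript
// Illustrative only: a tiny reducer modeling the streaming-conversation state
// that hooks in frameworks/gui would manage. Names are hypothetical.
type ChatMessage = { role: "user" | "assistant"; content: string };

type ChatState = {
  messages: ChatMessage[];
  streaming: boolean;
};

type ChatEvent =
  | { type: "user-message"; content: string }
  | { type: "stream-start" }
  | { type: "text-delta"; delta: string }
  | { type: "stream-end" };

function chatReducer(state: ChatState, event: ChatEvent): ChatState {
  switch (event.type) {
    case "user-message":
      return {
        ...state,
        messages: [...state.messages, { role: "user", content: event.content }],
      };
    case "stream-start":
      // Open an empty assistant message that incoming deltas will append to.
      return {
        ...state,
        streaming: true,
        messages: [...state.messages, { role: "assistant", content: "" }],
      };
    case "text-delta": {
      // Immutably append the delta to the last (assistant) message.
      const messages = state.messages.slice();
      const last = messages[messages.length - 1];
      messages[messages.length - 1] = {
        ...last,
        content: last.content + event.delta,
      };
      return { ...state, messages };
    }
    case "stream-end":
      return { ...state, streaming: false };
  }
}
```

A React hook would wrap a reducer like this with useReducer and feed it from the pipeline stream; a "dumb" component from frameworks/gui-components would then simply render state.messages.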

Building Custom Applications

Because Kairo is headless, you can use the exact same @kairo/core logic to drive a Node CLI program, a React web frontend, or a mobile app.

Key Steps to Build Your App

  1. Initialize the Provider: Instantiate an LLM provider (e.g., OpenAIProvider, AzureProvider)
  2. Select the Model: Extract the desired LanguageModel instance
  3. Setup the Pipeline: Create an LmPipeline holding the model and any loaded extensions
  4. Consume the Stream: Call pipeline.generate() and consume the text-delta stream

Example: A Minimal CLI / TUI App

You don't need React to use Kairo. You can build a raw, terminal-based assistant using just @kairo/core and a provider in roughly 30 lines of code:

import { LmPipeline } from "@kairo/core";
import { OpenAIProvider } from "@kairo/provider-openai";

async function main() {
  // 1. & 2. Provider and Model
  const provider = new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! });
  const model = provider.models.find((m) => m.id === "gpt-4o");

  if (!model) throw new Error("Model not configured");

  // 3. Mount Pipeline
  const pipeline = new LmPipeline({ model });

  console.log("Assistant: Thinking...");

  // 4. Generate & Consume Stream
  const stream = await pipeline.generate({
    messages: [
      {
        role: "user",
        content: "Explain quantum physics in exactly 1 sentence.",
      },
    ],
  });

  const reader = stream.getReader();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    // Incrementally print to stdout (CLI)
    if (value.type === "text-delta") {
      process.stdout.write(value.textDelta);
    }
  }
}

main().catch(console.error);
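
The reader loop above is plain Web Streams API usage. Here is the same consumption pattern in isolation, run against a hand-built ReadableStream of delta chunks, so it has no Kairo imports and runs anywhere Web Streams are available (e.g. Node 18+). The chunk shape mirrors the one used in the CLI example, but is an assumption about the stream contract, not a guaranteed @kairo/core type:

```typescript
// A stand-in for the chunks pipeline.generate() streams: each chunk carries a
// type tag plus a text fragment, mirroring the shape used in the CLI example.
type StreamChunk = { type: "text-delta"; textDelta: string };

// Build a small ReadableStream by hand so the consumption loop is testable.
function makeDeltaStream(parts: string[]): ReadableStream<StreamChunk> {
  return new ReadableStream<StreamChunk>({
    start(controller) {
      for (const p of parts) {
        controller.enqueue({ type: "text-delta", textDelta: p });
      }
      controller.close();
    },
  });
}

// The same reader loop as the CLI example, collecting text instead of printing.
async function collectText(
  stream: ReadableStream<StreamChunk>,
): Promise<string> {
  const reader = stream.getReader();
  let out = "";
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      if (value.type === "text-delta") out += value.textDelta;
    }
  } finally {
    reader.releaseLock(); // always release the lock, even if the loop throws
  }
  return out;
}
```

The try/finally around releaseLock matters in real apps: a locked stream cannot be read by anyone else if your consumer throws mid-stream.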

Reference: The Chat Applet (applets/chat)

The applets/ directory houses final, composed reference applications.

Currently, applets/chat is designated as the primary reference implementation for a full-featured web-based chat interface. Once fully fleshed out, it will serve as the gold standard for:

  • Managing complex, multi-turn conversation state using frameworks/gui
  • Providing polished visual feedback for tool execution using frameworks/gui-components
  • Connecting multiple extensions (like RAG and MCP tools) into a cohesive user experience
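
The composition the applet aims for can be sketched abstractly. Everything below (the Extension interface, its hooks, the tiny pipeline) is a hypothetical stand-in rather than the @kairo/core API; the point is only the pattern of mounting several extensions onto one pipeline:

```typescript
// Hypothetical shapes, for illustration only: an extension that can rewrite
// the prompt before generation (e.g. RAG injecting retrieved context) and
// observe the generated output (e.g. logging or tool accounting).
interface Extension {
  name: string;
  onPrompt?(prompt: string): string;
  onResult?(text: string): void;
}

class TinyPipeline {
  constructor(private extensions: Extension[]) {}

  generate(prompt: string): string {
    // Let each extension transform the prompt in mount order.
    for (const ext of this.extensions) {
      if (ext.onPrompt) prompt = ext.onPrompt(prompt);
    }
    // Stand-in for the model call: echo the final prompt.
    const result = `echo: ${prompt}`;
    // Give every extension a chance to observe the result.
    for (const ext of this.extensions) ext.onResult?.(result);
    return result;
  }
}

// A RAG-like extension prepending retrieved context (hard-coded here).
const ragLike: Extension = {
  name: "rag",
  onPrompt: (p) => `[context: docs about streams]\n${p}`,
};

// A logging extension observing every result.
const seen: string[] = [];
const logger: Extension = { name: "logger", onResult: (t) => seen.push(t) };

const pipeline = new TinyPipeline([ragLike, logger]);
```

The applet's job is exactly this kind of wiring at a larger scale: mounting RAG, MCP tools, and UI feedback onto one pipeline so they behave as a single cohesive experience.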
