ReAct Agentic Loop
Multi-turn reasoning-action-observation cycle with configurable stop conditions, tool concurrency, and failure strategies.
From ReAct loops to quality gates — the infrastructure layer for safe, observable, evaluable AI agents.
```typescript
import { HarnessBuilder, PromptInjectionGuard } from 'colony-harness'
import { OpenAIProvider } from '@colony-harness/llm-openai'
import { ConsoleTraceExporter } from '@colony-harness/trace-console'
import { calculatorTool } from '@colony-harness/tools-builtin'

// Build a production-ready agent in 30 seconds
const harness = new HarnessBuilder()
  .llm(new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY, model: 'gpt-4o' }))
  .tool(calculatorTool)
  .trace(new ConsoleTraceExporter())
  .guard(new PromptInjectionGuard())
  .build()
```
Models are powerful, but they lack a reliable production runtime. We fill that gap.
Five-layer guard pipeline covering injection detection, PII redaction, token limits, sensitive words, and rate control.
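The guard pipeline's internals aren't shown on this page; as a rough sketch (the types and guard names below are illustrative, not the actual colony-harness API), a layered pipeline can be modeled as a chain of functions where each layer either passes the text through, possibly rewriting it, or blocks with a reason:

```typescript
// Illustrative sketch only -- not the real colony-harness guard API.
type GuardResult = { ok: true; text: string } | { ok: false; reason: string }
type Guard = (text: string) => GuardResult

// Layer 1: block inputs that look like prompt injection.
const injectionGuard: Guard = (text) =>
  /ignore (all )?previous instructions/i.test(text)
    ? { ok: false, reason: 'possible prompt injection' }
    : { ok: true, text }

// Layer 2: redact email-like PII before it reaches the model.
const piiRedactionGuard: Guard = (text) => ({
  ok: true,
  text: text.replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, '[REDACTED_EMAIL]'),
})

// Layer 3: enforce an input size limit (a stand-in for token counting).
const sizeLimitGuard = (maxChars: number): Guard => (text) =>
  text.length > maxChars
    ? { ok: false, reason: `input exceeds ${maxChars} chars` }
    : { ok: true, text }

// Run guards in order; the first failure short-circuits the pipeline.
function runPipeline(guards: Guard[], input: string): GuardResult {
  let text = input
  for (const guard of guards) {
    const result = guard(text)
    if (!result.ok) return result
    text = result.text
  }
  return { ok: true, text }
}
```

The ordering matters: redaction runs before size checks so that limits apply to the sanitized text, and a block at any layer stops the request before it ever reaches the LLM.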
Register tools with Zod schemas — automatic input/output validation and JSON Schema generation for LLM consumption.
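To illustrate the mechanism (in the real library Zod schemas play this role; the tiny hand-rolled schema and helper names below are hypothetical), a schema-driven tool carries one declaration that serves both purposes: validating arguments at call time and deriving a JSON Schema the LLM can consume:

```typescript
// Conceptual sketch of schema-driven tool registration; not the actual
// colony-harness API, and a stand-in for what Zod does in practice.
type Field = { type: 'string' | 'number'; description?: string }
type Schema = Record<string, Field>

interface Tool {
  name: string
  schema: Schema
  run: (args: Record<string, unknown>) => unknown
}

// Derive a JSON Schema object for the LLM from the field definitions.
function toJsonSchema(schema: Schema) {
  return {
    type: 'object',
    properties: schema,
    required: Object.keys(schema),
  }
}

// Validate tool arguments against the schema before execution.
function invoke(tool: Tool, args: Record<string, unknown>): unknown {
  for (const [key, field] of Object.entries(tool.schema)) {
    if (typeof args[key] !== field.type) {
      throw new Error(`invalid argument '${key}': expected ${field.type}`)
    }
  }
  return tool.run(args)
}

const addTool: Tool = {
  name: 'add',
  schema: { a: { type: 'number' }, b: { type: 'number' } },
  run: ({ a, b }) => (a as number) + (b as number),
}
```

The payoff of the single-source-of-truth design is that the validator and the LLM-facing schema can never drift apart, since both are derived from the same declaration.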
Built-in Span / Event / Metrics tracing with four exporters covering terminal, file, OTel, and Langfuse.
Working / Episodic / Semantic memory architecture with automatic context compression when tokens exceed limits.
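As a minimal sketch of the compression idea (the types and the crude 4-characters-per-token estimate below are assumptions, not the library's implementation), once the estimated token count exceeds the budget, the oldest messages are folded into a single summary entry while recent turns are kept verbatim:

```typescript
// Hypothetical sketch of context compression; not the colony-harness API.
interface Message { role: 'user' | 'assistant' | 'summary'; content: string }

// Crude token estimate: roughly 4 characters per token.
const estimateTokens = (msgs: Message[]) =>
  Math.ceil(msgs.reduce((n, m) => n + m.content.length, 0) / 4)

function compress(history: Message[], budget: number, keepRecent = 2): Message[] {
  if (estimateTokens(history) <= budget) return history
  const old = history.slice(0, -keepRecent)
  const recent = history.slice(-keepRecent)
  // A real implementation would summarize with the LLM;
  // here we just truncate each old message as a placeholder.
  const summary: Message = {
    role: 'summary',
    content: old.map((m) => m.content.slice(0, 20)).join(' | '),
  }
  return [summary, ...recent]
}
```

Keeping the most recent turns intact matters because the model's next action usually depends on them, while older context tolerates lossy compression.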
Seven scorers plus Eval Gate — automatic quality enforcement that blocks sub-threshold releases.
Unified interface across OpenAI, Anthropic, Gemini, and OpenAI-compatible endpoints. Swap with one line.
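The one-line swap works because the agent depends only on a provider interface, not a vendor SDK. A sketch of the pattern (the `LLMProvider` interface and stub providers below are illustrative; the real adapters live in packages like `@colony-harness/llm-openai`):

```typescript
// Illustrative sketch of the unified-provider pattern; not the real adapters.
interface LLMProvider {
  readonly name: string
  complete(prompt: string): Promise<string>
}

class StubOpenAIProvider implements LLMProvider {
  readonly name = 'openai'
  async complete(prompt: string) { return `[openai] ${prompt}` }
}

class StubAnthropicProvider implements LLMProvider {
  readonly name = 'anthropic'
  async complete(prompt: string) { return `[anthropic] ${prompt}` }
}

// The agent is written against the interface, so switching vendors means
// changing only the constructor passed in -- one line.
async function runAgent(llm: LLMProvider, task: string): Promise<string> {
  return llm.complete(task)
}
```

The same inversion-of-control idea is what lets OpenAI-compatible endpoints (local servers, proxies) slot in without touching agent code.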
Centralized runtime, pluggable ecosystem — assemble only what you need.
18 packages, organized by function — install only what you need.
Whether you're just starting or going deep — there's a path for you.
Run a minimal example in 5 minutes. Verify the core loop works end-to-end. (~5 min)
Progressive 8-step tutorial from install to production. Covers memory, tracing, guards, and evals. (~75 min)
Join the development. Read architecture docs, ADRs, and understand package boundaries.
Open Source