Fabr Beta Release: A Platform for Distributed AI Agents
December 16, 2025 | 10 min read
Two months ago, we announced the alpha release of Fabr with a focus on our core innovations: persistent memory and adaptive behavior for AI agents. Today, we're excited to announce the beta release—and with it, a deeper look at the platform architecture that makes it all possible.
Fabr has evolved from an experimental concept into a comprehensive framework for building, deploying, and managing distributed AI agents at scale. This post is for architects and technical leaders who want to understand what Fabr is, how it's structured, and how it might fit into their systems.
The Fabr Architecture
At its core, Fabr is a distributed agent platform built on three pillars: a Server that hosts and orchestrates agents, a Client layer that enables applications to interact with agents, and an Agent SDK that developers use to build intelligent agents.
- Server: Hosts agents, manages lifecycle, provides APIs and monitoring
- Client: Connects applications to agents with real-time messaging
- Agent SDK: Framework for building intelligent, stateful agents
Server: The Agent Runtime
The Fabr Server is the runtime environment where agents live. It handles agent creation, message routing, state persistence, and cluster coordination. From an architect's perspective, here's what matters:
REST API for Agent Management
The server exposes a comprehensive REST API for agent lifecycle management:
- Agent Creation: Create agents with specific configurations, including agent type, model settings, and custom parameters
- Chat Interface: Send messages to agents and receive responses through a simple request/response pattern
- Diagnostics: Query agent status, retrieve statistics, and manage agent lifecycle across the cluster
- File Handling: Upload and retrieve files with automatic expiration for agent interactions
- Model Configuration: Centralized management of AI model endpoints and credentials
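To make the shape of this API concrete, here is a minimal Python sketch of a management-client wrapper. The endpoint paths, field names, and auth scheme are illustrative assumptions, not Fabr's published API; the actual routes live in the server's API documentation.

```python
import json
import urllib.request

class FabrAdminClient:
    """Thin wrapper over a hypothetical agent-management REST API.
    Endpoint paths and payload fields here are illustrative only."""

    def __init__(self, base_url, api_key):
        self.base_url = base_url.rstrip("/")
        self.api_key = api_key

    def _request(self, method, path, body=None):
        # Build (but do not send) an authenticated JSON request
        data = json.dumps(body).encode() if body is not None else None
        return urllib.request.Request(
            f"{self.base_url}{path}",
            data=data,
            method=method,
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json",
            },
        )

    def create_agent(self, agent_type, model, **params):
        # Agent creation: type, model settings, and custom parameters
        return self._request("POST", "/api/agents",
                             {"agentType": agent_type, "model": model,
                              "parameters": params})

    def chat(self, agent_id, message):
        # Chat interface: simple request/response pattern
        return self._request("POST", f"/api/agents/{agent_id}/chat",
                             {"message": message})

# The Request objects would be sent with urllib.request.urlopen(...)
req = FabrAdminClient("https://fabr.example.com", "secret").chat("a1", "hello")
```

Returning prepared `Request` objects keeps the sketch free of network I/O; a real client would send them and parse the JSON responses.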
Monitoring Dashboard
Fabr includes a built-in monitoring dashboard that provides:
- Real-time agent status and health monitoring
- Agent spy functionality for debugging agent interactions
- Message flow visualization across the system
- Integration options for embedding into existing applications or running standalone
Distributed Architecture
The server is designed for horizontal scaling. Agents are distributed across cluster nodes with automatic load balancing and failover. State is persisted externally, allowing agents to survive node restarts and migrate between servers as needed.
Client: Connecting Applications to Agents
The Fabr Client layer provides the bridge between your applications and the agent cluster. It's designed for both server-side applications (APIs, background services) and interactive applications (web UIs, dashboards).
Context Management
At the heart of client integration is the concept of a Client Context—an abstraction that manages the connection between a user session and the agent cluster. Key capabilities include:
- Session binding: Each context is bound to a user identity, ensuring agents maintain context per user
- Message handling: Send messages and receive responses, with support for both synchronous and asynchronous patterns
- Event subscription: Receive real-time notifications when agents send messages
- Health monitoring: Query agent health status with configurable detail levels
Design Pattern: Context Factory
The Client Context Factory supports two usage patterns: per-request contexts for stateless API scenarios (create, use, dispose), and cached contexts for interactive applications where the same user maintains a persistent connection across multiple interactions.
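The two factory patterns can be sketched in a few lines. All class and method names below are illustrative stand-ins, not Fabr's client API; the point is the lifecycle difference between the two patterns.

```python
import threading
from contextlib import contextmanager

class ClientContext:
    """Stand-in for a context bound to one user's session."""
    def __init__(self, user_id):
        self.user_id = user_id
        self.open = True
    def send(self, message):
        # A real context would route this to the agent cluster
        return f"[{self.user_id}] echo: {message}"
    def dispose(self):
        self.open = False

class ContextFactory:
    def __init__(self):
        self._cache = {}
        self._lock = threading.Lock()

    @contextmanager
    def per_request(self, user_id):
        """Stateless pattern: create, use, dispose (e.g. one API call)."""
        ctx = ClientContext(user_id)
        try:
            yield ctx
        finally:
            ctx.dispose()

    def cached(self, user_id):
        """Interactive pattern: the same user reuses one live context."""
        with self._lock:
            ctx = self._cache.get(user_id)
            if ctx is None or not ctx.open:
                ctx = self._cache[user_id] = ClientContext(user_id)
            return ctx

factory = ContextFactory()
with factory.per_request("alice") as ctx:
    reply = ctx.send("hello")        # disposed automatically afterwards
persistent = factory.cached("bob")   # same object returned on the next call
```

The lock matters in the cached path: interactive applications often touch the factory from multiple request threads for the same user.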
UI Components
For web applications, Fabr provides ready-to-use UI components:
- Chat Dock: A floating chat interface with markdown rendering, typing indicators, health status display, and multi-dock coordination
- Agent Proxy Components: Embed AI agents directly into UI components, giving agents access to component state and the ability to update the UI in real time
These components handle the complexity of real-time communication, thread safety, and lifecycle management, letting developers focus on their application logic.
Agent SDK: Building Intelligent Agents
The Agent SDK is where developers build the AI agents themselves. It provides a structured framework for creating agents that can reason, remember, communicate, and act.
Agent Lifecycle
Every Fabr agent follows a defined lifecycle:
- Initialize: Configure AI models, set up tools, prepare state
- Receive: Accept messages from users or other agents
- Process: Reason, call tools, consult memory
- Respond: Return results, update state, notify others
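The lifecycle above maps naturally onto a small agent skeleton. This is a sketch of the shape, not the SDK's actual base class; hook names like `on_message` are assumptions.

```python
class Agent:
    """Illustrative lifecycle skeleton; Fabr's actual SDK hooks may differ."""

    def __init__(self, model):
        # Initialize: configure AI models, set up tools, prepare state
        self.model = model
        self.state = {}
        self.tools = {}

    def on_message(self, sender, text):
        # Receive: accept a message from a user or another agent
        result = self.process(text)
        # Respond: return results and update state
        self.state["last_sender"] = sender
        return result

    def process(self, text):
        # Process: reason, call tools, consult memory (stubbed here)
        return f"processed: {text}"

agent = Agent(model="gpt-4o")
reply = agent.on_message("user", "hello")
```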
Tool Integration (Function Calling)
Agents can be equipped with tools—functions that the AI can invoke to take actions or retrieve information. The SDK provides a clean pattern for defining tools with typed parameters and descriptions that the AI model uses to understand when and how to call them.
Common tool patterns include:
- Database queries and updates
- External API integrations
- File processing and generation
- Business logic execution
- UI state updates (for client-side agents)
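A common way to express "typed parameters and descriptions the model can read" is a decorator that derives a parameter schema from type hints. The decorator below is a sketch of that pattern under assumed names (`tool`, `TOOLS`), not Fabr's actual registration API.

```python
import inspect

TOOLS = {}

def tool(description):
    """Register a function as an agent tool, deriving a parameter
    schema from its type hints (pattern sketch, not Fabr's API)."""
    def register(fn):
        sig = inspect.signature(fn)
        TOOLS[fn.__name__] = {
            "description": description,
            "parameters": {name: p.annotation.__name__
                           for name, p in sig.parameters.items()},
            "callable": fn,
        }
        return fn
    return register

@tool("Look up a customer's open orders by customer id")
def open_orders(customer_id: str) -> list:
    # Database query tool (stubbed)
    return [{"customer": customer_id, "order": "A-100"}]

# The model sees name, description, and parameter types; the runtime
# dispatches the model's tool calls through the registry:
result = TOOLS["open_orders"]["callable"](customer_id="c42")
```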
Memory and Context Providers
Building on our alpha release, the SDK includes a powerful context provider system. Context providers inject information before each agent invocation and extract learnings afterward. This enables:
- Dynamic instructions: Modify the agent's behavior based on accumulated knowledge
- Information extraction: Automatically capture and store relevant details from conversations
- Progressive learning: Agents that get smarter through interaction
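The inject-before / extract-after cycle can be sketched as a small provider class. The interface names (`before_invoke`, `after_invoke`) and the trivial extraction heuristic are assumptions for illustration; a real provider would use the model itself to decide what to remember.

```python
class MemoryProvider:
    """Context-provider sketch: inject accumulated facts before each
    invocation, extract new ones afterward (names are illustrative)."""

    def __init__(self):
        self.facts = []

    def before_invoke(self, instructions):
        # Dynamic instructions: prepend what the agent has learned so far
        if not self.facts:
            return instructions
        return instructions + "\nKnown facts: " + "; ".join(self.facts)

    def after_invoke(self, user_message):
        # Information extraction: capture relevant details
        # (a deliberately trivial heuristic for the sketch)
        if user_message.startswith("remember:"):
            self.facts.append(user_message.removeprefix("remember:").strip())

provider = MemoryProvider()
provider.after_invoke("remember: the customer prefers email")
prompt = provider.before_invoke("You are a support agent.")
```

Each invocation therefore sees a prompt enriched by everything earlier invocations extracted, which is the mechanism behind progressive learning.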
Agent-to-Agent Communication
Fabr agents can communicate with each other, enabling sophisticated multi-agent architectures:
- Direct messaging: One agent sends a request to another and waits for a response
- Event broadcasting: Fire-and-forget notifications to other agents
- Orchestration patterns: Coordinator agents that delegate to specialized agents
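The first two patterns differ only in whether the sender waits for a reply. An in-memory sketch makes the contrast explicit; in the real cluster, routing crosses nodes with persistence and failover, and the names below are illustrative.

```python
class AgentBus:
    """In-memory sketch of the two messaging patterns."""

    def __init__(self):
        self.agents = {}  # name -> handler(sender, message) -> reply

    def register(self, name, handler):
        self.agents[name] = handler

    def ask(self, sender, target, message):
        # Direct messaging: send a request and wait for the response
        return self.agents[target](sender, message)

    def broadcast(self, sender, message):
        # Event broadcasting: fire-and-forget to every other agent
        for name, handler in self.agents.items():
            if name != sender:
                handler(sender, message)

bus = AgentBus()
log = []
bus.register("research", lambda s, m: f"research result for '{m}'")
bus.register("audit", lambda s, m: log.append((s, m)))
answer = bus.ask("orchestrator", "research", "Q3 revenue")
bus.broadcast("orchestrator", "task started")
```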
Architecture Pattern: Agent Teams
Complex tasks can be handled by teams of specialized agents. An orchestrator agent receives user requests, breaks them into subtasks, delegates to specialist agents (research, analysis, writing, etc.), and synthesizes the results. Each agent maintains its own memory and expertise.
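The team pattern reduces to decompose, delegate, synthesize. In the sketch below the task split is hard-coded and the specialists are plain callables; in a real orchestrator the model would plan the decomposition and each specialist would be an agent with its own memory.

```python
def orchestrate(request, specialists):
    """Coordinator sketch: split a request into subtasks, delegate
    to specialist agents, synthesize the results."""
    subtasks = {
        "research": f"gather sources on: {request}",
        "analysis": f"analyze findings for: {request}",
        "writing": f"draft a summary of: {request}",
    }
    results = {role: specialists[role](task)
               for role, task in subtasks.items()}
    return " | ".join(f"{role}: {out}" for role, out in results.items())

# Stub specialists standing in for real agents
team = {role: (lambda task, r=role: f"{r} done")
        for role in ("research", "analysis", "writing")}
report = orchestrate("market trends", team)
```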
Timers and Scheduled Tasks
Agents aren't limited to responding to messages. They can schedule work:
- Timers: Short-lived, in-memory scheduling for frequent tasks (heartbeats, polling)
- Reminders: Persistent scheduling that survives restarts, for longer-term work (daily reports, periodic checks)
This enables agents that proactively perform work, monitor conditions, and reach out when needed—not just respond when asked.
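The distinction is durability: timers live in process memory and vanish on restart, while reminders are written to a store and reloaded. A stdlib sketch of the two flavors, with a plain list standing in for the external store:

```python
import threading
import time

class Scheduler:
    """Sketch of the two scheduling flavors (names are illustrative)."""

    def __init__(self, store):
        self.store = store  # stands in for external persistence

    def timer(self, delay_s, fn):
        # Timer: short-lived, in-memory (heartbeats, polling)
        t = threading.Timer(delay_s, fn)
        t.start()
        return t

    def reminder(self, due_at, name):
        # Reminder: persisted, so it survives restarts
        self.store.append({"name": name, "due_at": due_at})

    def due_reminders(self, now):
        # On restart, reload the store and fire anything past due
        return [r for r in self.store if r["due_at"] <= now]

store = []
sched = Scheduler(store)
fired = []
t = sched.timer(0.01, lambda: fired.append("beat"))
sched.reminder(due_at=time.time() - 1, name="daily-report")
t.join()  # wait for the timer thread in this demo
```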
Health and Observability
Production systems need visibility. Fabr agents report health status with customizable metrics, and the framework includes built-in telemetry for distributed tracing and performance monitoring.
Enterprise Features
The beta release includes capabilities designed for enterprise deployment:
Multi-Tenant Support
Agents are isolated by user identity. A single Fabr cluster can serve multiple users or tenants with complete separation of agent state and conversations.
Horizontal Scaling
Add server nodes to handle more agents and higher message throughput. The distributed architecture automatically balances load across the cluster.
Model Flexibility
Configure multiple AI model providers (Azure OpenAI, OpenAI, etc.) with centralized credential management. Switch models per agent or globally.
Reporting Infrastructure
Built-in support for specification-based queries, enabling AI agents to dynamically filter and retrieve data from your databases using declarative patterns.
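"Declarative patterns" here means the agent emits a description of a filter rather than raw SQL. A sketch of that idea, with an assumed `Spec` shape and an in-memory evaluator; a production reporting layer would compile the same specifications to SQL or ORM filters.

```python
from dataclasses import dataclass

@dataclass
class Spec:
    """Declarative filter specification (illustrative shape)."""
    field: str
    op: str       # one of: eq, gt, lt, contains
    value: object

    def matches(self, row):
        v = row.get(self.field)
        return {
            "eq": lambda: v == self.value,
            "gt": lambda: v is not None and v > self.value,
            "lt": lambda: v is not None and v < self.value,
            "contains": lambda: self.value in (v or ""),
        }[self.op]()

def query(rows, specs):
    # In-memory evaluation for the sketch; production would push
    # these specs down to the database instead
    return [r for r in rows if all(s.matches(r) for s in specs)]

orders = [{"region": "south", "total": 900},
          {"region": "south", "total": 1500}]
big_south = query(orders, [Spec("region", "eq", "south"),
                           Spec("total", "gt", 1000)])
```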
What Beta Means
The beta release represents a feature-complete platform ready for serious development and pilot deployments. The APIs are stabilizing, documentation is comprehensive, and we're actively working with early adopters on real-world implementations.
What's included in beta:
- Full server runtime with REST APIs and dashboard
- Client libraries with context management and UI components
- Agent SDK with tools, memory, and inter-agent communication
- Configuration management for models and credentials
- Documentation and integration guides
What's coming next:
- Additional pre-built agent templates
- Enhanced memory and retrieval capabilities
- Expanded integration connectors
- Performance optimizations for high-scale deployments
Use Cases We're Seeing
Early adopters are building with Fabr across a range of scenarios:
- Customer Support Agents: Persistent agents that remember customer history, access knowledge bases, and escalate when needed
- Document Processing: Agents that ingest, analyze, and answer questions about document collections
- Workflow Automation: Agents that monitor conditions, coordinate with external systems, and execute multi-step processes
- Internal Assistants: Company-specific agents that understand internal systems and policies and can take actions on behalf of employees
- Data Analysis: Agents that query databases, generate reports, and explain findings in natural language
Get Started with Fabr
Whether you're an architect evaluating agent platforms, a developer ready to build, or a technical leader exploring how AI agents could transform your operations—we'd love to talk.
The Fabr beta is available now for qualified organizations. We're looking for partners who want to push the boundaries of what's possible with long-lived, distributed AI agents.
About Vulcan365 AI: We're building the next generation of AI infrastructure for business. Fabr is our platform for distributed, long-lived AI agents—systems that persist, remember, collaborate, and evolve. Based in Birmingham, Alabama, we're focused on making advanced AI accessible and practical for real-world enterprise applications.