A world is the foundational unit of organization in the ecosystem. It provides an isolated container for an agent’s relationships, history, and facts.
The problem: Why AI forgets

Traditional AI models operate effectively only within short, back-and-forth dialogues. Once the context window fills up or the chat ends, they lose the thread. Because they lack long-term memory, every interaction is a fresh start, making it impossible for agents to take on large, multi-day projects the way a human coworker can.

The solution: Linked memory

Instead of hoping the AI finds the right data in a massive pile, Worlds builds a structured map in the form of a graph. By linking related facts together, your agent can always follow the logical path from one piece of information to another, no matter how much data you add.

Goals

| Goal        | Description                          | Constraint                               |
|-------------|--------------------------------------|------------------------------------------|
| Isolation   | Prevent data leakage between agents. | One RDF dataset per world.               |
| Portability | Maintain memory across model swaps.  | Decoupled from the LLM provider.         |
| Fusion      | Hybrid neural/symbolic reasoning.    | Requires both vector search and SPARQL.  |
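The isolation goal can be illustrated with a minimal sketch: each world owns its own fact store, so nothing written into one world is ever visible from another. The `World` class and its methods below are hypothetical illustrations, not the product's actual API.

```python
class World:
    """Hypothetical sketch of a world as an isolated fact container."""

    def __init__(self, name: str):
        self.name = name
        # Each world holds its own (subject, predicate, object) triples.
        self._facts: set[tuple[str, str, str]] = set()

    def add_fact(self, subject: str, predicate: str, obj: str) -> None:
        self._facts.add((subject, predicate, obj))

    def knows(self, subject: str, predicate: str, obj: str) -> bool:
        return (subject, predicate, obj) in self._facts


# Two agents, two worlds: a fact added to one never leaks into the other.
alice = World("alice")
bob = World("bob")
alice.add_fact("project-x", "status", "in-progress")

print(alice.knows("project-x", "status", "in-progress"))  # True
print(bob.knows("project-x", "status", "in-progress"))    # False
```

The "one RDF dataset per world" constraint is what makes this guarantee structural rather than a matter of query discipline.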

Architecture

A world functions as a neuro-symbolic memory container. It pairs an isolated RDF dataset with a high-performance vector search index.
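The pairing described above can be sketched as a single container that keeps both layers keyed by the same fact identifier, so every vector-search hit can be traced back to an exact triple. The `NeuroSymbolicStore` name and its fields are assumptions for illustration; a real deployment would use an actual RDF dataset and vector index.

```python
from dataclasses import dataclass, field


@dataclass
class NeuroSymbolicStore:
    """Hypothetical sketch: one container pairing symbolic triples
    with vector embeddings under shared fact ids."""

    triples: dict[str, tuple[str, str, str]] = field(default_factory=dict)
    vectors: dict[str, list[float]] = field(default_factory=dict)

    def add(self, fact_id: str, triple: tuple[str, str, str],
            embedding: list[float]) -> None:
        # Both layers are written together, so a semantic hit in the
        # vector index always resolves to a verifiable triple.
        self.triples[fact_id] = triple
        self.vectors[fact_id] = embedding


store = NeuroSymbolicStore()
store.add("f1", ("ada", "worksOn", "project-x"), [0.9, 0.1, 0.0])
print(store.triples["f1"])  # ('ada', 'worksOn', 'project-x')
```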

Neural layer

The neural layer uses vector embeddings to manage semantic similarity. It allows the agent to navigate the world through natural language queries.
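At its core, semantic retrieval ranks stored embeddings by similarity to a query embedding. The sketch below uses plain cosine similarity over in-memory lists; a production index would use an approximate-nearest-neighbor structure, and the example vectors are made up.

```python
import math


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def nearest(query: list[float], index: dict[str, list[float]], k: int = 3) -> list[str]:
    """Return the k fact ids whose embeddings are most similar to the query."""
    return sorted(index, key=lambda fid: cosine(query, index[fid]), reverse=True)[:k]


index = {
    "fact-1": [0.9, 0.1, 0.0],
    "fact-2": [0.1, 0.9, 0.0],
    "fact-3": [0.8, 0.2, 0.1],
}
print(nearest([1.0, 0.0, 0.0], index, k=2))  # ['fact-1', 'fact-3']
```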

Symbolic layer

The symbolic layer uses RDF and SPARQL to store and query hard facts. It provides deterministic validation, reducing the risk of hallucinations inherent in standalone LLMs.

Audit trails

Worlds eliminates opaque, untraceable behavior. Every retrieved fact has a verifiable audit trail in the RDF graph. If an agent makes a claim, the symbolic layer proves it.
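One way to picture the audit trail is as each retrieved claim being bundled with the exact triple and provenance that back it. The `AuditedFact` shape and the provenance strings below are hypothetical; they only illustrate the idea that a claim resolves to checkable graph evidence.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AuditedFact:
    """Hypothetical sketch: a claim bundled with the graph evidence behind it."""
    claim: str
    triple: tuple[str, str, str]  # the exact triple backing the claim
    source: str                   # where that triple was asserted


def audit_trail(claim: str, graph: dict[tuple[str, str, str], str]) -> list[AuditedFact]:
    """Return every triple (with its provenance) that mentions the claim."""
    return [AuditedFact(claim, t, src) for t, src in graph.items() if claim in t]


# Toy graph mapping each triple to the record where it was asserted.
graph = {
    ("report-q3", "approvedBy", "dana"): "meeting-notes-2024-10-02",
    ("report-q3", "status", "final"): "ticket-4711",
}
for fact in audit_trail("report-q3", graph):
    print(fact.triple, "<-", fact.source)
```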