After disambiguating a starting item with hybrid search, agents must traverse its connected facts. SPARQL (SPARQL Protocol and RDF Query Language) is the W3C-standard query language for knowledge graphs. In Worlds, it provides the strict symbolic logic for deterministic reasoning.

How it looks

Given triples such as user:ethan schema:worksAt org:wazoo, query for all employees:
PREFIX schema: <http://schema.org/>
PREFIX org: <http://example.com/>

SELECT ?person WHERE {
  ?person schema:worksAt org:wazoo .
}
SPARQL matches patterns against facts, deterministically following relationships to return exact results.
| Dimension | SPARQL | Vector search |
|---|---|---|
| Precision | Exact, deterministic | Approximate, probabilistic |
| Best for | Structured relationships | Semantic similarity |
| Output | Verified facts | Ranked candidates |
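The deterministic side of this comparison can be sketched in a few lines. This is an illustrative model of triple pattern matching, not the Worlds implementation; the IRIs and the `match` helper are hypothetical examples.

```python
# Illustrative sketch: SPARQL-style triple pattern matching over an
# in-memory set of triples. None plays the role of a SPARQL variable.

WORKS_AT = "http://schema.org/worksAt"

triples = {
    ("http://example.com/ethan", WORKS_AT, "http://example.com/wazoo"),
    ("http://example.com/ada", WORKS_AT, "http://example.com/wazoo"),
    ("http://example.com/bob", WORKS_AT, "http://example.com/acme"),
}

def match(triples, s=None, p=None, o=None):
    """Return every triple matching the pattern; None matches anything."""
    return {
        (ts, tp, to) for (ts, tp, to) in triples
        if (s is None or ts == s)
        and (p is None or tp == p)
        and (o is None or to == o)
    }

# Equivalent of: SELECT ?person WHERE { ?person schema:worksAt org:wazoo }
employees = {s for (s, _, _) in match(triples, p=WORKS_AT, o="http://example.com/wazoo")}
```

Unlike a vector search, the result is exact: every binding satisfies the pattern, and nothing is ranked or approximate.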
Worlds combines both via hybrid search so you get the best of each. Standard RAG struggles with evolving facts and complex relational queries. Worlds solves this using stateful Graph RAG.

Limits of stateless RAG

In traditional RAG, you chunk and embed text for retrieval based on semantic similarity. This works for static information but fails to capture relationship dynamics or state changes.

The evolving fact

  1. Monday: “I am working on Project Apollo.”
  2. Wednesday: “I am pausing Apollo to focus on Project Hermes.”
  3. Friday: “What am I working on?”
Traditional RAG retrieves both chunks, forcing the LLM to resolve the contradiction via heuristics. As data grows, retrieval becomes noisy.

Stateful memory

Worlds maintains a living knowledge graph. Instead of storing raw text, it extracts meaning as triples, namely subject → predicate → object. When facts change, Worlds updates specific graph relationships. This resolves contradictions at the data layer rather than relying on LLM reasoning.
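The "evolving fact" scenario above can be modeled as an upsert at the data layer. This is a conceptual sketch, not the Worlds update mechanism; the `worksOn` predicate and the notion of treating it as single-valued are assumptions made for illustration.

```python
# Illustrative sketch: resolving a contradiction in the graph itself.
# For a single-valued predicate like worksOn (hypothetical), a new fact
# replaces the old one instead of accumulating alongside it.

WORKS_ON = "worksOn"

graph = set()

def upsert(graph, s, p, o, single_valued=frozenset({WORKS_ON})):
    """Insert (s, p, o); for single-valued predicates, drop stale facts."""
    if p in single_valued:
        graph -= {t for t in graph if t[0] == s and t[1] == p}
    graph.add((s, p, o))

upsert(graph, "user:ethan", WORKS_ON, "project:apollo")   # Monday
upsert(graph, "user:ethan", WORKS_ON, "project:hermes")   # Wednesday

# Friday: the graph holds exactly one current answer, so the LLM
# never has to arbitrate between contradictory chunks.
current = {o for (s, p, o) in graph if s == "user:ethan" and p == WORKS_ON}
```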

RAG vs Worlds

| Feature | Traditional RAG | Worlds |
|---|---|---|
| Search | Semantic similarity | Hybrid, combining semantic and relational |
| State | Stateless | Stateful, resolving contradictions |
| Inference | Hallucination-prone | Deterministic reasoning |
| Structure | Unstructured chunks | Knowledge primitives, namely items and triples |

Implementing graph RAG

To implement graph RAG with Worlds, follow the ingestion pipeline to transform your unstructured data into a queryable graph. Building on that graph, ontology RAG uses strict, predefined schemas to guide agentic reasoning and data retrieval, ensuring your agents operate within a well-defined conceptual boundary.

Grounding agents in ontologies

By using discover-schema, your agent can retrieve the available classes and properties of a world before attempting to query it.
  1. Discovery: Agent retrieves the world’s ontology.
  2. Mapping: Agent maps the user’s natural language request to specific RDF classes and predicates.
  3. Querying: Agent executes precise SPARQL queries instead of depending solely on vector similarity.
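The three steps above can be sketched as a loop in which the ontology constrains what the agent is allowed to say. This is a hypothetical sketch: `discover_schema` and `map_intent` stand in for whatever your Worlds client and agent framework actually expose, and are not real API calls.

```python
# Hypothetical sketch of the discovery -> mapping -> querying loop.

def discover_schema():
    # Stand-in for a discover-schema call returning the world's ontology.
    return {
        "classes": ["http://schema.org/Person"],
        "properties": ["http://schema.org/worksAt"],
    }

def map_intent(user_request, ontology):
    # Toy mapping step: only emit predicates that exist in the schema,
    # so the agent cannot invent terms the world does not define.
    if "works" in user_request and "http://schema.org/worksAt" in ontology["properties"]:
        return "http://schema.org/worksAt"
    raise ValueError("request does not map onto the ontology")

ontology = discover_schema()                       # 1. Discovery
predicate = map_intent("who works at wazoo?", ontology)  # 2. Mapping
query = (                                          # 3. Querying
    f"SELECT ?person WHERE {{ ?person <{predicate}> "
    f"<http://example.com/wazoo> . }}"
)
```

The key design point is that the mapping step can only ever select from the discovered vocabulary, which is what grounds the agent.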

Why use ontology RAG?

Conventional deep learning feeds unstructured data into neural networks and hopes the model infers ground truth. By contrast, neuro-symbolic AI in Worlds explicitly grounds language models in semantic structures. Steering models into a predefined ontology provides the strict context needed to map semantic intent to logical execution.
  • Complex logic: If user A is the parent of B, and B is the parent of C, vector search cannot reliably infer A is the grandparent of C based on text similarity. SPARQL and a defined ontology deterministically execute the precise logic to traverse the graph and return the grandparent.
  • Zero hallucination: Agents only use terms that actually exist in the world’s schema.
  • Deterministic reliability: Ensure the agent’s mental model strictly matches the actual data structure.
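The grandparent case from the first bullet is a deterministic two-hop join, the kind of logic a SPARQL engine executes for a query like `SELECT ?gp ?gc WHERE { ?gp ex:parentOf ?p . ?p ex:parentOf ?gc . }`. The sketch below models that join directly; the `ex:` prefix and the data are illustrative assumptions.

```python
# Illustrative sketch: deriving grandparent relationships by joining
# two parentOf hops. No text similarity is involved; the answer follows
# from the structure of the graph alone.

PARENT_OF = "ex:parentOf"

triples = {
    ("ex:A", PARENT_OF, "ex:B"),
    ("ex:B", PARENT_OF, "ex:C"),
}

grandparents = {
    (gp, gc)
    for (gp, p1, mid) in triples if p1 == PARENT_OF
    for (mid2, p2, gc) in triples if p2 == PARENT_OF and mid2 == mid
}
```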

Architecture

While agents interact with Worlds abstractly, the platform manages the underlying RDF data through a pipeline.

RDF blob handling

The system manages RDF data through several core utilities:
  • N3 utility: Provides functions to convert between RDF blobs, such as N-Quads strings, and in-memory N3 stores.
  • SPARQL utility: Executes SPARQL queries using the Comunica engine over an N3 store.
    Comunica is an open-source SPARQL query engine built for the web. It handles large RDF data sources efficiently.
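To make the blob-to-store direction concrete, here is a deliberately simplified sketch of parsing N-Quads lines into quads. Real N-Quads parsing (handled in Worlds by the N3 utility) must deal with literals, datatypes, language tags, and escape sequences; this toy version handles only plain IRI terms.

```python
# Simplified sketch: turning an N-Quads blob into in-memory quads.
# Assumes every term is an IRI with no embedded whitespace.

def parse_nquads(blob):
    quads = []
    for line in blob.strip().splitlines():
        s, p, o, g = line.rstrip(" .").split(" ")
        strip_iri = lambda t: t[1:-1] if t.startswith("<") else t
        quads.append(tuple(strip_iri(t) for t in (s, p, o, g)))
    return quads

blob = (
    "<http://ex.com/ethan> <http://schema.org/worksAt> "
    "<http://ex.com/wazoo> <http://ex.com/g1> ."
)
quads = parse_nquads(blob)
```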

RDF patching

Centralized in the core patch service, the system processes mutations for the world database:
  1. Deterministic identification: Skolemizes and hashes each quad to a unique triple ID.
  2. Relational storage: Upserts the resulting triples into the triples table.
  3. Semantic indexing: Chunks and embeds literal values for hybrid search.
  4. Triggers: SQL triggers synchronize rdf:type relations to the entity_types table.
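Step 1 of the pipeline can be sketched as follows. The idea is that hashing a canonical form of each quad makes identification deterministic, so re-ingesting the same fact upserts rather than duplicates. The exact canonicalization Worlds uses (including how blank nodes are skolemized) may differ; this shows only the principle.

```python
# Illustrative sketch: deterministic triple IDs via canonicalize-and-hash.

import hashlib

def triple_id(s, p, o, g=""):
    """Hash a canonical serialization of the quad into a stable ID."""
    canonical = f"{s}\n{p}\n{o}\n{g}"
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

table = {}  # triple ID -> quad, standing in for the triples table

def upsert_quad(table, quad):
    # The same fact always hashes to the same key, so upserts are idempotent.
    table[triple_id(*quad)] = quad

upsert_quad(table, ("user:ethan", "schema:worksAt", "org:wazoo", ""))
upsert_quad(table, ("user:ethan", "schema:worksAt", "org:wazoo", ""))
```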

SPARQL execution pipeline

When you execute a SPARQL query via the API, the system follows this pipeline:
  1. Loads the world metadata from the main database.
  2. Resolves the world-specific database client via databaseManager.get(worldId).
  3. Executes handlePatch with a handler that applies updates to the world client.
     handlePatch accepts the following parameters:

     | Parameter | Type | Required | Description |
     |---|---|---|---|
     | worldId | string | Yes | The specific world ID to target for the patch context. |
     | patch | Quad[] | Yes | The array of RDF Quads representing the changes to be applied. |
  4. Executes the SPARQL query over the resulting RDF blob.
  5. Updates the world metadata in the main database if the blob changed during a SPARQL update.
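The whole pipeline can be sketched end to end with in-memory stand-ins. Everything here is hypothetical scaffolding: `get_client`, `handle_patch`, and the metadata dictionaries are placeholders for `databaseManager.get(worldId)`, `handlePatch`, and the main database, not the real Worlds API.

```python
# Hypothetical end-to-end sketch of the SPARQL execution pipeline.

main_db = {"world-1": {"blob_version": 1}}   # 1. world metadata
world_dbs = {"world-1": set()}               # per-world triple stores

def get_client(world_id):
    # 2. resolve the world-specific client (databaseManager.get stand-in)
    return world_dbs[world_id]

def handle_patch(world_id, patch):
    # 3. apply RDF quads to the world client
    client = get_client(world_id)
    before = frozenset(client)
    client.update(patch)
    # 5. if the blob changed, update the world metadata
    if frozenset(client) != before:
        main_db[world_id]["blob_version"] += 1

handle_patch("world-1", {("user:ethan", "schema:worksAt", "org:wazoo")})

# 4. execute the query over the resulting data
results = {s for (s, p, o) in get_client("world-1") if p == "schema:worksAt"}
```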