RDF blob handling
The system manages RDF data through several core utilities:
- N3 utility: Provides functions to convert between RDF blobs (e.g., N-Quads strings) and in-memory N3 stores.
- SPARQL utility: Executes SPARQL queries over an N3 store using the Comunica engine.
Comunica is a highly modular, open-source SPARQL query engine built for the web. Its modular architecture makes it flexible and lets it query large RDF data sources efficiently.
RDF patching
Centralized in the core patch service, the system processes mutations for the world database:
- Deterministic identification: Skolemizes and hashes each quad to produce a unique triple ID.
- Relational storage: Upserts the resulting triples into the `triples` table.
- Semantic indexing: Chunks and embeds literal values for hybrid search.
- Triggers: SQL triggers synchronize `rdf:type` relations to the `entity_types` table.
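The "skolemize and hash" step can be sketched as below. The skolem IRI scheme, the quad shape, and the SHA-256 choice are assumptions for illustration, not the system's actual implementation; the point is that blank nodes are replaced with stable IRIs before hashing so the same quad always yields the same triple ID.

```typescript
import { createHash } from "node:crypto";

// Simplified quad representation (terms as strings) for this sketch.
interface Quad {
  subject: string;
  predicate: string;
  object: string;
}

// Hypothetical skolemization: rewrite blank node labels (e.g. "_:b0")
// into deterministic IRIs scoped to a world, so hashing is stable.
function skolemize(term: string, worldId: string): string {
  return term.startsWith("_:")
    ? `https://example.org/.well-known/genid/${worldId}/${term.slice(2)}`
    : term;
}

// Hash the canonicalized terms into a unique, deterministic triple ID.
function tripleId(q: Quad, worldId: string): string {
  const canonical = [q.subject, q.predicate, q.object]
    .map((t) => skolemize(t, worldId))
    .join("\n");
  return createHash("sha256").update(canonical).digest("hex");
}

const id = tripleId(
  {
    subject: "_:b0",
    predicate: "http://xmlns.com/foaf/0.1/name",
    object: '"Alice"',
  },
  "world-1",
);
console.log(id.length); // prints 64 (hex-encoded SHA-256)
```

Because the ID is a pure function of the (skolemized) quad, the relational upsert in the next step can use it as a conflict key: re-applying the same patch is idempotent.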
SPARQL execution pipeline
When you execute a SPARQL query via the API, the system follows this pipeline:
- Loads the world metadata from the main database.
- Resolves the world-specific database client via `databaseManager.get(worldId)`.
- Executes `handlePatch` with a handler that applies updates to the world client, passing the specific world ID to target for the patch context and the array of RDF quads representing the changes to be applied.
- Executes the SPARQL query over the resulting RDF blob.
- Updates the world metadata in the main database if the blob changed during a SPARQL update.
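The pipeline above can be sketched end to end. Everything here is a stand-in: `databaseManager`, `handlePatch`, the in-memory "databases", and the metadata shape are hypothetical stubs shaped by the description, not the system's real APIs.

```typescript
// Simplified quad and metadata shapes for this sketch.
type Quad = { subject: string; predicate: string; object: string };
interface WorldMeta {
  worldId: string;
  blobVersion: number;
}

// Stand-ins for the main database and the per-world databases.
const mainDb = new Map<string, WorldMeta>([
  ["world-1", { worldId: "world-1", blobVersion: 0 }],
]);
const worldDbs = new Map<string, Quad[]>([["world-1", []]]);

// Stub for the real databaseManager.get(worldId) lookup.
const databaseManager = {
  get(worldId: string): Quad[] {
    return worldDbs.get(worldId)!;
  },
};

// Stub for handlePatch: takes the target world ID, the quads to apply,
// and a handler that applies each update to the world client.
function handlePatch(
  worldId: string,
  quads: Quad[],
  handler: (client: Quad[], q: Quad) => void,
): void {
  const client = databaseManager.get(worldId);
  for (const q of quads) handler(client, q);
}

function executeSparqlUpdate(worldId: string, quads: Quad[]): number {
  const meta = mainDb.get(worldId)!; // 1. load world metadata
  // 2-3. resolve the world client and apply the patch through a handler
  handlePatch(worldId, quads, (client, q) => client.push(q));
  const blob = databaseManager.get(worldId); // 4. query the resulting blob
  if (quads.length > 0) {
    // 5. update metadata only when the blob actually changed
    mainDb.set(worldId, { ...meta, blobVersion: meta.blobVersion + 1 });
  }
  return blob.length;
}

console.log(executeSparqlUpdate("world-1", [
  { subject: "s", predicate: "p", object: "o" },
])); // prints 1
```

The key design point the sketch preserves is that mutation and metadata bookkeeping are separated: the patch handler only touches the world client, while the main database is updated once, after the blob's final state is known.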