The story of human progress is the story of how information is stored, shared, and processed. Each layer of progress compounds into the next in an accelerating trajectory. The Worlds Ecosystem is the latest chapter in this ancient narrative. It builds upon millennia of innovation in externalized thought, returning to the original vision of universal data portability, associative linking, and the persistent item graph.
The architectural critique: The desktop metaphor

To understand Worlds, you must first unlearn the desktop. The 1980s GUI prioritized mass adoption by office workers by mimicking physical constraints, such as files in paper folders. This decision fundamentally broke data portability. When data is locked inside apps and files, associative linking becomes impossible, and AI agents lack the necessary context to truly augment cognition.

Oral history

300,000–10,000 BCE For the vast majority of human existence, the human brain was the only hard drive. Progress was slow not because early humans lacked intelligence, but because knowledge was lost whenever a tribe or an elder perished. Early societies relied on oral traditions, using myth, rhythm, and song as mnemonic devices to encode vital survival data across generations. Eventually, the impulse to externalize thought began to emerge. Long before the famous cave paintings of Lascaux, early humans were experimenting with symbolic storage. In places like Blombos Cave in South Africa, archaeologists have found pieces of ochre etched with deliberate, cross-hatched geometric patterns dating back over 70,000 years. This marks the dawn of symbolic representation—the realization that a physical object could hold a mental concept.

Agriculture

10,000–1,000 BCE The true catalyst for systemic knowledge externalization was not poetry or religion, but bureaucracy. The Agricultural Revolution forced humans to settle, creating food surpluses that needed to be managed, taxed, and traded. In ancient Mesopotamia, accountants began using small clay tokens to represent bushels of grain or heads of cattle. Over millennia, to prevent theft and fraud, they began sealing these tokens inside hollow clay envelopes and pressing their shapes into the surface. They soon realized they didn’t need the tokens inside at all—the impressions on the outside were enough. This evolution produced Cuneiform, the first fully developed writing system. Writing did not begin as a way to record history; it began as a spreadsheet.

Phonetics

1,500 BCE–1,000 CE Early writing systems like Cuneiform and Egyptian Hieroglyphs were logographic—symbols representing whole words—and incredibly complex, requiring years of study. This centralized knowledge in the hands of elite scribal classes. The democratization of knowledge began with the invention of the alphabet. Canaanite miners in the Sinai Peninsula adapted Egyptian hieroglyphs to represent individual sounds rather than whole words, producing what is now known as the Proto-Sinaitic script. The Phoenicians later refined and spread this phonetic system across the Mediterranean. Because an alphabet required learning only two or three dozen symbols, it made literacy highly portable and accessible to a broader population. Meanwhile, other complex societies engineered vastly different media. The Inca Empire managed millions of subjects across mountainous Andean terrain without a written script. Instead, they used the Khipu—a sophisticated and enigmatic system of knotted, dyed strings that encoded numerical data, tax records, and potentially narrative histories in a three-dimensional, tactile format.

Replication

1,000–1900 CE Even with portable alphabets and lightweight substrates like parchment and paper (the latter invented in Han Dynasty China), reproducing knowledge remained a slow, manual bottleneck. A monk could spend a year copying a single text. The paradigm shifted with the mechanization of writing. China and Korea developed movable type first, but Johannes Gutenberg’s printing press in 15th-century Europe triggered a rapid information revolution. By collapsing the cost of producing books, the press broke the Church and State’s monopoly on information. It fueled the Renaissance, catalyzed the Scientific Revolution, and allowed scholars across continents to compare standardized data, find errors, and build upon each other’s work.

Digital substrate

20th century By the mid-20th century, humanity reached the physical limits of paper-based knowledge. Managing the sheer volume of global information required a new substrate entirely.

The Memex

1945 As World War II concluded, Vannevar Bush published his influential essay, As We May Think. He recognized that human knowledge was expanding faster than the human ability to navigate it. He proposed the Memex (memory extension)—a mechanized private file and library system. Crucially, Bush realized that the human mind does not work via rigid alphabetical indexes or hierarchical folders. The mind operates by association. Bush argued that any data system must allow users to map associative “trails” between arbitrary items. The Memex was not only a storage device; it served as the first conceptual model for a knowledge graph.

The mother of all demos

1968 Inspired directly by Bush, Douglas Engelbart formalized these ideas at the Stanford Research Institute. In 1962, he published Augmenting Human Intellect, arguing that computers should not merely automate tasks but augment the human capacity for analysis and symbolic reasoning. In his landmark 1968 “mother of all demos,” Engelbart demonstrated:
  • The computer mouse
  • Hypertext linking
  • Collaborative, real-time editing
  • Transclusion: The ability to view a live instance of a conceptual item across multiple contexts without duplicating the underlying data.
Engelbart proved that the computer could be a malleable medium for associative thought, not merely an electronic abacus.

Desktop metaphor

The 1980s So why do modern computers not work this way? When companies like Apple and Microsoft brought computing to the masses, Engelbart’s concepts of infinite, associative graphs proved too complex for mass appeal. The industry required an immediate, recognizable crutch: the desktop metaphor. By organizing data into isolated physical metaphors—documents placed inside folders and owned by proprietary applications—the system minimized the cognitive load of learning computing. But it severed the associative links envisioned by Bush. Data became hostage to the specific interface that created it. This created application silos.

Return of the graph

The 2000s As the internet scaled, the limitations of the desktop metaphor severely impacted the web. Sir Tim Berners-Lee championed the Semantic Web, aiming to give meaning to information rather than linking static HTML pages. This led to the W3C’s standardization of the Resource Description Framework (RDF). RDF enforces a strict rule: all data must be expressed as statements (triples) consisting of a subject, predicate, and object. However, raw RDF proved too complex and abstract for widespread adoption by everyday web developers. Instead of becoming the way humans built the web, the Semantic Web succeeded by going invisible—becoming the underlying infrastructure that machines use to understand it.
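RDF's triple rule is easiest to see in code. The sketch below models statements as plain (subject, predicate, object) tuples and queries them by pattern; the `ex:` identifiers and the `match` helper are invented for illustration, and a production system would use a dedicated library such as rdflib rather than raw tuples.

```python
# Illustrative sketch: RDF's triple rule modeled with plain Python tuples.
# Every fact is a (subject, predicate, object) statement.
triples = [
    ("ex:VannevarBush", "ex:authored", "ex:AsWeMayThink"),
    ("ex:AsWeMayThink", "ex:publishedIn", "1945"),
    ("ex:DougEngelbart", "ex:inspiredBy", "ex:VannevarBush"),
]

def match(store, s=None, p=None, o=None):
    """Return every triple matching a pattern; None acts as a wildcard."""
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Who was inspired by Bush? Pattern matching over triples is the seed of
# every graph query language, from SPARQL down.
print(match(triples, p="ex:inspiredBy", o="ex:VannevarBush"))
```

Because every fact shares this one shape, any two datasets expressed as triples can be merged by simple concatenation, which is the property the Semantic Web was designed around.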

Schema.org

The 2010s In 2011, the operators of the world’s largest search engines—Google, Bing, and Yahoo—collaborated to launch Schema.org. They recognized that to build intelligent search engines, they needed a universal vocabulary for structured data that was easier to implement than raw RDF. By translating human-readable web content into machine-readable markup, such as JSON-LD, Schema.org allowed algorithms to stop merely matching keywords and start understanding items and their relationships. Today, this semantic layer powers the massive Knowledge Graphs, rich search snippets, and AI overviews that define the modern internet.

Simultaneously, the Semantic Web evolved into the underlying infrastructure for codified official data across the globe. Governments, healthcare networks, and scientific communities use linked open data principles to break down information silos. From standardizing global electronic health records to mapping federal regulatory codes, semantic technologies now provide a secure, interoperable framework for the world’s most critical data.

Looking back, the creation of ARPANET and eventually the World Wide Web made information weightless, instantaneously transmissible, and globally networked. Humanity transitioned from individual, localized libraries into a single, interconnected, planetary nervous system. But it was the integration of the Semantic Web that finally gave this nervous system a structured, decipherable memory.
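A typical piece of Schema.org markup is small enough to show in full. The sketch below builds a minimal JSON-LD object of the kind a site embeds in a script tag of type application/ld+json; the field values are placeholders chosen for illustration.

```python
import json

# A minimal Schema.org JSON-LD object (values are illustrative placeholders).
# "@context" names the shared vocabulary; "@type" tells crawlers what kind
# of item the surrounding page describes.
markup = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "As We May Think",
    "author": {"@type": "Person", "name": "Vannevar Bush"},
    "datePublished": "1945-07-01",
}

print(json.dumps(markup, indent=2))
```

A crawler reading this no longer has to guess from keywords that the page is an article with an author; the relationships are stated as typed data, which is exactly the triple structure of RDF wearing developer-friendly JSON syntax.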

Symbiotic era

Present Every era in this story follows the same pattern: a new medium for externalizing thought removes a bottleneck, and civilization surges forward. Cave walls freed memory from mortal minds. Writing freed accounting from human recall. The printing press freed ideas from scarcity. The internet freed information from physical location. But all of these systems share a fundamental limitation: they are passive. Clay tablets, printed books, and even Wikipedia do not think. They wait silently until a human mind retrieves, interprets, and applies the information they contain.

That threshold is now being crossed. With the rise of large language models, autonomous agents, and machine learning systems trained on humanity’s collective written output, externalized memory is becoming active. For the first time, the medium itself can read, synthesize, pattern-match, and generate knowledge—operating across vast datasets far beyond any individual human’s cognitive reach. The external hard drive has learned to think.

Completing the arc

The Worlds Ecosystem unites Engelbart’s vision of augmented intellect with the open standards of RDF, and packages them for modern developers. Worlds removes the app icon entirely. It treats all data as part of a universal item store, returning computing to its associative roots:
  • Persistent, neuro-symbolic memory for AI agents: Agents do not rely on probabilistic guessing to retrieve facts. Because Worlds uses knowledge graphs, they navigate deterministic, verifiable truths—the same associative trails Bush imagined in 1945, now traversed autonomously.
  • Transclusion replaces copy-pasting: A statement, the atomic unit of knowledge, exists once in the world. Multiple applications and agents can reference, mutate, and fork that exact item in real time without duplicating it—realizing the vision of connected knowledge at a universal scale.
  • Universal portability: Because Worlds uses open standards, your knowledge graph never remains trapped inside a proprietary vendor silo. The data is the platform.
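The difference between transclusion and copy-pasting can be sketched in a few lines: two contexts hold references to one shared item, so an edit is visible in both without duplication. The `Item` class and the context names below are invented for illustration and are not the Worlds API.

```python
# Illustrative sketch of transclusion: multiple contexts reference one
# shared item instead of holding copies of its content.
class Item:
    def __init__(self, text):
        self.text = text

note = Item("Draft: the data is the platform")

# Two "applications" reference the same underlying item.
journal = {"today": note}
report = {"appendix": note}

# One mutation, visible everywhere, with no copies to reconcile.
note.text = "Final: the data is the platform"

print(journal["today"].text)    # the updated text
print(report["appendix"].text)  # the same object, not a duplicate
```

Copy-pasting, by contrast, would have stored two independent strings, and updating one would silently leave the other stale; shared reference is what keeps the knowledge graph consistent.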

The horizons ahead

Looking further, two emerging frontiers suggest the externalization of thought is far from complete:
  • Brain-Computer Interfaces (BCIs): Technologies that will blur the boundary between biological memory and digital storage, potentially granting direct neural access to externalized knowledge—collapsing the last remaining friction between thinking and knowing.
  • Ambient Computing: An environment where digital intelligence is woven into the physical world, making the retrieval and application of human knowledge as effortless and invisible as breathing.
From ochre scratched onto stone 70,000 years ago to a planetary knowledge graph traversed by autonomous minds—the arc of human progress has always bent toward the same destination: making captured knowledge available to everyone, and everything, that can use it. Worlds is the latest and most deliberate step along that path.