Executive Summary & Vision
GDS (Geometrodynamic Semantics) is a research prototype exploring an alternative to traditional, statistics-based Transformer architectures. Rather than predicting the next token, GDS models semantic reasoning as a physical phenomenon.
Inspired by Einstein’s theory of General Relativity, GDS treats concepts as “semantic particles” possessing intrinsic properties: mass (semantic importance), charge (the hyperdimensional vector), and spin (affective value). These particles are generated by the CSI-HDC (Conceptual State Injector using Hyperdimensional Computing)—a semantic tokenizer that replaces traditional token sequences with 20,000-dimensional binary hypervectors.
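As an illustration of how such hypervectors behave, the sketch below builds 20,000-dimensional binary vectors and combines them with standard hyperdimensional-computing operations (XOR binding, majority-vote bundling, Hamming similarity). The particle structure, names, and values are assumptions made for this sketch, not the actual CSI-HDC implementation.

```python
import numpy as np

DIM = 20_000  # dimensionality of the binary hypervectors described above
rng = np.random.default_rng(0)

def random_hypervector() -> np.ndarray:
    """A random dense binary hypervector; near-orthogonal to any other by construction."""
    return rng.integers(0, 2, size=DIM, dtype=np.uint8)

def bind(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Binding via element-wise XOR: associates two concepts; the result is dissimilar to both."""
    return np.bitwise_xor(a, b)

def bundle(*vecs: np.ndarray) -> np.ndarray:
    """Bundling via majority vote: superposes concepts; the result stays similar to each input."""
    return (np.sum(np.stack(vecs), axis=0) * 2 > len(vecs)).astype(np.uint8)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized Hamming similarity in [0, 1]; ~0.5 means unrelated."""
    return 1.0 - np.count_nonzero(a != b) / DIM

# Hypothetical "semantic particle": charge is the hypervector, mass and spin are scalars.
king = {"charge": random_hypervector(), "mass": 3.0, "spin": +0.4}
crown = {"charge": random_hypervector(), "mass": 2.5, "spin": +0.2}
print(similarity(king["charge"], crown["charge"]))  # ~0.5 for independently drawn concepts
```

Because random hypervectors of this dimensionality are nearly orthogonal, unrelated concepts score close to 0.5 similarity, which is what makes binding and bundling robust to noise.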
The CSI-HDC’s output is not a flat sequence of tokens, but a dynamic field of interacting particles. When processed by the GDS engine, this field warps a high-dimensional “conceptual space”. Reasoning is then modeled as finding the path of least resistance—a geodesic—through this curved semantic manifold.
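One way to make “path of least resistance” concrete is to discretize the conceptual space as a weighted graph and run a shortest-path search over edge costs. The sketch below uses Dijkstra’s algorithm on a toy graph; the graph, the costs, and the function name are illustrative assumptions, not the GDS engine’s actual mechanics.

```python
import heapq

# Illustrative semantic graph: edge cost ~ "resistance" between concepts (values are made up).
GRAPH = {
    "king":  {"power": 1.0, "crown": 0.8},
    "crown": {"power": 0.7},
    "power": {},
}

def geodesic(graph, start, goal):
    """Dijkstra's algorithm: the lowest-cost path stands in for the geodesic on the manifold."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, edge_cost in graph.get(node, {}).items():
            if nxt not in visited:
                heapq.heappush(frontier, (cost + edge_cost, nxt, path + [nxt]))
    return float("inf"), []

print(geodesic(GRAPH, "king", "power"))  # -> (1.0, ['king', 'power'])
```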
Learning occurs not through backpropagation, but through a Hebbian-style mechanism that modifies the geometry of the space itself. A dynamic Overlay layer adds contextual adjustments to edge costs in the graph. Successful reasoning paths are reinforced, making them “cheaper” and more likely in future queries. This process is governed by internal evaluation and a ValidationGate, enabling autonomous learning based on coherence principles rather than direct supervision.
The result is a research prototype demonstrating efficient, scalable, and—most importantly—explainable semantic reasoning, where every path can be audited and understood step-by-step.
Learning Paradigm
The GDS learning paradigm is fundamentally different from the backpropagation and gradient descent methods that power traditional Large Language Models. It is a form of autonomous, Hebbian-style learning that modifies the geometry of the conceptual space in response to experience.
Core Principles
- No Backpropagation: The model does not compute gradients across a massive neural network. Learning is a local, lightweight process.
- Learning by Modifying Costs: Instead of adjusting neuron weights, GDS learns by adjusting the “cost” of traversing specific edges in the semantic graph. This is done by writing small delta values to the dynamic Context Overlay (a minimal sketch of this overlay follows this list).
- Reinforcement and Penalization: Paths that lead to successful or “coherent” outcomes are reinforced (their edges receive a negative delta, making them cheaper and more attractive to the Reasoner). Paths that are evaluated as poor alternatives are penalized (their edges receive a positive delta, making them more expensive).
- Internal Evaluation: The model does not strictly require external, supervised labels to learn. As demonstrated in our simulation, it can employ internal heuristics (such as a “coherence score” based on concept mass) to decide which paths are “better” and thus worthy of reinforcement.
- Stability and Explainability: Because learning only affects the overlay, the foundational knowledge graph remains stable. The changes are auditable (one can inspect the deltas in the overlay) and their effect is directly observable in the Reasoner’s behavior and cost calculations.
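The sketch below illustrates the overlay mechanism described above: base edge costs stay fixed, while learning writes small deltas that the Reasoner adds on top when computing effective costs. The class name, method names, and delta values are assumptions for illustration, not the project’s actual API.

```python
# Minimal sketch of the cost-overlay idea; names and values are assumed, not the project's API.
class ContextOverlay:
    """Holds per-edge cost deltas so the base knowledge graph itself stays untouched."""
    def __init__(self):
        self.deltas = {}  # (src, dst) -> accumulated float delta

    def adjust(self, edge, delta):
        """Negative delta = reinforcement (cheaper edge); positive delta = penalty."""
        self.deltas[edge] = self.deltas.get(edge, 0.0) + delta

    def effective_cost(self, edge, base_cost):
        """What the Reasoner actually pays: stable base cost plus learned contextual delta."""
        return base_cost + self.deltas.get(edge, 0.0)

overlay = ContextOverlay()
overlay.adjust(("king", "crown"), -0.5)   # reinforce a step judged coherent
overlay.adjust(("king", "power"), +0.3)   # penalize a rejected alternative
print(overlay.effective_cost(("king", "crown"), base_cost=0.8))  # ~0.3: cheaper than before
```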
Case Study: The Simulation
Our simulation provided a concrete, end-to-end example of this paradigm in action:
- Initial State: The Reasoner initially chose the cheapest, most obvious path: king -> power.
- Internal Evaluation: An internal metric, the “coherence score” (sum of concept masses), evaluated the alternative path king -> crown -> power as semantically richer, despite its higher initial cost.
- Autonomous Learning: This internal evaluation triggered a learning event. The learn_edges function was called to apply a strong negative delta (reinforcement) to the king -> crown and crown -> power edges, and a positive delta (penalty) to the king -> power edge.
- Behavioral Change: When the query was run again, the Reasoner, factoring in the new deltas from the Overlay, found that the path through crown was now the cheapest.
This demonstrates a complete, autonomous cycle: Reason → Evaluate → Self-Reinforce → Reason Differently. The system adapts its reasoning on the basis of its own internal evaluation, a process that resembles neuroplasticity more than traditional supervised learning.
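To make the cycle concrete, here is a self-contained sketch that qualitatively reproduces the simulation described above. The masses, base edge costs, delta sizes, and the learn_edges signature are illustrative assumptions chosen so the behavior matches the narrative; they are not the project’s actual values or API.

```python
# End-to-end sketch of the Reason -> Evaluate -> Self-Reinforce -> Reason Differently cycle.
# Masses, edge costs, and delta sizes are illustrative assumptions, not the project's values.
MASS = {"king": 3.0, "crown": 2.5, "power": 2.0}          # concept "mass" (semantic importance)
BASE = {("king", "power"): 1.0, ("king", "crown"): 0.8, ("crown", "power"): 0.7}
PATHS = [["king", "power"], ["king", "crown", "power"]]    # candidate paths for the query
deltas = {}                                                # Context Overlay: learned edge deltas

def edges_of(path):
    return list(zip(path, path[1:]))

def path_cost(path):
    """Base cost plus overlay delta for every edge on the path."""
    return sum(BASE[e] + deltas.get(e, 0.0) for e in edges_of(path))

def coherence(path):
    """Internal heuristic from the simulation: sum of concept masses along the path."""
    return sum(MASS[c] for c in path)

def learn_edges(edges, delta):
    """Negative delta reinforces (cheapens) edges; positive delta penalizes them."""
    for e in edges:
        deltas[e] = deltas.get(e, 0.0) + delta

# 1. Reason: the initially cheapest path wins.
chosen = min(PATHS, key=path_cost)
print(chosen)                              # ['king', 'power']
# 2. Evaluate: an alternative path is judged semantically richer despite its higher cost.
richer = max(PATHS, key=coherence)
# 3. Self-reinforce: cheapen the richer path, penalize the originally chosen one.
if richer != chosen:
    learn_edges(edges_of(richer), delta=-0.5)
    learn_edges(edges_of(chosen), delta=+0.3)
# 4. Reason differently: with the overlay applied, the richer path is now the cheapest.
print(min(PATHS, key=path_cost))           # ['king', 'crown', 'power']
```

Running the sketch prints the short path before learning and the path through crown afterwards, mirroring the behavioral change described in the case study.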