Part 6: Explainability & Interpretability

GDS achieves explainability through a physics-inspired reasoning paradigm where cognitive processes emerge from observable graph traversal. Unlike neural networks with millions of opaque parameters, GDS reasoning is inherently transparent: every decision corresponds to a path through a semantic graph, where each step has measurable properties and justifications.

Why Explainability Matters for GDS

Traditional language models face the “black box problem” – their reasoning emerges from billions of learned weights that humans cannot interpret. GDS takes a fundamentally different approach:

  • Graph-based reasoning provides natural traceability (every path is a sequence of concepts)
  • Physics-inspired cost functions make decisions based on interpretable properties (mass, distance, frequency)
  • Explicit uncertainty quantification separates knowledge limits from data noise
  • Meta-cognitive chains document the system’s reasoning process step-by-step

This makes GDS particularly suitable for high-stakes applications (legal, medical, financial) where decisions must be justified and audited.


1. Meta-Cognition Chain

The Meta-Cognition Chain provides a transparent record of GDS’s reasoning process, decomposed into four steps that mirror human problem-solving:

Chain Structure

1️⃣ Analysis      → Understand the query and assess available knowledge
2️⃣ Tools         → Execute graph search in the knowledge base
3️⃣ Synthesis     → Evaluate path quality and formulate answer
4️⃣ Response      → Deliver final answer with confidence level
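
A minimal sketch of how such a chain could be recorded as a data structure (the type names below are illustrative, not the actual metacognition.rs API):

#[derive(Debug, Clone)]
enum ChainStep {
    // confidence label: High / Low / Rejection (simplified to a string here)
    Analysis { summary: String, confidence: String },
    Tools { tool: String, result: String },
    Synthesis { summary: String },
    Response { answer: String, confidence: String },
}

#[derive(Debug, Default)]
struct MetaCognitionChain {
    steps: Vec<ChainStep>,
}

impl MetaCognitionChain {
    /// Append a step; a complete chain follows the
    /// Analysis → Tools → Synthesis → Response order.
    fn push(&mut self, step: ChainStep) {
        self.steps.push(step);
    }
}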

Confidence Levels

GDS explicitly quantifies confidence using three levels:

  • High: Based on verified information with optimal path quality (path cost < 5.0 and decision margin > 1.0)
  • Low: Uncertain due to high path cost (> 10.0) or low decision margin (< 0.5)
  • Rejection: Insufficient data or expertise to answer reliably
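
A small sketch of how these thresholds might translate into code; the numeric cut-offs come from the list above, while the handling of the band between the High and Low thresholds is an assumption for illustration:

fn classify_confidence(path_found: bool, path_cost: f64, margin: f64) -> &'static str {
    if !path_found {
        // Insufficient data or expertise: refuse rather than guess.
        return "Rejection";
    }
    if path_cost < 5.0 && margin > 1.0 {
        "High"
    } else if path_cost > 10.0 || margin < 0.5 {
        "Low"
    } else {
        // Intermediate band not covered by the thresholds above:
        // conservatively report Low (assumption for illustration).
        "Low"
    }
}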

Example Chain

For the query "What connects 'go_to_barber' to 'supermarket'?":

1️⃣ Analysis
“Analyzing semantic relationship between start concept ‘go_to_barber’ (m₀=11.108) and target concept ‘supermarket’ (m₀=9.534) in the knowledge graph.”
Confidence: High (both concepts have strong frequency indicators)

2️⃣ Tools (Knowledge Graph Search)
“Found path with 4 nodes, total cost 2.844. Nodes: go_to_barber → cut_hair → mass → supermarket”
Result: Path successfully discovered

3️⃣ Synthesis
“Based on the path found in the knowledge graph, with optimal cost 2.844 and safety margin 0.797, I formulate a verified answer.”
Path Quality: Low cost indicates strong semantic connection

4️⃣ Response
“‘go_to_barber’ is connected to ‘supermarket’ through 2 intermediate concepts: ‘cut_hair’, ‘mass’. The total path cost is 2.844, which indicates a strong semantic connection.”
Confidence: High

Confidence Explanation:
“Confidence based on: low optimal cost, short path (direct connection), high-frequency nodes (m₀=7.98).”


2. Attribution Analysis

Attribution identifies which concepts and relationships contribute most to a reasoning outcome. GDS uses frequency-weighted positional attribution to score node importance:

Attribution Formula

\[ \text{Importance}(node) = \frac{m_0(node) \times \text{position\_weight}}{\sum_{n \in path} m_0(n)} \]

Where:
  • m₀ (mass): concept frequency/importance from the lexicon (0-20 scale)
  • position_weight: higher for concepts in central path positions (avoids bias toward the start/goal endpoints)
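
A sketch of this computation in Rust, assuming a simple positional-weight profile that peaks at the centre of the path; the actual xai.rs implementation may use a different weighting or rescale the scores. The critical-node cut-off anticipated here is described in the next subsection.

/// Frequency-weighted positional attribution, following the formula above.
fn attribution_scores(masses: &[f64]) -> Vec<f64> {
    let total_mass: f64 = masses.iter().sum();
    let n = masses.len();
    masses
        .iter()
        .enumerate()
        .map(|(i, m0)| {
            // Positional weight: highest for central positions,
            // lower at the start/goal endpoints (assumed profile).
            let t = if n > 1 { i as f64 / (n - 1) as f64 } else { 0.5 };
            let position_weight = 1.0 - (t - 0.5).abs(); // 0.5 at ends, 1.0 at centre
            (m0 * position_weight) / total_mass
        })
        .collect()
}

/// Nodes with importance above 0.8 are flagged as critical (see Critical Nodes below).
fn critical_nodes(scores: &[f64]) -> Vec<usize> {
    scores
        .iter()
        .enumerate()
        .filter(|(_, s)| **s > 0.8)
        .map(|(i, _)| i)
        .collect()
}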

Critical Nodes

Nodes with importance > 80% are marked as critical – removing or weakening these nodes would significantly alter the reasoning outcome.

Visualization Example

In the saliency map visualization:
  • 🔴 Red nodes (High importance: 0.8-1.0): Core concepts that anchor the semantic path
  • 🟠 Orange nodes (Medium-High: 0.6-0.8): Supporting concepts that strengthen the connection
  • 🟡 Yellow nodes (Medium: 0.4-0.6): Contextual concepts that provide nuance
  • 🟢 Green/Blue nodes (Low: 0.0-0.4): Peripheral concepts with minimal impact

Edge thickness reflects the contribution weight (thicker = stronger semantic relationship).
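
As a sketch, the colour banding and edge scaling described above could be implemented as follows; the hex values and the linear stroke-width scaling are illustrative assumptions, not the palette actually used by visualization.rs:

/// Map an attribution score to the colour bands used in the saliency map.
fn importance_color(score: f64) -> &'static str {
    match score {
        s if s >= 0.8 => "#d32f2f", // red: core concepts
        s if s >= 0.6 => "#f57c00", // orange: supporting concepts
        s if s >= 0.4 => "#fbc02d", // yellow: contextual concepts
        _ => "#43a047",             // green/blue: peripheral concepts
    }
}

/// Edge stroke width proportional to contribution weight (assumed scaling).
fn edge_stroke_width(contribution: f64) -> f64 {
    1.0 + 4.0 * contribution.clamp(0.0, 1.0)
}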

Metrics Displayed

  • Critical Nodes: Count of nodes with importance > 0.8
  • Total Nodes: Full path length
  • Average Importance Score: Mean attribution across all nodes
  • Attribution Method: Frequency-weighted positional attribution

3. Uncertainty Quantification

GDS decomposes uncertainty into two independent components, following the standard epistemic/aleatoric distinction used in uncertainty estimation:

Types of Uncertainty

📚 Epistemic Uncertainty (Knowledge Limits)

  • Definition: Uncertainty due to incomplete knowledge or missing data in the graph
  • Calculation: Variability in path costs across alternative routes (standard deviation)
  • Interpretation: High epistemic uncertainty suggests the system needs more data or connections
  • Visual Encoding: Blur effect on nodes (more blur = higher epistemic uncertainty)
  • Thresholds: Low (0-15%), Medium (15-20%), High (>20%)

🎲 Aleatoric Uncertainty (Data Noise)

  • Definition: Inherent randomness in concept relationships (cannot be reduced by adding more data)
  • Calculation: Based on concept frequency stability and edge weight variance
  • Interpretation: High aleatoric uncertainty indicates ambiguous or context-dependent concepts
  • Visual Encoding: Stripe pattern overlay (more stripes = higher aleatoric uncertainty)
  • Thresholds: Low (0-15%), Medium (15-20%), High (>20%)

📊 Total Uncertainty

  • Combination: \(\text{Total} = \sqrt{\text{Epistemic}^2 + \text{Aleatoric}^2}\)
  • Visual Encoding: Pulse animation for nodes with total uncertainty > 30%
  • Accessibility: Animation respects prefers-reduced-motion system setting
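
A sketch of this decomposition, assuming the spread of alternative path costs is later normalised to a 0-1 fraction and the aleatoric term is precomputed from frequency stability and edge-weight variance:

/// Epistemic uncertainty: standard deviation of the costs of alternative
/// routes (normalisation to a 0-1 fraction is omitted here for brevity).
fn epistemic_uncertainty(alternative_costs: &[f64]) -> f64 {
    let n = alternative_costs.len() as f64;
    if n < 2.0 {
        return 0.0;
    }
    let mean = alternative_costs.iter().sum::<f64>() / n;
    let variance = alternative_costs.iter().map(|c| (c - mean).powi(2)).sum::<f64>() / n;
    variance.sqrt()
}

/// Total uncertainty combines the two components in quadrature.
fn total_uncertainty(epistemic: f64, aleatoric: f64) -> f64 {
    (epistemic.powi(2) + aleatoric.powi(2)).sqrt()
}

/// Nodes above 30% total uncertainty receive the pulse animation.
fn needs_pulse(epistemic: f64, aleatoric: f64) -> bool {
    total_uncertainty(epistemic, aleatoric) > 0.30
}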

Confidence Interval (95% CI)

The system reports a 95% confidence interval for path quality:
  • Narrow interval (e.g., [40%-45%]): High certainty, reliable answer
  • Wide interval (e.g., [23%-57%]): Low certainty, answer is tentative

Example: CI [23% - 57%] with width 34% indicates high uncertainty – alternative paths exist with similar costs, making the choice sensitive to small variations.
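
A small sketch of how the interval width could be interpreted; the 10% cut-off separating a "narrow" from a "wide" interval is an assumption for illustration, while the example values come from the text above:

/// Interpret the width of the 95% confidence interval for path quality.
fn interpret_ci(lower: f64, upper: f64) -> &'static str {
    let width = upper - lower;
    if width <= 0.10 {
        "narrow interval: high certainty, reliable answer"
    } else {
        "wide interval: low certainty, answer is tentative"
    }
}

fn main() {
    // Example from the text: CI [23%, 57%] has width 34% and is tentative.
    println!("{}", interpret_ci(0.23, 0.57));
}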


4. Interactive Visualizations

GDS provides three HTML-based visualizations optimized for research transparency and public communication:

🎯 Try the Live Demos

Explore real XAI visualizations generated from actual reasoning queries. All demos are fully interactive with export capabilities (PDF/SVG).

🗺️ Saliency Map (Node & Edge Importance)

Purpose: Show which concepts and relationships drive the reasoning outcome.

Features:
  • Color-coded nodes by importance (red = critical, blue = peripheral)
  • Edge thickness proportional to contribution weight
  • Interactive tooltips showing:
    • Concept lemma and importance score
    • Frequency (m₀) value
    • Uncertainty breakdown (epistemic, aleatoric, total)
  • Uncertainty glyphs (blur, stripes, pulse)
  • Responsive SVG with semantic HTML landmarks

Use Case: Understand which concepts are most influential in connecting two ideas.


🔄 Counterfactual Scenarios (What-If Analysis)

Purpose: Explore how reasoning would change under different conditions.

Scenarios Generated:
  1. Remove Expensive Edge (e.g., “Remove edge 127→2, cost 1.04”)
     • Shows impact of eliminating a weak connection
     • Displays alternative path cost comparison
  2. Use Alternative Path (e.g., “Use runner-up path, +0.80 cost”)
     • Reveals second-best reasoning route
     • Quantifies decision margin

Metrics Displayed:
  • Impact %: How much the counterfactual affects path cost (36% = major change, 28% = moderate; see the sketch at the end of this subsection)
  • Original vs Alternative: Side-by-side comparison of path costs
  • Confidence Interval: Shows uncertainty range for the current path

Contrastive Explanation:
“Alternative paths exist with similar costs. The choice is sensitive to small variations.”
→ This indicates low decision margin – the system’s choice is not strongly preferred.
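
A sketch of how a counterfactual scenario and its impact percentage might be represented; the impact formula (cost delta relative to the original path cost) is inferred from the 36%/28% figures above and should be treated as an assumption:

/// Illustrative record of a single what-if scenario.
struct CounterfactualScenario {
    description: String,
    original_cost: f64,
    alternative_cost: f64,
}

impl CounterfactualScenario {
    /// Impact of the counterfactual as a percentage of the original cost.
    fn impact_pct(&self) -> f64 {
        ((self.alternative_cost - self.original_cost).abs() / self.original_cost) * 100.0
    }
}

fn main() {
    // "Use runner-up path, +0.80 cost" against the original cost of 2.844
    // yields roughly 28% impact, matching the example above.
    let alt = CounterfactualScenario {
        description: "Use alternative path".to_string(),
        original_cost: 2.844,
        alternative_cost: 2.844 + 0.80,
    };
    println!("{}: impact {:.0}%", alt.description, alt.impact_pct());
}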


📊 Complete XAI Dashboard

Purpose: Unified view of all explainability features in a single page.

Sections:
  1. Meta-Cognition Chain (4-step reasoning process)
  2. Saliency Map (embedded SVG with uncertainty glyphs)
  3. Attribution Metrics (critical nodes, average score, method)
  4. Uncertainty Quantification (epistemic, aleatoric, total, 95% CI)
  5. Counterfactual Scenarios (2 alternative scenarios)
  6. Contrastive Explanation (why this path?)

Export Options:
  • PDF Export (A4 format, 2× quality, metadata preserved)
  • SVG Download (vector format, preserves filters and patterns)


5. Accessibility & Standards Compliance

All visualizations are designed with accessibility-first principles:

WCAG 2.1 AA Compliance

0 violations (validated with axe-core 4.11.0)

Accessibility Features:
  • Semantic HTML: Proper <nav>, <main>, <section> landmarks
  • ARIA Labels: All interactive elements have descriptive labels
    • Example: aria-label="Node: go_to_barber, Importance: 100%, Epistemic Uncertainty: 0%, Aleatoric Uncertainty: 4%"
  • Color Contrast: All text meets the 4.5:1 minimum ratio (AA standard)
    • Updated button colors from #667eea (4.0:1) → #5a67d8 (4.57:1)
  • Keyboard Navigation: Full support for tab navigation and focus indicators
  • Screen Reader Support: All visualizations include text alternatives
  • Reduced Motion: Pulse animations respect the prefers-reduced-motion system preference
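
As an illustration, the aria-label shown above could be assembled like this (the function name and signature are hypothetical, not part of the actual generator):

fn node_aria_label(lemma: &str, importance: f64, epistemic: f64, aleatoric: f64) -> String {
    // Percentages are rounded to whole numbers, matching the example label.
    format!(
        "Node: {lemma}, Importance: {imp:.0}%, Epistemic Uncertainty: {epi:.0}%, Aleatoric Uncertainty: {alea:.0}%",
        imp = importance * 100.0,
        epi = epistemic * 100.0,
        alea = aleatoric * 100.0
    )
}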

Research Contribution: Uncertainty Glyphs

GDS introduces novel visual encodings for uncertainty representation:

  • Epistemic Blur (SVG <filter> with Gaussian blur, 3 levels)
  • Aleatoric Stripes (SVG <pattern> with diagonal lines, 3 densities)
  • High Uncertainty Pulse (CSS @keyframes animation with accessibility override)

These glyphs allow dual-channel encoding: color represents importance, while visual effects represent uncertainty – enabling richer information density without sacrificing clarity.
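
A sketch of how these SVG defs could be emitted from Rust; the specific stdDeviation levels, stripe spacings, and id naming are illustrative assumptions rather than the values used by visualization.rs:

/// Gaussian-blur filter encoding epistemic uncertainty (3 levels).
fn epistemic_blur_filter(level: u8) -> String {
    let std_dev = match level {
        1 => 0.8,
        2 => 1.6,
        _ => 2.4, // strongest of the three blur levels
    };
    format!(r#"<filter id="blur-{level}"><feGaussianBlur stdDeviation="{std_dev}"/></filter>"#)
}

/// Diagonal-stripe pattern encoding aleatoric uncertainty (3 densities).
fn aleatoric_stripe_pattern(density: u8) -> String {
    let spacing = match density {
        1 => 8,
        2 => 5,
        _ => 3, // densest stripes for the highest aleatoric level
    };
    format!(
        r#"<pattern id="stripes-{density}" width="{spacing}" height="{spacing}" patternUnits="userSpaceOnUse" patternTransform="rotate(45)"><line x1="0" y1="0" x2="0" y2="{spacing}" stroke="black" stroke-width="1"/></pattern>"#
    )
}

The pulse effect itself lives in CSS (@keyframes), wrapped in a prefers-reduced-motion media query so the animation is disabled for users who request reduced motion, as noted above.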


6. Technical Implementation

Core Technologies

  • Language: Rust (for performance and memory safety)
  • Graph Representation: Custom SVG generation with physics-inspired layout
  • Export Libraries:
    • jsPDF 2.5.2 (PDF generation)
    • html2canvas 1.4.1 (HTML-to-canvas rendering)
  • Validation: axe-core 4.11.0 (accessibility testing)

File Structure

src/reasoner/
├── visualization.rs       # HTML/SVG generators (1,315 lines)
│   ├── SaliencyMapGenerator
│   ├── CounterfactualUIGenerator
│   └── XAIDashboard
├── xai.rs                 # Attribution & counterfactual logic (605 lines)
├── metacognition.rs       # Confidence chain builder (354 lines)
└── mod.rs                 # Main reasoner with XAI integration (599 lines)

Example Usage

use gds_core::reasoner::{Reasoner, SaliencyMapGenerator};

// Run a query with full XAI instrumentation: the reasoner returns the
// optimal path plus the meta-cognition chain and attribution/uncertainty data.
let reasoner = Reasoner::new(&graph, &overlay, params);
let (path, metacog, xai) = reasoner.reason_with_xai(start, goal, &constraints)?;

// Generate saliency map visualization
let generator = SaliencyMapGenerator::new();
let html = generator.generate_html(
    &path,
    &xai.attribution,
    &xai.uncertainty,
    "Query: concept_A → concept_B",
);

// Write a standalone HTML file that can be opened directly in a browser.
std::fs::write("saliency_map.html", html)?;

Generated Output

Three standalone HTML files (no external dependencies):
  • saliency_map.html (node importance + uncertainty glyphs)
  • counterfactuals.html (what-if scenarios + contrastive explanation)
  • dashboard.html (unified XAI view with all features)


Summary

GDS’s explainability framework addresses the black box problem through:

  1. Transparent Reasoning: Graph paths are inherently interpretable
  2. Confidence Quantification: Explicit high/low/rejection levels with justifications
  3. Uncertainty Decomposition: Separate epistemic (knowledge) from aleatoric (noise) uncertainty
  4. Attribution Analysis: Identify which concepts drive outcomes
  5. Counterfactual Exploration: Test “what if” scenarios to validate decisions
  6. Accessible Visualizations: WCAG 2.1 AA compliant, professional-quality outputs

This makes GDS suitable for research transparency, regulatory compliance, and public communication – domains where decision justification is not optional but mandatory.



"