GLOSSARY

A

Acquisition Illusion

The convincing feeling during AI-assisted learning that genuine capability relationships formed—when structure in fact remains fragmented or borrowed despite acquisition feeling authentic. Emerges because acquisition genuinely feels like learning even when persistent structure never forms. You engage material through AI, understand explanations, complete tasks successfully—all genuine experiences. But the relationships enabling independent transfer may be provided temporarily by the AI rather than internalized permanently, and only temporal testing when assistance is absent reveals which occurred. Not user error but an information-theoretic property of assisted learning. See also Structural Illusion.

Assisted Performance

Task completion enabled by AI assistance where performance quality does not indicate genuine capability internalization. Assisted performance appears identical to independent mastery during activity but collapses when assistance is removed. Distinguished from genuine capability through independence testing: either structure exists enabling performance alone, or performance required continuous assistance, revealed by collapse when support is removed. Critical for understanding why completion metrics fail when AI makes perfect outputs achievable without requiring understanding from the individuals being assisted.

B

Baseline

The initial mapping of capability relationships before a learning intervention, establishing what structure existed prior to the learning event. Baseline measurement enables the comparison that determines what structural changes occurred: which edges formed, which relationships strengthened, what transfer capabilities emerged. Without a baseline, verification cannot distinguish learning from pre-existing capability. The baseline establishes a falsifiable comparison: test whether structure demonstrably changed in ways enabling new transfer that the baseline showed was absent before the intervention.
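
As an illustration only, not part of the protocol's normative definition, a baseline comparison can be sketched as a diff between the sets of verified edges before and after an intervention; the type and function names below are assumptions.

    // Illustrative sketch only: edge identifiers are hypothetical, not a normative schema.
    type EdgeId = string; // e.g. "derivative->optimization"

    interface BaselineComparison {
      formed: EdgeId[];      // edges present after the intervention but absent at baseline
      preExisting: EdgeId[]; // edges already present at baseline (not evidence of new learning)
    }

    function compareToBaseline(baseline: Set<EdgeId>, postIntervention: Set<EdgeId>): BaselineComparison {
      const formed: EdgeId[] = [];
      const preExisting: EdgeId[] = [];
      for (const edge of postIntervention) {
        (baseline.has(edge) ? preExisting : formed).push(edge);
      }
      return { formed, preExisting };
    }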

Borrowed Structure

Capability relationships provided by AI assistance that enable immediate performance but never internalize in human cognition. Borrowed structure vanishes when assistance is removed, revealed through independence testing showing performance collapse. Distinguished from genuine structure through temporal verification: borrowed structure requires continuous access while genuine structure persists independently. Critical concept for understanding AI capability crisis—perfect performance can emerge from borrowed structure while learning remains zero.

C

Capability

The persistent structure of relationships that produces performance when assistance is absent—not skills possessed but relational topology enabling independent transfer. Traditional definitions treat capability as skills or knowledge you possess. Learning Graph defines it structurally: the network of relationships between concepts enabling independent performance. You can possess every fact yet lack capability if relationships enabling transfer never formed. This structural definition makes capability falsifiable: either coherent topology exists enabling independent performance, or relationships remain fragmented requiring continuous assistance.

Capability Collapse

What occurs when capability structure fragments or vanishes despite past demonstration of competence—revealed through temporal testing showing performance degradation or independence testing showing dependency. Not skill degradation from disuse but structural fragmentation: relationships enabling transfer either never consolidated or weakened without reinforcement. Capability collapse is diagnostic: it reveals borrowed or temporary structure rather than genuine persistent internalization. Makes collapse predictable and verifiable rather than mysterious performance failure.

Capability Provenance

The verifiable origin and ownership of capability structure tracked through cryptographic attestation. Capability provenance answers: where did this capability come from, when did relationships form, can formation be independently verified, who owns the structural evidence? Binds Learning Graph to Portable Identity enabling cryptographically owned verification records that travel with individuals across all systems. Essential for preventing institutional monopoly over capability determination—individuals own proof of their structural capability rather than institutions controlling verification.
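
A hedged sketch of what a provenance record might look like; the record shape and field names are assumptions, not the Web4 specification. The idea shown is a verification result bound to a holder-owned identifier, referencing evidence by digest so it can be checked independently of any single institution.

    import { createHash } from "node:crypto";

    // Hypothetical record shape; field names are illustrative, not protocol-defined.
    interface ProvenanceRecord {
      edgeClaimId: string;      // which edge claim was verified
      verifiedAt: string;       // ISO timestamp of the independent testing
      verifierId: string;       // identifier of the verifying party
      holderId: string;         // portable identifier owned by the learner
      evidenceDigest: string;   // hash of the verification evidence, not the evidence itself
      holderSignature?: string; // signature produced with the holder's key (omitted in this sketch)
    }

    // Digest evidence so the record can reference it without disclosing it.
    function digestEvidence(evidence: string): string {
      return createHash("sha256").update(evidence).digest("hex");
    }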

Capability Structure

The network of relationships between concepts, methods, and experiences that produces performance when assistance is absent. Not skills possessed or knowledge acquired, but relational topology enabling independent transfer. Capability becomes falsifiable through structural definition: either coherent relationships exist enabling independent performance in novel contexts (verifiable through testing), or structure remains fragmented requiring continuous assistance (revealed when support is removed). This structural definition replaces all traditional capability definitions based on possession or acquisition with a verification-based definition grounded in persistent topology.

D

Definitional Sovereignty

Control over what "capability" and "learning" mean when AI makes performance without learning frictionless. Whoever defines how capability is verified controls what institutions optimize toward, which credentials are legitimate, and whether learning remains distinguishable from performance theater. Learning Graph establishes definitional sovereignty through open protocol before competing proprietary definitions capture verification infrastructure. Ensures measurement remains public protocol accessible to civilization rather than proprietary territory captured by platforms whose revenue depends on verification monopoly.

E

Edge

A falsifiable relationship enabling transfer between nodes—the connections that make knowledge functional rather than inert. Edges represent capability relationships testable through independent performance: does relationship A between concepts X and Y enable transfer to application Z without assistance? Learning happens in edges, not nodes. You can possess every node (know facts, complete topics) while lacking edges (relationships enabling transfer). Edge formation is what distinguishes genuine learning from information exposure—edges persist independently while nodes can exist without enabling any capability.
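
A minimal illustrative sketch, assuming nothing about the protocol's actual schema, of how nodes and edges might be represented so that only edges carry testable transfer claims:

    // Minimal illustrative graph types; not a normative Learning Graph schema.
    interface ConceptNode {
      id: string;        // e.g. "derivative"
      label: string;
    }

    interface CapabilityEdge {
      from: string;      // source node id
      to: string;        // target node id
      transfer: string;  // the transfer this relationship is claimed to enable
      verified: boolean; // whether independent testing has confirmed the relationship
    }

    // Nodes alone assert nothing testable; only edges carry falsifiable transfer claims.
    const nodes: ConceptNode[] = [
      { id: "derivative", label: "Derivative concept" },
      { id: "optimization", label: "Optimization application" },
    ];

    const edges: CapabilityEdge[] = [
      { from: "derivative", to: "optimization", transfer: "solve novel optimization problems", verified: false },
    ];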

Edge Claim

Standard form for falsifiable learning verification: "Node A → Node B under Condition C enables Transfer T at Difficulty D, verified without assistance after Temporal Gap G." Edge claims make capability relationships testable rather than narrative—state what relationship formed, specify transfer enabled by that relationship, define conditions where transfer occurs independently, establish how testing would refute claim. Without edge claims, "edges" remain unfalsifiable assertions. Edge claims transform Learning Graph from conceptual framework to operational infrastructure with machine-readable, legally citable verification standards.
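
Since the entry describes edge claims as machine-readable, one possible encoding is sketched below; the interface and field names are assumptions, not a normative format. The example values mirror the falsifiable claim given under Falsifiability.

    // Assumed encoding of the standard edge-claim form; field names are illustrative.
    interface EdgeClaim {
      nodeA: string;           // source concept, e.g. "derivative concept"
      nodeB: string;           // target concept, e.g. "optimization application"
      condition: string;       // Condition C under which the relationship applies
      transfer: string;        // Transfer T the relationship is claimed to enable
      difficulty: string;      // Difficulty D of the transfer task
      temporalGapDays: number; // Temporal Gap G before independent verification
      refutation: string;      // what observed failure would refute the claim
    }

    const exampleClaim: EdgeClaim = {
      nodeA: "derivative concept",
      nodeB: "optimization application",
      condition: "no AI assistance and no worked examples available",
      transfer: "solve a novel economics optimization problem",
      difficulty: "comparable to the original learning material",
      temporalGapDays: 90,
      refutation: "failure to produce an independent solution at comparable difficulty",
    };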

F

Falsifiability

The requirement that every capability claim must be testable through conditions that would refute it if capability does not exist. Falsifiability makes learning verification scientific: state what structure formed, define how testing would demonstrate presence or absence, establish conditions where claimed capability must function independently. Unfalsifiable claims ("I understand calculus") cannot be verified. Falsifiable claims ("derivative concept → optimization application enables economics problem-solving independently after 3 months") can be tested through conditions revealing whether structure exists or performance was theater.

Fragmented Structure

Capability existing as disconnected knowledge pieces unable to transfer coherently across contexts—nodes present but edges weak or absent. Fragmented structure enables performance in narrow practiced scenarios but collapses when novel situations require adaptation. Revealed through transfer testing: capability works in familiar contexts but fails when situations change requiring genuine relationship-based understanding. Not absence of knowledge but absence of relationships enabling knowledge to function independently across varied applications. Fragmented structure is what AI-assisted completion often creates—nodes acquired, edges never formed.

G

H

I

Independence Testing

Verification that capability relationships function when all assistance is removed—no AI access, no external tools beyond genuine application contexts, no support infrastructure. Independence testing distinguishes genuine internalization from collaboration capability: either structure exists enabling performance alone, or performance required continuous assistance. Essential for preventing Learning Graph from becoming "AI literacy" measurement—it tests not how well you collaborate with AI but whether structure formed that persists without AI. Independence is architectural requirement, not optional enhancement.

Internalization

The process through which capability relationships consolidate into persistent structure enabling independent function—genuine learning creating topology that survives when enabling conditions disappear. Internalization cannot be observed during acquisition, only verified through temporal testing: relationships either consolidated enabling independent transfer months later, or remained temporary vanishing when assistance ended. What makes internalization different from exposure or assisted completion is persistence under testing conditions that destroy performance theater: time passage, assistance removal, transfer requirement, novel contexts.

J

K

L

Learning Graph

Web4’s protocol for temporal verification of capability development through persistent structural relationships—the infrastructure making learning verifiable as coherent topology when AI can generate perfect outputs. Not Knowledge Graph (static information), not Graph Learning (ML technique), not learning analytics (node measurement)—but verification protocol proving genuine capability formation through testing whether relationships persist independently, transfer to novel contexts, and survive temporal separation from enabling conditions. The protocol that makes learning falsifiable when performance observation fails structurally.

M

N

Node

A concept, method, or experience—the discrete elements of knowledge. Nodes are what traditional education measures: topics covered, concepts introduced, skills listed. But learning does not happen in nodes—learning happens in edges connecting nodes. You can possess every node (know every fact, recognize every concept, execute every procedure) and still lack learning if edges never formed. Nodes without edges create fragmented knowledge enabling no independent transfer. Nodes are necessary but insufficient—capability requires edges making nodes functional through relationships.

O

P

Performance Theater

Perfect task completion appearing identical to genuine learning but collapsing when assistance is removed or time passes—the phenomenon Learning Graph exists to detect. Performance theater emerges when AI assistance enables flawless outputs without requiring capability internalization: students complete assignments perfectly while building no persistent structure, professionals generate expert work while losing independent problem-solving capacity. Only structural testing distinguishes performance theater from genuine learning because both produce identical observable outputs during assisted activity. Performance theater cannot survive temporal verification, which reveals whether structure persisted independently.

Persistent Structure

Coherent topology of capability relationships that survives temporal separation, functions independently when assistance is removed, and transfers to novel contexts. Structure becomes persistent when relationships consolidate enabling independent performance regardless of conditions that created them. Temporary structure collapses when time passes. Borrowed structure vanishes when assistance ends. Context-specific structure fails when situations change. Only genuine internalization creates structure surviving all three conditions. The structural signature distinguishing genuine learning from performance theater—it cannot be faked because it requires consolidation testable through conditions that destroy borrowed or temporary structure.

Persistent Topology

Coherent pattern of capability relationships that survives temporal separation, functions independently when assistance is removed, and transfers to novel contexts. Topology becomes persistent when relationships consolidate enabling independent performance regardless of conditions that created them. Temporary topology collapses when time passes. Borrowed topology vanishes when assistance ends. Context-specific topology fails when situations change. Only genuine internalization creates topology surviving all three simultaneously. The structural signature distinguishing genuine learning from performance theater—persistent topology cannot be faked because it requires consolidation in human cognition testable through conditions destroying borrowed or temporary structure.

Q

R

S

Structural Illusion

The convincing feeling during AI-assisted learning that genuine capability relationships formed—when structure in fact remains fragmented or borrowed despite acquisition feeling authentic and performance appearing flawless. Emerges because acquisition genuinely feels like learning even when persistent structure never forms. Only temporal testing reveals the truth: when assistance ends, independent testing either demonstrates persistent structure or reveals fragmentation. An information-theoretic property of assisted learning: no self-monitoring during acquisition distinguishes borrowed from internalized structure—both enable successful performance in the moment, both feel authentic, both generate satisfaction. Only structural testing when assistance is absent reveals which occurred.

Structural Verification

Testing whether genuine capability relationships formed by measuring whether coherent topology emerged that enables independent transfer, survives temporal separation, and demonstrates falsifiable patterns. Examines structure directly rather than inferring from performance: map baseline relationships, implement learning intervention, enforce temporal separation, remove all assistance, test at comparable difficulty in novel contexts, compare to baseline. If coherent relationships formed enabling independent transfer—learning occurred. If structure remained fragmented—performance theater. Distinguishes Learning Graph from assessment (observes performance) and analytics (tracks activity)—verification tests structure directly through falsifiable patterns.
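
The procedure enumerated above can be read as a pipeline. The sketch below is an assumed illustration of that control flow, with placeholder inputs standing in for real measurement; it is not an implementation of the protocol.

    // Illustrative pipeline skeleton; inputs and names are hypothetical placeholders.
    interface VerificationResult {
      learningOccurred: boolean;
      reason: string;
    }

    function verifyStructurally(
      baselineEdges: Set<string>,
      postEdges: Set<string>,
      temporalGapDays: number,
      minimumGapDays: number,
      independentTransferPassed: boolean,
    ): VerificationResult {
      // 1. Temporal separation must be enforced before testing counts as verification.
      if (temporalGapDays < minimumGapDays) {
        return { learningOccurred: false, reason: "temporal separation not yet satisfied" };
      }
      // 2. Compare structure to the documented baseline: which relationships are new?
      const newEdges = Array.from(postEdges).filter((e) => !baselineEdges.has(e));
      if (newEdges.length === 0) {
        return { learningOccurred: false, reason: "no structural change relative to baseline" };
      }
      // 3. New relationships must support independent transfer in a novel context.
      if (!independentTransferPassed) {
        return { learningOccurred: false, reason: "structure fragmented or borrowed: transfer failed without assistance" };
      }
      return { learningOccurred: true, reason: "new edges verified: " + newEdges.join(", ") };
    }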

T

Temporal Persistence

The property of capability structure surviving when tested after time has passed—remaining functional despite memory decay, context changes, and absence of optimization pressure. Temporal persistence distinguishes genuine internalization from temporary retention: cramming creates activation patterns collapsing within days, genuine learning creates structural relationships surviving months. Cannot be faked because time cannot be compressed—either relationships consolidated in human cognition enabling independent function after temporal separation, or they remained borrowed requiring continuous assistance revealed when testing occurs months later without support available.

Temporal Separation

The time gap between learning event and verification testing—weeks or months allowing temporary structures to collapse while persistent structures consolidate. Temporal separation makes persistence testable: cramming collapses within days, AI-assisted completion vanishes when assistance ends, shallow exposure disappears, only deeply internalized relationships survive memory decay and context changes. Cannot be compressed or eliminated—time reveals what was always true about whether genuine structure formed. Architectural requirement: without temporal separation, verification measures momentary activation, not persistent capability. Testing immediately measures retention; testing after temporal separation measures structure.
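
A minimal sketch of the temporal gate, assuming a configurable minimum gap; the 90-day default is an illustrative assumption, not a protocol constant.

    // Illustrative gate only; the minimum gap is an assumed parameter.
    function temporalSeparationSatisfied(learnedAt: Date, testedAt: Date, minimumGapDays = 90): boolean {
      // The gap cannot be compressed: only elapsed time reveals whether structure persisted.
      const msPerDay = 24 * 60 * 60 * 1000;
      const gapDays = (testedAt.getTime() - learnedAt.getTime()) / msPerDay;
      return gapDays >= minimumGapDays;
    }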

Temporal Verification

Proving learning occurred through testing capability persistence after time has passed when assistance is removed and contexts have changed. Time becomes the verification dimension because AI cannot compress consolidation: memory either consolidated surviving months or remained temporary, capability either persists independently or collapses without assistance, understanding either transfers to novel contexts or remains context-bound. Temporal verification makes learning falsifiable—wait months, remove assistance, test independently: capability either survives revealing genuine internalization or collapses revealing performance theater. Not delayed assessment or spaced repetition but protocol for making structure formation verifiable through patterns only time reveals.

Topology

The whole pattern of edges that persists independently—the structural signature of genuine learning. Topology describes how capability relationships connect, strengthen, and enable transfer across the entire network rather than isolated connections. Coherent topology enables independent problem-solving across novel contexts because relationships form interconnected structure adapting to unexpected situations. Fragmented topology enables only narrow performance because connections remain isolated requiring assistance for each application. Topology is what Learning Graph verifies—not whether individual facts were acquired but whether coherent relational structure formed enabling genuine capability.

Topology Collapse

What occurs when assistance is removed or time passes and no persistent capability structure remains—performance that appeared perfect during assisted activity vanishes completely when tested independently. Topology collapse is diagnostic, not moral: it reveals borrowed or temporary structure rather than genuine internalization, testable through conditions destroying performance theater while genuine topology survives. Critical concept because it names the failure mode traditional assessment cannot detect: perfect completion during education followed by total capability absence when graduates enter the workforce. Makes collapse verifiable and predictable rather than mysterious institutional failure.

Transfer Failure

When capability structure cannot generalize to novel contexts despite perfect performance in the original contexts—revealing memorization or context-specific patterns rather than genuine understanding. Transfer failure is diagnostic: structure exists but remains narrow, unable to adapt when situations change unpredictably. Distinguished from topology collapse (no structure remains) and independence failure (structure requires assistance)—transfer failure means structure persists and functions independently but lacks the coherence enabling generalization. Essential for preventing "I learned calculus" claims when capability works only on practiced problem types, failing when novel applications require genuine transfer.

Transfer Validation

Testing whether capability relationships generalize to novel contexts never practiced—proving understanding rather than memorization. Transfer validation reveals whether structure enables adaptation: present problems requiring capability application in situations different from where learning occurred, measure whether performance succeeds independently. If transfer occurs—genuine understanding exists as general relationships. If transfer fails—capability remains context-specific memorization. Architectural requirement: without transfer validation, verification measures pattern matching not genuine capability structure. Only relationships enabling transfer across varied contexts demonstrate internalization forming general understanding.

U

V

Verification Invariants

Architectural requirements that cannot be negotiated, bypassed, or redefined—conditions Learning Graph verification must satisfy to function as universal standard. Invariants include: structural representation (capability as network not list), relationship falsifiability (every edge testable), temporal persistence (testing after time passes), independence verification (all assistance removed), transfer validation (generalization to novel contexts), cross-institutional interoperability (works across all systems). Violating any invariant produces measurement of something other than learning structure. Protects against "lightweight implementations" claiming to be Learning Graph while eliminating conditions making verification unfakeable.
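
The invariants can be expressed as a checklist in which any single failure disqualifies the result. The field names below are assumptions mirroring the list above, not a normative schema.

    // Assumed checklist form of the invariants; names are illustrative, not normative.
    interface InvariantChecks {
      structuralRepresentation: boolean;           // capability represented as a network, not a list
      relationshipFalsifiability: boolean;         // every edge has a stated refutation condition
      temporalPersistence: boolean;                // testing occurred after the required time gap
      independenceVerification: boolean;           // all assistance removed during testing
      transferValidation: boolean;                 // tested in contexts never practiced
      crossInstitutionalInteroperability: boolean; // records usable across systems
    }

    function satisfiesAllInvariants(checks: InvariantChecks): boolean {
      // Violating any single invariant means the result measures something other than learning structure.
      return Object.values(checks).every(Boolean);
    }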

Verified Learning

Capability that survives structural testing: demonstrably persists when tested independently after time passes, functions without assistance, transfers to novel contexts, and leaves falsifiable structural signatures distinguishing genuine internalization from borrowed performance. Verified learning distinguishes genuine capability formation from performance theater AI creates: temporary activation that collapses when time passes, assisted performance requiring continuous AI access, fragmented memorization failing to transfer. Verification prevents claiming learning through self-report or completion certificates. Capability must survive independent structural testing: documented baseline, temporal separation, independent testing at comparable difficulty, transfer validation. Verified learning is what genuine internalization creates: persistent topology enabling independent transfer—the only pattern performance theater cannot achieve.

W

X

Y

Z