Human Reference Layer (HRL)
The Human Reference Layer (HRL) is HCI’s canonical framework for organising measurement of human experience in AI-mediated digital environments. It defines stable domains, assigns explicit construct roles, and specifies the logic that links datasets and summaries while preserving longitudinal comparability.
Status: Canonical framework (public-facing). Version: v1.0. Registry alignment: HCI Construct Registry (internal).
Purpose
The Human Reference Layer (HRL) provides the stable measurement architecture underpinning HCI’s dataset library. It defines the core domains of human experience in AI-mediated environments and establishes how constructs function across time — as anchors, spines, or extensions. By maintaining this reference layer, HCI ensures that individual datasets remain part of a coherent, longitudinal system rather than isolated studies.
Design principles
- Stability over novelty: core signals remain comparable across time.
- Human interpretability: constructs are legible without specialist training.
- Machine readability: the framework supports structured linking and retrieval.
- Measurement humility: observation and interpretation are kept distinct.
- Longitudinal readiness: construct roles are explicit (Anchor / Spine / Extension).
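The machine-readability and longitudinal-readiness principles imply that each construct can be expressed as a structured record with a stable identifier, a human-legible label, a domain, and an explicit role. The sketch below illustrates one possible shape for such a record; the field names and the `Construct`/`Role` types are illustrative assumptions, not part of the HRL specification, and the example entries are drawn from the domain listings later in this document.

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    """The three longitudinal construct roles named by the HRL."""
    ANCHOR = "anchor"        # stable reference measure across time
    SPINE = "spine"          # repeatedly tracked behavioural/interpretive signal
    EXTENSION = "extension"  # topical measure, addable without destabilising the core

@dataclass(frozen=True)
class Construct:
    """One HRL construct: a machine-linkable id plus a human-legible label."""
    construct_id: str  # stable identifier used for structured linking and retrieval
    label: str         # plain-language name, legible without specialist training
    domain: str        # one of the four core HRL domains
    role: Role

# Example entries, taken from the domain listings in this document.
REGISTRY = [
    Construct("agency", "Agency", "Agency & Decision Autonomy", Role.ANCHOR),
    Construct("decision_dependence", "Decision dependence",
              "Agency & Decision Autonomy", Role.SPINE),
    Construct("trust_calibration", "Trust calibration",
              "Trust & Epistemic Stability", Role.SPINE),
]

# Structured retrieval: the longitudinally comparable constructs in one domain.
comparable = [c.construct_id for c in REGISTRY
              if c.domain == "Agency & Decision Autonomy"
              and c.role in (Role.ANCHOR, Role.SPINE)]
print(comparable)  # ['agency', 'decision_dependence']
```

Keeping the record frozen (immutable) mirrors the stability-over-novelty principle: a construct's identity and role do not change once registered.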
Core HRL domains
Each domain below is linked to the canonical construct definitions on the Construct Framework.
Agency & Decision Autonomy
Decision ownership, intervention thresholds, delegation tendencies, and responsibility clarity in AI-supported contexts.
Trust & Epistemic Stability
Trust calibration, epistemic confidence, and perceived risk in environments shaped by AI media and uncertain sources.
Attention & Cognitive Load
Sustained attention, fragmentation, overload, and perceived cognitive strain in high-interruption environments.
Values & Meaning
Values–behaviour coherence, meaning coherence, and identity stability as the human stabilisers of accelerated digital life.
Agency & Decision Autonomy
This domain tracks perceived decision ownership and the tendency to defer or delegate decisions to AI systems. It captures both independence signals and behavioural dependence signals.
Core constructs in this domain
- agency (Anchor)
- decision_dependence (Spine)
- responsibility_attribution (Anchor)
Where you’ll see it
- Dataset pages: “HRL domain” row links here.
- Data summaries: domain attribution + construct links.
- Longitudinal tracking: comparability via Anchor/Spine roles.
Trust & Epistemic Stability
This domain measures how people allocate trust and how confident they feel in judging reliability and reality in AI-mediated environments. It includes perceived risk as a behavioural and interpretive driver.
Core constructs in this domain
- trust_calibration (Spine)
- epistemic_confidence (Spine)
- risk_perception (Spine)
Attention & Cognitive Load
This domain captures attention stability and cognitive strain as the operating conditions of modern digital life. It treats overload and fragmentation as first-order signals rather than secondary outcomes.
Core constructs in this domain
- attention_capacity (Anchor)
- cognitive_load (Anchor)
Values & Meaning
This domain captures the human stabilisers: coherence of meaning, stability of identity, and alignment between values and behaviour. It treats coherence as measurable and trackable over time, not merely philosophical.
Core constructs in this domain
- behavioural_alignment (Anchor)
- meaning_coherence (Anchor)
- identity_stability (Anchor)
Longitudinal architecture
The HRL assigns each construct an explicit role:
- Anchor constructs provide stable reference measures across time.
- Spine constructs enable repeated tracking of key behavioural and interpretive signals.
- Extensions can be added without destabilising the core, enabling topical measurement as AI evolves.
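The role rules above can be read as a comparability check between measurement waves: Anchor and Spine constructs carried in one wave must reappear in the next, while Extensions may be added or dropped freely. A minimal sketch of that check follows; the function name, the `ROLES` mapping, and the hypothetical extension construct `ai_companion_attachment` are illustrative assumptions, and the authoritative permanence rules live in the Construct Registry.

```python
# Roles for a few constructs; "ai_companion_attachment" is a hypothetical
# topical extension, not a construct defined by this document.
ROLES = {
    "agency": "anchor",
    "decision_dependence": "spine",
    "ai_companion_attachment": "extension",
}

def missing_core(previous_wave: set, next_wave: set, roles: dict = ROLES) -> set:
    """Return any core (Anchor/Spine) constructs dropped between two waves.

    An empty result means longitudinal comparability is preserved, even if
    Extension constructs were swapped in or out.
    """
    core = {c for c in previous_wave if roles.get(c) in ("anchor", "spine")}
    return core - next_wave

wave_1 = {"agency", "decision_dependence", "ai_companion_attachment"}
wave_2 = {"agency", "decision_dependence"}  # extension dropped, core retained

print(missing_core(wave_1, wave_2))  # set() -> core intact, comparability preserved
```

Dropping an Anchor or Spine would surface here as a non-empty set, flagging a break in the longitudinal record before a new dataset wave is published.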
Construct governance and permanence rules are defined in the Construct Registry (authoritative source).