Human Signal Lab
Rapid, machine-ready human experience data for AI systems
The Human Signal Lab is a research capability within the Human Clarity Institute (HCI) that designs and deploys structured behavioural surveys to measure human internal-state signals in the context of AI and digital systems.
The Lab supports AI research, product development, evaluation, and governance by producing clean, defensible, machine-readable datasets that capture how AI systems affect people, rather than how models perform in isolation.
Why the Human Signal Lab exists
AI systems are commonly evaluated using annotations, preference rankings, telemetry, and output-based benchmarks. While these approaches are valuable, they do not reliably capture human internal experience.
Many of the most important questions facing AI teams today cannot be answered by model outputs alone. These include:
- Do users trust this system appropriately?
- Does this interaction increase or reduce human agency?
- Are people becoming more reliant on AI when decisions feel difficult?
- Does the system improve clarity, or does it create overconfidence?
- Do users perceive alignment between AI behaviour and their personal values?
The Human Signal Lab exists to provide direct measurement of these human signals in a format suitable for technical, product, safety, and governance decision-making.
A defining feature of the Human Signal Lab is that new datasets are not produced in isolation. Wherever possible, measurements are designed to align with HCI’s existing and growing longitudinal datasets on human focus, trust, decision-making, values, and digital experience. This allows results from a specific engagement to be interpreted against established baselines, population norms, and historical trends, rather than as one-off snapshots.
For teams working on AI systems, this comparability is often more valuable than any single dataset on its own. It enables changes to be understood in context, over time, and relative to broader human patterns.
What is measured
The Human Signal Lab focuses on aspects of human experience that are difficult to infer from system logs or output-based evaluation alone. These include how people feel, decide, and adapt when interacting with AI systems over time.
Signals commonly measured include:
- Trust and trust calibration
- Clarity and confidence
- Decision reliance and offloading
- Perceived agency
- Values alignment
- Emotional resonance
- Cognitive load and digital fatigue
- Behavioural shifts over time
How this differs from annotation and evaluation data
Annotation and evaluation workflows typically ask humans to judge, rank, or score AI outputs. The Human Signal Lab instead measures human internal states directly.
Annotation data addresses whether an output is good, safe, or preferable. Human signal data addresses what an AI system does to the person interacting with it.
This distinction is critical for understanding downstream human impact, behavioural change, and alignment risks that may not be visible through model-centric metrics alone.
Data format and delivery
Human Signal Lab outputs are delivered as machine-ready research datasets. These typically include:
- Cleaned CSV files
- Variable dictionaries and metadata
- Consistent naming conventions
- Clear methodological notes
Datasets are designed to integrate directly into internal analysis pipelines, dashboards, evaluation frameworks, and research repositories.
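As an illustration of what consuming such a delivery might look like, the sketch below reads a cleaned CSV alongside a variable dictionary using only standard tooling. The file contents, column names, scales, and values here are hypothetical examples, not taken from an actual Lab dataset:

```python
import csv
import io

# Hypothetical example of a cleaned, machine-ready CSV delivery.
# Column names and values are illustrative only.
survey_csv = io.StringIO(
    "respondent_id,wave,trust_calibration,perceived_agency\n"
    "r001,1,3.8,4.2\n"
    "r002,1,2.9,3.5\n"
)

# A variable dictionary maps each column to its meaning and scale,
# mirroring the metadata files described above (illustrative).
variable_dictionary = {
    "trust_calibration": "Self-reported trust calibration, 1-5 Likert scale",
    "perceived_agency": "Perceived agency during the interaction, 1-5 Likert scale",
}

# Load rows and compute a simple summary statistic for one signal.
rows = list(csv.DictReader(survey_csv))
mean_trust = sum(float(r["trust_calibration"]) for r in rows) / len(rows)
print(round(mean_trust, 2))  # → 3.35
```

Because the CSV and its variable dictionary travel together, the same pattern extends directly to analysis pipelines or dashboards that need column semantics alongside the raw values.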
Engagement model
Human Signal Lab engagements are limited and selective. They typically involve a clearly defined measurement question, rapid survey design and deployment, and delivery of a structured, machine-readable dataset.
In most cases, HCI retains ownership of the dataset, with early-access or time-boxed licensing available where appropriate. This approach supports a growing longitudinal library of human-experience data while enabling focused measurement on specific AI and product questions.
Intended use
The Human Signal Lab supports work across AI research and evaluation, product development and launch decisions, alignment and safety research, governance and risk assessment, and longitudinal studies of human–AI interaction.
Access
The Human Signal Lab is offered on a limited basis to ensure methodological consistency and long-term comparability across datasets. It is intended for teams that require defensible evidence of human impact, not simply faster answers.
Enquiries may be directed to info@humanclarityinstitute.com.
