Trust Calibration in Information Environments 2026 (Dataset)
A de-identified open dataset (n=394) examining how digitally active adults calibrate trust in AI-generated content, assess reliability within digital information environments, and determine when to intervene in automated or AI-mediated systems.
Measures include validated 1–7 Likert-scale instruments assessing perceived AI reliability, confidence in digital decision-support outputs, verification behaviour, and human intervention thresholds, alongside digital exposure metrics and standard demographic variables.
Part of the Human Clarity Institute’s Human–AI Experience Data Series.
Framework
HRL domain(s): Agency & Decision Autonomy, Trust & Epistemic Stability
Registry Construct Alignment: Decision dependence, Epistemic confidence, Trust calibration, Risk perception
Listed constructs reflect longitudinal, registry-mapped item alignment and do not represent the full thematic scope of this dataset.
DOI and Repository Links
This dataset is archived on GitHub, Zenodo, and Figshare for long-term preservation.
Citation
APA
Human Clarity Institute. (2026). Trust Calibration in Information Environments 2026 (Dataset). Human Clarity Institute. https://doi.org/10.5281/zenodo.18625243
BibTeX
@dataset{hci_trust_calibration_information_environments_2026,
  author  = {Human Clarity Institute},
  title   = {Trust Calibration in Information Environments 2026 (Dataset)},
  year    = {2026},
  doi     = {10.5281/zenodo.18625243},
  url     = {https://humanclarityinstitute.com/datasets/trust-calibration-information-environments-2026/},
  license = {CC-BY-4.0}
}
Licence
Creative Commons Attribution 4.0 International (CC BY 4.0)
You are free to share, adapt, and build upon this dataset for any purpose, including commercial use, provided appropriate credit is given to the Human Clarity Institute.
Full licence text: https://creativecommons.org/licenses/by/4.0/
Study Methodology
This dataset forms part of the Human Clarity Institute’s Human–AI Experience research programme, examining how people calibrate trust, assess reliability, and experience confidence or doubt in their own judgement when navigating digital information environments. The study uses a cross-sectional online survey design and focuses on descriptive patterns in epistemic confidence, internal judgement tension, and trust calibration under conditions of uncertainty.
Data were collected via the Prolific research platform on 2026-02-09 from adults across six English-speaking countries. Participants provided explicit consent for anonymised open publication as part of HCI’s open research programme.
Sampling & participants
- Clean dataset: 394 valid responses
- Countries: United Kingdom, United States, Canada, Australia, New Zealand, Ireland
- Eligibility: Adults (18+)
- Recruitment platform: Prolific
- Anonymisation: Participant IDs, timestamps, and direct identifiers removed prior to release
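As an illustration of working with the release, the sketch below loads Likert-scale responses into pandas, range-checks them, and produces per-country descriptive means. The column names, country codes, and inline sample rows are assumptions for demonstration only, not the documented schema; consult the dataset's codebook for the actual variable names.

```python
# Hedged sketch: column names and sample values below are hypothetical,
# standing in for the released file's real schema.
import pandas as pd

# Example rows mimicking 1-7 Likert responses for two assumed measures.
df = pd.DataFrame({
    "perceived_ai_reliability": [5, 6, 4, 7, 3],
    "intervention_threshold": [2, 3, 5, 4, 6],
    "country": ["UK", "US", "CA", "AU", "NZ"],
})

# Basic validity check: 1-7 Likert items should lie within range.
likert_cols = ["perceived_ai_reliability", "intervention_threshold"]
assert df[likert_cols].isin(range(1, 8)).all().all()

# Descriptive summary per country. The dataset supports descriptive
# analysis only (see "Study limitations"), so stop at summaries like this.
summary = df.groupby("country")[likert_cols].mean()
print(summary)
```

With the real file, `df = pd.read_csv(...)` on the released CSV would replace the inline sample, and the same range check and grouped summary apply unchanged.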
Study limitations
- The survey uses a non-probability convenience sample and is not nationally representative.
- Results are based on self-reported responses and reflect perceived experiences of trust calibration, confidence, doubt, and judgement in digital information environments.
- The study uses a cross-sectional design, capturing responses at a single point in time.
- The dataset is descriptive and exploratory and does not support causal inference.