Digital Trust Survey 2025 (Dataset)
A de-identified open dataset of 505 adults, capturing how people assess trustworthiness in digital and AI-mediated environments — including uncertainty about what is real, concern about AI-generated content, perceived manipulation, detection confidence, and verification behaviour.
Measures include digital trust indicators, misinformation concern, AI-generated content awareness, trust-cue evaluation, AI detection confidence, avoidance behaviours, emotional responses to uncertainty, and demographic variables across six English-speaking countries.
Part of the Human Clarity Institute’s AI–Human Experience Data Series.
Framework
HRL domain(s): Trust & Epistemic Stability
Registry Construct Alignment: Epistemic confidence, Trust calibration
Listed constructs reflect longitudinal, registry-mapped item alignment and do not represent the full thematic scope of this dataset.
DOI and Repository Links
This dataset is archived on GitHub, Zenodo, and Figshare for long-term preservation.
Citation
APA
Human Clarity Institute. (2025). Digital Trust Survey 2025 (Dataset). Human Clarity Institute. https://doi.org/10.5281/zenodo.17717450
BibTeX
@dataset{hci_digital_trust_2025,
  author  = {Human Clarity Institute},
  title   = {Digital Trust Survey 2025 (Dataset)},
  year    = {2025},
  doi     = {10.5281/zenodo.17717450},
  url     = {https://humanclarityinstitute.com/datasets/digital-trust-2025/},
  license = {CC-BY-4.0}
}
Licence
Creative Commons Attribution 4.0 International (CC BY 4.0)
You are free to share, adapt, and build upon this dataset for any purpose, including commercial use, provided appropriate credit is given to the Human Clarity Institute.
Full licence text: https://creativecommons.org/licenses/by/4.0/
Study Methodology
This dataset forms part of the Human Clarity Institute’s Human–AI Experience research programme, examining how people judge credibility, experience uncertainty, and decide what to trust in digital environments increasingly shaped by AI-generated content. The study uses a cross-sectional online survey design and focuses on descriptive patterns in perceived deception risk, verification behaviour, confidence in judging information, trust in AI systems, and the values people believe AI should reflect.
Data were collected via the Prolific research platform from adults across six English-speaking countries. Participants provided explicit informed consent for anonymous open publication as part of HCI’s open research programme.
Sampling & participants
- Clean dataset: 505 valid responses
- Countries: United Kingdom, United States, Canada, Australia, New Zealand, Ireland
- Eligibility: Adults (18+) resident in one of the six countries listed above
- Recruitment platform: Prolific
- Compensation: £6.55/hour average
- Approval-rate filter: None
- Attention checks: None
- AI deception traps: None
- Anonymisation: Prolific IDs removed; timestamps stripped
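For reuse, a typical first step is to load the published file and summarise a response variable by country. The sketch below uses only the Python standard library; the column names (`country`, `ai_detection_confidence`) and the CSV filename are assumptions for illustration, not confirmed field names from the codebook, and a small inline sample stands in for the real file so the example is self-contained.

```python
# Hypothetical sketch of loading and summarising the dataset.
# Column names and filename are assumed, not taken from the codebook.
import csv
import io
import statistics
from collections import defaultdict

# With the published file you would do something like:
#   with open("digital_trust_survey_2025.csv", newline="") as f:
#       rows = list(csv.DictReader(f))
# Here a tiny inline sample keeps the sketch runnable on its own.
sample = """country,ai_detection_confidence
United Kingdom,3
United Kingdom,4
Canada,2
Canada,5
"""
rows = list(csv.DictReader(io.StringIO(sample)))

# Group the (assumed) Likert-style confidence item by country.
by_country = defaultdict(list)
for row in rows:
    by_country[row["country"]].append(int(row["ai_detection_confidence"]))

# Per-country mean confidence; with the real data you would also
# report counts per country before interpreting any differences.
means = {c: statistics.mean(v) for c, v in by_country.items()}
print(means)
```

Given the convenience-sample design noted below, such summaries describe this sample only and should not be read as national estimates.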
Study limitations
- The survey uses a non-probability convenience sample and is not nationally representative.
- Results are based on self-reported responses and reflect perceived experiences of digital trust and authenticity.
- The study uses a cross-sectional design, capturing responses at a single point in time.
- The dataset is descriptive and exploratory and does not support causal inference.
Digital Trust
Digital Trust explores how people judge credibility, navigate uncertainty, and respond to authenticity concerns in AI-shaped digital environments.
Digital Fatigue and Energy
Digital Fatigue and Energy examines how digital life can deplete energy, intensify mental strain, and shape people’s sense of cognitive capacity.
Related Question Topics
Trust, Reality & Uncertainty in the AI Era
Evidence-based answers addressing credibility uncertainty, perceived authenticity erosion, verification strain, AI trust calibration, and behavioural adaptation under digital ambiguity.
Losing Confidence in My Own Thinking
Questions exploring AI decision dependence, second-guessing, validation patterns, delegation risk, and perceived shifts in confidence or ownership.
Data use and reuse terms are outlined in our Data Use & Disclaimer.