AI Media & Online Authenticity 2025 (Dataset)

A de-identified open dataset of 505 adults capturing how people perceive authenticity, manipulation risk, and trustworthiness in AI-generated media, including uncertainty about what is real, concern about synthetic content, confidence in detection skills, perceived difficulty identifying AI-altered media, and behavioural responses to online uncertainty.

Measures include AI-media exposure, perceived realism of AI-generated images and text, trust-cue evaluation, confidence in detecting manipulated or AI-generated content, authenticity concern, avoidance and verification behaviours, emotional responses to uncertainty, and demographic variables across six English-speaking countries.

Part of the Human Clarity Institute’s AI–Human Experience Data Series.

Framework

HRL domain(s): Trust & Epistemic Stability

Registry Construct Alignment: Meaning Coherence · Epistemic Confidence · Trust Calibration

Listed constructs reflect longitudinal, registry-mapped item alignment and do not represent the full thematic scope of this dataset.

DOI and Repository Links

Zenodo DOI: 10.5281/zenodo.17744452
Figshare DOI: 10.6084/m9.figshare.30736247
GitHub repository: HCI AI Media & Online Authenticity 2025

This dataset is archived on GitHub, Zenodo, and Figshare for long-term preservation.

Citation

APA

Human Clarity Institute. (2025). AI Media & Online Authenticity 2025 (Dataset). Human Clarity Institute. https://doi.org/10.5281/zenodo.17744452

BibTeX

@dataset{hci_ai_media_online_authenticity_2025,
  author    = {Human Clarity Institute},
  title     = {AI Media \& Online Authenticity 2025 (Dataset)},
  year      = {2025},
  doi       = {10.5281/zenodo.17744452},
  url       = {https://humanclarityinstitute.com/datasets/ai-media-online-authenticity-2025/},
  license   = {CC-BY-4.0}
}

Licence

Creative Commons Attribution 4.0 International (CC BY 4.0)
You are free to share, adapt, and build upon this dataset for any purpose, including commercial use, provided appropriate credit is given to the Human Clarity Institute.

Full licence text: https://creativecommons.org/licenses/by/4.0/

Study Methodology

This dataset forms part of the Human Clarity Institute’s Human–AI Experience research programme, examining how people judge trust, authenticity, and reality uncertainty in digital environments increasingly shaped by AI-generated media. The study uses a cross-sectional online survey design and focuses on descriptive patterns in deception concern, detection confidence, verification behaviour, emotional response, and trust in AI-mediated information.

Data were collected in November 2025 via the Prolific research platform from adults across six English-speaking countries. Participants provided explicit informed consent for anonymised data publication as part of HCI’s open research programme.

Sampling & participants

  • Clean dataset: 202 valid responses
  • Countries: United Kingdom, United States, Canada, Australia, New Zealand, Ireland
  • Eligibility: English-speaking adults (18+) resident in the six countries listed above
  • Recruitment platform: Prolific
  • Anonymisation: Prolific IDs and timestamps removed before publication
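The anonymisation step above can be sketched as a simple column drop before publication. This is an illustrative example only: the column names (`prolific_id`, `timestamp`, `detection_confidence`) are hypothetical and are not taken from the published codebook.

```python
import pandas as pd

# Toy stand-in for the raw survey export; real column names may differ.
raw = pd.DataFrame({
    "prolific_id": ["5f3a0000", "60b10000"],          # direct identifier
    "timestamp": ["2025-11-03T10:15:00Z",             # collection metadata
                  "2025-11-03T10:18:22Z"],
    "detection_confidence": [4, 2],                   # example survey item
})

# Remove identifiers and timestamps before sharing, as described above.
anonymised = raw.drop(columns=["prolific_id", "timestamp"])
print(list(anonymised.columns))  # only substantive survey variables remain
```

The same pattern generalises to any set of identifier columns; re-users of the clean dataset should not expect these fields to be present.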

Study limitations

  • The survey uses a non-probability convenience sample and is not nationally representative.
  • Results are based on self-reported responses and reflect perceived experiences.
  • The study uses a cross-sectional design, capturing responses at a single point in time.
  • The dataset is descriptive and exploratory and does not support causal inference.