How People Navigate Trust and Uncertainty Online in the Age of AI — 2025 Data

This page summarises findings from the Human Clarity Institute’s Digital Trust 2025 dataset, based on 505 valid responses across six English-speaking countries. The research examines how people decide what to trust online in an environment increasingly shaped by AI-generated content, covering deception risk, uncertainty, verification behaviour, and confidence in judgement.

View the Digital Trust 2025 Dataset

Construct tags: Trust Calibration · Meaning Coherence · Epistemic Confidence

What the data shows

Four signals stand out in this dataset: verification has become near-universal behaviour, concern about AI-enabled deception is extremely high, many people say it is harder than it used to be to know what is real online, and most want clearer ways to verify whether digital content is genuine. Together, these findings point to a trust environment where uncertainty is widespread and personal checking has become routine.

95%

Double-check information when they are unsure

When they are unsure, they check other sources before deciding what to believe.

91%

Worry AI makes deception easier

AI-generated content increases the ease with which people can be deceived online.

89%

Say it is harder to know what is real online

It now feels harder than it used to be to tell what is real and what is not online.

87%

Want clearer ways to verify what is real

They would like simpler and clearer methods for checking whether something online is genuine.

Overall, the data show a population that no longer treats trust online as automatic. Instead, people appear to assume uncertainty, expect manipulation risk, and respond by checking for themselves before relying on what they see. Trust in digital environments now looks less like passive belief and more like active verification.

By the numbers (from HCI data)

84%

Say honesty and truth are harder to protect

Honesty and truth are becoming harder to protect in a digital world shaped by AI.

83%

Feel uncertain even when information seems credible

Content can look believable and still leave them unsure whether to trust it.

81%

Worry AI systems may present false information confidently

AI tools may deliver false answers in a way that appears confident or convincing.

78%

Worry detection tools will fail as AI content improves

AI detection systems may struggle to keep pace as synthetic content becomes more realistic.

73%

Feel confident verifying information online

They feel confident checking online information, even though most still report double-checking when unsure.

57%

Find manipulated information difficult to detect

It can be difficult to tell when information has been altered, manipulated, or generated.

38%

Trust AI tools to detect AI-generated content accurately

AI-based detection tools can accurately identify synthetic content.

36%

Name deepfakes as their biggest digital trust concern

Deepfakes and manipulated visuals are the most commonly selected top concern in this dataset.

35%

Trust AI systems to give accurate information

Only around one in three trust AI systems to provide accurate information overall.

Patterns observed in the data

Verification now looks less like caution and more like routine self-defence

The clearest behavioural signal in this dataset is that most people no longer rely on first impressions when uncertainty appears. Almost everyone reports double-checking information when unsure, suggesting that personal verification has become a default response rather than a specialist habit.

Trust is being strained by realism as well as uncertainty

Respondents do not only describe a vague sense of confusion. They describe a more specific shift: AI-generated and manipulated content is making digital material feel more believable at the surface while harder to judge underneath. That helps explain why uncertainty remains high even when something appears credible.

Confidence and caution now coexist

A notable pattern in this dataset is that many people say they feel confident verifying information online, yet even more still double-check when unsure. This suggests that confidence has not removed caution. Instead, people appear to believe they can verify information, but also feel they need to do so frequently in today’s trust environment.

Trust now depends on active checking, not passive acceptance

Taken together, these findings suggest that trust online is becoming more effortful. People are not simply withdrawing from digital information. They are responding to higher perceived risk by checking more, questioning more, and relying less on first-glance credibility.

Methodology

This dataset forms part of the Human Clarity Institute’s Human–AI Experience research programme, examining how people navigate trust, authenticity, uncertainty, and verification in digital environments increasingly shaped by AI-generated content. The study uses a cross-sectional online survey design and focuses on descriptive patterns in trust judgement, deception risk, source uncertainty, verification behaviour, AI accuracy concerns, and the human values people want AI systems to reflect.

Data were collected via the Prolific research platform from adults across six English-speaking countries. Participants provided explicit consent for anonymous open publication as part of HCI’s open research programme.

Sampling & participants

  • Final n: 505
  • Countries: UK, US, Australia, Canada, New Zealand, Ireland
  • Eligibility: Adults 18+ in English-speaking countries
  • Recruitment platform: Prolific

The resulting dataset should be interpreted as a non-probability convenience sample and is not intended to represent national populations.

The cleaned dataset, variable dictionary, and reuse terms are publicly available through the HCI dataset repository: Digital Trust 2025 Dataset →

Data integrity

All percentages reported on this page are calculated from valid responses in the cleaned dataset (n = 505). Percentages are rounded to the nearest whole number for readability. Unless otherwise stated, summary percentages combine respondents selecting 5–7 on the 7-point agreement scale (slightly agree, moderately agree, or strongly agree).
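As a hedged illustration of that convention, the sketch below shows how a "% agree" figure on this page could be reproduced from raw 7-point agreement items. The function name and toy data are invented for the example; the dataset's actual variable coding may differ.

```python
# Sketch: top-3-box percentage on a 7-point agreement scale
# (1 = strongly disagree ... 7 = strongly agree), as described above.
# Hypothetical helper; not part of the published dataset tooling.

def top3_box_percent(responses):
    """Share of valid responses scoring 5-7 (slightly, moderately, or
    strongly agree), rounded to the nearest whole number as on this page."""
    valid = [r for r in responses if r in range(1, 8)]  # drop missing/invalid codes
    agree = sum(1 for r in valid if r >= 5)
    return round(100 * agree / len(valid))

# Toy example: 19 of 20 respondents select 5, 6, or 7
toy = [7, 6, 5, 7, 6, 5, 7, 7, 6, 5, 6, 7, 5, 6, 7, 5, 6, 7, 5, 3]
print(top3_box_percent(toy))  # 95
```

Note that values outside 1–7 (e.g. missing-data codes) are excluded before the denominator is taken, matching the page's use of valid responses only.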

Where questions use single-category or multi-select response formats, the wording on the page makes that explicit. For example, some figures represent the share of respondents selecting a named option, while values-related figures may represent the share of total selections across all chosen values.
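The two multi-select conventions just described can be sketched as follows. The option labels and responses here are invented for illustration and do not reflect the dataset's actual coding.

```python
# Sketch of the two multi-select conventions described above:
# (a) share of respondents selecting a named option, and
# (b) share of total selections across all chosen values.
from collections import Counter

# Hypothetical multi-select responses: one list of chosen options per respondent.
selections = [
    ["deepfakes", "scams"],
    ["deepfakes"],
    ["misinformation", "scams"],
]

# (a) Share of respondents who selected "deepfakes": 2 of 3.
n_respondents = len(selections)
pct_respondents = round(100 * sum("deepfakes" in s for s in selections) / n_respondents)

# (b) Share of total selections that were "deepfakes": 2 of 5.
counts = Counter(opt for chosen in selections for opt in chosen)
total_selections = sum(counts.values())
pct_of_selections = round(100 * counts["deepfakes"] / total_selections)

print(pct_respondents, pct_of_selections)  # 67 40
```

The same option can therefore carry two quite different percentages depending on the denominator, which is why the page wording flags which convention each figure uses.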

No approval-rate filters, attention checks, or AI-deception trap items were applied in this study. Prolific IDs were removed and timestamps were stripped before publication as part of the anonymisation process.

This dataset is exploratory and descriptive. It does not support causal inference, and results should be interpreted as observed patterns within the survey sample.

This dataset is released as open research to support transparent analysis of trust calibration, authenticity uncertainty, verification behaviour, and human judgement in digitally mediated life.

Suggested citation:
Human Clarity Institute. (2025). Digital Trust Survey 2025 (Dataset). https://doi.org/10.5281/zenodo.17717450

Data use and reuse terms are outlined in our Data Use & Disclaimer.

Explore further analysis on Human Clarity Insights, or browse the full collection of HCI research reports.