How People Navigate Trust and Uncertainty Online in the Age of AI — 2025 Data
This page summarises findings from the Human Clarity Institute’s Digital Trust 2025 dataset, based on 505 valid responses across six English-speaking countries. The research examines how people decide what to trust online in an environment increasingly shaped by AI-generated content, covering deception risk, source uncertainty, verification behaviour, and confidence in judgement.
View the Digital Trust 2025 Dataset
What the data shows
Four signals stand out in this dataset: verification has become near-universal behaviour, concern about AI-enabled deception is extremely high, many people say it is harder than it used to be to know what is real online, and most want clearer ways to verify whether digital content is genuine. Together, these findings point to a trust environment where uncertainty is widespread and personal checking has become routine.
95%
Double-check information when they are unsure
When they are unsure, they check other sources before deciding what to believe.
91%
Worry AI makes deception easier
AI-generated content increases the ease with which people can be deceived online.
89%
Say it is harder to know what is real online
It now feels harder than it used to be to tell what is real and what is not online.
87%
Want clearer ways to verify what is real
They would like simpler and clearer methods for checking whether something online is genuine.
Overall, the data show a population that no longer treats trust online as automatic. Instead, people appear to assume uncertainty, expect manipulation risk, and respond by checking for themselves before relying on what they see. Trust in digital environments now looks less like passive belief and more like active verification.
By the numbers (from HCI data)
Say honesty and truth are harder to protect
Honesty and truth are becoming harder to protect in a digital world shaped by AI.
83%
Feel uncertain even when information seems credible
Content can look believable and still leave them unsure whether to trust it.
Worry AI systems may present false information confidently
AI tools may deliver false answers in a way that appears confident or convincing.
Worry detection tools will fail as AI content improves
AI detection systems may struggle to keep pace as synthetic content becomes more realistic.
Feel confident verifying information online
They feel confident checking online information, even though most still report double-checking when unsure.
Find manipulated information difficult to detect
It can be difficult to tell when information has been altered, manipulated, or generated.
38%
Trust AI tools to detect AI-generated content accurately
AI-based detection tools can accurately identify synthetic content.
36%
Name deepfakes as their biggest digital trust concern
Deepfakes and manipulated visuals are the most commonly selected top concern in this dataset.
35%
Trust AI systems to give accurate information
Only around one in three trust AI systems to provide accurate information overall.
Patterns observed in the data
Verification now looks less like caution and more like routine self-defence
The clearest behavioural signal in this dataset is that most people no longer rely on first impressions when uncertainty appears. Almost everyone reports double-checking information when unsure, suggesting that personal verification has become a default response rather than a specialist habit.
Trust is being strained by realism as well as uncertainty
Respondents do not only describe a vague sense of confusion. They describe a more specific shift: AI-generated and manipulated content is making digital material feel more believable at the surface while harder to judge underneath. That helps explain why uncertainty remains high even when something appears credible.
Confidence and caution now coexist
A notable pattern in this dataset is that many people say they feel confident verifying information online, yet even more still double-check when unsure. This suggests that confidence has not removed caution. Instead, people appear to believe they can verify information, but also feel they need to do so frequently in today’s trust environment.
Trust now depends on active checking, not passive acceptance
Taken together, these findings suggest that trust online is becoming more effortful. People are not simply withdrawing from digital information. They are responding to higher perceived risk by checking more, questioning more, and relying less on first-glance credibility.
Questions this data can answer
These questions reflect common real-world queries about digital trust, authenticity, uncertainty, verification behaviour, and confidence in AI-mediated information environments. Each answer below is supported directly by this dataset.
How many people double-check online information when they are unsure?
95% double-check using other sources when they are unsure.
Do people think AI makes deception easier online?
91% say AI-generated content makes deception easier.
Is it getting harder for people to know what is real online?
89% say it feels harder than it used to be to know what is real and what is not online.
Do people want clearer ways to verify content online?
87% want clearer ways to verify whether something online is real.
How many people feel uncertain even when information seems credible?
83% still feel uncertain even when information seems credible.
Do people trust AI tools to detect AI-generated content accurately?
Only 38% trust AI tools to detect AI-generated content accurately.
What is the single biggest digital trust concern?
Deepfakes and manipulated visuals are the most commonly selected top concern at 36%.
Do people trust AI to give accurate information?
Only 35% trust AI systems to give accurate information overall.
Methodology
This dataset forms part of the Human Clarity Institute’s Human–AI Experience research programme, examining how people navigate trust, authenticity, uncertainty, and verification in digital environments increasingly shaped by AI-generated content. The study uses a cross-sectional online survey design and focuses on descriptive patterns in trust judgement, deception risk, source uncertainty, verification behaviour, AI accuracy concerns, and the human values people want AI systems to reflect.
Data were collected via the Prolific research platform from adults across six English-speaking countries. Participants provided explicit consent for anonymous open publication as part of HCI’s open research programme.
Sampling & participants
- Final n: 505
- Countries: UK, US, Australia, Canada, New Zealand, Ireland
- Eligibility: Adults 18+ in English-speaking countries
- Recruitment platform: Prolific
The resulting dataset should be interpreted as a non-probability convenience sample and is not intended to represent national populations.
The cleaned dataset, variable dictionary, and reuse terms are publicly available through the HCI dataset repository: Digital Trust 2025 Dataset →
Data integrity
All percentages reported on this page are calculated from valid responses in the cleaned dataset (n = 505). Percentages are rounded to the nearest whole number for readability. Unless otherwise stated, summary percentages combine respondents selecting 5–7 on the 7-point agreement scale (slightly agree, moderately agree, or strongly agree).
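The top-3-box rule described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the Institute's actual analysis pipeline; the function name and the sample data are assumptions.

```python
def top3_box_pct(responses):
    """Share of valid 7-point responses in the 5-7 range
    (slightly, moderately, or strongly agree), rounded to the
    nearest whole number, as described in the text above."""
    valid = [r for r in responses if r is not None]   # drop missing responses
    agree = sum(1 for r in valid if 5 <= r <= 7)      # top-3-box count
    return round(100 * agree / len(valid))

# Hypothetical responses on the 7-point agreement scale:
print(top3_box_pct([7, 6, 5, 4, 3, 7, None, 6, 2, 5]))  # → 67
```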
Where questions use single-category or multi-select response formats, the wording on the page makes that explicit. For example, some figures represent the share of respondents selecting a named option, while values-related figures may represent the share of total selections across all chosen values.
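For the multi-select case, "share of total selections" means each figure is a fraction of all selections made, not of respondents. A minimal sketch, with hypothetical option names:

```python
from collections import Counter

def selection_shares(choices_per_respondent):
    """Share of total selections for each option across all
    respondents (multi-select framing; illustrative only)."""
    counts = Counter(c for chosen in choices_per_respondent for c in chosen)
    total = sum(counts.values())  # denominator is all selections, not all people
    return {option: round(100 * n / total) for option, n in counts.items()}

shares = selection_shares([["honesty", "privacy"], ["honesty"], ["privacy", "safety"]])
# shares == {'honesty': 40, 'privacy': 40, 'safety': 20}
```

Note that shares computed this way sum to roughly 100 across options, whereas per-respondent shares for a multi-select question can sum to well over 100.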
No approval-rate filter was applied, and no attention checks or AI deception trap items were included in this study. Prolific IDs and timestamps were removed before publication as part of the anonymisation process.
This dataset is exploratory and descriptive in nature. It does not support causal inference and results should be interpreted as observed patterns within the survey sample.
This dataset is released as open research to support transparent analysis of trust calibration, authenticity uncertainty, verification behaviour, and human judgement in digitally mediated life.
Suggested citation:
Human Clarity Institute. (2025). Digital Trust Survey 2025 (Dataset). https://doi.org/10.5281/zenodo.17717450
Data use and reuse terms are outlined in our Data Use & Disclaimer.
Explore further analysis on Human Clarity Insights, or browse the full collection of HCI research reports.