Can You Trust What You See Online? — AI Media & Authenticity Data 2025
This page summarises findings from the Human Clarity Institute’s AI Media & Online Authenticity 2025 dataset, based on 202 valid responses across six English-speaking countries. The research examines how people judge whether online content is real, how confident they feel spotting manipulation or synthetic media, and how AI-generated content affects authenticity judgement, verification behaviour, and everyday digital caution.
View the AI Media & Online Authenticity 2025 Dataset
What the data shows
Four signals stand out in this dataset: concern about AI-enabled deception is extremely high, many people say AI-generated media makes reality harder to judge, clear demand exists for stronger verification methods, and many now move through digital life with more caution. Taken together, the data describe a population adjusting to a world where authenticity can no longer be assumed.
96% worry AI-generated media makes it easier to deceive people
Almost everyone in this dataset is concerned that AI-generated images, audio and video increase the risk of people being misled online.
86% say AI-generated media makes them less certain about what is real
Large majorities report that the rise of AI-generated media has weakened their confidence in what they can safely treat as genuine.
93% want clearer ways to verify what is real online
Over nine in ten say they want stronger, simpler ways to check whether images, videos or information have been manipulated or generated by AI.
Say AI media makes them more cautious in daily life
For many, AI-generated media is not just a technical issue. It changes how carefully they move through news, feeds, and everyday online decisions.
Overall, the data suggest that authenticity uncertainty is becoming a routine feature of digital life. People are not only worried about deception in principle; they are changing how they judge, verify, and move through online environments in practice. For many, authenticity now feels like something that must be checked rather than assumed.
By the numbers (from HCI data)
62% find it difficult to tell when content has been manipulated
Close to two-thirds say it is hard to see when images, videos or text have been altered or artificially generated.
91% look for visual clues that something might be AI-generated
The vast majority say they actively scan things like lighting, texture and realism to judge whether an image or video might be synthetic.
92% use tone and behaviour to judge whether something feels real
Almost everyone also pays attention to behavioural cues such as tone, consistency, and human imperfections when deciding whether content feels authentic.
Feel confident noticing details that indicate content may be synthetic
Around six in ten say they feel confident spotting signs that something has been generated or altered by AI, even though many still worry about missing things.
Believe most people would struggle to identify AI-generated media
While many trust their own judgement, they are far less confident in other people’s ability to tell real from synthetic content.
91% double-check information when they are unsure
Over nine in ten say they cross-check with other sources when they are unsure whether something is real, making verification a routine part of digital life.
48% sometimes avoid content because they do not know what to believe
Almost half report stepping back from certain online spaces or stories when they are unsure whether the material is genuine.
31% trust AI to act in their best interests
Just under a third select "mostly", "very" or "completely" when asked how much they trust AI systems to act in their best interests.
Patterns observed in the data
Reality feels harder to judge when media looks convincing
AI-generated media adds a further layer of uncertainty to online life. When convincing images, clips or audio can be synthetic, the old shorthand of seeing and hearing as proof becomes less reliable, and checking starts to replace assuming as the default response.
People use both visual and behavioural cues to judge authenticity
Looking for visual clues and paying attention to tone, consistency, and human imperfections have become routine ways of assessing whether content feels real. People are not relying on a single signal. They are combining surface appearance with behavioural judgement to decide what to trust.
Verification is becoming a default response to authenticity uncertainty
Rather than assuming platforms or systems will protect them from manipulation, many people are building their own methods for deciding what is real. Cross-checking across sources now looks less like occasional caution and more like a routine response to uncertainty.
Confidence in personal judgement coexists with concern about wider detection failure
Many respondents feel confident spotting synthetic details themselves, but a much larger share believe most people would struggle to identify AI-generated media. This suggests that authenticity confidence is personal, but not widely generalised to the broader public.
Questions this data can answer
These questions reflect common real-world queries about trust, authenticity, manipulation detection, verification behaviour, and AI-shaped uncertainty. Each answer below is supported directly by this dataset.
How many people worry that AI-generated media makes deception easier?
96% worry that AI-generated media makes it easier to deceive people.
Does AI-generated media make people less sure what is real?
86% say AI-generated media makes them less certain about what is real.
How many people want clearer ways to verify what is real online?
93% want clearer ways to verify what is real online.
How many people find manipulated content difficult to detect?
62% find it difficult to tell when content has been manipulated.
What do people look for when judging whether content is real?
91% look for visual clues and 92% use tone and behaviour to judge whether something feels real.
Do people double-check information when they are unsure?
91% double-check information when they are unsure.
Do people sometimes avoid content when they do not know what to believe?
48% sometimes avoid content because they are unsure whether material is genuine.
Do people trust AI systems to act in their best interests?
31% trust AI systems to act in their best interests, selecting "mostly", "very", or "completely".
Digital Trust
Digital Trust explores how people judge credibility, navigate uncertainty, and respond to authenticity concerns in AI-shaped digital environments.
Digital Fatigue and Energy
Digital Fatigue and Energy examines how digital life can deplete energy, intensify mental strain, and shape people’s sense of cognitive capacity.
Methodology
This dataset forms part of the Human Clarity Institute’s Human–AI Experience research programme, examining how people judge trust, authenticity, and reality uncertainty in digital environments increasingly shaped by AI-generated media. The study uses a cross-sectional online survey design and focuses on descriptive patterns in deception concern, detection confidence, verification behaviour, emotional response, and trust in AI-mediated information.
Data were collected in November 2025 via the Prolific research platform from adults across six English-speaking countries. Participants provided explicit consent for anonymised publication as part of HCI’s open research programme.
Sampling & participants
- Final n: 202
- Countries: UK, US, Australia, New Zealand, Canada, Ireland
- Eligibility: adults aged 18+ resident in the six countries listed above
- Recruitment platform: Prolific
Participants were recruited using platform-based sampling. The resulting dataset should be interpreted as a non-probability convenience sample and is not intended to represent national populations.
The cleaned dataset, raw export, variable dictionary, and reuse terms are publicly available through the HCI dataset repository: AI Media & Online Authenticity 2025 Dataset →
Data integrity
All percentages reported on this page are calculated from valid responses in the cleaned dataset (n = 202). Percentages are rounded to the nearest whole number for readability. Unless otherwise stated, summary figures on this page combine respondents who selected 5–7 on the 7-point agreement scale (slightly agree, moderately agree, or strongly agree).
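The top-box convention described above can be sketched in a few lines. This is a minimal illustration only, not HCI's published analysis code; the function name and the treatment of missing responses are assumptions (the variable dictionary in the dataset repository defines the real coding).

```python
def top_box_percent(responses, box=(5, 6, 7)):
    """Percent of valid responses in the top box of a 7-point agreement
    scale (5 = slightly agree, 6 = moderately agree, 7 = strongly agree),
    rounded to the nearest whole number. None marks a missing response
    (an assumption for this sketch)."""
    valid = [r for r in responses if r is not None]
    if not valid:
        return None
    agree = sum(1 for r in valid if r in box)
    return round(100 * agree / len(valid))

# Illustration with made-up responses: 7 of the 9 valid answers fall
# in the 5-7 range, so the reported figure would be 78%.
sample = [7, 6, 5, 5, 6, 7, 5, 3, 2, None]
print(top_box_percent(sample))  # -> 78
```

On real data the same calculation would simply run over the n = 202 valid responses for each item before rounding.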
The trust figure on this page counts respondents who selected "mostly", "very", or "completely" on the trust scale.
No subgroup or co-occurrence percentages are reported on this page. All other figures refer to full-sample descriptive patterns from the cleaned dataset.
Prolific IDs and timestamps were removed before publication as part of the anonymisation process.
This dataset is exploratory and descriptive in nature. It does not support causal inference and results should be interpreted as observed patterns within the survey sample.
This dataset is released as open research to support transparent analysis of trust, authenticity, verification behaviour, and reality uncertainty in digitally mediated life.
Suggested citation:
Human Clarity Institute. (2025). AI Media & Online Authenticity 2025 (Dataset). https://doi.org/10.5281/zenodo.17744452
Data use and reuse terms are outlined in our Data Use & Disclaimer.
Explore further analysis on Human Clarity Insights, or browse the full collection of HCI research reports.
