How People Detect AI Risk Through Behaviour in Digital Life — 2025 Data
This page summarises findings from the Human Clarity Institute’s AI Safety, Risk Perception & Boundary Behaviour 2025 dataset, based on 301 valid responses across six English-speaking countries. The research focuses on how people detect and respond to AI-related risk through behavioural signals such as inconsistency, unpredictability, and changes in system behaviour.
View the AI Safety, Risk Perception & Boundary Behaviour 2025 Dataset
What the data shows
The clearest signal in this dataset is that people detect AI risk primarily through behaviour. When AI systems feel inconsistent, unpredictable, or difficult to interpret, unease rises quickly and people adjust how they use them.
77% feel uneasy when AI responses are inconsistent
Inconsistent answers quickly reduce confidence and act as a direct signal of potential risk.
75% question reliability when AI tone or behaviour changes suddenly
Sudden shifts in tone or behaviour are widely interpreted as warning signs that something may be wrong.
61% prefer AI tools that behave predictably
Predictability is valued as a signal of safety, even if it comes at the cost of capability.
57% reduce AI use when it feels unpredictable or confusing
Unpredictability leads directly to behavioural adjustment, with many reducing their reliance on AI systems.
Overall, the data suggest that AI risk is not judged abstractly. Instead, people rely on behavioural cues — such as inconsistency, instability, and unpredictability — to decide whether a system feels safe enough to use.
By the numbers (from HCI data)
81% see inaccurate AI-generated information as risky
Accuracy remains a core concern when evaluating potential AI risk.
75% question reliability when AI tone or behaviour changes suddenly
Behavioural inconsistency is one of the strongest warning signals.
70% see AI interaction with children as risky
Certain use cases create stronger perceived risk boundaries.
66% see AI shaping opinions as risky
Influence over thinking and interpretation is seen as a key risk area.
60% see AI memory usage as a risk boundary
Data retention and memory remain important safety concerns.
See AI health advice as risky
Higher-stakes use cases increase perceived risk.
Patterns observed in the data
Behaviour is the primary signal of risk
People do not rely on technical understanding to assess AI safety. Instead, they use behavioural cues such as inconsistency, unpredictability, and sudden changes to judge whether a system feels reliable.
Unpredictability drives disengagement
When AI systems feel unstable or confusing, many users reduce how much they rely on them. This suggests that perceived instability directly influences behaviour.
Predictability is interpreted as safety
Stable, consistent systems are preferred because they are easier to interpret and feel more controllable.
Risk boundaries are context-dependent
Perceived risk increases in specific use cases such as health, children, and opinion-shaping, indicating that people apply different thresholds depending on context.
Questions this data can answer
These questions reflect common real-world queries about AI unpredictability, behavioural warning signs, perceived risk, and personal safety boundaries. Each answer below is supported directly by this dataset.
Do people feel uneasy when AI gives inconsistent answers?
77% feel uneasy when AI responds inconsistently to the same question.
Do sudden changes in AI behaviour affect how reliable it feels?
75% say sudden changes in AI tone or behaviour make them question reliability.
Do people prefer AI tools that behave predictably?
61% prefer AI tools that behave predictably, even if they are less powerful.
Do people reduce AI use when it feels unpredictable?
57% reduce how much they use AI when it feels unpredictable or confusing.
What kinds of AI behaviour make people feel something may be wrong?
Inconsistency, unpredictability, and sudden shifts in tone or behaviour are the clearest warning signs in this dataset.
What specific AI-related risks stand out most?
81% see inaccurate AI-generated information as risky, while 70% see AI interaction with children as risky.
Do people see AI shaping opinions as risky?
66% see AI shaping data-based opinions as a risk.
Do people see AI memory use as a safety boundary?
60% see AI memory usage as a risk boundary.
Digital Trust
Digital Trust examines how people judge credibility, navigate uncertainty, and respond to authenticity concerns in AI-shaped digital environments.
Values vs Noise
Values vs Noise explores how digital distraction, noise, and misalignment can erode clarity — and why human values still matter when deciding what deserves trust.
Methodology
This dataset forms part of the Human Clarity Institute’s Human–AI Experience research programme, examining how people perceive AI safety, assess risk, and define acceptable boundaries for AI use in everyday digital life. The study uses a cross-sectional online survey design and focuses on descriptive patterns in safety concern, reliability judgement, verification behaviour, boundary-setting, and trust in AI systems.
The dataset was collected on 1 December 2025 via Prolific as part of the Human–AI Experience research programme. All participants provided explicit consent for anonymised open publication.
Sampling & participants
- Final n: 301
- Countries: UK, US, Canada, Australia, New Zealand, Ireland
- Eligibility: Fluent English
- Recruitment platform: Prolific
The resulting dataset should be interpreted as a non-probability convenience sample and is not intended to represent national populations.
The cleaned dataset, variable dictionary, and reuse terms are publicly available through the HCI dataset repository: AI Safety, Risk Perception & Boundary Behaviour 2025 Dataset →
Data integrity
All percentages reported on this page are calculated from valid responses in the cleaned dataset (n = 301). Percentages are rounded to the nearest whole number for readability. Unless otherwise stated, summary percentages combine respondents selecting 5–7 on the 7-point agreement scale (slightly agree, moderately agree, or strongly agree).
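For readers reusing the open data, the following minimal sketch shows how a summary percentage of this kind could be reproduced, assuming a per-respondent CSV with 7-point agreement items coded 1–7. The filename and column name are illustrative placeholders, not the dataset's actual variable names.

```python
import pandas as pd

# Minimal sketch: top-three-box agreement on one 7-point item.
# Filename and column name are illustrative, not the published variable names.
df = pd.read_csv("ai_safety_2025_cleaned.csv")

item = df["inconsistency_unease"].dropna()   # valid responses for one item
agree = (item >= 5).sum()                    # 5-7 = slightly/moderately/strongly agree
pct = round(100 * agree / len(item))         # rounded to the nearest whole number

print(f"{pct}% agree (5-7 on the 7-point scale), n = {len(item)}")
```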
No approval-rate filters, attention checks, or AI-deception trap items were applied in this study. Prolific IDs were removed and timestamps were stripped before publication as part of the anonymisation process.
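As an illustration of that anonymisation step, a sketch along these lines would drop the direct identifiers before export; the raw filename and column names here are assumptions, not the actual headers used in processing.

```python
import pandas as pd

# Illustrative anonymisation step: drop Prolific IDs and timestamps before publication.
# Filenames and column names are assumed for the example.
raw = pd.read_csv("ai_safety_2025_raw.csv")
public = raw.drop(columns=["prolific_id", "started_at", "completed_at"], errors="ignore")
public.to_csv("ai_safety_2025_cleaned.csv", index=False)
```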
This dataset is exploratory and descriptive in nature. It does not support causal inference and results should be interpreted as observed patterns within the survey sample.
This dataset is released as open research to support transparent analysis of AI risk perception, behavioural trust boundaries, verification behaviour, and everyday safety judgement in the AI era.
Suggested citation:
Human Clarity Institute. (2025). AI Safety, Risk Perception & Boundary Behaviour 2025 (Dataset). https://doi.org/10.5281/zenodo.17782046
Data use and reuse terms are outlined in our Data Use & Disclaimer.
Explore further analysis on Human Clarity Insights, or browse the full collection of HCI research reports.