AI & the Human Experience 2025 — The Trust Threshold
Drawing on early behavioural data from the Human Clarity Institute’s 2025 Digital Life Survey, this page explores how people perceive, trust, and emotionally respond to artificial intelligence in daily life.
The findings capture a world quietly negotiating its relationship with AI — intrigued by its potential, yet uneasy about its presence. From creative value to personal trust, people express both fascination and fatigue as technology becomes more emotionally intelligent and more invisible.
What the Data Shows
The data suggests that most people are not rejecting AI itself but questioning how it is used and who controls it. Almost half express no trust at all in large technology companies to use AI responsibly, and a further 28% trust them only slightly. This distrust mirrors emotional reactions seen elsewhere in the survey: words such as annoyed, betrayed, and deceived are among the most common responses when participants discover content was AI-generated.
A strong emotional divide emerges between curiosity and control. While over 90% of respondents think about AI’s implications at least occasionally, fewer than one in four feel confident they can reliably tell human and AI content apart. The result is a subtle erosion of certainty — a sense that the line between authentic and artificial experience is becoming blurred.
Wider fears about the human role in creativity persist. Around half of all respondents are worried “quite a lot” or “very much” that AI will make human creativity less valued. This concern is not only about art or work — it reflects a deeper cultural anxiety about meaning, purpose, and human distinctiveness in an age of automation.
Overall, the early data indicates that trust, not capability, defines the new threshold for AI adoption. Emotional authenticity and transparent use may determine whether AI becomes a partner or a point of friction in human experience.
By the Numbers (from HCI Data)
Trust in AI Responsibility
Most respondents have little to no trust that big tech companies will use AI responsibly (50% “not at all”, 28% “slightly”).
Personal Disruption Concern
A combined 82% of respondents are at least slightly worried that AI will disrupt their job, studies, creativity, or relationships (31% “moderately”, 29% “slightly”, 15% “very”, 7% “extremely”).
Confidence in Detecting AI Content
Around 69% of respondents say they are only moderately or slightly confident in telling whether content was created by AI or a real person, revealing growing uncertainty about what’s authentic online.
Human Creativity Value
Around half of respondents are worried “quite a lot” or “very much” that AI will make human creativity less valued (28% “quite a lot”, 22% “very much”).
Patterns in the Queries
Trust & authenticity: People question how to know what’s real, whether AI-generated content can be trusted, and how much control algorithms have.
Identity & creativity: Questions like “will AI replace artists?” and “what does creativity mean when machines can imitate emotion?”
Human relevance: Worries about losing personal meaning, connection, and distinct human worth in an automated world.
These search and emotional patterns highlight how AI is not only changing technology; it is redefining what people believe it means to be human.
Related Questions People Ask
How confident are people at telling if something online was made by AI?
In the Digital Life 2025 dataset, fewer than one in four respondents report being very or extremely confident in telling whether content was created by a human or by AI. Around 69% describe themselves as only slightly or moderately confident, indicating widespread uncertainty about authenticity online.
Are people worried that AI will reduce the value of human creativity?
Yes. Around 50% of respondents say they are worried “quite a lot” or “very much” that AI will make human creativity less valued. This concern appears consistently across age groups and is not limited to people working in creative professions.
How do people emotionally react when they discover content was AI-generated?
Open-text responses in the survey show that emotional reactions are often negative when people discover content was AI-generated. Common one-word responses include annoyed, betrayed, deceived, and disappointed, suggesting a sense of emotional breach rather than neutrality.
Do people trust large technology companies to use AI responsibly?
Trust levels are low. In this dataset, 77% of respondents say they trust large technology companies either not at all or only slightly to use AI responsibly. Only a small minority report high levels of trust on this question.
How much does AI feel personally disruptive to people’s lives?
A large majority report at least some concern. Around 82% of respondents say they are at least slightly worried that AI will disrupt their job, studies, creativity, or personal relationships, with many reporting moderate or high levels of concern.
Methodology & Notes
Insights on this page draw from behavioural evidence gathered by the Human Clarity Institute in 2025 as part of the Digital Life dataset. We analysed responses from over one thousand participants about AI’s emotional, social, and creative impacts, combining quantitative questions with open word associations.
All data are anonymised, open, and publicly accessible through HCI’s dataset repository. Sampling procedures and instrument details will be available on the HCI Methodology page.
Emerging Trends — Early Human–AI Signals
Drawing on insights from the HCI Digital Life 2025 dataset, these early analyses explore new human–AI tensions emerging across behaviour, trust, and creativity. Each topic represents an early signal of psychological or cultural change that HCI is actively monitoring.
Explore more insights and analysis on Human Clarity Insights, or view the full catalogue of HCI Research Reports.
Data use and reuse terms are outlined in our Data Use & Disclaimer.