AI & the Human Experience 2025 — The Trust Threshold

Drawing on early behavioural data from the Human Clarity Institute’s 2025 Digital Life Survey, this page explores how people perceive, trust, and emotionally respond to artificial intelligence in daily life.

The findings capture a world quietly negotiating its relationship with AI — intrigued by its potential, yet uneasy about its presence. From creative value to personal trust, people express both fascination and fatigue as technology becomes more emotionally intelligent and more invisible.

What the Data Shows

The data suggests that most people are not rejecting AI itself, but questioning how it’s being used and who controls it. Almost half express no trust at all in large technology companies to use AI responsibly, and a further 28% trust them only slightly. This distrust mirrors emotional reactions seen elsewhere in the survey — words such as annoyed, betrayed, and deceived are among the most common responses when participants discover content was AI-generated.

A strong emotional divide emerges between curiosity and control. While over 90% of respondents think about AI’s implications at least occasionally, fewer than one in four feel confident they can reliably tell human and AI content apart. The result is a subtle erosion of certainty — a sense that the line between authentic and artificial experience is becoming blurred.

Wider fears about the human role in creativity persist. Around half of all respondents are worried “quite a lot” or “very much” that AI will make human creativity less valued. This concern is not only about art or work — it reflects a deeper cultural anxiety about meaning, purpose, and human distinctiveness in an age of automation.

Overall, the early data indicates that trust, not capability, defines the new threshold for AI adoption. Emotional authenticity and transparent use may determine whether AI becomes a partner or a point of friction in human experience.

By the Numbers (from HCI Data)

77%

Distrust of AI Responsibility

Have little to no trust that big tech companies will use AI responsibly (50% “not at all”, 28% “slightly”).

82%

Personal Disruption Concern

Are at least somewhat worried that AI will disrupt their job, studies, creativity, or relationships (31% “moderately”, 29% “slightly”, 15% “very”, 7% “extremely”).

69%

Confidence in Detecting AI Content

Say they are only moderately or slightly confident in telling whether content was created by AI or a real person — revealing growing uncertainty about what’s authentic online.

50%

Human Creativity Value

Are worried “quite a lot” or “very much” that AI will make human creativity less valued (28% “quite a lot”, 22% “very much”).
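The headline figures above are sums of adjacent response categories (for example, “quite a lot” plus “very much” giving 50%). As a minimal sketch of how such Likert-style aggregation works — using hypothetical counts, not HCI’s actual dataset or pipeline:

```python
# Illustrative aggregation of Likert-style survey responses.
# The counts below are made up for demonstration; the real figures
# on this page come from the HCI Digital Life dataset.
from collections import Counter

responses = Counter({
    "not at all": 500,
    "slightly": 280,
    "moderately": 150,
    "very": 70,
})
total = sum(responses.values())

def share(categories):
    """Percentage of respondents falling in the given categories, rounded."""
    return round(100 * sum(responses[c] for c in categories) / total)

low_trust = share(["not at all", "slightly"])
print(f"Little to no trust: {low_trust}%")  # prints "Little to no trust: 78%"
```

Because each category percentage is rounded before publication, a headline sum can differ by a point from the sum of its published parts.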

Patterns in the Queries

  • Trust & authenticity: People question how to know what’s real, whether AI-generated content can be trusted, and how much control algorithms have.

  • Identity & creativity: Questions such as “will AI replace artists?” and “what does creativity mean when machines can imitate emotion?” recur.

  • Human relevance: Worries about losing personal meaning, connection, and distinct human worth in an automated world.

These search and emotional patterns highlight how AI is not only changing technology — it’s redefining what people believe it means to be human.

Methodology & Notes

Insights on this page draw from behavioural evidence gathered by the Human Clarity Institute in 2025 as part of the Digital Life dataset. We analysed responses from over one thousand participants about AI’s emotional, social, and creative impacts, combining quantitative questions with open word associations.

All data are anonymised, open, and publicly accessible through HCI’s dataset repository. Sampling procedures and instrument details will be available on the HCI Methodology page.

Explore more insights and analysis on Human Clarity Insights, or view the full catalogue of HCI Research Reports.

Data use and reuse terms are outlined in our Data Use & Disclaimer.