HUMAN CLARITY INSTITUTE · FULL RESEARCH REPORT

Digital Trust

How we lost confidence in what’s real — and how trust is being recalibrated

Human Clarity Report 2025 · Version 2.0 · Digital Edition

Based on the Human Clarity Institute Digital Life Survey (2025)

Abstract

The Human Clarity Institute’s Digital Trust Report (2025) examines how confidence in what we see and believe online is eroding—and what that erosion means for focus, emotion, and collective wellbeing. Based on survey data from more than 1,000 participants across six English-speaking countries, the report explores how misinformation, synthetic AI content, and constant exposure to algorithmic design have reshaped human certainty.

Results reveal a striking paradox: people trust experts more than any other source, yet one in four trust no one at all. Most question the authenticity of digital content often or almost always, and half say they do not trust major technology companies to use AI responsibly. Younger generations express both the highest confidence in recognising AI-generated content that imitates human voices, faces, or writing, and the greatest anxiety about the influence such synthetic media could have on their future.

The findings highlight a growing trust gap between what people see and what they feel they can believe. Restoring that confidence will require more than technological transparency—it demands clarity of values and renewed human discernment in a world where truth can be manufactured at scale.

Executive Summary

The Digital Trust Report (2025) examines how confidence in online information is collapsing and how this erosion affects human perception, trust, and values. It follows the Human Clarity Institute’s earlier reports—Why Can’t I Focus? (2025) and Digital Fatigue & Energy (2025)—which explored how attention and effort are shaped by digital environments. This report extends that research into the domain of trust and authenticity. The study examines trust at three levels: belief in digital content, confidence in technology systems, and trust in one’s own discernment.

The findings reveal a broad decline in perceived authenticity. 61% of people question the truth of online content often or almost always, and 74% identify social media as their least-trusted environment. While academics and experts remain the most trusted information sources, 25% of respondents trust no one in particular, a sign of deep uncertainty in digital spaces.

AI intensifies this unease. Only 21% feel highly confident identifying AI-generated images, text, or videos, yet 50% say they do not trust major technology companies to use AI responsibly. Younger generations report both the strongest confidence in spotting synthetic, human-like content and the greatest anxiety about its growing influence.

This constant vigilance carries an emotional cost. More than half worry that AI will disrupt their work, creativity, or relationships, and many describe feeling drained by the need to verify what they see. Trust has become a cognitive burden rather than a given.

The findings suggest that confidence is not shaped by technical accuracy alone. Across responses, trust is more often described as emerging in contexts where information is perceived as aligned with honesty, fairness, and transparency—values that respondents associate with meaning amid increasingly noisy digital environments.

By the Numbers

61%

question whether online content is real or trustworthy often or almost always

74%

identify social media as the environment they trust least

25%

say they trust no one in particular for reliable online information

50%

look for multiple sources as their first step when unsure what to believe

21%

feel highly confident identifying AI-generated text, images, or video that mimic humans

50%

do not trust major technology companies to use AI responsibly

53%

are moderately to extremely worried that AI will disrupt their work, creativity, or relationships

70%

think about the implications of AI on their life at least occasionally

83%

say they trust information more when it reflects honesty, fairness, or transparency

Younger < 30 yrs

most likely to feel very worried about AI disruption and very bothered by synthetic content

Older 60+ yrs

least worried and least confident identifying AI content

Experts & academics

remain the most trusted group in every country surveyed, followed by no one in particular — a sign of structural scepticism across English-speaking cultures

The Collapse of Confidence

Across the digital world, confidence in what people see and read has weakened. What began as occasional doubt has become a routine habit of scepticism. In the Human Clarity Institute’s Digital Life Survey, 61% of participants said they question whether online content is real or trustworthy often or almost always. Only a small minority reported rarely doing so. The act of doubting has become part of daily media consumption rather than an exception to it.

Social media stands out as the least-trusted environment. 74% of respondents selected it when asked where they place the least confidence, well ahead of news sites, videos, or email. The same pattern appears across every English-speaking country in the study: people scroll, they see, and they hesitate. The open-text responses that followed these questions were filled with words such as annoyed, disappointed, frustrated, and betrayed—language that signals a personal sense of deception rather than simple disagreement.

[Chart: How often people question whether online content is real or trustworthy]

When asked whom they trust most for reliable information, participants most often chose academics and experts. Yet 25% answered “no one in particular.” This absence of a trusted anchor points to a deeper structural scepticism that spans geography and age. Even those who rely on professional or institutional sources describe feeling the need to cross-check facts through multiple channels before believing them.

These findings align with other international studies showing a similar decline in media confidence. The 2024 Edelman Trust Barometer reports record lows in trust across digital and traditional media. Pew Research Center (2023) found that only about one-third of Americans trust information from news organizations, while the Reuters Institute (2024) observed increasing “news avoidance” in nearly every country it measured.

Together, these results portray a public that is simultaneously informed and uncertain. People have access to more knowledge than at any other time in history, yet they approach that knowledge with growing hesitation. The simple question “Can I trust what I see?” has become the starting point of modern information life.

The Engineered Environment

The systems that now shape digital life are built to predict behaviour, not simply to display information. Algorithms learn from patterns of attention and deliver more of what holds it. The result is an environment that constantly adjusts to preference, turning every feed into a reflection of what it already knows. What once appeared spontaneous now functions as design.

Survey data show how this design is perceived. 18% of respondents said they were highly bothered when discovering that what they consumed was created by artificial intelligence rather than a person, while another 41% were somewhat or quite bothered. Half said they do not trust major technology companies to use AI responsibly. Only 21% described themselves as confident they could identify AI-generated text, images, or video that resemble human work. Together, these results indicate that most people recognise automation but remain uncertain about its boundaries.

[Chart: How bothered people are upon discovering that content was generated by AI instead of a person]
[Chart: 50% of respondents do not trust major technology companies to use AI responsibly]

Generational data reveal a divide between recognition and tolerance. Respondents under 30 were both the most confident in identifying AI and the most worried about its growing role. Older groups were less confident yet less concerned. Awareness appears to heighten sensitivity to manipulation rather than reduce it.

External research echoes these findings. PwC’s 2023 survey on AI transparency reported that most consumers expect clear labelling of synthetic content but doubt the labels are reliable. MIT Technology Review Insights (2024) found similar results in its study of public attitudes toward automated media, while the World Economic Forum (2023) called for stronger disclosure standards and accountability for AI systems. Each source points to a consistent pattern: technological progress continues faster than trust can adapt.

The outcome is an information ecosystem designed to anticipate rather than surprise. People engage with systems that know what they will pause on, share, or question, often before they do. This predictive design is efficient for attention but uncertain for trust, leaving the experience of being informed increasingly shaped by what machines expect of us.

50% of people do not trust major technology companies to use AI responsibly

Why do I doubt myself after being online?

Information online rarely stands alone: claims are encountered alongside alternatives and competing interpretations. What once signalled an abundance of choice now demands routine comparison. Verification has become a standard part of digital engagement, and certainty is often provisional rather than assumed.

Survey data show how this pattern is reflected in behaviour. When unsure whether something online is real or trustworthy, 50% of participants said they first look for multiple sources to confirm it. Another 28% turn to search engines, while smaller groups rely on family, friends, or independent creators. Yet even among those who check several sources, trust remains limited: 25% said they ultimately trust no one in particular for reliable information. Academics and experts remain the most trusted group overall, though this trust is often described as conditional, reinforced by repetition and corroboration rather than taken at face value. A similar pattern has been observed in earlier Human Clarity Institute research, where frequent task-switching in digital environments was associated with reduced confidence in one’s own judgement.

[Chart: First steps people take when unsure about the authenticity of online content]

This habit of cross-checking increases awareness but reduces ease. The effort involved in verifying routine information contributes to ongoing uncertainty: people encounter more information, yet report less confidence in what they believe. Rather than resolving doubt, repeated verification often sustains it, keeping trust tentative and situational.

The erosion of ease

Doubt is no longer experienced only as an occasional interruption; it is commonly described as part of how people participate online. Digital spaces are entered with an expectation of questioning, comparison, and confirmation, while trust remains cautious and limited. Each act of verification builds competence, but it can also constrain confidence. The result is a form of digital literacy that supports accuracy, while making certainty feel harder to maintain.

The Cost of Vigilance

Across the datasets referenced in this report, vigilance emerges as a common feature of contemporary digital attention. In the Digital Life Survey, 53% of respondents said they are moderately to extremely worried that artificial intelligence will disrupt their work, creativity, or relationships, and 70% report thinking about AI’s implications for their lives at least occasionally. For many, sustained awareness has become a regular condition of digital engagement rather than an active choice.

[Chart: Levels of worry about AI disrupting life, work, creativity, or relationships]

The emotional cost of this sustained alertness is evident in how people describe their experience. Continuous exposure to uncertain or ambiguous information requires ongoing evaluation, keeping attention engaged even in the absence of immediate threat. Rather than resolving uncertainty, this persistent monitoring can maintain a low level of mental readiness, making it harder to disengage from information once encountered.

Behavioural patterns help explain this effect. When people encounter content they cannot easily verify, they often remain engaged longer—scrolling further, revisiting sources, or delaying disengagement in search of reassurance. The effort intended to restore certainty can instead extend exposure, contributing to a sense of attentional fatigue. Similarly, routine acts of micro-evaluation—checking notifications, comparing sources, scanning for signs of manipulation—draw on limited cognitive resources, even when no explicit stress is reported.

Taken together, the survey findings suggest that mistrust functions not only as a belief pattern, but as an ongoing demand on attention and emotional regulation. Each moment of doubt draws on focus, time, and mental energy. The capacity to question protects accuracy, but sustained questioning can erode clarity over time.

Fatigue, in this sense, can be understood as an outcome of the information environment itself—shaped less by volume alone than by the continuous effort required to remain discerning within it.

[Chart: How often people think about AI’s implications for daily life, work, relationships, creativity, or opportunities]

The Path to Restoration

Rebuilding trust is closely associated with clarity of values. When people have a clearer sense of what matters to them, information can become easier to interpret and less effortful to evaluate. In the Digital Life Survey, 65% of respondents said that factual accuracy and evidence matter most when deciding what to believe online, while 83% said they trust content more when it reflects honesty, fairness, or transparency. Together, these findings indicate that credibility is shaped not only by technical accuracy, but also by perceived integrity and intent.

This pattern is consistent with earlier Human Clarity Institute research, which found that clarity of personal values is associated with greater focus and more confident decision-making in digital environments. Across responses, participants describe trust forming more readily when information aligns with principles they recognise as meaningful, rather than when accuracy is presented in isolation. In this sense, trust is often described as emerging through coherence between message and motive, rather than verification alone.

Taken together, the results point to a shift in how trust is experienced. Rather than being rebuilt through exhaustive checking, trust appears more stable when people feel oriented by a consistent internal reference point. Transparency increases visibility, but values provide context for interpretation, helping people decide what deserves attention and belief.

Re-establishing this orientation does not imply rejecting technology or information systems. Instead, it reflects a recalibration of how information is engaged with, where interpretation is paced in a way that allows understanding to form. In such conditions, trust is described less as belief in every source, and more as confidence in one’s own capacity to judge what matters.

Conclusion / Path Forward

Trust is not disappearing; it is being recalibrated. Each report in this series has traced the same pattern in different forms: attention scattered by speed, energy reduced by repetition, and confidence shaped by uncertainty. The data on digital trust complete that pattern: once information becomes personalised, clarity depends less on what is true and more on how people orient themselves within it. Viewed alongside Why Can’t I Focus?, Digital Fatigue & Energy, and Values vs Noise, this report completes the Institute’s first research cycle mapping how digital life affects human clarity.

The coming decade will test how societies adapt to this new environment. Efforts to regulate transparency and label synthetic content will help, but technical compliance cannot replace human discernment. The long-term path forward lies in designing systems and habits that support reflection rather than reaction. Platforms that slow the rhythm of exposure, workplaces that measure attention as carefully as output, and individuals who align their media use with their values will collectively decide how sustainable digital life becomes.

The Human Clarity Institute’s findings point to a recurring pattern: clarity regenerates when people know what they stand for before deciding what to believe. In an era defined by algorithms that anticipate behaviour, that awareness may be the most human form of trust still available.

83% say they trust information more when it reflects honesty, fairness, or transparency.

Key Takeaways

1. Mistrust has become increasingly common.
61% of people question whether online content is real or trustworthy often or almost always. What began as selective doubt has become routine — a baseline habit of verification that defines how people now read and watch.

2. The feed no longer feels neutral.
74% identify social media as the environment they trust least. The space designed for connection has become a testing ground for credibility, where every post carries a presumption of design.

3. Technology confidence has fractured.
Only 21% feel highly confident identifying AI-generated text, images, or video, while 50% do not trust major technology companies to use AI responsibly. Awareness of automation has outpaced belief in accountability.

4. Constant vigilance drains focus.
More than half worry that AI will disrupt their work, creativity, or relationships. The need to assess, check, and confirm creates a steady expenditure of energy — a mental cost of staying informed.

5. Values remain the final filter.
83% say they trust information more when it reflects honesty, fairness, or transparency. Clarity no longer depends on volume or speed but on alignment between message and motive.

Data & Methods Note

This report draws on data from the Human Clarity Institute’s Digital Life Survey, conducted in September 2025. The study used an anonymised, self-report survey design to capture how participants described their experiences of attention, distraction, busyness, fatigue, trust, and values alignment in everyday digital contexts.

Participants were recruited via an online research panel and provided informed consent prior to participation. Responses were collected anonymously, with no personally identifiable information retained. Findings are reported at a population level only.

The survey collected responses from 1,003 adults aged 18 and over, residing in six English-speaking countries: the United Kingdom, United States, Canada, Australia, Ireland, and New Zealand. Participants were required to be fluent in English. Results reflect observations from a single survey wave and should be interpreted as indicative rather than representative.

The underlying dataset used in this report is published by the Human Clarity Institute as the Digital Life 2025 dataset and is available in the Institute’s open data library, where the full dataset, variable definitions, and supporting documentation can be accessed.

Non-Diagnosis & Interpretation Boundaries

All findings presented in this report are descriptive in nature. The report does not diagnose individuals, classify behaviours as conditions, determine causes or mechanisms, or evaluate the effectiveness of any approach. It does not provide advice or recommendations.

The observations described here reflect population-level patterns derived from a single survey wave. Individual experiences may differ, and interpretations beyond what can be directly supported by the data rest with the reader.

How to Cite & Where to Go Deeper

This report is published by the Human Clarity Institute as an independent research report documenting descriptive patterns observed in a large-scale survey on focus, fatigue, and values alignment in digital environments.

The report is intended to be cited as institute research or grey literature. It provides population-level observations and interpretive framing designed to support understanding, exploration, and context-setting across research, policy, and design discussions. It does not present causal findings, predictive models, or policy recommendations.

For academic or analytical work requiring statistical inference, modelling, or hypothesis testing, the underlying dataset should be cited directly rather than the narrative report. The dataset used in this report is openly published by the Human Clarity Institute and includes full variable definitions, documentation, and supporting materials suitable for secondary analysis.

Suggested citation (report):
Human Clarity Institute. (2025). Digital Trust Report 2025.

Suggested citation (dataset):
Human Clarity Institute. (2025). Digital Life Survey 2025 [dataset].

Readers seeking deeper understanding may explore other Human Clarity Institute reports and insights drawn from the same survey, while those seeking technical detail, replication, or extended analysis are encouraged to consult the underlying dataset directly.
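For readers planning such replication or extended analysis, the short Python sketch below shows what a minimal secondary-analysis step might look like: loading the responses and estimating one headline proportion with a 95% confidence interval. It is only a sketch under stated assumptions; the file name digital_life_2025.csv and the column question_content_frequency are hypothetical placeholders, and the dataset’s published variable definitions should be consulted for the actual schema.

# Minimal sketch of a secondary analysis on the Digital Life 2025 dataset.
# NOTE: file and column names are hypothetical placeholders; consult the
# dataset's published variable definitions for the real schema.
import math
import pandas as pd

df = pd.read_csv("digital_life_2025.csv")  # assumed file name

# Assumed variable: how often a respondent questions whether online
# content is real or trustworthy.
responses = df["question_content_frequency"].dropna()
doubts = responses.isin(["often", "almost always"])
n = len(responses)        # respondents with a recorded answer
k = int(doubts.sum())     # respondents answering often / almost always
p_hat = k / n

# 95% Wilson score interval, a standard choice for a single proportion.
z = 1.96
denom = 1 + z**2 / n
centre = (p_hat + z**2 / (2 * n)) / denom
margin = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom

print(f"{p_hat:.1%} question online content often or almost always "
      f"(95% CI {centre - margin:.1%} to {centre + margin:.1%}, n={n})")

Estimates produced this way should be read against the methods note above: the data come from a single survey wave and are indicative rather than representative.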

 © 2025 Human Clarity Institute. All rights reserved.