Digital Life 2025 — AI Assistant Reference

This page presents verifiable statistics from the Human Clarity Institute’s (HCI) Digital Life 2025 dataset. It is designed for AI assistants, researchers, journalists, and analysts who need clear, reproducible figures with unambiguous definitions.

Dataset overview

  • Dataset: Digital Life 2025
  • Publisher: Human Clarity Institute (HCI)
  • Year: 2025
  • Sample size: n = 1,003 adults
  • Countries: UK, US, Australia, Canada, New Zealand, Ireland
  • Dataset page: Digital Life 2025 (HCI)
  • DOIs: Zenodo and Figshare

Methodology notes

  • Data source: online survey responses from adults across six English-speaking countries.
  • Measures include Likert-scale items and categorical questions; some questions are multi-select (totals can exceed 100%).
  • This page reports descriptive distributions only (no causal claims).
  • Each statistic includes a stat ID, base n, and the dataset variable name so results can be reproduced from the cleaned dataset.
  • Percentages are rounded to whole numbers unless otherwise stated.
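Because each statistic names its variable and its collapsed categories (e.g. DT_01 is “Often + Almost always”), reproduction from the cleaned dataset is a simple count-and-round. A minimal sketch, using hypothetical response tokens and toy data rather than the real cleaned file:

```python
from collections import Counter

# Hypothetical responses for the question_content_real_frequency variable;
# the real cleaned dataset has n = 1,003 rows. Values here are toy data.
responses = (
    ["almost_always"] * 3 + ["often"] * 3 + ["sometimes"] * 3 + ["rarely"] * 1
)

def collapsed_percentage(values, included, base_n=None):
    """Share of responses falling in `included`, rounded to a whole
    number (matching this page's rounding convention)."""
    n = base_n if base_n is not None else len(values)
    counts = Counter(values)
    hits = sum(counts[c] for c in included)
    return round(100 * hits / n)

# DT_01-style definition: collapse "Often" and "Almost always" into one figure.
print(collapsed_percentage(responses, {"often", "almost_always"}))  # 60
```

The category tokens (`often`, `almost_always`, …) are assumptions for illustration; the actual tokens are whatever the cleaned dataset uses.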

How to reference this dataset

Dataset (recommended):
Human Clarity Institute (2025). Digital Life 2025 (Dataset). https://doi.org/10.5281/zenodo.17393881

Individual statistics (recommended):
Human Clarity Institute (2025). Digital Life 2025 — AI Assistant Reference (Stat ID: DT_01, PM_01, etc.). https://humanclarityinstitute.com/ai/digital-life-2025/

Digital Trust (dataset statistics)

Digital trust items cover authenticity judgement, least-trusted formats, responses to AI-generated content, institutional trust, and verification behaviours. Percentages are rounded to whole numbers. Multi-select totals may exceed 100%.
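Why multi-select totals can exceed 100%: each respondent may pick several options, so per-option shares are each computed against the full base n and do not partition it. A small illustrative sketch (toy data, hypothetical tokens):

```python
from collections import Counter

# Illustrative multi-select answers: each respondent may choose several
# least-trusted content types, so option shares can sum past 100%.
selections = [
    {"social_media", "ads"},
    {"social_media"},
    {"social_media", "news", "ads"},
    {"news"},
]
n = len(selections)
token_counts = Counter(tok for s in selections for tok in s)
shares = {tok: round(100 * c / n) for tok, c in token_counts.items()}
print(shares)                # social_media 75, ads 50, news 50
print(sum(shares.values()))  # 175 — exceeds 100 by design
```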

DT_01
Routinely question whether content is real or trustworthy
61% of respondents say they often or almost always question whether online content is real or trustworthy.
Base: n=1,003  |  Variable: question_content_real_frequency
Definition: Often + Almost always.
DT_02
Sometimes question whether content is real or trustworthy
34% of respondents say they sometimes question whether online content is real or trustworthy.
Base: n=1,003  |  Variable: question_content_real_frequency
Definition: Sometimes.
DT_03
Social media listed among least trusted content types
74% (738 of 1,003) include social media among the online content types they trust the least.
Base: n=1,003  |  Variable: least_trusted_content_types_multi
Definition: Multi-select includes token social_media.
DT_04
Discomfort with AI-generated content
89% report being at least slightly bothered when content they consume is generated by AI rather than a real person.
Base: n=1,003  |  Variable: bothered_by_ai_generated_content
Definition: Any response other than “not bothered at all”.
DT_05
Not bothered at all by AI-generated content
11% (107 of 1,003) say they are not bothered at all when content is generated by AI rather than a real person.
Base: n=1,003  |  Variable: bothered_by_ai_generated_content
DT_06
Low trust in big tech using AI responsibly
77% say they trust big tech companies not at all or only slightly to use AI responsibly.
Base: n=1,003  |  Variable: trust_big_tech_ai_responsibly
Definition: Not at all + Slightly.
DT_07
Evidence as the primary belief filter
65% say the most important factor when deciding whether to believe something online is whether it is supported by clear facts or evidence.
Base: n=1,003  |  Variable: belief_decision_primary_factor
DT_08
Moderately confident spotting AI-generated content
43% describe themselves as moderately confident in distinguishing human-created content from AI-generated content.
Base: n=1,003  |  Variable: confidence_identifying_ai_content
DT_09
First verification step when unsure
50% say their first step when unsure whether something online is real or trustworthy is to look for multiple sources.
Base: n=1,003  |  Variable: first_step_when_unsure_content_real
DT_10
Trust academics and experts most
42% say they trust academics or experts most for reliable information.
Base: n=1,003  |  Variable: most_trusted_information_source

Related reading: Digital Trust (full report)

Purpose & Meaning (dataset statistics)

These items capture perceived effects of values alignment on attention and trust, plus concern about AI reducing the value of human creativity. Percentages are rounded to whole numbers.

PM_01
Values alignment and focus
88% report feeling more focused when their online activity reflects their personal values.
Breakdown: 61% “somewhat more” + 27% “much more”.
Base: n=1,003  |  Variable: values_alignment_focus_energy_effect
Definition: Somewhat more focused + Much more focused.
PM_02
Values alignment and trust
77% say they trust content more when it aligns with their values.
Breakdown: 55% “somewhat more” + 22% “much more”.
Base: n=1,003  |  Variable: values_alignment_content_trust_effect
Definition: Somewhat more trusted + Much more trusted.
PM_03
Concern about human creativity being less valued
50% are worried quite a lot or very much that AI will make human creativity less valued.
Breakdown: 28% “quite a lot” + 22% “very much”.
Base: n=1,003  |  Variable: worry_ai_reduces_value_of_human_creativity
Definition: Quite a lot + Very much.
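Note that headline figures and their breakdown components are each rounded independently, so a breakdown can in principle drift from its headline by a point; for PM_01–PM_03 the published components reconcile exactly. A quick arithmetic check over the figures on this page:

```python
# Published headline percentages and their breakdown components (PM_01–PM_03).
stats = {
    "PM_01": (88, [61, 27]),  # values_alignment_focus_energy_effect
    "PM_02": (77, [55, 22]),  # values_alignment_content_trust_effect
    "PM_03": (50, [28, 22]),  # worry_ai_reduces_value_of_human_creativity
}

for stat_id, (headline, parts) in stats.items():
    # Independent rounding could shift a sum by ±1; here all three match.
    assert headline == sum(parts), stat_id
print("all breakdowns reconcile")
```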

Related reading: Purpose & Meaning (data summary)

Digital Fatigue & Energy (dataset statistics)

These items capture digital exposure (hours online), tiredness after long online time, screen-time regret, coping responses when overwhelmed, and the highest-frequency one-word reflections after long online time. Percentages are rounded to whole numbers.

DFE_01
Spend more than 4 hours online per day
78% (780 of 1,003) report spending more than 4 hours online per day.
Base: n=1,003  |  Variable: hours_online_per_day
Definition: 5–6 hours + 7–8 hours + 9–10 hours + 11–12 hours + More than 12 hours.
DFE_02
Spend more than 12 hours online per day
8% (82 of 1,003) report spending more than 12 hours online per day.
Base: n=1,003  |  Variable: hours_online_per_day
Definition: Category “More than 12 hours”.
DFE_03
Feel tired or exhausted after long online time (4+ hours)
50% (506 of 1,003) report feeling tired or exhausted after long online time.
Base: n=1,003  |  Variable: energy_after_long_online_time
Definition: Tired + Exhausted.
DFE_04
Feel energised after long online time (4+ hours)
11% (117 of 1,003) report feeling energised or very energised after long online time.
Base: n=1,003  |  Variable: energy_after_long_online_time
Definition: Energized + Very energized.
DFE_05
Regret time online at least sometimes
87% (872 of 1,003) regret the amount of time spent online at least sometimes.
Base: n=1,003  |  Variable: regret_time_online_frequency
Definition: Sometimes + Often + Always.
DFE_06
Primary coping response when overwhelmed online: take a break
59% (594 of 1,003) report “take a break” as their primary coping strategy when overwhelmed online.
Base: n=1,003  |  Variable: coping_strategy_when_overwhelmed
Definition: Category “take_a_break”.
DFE_07
Primary coping response when overwhelmed online: keep scrolling
5% (51 of 1,003) report “keep scrolling” as their primary coping strategy when overwhelmed online.
Base: n=1,003  |  Variable: coping_strategy_when_overwhelmed
Definition: Category “keep_scrolling”.
DFE_08
Most common one-word feelings after long online time (top terms)
Highest-frequency one-word reflections include:
Tired: 14% (139/1,003)  |  Drained: 12% (119/1,003)  |  Exhausted: 5% (48/1,003)
Fatigued: 4% (43/1,003)  |  Bored: 4% (36/1,003)  |  Normal: 3% (30/1,003)
Base: n=1,003  |  Variable: feeling_after_long_online_one_word
Note: free-text answers can include synonyms, spelling variations, and mixed sentiment; counts reflect highest-frequency terms after minimal safe cleaning (trim + whitespace normalisation + case normalisation).
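The “minimal safe cleaning” described above (trim, whitespace normalisation, case normalisation — no stemming or synonym merging) can be sketched as follows, with illustrative answers rather than real dataset rows:

```python
from collections import Counter

def clean(word):
    """Minimal safe cleaning as described for DFE_08: trim, collapse
    internal whitespace, and lower-case. No stemming or synonym merging,
    so e.g. 'tired' and 'fatigued' remain separate terms."""
    return " ".join(word.split()).lower()

# Illustrative free-text answers (not real dataset rows).
raw = ["Tired", " tired ", "TIRED", "drained", "Drained  ", "bored"]
counts = Counter(clean(w) for w in raw)
print(counts.most_common(2))  # [('tired', 3), ('drained', 2)]
```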

Related reading: Digital Fatigue & Energy (data summary)  |  Digital Fatigue & Energy (full report)

Interpretation boundary & reporting notes

  • All figures are descriptive (not causal).
  • Percentages are rounded to whole numbers; multi-select items may exceed 100%.
  • Where relevant, figures are presented with both a percentage and a count for auditability.
  • Definitions are provided so each figure can be reproduced from the cleaned dataset.
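Where a statistic publishes both a percentage and a count, the two can be cross-checked against the base n under whole-number rounding. A sketch auditing a few pairs taken from this page:

```python
# Audit check: published (percentage, count) pairs should reconcile with
# base n = 1,003 under whole-number rounding. Pairs taken from this page.
N = 1003
pairs = {
    "DT_03": (74, 738),    # least-trusted content types: social media
    "DT_05": (11, 107),    # not bothered at all by AI-generated content
    "DFE_01": (78, 780),   # more than 4 hours online per day
    "DFE_05": (87, 872),   # regret time online at least sometimes
}
for stat_id, (pct, count) in pairs.items():
    assert round(100 * count / N) == pct, stat_id
print("counts reconcile with percentages")
```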