How People Experience Decision-Making and Digital Systems in 2026

People are increasingly using digital and AI systems as part of how they make decisions — not just what they decide, but how they think through choices. This page summarises findings from the Human Clarity Institute’s Decision-Making and Digital Systems 2026 dataset, based on 358 valid responses across six English-speaking countries.

The data shows a consistent pattern: digital and AI systems are most often used when decisions feel uncertain, effortful, or time-pressured, while most people still verify outputs, intervene when needed, and retain responsibility for final outcomes.


Construct tags: Decision Support · Agency · Decision Dependence · Cognitive Load

What the data shows

Four signals stand out:

  • People use digital or AI systems to support decisions
  • Reliance increases when decisions feel difficult
  • Verification behaviour is widespread
  • Most people retain personal responsibility even when using systems

Digital and AI systems are increasingly part of decision-making, but are not typically treated as automatic decision-makers.

In practice, the dominant pattern is not full delegation — it is supported decision-making. People rely on digital or AI systems selectively, particularly when decisions feel uncertain or effortful, while still seeing themselves as responsible for the final outcome.

How people actually use AI in decisions

People are not simply using digital systems for answers — they are using them as part of the decision-making process itself.

46% report frequently using digital or AI systems to help make decisions, but only 29% say these systems are their default starting point.

Use of digital and AI systems tends to be selective rather than automatic. People often incorporate them into their decision process when they want additional clarity, structure, or a way to check their thinking, rather than treating them as the primary source of a decision. This pattern shows that digital and AI systems are most often used as decision support rather than decision-makers.

People tend to rely more on digital or AI systems when decisions feel uncertain or effortful, rather than using them consistently across all decisions.

58%

Reliance increases when decisions feel mentally difficult

58% rely more on systems when decisions feel effortful.

People are more likely to use digital or AI systems when decisions feel difficult, often turning to them to reduce effort or gain additional clarity.

85%

Most people double-check outputs before using them

Verification is widespread.

People typically review and question system outputs before acting, rather than accepting them at face value.

91%

Personal responsibility remains strong

Most still feel responsible for decisions made with system support.

People generally continue to see themselves as responsible for final decisions, even when digital or AI systems are used to support them.

Reliance increases when decisions are difficult

The strongest behavioural pattern is that reliance increases when decisions feel uncertain, complex, or time-pressured.

61% are more likely to rely on systems when uncertain, and 58% when under time pressure.

AI becomes most influential when people feel uncertain, under pressure, or mentally stretched, rather than being used consistently across all decisions.

By the numbers (from HCI data)

70%

Feel more independent without systems

Many people associate a stronger sense of independence with decisions made without digital or AI support.

65%

AI helps provide clarity when unsure

People often turn to digital or AI systems when they feel uncertain, using them to gain clarity, structure options, or check their thinking.

Patterns observed in the data

Decision support is common, but not dominant

Digital and AI systems are widely used, but not the default starting point for most decisions.

Reliance increases under pressure

People are more likely to use AI when decisions feel difficult, uncertain, or time-sensitive.

Verification remains central

Most users check outputs before acting, meaning decisions are still filtered through human judgement.

Responsibility remains human

Even when AI is used, people continue to see themselves as responsible for outcomes.

In practice, system input is usually incorporated into decisions rather than replacing personal judgement.

Key takeaways

  • AI is commonly used to support decisions, but not as the default starting point
  • Reliance increases when decisions feel difficult or uncertain
  • Most people verify outputs and retain responsibility
  • Supported decision-making is more common than full delegation
  • AI changes how decisions are made, without replacing human judgement
  • Digital and AI systems are most often used when people feel uncertain, rather than across all decisions equally

Methodology

This dataset forms part of the Human Clarity Institute’s Human–AI Experience research programme. It examines how people use digital and AI systems in decision-making, how these systems affect clarity, reliance, verification behaviour, and responsibility, and how individuals balance support from systems with personal judgement. The study uses a cross-sectional online survey design and focuses on descriptive patterns in AI-assisted decision-making, including clarity under uncertainty, reliance conditions, verification behaviour, override confidence, and perceived responsibility.

Data were collected via the Prolific research platform from adults across six English-speaking countries. Participants provided explicit consent for anonymised open publication as part of HCI’s open research programme.

Sampling & participants

  • Final n: 358
  • Countries: Australia, United States, United Kingdom, Ireland, Canada, New Zealand
  • Eligibility: Adults aged 18+ from six English-speaking countries
  • Recruitment platform: Prolific

The resulting dataset should be interpreted as a non-probability convenience sample and is not intended to represent national populations.

The cleaned dataset, variable dictionary, and reuse terms are publicly available through the HCI dataset repository: Decision-Making and Digital Systems 2026 Dataset →

Data integrity

All percentages reported on this page are calculated from valid responses in the cleaned dataset (n = 358). Percentages are rounded to the nearest whole number for readability. Unless otherwise stated, summary percentages combine respondents selecting 5–7 on the 7-point agreement scale (slightly agree, moderately agree, or strongly agree).

Where percentages refer to subgroups or conditions (such as uncertainty, difficulty, or time pressure), the wording on the page makes that explicit. Comparative patterns reflect differences in reported behaviour within the relevant condition rather than across the full sample.
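The aggregation described above (combining responses of 5–7 on the 7-point scale, then rounding) can be sketched as a small function. This is an illustrative sketch only: the function name, the example responses, and the assumption that items are coded 1–7 are mine, not taken from the published variable dictionary.

```python
# Hypothetical sketch of the "top-3-box" percentage calculation described
# above. Response coding (integers 1-7) is an assumption; consult the
# dataset's variable dictionary for the actual coding.

def top_box_percentage(responses, threshold=5):
    """Share of valid responses at or above `threshold` on a 1-7 agreement
    scale (slightly, moderately, or strongly agree), rounded to the nearest
    whole number. Invalid or missing values are excluded before computing."""
    valid = [r for r in responses if r is not None and 1 <= r <= 7]
    if not valid:
        return None
    agree = sum(1 for r in valid if r >= threshold)
    return round(100 * agree / len(valid))

# Example: 7 of 12 valid responses fall in the 5-7 range -> 58%
sample = [7, 6, 5, 5, 6, 5, 7, 4, 3, 2, 4, 1]
print(top_box_percentage(sample))
```

Subgroup percentages (for example, reliance under time pressure) would apply the same calculation after filtering responses to the relevant condition.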

Participant IDs, timestamps, and direct identifiers were removed before publication as part of the anonymisation process.

This dataset is exploratory and descriptive in nature. It does not support causal inference and results should be interpreted as observed patterns within the survey sample.

This dataset is released as open research to support transparent analysis of AI-assisted decision-making, clarity under uncertainty, reliance behaviour, verification patterns, and human responsibility in digitally mediated decision environments.

Data use and reuse terms are outlined in our Data Use & Disclaimer.

Explore further analysis on Human Clarity Insights, or browse the full collection of HCI research reports.