How People Experience Decision-Making and Digital Systems in 2026
People are increasingly using digital and AI systems as part of how they make decisions — not just what they decide, but how they think through choices. This page summarises findings from the Human Clarity Institute’s Decision-Making and Digital Systems 2026 dataset, based on 358 valid responses across six English-speaking countries.
The data shows a consistent pattern: digital and AI systems are most often used when decisions feel uncertain, effortful, or time-pressured, while most people still verify outputs, intervene when needed, and retain responsibility for final outcomes.
View the Decision-Making and Digital Systems 2026 Dataset
What the data shows
Four signals stand out: people use digital or AI systems to support decisions, reliance increases when decisions feel difficult, verification behaviour is widespread, and most people retain personal responsibility even when using systems.
Digital and AI systems are increasingly part of decision-making, but are not typically treated as automatic decision-makers.
In practice, the dominant pattern is not full delegation — it is supported decision-making. People rely on digital or AI systems selectively, particularly when decisions feel uncertain or effortful, while still seeing themselves as responsible for the final outcome.
How people actually use AI in decisions
People are not simply using digital systems for answers — they are using them as part of the decision-making process itself.
46% report frequently using digital or AI systems to help make decisions, but only 29% say these systems are their default starting point.
Use of digital and AI systems tends to be selective rather than automatic. People typically incorporate them into their decision process when they want additional clarity, structure, or a way to check their thinking, rather than treating them as the primary source of a decision. In other words, these systems most often serve as decision support, not as decision-makers.
People tend to rely more on digital or AI systems when decisions feel uncertain or effortful, rather than using them consistently across all decisions.
Reliance increases when decisions feel mentally difficult
58% rely more on systems when decisions feel effortful.
People are more likely to use digital or AI systems when decisions feel difficult, often turning to them to reduce effort or gain additional clarity.
Most people double-check outputs before using them
Verification is widespread.
People typically review and question system outputs before acting, rather than accepting them at face value.
Personal responsibility remains strong
Most still feel responsible for decisions made with system support.
People generally continue to see themselves as responsible for final decisions, even when digital or AI systems are used to support them.
Reliance increases when decisions are difficult
The strongest behavioural pattern is that reliance increases when decisions feel uncertain, complex, or time-pressured.
61% are more likely to rely on systems when uncertain, and 58% when under time pressure.
AI becomes most influential when people feel uncertain, under pressure, or mentally stretched, rather than being used consistently across all decisions.
By the numbers (from HCI data)
Feel more independent without systems
Many people associate a stronger sense of independence with decisions made without digital or AI support.
AI helps provide clarity when unsure
People often turn to digital or AI systems when they feel uncertain, using them to gain clarity, structure options, or check their thinking.
Patterns observed in the data
Decision support is common, but not dominant
Digital and AI systems are widely used, but not the default starting point for most decisions.
Reliance increases under pressure
People are more likely to use AI when decisions feel difficult, uncertain, or time-sensitive.
Verification remains central
Most users check outputs before acting, meaning decisions are still filtered through human judgement.
Responsibility remains human
Even when AI is used, people continue to see themselves as responsible for outcomes.
In practice, system input is usually incorporated into decisions rather than replacing personal judgement.
Key takeaways
- AI is commonly used to support decisions, but not as the default starting point
- Reliance increases when decisions feel difficult or uncertain
- Most people verify outputs and retain responsibility
- Supported decision-making is more common than full delegation
- AI changes how decisions are made, without replacing human judgement
- Digital and AI systems are most often used when people feel uncertain, rather than across all decisions equally
Questions this data can answer
How common is it to use AI for decisions?
AI is commonly used as a decision support tool, but not universally. 46% report frequent use, while many others use it more selectively. Digital and AI systems are already part of how many people approach decisions, but are not the default starting point for most.
Does reliance increase when decisions are difficult?
Yes. 58% say they rely more on systems when decision-making feels effortful. This indicates that AI becomes more influential when people feel uncertain or mentally stretched, rather than being used consistently across all decisions.
Do people verify outputs?
Most people actively verify system outputs before acting. 85% say they double-check or question outputs. This suggests that AI-supported decisions are typically filtered through human judgement, rather than being accepted without review.
Do people retain responsibility for decisions made with AI?
Yes. 91% say they still feel responsible for decisions made with system support. This shows that even when AI is involved, people usually see themselves as the final decision-maker rather than delegating full control.
The Agency Gap
The Agency Gap explores how AI consultation may begin shaping human decision-making while people still experience responsibility and judgement as their own.
Why Can’t I Focus?
Why Can’t I Focus? investigates how digital distraction fragments attention and how clarity about what matters most can help restore sustained focus.
Methodology
This dataset forms part of the Human Clarity Institute’s Human–AI Experience research programme. The programme examines how people use digital and AI systems in decision-making; how these systems affect clarity, reliance, verification behaviour, and responsibility; and how individuals balance support from systems with personal judgement. The study uses a cross-sectional online survey design and focuses on descriptive patterns in AI-assisted decision-making, including clarity under uncertainty, reliance conditions, verification behaviour, override confidence, and perceived responsibility.
Data were collected via the Prolific research platform from adults across six English-speaking countries. Participants provided explicit consent for anonymised open publication as part of HCI’s open research programme.
Sampling & participants
- Final n: 358
- Countries: Australia, United States, United Kingdom, Ireland, Canada, New Zealand
- Eligibility: Adults aged 18+ from six English-speaking countries
- Recruitment platform: Prolific
The resulting dataset should be interpreted as a non-probability convenience sample and is not intended to represent national populations.
The cleaned dataset, variable dictionary, and reuse terms are publicly available through the HCI dataset repository: Decision-Making and Digital Systems 2026 Dataset →
Data integrity
All percentages reported on this page are calculated from valid responses in the cleaned dataset (n = 358). Percentages are rounded to the nearest whole number for readability. Unless otherwise stated, summary percentages combine respondents selecting 5–7 on the 7-point agreement scale (slightly agree, moderately agree, or strongly agree).
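The reporting rule above (combine the 5–7 band on the 7-point scale, then round to the nearest whole number) can be sketched as a small calculation. This is an illustrative sketch only: the response values below are made up, and real variable names come from the dataset’s variable dictionary, not from this example.

```python
def agreement_pct(responses):
    """Percentage of valid responses in the 5-7 agreement band
    (slightly, moderately, or strongly agree) on a 7-point scale,
    rounded to the nearest whole number, matching this page's rule."""
    valid = [r for r in responses if r is not None]  # keep valid responses only
    agree = sum(1 for r in valid if r >= 5)          # top three scale points
    return round(100 * agree / len(valid))

# Illustrative values only; published figures come from the cleaned dataset (n = 358).
sample = [7, 5, 3, 6, None, 2, 5]
print(agreement_pct(sample))  # 4 of 6 valid responses agree -> 67
```

Note that missing responses are excluded before the percentage is taken, consistent with reporting on valid responses only.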
Where percentages refer to subgroups or conditions (such as uncertainty, difficulty, or time pressure), the wording on the page makes that explicit. Comparative patterns reflect differences in reported behaviour within the relevant condition rather than across the full sample.
Participant IDs, timestamps, and direct identifiers were removed before publication as part of the anonymisation process.
This dataset is exploratory and descriptive in nature. It does not support causal inference, and results should be interpreted as observed patterns within the survey sample.
This dataset is released as open research to support transparent analysis of AI-assisted decision-making, clarity under uncertainty, reliance behaviour, verification patterns, and human responsibility in digitally mediated decision environments.
Data use and reuse terms are outlined in our Data Use & Disclaimer.
Explore further analysis on Human Clarity Insights, or browse the full collection of HCI research reports.