How People Experience Confidence and Doubt in Their Own Judgement Online — 2026 Data
This page summarises findings from the Human Clarity Institute’s Trust Calibration in Information Environments 2026 dataset, based on 394 valid responses across six English-speaking countries. The analysis focuses on how people experience confidence in their own judgement when deciding whether information is trustworthy, and how that confidence can still coexist with doubt.
View the Trust Calibration in Information Environments 2026 Dataset
What the data shows
Four signals stand out in this dataset: confidence in personal judgement is widespread; a meaningful minority experience doubt; the two states frequently occur together rather than forming separate groups; and even among those who doubt their judgement, most still report confidence. Together, these findings suggest that judging whether information is trustworthy is not a fixed state of certainty but an ongoing balance between confidence and doubt.
Feel confident in their own judgement
Most people (88%) report confidence in their ability to judge whether information is trustworthy.
Experience doubt in their own judgement
A meaningful minority (30%) report doubt in their own judgement when deciding whether information is trustworthy.
Experience both confidence and doubt
Nearly one in four (23%) experience both confidence and doubt when judging whether information is trustworthy, showing that these states often coexist rather than replacing one another.
Doubt rarely replaces confidence
Among those who experience doubt, 76% still report confidence in their judgement, showing that doubt rarely replaces self-trust entirely.
Overall, the data show that confidence in personal judgement remains the dominant experience, but it is not stable or absolute. For many people, confidence and doubt operate together rather than in opposition, suggesting that judging whether information is trustworthy often involves continuous internal calibration rather than fixed certainty.
By the numbers (from HCI data)
Feel confident without also reporting doubt
Most respondents fall into a confidence-without-doubt pattern when judging whether information is trustworthy.
Report doubt without confidence
Only a small minority report doubt in their judgement without also reporting confidence.
Do not report doubt in their judgement
Most respondents do not cross the agreement threshold for doubting their own judgement.
Express very high confidence
Only a small share (9%) strongly agree that they feel fully confident, indicating that absolute certainty is uncommon.
Express very high doubt
Very strong doubt is rare, suggesting that severe judgement instability is not the dominant pattern.
Report neither confidence nor doubt
A small minority sit outside both judgement poles, suggesting a more neutral or unsettled stance.
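One way to see how the six figures above fit together: under the 5–7 agreement threshold described in the methodology, each respondent reduces to two flags — confident and doubting — and every respondent falls into exactly one of four mutually exclusive patterns. The sketch below is illustrative only, not the institute's analysis code:

```python
# The figures above all derive from two boolean flags per respondent:
# "confident" and "doubting" (agreement of 5-7 on the respective item).
# Each respondent falls into exactly one of four mutually exclusive patterns.
def pattern(confident: bool, doubting: bool) -> str:
    if confident and doubting:
        return "both"              # confidence and doubt coexist
    if confident:
        return "confidence only"   # confident without doubt
    if doubting:
        return "doubt only"        # doubt without retained confidence
    return "neither"               # outside both judgement poles

print(pattern(confident=True, doubting=True))  # -> "both"
```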
Patterns observed in the data
Confidence remains the default, but not the full story
Most people report confidence in their ability to judge whether information is trustworthy. This suggests that self-trust remains intact for the majority.
Doubt does not replace confidence — it sits alongside it
The co-occurrence pattern shows that many people maintain confidence in their judgement even while questioning it. This creates a more complex internal experience than a simple confident-versus-uncertain divide.
Pure doubt without confidence is uncommon
Only a small minority report doubt without also feeling confident in their judgement, indicating that most doubt appears within retained self-trust rather than as complete loss of confidence.
Most doubt appears as tension rather than instability
Stronger forms of doubt are relatively uncommon, suggesting that for most people, doubt appears as intermittent friction or second-guessing rather than persistent loss of confidence.
Judgement online appears to involve ongoing calibration
Taken together, these findings suggest that evaluating information is not a fixed state of certainty. Instead, people appear to continuously balance confidence and doubt as they interpret what they see.
Questions this data can answer
These questions reflect common real-world queries about confidence and doubt when evaluating information. For deeper answers across the trust cluster, explore the linked hubs below.
Do people trust their own judgement online?
88% feel confident in their ability to judge whether information is trustworthy.
How common is self-doubt when evaluating information?
30% experience doubt in their own judgement.
Can confidence and doubt both exist at the same time?
23% experience both confidence and doubt when judging whether information is trustworthy.
Do people who doubt themselves still feel confident overall?
76% of those who experience doubt still report confidence in their judgement.
Is complete certainty common?
Only 9% report complete confidence, indicating that absolute certainty is uncommon.
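As a quick consistency check on the figures above (not part of the published analysis): the 23% "both" figure is roughly what the 30% doubt rate and the 76% conditional confidence rate jointly imply, with the small gap due to rounding.

```python
# How the headline percentages relate to one another.
p_doubt = 0.30             # share reporting doubt in their own judgement
p_conf_given_doubt = 0.76  # of those, share also reporting confidence
p_both = p_doubt * p_conf_given_doubt
print(f"implied 'both' share: {p_both:.1%}")  # 22.8%, rounding to the reported 23%
```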
Digital Trust
Digital Trust examines how people judge credibility, navigate uncertainty, and respond to authenticity concerns in AI-shaped digital environments.
Values vs Noise
Values vs Noise explores how digital distraction and busyness can obscure purpose — and how reconnecting with values helps restore direction and clarity.
Methodology
This dataset forms part of the Human Clarity Institute’s Human–AI Experience research programme, examining how people calibrate trust, assess reliability, and experience confidence or doubt in their own judgement when navigating digital information environments. The study uses a cross-sectional online survey design and focuses on descriptive patterns in epistemic confidence, internal judgement tension, and trust calibration under conditions of uncertainty.
Data were collected via the Prolific research platform on 2026-02-09 from adults across six English-speaking countries. Participants provided explicit consent for anonymised open publication as part of HCI’s open research programme.
Sampling & participants
- Final n: 394
- Countries: United Kingdom, United States, Canada, Australia, New Zealand, Ireland
- Eligibility: Adults (18+)
- Recruitment platform: Prolific
The resulting dataset should be interpreted as a non-probability convenience sample and is not intended to represent national populations.
The cleaned dataset, variable dictionary, and reuse terms are publicly available through the HCI dataset repository: Trust Calibration in Information Environments 2026 Dataset →
Data integrity
All percentages reported on this page are calculated from valid responses in the cleaned dataset (n = 394, unless otherwise specified). Percentages are rounded to the nearest whole number for readability. Unless otherwise stated, summary percentages combine respondents selecting 5–7 on the 7-point agreement scale (slightly agree, moderately agree, or strongly agree).
Co-occurrence figures on this page are calculated only from respondents who provided valid answers to both judgement items. Anonymisation procedures removed participant IDs, timestamps, and direct identifiers before publication.
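As a concrete illustration of these rules, the following is a minimal pandas sketch, not the institute's actual analysis code. The file name and the column names `confidence` and `doubt` are hypothetical stand-ins; the real variable names are defined in the published variable dictionary.

```python
# Minimal sketch of the summary rules described above.
# File and column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("trust_calibration_2026.csv")  # hypothetical export of the cleaned dataset

AGREE = 5  # responses of 5-7 (slightly/moderately/strongly agree) count as agreement

confident = df["confidence"] >= AGREE  # NaN compares False, so invalid rows drop out
doubting = df["doubt"] >= AGREE

def pct(mask: pd.Series, base: pd.Series) -> int:
    """Percentage of rows in `base` where `mask` holds, rounded to a whole number."""
    return round(100 * mask[base].mean())

# Single-item shares use all valid responses to that item.
print("confident:", pct(confident, df["confidence"].notna()))
print("doubting:", pct(doubting, df["doubt"].notna()))

# Co-occurrence figures use only respondents valid on *both* judgement items.
valid_pair = df["confidence"].notna() & df["doubt"].notna()
print("both:", pct(confident & doubting, valid_pair))
print("confidence among doubters:", pct(confident, valid_pair & doubting))
```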
This dataset is exploratory and descriptive in nature. It does not support causal inference, and results should be interpreted as observed patterns within the survey sample.
Suggested citation:
Human Clarity Institute. (2026). Trust Calibration in Information Environments (Dataset). https://doi.org/10.5281/zenodo.18625243
Data use and reuse terms are outlined in our Data Use & Disclaimer.
Explore further analysis on Human Clarity Insights, or browse the full collection of HCI research reports.