Zero Issues Does Not Mean Accessible
Why a clean scan is not the same thing as an accessible experience, with better language for reporting scope and confidence.
Last updated March 30, 2026
The wrong mental model creates false confidence. A green report can still hide real blockers, and a compliance program can still fail if no one is honest about scope, evidence, and what has not been tested yet.
This is where evidence matters more than slogans. A green report is only meaningful when it is paired with scope, evidence, and an honest account of what the scanner could not evaluate in the first place.
What changed in practice
Teams get into trouble when they translate 'no detectable issues on this sample' into 'the product is accessible.' That jump is how false confidence reaches leadership decks, procurement responses, and public claims.
The better alternative is plain reporting language: sampled pages, checks run, issues found, manual work still required, and areas outside current coverage.
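That plain reporting language can be made concrete as a small data structure. This is a minimal sketch (the field names and `ScanSummary` class are illustrative, not a VertaaUX schema) showing a report statement that leads with scope and pending work instead of a verdict:

```python
from dataclasses import dataclass, field

@dataclass
class ScanSummary:
    """Plain-language scan report: scope first, conclusions second.

    Hypothetical shape for illustration; not a real VertaaUX API.
    """
    sampled_pages: int
    total_pages: int
    checks_run: int
    issues_found: int
    manual_checks_pending: list = field(default_factory=list)
    out_of_scope: list = field(default_factory=list)

    def statement(self) -> str:
        # Deliberately avoids "accessible" or "compliant" as a conclusion.
        return (
            f"Automated checks ({self.checks_run}) found {self.issues_found} "
            f"issues on {self.sampled_pages} of {self.total_pages} pages; "
            f"manual review pending: {', '.join(self.manual_checks_pending) or 'none'}; "
            f"not covered: {', '.join(self.out_of_scope) or 'nothing noted'}."
        )

summary = ScanSummary(12, 480, 92, 0,
                      ["screen-reader flows"], ["PDF downloads"])
```

Note that even a zero-issue result reads as a bounded claim here: twelve pages sampled, specific work still pending, specific areas not covered.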
What scanners can prove and what they cannot
- Scanners can confirm many structural failures quickly, but they cannot certify captions, task clarity, screen-reader comprehension, or whether a complex widget is truly usable.
- Coverage reports are still useful because they show where automation is strong enough to prevent obvious regressions.
- Evidence quality matters: screenshots, selectors, criterion mapping, and sample scope should travel with the result.
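One way to enforce that evidence travels with results is to refuse to report findings that arrive without it. The record shape below is an assumption for illustration (the keys and the `is_reportable` helper are hypothetical), but it shows the idea: screenshot, selector, criterion mapping, and sample scope are part of the finding, not an afterthought:

```python
# Hypothetical finding record: evidence and scope travel with the result.
finding = {
    "rule": "image-alt",
    "wcag_criterion": "1.1.1",                 # criterion mapping
    "selector": "main > img:nth-of-type(2)",   # where it was found
    "screenshot": "evidence/img-alt-001.png",  # visual evidence
    "sample": {"page": "/checkout", "viewport": "1280x800",
               "date": "2026-03-30"},          # sample scope
    "confidence": "automated",                 # vs. "human-verified"
}

def is_reportable(f: dict) -> bool:
    """A finding without evidence and scope should not reach a report."""
    required = {"rule", "wcag_criterion", "selector", "sample"}
    return required <= f.keys()
```

A gate like this turns "evidence quality matters" from a slogan into a pipeline rule.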
Where teams still get it wrong
- Manual verification is still required for screen-reader behavior, reading order quality, reflow edge cases, and subjective clarity problems.
- Customer-facing claims need human review because language like 'fully accessible' creates legal and commercial risk.
- Sampling decisions themselves need judgment: which journeys, templates, and states actually represent the product?
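Because sampling is a judgment call, it helps to record the rationale next to each choice and to make the unsampled remainder explicit. A minimal sketch, with an assumed plan structure:

```python
# Illustrative sample plan: each entry records WHY it was included.
sample_plan = [
    {"journey": "checkout",
     "templates": ["cart", "payment"],
     "states": ["empty cart", "validation error"],
     "rationale": "highest-traffic revenue path"},
    {"journey": "account signup",
     "templates": ["form"],
     "states": ["default", "server error"],
     "rationale": "first impression for new users"},
]

def coverage_gaps(plan: list, all_journeys: list) -> list:
    """Make what was NOT sampled explicit, not implicit."""
    sampled = {p["journey"] for p in plan}
    return sorted(set(all_journeys) - sampled)
```

Reporting `coverage_gaps(sample_plan, all_journeys)` alongside the findings is what keeps "no issues found" from silently implying "no issues anywhere".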
A pragmatic checklist
- Report every scan with scope, date, environment, and the limits of the method used.
- Use automated findings to remove obvious issues early, then plan manual checks for the highest-risk states.
- Avoid absolute language in sales or compliance materials unless a qualified review supports it.
- Keep an audit history so teams can describe progress without overstating certainty.
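The checklist above can be backed by an append-only audit history. The entry format here is an assumption for illustration, not a VertaaUX feature; the point is that every scan records scope, date, environment, and method limits, so later progress claims stay honest:

```python
import datetime

def record_scan(history: list, *, scope: str, environment: str,
                method_limits: list, issues_found: int) -> list:
    """Append one scan record; never rewrite past entries."""
    history.append({
        "date": datetime.date.today().isoformat(),
        "scope": scope,
        "environment": environment,
        "method_limits": method_limits,  # what this method could NOT test
        "issues_found": issues_found,
    })
    return history

history = record_scan(
    [],
    scope="12 templates, logged-out states",
    environment="staging",
    method_limits=["no screen-reader pass", "no reflow check"],
    issues_found=3,
)
```

With method limits stored per entry, a team can truthfully say "issues dropped from 3 to 0 within this scope" instead of "we became accessible".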
- Keep visible: what standard, scope, and sample set the article is actually talking about.
- Useful evidence: logs, screenshots, criteria mapping, and explicit test limits.
- Safer pattern: report confidence and open questions instead of absolute certainty.
Evidence pack manifest
article: "zero-issues-does-not-mean-accessible"
include:
  - "standards mapping"
  - "test scope and sample set"
  - "screenshots or recordings of representative failures"
  - "manual follow-up notes"
exclude_claims:
  - "fully accessible"
How VertaaUX fits
VertaaUX reports should help teams communicate confidence honestly by making coverage visible, attaching evidence, and explicitly marking the parts of the experience that still need human validation.
References
- Deque: The Automated Accessibility Coverage Report
- WebAIM: The WebAIM Million 2025 report
- W3C: WCAG-EM 2.0 draft evaluation methodology
- W3C: Web Content Accessibility Guidelines (WCAG) 2.2
Treat governance as an operating discipline, not a PDF you produce under pressure. That is the difference between reporting quality and actually shipping it.