
VertaaUX Articles

Zero Issues Does Not Mean Accessible

Why a clean scan is not the same thing as an accessible experience, and better language teams can use for reporting scope and confidence.

Petri Lahdelma · 3 min read

Last updated March 30, 2026

Compliance · Accessibility · Testing · WCAG

The wrong mental model creates false confidence. A green report can still hide real blockers, and a compliance program can still fail if no one is honest about scope, evidence, and what has not been tested yet.

That report is only meaningful when it is paired with scope, evidence, and an honest account of what the scanner could not evaluate in the first place.

This is where evidence matters more than slogans.

What changed in practice

Teams get into trouble when they translate 'no detectable issues on this sample' into 'the product is accessible.' That jump is how false confidence reaches leadership decks, procurement responses, and public claims.

The better alternative is plain reporting language: sampled pages, checks run, issues found, manual work still required, and areas outside current coverage.
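One way to make that reporting language concrete is a short summary block attached to every scan. This is a sketch only; the field names and values are illustrative, not a fixed schema.

YAML
scan_summary:
  sampled_pages: 14
  checks_run: "automated ruleset, default configuration"
  issues_found: 0
  manual_work_required:
    - "screen-reader walkthrough of the checkout journey"
    - "reflow check at 400% zoom"
  outside_current_coverage:
    - "PDF downloads"
    - "third-party chat widget"

Note that issues_found: 0 sits next to manual_work_required and outside_current_coverage, so a reader cannot mistake the clean result for a claim about the whole product.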

What scanners can prove and what they cannot

  • Scanners can confirm many structural failures quickly, but they cannot certify captions, task clarity, screen-reader comprehension, or whether a complex widget is truly usable.
  • Coverage reports are still useful because they show where automation is strong enough to prevent obvious regressions.
  • Evidence quality matters: screenshots, selectors, criterion mapping, and sample scope should travel with the result.
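As a sketch of the "evidence travels with the result" idea, an individual finding might carry its context like this (criterion is a real WCAG success criterion; the other field names and values are illustrative):

YAML
finding:
  criterion: "WCAG 2.2 - 1.1.1 Non-text Content"
  selector: "img.hero-banner"
  screenshot: "evidence/hero-banner.png"
  sample_scope: "home, search, and product templates"
  detected_by: "automated scan"
  confidence: "high - structural check"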

Where teams still get it wrong

  • Manual verification is still required for screen-reader behavior, reading order quality, reflow edge cases, and subjective clarity problems.
  • Customer-facing claims need human review because language like 'fully accessible' creates legal and commercial risk.
  • Sampling decisions themselves need judgment: which journeys, templates, and states actually represent the product?

A pragmatic checklist

  1. Report every scan with scope, date, environment, and the limits of the method used.
  2. Use automated findings to remove obvious issues early, then plan manual checks for the highest-risk states.
  3. Avoid absolute language in sales or compliance materials unless a qualified review supports it.
  4. Keep an audit history so teams can describe progress without overstating certainty.
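Checklist item 1 can be captured as a minimal record per scan, and the audit history in item 4 is then just an append-only list of these records. The structure below is illustrative, not a required format.

YAML
scan_record:
  date: "2026-03-28"
  environment: "staging, desktop browser, 1280x800"
  scope: "checkout journey, 6 templates"
  method: "automated ruleset only"
  method_limits: "no screen-reader, reflow, or cognitive checks"
  issues_found: 2
  issues_resolved: 2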

Keep visible: the standard, scope, and sample set each report and claim actually covers.

Testing boundary lens

Automation draws a hard boundary: it can verify structure, but not comprehension, task clarity, or whether a complex widget is truly usable.

Useful evidence: logs, screenshots, criteria mapping, and explicit test limits.

Claims lens

Any outward claim should be scoped to what was actually verified, and by which method; everything else is still an open question.

Safer pattern: report confidence and open questions instead of absolute certainty.
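In report form, the safer pattern might read like this (wording and field names are illustrative):

YAML
claim:
  statement: "No detectable issues in the sampled templates"
  confidence: "high for structural checks; screen-reader flow untested"
  open_questions:
    - "reading order on the results page"
    - "reflow behavior of the filter panel"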

Evidence pack manifest

YAML
article: "zero-issues-does-not-mean-accessible"
include:
  - "standards mapping"
  - "test scope and sample set"
  - "screenshots or recordings of representative failures"
  - "manual follow-up notes"
  - "explicit notes on what automation could not evaluate"
exclude_claims:
  - "fully accessible"
  - "zero accessibility issues"

How VertaaUX fits

VertaaUX reports should help teams communicate confidence honestly by making coverage visible, attaching evidence, and explicitly marking the parts of the experience that still need human validation.


Treat governance as an operating discipline, not a PDF you produce under pressure. That is the difference between reporting quality and actually shipping it.
