Updated: 28th March 2025 – aligned partial/automated designations with the categorisation in a later article: mind the WCAG automation gap. Also added links to some accessibility testing bookmarklets.
Apologies: I was too permissive in my initial appraisal, and the aim of this exercise was not to be exhaustive, but as some people have taken it to be an exhaustive list I have added the relevant additional success criteria (SC). Here is the updated list and overview of the 55 (56 minus 1, as the Parsing criterion is obsolete):
| Evaluation method | SC count |
| --- | --- |
| ⚠️ Partially automated | 49 |
| 🤖 Automated | 6 |
tools I use
I prefer to use tools that do not provide multi-page results, as I don’t trust the output and also find the signal-to-noise ratio unhelpful.
ARC Toolkit
Like any automated tool, it is not really useful unless you know when it is talking garbage, but I have some trust in it and its rules as I was involved in its development. Note I do not use the aXe extension, not because I don’t trust its output, but because I find its marketing/upsell features annoying; as soon as ARC Toolkit starts with whack-a-mole upselling I will bin it as well. If you want to test with the aXe rules there are plenty of implementations of the rule set in other tools.
Web developer tools in Chrome
Chrome has a number of tools built in, such as the accessibility tree inspector and color contrast checker, and others such as the ARC Toolkit can be added as extensions.
I also find bookmarklets invaluable aids in manual testing; some of my faves:
- A11y audit bookmarklets by Ian Lloyd
- CSS Bookmarklets for Testing and Fixing by Adrian Roselli
- Text spacing bookmarklet by me
- Target size bookmarklet by me
The role and property names exposed in the accessibility tree of browser developer tools do not always match the names exposed to the platform, or they depend on the platform-level implementation and thus how they are conveyed to screen reader users; they are sometimes the internal names used by browser implementors only. If in doubt, check the specification that implementers use: for example, an `img` may be exposed as `graphic` or `image` in different browsers and accessibility APIs.
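By way of illustration (my own sketch, not from any spec): the markup below is identical everywhere, yet different inspectors may label its computed role differently, as noted above.

```html
<!-- Identical markup, but one browser's inspector may label the
     computed role "image" while another shows "graphic"; what is
     announced depends on the platform accessibility API. -->
<img src="chart.png" alt="Quarterly sales chart, Q1 to Q4">
```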
automation blues
Here is a list of WCAG 2.2 Level A and AA success criteria that I think cannot be completely tested with automated tools. These criteria require manual testing because they involve meaning, usability, intent, or user experience that automated tools cannot fully evaluate.
NOBODY is saying that automated tools are not useful, only that they cannot provide comprehensive coverage of WCAG 2.2 criteria.
1.1.1 Non-Text Content (A)
Automated tools can detect missing `alt` attributes but cannot determine if alternative text is meaningful.
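For example (a sketch of my own), both of these pass an automated “alt attribute present” check; only a human can judge whether the text is a meaningful equivalent:

```html
<!-- Passes an automated "alt present" check, but tells a screen
     reader user nothing useful about the image. -->
<img src="team-photo.jpg" alt="image123.jpg">

<!-- Also passes the same check; whether this is an adequate
     equivalent is a human judgement. -->
<img src="team-photo.jpg" alt="The support team at the 2024 company picnic">
```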
1.2.1 Audio-only and Video-only (Prerecorded) (A)
Automation can detect presence, but not accuracy or completeness. Requires manual testing to confirm conformance due to reliance on meaning, usability, or user intent.
1.2.2 Captions (Prerecorded) (A)
Tools detect presence of caption track, not quality. Requires manual testing to confirm conformance due to reliance on meaning, usability, or user intent.
1.2.3 Audio Description or Media Alternative (Prerecorded) (A)
Presence can be detected; equivalence requires human judgment. Requires manual testing to confirm conformance due to reliance on meaning, usability, or user intent.
1.2.4 Captions (Live) (AA)
Tools detect live captions; real-time accuracy needs human review. Requires manual testing to confirm conformance due to reliance on meaning, usability, or user intent.
1.2.5 Audio Description (Prerecorded) (AA)
Cannot assess description completeness automatically. Requires manual testing to confirm conformance due to reliance on meaning, usability, or user intent.
1.3.1 Info and Relationships (A)
Tools can check for semantic HTML elements (e.g., headings, lists, tables) but cannot determine if relationships are accurately conveyed.
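A sketch of the kind of case a tool cannot settle: both snippets can look identical in the browser, but only one conveys the heading relationship programmatically, and no tool knows the first was meant to be a heading.

```html
<!-- Looks like a heading, but exposes no heading semantics. -->
<div class="heading-style">Delivery options</div>

<!-- Exposes the same visible text as a real level 2 heading. -->
<h2>Delivery options</h2>
```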
1.3.2 Meaningful Sequence (A)
Tools can analyze DOM order, but manual review is needed to verify if reading order makes sense.
1.3.3 Sensory Characteristics (A)
Requires human testing to determine if instructions rely solely on sensory characteristics (e.g., “Press the red button” without another indicator).
1.3.4 Orientation (AA)
Automated tools can check device orientation, but manual testing is needed to confirm that no essential content is lost when orientation changes.
1.3.5 Identify Input Purpose (AA)
Tools can verify the presence of autocomplete attributes, but cannot confirm if the assigned purpose is correct.
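For example (hypothetical fields of my own): a tool can confirm that a valid autocomplete token is present, but not that it matches the field’s actual purpose:

```html
<!-- A tool can confirm "email" is a valid token; it cannot confirm
     it is the right purpose for this field (here it is). -->
<label for="work-email">Work email</label>
<input id="work-email" type="email" autocomplete="email">

<!-- Also a valid token, but wrong for the field; spotting the
     mismatch is a human job. -->
<label for="phone">Phone number</label>
<input id="phone" type="tel" autocomplete="email">
```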
1.4.1 Use of Color (A)
Tools can detect color contrast failures but cannot verify if color alone is used to convey meaning.
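For example (a sketch of my own): a contrast checker has nothing to say about the first snippet; a human has to notice that colour alone carries the meaning:

```html
<!-- Which fields are required is conveyed by colour alone. -->
<p>Required fields are shown in red.</p>
<label style="color: red;" for="name">Name</label>
<input id="name">

<!-- The requirement is also conveyed in text, so colour is not the
     only visual means. -->
<label for="name2">Name (required)</label>
<input id="name2" required>
```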
1.4.2 Audio Control (A)
Automation can detect auto-playing audio and presence of controls, but may miss custom implementations.
1.4.4 Resize Text (AA)
Automation can simulate zoom but cannot verify usability or legibility.
1.4.5 Images of Text (AA)
Automation may flag text in images but cannot determine if it’s essential, exempt or decorative.
1.4.10 Reflow (AA)
Tools can simulate zooming and viewport resizing but cannot verify usability when content is reflowed.
1.4.11 Non-text Contrast (AA)
Automation can check contrast but may miss the purpose of visual indicators.
1.4.12 Text Spacing (AA)
Tools can check CSS styles but manual testing is needed to ensure readability when spacing is adjusted.
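The mechanical part is applying the spacing overrides the SC describes (roughly what my text spacing bookmarklet does); whether the text is still readable and nothing is clipped or overlapping afterwards is the manual part. A sketch of the override styles:

```html
<style>
  /* Spacing overrides per SC 1.4.12: line height 1.5 × font size,
     letter spacing 0.12 × font size, word spacing 0.16 × font size,
     spacing after paragraphs 2 × font size. */
  * {
    line-height: 1.5 !important;
    letter-spacing: 0.12em !important;
    word-spacing: 0.16em !important;
  }
  p {
    margin-bottom: 2em !important;
  }
</style>
```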
1.4.13 Content on Hover or Focus (AA)
Tools can detect hover/focus-triggered content but cannot confirm usability, persistence, or dismissibility.
2.1.1 Keyboard (A)
Automated tools can detect keyboard-inaccessible elements, but manual testing is required to verify full keyboard navigation and operability.
2.1.2 No Keyboard Trap (A)
Tools can identify focusable elements but cannot confirm if a user is truly trapped without manual testing.
2.1.4 Character Key Shortcuts (A)
Tools can detect single-character shortcuts, but manual testing is needed to check for unintended usability issues.
2.2.1 Timing Adjustable (A)
Tools can check for timeouts, but human testing is needed to assess usability and whether time extensions are provided appropriately.
2.2.2 Pause, Stop, Hide (A)
Automated tools can detect if moving content exists, but manual testing is needed to verify if users can pause or stop the motion.
2.4.1 Bypass Blocks (A)
Tools can detect skip links and landmarks, but cannot test their functionality.
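For example (a sketch): a tool can see that a skip link and a landmark exist; only activating the link with a keyboard confirms that focus actually moves past the repeated blocks.

```html
<a class="skip-link" href="#main">Skip to main content</a>
<nav>…site navigation repeated on every page…</nav>
<!-- tabindex="-1" lets focus land on the target when the link is followed. -->
<main id="main" tabindex="-1">
  <h1>Page heading</h1>
</main>
```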
2.4.2 Page Titled (A)
Tools can verify the presence and structure of the <title>
element, but cannot determine whether the title text is meaningful or descriptive.
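For example, both of these satisfy a “title present and non-empty” check; only the second tells anyone which page they are on:

```html
<!-- Present, non-empty, useless. -->
<title>Untitled document</title>

<!-- Present, non-empty, descriptive. -->
<title>Order confirmation – Example Store</title>
```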
2.4.3 Focus Order (A)
Some tools can analyze DOM order, but only human testing can confirm logical, intuitive focus flow.
2.4.4 Link Purpose (In Context) (A)
Tools can detect link text, but manual testing is needed to verify if the link text makes sense in context.
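For example, a tool can read the link text; whether it makes sense in context is a human call:

```html
<!-- The accessible name is just "Read more": fine for a tool,
     meaningless to someone scanning a list of links out of context. -->
<p>We have updated our refunds policy. <a href="/refunds">Read more</a></p>

<!-- The purpose is clear from the link text itself. -->
<p><a href="/refunds">Read more about our refunds policy</a></p>
```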
2.4.5 Multiple Ways (AA)
Tools can check if multiple navigation methods exist, but manual review is needed to confirm usability.
2.4.6 Headings and Labels (AA)
Tools can verify if headings exist, but cannot determine if they are meaningful and correctly structured.
2.4.7 Focus Visible (AA)
Automated tools can detect if a focus indicator exists, but they cannot determine if it is visible enough in various conditions.
2.4.11 Focus Not Obscured (Minimum) (AA)
Some tools can detect focus visibility issues, but manual review is needed to confirm usability.
2.5.3 Label in Name (A)
Tools can check if a label matches an accessible name, but cannot confirm if the label correctly conveys its purpose.
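For example (a sketch): a tool can flag the string mismatch in the first button, but whether the accessible name in the second conveys the control’s purpose, and works for someone activating it by voice, is a human judgement.

```html
<!-- Visible label "Search" is not contained in the accessible name
     "Find"; a voice user saying "click Search" may fail. Tools can
     flag this string mismatch. -->
<button aria-label="Find">Search</button>

<!-- Passes the string check (the name contains the visible label);
     whether the name conveys the purpose well is a human call. -->
<button aria-label="Search this site">Search</button>
```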
2.5.7 Dragging Movements (AA)
Tools can detect drag-and-drop interactions, but manual testing is needed to verify if alternative input methods are provided.
2.5.8 Target Size (Minimum) (AA)
Tools can measure target sizes, but cannot confirm if targets are functionally usable.
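The measurable part: SC 2.5.8 asks for targets of at least 24 by 24 CSS pixels (with exceptions such as spacing, inline links and equivalent controls). A sketch of a style a tool can measure, while “functionally usable” remains a human judgement:

```html
<style>
  /* A tool can measure that this icon button meets the 24 × 24 CSS
     pixel minimum; whether it is comfortable to hit on a real
     device is assessed manually. */
  .icon-button {
    min-width: 24px;
    min-height: 24px;
  }
</style>
<button class="icon-button" aria-label="Close">✕</button>
```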
3.1.1 Language of Page (A)
Tools can detect language attributes, but human review is needed to verify if they match the content.
3.1.2 Language of Parts (AA)
Tools can flag language attributes but cannot verify if they are correctly applied to multilingual content.
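A combined sketch for 3.1.1 and 3.1.2: a tool can confirm that lang attributes exist and use valid language tags, but not that they match the language actually written:

```html
<!-- A tool can confirm lang is present and "en" is a valid tag;
     it cannot confirm the page really is written in English. -->
<html lang="en">
  <body>
    <p>Our office is in Berlin.</p>
    <!-- A passage in another language needs its own lang attribute;
         checking that "de" matches this text is a human job. -->
    <p lang="de">Unser Büro ist in Berlin.</p>
  </body>
</html>
```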
3.2.1 On Focus (A)
Tools can detect focus-triggered changes, but manual testing is needed to determine if they are disruptive.
3.2.2 On Input (A)
Tools can identify input fields that trigger changes, but human review is required to assess predictability.
3.2.3 Consistent Navigation (AA)
Tools cannot confirm whether small visual differences impact usability, or determine whether changes in navigation order cause disorientation.
3.2.4 Consistent Identification (AA)
Automation cannot reliably understand the intended function of elements, know whether two buttons perform the same action, or detect semantic inconsistencies across pages.
3.2.6 Consistent Help (A)
Tools cannot determine whether the help provided is actually helpful, usable, or understandable.
3.3.1 Error Identification (A)
Tools can detect some error messages but not clarity or placement.
3.3.2 Labels or Instructions (A)
Tools can check for form labels, but cannot evaluate if they are clear, meaningful, and useful.
3.3.3 Error Suggestion (AA)
Automation may detect presence, but not usefulness of suggestions.
3.3.4 Error Prevention (Legal, Financial, Data) (AA)
Tools can check for confirmations, but not their effectiveness.
3.3.7 Redundant Entry (A)
Tools cannot confirm whether the user was already asked to enter the same information earlier in the process, determine whether the repeated entry is necessary for security or workflow reasons, or evaluate whether a mechanism is provided to avoid requiring re-entry (e.g., a “Same as billing address” checkbox).
3.3.8 Accessible Authentication (Minimum) (A)
Tools cannot assess whether the overall authentication process excludes users with cognitive disabilities.
4.1.2 Name, Role, Value (A)
Tools can detect missing attributes, but manual testing is needed to confirm that these attributes are meaningful and correctly implemented.
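For example (a sketch, with a hypothetical toggleMenu script): a tool can flag the missing attributes on the first control; on the second it can only confirm the attributes exist, not that the role, name and state reflect what the control actually does.

```html
<!-- Focusable-looking UI with no role, no accessible name and no
     state: tools can flag the missing attributes. -->
<div class="menu-toggle" onclick="toggleMenu()">☰</div>

<!-- Attributes are present, so automated checks pass; whether
     aria-expanded is kept in sync with the menu when toggleMenu()
     (hypothetical) runs is a manual functional check. -->
<button aria-expanded="false" aria-controls="menu" onclick="toggleMenu()">
  Menu
</button>
<ul id="menu" hidden>…</ul>
```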
4.1.3 Status Messages (AA)
Tools can detect use of ARIA roles like `status`, `alert`, or `log`, but cannot determine if the message is meaningful, announced appropriately by screen readers, or relevant to context.
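For example (a sketch): a tool can detect the status role; whether the message injected into it is meaningful, announced at the right moment, and relevant to what the user just did is checked with a screen reader.

```html
<!-- A tool can detect role="status"; it cannot tell whether the text
     later injected here is announced appropriately or makes sense in
     context, e.g. setting its textContent to "3 items in your basket"
     after an add-to-basket action. -->
<div role="status" id="basket-status"></div>
```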
Alan Price – Sell Sell Sell
Lyrics
Sell, sell, sell, sell everything you stand for
Tell, tell, tell, tell all the people that you care for
Running here, running there
Keep it moving, sonny, don't despair
Because the next one will be, the next one will be, the next one will be, the best one of the year
Give, give, give, give everything you paid for
Run, run, run, run for everything you prayed for
Keep that smile on your face
With a smile you're welcome any place
Because the next one will be, the next one will be, the next one will be, the best one of the year
Can I interest you in this article of mine?
Can I interest you to spare some of your time?
Can I interest you in this life of mine?
Won't you listen, listen, listen, listen, listen?
Sell, sell, sell, sell everything you stand for
Tell, tell, tell, tell all the people that you care for
Running here, running there
Keep it moving, sonny, don't despair
Because the next one will be, the next one will be, the next one will be, the best one of the year