In example.test.js, let's write a test to check for missing image alt text:
// ...

runTests(() => {
  describe("images", () => {
    const docAllImages = Array.from(document.querySelectorAll("img"));

    it("have an alt attribute", () => {
      docAllImages.forEach((image) => {
        expect(image.getAttribute("alt"), image.outerHTML).to.exist;
      });
    });
  });
});
This test gets all images in the DOM, loops over them, and checks to make sure each one has an alt attribute.
Error messages in JavaScript include line and character numbers to help quickly locate and fix errors, but we don't get that context for HTML elements. By passing image.outerHTML as the second argument of our expect() function, we can include the element in the error message to make finding and correcting the offending element easier. We'll use this for custom error messages any time we're checking all instances of an element.
If this is too verbose or noisy for more complex elements, we could instead include only the element's opening tag, or assign all elements unique data-test-id values to identify them directly in test error messages.
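As a sketch of the first option, a tiny helper could shorten the message to just the element's opening tag. The name getOpeningTag() is hypothetical, not something provided by Web Test Runner or Chai:

// A minimal sketch of the "opening tag only" idea — getOpeningTag() is a
// hypothetical helper we'd define ourselves in example.test.js.
function getOpeningTag(element) {
  // Keep everything up to and including the first ">" of the serialized element.
  return element.outerHTML.slice(0, element.outerHTML.indexOf(">") + 1);
}

// Usage in an assertion:
// expect(image.getAttribute("alt"), getOpeningTag(image)).to.exist;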
In our example.html file, let's add an image without an alt attribute to get a failing test:
<img src="/example.jpg" />
Web Test Runner reports the following in the command line:
❌ images > have an alt attribute
AssertionError: <img src="/example.jpg">: expected null to exist
at example.test.js:9:41
Chromium: 1 failed
Firefox: 1 failed
Webkit: 1 failed
Our test fails as expected. To get the test to pass, we simply need to add an alt attribute, either as an empty string ("") for decorative content or with a useful description for meaningful content.
The neat thing about this test is that it passes if no <img> elements exist, but fails if they exist and don't have an alt attribute. So, this test can be used from the beginning of any project without creating irrelevant noise.
Let's write tests for other common WCAG 2 failures.
We'll add another test in example.test.js:
runTests(() => {
  // ...

  describe("links", () => {
    const docAllLinks = Array.from(document.querySelectorAll("a"));

    it("have a non-empty href attribute", () => {
      docAllLinks.forEach((link) => {
        const hrefValue = link.getAttribute("href");

        expect(hrefValue, link.outerHTML).to.exist;
        expect(hrefValue, link.outerHTML).to.not.equal("");
      });
    });

    it("are not empty", () => {
      docAllLinks.forEach((link) => {
        expect(link.textContent, link.outerHTML).to.not.equal("");
      });
    });
  });
});
Similar to the previous test, we get all links from the DOM, loop over them, and check that their text content is not empty. We're also checking that links have a non-empty href attribute, just as an example of other HTML validation we'd like to do.
Time for another test suite:
runTests(() => {
  // ...

  describe("form inputs", () => {
    const docAllFormInputs = Array.from(
      document.querySelectorAll("input, textarea, select")
    );

    it("have a dedicated label", () => {
      docAllFormInputs.forEach((formInput) => {
        const inputId = formInput.id;
        expect(inputId, formInput.outerHTML).to.exist;

        const inputLabel = document.querySelector(`label[for="${inputId}"]`);
        expect(inputLabel, formInput.outerHTML).to.exist;
        expect(inputLabel.textContent, formInput.outerHTML).to.not.equal("");
      });
    });
  });
});
In this test, we get all form input elements, loop over them, and assert multiple things:
- The input has an id attribute
- A <label> element exists with a for attribute whose value matches the input's id
- The text content of that <label> element is not empty

It's possible to provide an accessible name for form inputs with other techniques, such as aria-label or aria-labelledby, but I'm being opinionated here because I prefer actual <label> elements and text content. Testing for an accessible name through other means is just as easy.
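As a sketch of that alternative, a looser test placed inside the same describe("form inputs") block could accept any of the three techniques. The acceptance logic below is my own and is simplified to a single aria-labelledby ID:

// A minimal sketch that accepts aria-label, aria-labelledby, or a dedicated
// <label> as the accessible name. Simplified: only a single aria-labelledby
// ID is handled.
it("have an accessible name", () => {
  docAllFormInputs.forEach((formInput) => {
    const ariaLabel = formInput.getAttribute("aria-label");
    const labelledByElement = document.getElementById(
      formInput.getAttribute("aria-labelledby") || ""
    );
    const dedicatedLabel = document.querySelector(`label[for="${formInput.id}"]`);

    const hasAccessibleName = Boolean(
      (ariaLabel && ariaLabel.trim()) ||
        (labelledByElement && labelledByElement.textContent.trim()) ||
        (dedicatedLabel && dedicatedLabel.textContent.trim())
    );

    expect(hasAccessibleName, formInput.outerHTML).to.be.true;
  });
});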
This is nearly identical to our empty links test:
runTests(() => {
  // ...

  describe("buttons", () => {
    const docAllButtons = Array.from(document.querySelectorAll("button"));

    it("are not empty", () => {
      docAllButtons.forEach((button) => {
        expect(button.textContent, button.outerHTML).to.not.equal("");
      });
    });
  });
});
Let's write a test for the final common WCAG 2 failure:
runTests(() => {
  // ...

  describe("document", () => {
    it("has a set language", () => {
      const languageValue = document.querySelector("html").getAttribute("lang");

      expect(languageValue).to.equal("en");
    });
  });
});
This one is the most straightforward, but valuable nonetheless. If your site has internationalization, you'd want to check that the lang value matches the current locale and updates based on user preference.
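Here's a sketch of that variation, placed in the same describe("document") block. getCurrentLocale() is a hypothetical helper standing in for wherever your project exposes the active locale:

// A sketch, assuming a hypothetical getCurrentLocale() that returns the
// locale the page was rendered with (for example "en", "de", or "fr").
it("has a lang attribute matching the current locale", () => {
  const languageValue = document.querySelector("html").getAttribute("lang");

  expect(languageValue).to.equal(getCurrentLocale());
});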
We're starting to build a universal test suite that will preserve accessibility across our projects.
Let's add some further assertions around heading levels, which are a common source of errors in accessibility audits:
runTests(() => {
  // ...

  describe("headings", () => {
    const docAllHeadings = Array.from(
      document.querySelectorAll("h1, h2, h3, h4, h5, h6")
    );

    it("have <h1> element as the first heading", () => {
      expect(
        getHeadingLevel(docAllHeadings[0]),
        docAllHeadings[0].outerHTML
      ).to.equal(1);
    });

    it("have a single <h1>", () => {
      docAllHeadings.forEach((heading, index) => {
        // Don't fail the test if the first heading on the page is `<h1>`
        if (index === 0 && getHeadingLevel(heading) === 1) {
          return;
        }

        expect(getHeadingLevel(heading), heading.outerHTML).to.not.equal(1);
      });
    });

    it("don't skip heading levels", () => {
      docAllHeadings.forEach((heading, index) => {
        let previousHeadingLevel = 0;
        const currentHeadingLevel = getHeadingLevel(heading);

        if (index !== 0) {
          previousHeadingLevel = getHeadingLevel(docAllHeadings[index - 1]);
        }

        expect(currentHeadingLevel, heading.outerHTML).to.be.lessThanOrEqual(
          previousHeadingLevel + 1
        );
      });
    });
  });
});

function getHeadingLevel(heading) {
  return +heading.tagName.toLowerCase().replace("h", "");
}
With these tests, we can ensure heading levels are used properly on our page. We should only have a single <h1> heading. This heading should also be the first heading on the page. Lastly, headings should never skip levels, such as an <h2> followed by an <h4>. We make use of a small utility function, getHeadingLevel(), to keep our code more concise and mistake-proof.
Although these tests enforce correct heading logic, they can't evaluate if heading levels are appropriate for the content they describe. That always requires thoughtful consideration. Tests free the developer to spend more time on wider UX considerations like this.
So far, we've made simple assertions about our static HTML to make sure it's valid and not causing common accessibility failures. But as we create interactive patterns in our UI, the accessibility considerations become much more complex. We need to use HTML and ARIA to create custom semantics. We need to handle click, tap, and keyboard events. And we need to display and hide content, all while updating multiple attributes in concert.
Let's follow the Red, Green, Refactor workflow as we build a custom disclosure (show/hide) pattern.
We'll start with our tests in example.test.js to make sure our initial HTML and ARIA are correct:
runTests(() => {
  // ...

  describe("disclosure", () => {
    const docDisclosureToggle = document.querySelector(
      `[data-component="disclosureToggle"]`
    );
    const docDisclosureContent = document.querySelector(
      `[data-component="disclosureContent"]`
    );

    it("has a toggle button", () => {
      expect(docDisclosureToggle).to.exist;
      expect(docDisclosureToggle.tagName).to.equal("BUTTON");
    });

    it("has a toggle button with the expected ARIA attributes", () => {
      expect(docDisclosureToggle.getAttribute("aria-expanded")).to.exist;
      expect(docDisclosureToggle.getAttribute("aria-expanded")).to.equal(
        "false"
      );

      expect(docDisclosureToggle.getAttribute("aria-controls")).to.exist;
      expect(docDisclosureToggle.getAttribute("aria-controls")).to.not.equal(
        ""
      );
    });

    it("has a content panel with the corresponding controls ID", () => {
      expect(docDisclosureContent).to.exist;
      expect(docDisclosureContent.id).to.equal(
        docDisclosureToggle.getAttribute("aria-controls")
      );
    });

    it("has a hidden content panel by default", () => {
      expect(docDisclosureContent.getAttribute("hidden")).to.exist;
    });
  });
});
These tests will fail, meaning we're in the "Red" stage. Now, we can author our initial HTML in example.html to get our tests to pass:
<button
  type="button"
  aria-expanded="false"
  aria-controls="disclosureContent"
  data-component="disclosureToggle"
>
  Open disclosure
</button>

<div id="disclosureContent" data-component="disclosureContent" hidden>
  <p>Disclosure content</p>
</div>
Our tests now pass, so we're in the "Green" stage. I'd make a Git commit at this point. If there are any improvements we'd like to make, we can make them with confidence as long as our tests are passing, and commit again each time we're in a working state.
Let's write a failing test to begin interacting with our disclosure in example.test.js:
runTests(() => {
  // ...

  describe("disclosure", () => {
    // ...

    it("opens the disclosure on keyboard Enter press", async () => {
      docDisclosureToggle.focus();

      await sendKeys({
        down: "Enter",
      });

      expect(docDisclosureToggle.getAttribute("aria-expanded")).to.equal(
        "true"
      );
      expect(docDisclosureContent.getAttribute("hidden")).to.not.exist;

      // TODO: Reset the disclosure
    });
  });
});
With this test, we move focus to our toggle button, then use the sendKeys function from Web Test Runner commands to send a native keyboard Enter key press. We're checking that our toggle button gets updated attributes, and that our disclosure content is no longer hidden. We'd want similar tests for mouse click and the keyboard Space press, as well as testing that the disclosure closes on these interactions if it's already open.
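As a sketch of one of those follow-up tests, a mouse click check could reuse the same assertions. Here I'm using the DOM's built-in click() rather than Web Test Runner's pointer commands, and assuming the component toggles on click:

// A minimal sketch of the mouse interaction test, added to the same
// describe("disclosure") block. Uses the element's native click() method.
it("opens the disclosure on mouse click", () => {
  docDisclosureToggle.click();

  expect(docDisclosureToggle.getAttribute("aria-expanded")).to.equal("true");
  expect(docDisclosureContent.getAttribute("hidden")).to.not.exist;

  // TODO: Reset the disclosure
});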
We're in the "Red" stage again, but I'll leave the rest of the work to you to add the remaining tests and get them to pass. A cool thing is that the functionality needed for this component to work also creates some handy utility functions for our tests, such as resetting the disclosure at the end of each test to create a consistent starting point for other tests. And these utilities are easy to unit test with this approach as well.
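For example, a reset utility could run after every test in the suite. The resetDisclosure() helper below is a hypothetical sketch, assuming the disclosure's state lives entirely in aria-expanded and hidden:

// A sketch of a reset utility — resetDisclosure() is a hypothetical helper,
// assuming the disclosure's state is held in aria-expanded and hidden.
function resetDisclosure(toggle, content) {
  toggle.setAttribute("aria-expanded", "false");
  content.setAttribute("hidden", "");
}

// Inside the describe("disclosure") block, reset after every test so each
// one starts from the same closed state.
afterEach(() => {
  resetDisclosure(docDisclosureToggle, docDisclosureContent);
});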
This workflow for test-driven HTML and accessibility provides so much value in my daily work, and I hope teams can adopt this technique to create a more responsible and usable web for everyone. I've used this setup extensively in large projects with complex UI patterns and it has scaled gracefully with several hundred tests running at all times.
With this setup, we can write unit tests that have access to the real DOM across major browsers, which allows us to check our HTML for validity, interactivity, and accessibility. We can run these tests in multiple browsers at once with every change, and preserve accessibility from the very start of our project. These expectations are permanently captured in our code, so we can freely refactor our work and add new features without introducing regressions.
The examples in this post assume a static HTML file that we directly linked our tests to. This is the fastest and most direct way to test our HTML, but we don't want our test file to load in production. As a result, we'd need to remove the <script> tag before deploying to the web. To make this approach more convenient, it'd be good to do this automatically with a build command that creates a /dist folder or something similar.
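Here's a sketch of such a build step, assuming a small Node script, that example.html loads the tests with <script type="module" src="example.test.js"></script>, and that /dist is the deploy folder — all of which are assumptions about your setup:

// build.js — a minimal sketch. Run with `node build.js` in a project using
// ES modules ("type": "module" in package.json).
import { mkdir, readFile, writeFile } from "node:fs/promises";

const html = await readFile("example.html", "utf8");

// Strip the test runner <script> tag (assumed to reference example.test.js).
const productionHtml = html.replace(
  /<script type="module" src="example\.test\.js"><\/script>\s*/,
  ""
);

await mkdir("dist", { recursive: true });
await writeFile("dist/example.html", productionHtml);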
Most projects probably won't just have static HTML files, and instead render components with JavaScript. In this case, we create standalone .test.js files that import the corresponding JavaScript modules and call render functions or methods directly. From there, the workflow and benefits are all the same, but there may be a little more effort in setting up specific UI situations to test vs. having HTML ready to go.
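As a sketch of that setup — disclosure.js and its renderDisclosure() function are hypothetical names, and the import sources shown are one common pairing for Web Test Runner — a component test might look like this:

// disclosure.test.js — a sketch, assuming a hypothetical disclosure.js module
// that exports a renderDisclosure(container) function.
import { runTests } from "@web/test-runner-mocha";
import { expect } from "@esm-bundle/chai";
import { renderDisclosure } from "./disclosure.js";

runTests(() => {
  describe("disclosure component", () => {
    // Render into a fresh container so the test starts from known markup.
    const container = document.createElement("div");
    document.body.append(container);
    renderDisclosure(container);

    it("renders a toggle button", () => {
      expect(container.querySelector("button")).to.exist;
    });
  });
});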
As a final consideration, this testing technique enhances, not replaces, other forms of accessibility testing and shifts a lot of the feedback earlier in the process. Using accessibility-focused code linters, automated accessibility testing tools, and manual accessibility review together provides the most value.
Many of the examples we covered in this post are checks we would otherwise have to make manually, either through code review or running accessibility audit tools. By writing unit tests for accessibility, we can discover and fix more issues as they arise. And we free more capacity to manually evaluate other aspects of accessibility, such as using assistive technology to evaluate the flow of a page or automated testing tools to check color contrast.
Automated tools cover about 30% of the WCAG success criteria and detect about 57% of overall issues, leaving the rest for manual evaluation. With this approach, unit tests can increase coverage, particularly with interactive UI patterns. Even with more automated test coverage, manual review remains the most effective accessibility testing technique for finding issues and building empathy.
My long-term goal with this work is to build a robust, universal accessibility test collection that can be used across projects. As we explored with our disclosure example, it's also possible to create test suites for custom UI patterns such as accordions, menus, tooltips, and others.
Let me know if you'd be interested in using this test collection or if you'd like to collaborate. Together, we can make test-driven accessibility and accessibility-first development a reality.
David Luhr is an independent consultant who helps teams of all sizes with accessible design and development. He is passionate about creating a more responsible web for everyone, eliminating waste, and creating free educational content through his Build UX YouTube channel.
Personal website and blog: luhr.co
YouTube: youtube.com/@buildux
LinkedIn: linkedin.com/in/davidluhr