The road to HTMHell is paved with semantics
by Vadim Makeev
HTML semantics is a nice idea, but does it really make a difference? There’s a huge gap between the HTML spec’s good intentions and what browsers and screen readers are willing to implement. Writing semantic markup only because the good spec is a spec, and it is good, and it’s a spec, is not the worst approach you can take, but it might lead you to HTMHell.
Simple days
Like most people involved in the front-end, I started my journey into Web development with HTML. It was simple enough, close to natural language, and easy to use: you type some tags, save a text file, and reload the browser to see the result. And it would almost never fail, even when I made a mistake!
Back then, I considered HTML a simple set of visual building blocks. It was too late for purely visual <font> elements (CSS had replaced them), but the general idea stayed pretty much the same: if you wrap your text into <h1>, it becomes big and bold; if you have two <td> cells in a row, that’s your two-column layout. Easy! I learned tags to be able to achieve certain styles and behaviors. Remember <marquee>?
That was just the beginning: soon, I needed calendars, popups, icons, etc. It turned out I had to code them myself! And so I did, mainly using divs, spans, and some CSS. Back in the mid-2000s, there weren’t any particular “logical” tags or functional widgets, only the ones you’d find on a typical text editor panel.
But at some point, a trend called “web standards” emerged: it suggested that we stop using HTML as a set of visual blocks and start thinking about the meaning of the content, wrapping it into appropriate tags: <table> only for tabular data, not layout; <blockquote> only for quotes, not indentation; and so on. The people bringing the web standards gospel were convincing enough, so I joined the movement.
Semantics
Following the trend, we started studying the HTML 4 spec to learn the proper meaning of all those tags we already knew and many new ones we’d never heard of. Suddenly, we discovered semantics in HTML, not just visual building blocks.
- <b> and <i> weren’t cool anymore: proper stress and emphasis could only be achieved with <strong> and <em>.
- <ul> and <ol> weren’t only for bulleted and numbered lists in content anymore, but for all kinds of UI lists: menus, cards, icons.
- <dl>, <dt>, and <dd> were accidentally discovered in the spec and extensively used for all kinds of lists with titles.
- <table> was banned from layout usage, mainly because the spec never meant it for that, though later we also discovered rendering performance reasons.
Why? Because we started paying attention to the spec, and it was semantically correct to do so. Every decision we made had to be checked to determine whether it was semantic enough. And how would we do that? By reading the spec like it’s a holy book that gives you answers in challenging moments of your life. On top of that, there was the HTML Validator’s seal of approval.
But then came the Cambrian explosion that changed everything: HTML 5.
A new hope
Just after the failed promise of XHTML, HTML 5 brought us new hope. Many new elements were added based on existing naming conventions, paving the cow paths. The new spec challenged browsers for years ahead, from supporting the new parsing algorithm to default styles and accessibility mappings.
For the Web standards believers of the old spec, the new one was the promised land:
- Landmarks to mark logical parts like headers, footers, asides, navigations, sections, and articles.
- A variety of new form inputs beyond the text ones: dates, emails, numbers, ranges, and colors (see the sketch after this list).
- Media and interactive elements for video, audio, and graphics.
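Here’s what a few of those new input types look like; a minimal sketch, and browsers that don’t support a given type simply fall back to a plain text field:

<input type="date">
<input type="email">
<input type="number" min="0" max="10">
<input type="range" min="0" max="100">
<input type="color">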
There was even a logo for semantics in HTML 5’s design!
Apart from extending the list of functional building blocks, the spec added several semantic elements that didn’t even come with any styling, just meaning. But not only that! Some old, purely visual elements were lucky enough not to be deprecated but redefined. For example, <b> and <i> became cool again, though no one could explain the use cases, apart from rather vague taxonomy and emphasis ones and… naming ships. You think I’m kidding? Check the spec!
<i>Boaty McBoatface</i>
Don’t get me wrong, I think HTML 5 significantly advanced the Web, but it has also detached us from reality even further. Take the outline algorithm: the idea that multiple nested <h1> elements would change their level based on nesting. It was never implemented by any browser but lived in the spec for a long, long time until it was finally removed in 2022.
<section>
<h1>Please</h1>
<section>
<h1>Don’t use</h1>
<section>
<h1>This code!</h1>
</section>
</section>
</section>
⚠️ Please don’t use the code above. It’s wrong and harmful.
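If you need nested sections today, set the heading levels explicitly yourself. A minimal sketch of the same structure that doesn’t rely on the removed outline algorithm:

<section>
  <h1>Please</h1>
  <section>
    <h2>Do use</h2>
    <section>
      <h3>Explicit levels!</h3>
    </section>
  </section>
</section>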
Personally, I’ve wasted too many hours arguing about the difference between <article> and <section> for purely theoretical reasons instead of focusing on good user experience.
Drunk on semantics
Although the spec provides examples, it primarily focuses on marking up content, not UI. Even the examples themselves were often purely theoretical, showing usage that would be semantically correct but not always practically useful. There’s a whole other story about the difference between the W3C and WHATWG versions of the spec, but the W3C’s examples were usually better.
I’ve seen a lot of weird stuff and have done some of it myself, too. People would often treat the HTML spec as a dictionary, looking up a word in the list of elements for an idea they had in mind. Try to read the following examples through the eyes of a beginner taking a shallow look at the spec. They totally make sense!
- <menu> for wrapping navigation menus.
- <article> for the content of an article.
- <input type="number"> for a phone number.
- <button> for everything that looks like a button.
I haven’t seen the <slot> element used on a casino website to mark up a slot machine, but maybe only because I’m not into gambling. But the rest of the examples are real.
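For the menu and the phone number, the platform already has better fits. A sketch of what the beginner probably wanted, with illustrative link targets and field names:

<!-- <nav> marks the navigation landmark; <menu> is meant for toolbars -->
<nav>
  <ul>
    <li><a href="/">Home</a></li>
    <li><a href="/about">About</a></li>
  </ul>
</nav>

<!-- A phone number is a string, not a number: type="tel" brings up
     the right keyboard without number-input quirks like spinners -->
<label>
  Phone
  <input type="tel" name="phone" autocomplete="tel">
</label>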
At the same time, a lot of people would read the spec carefully and use <footer>, <header>, <main>, and other semantic elements properly. But their reason wouldn’t be any different: they would also aim for semantically correct markup only because the spec says so. And if it does, the smartest of us would think, it should be good for users, search engines, etc. Right?
It turned out that the spec could be wrong, and semantically correct markup wouldn’t guarantee good practical results.
I don’t blame people who gave up on following the spec altogether and became cynical enough to use <i> for icons instead of naming damn ships. Fortunately, I didn’t go that way. I found another reason to keep caring about markup: user experience and accessibility.
Good intentions
Unlike many other languages, HTML is a user-facing one. It means that our decisions directly affect users.
Fortunately, it doesn’t matter how we format our markup, but our choice of elements matters a lot. So when I hear “this markup is semantic,” it often means that it’s correct according to the spec, but not necessarily good for actual users. Even though both can be true at the same time, the focus is in the wrong place.
It seems to me that at some point we decided to trust the spec’s recommendations without checking whether they were true. I firmly believe that the spec authors’ intentions are always good, and I know many smart people working on the HTML spec. But when it comes to implementation in browsers or screen readers, those intentions don’t always survive contact with reality.
There are usually three main obstacles:
- Product priorities: you probably know this already, but accessibility isn’t always the number one priority, for various reasons, including complexity and the lack of people who know the area.
- Different points of view: different user agents might have their own takes on certain platform features; for the same reason, automated testing won’t save you from accessibility issues.
- Actual user experience: browsers call themselves “user agents” for a reason. When a specific platform feature, or the way developers use it, hurts users, browsers tend to intervene.
For example, the following list won’t be exposed as a list to VoiceOver in Safari, just because you decided to disable the default bullets and implement custom ones via CSS pseudo-elements.
<ul style="list-style: none">
<li>Item</li>
<li>Item</li>
</ul>
You can force the usual behavior by adding role="list" to every list you style, but how convenient is that? Not at all for you as a developer. But Safari probably has its reasons, most likely improving its users’ experience by ignoring all those semantically correct lists we started using so much outside of content.
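For completeness, the workaround looks like this, redundant as it feels:

<ul role="list" style="list-style: none">
  <li>Item</li>
  <li>Item</li>
</ul>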
As for screen readers, Steve Faulkner’s “Screen Readers support for text level HTML semantics” article might open your eyes to the actual value of those tags we’re so passionately arguing about.
No browsers expose <strong> or <em> element role semantics in the accessibility tree.
Again, you can force some semantics via ARIA roles, but should you? That’s an open question. The answer depends on the value you’re trying to bring to your users.
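If you do decide to force it, WAI-ARIA 1.2 defines emphasis and strong roles for exactly this case, though screen reader support for them is just as patchy. A sketch, assuming you’ve verified it actually helps your users:

<p>
  This is <em role="emphasis">really</em> important:
  <strong role="strong">don't</strong> skip it.
</p>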
Does it mean we should immediately stop using semantic elements if they don’t bear any value for the users? I don’t think so. But I stopped using the semantics argument when talking about good markup. Just like tabs and spaces, semicolons, or quotes, semantics is sometimes a stylistic preference.
There’s also a future-proofing argument that suggests using semantic markup in the hope that someday browsers will start supporting all those elements they choose to ignore now. I wouldn’t rely on it too much and prefer to focus on what’s important right now.
I used to be among those people who’d judge the quality of a website by the number of divs it’s built of. We’d say, “Nah, too many divs, it’s not semantic.” Now I know that what’s inside those divs matters the most. Enough landmarks, headings, links, and buttons would make it good, even if the ratio of divs to semantic elements is 1000 to 10. We are divelopers, as Chris Coyier once said. Don’t be ashamed of it; wear the name with pride.
Training wheels
Following the spec’s recommendations with semantic markup is still a good start, especially when you treat the spec as more than just a list of available elements. I mostly agree with this idea, often expressed by accessibility experts:
If you write semantic markup, it will be mostly accessible.
But to me, it sounds like a simple answer to a complex question. The HTML spec might be a good set of training wheels, but at some point, you’ll have to take them off. Not everything can be solved by semantic markup: to build any modern interactive UI, you’ll need to learn ARIA. There just aren’t enough semantic elements for everything!
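To make that concrete: HTML still has no native toggle button, for example. A minimal sketch using aria-pressed, with a made-up label and a deliberately naive selector:

<button type="button" aria-pressed="false">Mute</button>
<script>
  const button = document.querySelector('button');
  button.addEventListener('click', () => {
    const pressed = button.getAttribute('aria-pressed') === 'true';
    // aria-pressed is what turns a regular button into a toggle
    // button for assistive technology; the visuals are up to CSS
    button.setAttribute('aria-pressed', String(!pressed));
  });
</script>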
There are many simple answers waiting for you in the spec or articles praising semantics as the only thing you need. There are even more compromises made in modern frameworks in the name of better developer experience. And they aren’t all wrong! But if you keep your focus on the user experience, on the actual quality of the user interface, you’ll be able to make the right decisions.
And you know what? It doesn’t matter if you agree with me on the value of semantics. I’m sure you’ll be fine. After all, you’ve just read a big rant on HTML in the HTMHell advent calendar.
About Vadim Makeev
Frontend developer in love with the Web, browsers, bicycles, and podcasting. MDN technical writer, Google Developer Expert.
Blog: pepelsbey.dev
Vadim on Mastodon: @pepelsbey
Vadim in Telegram: @pepelsbey_dev