Predictions: XHTML

Recently I gave a talk at SQL Saturday Baton Rouge on the history of the Web. This was a version of a talk I had given many times before, largely to codecamp classrooms of novice developers. In sexing it up for a professional technical crowd, I ended up rewriting it completely - going back to a lot of primary sources, reading the www-talk archives and the various books written by people who were there at the time. (Side note: a surprising number of these individuals feel the need to include sections in their articles on the particular type of food and entertainment had at W3C conferences. It’s…weird.) In doing all this research to understand how it is that we got where we did, I ended up forming some opinions on where we’re probably going.

In this blog post series I’m going to make some hopefully non-obvious predictions about the future of the World Wide Web, grounded in how it is that we got here.

Prediction 1: XHTML Will Make a Comeback

A graphical illustration of how sure I am of this particular prediction

When I was starting out professionally in 2006, XHTML was the bane of my, and many other developers’, existence. You would write your page, test it in Internet Explorer and Firefox, struggle to get things laid out properly, ensure that any server or client-side errors would be hidden from the user. Then you would launch a test site and pass the results of your days of work through the XHTML validator…only to be told that you have roughly 127 validation errors. You suck.

And the worst part was that the site already looked good. What was the point of the whole XHTML thing if it didn’t help you achieve what you wanted? And then XHTML2 was supposed to be backwards incompatible!!? No thank you. Like many others, I cheered when the effort was finally discontinued.

But here’s the thing: XHTML was a good idea. HTML descended from SGML, which is a sort of document markup meta-language. This is where we get angle brackets from and why our web pages look


<h1>Like This</h1>
<ul>
   <li>one</li>
   <li>two</li>
</ul>

instead of something like


h1("Like This")
list(
    item("one")
    item("two")
)

Regardless of how you might feel about the final syntax, the fact that Tim Berners-Lee could present his work at SGML conferences, and that people could immediately start using SGML tools to author web pages, was a key component in the early success of the web. In fact, throughout the early and mid nineties, you rarely heard people discussing HTML except as an SGML subset.

It did start to deviate in both essence and syntax for a variety of reasons, mostly due to browser vendors who - with feature requests pouring in - implemented features rapidly without waiting for community consensus. The situation slowly got worse as the number of users, and thereby feature requests, grew exponentially.

Marc Andreessen
Pictured: Reasons

By the turn of the century the situation was growing untenable. Whereas the early web was teeming with web browsers created and maintained by a single individual, HTML was getting sufficiently complex to require the full resources of a company and many years of development. XHTML was meant to address this at least in part: by making HTML more consistent, it would be easier to parse, and thereby easier to write new clients for use by both people and machines.

Of course this was not to be. In early 2000, the W3C approved XHTML 1.0 (with XHTML 1.1 following) as an interim measure. These tightened restrictions on HTML: quoted attributes, properly closed tags, that sort of thing. This is the XHTML that I remember, and it wasn’t nearly enough to make things easy. XHTML2 was the one that was really going to bring all the boys to the yard - and to do that, XHTML2 and HTML would necessarily be related but incompatible formats.
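Concretely, the kind of thing XHTML 1.0 tightened looked something like this (a simplified illustration, not an exhaustive list of the rules):

```html
<!-- Loose HTML of the era: browsers happily tolerated this -->
<ul>
  <li class=first>one
  <li class=second>two
</ul>

<!-- The XHTML equivalent: quoted attributes, every element closed -->
<ul>
  <li class="first">one</li>
  <li class="second">two</li>
</ul>
```

Browsers had to infer where that first `<li>` ended; the second form leaves nothing to guess, which is exactly what makes it easy for a simple parser.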

This was a problem for the portion of the internet that had not lost touch with reality. In 2004, to reflect what people actually wanted, the WHATWG community was formed with the goal of mapping out the HTML that was actually in use and providing some light direction. They called this HTML5. The WHATWG was structured to be more agile than the W3C and was able to make strides rapidly. By 2006, to continue to ignore its work the W3C would be risking obsolescence. Therefore they created their own HTML 5 (with a space!) group, whose work would be based on WHATWG’s and would drive HTML forward while XHTML was still cooking. And as HTML 5 progressed, it became clear that not only would XHTML2 not be ready any time soon, but it was rapidly missing its window of relevance. As this became more and more obvious, the XHTML2 group was de-chartered in favor of putting more resources toward HTML 5.

Yet I’m calling an XHTML comeback.

Again, XHTML is a good idea, and its absence has been a stumbling block for accessibility tech and the development of the semantic web. Developing an HTML parser need not be so difficult as to be the realm of a few dozen groups on earth. XHTML would be great, but history has shown that people don’t care to write it. But here’s the out: people don’t write all that much HTML anymore.

I mean obviously they do, but hang with me here.

An ever-increasing percentage of sites are nowadays written with a virtual-DOM derivative library. This is a good thing, and the natural culmination of the maxim that any markup language eventually becomes a programming one. With a virtual DOM you use a programming language (typically JavaScript) not to create HTML directly, but to define the DOM you would like to have (pretty please). The library itself is then in charge of making it so. And in that case, it is no more difficult for a framework to generate XHTML than HTML or direct DOM manipulation. It all depends on what options you run the library with.
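To make that concrete, here is a minimal sketch - not any real framework’s API, just the general shape - of a hyperscript-style tree builder whose serializer can emit either grammar from the same virtual nodes. The `h` and `serialize` names and the void-tag handling are my illustration, not borrowed from an actual library:

```javascript
// Build a virtual node: a plain object describing the DOM we want.
function h(tag, attrs = {}, ...children) {
  return { tag, attrs, children };
}

// Elements that HTML treats as self-closing "void" elements.
const VOID_TAGS = new Set(["br", "hr", "img", "input", "link", "meta"]);

// One tree, two serializations: the xhtml flag is the only difference.
function serialize(node, xhtml = false) {
  if (typeof node === "string") return node; // text node
  const attrs = Object.entries(node.attrs)
    .map(([k, v]) => ` ${k}="${v}"`) // attributes always quoted
    .join("");
  if (VOID_TAGS.has(node.tag)) {
    // XHTML requires every element to be closed, even void ones.
    return xhtml ? `<${node.tag}${attrs} />` : `<${node.tag}${attrs}>`;
  }
  const inner = node.children.map(c => serialize(c, xhtml)).join("");
  return `<${node.tag}${attrs}>${inner}</${node.tag}>`;
}

const tree = h("p", {}, "one", h("br"), "two");
serialize(tree);       // -> "<p>one<br>two</p>"
serialize(tree, true); // -> "<p>one<br />two</p>"
```

The point is that the application code never touches the serializer: flip one option and the same component tree comes out as well-formed XML.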

Now obviously, framework-generated HTML will never make up a majority of the web, and anyone aspiring to write a general-purpose web browser will still need to parse HTML as before. But many special-purpose clients could absolutely be created to support XHTML alone.

What I envision is a future where requests can be examined by web servers running isomorphic web applications. If the request’s Accept header asks for HTML then, by all means, the web application will be returned as HTML, JavaScript, and CSS as it currently is. If however the request specifies - let’s say XHTML3 - the server will run the application server-side to generate a compatible XML document and return that.
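The negotiation step could be as simple as the sketch below. The `application/xhtml+xml` media type is real; using it as the signal for a server-rendered strict-XML response is the speculative part, and a production server would parse q-values properly rather than just taking the first entry:

```javascript
// Decide which serialization to send based on the Accept header.
// Note: mainstream browsers list text/html first, so they still get
// HTML; only a client that *leads* with the XML type gets XHTML.
function negotiate(acceptHeader) {
  const first = (acceptHeader || "text/html")
    .split(",")[0]        // client's most-preferred media type
    .trim()
    .split(";")[0];       // drop any ;q=... parameter
  return first === "application/xhtml+xml" ? "xhtml" : "html";
}

negotiate("application/xhtml+xml");                 // -> "xhtml"
negotiate("text/html,application/xhtml+xml;q=0.9"); // -> "html"
```

A special-purpose client - a crawler, a screen reader, an industry-specific tool - would simply send `Accept: application/xhtml+xml` and receive a document it can feed to any off-the-shelf XML parser.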

As far as I know, nobody is talking about XHTML3 yet (this is an awesome joke though), and not everyone is going to sit down and rewrite their sites as isomorphic JavaScript applications. But within certain industries and use cases I could certainly imagine this becoming popular, with special-purpose (likely automated or accessibility-specialized) web clients being widely used within that industry. In this way, the old chestnut of the semantic web could be pushed forward again, bit by bit.
