Every page on the World Wide Web is constructed in a code language (more precisely, a ‘markup’ language) called HTML. Like any code language, HTML makes some things easier to express and other things harder: as a tool, it comes with its own set of ‘affordances’.
The possibilities of HTML are codified in standards documents. Yet, as with the grammars of natural languages, those who write HTML do not always follow the standard. Authors may well ignore parts of it, or use elements in ways that clash with their original intended use. Idioms emerge through copying and sharing, and become part of the vocabulary web authors teach each other. These usage patterns are taken into account in subsequent revisions of the HTML standard, and are a factor in driving browser vendors’ choices of which features to implement next.
Web browsers do not follow the standards to the letter either. There are parts of the HTML standard that are not actually implemented in any popular browser, so web authors have little incentive to use them. Conversely, web browsers may implement support for idioms that are not part of any standard, but are being used by authors.
In this way, HTML’s affordances are created by what standards prescribe, what browsers implement, and what people write; together these form what the web can be. There is a complex interrelationship between the different parties: sometimes their interests converge, then they drift apart again. In this text we introduce in more detail some of the power struggles that underlie the continued development of HTML. We will look at HTML’s latest iteration, HTML5, and see how companies like Google try to further their own agendas, sometimes under the guise of following the bottom-up inventions of web authors.
From Academia to Mass Media: 1989–1998
An interesting account of the first years of the development of HTML can be found in Dave Raggett’s book ‘Raggett on HTML 4’, of which chapter 2, ‘A History of HTML’, is available online (Raggett, 1998). In 1989, Tim Berners-Lee comes up with a system of pages that link to one another through hyperlinks, and in this way form a web. Berners-Lee constructs a language to write these pages, the Hypertext Markup Language, and a protocol to transfer them: the Hypertext Transfer Protocol. The transfer protocol uses the already existing infrastructure of the internet, which academics and the military use to exchange information through channels such as e-mail.
The original implementation of the web is meant for exchanging scientific information. The HTML language is based on the existing SGML standard, ‘extensively applied by the military, and the aerospace, technical reference, and industrial publishing industries’ (Standard Generalized Markup Language, 2013). HTML, like SGML, is a markup language. In contrast with a page description language like PostScript or PDF, a markup language is supposed to ‘describe a document's structure and other attributes’ (ibid.), without prescribing the exact page that is produced from this markup.
The concept of a markup language has a number of great benefits. Because page-setting instructions are left out, pages are quicker to download. With HTML, it is not the author but the visitor’s browser that is ultimately responsible for the layout on the screen. Already in the 1990s, before smartphones and ‘responsive design’, people access the web with different kinds of screens and devices: the browser makes it all fit on the screen.
Another advantage of HTML is that the language is ‘plain text’: its markup is simple and uses normal keyboard characters. The first browsers already provide an option to ‘view source’ and see the underlying tags, which can easily be copied and adapted into an HTML creation of one’s own. This makes the language accessible to experimentation and self-learning, with a high potential for ‘bricolage’ by budding homepage creators.
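To give a sense of what a budding homepage creator would encounter, here is a minimal, hypothetical page of the kind a ‘view source’ might reveal (the file name and text are invented for illustration):

```html
<!-- A hypothetical early-style homepage; every tag describes structure, not layout -->
<html>
  <head>
    <title>My Homepage</title>
  </head>
  <body>
    <h1>Welcome to my homepage</h1>
    <p>This page is written as plain text with <em>tags</em> mixed in,
    so it can be copied, studied and adapted by anyone.</p>
    <p><a href="another-page.html">A hyperlink to another page</a></p>
  </body>
</html>
```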
The one feature of page description languages that a markup language like HTML lacks, though, is the ability to easily store and transmit rich layout. The scientific community within which the web arose is used to the sparse layout of scientific papers (as are the other audiences among which SGML-based solutions are popular). The first HTML versions provide headings, paragraphs and citations, but no columns, and not even images! In order to win over a larger public outside academia, some of the visual sophistication of illustrated print media will have to find its way into the web, and thus into HTML. This is the tension upon which some of the first battles over HTML are built.
In 1993, NCSA Mosaic is released, the first graphical web browser for the Windows platform. Mosaic starts out as a research project; the researchers responsible go on to launch Netscape Navigator (1994) upon graduation. An obvious tension can be seen between the original scientific community and the entrepreneurs who see a future for the web as a mass medium. In a rush to make the web attractive to a larger public, they add many graphical capabilities to HTML. ‘Academics and software engineers later would argue that many of these extensions were very much ad hoc and not properly designed.’ (Raggett, 1998)
Whatever the original community around the web might think of the changing nature of the language, the new companies are also responsible for opening up the academic invention to the public at large. In this process, the power to define what HTML is shifts away from the community in which the language originated, towards the new (commercial) browser developers:
Following a predictable path, Netscape began inventing its own HTML tags as it pleased without first openly discussing them with the Web community. Netscape rarely made an appearance at the big International WWW conferences, but it seemed to be driving the HTML standard. It was a curious situation, and one that the inner core of the HTML community felt they must redress. (Raggett, 1998)
The situation prompts the creation of a standards body, the World Wide Web Consortium (W3C). Initially, the W3C does not carry as much weight as it would like; the development of HTML is still driven mostly by the browser vendors. In 1995, not wanting to be left behind on the web, the world’s largest software company, Microsoft, launches Internet Explorer and sparks what will become known as the ‘browser wars’:
(…) during the Browser Wars of the 1990s, Microsoft (MSFT) and Netscape each claimed close to 50% of the market, and their browsers were almost entirely incompatible. It wasn't uncommon to type in a URL and find that the site didn't work. Companies eager to open their virtual doors had to invest in multiple versions of their sites. In short, it was a bad situation for businesses and consumers alike. Yet the browser makers were behaving as many software companies do—by trying to out-feature the competition with the introduction of new proprietary technologies. (Scanlon, 2007)
The Years of Web Standards: 1998–2007
At this point, another group with vested interests joins the debate: the web designers who create web pages for a living, and who have a hard time working in this fragmented landscape, often building multiple versions of a site to cater to the different browsers. The idea of browsers respecting the W3C standards becomes their rallying point, in what becomes known as the Web Standards Movement. Businessweek runs a feature on Jeffrey Zeldman, an influential figure in this community, highlighting ‘his ability to talk about the dry and, let's face it, dull subject of standards in a way that made everyone see their importance.’ (Scanlon, 2007)
As an aside, it would be interesting to look at this group of designers more closely: it does not represent all designers, but rather a specific subset with a hybrid design/development skill set. They contrast themselves with communication agencies that have a design/development division of labour, or that invest in more traditional designer-friendly tools such as Adobe Flash or Dreamweaver. In fact, the ‘standards-aware’ designers can be seen to campaign against these kinds of tools, advocating ‘hand-written HTML’ over ‘bloated WYSIWYG tools’. 1
At this point the W3C seems a natural ally to web developers: a standards body providing free standards, standards that become the stick to beat browser vendors with, and compliance with which becomes a mark of prestige for a new generation of web designer/developers. These makers band together as the Web Standards Project (1998).
Another party to the burgeoning ‘standards movement’ are the new browser developers. These browsers, with their smaller market shares, have a hard time competing, because most web pages are built to the whims of Netscape’s or Internet Explorer’s rendering engines. Web standards make it easier for new browsers to compete. The backgrounds of these browser manufacturers are quite varied: there is the small Norwegian company Opera; there is Mozilla, informed by ideals of an open web, which created the open source browser Firefox; and there is Apple, which created its own browser, Safari, so as not to rely on third parties for a smooth web experience on its operating system.
The rise of Firefox, especially, is spectacular. It is also a rare occasion in which a new group gets involved in the debate: the web users themselves. The ‘Get Firefox’ campaign that promotes the software as it nears version 1.0 is run entirely by volunteers, and thousands of Firefox users contribute to a fundraising campaign that culminates in a two-page ad in The New York Times (Mozilla, 2004).
The success of web standards is hard to quantify, so here are some indicators of its influence. The successful blogging software WordPress (launched in 2003) puts web standards right in its tagline: “WordPress is a semantic personal publishing platform with a focus on aesthetics, web standards, and usability.” (WordPress, 2003). The WordPress page also features a footer noting ‘Valid CSS’ and ‘Valid HTML’. Such footers noting compliance with standards become something of a fashion. The use of web standards becomes part of accessibility guidelines that, in some cases, even enter government regulation.
A New Power Struggle: 2007 to Now
Standards are a work in progress, involving many actors. The confluence of browser vendors, web designers and the W3C generates great momentum while the interests of all these parties align towards overcoming the power of the established browsers from Microsoft and Netscape. Once the dust settles, the way forward is less clear.
Discontent with the W3C becomes prevalent as development of XHTML2 progresses. This standard outlines the W3C’s vision more clearly: standards that require strict adherence (a document won’t display if it is not fully well-formed), in order to pave the way for a future in which the content of web pages can more easily be reasoned about by software programs, a future known as the Semantic Web.
Convincing arguments against a naive vision of the Semantic Web are voiced early on by Doctorow (2001): since software cannot easily deal with natural language, web pages would need some kind of structured metadata in addition to their linguistic content. Besides the inherent impossibility of objective frameworks for metadata (‘ontologies’), the quality of such metadata will always be lacking, due to human laziness on the one hand and the human desire to game the system on the other. 2
The other main argument against the new standard has been uttered in many forms around the web, among others by Martin (2008). The argument goes: the very fact that browsers are extremely forgiving in how they interpret markup is a basis of the web’s success, since it enables the copy-paste style of development that keeps the barrier to entry for creating web pages low.
The most consistent and influential counter-reaction to the W3C’s direction comes from an association of browser vendors known as the WHATWG (2004). They stage a coup, proposing an alternative future standard: HTML5. The name itself promises continuity and backwards compatibility, and the standard concentrates on capturing existing practices, with particular attention to web applications.
This coup is wildly successful. In 2007, the W3C even endorses the new standard. For a while, work continues on both HTML5 and XHTML2 until, in 2009, the W3C announces its decision to drop XHTML2. Many parts of the HTML5 standard are then quickly implemented in browsers. This is partly because the browser vendors are on board from the beginning, but also because the standard is based on existing practices and does not require authors to ‘clean up their act’. HTML5 even specifies exactly how a web browser should deal with malformed HTML.
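As an illustration (a constructed example, not taken from the specification itself): the HTML5 parsing algorithm prescribes a single recovery for input like the following, so that all conforming browsers build the same document from it.

```html
<!-- Malformed input: the <b> element is never closed -->
<p>An <b>important point
<p>Another paragraph

<!-- All HTML5 parsers recover identically, roughly as if the author had written: -->
<p>An <b>important point</b></p>
<p><b>Another paragraph</b></p>
```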
Conflicting Interests, a Case Study: RDF/A
The WHATWG’s more pragmatic approach to HTML standardisation has proved successful, up to the point where the notion of ‘web standards’ is now much less present in the world of web design and development than it used to be. The Web Standards Project dissolves itself in 2013, considering its mission largely accomplished. ‘Standards compliant’ is no longer a unique selling point. The WordPress homepage, which once proudly flaunted standards compliance and semantics in its tagline, now simply states: “WordPress is web software you can use to create a beautiful website or blog.” (WordPress, 2014). Gone, too, is the fashion of noting adherence to standards in website footers. It is as if web designer/developers no longer believe the syntactical strictness of XHTML will deliver them from tag soup.
As part of its pragmatism, the WHATWG favours a nimble decision-making process. To understand more about who writes the web now, we have to examine who partakes in this process. As far as standards bodies go, the W3C is quite open: the cost of membership is set on a sliding scale, depending on the character of the organisation applying and the country in which it is located. That is not the case with the WHATWG: as its Charter puts it, ‘Membership is by invitation only’ (WHATWG, n.d.), and membership is available only to browser vendors. It then becomes opportune to ask who these browser vendors are, what their interests are, and how those interests come into play in the nature of HTML5.
Update 21-10-2014: Ian Hickson weighs in in the comment section to explain more about the process employed by the WHATWG and how it compares to the W3C’s. More feedback from those familiar with the process is very welcome!
As much as it advances the state of the web, HTML5 is definitely no longer focused on the ideology of the Semantic Web. To examine what this means in practice, let’s look at an element of Semantic Web technology called RDF/A: the W3C’s intended mechanism for adding extra metadata to HTML pages. This metadata allows one to specify all kinds of relations that are normally only available when accessing the underlying data sources, paving the way to re-using and exposing the information in new ways.
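To give an idea of what this looks like in practice, here is a small hypothetical snippet: the vocab, typeof and property attributes are the actual RDFa mechanism, while the vocabulary, names and values are merely illustrative.

```html
<!-- A page fragment carrying machine-readable statements alongside the visible text -->
<div vocab="http://schema.org/" typeof="Article">
  <h1 property="headline">HTML and its discontents</h1>
  <p>Written by
    <span property="author" typeof="Person">
      <span property="name">Jane Doe</span>
    </span>
  </p>
</div>
```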
A foundational idea of XHTML is its extensibility: because it is based on a more abstract standard, XML, other XML-based formats can be mixed in. HTML5 provides no such general extension mechanism. Instead, the HTML5 working group has hand-picked two XML formats that can be embedded in an HTML5 document: SVG drawings and MathML mathematical formulas. RDF/A is not among the extensions allowed in HTML5. The specification’s editor, Ian Hickson, writes on the W3C mailing list about the reasoning behind omitting RDF/A (Hickson, 2009). He fails to see the added value of RDF/A over ‘natural language processing’:
Do we have reason to believe that it is more likely that we will get authors to widely and reliably include such relations than it is that we will get high quality natural language processing? Why?
How would an RDF/RDFa system deal with the problem of the questions being unstructured natural language?
Can an RDF/RDFa system do better from a natural language query?
People have a hard enough time (as you point out!) doing simple natural language queries where all they have to do is express themselves in their own native tongue.
By ‘natural language processing’, Hickson means search algorithms that automatically distil keywords from existing documents, without the authors adding the additional formal layer required by the Semantic Web. By ‘natural language query’, he means that users can search using phrases in ordinary language, or simple combinations of keywords, without having to resort to a formal query language of the kind traditionally used with databases.
Hickson seems to suggest that RDF/A adds nothing new or desirable, since natural language processing and natural language queries already provide a solution. It is then in the interest of the community not to add it:
If features solve new problems in practical ways, then the cost is worth it. If they don't, then it behooves us not to add them, as the total cost to the community would not be trivial.
Hickson’s e-mail to the list is a wonderful example of what Americans call astroturfing: ‘the practice of masking the sponsors of a message (…) to give the appearance of it coming from a disinterested, grassroots participant. Astroturfing is intended to give the statements the credibility of an independent entity by withholding information about the source's financial connection.’ (Astroturfing, 2013). No one uses the terminology ‘searching web pages with natural language queries’: they google. And Google is Hickson’s employer. Google is the company that owns the best proprietary algorithms for natural language search, and its own closed index of the internet on which to run them.
The argument is disingenuous. Google’s algorithms are extremely good because Google is a huge company that has invested billions of dollars in them, and trains them on the huge datasets it can access because it sits at the crossroads of most internet traffic. Other companies do not have access to algorithms and indexes of the same quality. What Hickson is effectively saying is: we have an existing solution that works fine, as long as you are willing to depend on the commercial company for which I work.
The very idea of standards is that they level the playing field for the various stakeholders. Keeping the standards process neutral is all the more important because a company like Google has a known record of stifling internet standards when they conflict with its interests. In tech circles Google is infamous for having tried to sabotage the RSS syndication format, a standard by which blogs and other periodic online publications can notify readers of new articles. Google’s Chrome browser is the only mainstream browser that does not support RSS. Google also created a free RSS reader, only to discontinue it once it had effectively extinguished the competition (Cortesi, 2013).
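For reference, RSS is itself a small XML format; a minimal feed (with hypothetical titles and URLs) looks something like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>http://example.com/</link>
    <description>Announcements of new articles, readable by any feed reader</description>
    <item>
      <title>A new article</title>
      <link>http://example.com/a-new-article</link>
      <pubDate>Sat, 04 Oct 2014 12:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>
```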
Like RSS, RDF/A provides a way for content creators to make links and cross-references that do not need Google. That Hickson does not highlight the conflict of interest inherent in his judgment of this HTML5 feature shows how fragile the standards process remains, and how the continued development of HTML5 hinges on a balance of powers that can easily come undone. The fragility of the standards process is summarised by technologist Shelley Powers:
On the other, I've been a part of the HTML WG for a little while now, and I don't feel entirely happy, or comfortable with many of the decisions for X/HTML5, or for the fact that it is, for all intents and purposes, authored by one person. One person who works for Google, a company that can be aggressively competitive. (Powers, 2009)
Conclusion
The history of the web is mirrored in the history of its main encoding language, HTML. As new parties gain an interest in the web, they start to partake in the development of this language: from the academics who launch the web, to the commercial companies that develop the first mainstream browsers, to the web standards movement through which the designers and developers who create websites join the conversation. Nowadays, most influence lies with the web browser manufacturers, most of whom belong to companies with other, large stakes on the internet.
This history raises some questions about the current situation. With the dissolution of the Web Standards Project, web designers and developers seem less involved in the development of HTML. That is a shame, because as a voice they could provide a counterbalance to the interests of the web browser manufacturers. As the conflict of interest between Google and Ian Hickson shows, the process can be far from neutral.
Theoretically, it is not just designers who should be involved, but also their clients. From businesses that sell goods, to online publications, to individual bloggers: they all have an interest in being able to make the websites they want to make. Even if the standards process is messy, it still allows more influence than the ‘walled gardens’ with which they would otherwise have to content themselves: selling and publishing through Facebook or Amazon.
Finally, with the possible exception of their involvement in the success of the Firefox web browser, the one figure conspicuously missing from the standards process is the web user. From its humble beginnings as a medium for exchanging scientific reports, the web has become an intrinsic part of the lives of most people on this planet. Is it time for the one body that has not yet been seen in the history of HTML to show up: that of consumer organisations?
1. Cf. for instance “Giantmike's website is masterfully crafted with handwritten HTML” at http://www.giantmike.com/htmlbyhand.html
2. With regard to laziness, it is telling that metadata standards, while not employed en masse on the World Wide Web, have seen great uptake in museums and archives: these are the places where people are paid to make accurate metadata.
Bibliography
Astroturfing. (2013, November 19). In Wikipedia, the free encyclopedia. Retrieved November 19, 2014 from http://en.wikipedia.org/w/index.php?title=Astroturfing&oldid=582331848
Cortesi, A. (2013, March 14). Google, destroyer of ecosystems. Retrieved November 19, 2014 from http://corte.si/posts/socialmedia/rip-google-reader.html
Doctorow, C. (2001, August 26). Metacrap: Putting the torch to seven straw-men of the meta-utopia. The WELL. Retrieved November 19, 2013 from http://www.well.com/~doctorow/metacrap.htm
Martin, A. (2008, September 28). W3C go home! (C’est le HTML qu’on assasine). uZine. Retrieved November 19, 2013 from http://www.uzine.net/article1979.html
Mozilla Foundation Places Two-Page Advocacy Ad in The New York Times. (2004, December 15). Mozilla Press Center. Retrieved November 19, 2013 from https://blog.mozilla.org/press/2004/12/mozilla-foundation-places-two-page-advocacy-ad-in-the-new-york-times/.
Hickson, I. (2009, February 13). Re: RDFa and Web Directions North 2009. Public-rdf-in-xhtml-tf@w3.org. Retrieved November 19, 2013 from http://lists.w3.org/Archives/Public/public-rdf-in-xhtml-tf/2009Feb/0069.html.
Powers, S. (2009, July 2). XHTML2 is dead. Burningbird. Retrieved November 19, 2013, from http://burningbird.net/node/12#.Uouhk6Dzs60
Raggett, D. (1998). Chapter 2: A History of HTML. In Raggett on HTML 4. Addison Wesley Longman. Retrieved November 19, 2013 from http://www.w3.org/People/Raggett/book4/ch02.html.
Scanlon, J. (2007, August 6). Jeffrey Zeldman: King of Web Standards. BusinessWeek: innovation_and_design. Retrieved November 19, 2013, from http://www.businessweek.com/stories/2007-08-06/jeffrey-zeldman-king-of-web-standardsbusinessweek-business-news-stock-market-and-financial-advice
Standard Generalized Markup Language. (2013, November 6). In Wikipedia, the free encyclopedia. Retrieved November 19, 2013, from http://en.wikipedia.org/w/index.php?title=Standard_Generalized_Markup_Language&oldid=580454005
Web Hypertext Application Technology Working Group Charter. (n.d.). Retrieved November 19, 2013, from http://www.whatwg.org/charter
WordPress — Home. (2003, June 18). Retrieved November 19, 2013 from https://web.archive.org/web/20030618021947/http://wordpress.org/.
WordPress — Home. (2014, September 28). Retrieved November 19, 2014 from https://web.archive.org/web/20140928052112/https://wordpress.org/
This article was commissioned for “Considering your tools: a reader for designers and developers” and is cross-posted here.
Update October 4, 2014: Since this article’s original publication, RDF/A has been accepted as part of HTML5: http://www.w3.org/News/2013#entry-9919.
The part about the WHATWG membership is wrong. The WHATWG "membership" is akin to the W3C "staff". Both are "by invitation only". In the case of the WHATWG, the "membership" (aka staff) do essentially no work, they're just an oversight committee whose responsibility it is to prevent the WHATWG from going into a misguided direction. All the work is done by contributors; there's no barrier to entry there other than (for unfortunate pragmatic reasons) speaking English.
The part about RDF/A is also a little wrong. While it's certainly true that Google has algorithms for NLP, it also has even better algorithms for structured data, and indeed at this point is probably the world's biggest processor of structured data. That doesn't change my arguments on the matter. Authors are terrible at marking up structured data. Also, it's not like it's easier for small companies to do structured data processing than NLP. It looks like it should be, but the real problems are things like spam fighting, contradiction disambiguation, query interpretation, etc, which are just as hard when the input data is structured as when it isn't.
Indeed, anyone who has spent enough time in the WHATWG community to see how I operate will tell you that if anything, I'm biased against Google, not astroturfing for them.
by Ian Hickson - October 21, 2014 4:05 PM
Dear Ian, thank you for taking the time to respond.
The ‘Charter’ I linked to indeed gives no detail on how non-members can get involved in the HTML specification, and I am sorry for not being clearer on the subject. This document details how one can get involved with the standards: https://wiki.whatwg.org/wiki/What_you_can_do
Your characterisation of the membership as a simple facilitator doing ‘no work’ seems at odds, though, with the idea of the members as an oversight committee that has to make sure the WHATWG ‘doesn’t go in a misguided direction’. This means that the membership creates some idea of what is guided and what is misguided, and that it acts upon this idea.
I couldn’t find on the WHATWG site who the members are now. The original members all represented browser vendors; is this still the case? I also couldn’t find how the editor for a specification gets chosen; are they already part of the membership?
The staff member that surely has the most impact on the process is the editor:
https://wiki.whatwg.org/wiki/FAQ#How_does_the_WHATWG_work.3F
The role of the editor goes way beyond facilitating: it is they who have to weigh the pros and cons of solutions and finally decide on them. The only safeguard in place is the aforementioned ‘membership’, and we don’t know whose interests they serve.
Unless the WHATWG takes steps to recruit members and editors from different backgrounds, doesn’t the way the WHATWG is structured still suggest it is skewed towards the interests of browser vendors?
by habitus - October 21, 2014 6:13 PM
The member list is the one on the charter (it's been updated as we added members — looking at the list, it looks like right now two of the members don't work for browser vendors, and one never has), but there hasn't been any real activity amongst the members for years. It's similar to how in Britain in theory ultimate power rests in the monarch, but in practice, the majority party in the House of Commons is where stuff happens. The members, or oversight committee, has never had to actually take action.
Anyway, the charter is wildly out of date now (see the paragraph at the top). The WHATWG FAQ describes the current processes (such as they are).
Editors are just volunteers, just like everything else at the WHATWG (e.g. the guy who maintains the wiki software is a volunteer who doesn't work for any of the browser vendors). I think of the people who currently edit WHATWG specs, only two are on the member list.
It's worth noting, though, that all of this misses the point. The WHATWG has no power. The W3C has no power. The editors have no power. All the power lies in the implementors, because if they don't agree with something, they don't implement it, and the spec ends up dead on arrival. That's why, for instance, XHTML2 went nowhere: the browser vendors didn't want to implement it. It doesn't matter whether you have a pay-to-play membership like the W3C, or a totally open mailing list like the WHATWG: either way, the people with the power are the implementors. This has nothing to do with the WHATWG oversight committee "members". It would be true whether the members were all browser vendors, or whether they were fairly elected from SXSW attendees, or whatever.
by Ian Hickson - October 21, 2014 8:36 PM