Fragmentation of document formats on the Web

Just as Web clients are beginning to diversify away from being full-featured desktop and laptop PCs, document formats on the World Wide Web are at risk of fragmentation. There is now a proliferation of new document formats and profiles of these document formats, and few clients, if any, are capable of implementing all of them while meeting their other requirements (disk and memory use, speed, release schedules).

When I speak of documents on the Web, I'm referring to documents sent across the Web, between arbitrary servers and arbitrary clients. These Web formats must be interoperably implemented at both ends of the wire. I am not referring to documents exchanged between machines internal to the operations of a Web site, since those machines are not Web clients and servers; they are internal to a Web server. Likewise, I am not referring to documents exchanged between a small cell phone and a proxy it uses to fetch and simplify Web content, since that client-proxy interaction is, as far as the Web is concerned, internal to a Web client.

The number of Web formats should be limited

The number of document formats for the Web (and profiles thereof) should be limited, for the following reasons:

Interoperability is hard
Web formats must be interoperably implemented by multiple implementations at both ends of the wire: multiple document creators (whether authoring tools or authors writing by hand) must interoperate with multiple viewers. The more standards all of these creators and viewers have to implement, the more bugs those tools will have across all of those standards. If there are fewer Web formats, software developers can focus on fixing bugs in their implementations of existing standards. Furthermore, a larger number of Web formats increases the chances that some of them conflict with each other in their details (and makes it harder to develop new Web formats).
Profiled standards threaten interoperability
Profiled standards also threaten interoperability. Should Web clients use validating or non-validating XML parsers? Should they perform schema validation when a W3C XML Schema is available? A RELAX NG schema? Which profile of SVG (if any) should they implement? Which profile (if any) of XForms? Should they implement all of XHTML, or just XHTML-Print? When client implementors answer these questions differently, they reduce interoperability.
Some clients need to be small
Web clients on small devices are unlikely to be able to implement all current W3C document standards (HTML, XHTML, CSS, SVG, XForms, XSLT, etc.) and all of their prerequisites (XML, XML namespaces, XPath, XML Schema, XML Events, XLink, etc.). Limiting the set of standards needed to access the Web makes the Web accessible to a more diverse set of clients and encourages interoperability between different classes of clients.
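The validating-versus-non-validating question above is easy to make concrete. As a minimal sketch (using Python's standard library purely for illustration, not any particular Web client's code), a non-validating parser checks only well-formedness, so it happily accepts a document that violates its own DTD's content model; a DTD- or schema-validating client would reject the same document:

```python
import xml.etree.ElementTree as ET

# A document whose internal DTD subset says <list> may contain only
# <item> elements, but whose body violates that content model.
doc = """<?xml version="1.0"?>
<!DOCTYPE list [
  <!ELEMENT list (item*)>
  <!ELEMENT item (#PCDATA)>
]>
<list><not-an-item/></list>"""

# xml.etree is a non-validating parser: it checks well-formedness only,
# so the invalid document is accepted without complaint.
tree = ET.fromstring(doc)
print(tree[0].tag)  # -> not-an-item
```

A validating client refuses this document while a non-validating one displays it, so two conforming XML processors disagree about whether the very same content is acceptable.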

These disadvantages should be weighed against the advantages of sending formats over the Web. Examples of such advantages are:

HTML
HTML is a markup language at the appropriate level in the semantic spectrum: it is easy to understand, and it encourages device independence. A more semantic language would rapidly become either very difficult to use correctly or specialized to a few fields, while a less semantic language would not encourage device independence.
CSS
CSS is useful to send over the wire because sending style over the network allows author, user, and user-agent preferences to interact to determine how a document is displayed. It does not seem possible to allow input into styling from both ends of the wire without allowing style to be sent over the wire.
ECMAScript (ISO 16262) and DOM
The ability to send script over the network is necessary for allowing many types of response to user-interaction to occur without client-server communication that would increase response times to unacceptable levels.
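The interaction of author, user, and user-agent preferences that CSS enables follows a defined ordering, the cascade. The following is a hypothetical sketch (not any browser's actual implementation) of the CSS 2.1 origin-and-importance precedence for a single property:

```python
# Ascending precedence of (origin, important) pairs per CSS 2.1, section 6.4.1:
# user !important declarations outrank even author !important ones.
PRECEDENCE = [
    ("user agent", False),
    ("user", False),
    ("author", False),
    ("author", True),
    ("user", True),
]

def winning_value(declarations):
    """Return the value of the highest-precedence (origin, important, value)
    declaration; among equal-precedence declarations, the later one wins."""
    best = None
    for origin, important, value in declarations:
        rank = PRECEDENCE.index((origin, important))
        if best is None or rank >= best[0]:
            best = (rank, value)
    return best[1]

decls = [
    ("user agent", False, "serif"),
    ("author", False, "Helvetica"),
    ("user", True, "large-print-font"),  # e.g. an accessibility override
]
print(winning_value(decls))  # -> large-print-font
```

Note how the user's !important declaration outranks the author's choice; this three-way negotiation between author, user, and user agent is exactly what sending style over the wire makes possible.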

Preventing fragmentation

These risks to interoperability threaten to fragment the Web, especially between desktop clients and small devices. While a combination of carefully designed documents and carefully designed profiles might allow graceful degradation in the absence of full interoperability, past experience with HTML on the Web has shown that this is unlikely to happen.

Why is fragmentation bad? It reduces the amount of Web content available to any user, since it requires that authors produce content for all fragments of the Web, which most authors will not do. The W3C and its leaders have taken a strong position against fragmentation in the W3C's basic principles and in Tim Berners-Lee's opposition to the .mobi TLD proposal.

While it is in the financial interest of the W3C to work on whatever standards interest its members, it is not in the best interest of the Web. That is not an inherent conflict, as long as the W3C makes it clear which standards (and profiles) are intended for use on the Web. If the W3C does not act, the problem will have to be solved either by some other standardization body or by the market. While solution by the market may not sound inherently bad, it is worth remembering that the rules for error handling in traditional HTML were settled by the market, and the end result was bad for competition and bad for small devices.

David Baron, 2004-05-11