More W3C Controversy (17:58 -0700)
There's been a good bit of controversy recently due to concerns in the Web browser community about the SVG Tiny 1.2 specification moving to Candidate Recommendation (the W3C's call for implementations stage):

- Ian Hickson (a long-time Web standards developer and the man behind the WHATWG) objected to its duplicating features of many other Web standards.
- Boris Zbarsky (a leading Gecko/Mozilla developer) objected to the group failing to clarify basic concepts and to its not following W3C process, said he no longer thinks the SVG working group is acting in good faith, and said he no longer plans to work with the SVG working group.
- Maciej Stachowiak (a leading WebKit/Safari developer) objected to the addition of text layout features to SVG that largely duplicate, but with significant differences, features that already exist in HTML+CSS (a rough sketch of the kind of duplication at issue follows this list), and said that he no longer thinks it makes sense for Web browser vendors to participate in the development of SVG at the W3C.
- Ian Hickson also objected to the SVG spec adding a new script processing model that is incompatible with the one used by HTML on the Web.
- Robert O'Callahan (another leading Gecko/Mozilla developer) objected to SVG's choosing link targeting behavior that was compatible with WebCGM rather than with HTML, since being compatible with HTML ought to be more important for a standard intended for the Web.
- Björn Höhrmann (a frequent and very knowledgeable commenter on many Web standards) detailed his formal objections and the alleged violations of W3C process through which they were ignored.
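For what it's worth, here is a rough sketch of the duplication Maciej objected to. The HTML+CSS half is ordinary; the SVG half uses the textArea element that SVG Tiny 1.2 introduces for wrapped text, with attribute values I've made up for illustration, so the details may not match the final spec:

    <!-- Wrapped text as the Web already does it, with HTML+CSS -->
    <div style="width: 200px; font: 12px sans-serif">
      Text that wraps automatically inside a 200-pixel-wide box.
    </div>

    <!-- Roughly the same result using SVG Tiny 1.2's textArea element,
         whose wrapping and layout rules are defined separately from CSS -->
    <svg xmlns="http://www.w3.org/2000/svg" version="1.2" baseProfile="tiny">
      <textArea x="0" y="0" width="200" height="100"
                font-family="sans-serif" font-size="12">
        Text that wraps automatically inside a 200-pixel-wide box.
      </textArea>
    </svg>

Two ways of saying almost the same thing, with subtly different layout rules, is exactly the sort of thing that makes it hard to implement both interoperably.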
The technical comments in these messages are just the latest in a very long string of such comments; there are dozens, if not hundreds, of technical problems of similar magnitude. What's new is that the frustration with the entire process is being expressed more explicitly than it was in the past.
Around the same time, prominent figures in the Web development community have raised concerns about the lack of progress within the W3C. Jeffrey Zeldman criticized the W3C for its lack of concern for the needs of Web developers. Eric Meyer (a leading author on CSS) echoed these concerns, and pointed out that Molly Holzschlag's rebuttal to Zeldman's criticism actually accepted one of Zeldman's key points.
While these two areas of criticism may initially appear unrelated, I think they're actually very closely related. But explaining how they're related requires a bit of a detour into understanding how the W3C works.
The W3C today
The first thing to understand about the W3C is that it is a consortium. Over 400 companies pay to be members of the W3C, which allows them to participate in many W3C activities. The W3C, in turn, has over sixty technical employees who work on the things that the members are paying for.
The first thing that might surprise readers here is that there are over 400 member companies. Web developers might wonder whether there are really that many companies making browsers or authoring tools, or whether the membership includes a lot of medium-sized Web design companies. Neither is actually the case. And that goes a long way toward explaining what Molly Holzschlag called the W3C's “frightening disregard for the real needs of the workaday Web world.” If most of the member companies are paying the W3C to work on other things, then the W3C will probably end up working on other things.
So what do the W3C members want? For a start, have a look at the six domains that the W3C organizes its work into: Architecture, Interaction, Quality Assurance, Technology and Society, Ubiquitous Web, and Web Accessibility Initiative. Two of these six (Quality Assurance and Web Accessibility Initiative) are interested largely in refining the technologies produced in the other domains. Of the remaining four, the stuff that Web browsers do lives mostly in the Interaction and Architecture Domains, and it's mostly the Interaction Domain where there is interest in developing new standards relevant to Web browsers. So I want to focus on what W3C members want from the Interaction Domain.
"Follow the money" is often given as a good way to figure out motive. Why would companies want to be members of W3C? Because it helps their business. For example:
- A company that develops authoring tools might want to build a W3C format that their authoring tools can produce in the hopes that Web browsers will implement that format and authors will buy their tools (especially if the format is hard to write by hand). Likewise for a company that sells consulting services. Once the format is developed such companies might want to use the W3C to force Web browsers to implement the format.
- A large company (or group of companies) that uses Web technologies as part of their business, but not on the Web, might want to influence standards to make the technologies more useful for their use so that they can use off-the-shelf Web software in their business.
- A reasonably competitive industry, like the industry of software providers for cell phones, where a significant number of companies write software for a number of cell phone companies, might need a forum for standardization of what technologies are used by cellular carriers to send content to cell phones. The W3C is one of the standards development organizations competing for this (rather significant) standardization business, and some working groups that I've interacted with seem dominated by companies in the mobile industry.
- A company that has a business closely tied to the success of the Web (and I'd only put some browser makers, and only a handful of other W3C members that I've interacted with, in this category) might be interested in improving the experience of users or authors on the Web. The business interests of browser makers aren't necessarily aligned with making the Web better for users or authors. Some have alternative technologies that compete with the Web, some promote the implementations of standards used in their browsers for purposes other than the Web (with competing requirements), and some might have business interests aligned only with users, or only with authors (although I can't think of any browser makers in this last group).
These motives lead groups within the W3C to spend significant amounts of time on things that don't help the Web. For example, a company that is using W3C technologies in a non-Web environment may push the issues that arise in its environment onto the agendas of working group meetings. Essentially, it is paying the W3C to have experts on the technology (the working group) solve its problems. And those experts are often quite willing, since seeing one's invention used in more places appeals to the vanity of the inventor.
This problem is minor compared to the time that's been spent lately in fights between working groups and the communities they're associated with. The biggest such disputes have generally been between people involved with browsers for personal computers (some of which also run on mobile devices) and people involved with browsers developed primarily for mobile devices. While I don't know all that much about the cell phone industry, my general impression is that the carriers are interested in providing Web browsing (although perhaps at pricing plans that tend to make it relatively rarely used and thus probably not all that profitable). However, cell phone providers also provide a lot of content that's part of their network—content from which they can make money more directly. At one time they required software on cell phones that supported WML so they could produce their content in WML, while also allowing access to WML content on the Internet in the hopes that a separate Mobile Web would develop. Due perhaps to a combination of the lack of momentum of this Mobile Web, the increasing capabilities of the hardware on cell phones, and political pressure from organizations like the W3C that wanted one Web rather than two (one of which they didn't get the standardization business for), the companies decided to move to using Web standards on their mobile phones, perhaps with the high-level goal of actually being compatible with the Web.
However, compatibility is really in the details. Web browser developers all know that being able to display Web pages requires being compatible with other browsers not just by supporting the same standards and doing what those standards say (or in some cases all doing the same thing that's different from what the standards say), but by being compatible on minor details that are not mentioned in the standards (especially the poorly written standards). Even if this level of compatibility were met, there would still be huge obstacles to making the same Web pages work on desktop and mobile devices, which have vastly different displays or other output methods, and vastly different keyboards, keypads, pointers, or other input methods.
In other words, a large part of the mobile industry is using software that implements some Web standards, but largely not compatibly enough with the content on the Web to be able to use it. However, they're still developing content in their own walled gardens. And since that content works, it's the stuff that drives their development. Making that content work interoperably across multiple implementations requires standardizing what standards are required by Mobile Web browsers and standardizing much of the detailed behavior that's needed for compatibility. The problem is that they don't have much incentive to choose behavior that's compatible with the content on the Web. Or something roughly like that; I don't actually understand the details, but I have seen the results.
So we end up in a world where mobile browsers implement some of the same standards as desktop browsers (HTML, CSS), although generally not as well, and where they implement some different standards (e.g., SVG), perhaps better. The level of interoperability between these two worlds is not high enough to make it easy to develop content that works well in both. So, naturally, left to their own devices, the two worlds would diverge: into two different sets of rules for handling the ambiguities in the standards, two sets of common “bugs” (where all the implementations on one side disagree with the standard in the same way), and eventually (or already) two different sets of supported standards. In other words, basing both mobile and desktop technology on the same underlying specifications doesn't actually accomplish anything useful if the implementations of those specifications don't interoperate. However, the W3C staff and W3C policies tend to try to force the two to converge even if neither side actually wants this convergence. Generally, W3C process forces reuse of standards even when those standards are inappropriate. So whichever side of the divergence (desktop CSS, or mobile SVG and CDF, or in some cases other communities within the W3C) is the first to write down the rules it depends on for interoperability gets to write the official W3C way of handling the ambiguity. At least, that's how it works according to the rules at the W3C; neither side really likes it. This causes each side to attack and try to block the advancement of the specifications on the other side.
This picture should make clearer the causes of both sets of criticism described above. First, the Web browser community has to stop things in SVG that are incompatible with or inappropriate for the Web, because if we don't stop them we'll never be able to standardize the Web behavior at the W3C. Second, the Web browser community isn't making much progress on standards relevant to the Web because it's spending all of its time fighting the larger Mobile Web community.
Ideal standards for the Web
If we want to fix these problems, we should think about what it is that the Web really needs from standardization.
One of the reasons we want standards is to get interoperability: a situation where there are multiple clients and multiple servers that can all talk to each other, or multiple authors and multiple readers, where the readers can all read what the authors have written. Interoperability can also be achieved by copying rather than by standardization, but standardization has some advantages over copying:
- It can produce interoperability faster than copying, since all participants in the market can implement at the same time, and since less time is wasted doing reverse-engineering.
- It can improve the competitiveness of the market by reducing the costs of entry (reading standards vs. reverse-engineering).
- It can improve the competitiveness of the market by spreading the burden of bug fixing among all participants rather than putting all that burden on the followers (to copy the behavior of the market leader).
- It can (if it's done in certain ways) improve competitiveness by providing some protection against certain types of legal problems, such as patent lawsuits.
To get interoperability on the Web, what we really need are well written specifications with clear conformance requirements and clear error handling requirements. And we need good test suites that thoroughly test the specifications. The Web has already suffered many times when poorly written or poorly tested specifications caused loss of interoperability.
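As a small illustration of why error handling matters (my own example, not taken from any of the discussions above): HTML 4 never said what a parser should do with misnested tags, so browsers historically recovered in different ways and built different document trees, and therefore different rendering and script behavior, from the same input.

    <!-- HTML 4 doesn't define how to recover from misnested inline elements,
         so different parsers have historically built different trees here. -->
    <p><b>bold <i>bold italic</b> just italic?</i></p>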
The other important thing to consider is that we want to choose the right behavior to standardize. (This isn't really specific to why we want standards; we want the right behavior whether or not there's a standard.) This means that we want to consider the needs of all participants. For document formats, this means authors, users, and implementors of the tools that they use. (The W3C process currently relies on the implementors to represent the other two. And that doesn't always work out so well; sometimes authors will get more representation than users, and sometimes the other way around.) This includes getting input from experts on issues like internationalization and accessibility, which affect some users.
If the Web would benefit from using the same standards that are used elsewhere, then it may be worth considering the needs of the other potential users of those standards. However, the Web is large enough and important enough that the advantages of taking a back seat to other users (a larger user base, additional tools) are unlikely to be nearly as large as the disadvantages (technologies less well suited to the Web).
A way forward?
I used to think that the W3C should focus on things that are compatible (in terms of architecture, interoperability, and maintenance of existing invariants) with what is already on the Web and designed primarily to improve the Web. This belief led me to complain about the fragmentation of document format specifications and to complain about the results of a W3C workshop. However, this belief is not shared by many W3C members, and it is not the focus underlying much of the standardization business that the W3C wants to attract in order to attract members.
I think such a focus is necessary if we assume that W3C standards are supposed to be implemented by Web browsers—an assumption that some W3C staff and some member companies vehemently insist is true. It is the combination of this assumption (held by some people) and the lack (from other people) of common belief in this narrow focus that has led to many of the recent controversies about W3C specifications, including those about SVG that I mention above. These controversies, in turn, reduce the resources spent on development of what really matters to Web developers.
Such a focus is also necessary to ensure that all W3C specifications can be interoperably implemented together. For example, if two communities (say, the Web and the Mobile Web) build on the same underlying technology (say, the subset of CSS used in SVG) but using different specifications (say, the rest of CSS, or SVG) then the content built using one of the latter specifications might depend on ambiguities in CSS being clarified one way or on having certain bugs, and the content built using the other might depend on different clarifications or bugs. This creates an environment where somebody who wants to implement both the rest of CSS and SVG on top of the subset of CSS in SVG can't interoperate with both sets of existing content. This causes the two communities (the Web and the Mobile Web) to diverge, and eventually not to demand the same standards be implemented. In other words, the only way this focused model works is if the W3C produces new specifications slowly enough that everybody involved can implement all of them. And that doesn't fit with the W3C's business model.
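To make that concrete with a deliberately artificial example (the ambiguity below is invented for illustration; it is not a claim about what the actual CSS or SVG specifications say): suppose the shared CSS subset never defined what a percentage height resolves against when the container has no specified height.

    /* Hypothetical, invented ambiguity: what does 50% resolve against
       when the container's height is unspecified? */
    .panel { height: 50%; }

    /* If desktop Web content comes to depend on this being treated as 'auto',
       and mobile SVG content comes to depend on it resolving against the
       viewport, then a single implementation of both specifications can't
       match both bodies of existing content at once. */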
So I've now come to believe the opposite. In other words, given the breadth of activity within the W3C, we can no longer assume that all the W3C's specifications are part of a single plan. Groups within the W3C should be allowed to produce specifications whose features overlap with those of other W3C specifications. No members of the W3C should be obliged to implement any specifications, or criticized for failing to do so simply because the specification they do not implement is a W3C Recommendation. Instead, specifications should compete on their own merits among implementors, authors, and users.
Accepting this does require giving up something that some might consider significant: the ability to put pressure on Microsoft to implement the W3C standards that are already interoperably implemented by Mozilla, Opera, and Safari, such as many parts of CSS and the DOM. Or, rather, the ability to put such pressure on Microsoft on the basis that these things are W3C standards. Microsoft does recognize the legitimacy of the W3C because the W3C has been a leading Web standards organization since the time when Microsoft was the competitor trying to beat the market leader (Netscape). But I don't think it has a history of yielding to pressure to implement W3C standards simply on the basis of their being W3C standards, rather than their meeting the needs of authors and users. And I don't think accepting this idea of the W3C being broader than the Web removes any ability to complain about bugs in existing implementations, at least for specifications already shown by other implementors to be compatible with the Web. In the end, pressure to implement new specifications really needs to come from authors who are using the new features that Mozilla, Opera, and Safari are implementing, which means making these engines more interoperable, making them easier to write for, and getting them more market share. (Furthermore, large organizations may be less hesitant (for legal reasons, perhaps?) to implement new specifications produced by a standards organization that they already know and understand than specifications produced elsewhere.)
In any case, I don't think the W3C can continue trying to be both a focused organization and a broad organization. I think it currently tries to be both at the same time, and gets the technical disadvantages of both approaches, the technical advantages of neither, and the financial advantages of the latter. I've come to accept that it's not going to be the focused organization that I'd like it to be. Given that, I think the W3C and the community around it need to fully accept the consequences of being a broader organization.
It's time for the Web browser community to stop using up its resources attacking specifications that we're not interested in implementing. One of the reasons there's been so little advancement of the standards used in Web browsers is that we've been spending most of our standardization effort fighting against proposals from others that don't fit with the Web, or working to improve proposals from others that aren't top priorities for the authors and users of the Web. We should work on, and implement, the standards that we think are appropriate for Web browsers, and ignore the rest. We should spend our time improving what Web developers and users want, not waste it improving what is less important or criticizing what isn't going to work in the first place. That requires considering what's important at a high level before diving in, something that isn't always easy and is easily forgotten. But we should spend the effort so that we work on what matters.