David Baron's weblog: March 2007


Thursday 2007-03-29

Why clearance sometimes needs to be negative (18:04 -0700)

At the CSS working group meeting in Oslo in August 2003, we came up with a concept called clearance to describe the offset used to change the position of a non-floating element that is moved by the clear CSS property. (Previously, the clear property was defined as increasing the margin.)

At our meeting earlier this week in Mountain View, we discussed a testcase where the current rules cause very strange behavior. The simplest form of this testcase is the following (view):

  <div style="float:left;height:10px;width:10px;background:blue"></div>
  <div style="clear:left; margin-top: 50px">hello</div>

In an implementation conformant to the current CSS2.1 spec, the 50px top margin of the second div disappears, which doesn't seem to be desirable behavior. We're still discussing how to fix this.

But one thing that came up during the discussion is why clearance ever needs to be negative (as it is in this case). A simple example of why clearance needs to be negative is the following testcase:

  <p style="height: 20px; margin-bottom: 20px;">
    <img style="float: left">
  </p>
  <p style="margin-top: 20px; clear: left">Hello</p>

In this testcase, if the image is 40px tall (view), no clearance is needed, so the paragraphs are separated by 20px (the collapsed margin). However, if the image is 41px tall (view), the second paragraph needs to clear the image, so we have clearance. Since this clearance separates the two margins, they suddenly don't collapse. So the separation between the two blocks ends up, in the current spec, as 20px margin + -19px clearance + 20px margin == 21px. If we didn't allow negative clearance, the separation would suddenly jump to 40px, which is bad.
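That arithmetic can be sketched as a small script. The helper below is purely illustrative (the function name and defaults are mine, modeling the rule described above, not spec text):

```python
def separation(image_height, first_p_bottom=20,
               margin_bottom=20, margin_top=20,
               allow_negative=True):
    """Distance between the two paragraphs' border edges under the
    CSS2.1 rule sketched above (hypothetical model, not spec text)."""
    # With no clearance, adjacent vertical margins collapse to their maximum.
    collapsed = max(margin_bottom, margin_top)
    # If the second paragraph's top border edge already falls past the
    # float's bottom, no clearance is needed and the margins collapse.
    if first_p_bottom + collapsed >= image_height:
        return collapsed
    # Otherwise clearance separates the two margins, so they no longer
    # collapse; it is chosen so the top border edge lands just past the float.
    clearance = image_height - (first_p_bottom + margin_bottom + margin_top)
    if not allow_negative:
        clearance = max(clearance, 0)
    return margin_bottom + clearance + margin_top

print(separation(40))                        # 20: no clearance, collapsed margin
print(separation(41))                        # 21: 20px + -19px clearance + 20px
print(separation(41, allow_negative=False))  # 40: the sudden jump
```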

Now, you might wonder why we put the clearance before the margin rather than putting it between the margin and the border (which might make sense, since we're clearing the border edge). We can explain that with this testcase:

  <!-- maybe something above to clear -->
  <div style="clear: left; background: yellow">
    <h2 style="margin-top: 1em">Heading</h2>
  </div>

If we had put the clearance between the margin and the border, then if there were no float to clear (view), the h2's margin would collapse with the div's and end up outside the div. But if there were a float to clear past (view), putting the clearance between the margin and border would stop margin collapsing between the h2's top margin and the div's top margin, which would mean that the h2's top margin would suddenly switch to being inside the div.

(For what it's worth, I was never all that happy with the concept of clearance in the first place. It doesn't even fix all the cases where a collapsed margin appears partly on one side of a block's border edge and partly on the other -- although it does reduce such cases to empty blocks only. And then we have to add complicated workarounds to the spec, special rules for the position of empty boxes at the beginning of a block, to ensure they don't end up outside their parent; yet we still allow them to be outside their parent (even without negative margins) when they're at the end of a block.)

Sunday 2007-03-25

Problems with versioning, part 1: open environments (01:14 -0700)

[ This is based on an email message that I wrote a few months ago, and wanted to share. ]

At least in the world of W3C standards, versioning often refers, in part, to the practice of requiring that implementations adding support for newer versions of a specification maintain different handling for content or communication labeled as following the older version. New revisions of specifications that require this behavior tend to be called versions, while new revisions of specifications that do not tend to be called levels, editions, or revisions.

Versioning is great in closed environments where authors can/should control what software the users use, where software is extremely close to conformant (or is the definition of conformance), and where authors want to ensure users use software that understands every piece of the content they wrote. The Web is not such an environment.

Web content should be durable. Documents that we write today should be usable 50 or 500 years from now. If we, the designers of Web standards, constantly use versioning as an excuse to break compatibility, implementations that read the old versions of content will no longer be available for the machines we use in the future. (Not necessarily because people don't want the content, but because there's much less money in it.)

Documents that we write today should also gracefully degrade to be usable in the bulk of the software used to browse the Web today. They need not have full functionality, but they should be usable, and should definitely not carry versioning information that forces older user agents to hide what they could otherwise show. This is essential to allowing forward progress of Web standards in the presence of a slow software upgrade cycle.

Requiring that software handle different versions differently either (1) spreads out testing and development resources among the versions and reduces the conformance to all versions or (2) leads to some versions not being implemented (and perhaps different versions in different implementations). Both of these results reduce rather than increase interoperability.

(There are some who want version indications in documents not so that normal implementations like browsers handle them differently, but so that testing tools like validators handle them differently. To that, I would reply that there are many ways that validators could report useful information about versions or levels without requiring authors to do more work or to choose an arbitrary parameter that matters only to the validator.)

Given the astonishingly poor conformance of all implementations of Web standards, primarily due to the astonishing complexity of those standards (and rapidly increasing complexity, if the W3C's idea of what should be used on the Web is to be believed), I think it is ridiculous to significantly increase the complexity by introducing versioning in more standards.

Saturday 2007-03-24

The Open Web (00:51 -0700)

I was thinking about the various technologies that Mozilla implements, and where they come from. They come from standards bodies with vastly different levels of recognition, structure, openness, and rules, such as ISO, the W3C, the WHATWG, the Unicode Consortium, ECMA, the IETF, and the PNG development group, and from our own developers or the developers of other browsers.

For the long term health of the Web, the important thing about these technologies isn't where they come from. It's that anyone else is free and able to use them. (Use can involve reading, writing, sending or receiving. And it can involve doing these things or writing software that does them.) Some of the things this requires are:

Freedom from legal threats

Companies shouldn't have to worry about being sued or having other legal sanctions placed on them because they implement something. These days, the main threat here is software patents, but there could be others.

No unnecessary dependencies

I'm including a significant breadth of problems in this one point.

Web technology shouldn't be tied to a particular piece of hardware or software. One of the biggest dangers here is ties to operating systems. Some Web technologies, such as ActiveX, are binaries for a particular operating system and hardware. Others are formats implemented only in a plugin that's available only for some operating systems. This prevents new entrants in the market, and thus prevents the innovation that results from competition, and imposes the costs associated with technological stagnation. Another big danger here is standards that make it easy to build pages that work only at a certain screen size or paper size (like PDF, and like some uses of HTML and CSS).

But within this same point I also include accessibility to people with disabilities, and accessibility to speakers of different languages. It's important here that I said unnecessary dependencies. I don't expect a photo site to be fully accessible to blind people or a site written by an English speaker to be readable by somebody who speaks only Japanese. However, I do expect a news article to be accessible to a blind person and I do expect the document formats used to transfer the site written by the English speaker to be equally usable by speakers of Japanese.

The issues of screen size independence, accessibility, and internationalization are actually similar in many regards. People often attempt to solve all of them through the use of rules and guidelines, but the real solution is to design the technology in the first place in a way that makes it easy to do things right and hard to do things wrong. On the Web, this has been done somewhat successfully with internationalization, but not yet with the others.

Clear, precise, and open specifications

The specifications that define how the Web works need to be freely accessible, so that anyone can implement them. This allows for new players to enter the market (in all the areas of use above), which keeps the Web thriving and competitive.

But for anyone to be able to use a technology the specifications also need to be clear and precise. They should also come with tools to verify the correctness of implementations (test suites) and content (validators). Browser makers shouldn't need to reverse engineer other implementations in order to be able to browse the Web. Authors shouldn't need to test content in every browser and work around bugs just to write simple Web pages.

No unnecessary complexity

Making something harder to use is a continuum along the way to making it impossible. Some technologies used on the Web are more complex than they need to be. This makes them harder to write and harder to implement. This makes it harder for new authors and harder for new implementors, which reduces competition and innovation. Keeping the Web open requires keeping Web standards easy to use and implement.

(There's a lot more I could say about many of these points, but I'll resist.)

Blogging (00:08 -0700)

I haven't been blogging all that much lately because I haven't been finishing blog entries. That doesn't mean I haven't started them. I get an idea for something I want to say about technical issues. Sometimes I write down a sentence; sometimes a few hundred words. But, in the midst of everything else I'm trying to do, I'm not very good at coming back to these ideas and finishing them. (Sometimes I later flesh out the argument in private email in response to some other question, and still don't get around to turning the email back into a blog entry.)

So, tonight, I got an idea, wrote down as much as I felt like writing tonight, and I'm going to post it shortly. And hopefully I'll start clearing through the backlog over the next few weeks.

Wednesday 2007-03-14

WICD (14:07 -0700)

There's been a bit of controversy about WICD lately, so I thought I'd share my thoughts on the matter.

I was a member of the CDF working group for its first year or two of existence, but I was there because I was interested in solving the problems that already existed with mixed namespace documents, and left when it became clear to me that if I wanted to work on those problems I'd be working pretty much by myself.

The WICD specifications themselves consist of:

(Or at least they did when I was last involved with the group. I don't expect they've changed that much.)

In other words, it's a wishlist from a small group of authors who happen to represent powerful enough companies that they can get a W3C working group created. At Mozilla, we're interested in what Web authors want, but we can't satisfy all of them, all the time. (We can't implement everything everybody's ever thought of and have it working perfectly by next week. And we certainly can't meet contradicting requirements.)

In this case, though, the wishlist is from authors who are working primarily on walled garden content used in mobile networks, not on Web content. And because of that, I think this specific wishlist should be discounted, not because I hate private content, but because authors of content in walled gardens insist on a level of control that has proven unacceptable on the open Web. (Remember popup blocking? Do we want to have to add blocking of author control over focus (in SVG and in WICD)? I'm not saying Gecko's current focus handling in Web pages is good, but do we really want a standard that prevents us from improving it?)

The other sign that it's designed for walled gardens, where the content provider controls the browsers being used, is that it doesn't bother explaining how the technologies it combines should actually work together. So it's highly unlikely to be implemented interoperably, unless the interoperability is the result of reverse-engineering. But in walled gardens, interoperability doesn't matter so much, since the content authors just write for the chosen browser.

Saturday 2007-03-03

The end of antibiotics? (23:41 -0800)

I find it shocking that the FDA will allow companies to endanger the effectiveness of antibiotics, one of the most important advances in modern medicine, in order to support the horrid (and probably unhealthy for the consumer) manner in which cattle are raised in the United States.