Today the Web browser market involves a number of competitors that have, as far as I'm aware, largely similar software architectures. While some components run on other threads or in other processes (e.g., HTML parsing, some of the later stages of the rendering process, plugin execution), the bulk of the work for loading and displaying a Web page is done on a single thread.
This model isn't necessarily ideal for the long term. Today's advances in computing hardware are coming largely in the form of more cores rather than faster cores. A single-threaded browser doesn't take advantage of more cores. To do that, we need a more parallel browser.
There have been research projects (such as Leo Meyerovich's) with the goal of implementing a more parallel Web browser. Mozilla's Servo project aims to build a parallel Web browser at the level of quality needed for real-world use.
What have the people working on these projects discovered about implementing CSS in parallel? Much of it is quite doable. However, some parts of CSS introduce a lot of extra complexity to parallel implementations, for example, layout involving floats (as Meyerovich and Bodik describe in Fast and Parallel Web Page Layout). Floats require speculative work that might need to be thrown out depending on the result of another parallel task.
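To sketch what that speculation looks like, here is a toy model of my own (all names here are hypothetical; this is not code from Servo or any real engine): subtrees are laid out in parallel under a guessed available width, and all of that work is discarded and redone if a concurrently resolved width turns out to differ from the guess.

```python
from concurrent.futures import ThreadPoolExecutor

def layout(subtree, avail_width):
    """Stand-in for real subtree layout: just records the width it was given."""
    return (subtree, avail_width)

def speculative_layout(subtrees, guessed_width, resolve_real_width):
    """Toy model of speculative parallel layout. The subtrees are laid out
    in parallel under guessed_width; resolve_real_width stands in for a
    concurrent task (e.g., float placement) whose result determines the
    real available width."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda s: layout(s, guessed_width), subtrees))
    real = resolve_real_width()
    if real != guessed_width:
        # Speculation failed: the parallel work is wasted and must be redone.
        results = [layout(s, real) for s in subtrees]
    return results
```

The cost of this pattern is exactly what the floats example illustrates: when the guess is wrong, the parallelism bought nothing.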
This leads to the question: what characteristics of CSS are difficult to implement in a parallel implementation, and how can we avoid creating more of them?
I suspect that one thing that is likely to affect whether a specification's requirements are easy or hard to implement in a parallel implementation is the manner in which it specifies layout algorithms. Compare:
Some specifications describe layout algorithms in terms of mathematical rules or constraints that correspond to the calculations that would happen in an implementation. For example, the rules on computing widths and margins in CSS 2.1 describe mathematical rules that are likely to appear in implementations in a form similar to the way they are described.
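As a concrete (and much simplified) illustration of this style: for a block-level, non-replaced element, CSS 2.1 requires that the used margins and width satisfy the constraint margin-left + width + margin-right = width of containing block, with 'auto' values absorbing the leftover space. A rough sketch of solving that constraint, ignoring borders, padding, min/max constraints, and right-to-left direction:

```python
def resolve_width(cb_width, margin_left, width, margin_right):
    """Simplified sketch of the CSS 2.1 width constraint for block-level,
    non-replaced elements: margin-left + width + margin-right = cb_width.
    Borders, padding, min/max-width, and RTL handling are omitted."""
    if width == 'auto':
        # An auto width fills whatever space the margins leave over
        # (auto margins become 0 when width is auto).
        ml = 0 if margin_left == 'auto' else margin_left
        mr = 0 if margin_right == 'auto' else margin_right
        return ml, cb_width - ml - mr, mr
    free = cb_width - width
    if margin_left == 'auto' and margin_right == 'auto':
        return free / 2, width, free / 2   # both auto: the box is centered
    if margin_left == 'auto':
        return free - margin_right, width, margin_right
    if margin_right == 'auto':
        return margin_left, width, free - margin_left
    # Over-constrained: CSS 2.1 says to adjust margin-right (in LTR text).
    return margin_left, width, free - margin_left
```

The point is that the code is nearly a transcription of the mathematical rules in the specification.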
My gut feeling is that specifications written this way are likely to be the easiest to implement in new architectures such as parallel layout implementations.
Other specifications describe layout algorithms in terms of step-by-step algorithms to be followed. For example, the Flexible Box Layout Module describes the details of layout in terms of a step-by-step algorithm.
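To illustrate the flavor of that style, here is my own toy reduction (not the spec's actual algorithm, which also freezes items that violate their min/max sizes and loops until everything is resolved) of the "distribute free space in proportion to flex-grow" step:

```python
def distribute_flex(container, bases, grows):
    """Toy reduction of one step of the flexbox layout algorithm:
    1. sum the items' flex base sizes,
    2. compute the remaining free space in the container,
    3. hand the free space out in proportion to each flex-grow factor.
    The real spec algorithm adds min/max clamping and re-iteration."""
    free = container - sum(bases)
    total_grow = sum(grows)
    if free <= 0 or total_grow == 0:
        return list(bases)
    return [b + free * g / total_grow for b, g in zip(bases, grows)]
```

Even in this reduced form, the natural implementation is a sequence of dependent steps rather than a set of independently checkable constraints.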
While specifications written this way may be easy to implement as described in a sequential implementation, I worry that they may be hard to optimize and hard to implement in a parallel implementation.
Finally, other specifications describe layout rules in terms of a set of constraints that are largely unrelated to how they are implemented. For example, CSS 2.1 describes rules for placement of floats in terms of a set of constraints such that the float must be placed as high as possible without violating the constraints, or (given a vertical position) as far to the left or right as possible without violating the constraints. However, these rules bear little resemblance to how they need to be implemented, because there is actually a complex interaction between these constraints and how lines are laid out. For example, rule 6 says:
6. The outer top of an element's floating box may not be higher than the top of any line-box containing a box generated by an element earlier in the source document.
In order to implement this, browsers have to deeply integrate the rules for float placement with their line-breaking code: the horizontal space available to each line depends on the floats placed beside it, while the position at which a float may be placed depends, per rule 6, on where lines have already been laid out.
None of these interactions are mentioned in the constraints governing float placement. That the implementation of floats needs to be so disconnected from the way they are described in the specification might be related to how hard they are to implement. (See my earlier post on hidden complexity in specifications.)
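To make the interdependence concrete, here is a toy model of my own (a simplification for illustration, not browser code) of choosing a float's vertical position under rule 6 plus an "enough horizontal room" constraint. Note how it needs both the line boxes already laid out and the floats already placed:

```python
def place_float(fwidth, line_tops, avail, floats):
    """Toy model of float placement. line_tops: top edges of line boxes
    already laid out; floats: (top, bottom, width) tuples of floats
    already placed; avail: the containing block's available width.
    Assumes some band eventually has room for the new float."""
    # Rule 6: the float's top may not be above the top of any earlier line box.
    top = max(line_tops, default=0)

    def used(y):
        # Horizontal space consumed by earlier floats at vertical position y.
        return sum(w for (t, b, w) in floats if t <= y < b)

    # Step down past bands where earlier floats leave too little room.
    while used(top) + fwidth > avail:
        top = min(b for (t, b, w) in floats if b > top)
    return top
```

In a real engine this runs interleaved with line breaking, since placing the float changes the width available to subsequent lines, which is precisely the circularity that the specification's constraints never mention.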
I suspect these fake constraints are likely to cause the most implementation difficulty, in both sequential and parallel algorithms.
Over the past few years, we've been adding features to the Web very quickly. This means that the Web's complexity is also increasing very quickly, probably faster than it ever has before. And we need the Web to move quickly in order to compete with other less-open platforms. But as the Web becomes more powerful, I hope that we don't end up constraining its future too much, though I'm worried that we might be doing so.
Providing the new layout capabilities that flexbox provides is very important for the Web as a platform. But we still need to be careful when doing so; we can't judge these features to be implementable based only on today's style of implementations, particularly when we know those implementations aren't sustainable in the long term. But my guesses as to what might or might not be implementable also aren't much good; we need real feedback from researchers who have worked on parallel implementations. And we need this feedback on tomorrow's Web technologies while those technologies are still being developed, rather than after they're already baked into the Web.