HTML was initially designed as a semantic markup language, with elements having semantics (meaning) describing general roles within a document. These semantic elements have been added to over time. Markup as it is used on the web is often criticized for not following the semantics, but rather being a soup of divs and spans, the most generic sorts of elements.
The Web has also evolved over the last 25 years from a web of documents
to a web where many of the most visited pages are really applications rather than documents.
The HTML markup used on the Web is a representation of a tree structure,
and the user interface of these web applications
is often based on dynamic changes made through the DOM,
which is what we call both the live representation of that tree structure
and the API through which that representation is accessed.
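This dual nature of the DOM, a live tree and the API to that tree at once, can be illustrated with a toy stand-in in plain JavaScript (the names here are invented for illustration; this is not the real DOM):

```javascript
// Hypothetical, minimal stand-in for the real DOM: the same object is both
// a node in the live tree and the API through which the tree is changed.
class ToyNode {
  constructor(tag) {
    this.tag = tag;
    this.children = [];
    this.parent = null;
  }
  appendChild(child) {
    // Mutating through the API object mutates the live tree itself.
    child.parent = this;
    this.children.push(child);
    return child;
  }
}

const body = new ToyNode("body");
const p = body.appendChild(new ToyNode("p"));
console.log(body.children[0] === p); // true — the tree holds the same object
```

The point is that there is no separate "rendered copy": the object you manipulate is the node in the tree.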
Browsers exist as tools for users to browse the Web; they strike a balance between showing the content as its author intended versus adapting that content to the device it is being displayed on and the preferences or needs of the user.
Given the unreliable use of semantics on the Web, most of the ways browsers adapt content to the user rarely depend deeply on semantics, although some of them (such as reader mode) do have significant dependencies. However, browser adaptations of content or interventions that browsers make on behalf of the user very frequently depend on the persistent object identity in the DOM. That is, nodes in the DOM tree (such as sections of the page, or paragraphs) have an identity over the lifetime of the page, and many things that browsers do depend on that identity being consistent over time. For example, exposing the page to a screen reader, scroll anchoring, and I think some aspects of ad blocking all depend on the idea that there are elements in the web page that the browser understands the identity of over time.
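The identity question can be made concrete with plain JavaScript objects standing in for DOM nodes (a sketch with invented names, not any browser's implementation): a feature like scroll anchoring holds a reference to a node, and that reference stays valid only if updates mutate nodes in place rather than rebuilding them.

```javascript
// Sketch: plain objects stand in for DOM nodes (invented for illustration).
// A browser feature such as scroll anchoring keeps a reference to a node
// and relies on that same object remaining in the tree across updates.
function makeNode(tag, text) {
  return { tag, text };
}

let tree = [makeNode("p", "intro"), makeNode("p", "details")];
const anchor = tree[1]; // e.g. the node the browser anchored scrolling to

// Update in place: the node's identity survives the content change.
tree[1].text = "updated details";
console.log(tree.includes(anchor)); // true — the anchor is still valid

// Rebuild from scratch: the rendered content is the same, but identity is lost.
tree = [makeNode("p", "intro"), makeNode("p", "updated details")];
console.log(tree.includes(anchor)); // false — the browser's reference dangles
```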
This might seem like it's not a very interesting observation. However, I believe it's important in the context of frameworks, like React, that use a programming model (which many developers find easier) where the developer writes code to map application state to user interface rather than having to worry about constantly altering the DOM to match the current state. These frameworks have an expensive step where they have to map the generated virtual DOM into a minimal set of changes to the real DOM. It is well known that it's important for performance for this set of changes to be minimal, since making fewer changes to the DOM results in the browser doing less work to render the updated page. However, this process is also important for the site to be a true part of the Web, since this rectification is important for being something that the browser can properly adapt to the device and to the user's needs.
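A keyed reconciliation pass of the sort these frameworks perform can be sketched in a few lines of plain JavaScript (a simplification with invented names, not React's actual algorithm): children are matched by key, so an unchanged child produces no operation at all, and its real DOM counterpart keeps its identity.

```javascript
// Sketch of keyed virtual-DOM reconciliation (a simplification; real
// frameworks do far more). Children are matched by key, so an unchanged
// child yields no DOM operation and its real node keeps its identity.
function diff(oldChildren, newChildren) {
  const oldByKey = new Map(oldChildren.map(c => [c.key, c]));
  const ops = [];
  for (const next of newChildren) {
    const prev = oldByKey.get(next.key);
    if (prev === undefined) {
      ops.push({ type: "insert", key: next.key });
    } else if (prev.text !== next.text) {
      ops.push({ type: "updateText", key: next.key, text: next.text });
    } // identical node: no op — its DOM counterpart is left untouched
    oldByKey.delete(next.key);
  }
  for (const key of oldByKey.keys()) {
    ops.push({ type: "remove", key });
  }
  return ops;
}

const before = [{ key: "a", text: "one" }, { key: "b", text: "two" }];
const after  = [{ key: "a", text: "one" }, { key: "b", text: "2" },
                { key: "c", text: "three" }];
console.log(diff(before, after));
// → [ { type: 'updateText', key: 'b', text: '2' },
//     { type: 'insert', key: 'c' } ]
```

The minimal-change property matters twice over: fewer operations mean less rendering work, and untouched nodes keep the persistent identity that browser adaptations depend on.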
One of the ways to advance as a software engineer is to be in charge of something, such as a one-time project like implementing a new feature or leading a software release, or an ongoing task such as triaging incoming bugs or analyzing crash reports.
One thing that makes it more likely that you'll be in charge of something is if others trust you to be in charge of that. And you're more likely to be trusted if you've consistently behaved like somebody who is responsible for that thing or similar things.
So what does being responsible look like? Largely, it looks like the behavior you'd expect from a project owner, i.e., the way you'd expect the person in charge of the project to behave. In other words, I think it helps to think of yourself as having the responsibility of the project's owner. (But, at the same time, remember that perhaps you don't, and collaborate with others.) Let's look at two specific examples.
First, what do responsibility and ownership look like for somebody doing triage of incoming bugs? One piece is to encourage more and better bug reports by acting in ways that acknowledge the bug reporter's contribution, such as: making the reporter feel their concerns are heard, not making the reporter waste their time, and improving the bug report on the way (making the summary accurate, adding clearer or simpler testcases, etc.). Another is taking responsibility and following up to make sure important things are handled, and to make it clear that you're doing so. When you do this (or many other things), it's important to make appropriate commitments: don't commit to things if you can't honor the commitment, but avoiding committing to anything is a failure to take responsibility.
Second, what do responsibility and ownership mean for somebody writing code? I think one big piece is that you should do the things you'd do if you were the sole maintainer of the code before you submit it for review. That is, submit code for review when you're actually confident it's ready to be part of the codebase. This implies doing many things, from high level tasks like having a clear model of what the code is supposed to do, to having appropriate tests, assertions, and structure that make future modifications easier and reduce their risk, to more low-level things like looking at all the callers of a function when a change you make to what the function does requires doing so.
Another big piece of responsibility when writing code is taking responsibility for and fixing the problems that you cause. (As you take on more responsibility, you might find others to help you do this, but you're still responsible for it.) How to do this depends on the seriousness of the problems. It sometimes means temporarily reverting the changes while figuring out the longer term fix. In other cases it means writing patches for serious problems promptly. And in less serious cases a quick response may not be needed, but it's useful to communicate that you've concluded the problem is lower priority in case others have a different view of the seriousness.
Having engineers exercise responsibility and ownership in this way is important because having more engineers take responsibility makes a project run better. So it's a characteristic that I like to see in software engineers and one of the characteristics that defines what I see as a good engineer.
It is sometimes easy for technology experts to think about computer security in terms of building technology that can allow them (the experts) to be secure. However, we need to think about the question of how secure all of the users of the technology will be, not the question of how secure the most skilled users can possibly be. (The same applies to usability as well, but I think it's more uniformly understood there.) We need to design systems where everybody (not just technology experts) can be secure. Designing software that is secure only when used by experts risks increasing inequality between an elite who are able to use software securely and a larger population who cannot.
We don't, for example, want to switch to using a cryptocurrency where only the 1% of most technically sophisticated people are capable of securing their wealth from theft (and where the other 99% have no recourse when their money is stolen).
Likewise, we don't want to create new and avoidable differences in employability of individuals based on their ability to use the technology we create to maintain confidentiality or integrity of data, or based on their ability to avoid having their lives disrupted by security vulnerabilities in connected (IoT) devices.
If we can build software that is usable and secure for as many of its users as possible, we can avoid creating a new driver for inequality. It's a form of inequality that would favor us, the designers of the technology, but it's still better for society as a whole if we avoid it. This would also be avoiding inequality in the best way possible: by improving the productivity of the bulk of the population to bring them up to the level of the technological elite.
Support for running animations of 'transform' and 'opacity' on the compositor thread is scheduled to ship next week in Firefox 41. This has been supported in Firefox OS since 1.0, and is something that a number of other browsers do as well, but will now also ship in Firefox for desktop and Android. This means that animations of the CSS 'transform' and 'opacity' properties will run on the compositor thread, which makes them smoother, because they will continue running smoothly when the main thread misses its frame budget (that is, when the main thread stays busy too long to meet the frame rate).
Even better, when we run the animation on the compositor thread, we stop updating style on the main thread as the animation progresses, which reduces the total amount of work that we need to do. However, if the page does anything that flushes style or layout (that is, requests up-to-date style or layout information, such as getComputedStyle(), getBoundingClientRect(), or offsetTop), we recompute the current style on the main thread.
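This division of labor can be modeled in a few lines of plain JavaScript (a simplified model with invented names, not Gecko's actual code): the compositor samples the animation every frame, while the main thread recomputes the animated style only when something flushes it.

```javascript
// Simplified model (invented for illustration, not Gecko's actual code):
// the compositor samples an opacity animation every frame, while the main
// thread recomputes animated style lazily, only when a flush is requested
// (as getComputedStyle() or getBoundingClientRect() would force).
class OpacityAnimation {
  constructor(from, to, duration) {
    Object.assign(this, { from, to, duration });
    this.mainThreadStyleComputations = 0;
  }
  sampleOnCompositor(time) { // runs every frame, off the main thread
    const t = Math.min(time / this.duration, 1);
    return this.from + (this.to - this.from) * t;
  }
  flushStyle(time) { // runs only when the page asks for up-to-date style
    this.mainThreadStyleComputations++;
    return this.sampleOnCompositor(time);
  }
}

const anim = new OpacityAnimation(0, 1, 1000);
for (let frame = 0; frame < 60; frame++) {
  anim.sampleOnCompositor(frame * 16.7); // 60 compositor samples…
}
console.log(anim.mainThreadStyleComputations); // 0 — no main-thread work yet
console.log(anim.flushStyle(500));             // 0.5 — a flush computes it
```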
Running an animation on the compositor thread takes some existing optimizations that browsers have and makes them even more effective, which I'd like to explain in a little more detail. When an element has an active layer (a surface into which its contents are painted, that is given to the compositor thread to be composited), we implement transform and opacity by setting transform or opacity on the layer and having the compositor apply the transform or opacity when compositing. So when we want to run an animation on the compositor thread, we send the animation to the compositor thread, have the compositor update the layer's transform or opacity as the animation progresses, and suppress the corresponding style updates on the main thread.
Even when we support off-main-thread compositing, we don't run transform and opacity animations on the compositor thread in all cases. The biggest exception is that we don't support compositor-thread animation of 3-D transforms in a preserve-3d scene. This means that if an element or its parent uses 'transform-style: preserve-3d', we don't run animations of its transform on the compositor. This is something that we hope to fix quite soon. Also, if the animation is also animating height, width, top, right, bottom, or left, then we do not run the transform animation on the compositor, because it would get out of sync with the size or position changes coming from the animations of those other properties. We also don't support compositor-thread animation of elements with SVG transforms.
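The exceptions above amount to an eligibility check, which can be sketched as a predicate in plain JavaScript (invented names; the real logic lives in Gecko's C++ and handles more cases than these):

```javascript
// Sketch of the eligibility check described above (invented names; the
// real logic is in Gecko's C++ and covers more cases than these three).
function canRunOnCompositor(anim) {
  // No compositor animation of transforms inside a preserve-3d scene.
  if (anim.property === "transform" && anim.inPreserve3dScene) return false;
  // No compositor transform animation when geometry properties animate too,
  // since it would get out of sync with the size/position changes.
  const geometry = ["width", "height", "top", "right", "bottom", "left"];
  if (anim.property === "transform" &&
      anim.otherAnimatedProperties.some(p => geometry.includes(p))) {
    return false;
  }
  // No compositor animation of SVG transforms.
  if (anim.property === "transform" && anim.isSvgTransform) return false;
  // Only transform and opacity animations are eligible at all.
  return anim.property === "transform" || anim.property === "opacity";
}

console.log(canRunOnCompositor({
  property: "transform", inPreserve3dScene: false,
  otherAnimatedProperties: [], isSvgTransform: false,
})); // true
console.log(canRunOnCompositor({
  property: "transform", inPreserve3dScene: false,
  otherAnimatedProperties: ["width"], isSvgTransform: false,
})); // false
```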
[Demo added 22:50] If you want to see the benefits, you can try this demo of the smoothness improvement, comparing Firefox 40 (or older) with Firefox 41 (or newer).
Many people contributed to getting this shipped. I'd particularly like to thank David Zbarsky (who implemented sending animations to the compositor thread and running them there), Nick Cameron (who implemented suppressing the work we do for such animations on the main thread), Brian Birtles, Robert O'Callahan, Chris Jones, Matt Woodrow, Boris Zbarsky, Markus Stange, Jonathan Watt, Cameron McCormack, Alice0775 White, Virtual_ManPL, and Elbart.
(I'd also note that we've often called this project OMTA or off-main-thread animations. However, since the animations are specifically running on the compositor thread, I prefer to call it that.)
Related links:
One of the principles behind HTML5, and the community building it, is that the specifications that say how the Web works should have enough detail that somebody reading them can implement the specification. This makes it easier for new Web browsers to enter the market, which in turn helps users through competitive pressure on existing and new browsers.
I worry that the Web standards community is in danger of losing this principle, quite quickly, and at a cost to competition on the Web.
Some of the recent threats to the ability to implement competitive browsers are non-technical:
Many parts of the technology industry today are dominated by a small group of large companies (effectively an oligopoly) that have an ecosystem of separate products that work better together than with their competitors' products. Apple has Mac OS (software and hardware), iOS (again, software and hardware), Apple TV, Apple Pay, etc. Google has its search engine and other Web products, Android (software only), Chrome OS, Chromecast and Google Cast, Android Pay, etc. Microsoft has Windows, Bing, Windows Phone, etc. These products don't line up precisely, but they cover many of the same areas while varying based on the companies' strengths and business models. Many of these products are tied together in ways that both help users and, since these ties aren't standardized and interoperable, strongly encourage users to use other products from the same company.
There are some Web technologies in development that deal with connections between parts of these ecosystems. For example:
In both cases, specifying the system fully is more work. But it's work that needs to happen to keep the Web open and competitive. That's why we've had the principle of complete specification, and it still applies here.
I'm worried that the ties that connect the parts of these ecosystems together will start running through unspecified parts of Web technologies. This would, through the loss of the principle of specification for competition, make it harder for new browsers (or existing browsers made by smaller companies) to compete, and would make the Web as a whole a less competitive place.