David Baron's weblog: April 2007


Monday 2007-04-30

Video drivers (23:43 -0700)

For the past few Linux distro releases, I've found the most painful part of running Linux on the desktop to be video drivers. (It used to be wireless, but since NetworkManager made dealing with wireless mostly painless -- except for the lack of any setup option for it in Fedora's installer -- it's now video, even counting the few regressions in the updates since FC6's release.)

My requirements are more than the median user's, but they're really not that complicated. I have an IBM T42 laptop with a 1400x1050 display. At work, I plug it into a dock that is connected to a 1920x1200 LCD monitor. I want to get the benefits of this monitor's high resolution when I'm at work.

With Fedora Core 4, I lived with what the open source drivers could offer. I prefer to stick to the open source drivers for a number of reasons. First, I'm more comfortable using software if I know people can read the source, even if I'm not the one doing the reading. Second, I've found that using proprietary drivers tends to make my system less stable. This isn't surprising given what I've said about extensions and quality. And third, I'd like to improve the Linux desktop experience -- and that involves understanding it. If Linux on the desktop is ever to be taken seriously by the average user, the experience that matters is the one you get when you install the distribution, without any hard-to-find and hard-to-install proprietary drivers.

This meant that I used my LCD monitor via the DVI port on my laptop's dock, but at 1400x1050 resolution.

But then, when I upgraded to Fedora Core 5, things got worse. As I recall, this was because part of the driver's support was disabled after it crashed on some other variants of the card (although I can't find my source for that anymore). Worse in this case means that there was no longer any video output from my docking station. This pushed me to switch to the proprietary drivers from ATI.

Thanks to packaging done by Livna, this was actually reasonably painless (once I discovered Livna). I could power my DVI output again. I could even display at 1920x1200 (as long as the monitor was plugged in when I started X), and resize the display using xrandr when docking and undocking. I even had some extra hardware acceleration capabilities, although that stopped when I upgraded to Fedora Core 6, and I don't think they made a difference for anything I actually used (except for screensavers). I lost the ability to suspend-to-disk, but suspend-to-RAM still worked, and that was the main thing that mattered to me.
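
A sketch of that resize step: the two sizes are the ones mentioned above, but the exact xrandr invocation is an assumption -- older X servers take a plain size argument (the form sketched here), while newer ones use per-output mode selection.

```python
# Sketch of the xrandr resize done on dock/undock. The sizes come from
# the post; the "-s <size>" form is the older RandR interface and may
# differ on other setups (assumption, not a universal recipe).

def xrandr_command(docked):
    """Return the xrandr invocation (as an argv list) for a dock state."""
    size = "1920x1200" if docked else "1400x1050"
    return ["xrandr", "-s", size]

# In real use: subprocess.call(xrandr_command(docked=True))
print(" ".join(xrandr_command(docked=True)))
```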

But after using Fedora Core 6 for a bit, I started running into problems (after some of the updates) where my computer would hang on resume from suspend. I traced this to an upgrade in the proprietary ATI video drivers, which forced me into a cycle of avoiding ATI driver updates, which eventually forced me into avoiding kernel security updates, which wasn't a good idea.

But I found a solution for that problem. I now run the proprietary ATI drivers without the associated kernel module. The drivers work fine: I get all the functional benefits of the proprietary drivers without the stability or suspend/resume problems. Every time I get updated rpms from Livna, I just rpm -e --nodeps kmod-fglrx.
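
For concreteness, that cleanup step looks like this; the package name and rpm flags are the ones quoted above, and everything else is scaffolding.

```python
# Build the rpm command that erases the fglrx kernel module package
# while leaving the user-space driver installed. --nodeps skips the
# dependency check, since other fglrx packages depend on the module.

def remove_kmod_command(package="kmod-fglrx"):
    return ["rpm", "-e", "--nodeps", package]

# In real use (as root): subprocess.call(remove_kmod_command())
print(" ".join(remove_kmod_command()))
```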

So I started thinking that the next time I get a laptop, I should make sure to get one with Intel graphics, since Intel is nice enough to develop their accelerated driver as open source. Unfortunately, Lenovo only offers Intel graphics in laptops with a 1024x768 display; the 1400x1050 displays require the ATI card.

Last week I wrote a simple python script (works on my system only) to automate the stuff I do on dock / undock. However, it's managed to crash my X server twice; hopefully the sleep(1) will fix that by making things more like they used to be before it was automated.
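
That script isn't published, but its shape might look something like this sketch (the step list is a hypothetical placeholder; the pause between steps is the sleep(1) just mentioned):

```python
import time

def run_steps(steps, run, pause=1):
    """Run each command via `run`, sleeping `pause` seconds between
    steps so the X server sees roughly the pacing of manual operation.
    `run` is a parameter so subprocess.call can be swapped for a stub."""
    for i, step in enumerate(steps):
        if i:
            time.sleep(pause)
        run(step)

# Hypothetical step list; the real script's steps are system-specific.
DOCK_STEPS = [
    ["xrandr", "-s", "1920x1200"],
]

# In real use: run_steps(DOCK_STEPS, subprocess.call)
```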

But I'm somewhat pessimistic about whether this situation will improve, since most of the Linux developer types I run into seem to use the proprietary drivers. If nobody uses the free drivers, they'll never improve. Then again, who am I to talk? Maybe I should give them another try once F7 comes out.

Tuesday 2007-04-17

First attempt at a Mercurial workflow (17:51 -0700)

For the past three months I've been using Mercurial for my Mozilla development work. What I've been doing is a slight improvement over the way I managed my work in progress on top of CVS, but it has significant problems and I'm not very happy with it.

Since Mozilla is still using CVS, the first part of using Mercurial has involved keeping my own Mercurial mirror of the Mozilla source tree. I haven't been doing a real conversion; I've just been checking out occasionally from CVS and committing the result into Mercurial in order to get the work other people do, and doing the reverse to commit my work into CVS. This involves an extra working tree on my desktop machine at home that wouldn't need to exist if Mozilla actually switched to Mercurial.
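
The round trip can be sketched as two command sequences (the directory layout and commit message are hypothetical; the cvs and hg commands are standard ones):

```python
# Sync the Mercurial mirror from CVS: update the CVS checkout that the
# hg repository lives on top of, then record the result as a changeset.
PULL_FROM_CVS = [
    ["cvs", "-q", "update", "-dP"],           # bring the tree up to date
    ["hg", "addremove"],                      # track added/removed files
    ["hg", "commit", "-m", "sync from CVS"],  # snapshot into Mercurial
]

# The reverse direction: export a finished change and land it in CVS.
PUSH_TO_CVS = [
    ["hg", "diff"],   # produce the patch to apply to the CVS tree
    ["cvs", "commit"],  # commit the applied patch
]
```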

However, the main issue with using Mercurial is how I manage the patches that I'm working on. The fundamental problem is that Mercurial is designed around the assumption that source trees are cheap, and you can use a separate source tree and build for each patch you're working on (for each piece of work intended to be reviewed and committed separately). (I think this is true for other distributed version control systems as well.) Once you merge the work you're doing in one tree with upstream changes, you've lost the separation between the changes in that tree. For Mozilla, that assumption is completely false, due to the size of our source tree and build and the amount of time it takes to compile. I also like the idea of having all my changes in a single tree because I use the build with my changes as my main browser; this helps me catch problems with my changes before I commit them. (git seems not to make that assumption as strongly as the other distributed VCSes; it has some commands (such as rebase) that are designed for keeping patches separate.)

So my first attempt to build a workflow using Mercurial involved depending on the Mercurial extension called mq. This was, in fact, the reason I switched to Mercurial: I was having trouble managing patches that were on top of each other, and vlad suggested mq as a solution. But mq has three fundamental problems:

  1. Many basic operations touch every file touched by one of your patches, since they require pushing and popping patches. (ccache could be a partial workaround for this on Unix-ish systems.)
  2. Merging patches with upstream changes is painful and complicated.
  3. Sharing changes that are in a patch queue between machines (whether they're different systems used by the same developer or systems used by different developers) is very complicated, and sometimes requires disabling mq or unapplying patches to do the necessary push / pull operations (even when doing precisely the right operations).
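
Problem 1 can be made concrete: editing a patch that isn't at the top of the queue means unapplying and reapplying everything above it, and each pop/push rewrites the files those patches touch, so the build system rebuilds them all. A sketch of the sequence, using standard mq commands (the patch name is hypothetical):

```python
# Refresh a patch that is not at the top of the mq patch queue.
REFRESH_LOWER_PATCH = [
    ["hg", "qgoto", "lower-patch"],  # pop/push until this patch is on top
    # ... edit files ...
    ["hg", "qrefresh"],              # fold the edits into the patch
    ["hg", "qpush", "-a"],           # re-apply everything above it
]
```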

In general, common operations require far too many steps, and they don't work as well as a modern version control system should.

I have a pretty good idea how to fix the first two of the above three problems, but it would require some pretty significant changes that I don't have time for. (Describing how would require an even longer post than this one.) But because of the third problem, I'm not sure it's even the right approach. I think fixing the third borders on building a different version control system entirely.

Discussions with Graydon led me to figure out a different workflow that might be better. It will involve two working trees, and will require me to commit any change before even compiling, but it will solve all of the problems mentioned above. (The basic idea is to maintain a branch for each change and an integration branch that integrates all of them.) I'm probably going to try it out when I get the time to convert all my current patches, but I fear that this approach will also require a lot of commands to do common operations. (However, writing an extension to manage them might be a lot simpler than the modifications I'd need to mq.)
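
A sketch of that branch-per-change idea, using separate clones as branches (the usual Mercurial idiom at the time); all the names here are hypothetical:

```python
# One clone per independent change, plus an integration clone that
# merges them all; the build (and daily browsing) happens in the
# integration tree.
SETUP = [
    ["hg", "clone", "mirror", "fix-123"],      # a branch for one change
    ["hg", "clone", "mirror", "integration"],  # the tree that gets built
]

# Fold a change branch into the integration tree (run inside integration/).
INTEGRATE = [
    ["hg", "pull", "../fix-123"],
    ["hg", "merge"],
    ["hg", "commit", "-m", "merge fix-123"],
]
```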

Wednesday 2007-04-11

Geotagged photos, take 2 (19:45 -0700)

Last year I wrote about a flickr plus Google Maps mashup that I wrote to show my geotagged photos. I complained that there was no way to work with the flickr side of the mashup from the client side alone.

I noticed a few days ago that flickr now has a JSON response format as part of their API, so I've now updated my map to use it, and I now have a purely client-side mashup. At some point soon I plan to revise it so it can show more than just my own photos.

(JSON gets around the same-origin restrictions because scripts, like images, can be loaded cross-site. This has some potential security issues if either the data transferred using the script are private or the site loading the JSON has private data served from its domain (that could be stolen by the script in the JSON), but neither of those problems applies in this case.)
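
The mechanism can be sketched in Python: flickr's format=json responses arrive wrapped in a function call (jsonFlickrApi(...) by default; a jsoncallback parameter names a different function), so a cross-site script element hands the data to that function. Unwrapping the callback recovers plain JSON:

```python
import json
import re

def unwrap_jsonp(text):
    """Strip a callback(...) wrapper and parse the JSON inside."""
    m = re.match(r"\s*[\w.$]+\((.*)\)\s*;?\s*$", text, re.DOTALL)
    if not m:
        raise ValueError("not a JSONP response")
    return json.loads(m.group(1))

# jsonFlickrApi is flickr's default wrapper name; the payload here is
# a made-up example, not a real API response.
resp = 'jsonFlickrApi({"stat": "ok", "photos": {"total": "2"}})'
print(unwrap_jsonp(resp)["stat"])  # -> ok
```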

Thursday 2007-04-05

Problems with versioning, part 1, take 2: open environments (17:36 -0700)

I wrote another message (this time on a public list) that restates much of what I said last week about versioning. I'm still planning to write part 2 sometime.