BrowserID session API support for Drupal

Late last week, Mozilla’s Identity team made available a Firefox extension for BrowserID, a new browser-oriented single sign-on mechanism: click a button in your address bar and you’re automagically logged in to a website.

Along with it, they made available a browser session API: the browser can now keep track of whether you’re logged in or logged out and display that state, also in your address bar.

Drupal had a BrowserID module less than 24 hours after BrowserID’s initial announcement (thanks, Isaac Sukin!). Likewise, over the weekend after the session API announcement, I helped out and wrote a patch adding support for the new API.

If you’re familiar with Drupal development, install the Drupal module, apply the patch, install the Firefox add-on, and get browser-integrated, one-click login to your Drupal-powered website.

The patched module is running live on this site right now, so please play with it (myfavoritebeer.org does get boring).

At the moment, Drupal’s BrowserID module does not create an account on my blog, so you must do that first, separately. Create an account here, or if you have an OpenID, log in with your OpenID directly to create one (funny how complicated this has gotten already). Make sure to set and use the same e-mail address as the one tied to your BrowserID. After creating an account, log out, then log back in using your BrowserID. If you have problems or find a bug, please leave a comment on the Drupal issue or on this blog post. Thanks!

[UPDATE: 16 Aug 2011]: Drupal’s BrowserID module now includes my patch; you no longer need to download and apply it separately.

I’m Flattr’ed!

[inline:Flattr-Widget-smaller.jpeg]
2011 Flattr widget

I conducted an experiment back when I wrote my HP N36L review: I added affiliate links to both Amazon and Newegg, hoping to earn some revenue without polluting my site with advertisements.

It was successful; I earned enough to pay for a few cups of espresso, at least.

Many authors have PayPal-powered “tip jars” or links to their Amazon wish lists. I’ve now set up the same, but I think it’s unrealistic to expect visitors to spend the requisite time or money to use them.

Enter Flattr, a new “social micropayments” platform, a tip jar evolved to work on Web scale. Flattr is a quick and easy way to give back to content creators—including myself. Rather than trying to explain it, watch Flattr’s introductory video. I love the cake analogy.

If you want to get an item from my Amazon wish list, please do! But what if you wanted to contribute less? And why would anyone else want to use Flattr over PayPal?

Well, first, it’s simpler. Users need only click a single button (the Flattr widget) to Flattr me. Users also don’t need to worry about figuring out how much to give me; Flattr’s “cake cutting” algorithm does that for you. Nothing stops you from donating more, of course.

Second, Flattr’s rates are better. Say you’ve decided to add $10/month to your Flattr account, and you’ve Flattr’ed 10 people, including me. Each of those people will be entitled to $1. With Flattr’s 10% commission, I’d get 90¢; with PayPal’s default fee schedule, I’d get 67.1¢ ($1-($1×2.9%+$0.30)). Big difference. Flattr works well when you’re dealing with small amounts (called micropayments), exactly the niche market they’re trying to fill.
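To sanity-check that arithmetic, here’s a quick shell sketch (awk stands in for a calculator; the 10% and 2.9% + $0.30 figures are the ones quoted above):

```shell
# Creator's take on a $1.00 donation under each fee schedule
flattr=$(awk 'BEGIN { printf "%.3f", 1.00 * (1 - 0.10) }')            # Flattr: 10% commission
paypal=$(awk 'BEGIN { printf "%.3f", 1.00 - (0.029 * 1.00 + 0.30) }') # PayPal: 2.9% + $0.30
echo "Flattr: \$$flattr  PayPal: \$$paypal"   # Flattr: $0.900  PayPal: $0.671
```

The fixed $0.30 per-transaction fee is what dominates at these amounts, which is exactly why transaction-oriented processors fare so badly on micropayments.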

Imagine you’re walking down the street and hear a great musician, to whom you’d like to donate some small change. It’s easy to do in the real world. With conventional payment systems oriented around transactions, this model doesn’t translate. With Flattr, however, the model does—it brings a donation system and ethic present within the real world onto the Web.

PayPal has its own little-known micropayments platform with a better fee schedule, but it requires the receiver to go through a manual approval process to receive a special account. It’s only available in a few countries and has been a “beta” product for years, implying PayPal does not care much about it. Why should they? PayPal makes its money on high-value transactions and has little incentive to develop micropayments: not until there’s market share and mindshare to steal, something Flattr is building.

Third, with free culture luminaries like Peter Sunde of The Pirate Bay fame behind it, Flattr seems less likely to “censor” recipients, holding funds hostage and even confiscating them, something for which PayPal is notorious. While I’m not worried about anyone censoring my overly-politically correct blog, should I be OK with organizations unfairly censoring others? *cough* WikiLeaks *cough*

Flattr is so nascent that it’s unlikely I’ll earn a significant amount of money, and what meager money I do earn I’ll probably use to Flattr others. But it’s easy to use, it’s low-risk for both me and my visitors, and, most importantly, I believe in its ethos. So why not?

If you like this post or my blog, please Flattr it!

Lessons from Nokia’s CEO: ignore your product & your customers

So, Nokia’s paper-launched their newest flagship phone, the N9.

The N9 is based (well, sort of, anyway; something many media outlets have gotten wrong) on MeeGo, the Linux-based operating system Nokia has been developing in-house for several years. However, in early 2011, Nokia decided to switch their flagship mobile phone OS from MeeGo to Windows Phone 7, effectively aborting all long-term plans and products in the pipeline.

This has universally been regarded as a bad move.

The N9 was apparently far enough along the pipeline, and new Windows Phone 7-based products far enough away, that Nokia released the device anyway. And so far, the N9 is a hit.

The move to the new OS is considered the handiwork of recently-appointed Nokia CEO Stephen Elop. However, an article in the Helsingin Sanomat paraphrases Elop:

In Elop’s words, there is no returning to MeeGo, even if the N9 turns out to be a hit.

So, it doesn’t matter if Nokia’s own products are successful? The business deal made with Microsoft is more important?

I read:

I have taken part in the conversations with the teleoperators and I have been part of the consumer test groups. The feedback has been extremely positive and I am sure that the Windows Phone system will be a great success

And think: teleoperators and consumer test groups are one thing, but what about your own customers and developers?

Lessons learned here: ignore your product, and ignore your customers, and you too can be part of a company as successful as Nokia. Maybe, as CEO, you’ll get your own social media hashtag too.

Following Firefox’s New Development Channels with Ubuntu

[inline:Firefox-Beta-About-Screen.jpeg]

Shortly after Firefox 4’s release, Mozilla announced a move to a channel-based development model, à la Chrome. On Windows and Mac, builds from these channels update themselves; but what about on Linux, where self-updating software and software outside the package manager’s control (i.e. manually installed) are both taboo?

If you use Debian, Mike Hommey and the Debian Mozilla Team’s mozilla.debian.net provides packages for the Firefox Stable, Beta, Aurora, and Nightly channels. Be aware that these packages are still labeled Iceweasel, i.e. they lack the official Firefox branding. They also work on Ubuntu, should you want to use them there.

What if you want something more Ubuntu-specific? PPAs following each of the channels exist, but they’re not obvious to find.

Nightly builds of Firefox trunk, formerly known as “Minefield” builds, are available in the Ubuntu Mozilla Daily PPA. Remember, Nightly builds receive little testing beyond verifying that they compile, and thus may crash frequently. Start using these builds by pasting the following into your terminal:

sudo add-apt-repository ppa:ubuntu-mozilla-daily/ppa
sudo apt-get update && sudo apt-get install firefox-trunk

Firefox’s new build channel, Aurora, has builds that have had more testing than those from Nightly. Builds from the Aurora channel are available from the firefox-aurora PPA, which you can use with:

sudo add-apt-repository ppa:ubuntu-mozilla-daily/firefox-aurora
sudo apt-get update && sudo apt-get install firefox

The Beta channel, containing builds that have received more testing than Aurora’s and are (mostly) ready for release, is available in the firefox-next PPA:

sudo add-apt-repository ppa:mozillateam/firefox-next
sudo apt-get update && sudo apt-get install firefox

Lastly, if you want to follow the Stable channel, consider sticking with Ubuntu’s normal repositories: Ubuntu now has a policy of bringing stable Firefox updates to its releases more quickly. If you really want to test the next Firefox release, use the Beta channel above. If you still want the bleeding edge of stable, there’s the firefox-stable PPA (which will go away soon) and Ubuntu’s Mozilla Security Team PPA, where packages remain only until they’re moved into the main archive.

Notice that the Aurora, Beta, and Stable channels all ship a package with the same name, “firefox”; this means:

  1. You can only install Firefox from one channel at a time.
  2. They all use the same profile and profile registry. If you want separate profiles per channel, you’ll need to manually switch profiles or alter shortcuts to launch the desired one.

Packages in the Nightly channel, however, are named firefox-trunk and can be co-installed alongside builds from another channel.

To switch to another channel, disable the source with Ubuntu’s Software Properties or delete the appropriate file in /etc/apt/sources.list.d/:

sudo rm /etc/apt/sources.list.d/ubuntu-mozilla-daily-firefox-aurora*.list
sudo rm /etc/apt/sources.list.d/mozillateam*.list

Then re-add the appropriate repository to switch to a desired channel.

Hopefully, with easy-to-use PPAs available for each of Firefox’s build channels, more people, including you, will test these builds. Go forth and test!

Using my Creative Commons-licensed photos

Since I’ve become a photobug and licensed those photos under the Creative Commons, I’ve been getting a lot of requests for reuse. After all, that’s what CC-licensed content is for!

The “BY” bit in CC licenses, which stands for attribution, means authors/creators must be attributed if you decide to use a work. It does not specify how a work should be attributed or cited (though, there are some guidelines in CC’s FAQ), leaving it up to the owner of the work—which is the way it should be, of course.

Flickr does a terrible job letting people know how to attribute works. If you’re reading this, I’ve probably directed you here asking how I’d like you to cite or attribute any of my photos.

If you’re just using a photo, simply add a line like this somewhere near the photo, with the appropriate hyperlinks:

Name of photo, linked to Flickr photo page © Samat Jain, used CC BY-SA 2.0.

If you’re creating a derived work (that is, you modified the photo in some way), please use:

Derived from Name of photo, linked to Flickr photo page © Samat Jain, used CC BY-SA 2.0.

Summarized in an example:

[flickr-photo:id=2869930642,size=m]
Hyperthyroidism the morning (with zits!) © Samat Jain, used CC BY-SA 2.0.

Now, a rant on why I ask for all this. There are just two reasons:

  1. I have a philosophical agenda to promote open source and free culture. I’m hoping the explicit wording and hyperlinks do so.
  2. I’m putting photos I took time to take and process on the web, for your free use… asking for a little credit (in exchange for link-fu) is not much to ask! Linking to Flickr fulfills Flickr’s ToS; linking to my homepage makes me happy; and linking to the appropriate license hopefully promotes free culture, informing users of their rights. To that end, if you can, avoid creating hyperlinks with nofollow.

Lastly, if you decide to use a photo, please let me know! I like to keep track of these things, and may link back to your use of my photo. Thanks!

Monitoring Intel SSD lifetime with S.M.A.R.T.

The Internet is abuzz with talk about solid-state drive reliability right now (see a recent article by Jeff Atwood). Random, catastrophic failures aside, how can you tell how much of your SSD’s life you’ve eaten into?

If you have an Intel SSD, it’s pretty easy: Intel SSDs export a S.M.A.R.T. attribute called “Media Wearout Indicator”. Starting at 100 (new), the attribute decreases to, well, zero. Forget how to check it on Linux? It’s easy:

$ sudo smartctl -a /dev/sda | grep Media_Wearout_Indicator
233 Media_Wearout_Indicator 0x0032   098   098   000    Old_age   Always       -       0

The SSD in my laptop is at 98, and my oldest SSD in another system (from mid-2009) is at 97. Yours?
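If you just want the bare number, the normalized value is the fourth column of smartctl’s attribute line, which awk can pull out. (The sample line below is the output shown above; on a live system, pipe `sudo smartctl -A /dev/sda` into the awk instead.)

```shell
# Extract the normalized Media_Wearout_Indicator value (4th column: VALUE).
# Sample line copied from the smartctl output above; replace with live output.
line='233 Media_Wearout_Indicator 0x0032   098   098   000    Old_age   Always       -       0'
echo "$line" | awk '/Media_Wearout_Indicator/ { print $4 + 0 }'   # prints 98
```

The `+ 0` strips smartctl’s zero-padding, turning “098” into 98.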

On to the real news: the OpenStreetMap project has switched their tile rendering server to an SSD (hopefully making tile renders much, much faster). A newer, consumer-grade MLC-based Intel 320 Series 600 GB SSD, in fact. Conveniently, OpenStreetMap monitors their servers with Munin, which by default graphs all S.M.A.R.T. attributes, including Media Wearout Indicator.

Other than the initial import of the tile-rendering database, OSM tile rendering does not consume many write cycles, but it definitely hammers the disk to death on reads. Keep an eye on these graphs to see how their SSD ages over time. And don’t forget to contribute to OpenStreetMap yourself so we can see that number go down a bit quicker (I’m pretty sure OSM doesn’t mind!).

Note: Toby mentioned to me that all the values appear to be pegged at 100. Most of these attributes are dummy values—only “Media Wearout Indicator” and “Available Reserved Space” appear to change with normal use.

What does the “BY” in Creative Commons’ license names mean?

Have you ever wondered what the “BY” in Creative Commons’ license names (e.g. CC-BY, CC-BY-SA, and CC-BY-NC-SA) stands for?

The other bits in the license name are obvious, for the most part:

  • CC – Creative Commons
  • NC – Non-Commercial
  • ND – No Derivatives
  • SA – Share-Alike

“BY” is confusing because it’s not an acronym, a shortening of a word, or anything otherwise obvious. But buried in the Creative Commons FAQ is a mention that it stands for attribution.

If all Creative Commons licenses require attribution (except CC0), why is it included in the license names (especially in the abbreviations) at all?

Increase file descriptors for Transmission on Linux

Have you run out of file descriptors for Transmission? Torrents will stop for no apparent reason, and when you examine one, you’ll see an error similar to:

$ transmission-remote -t 1 -i | grep -i 'open files'
Unable to save resume file. Too many open files.

Time to increase the number of file descriptors available. This article is tailored towards Debian and Ubuntu.

It’s unlikely you’ll need to raise your system’s global limit. Check with:

$ cat /proc/sys/fs/file-max
397460

The OS needs a couple thousand file descriptors for itself; make sure to leave room for them in whatever numbers you choose below. In my case, I have more than enough.

If you still need to raise your system’s limit, you can do so easily. To set it to a million (remembered across reboots):

sudo sh -c "echo fs.file-max=$(dc -e '2 20 ^ p') > /etc/sysctl.d/file-descriptors-max.conf"
sudo service procps restart

While you may not need to change your system’s global limit, you probably will need to change the limit for your users. Check that limit with:

$ ulimit -Sn
1024
$ ulimit -Hn
1024

If you’re working with hundreds of torrents in Transmission (each with dozens to hundreds of files), this isn’t enough. To give a user a few thousand (in the example below, 16,384, plus 128 more for the hard limit), create a new file /etc/security/limits.d/debian-transmission.conf:

sudo sh -c "echo debian-transmission soft nofile $(dc -e '2 14 ^ p') > /etc/security/limits.d/debian-transmission.conf"
sudo sh -c "echo debian-transmission hard nofile $(dc -e '2 14 ^ 2 7 ^ + p') >> /etc/security/limits.d/debian-transmission.conf"
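The `dc` expressions here and in the sysctl step are just powers of two; if you’d rather double-check them, plain shell arithmetic gives the same numbers:

```shell
# The dc invocations compute powers of two:
echo $((1 << 20))                # 1048576  (fs.file-max, "a million")
echo $((1 << 14))                # 16384    (soft nofile limit)
echo $(( (1 << 14) + (1 << 7) )) # 16512    (hard nofile limit, 128 more)
```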

Replace “debian-transmission” with the user that is running Transmission.

For the changes to take effect, you need to log out completely (e.g. close multiplexed SSH connections), then log back in again; or, to be sure, just reboot. You’ll then see that many more file descriptors are available:

$ ulimit -Sn
16384
$ ulimit -Hn
16512
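You can also read a running process’s limits straight from /proc instead of opening a fresh shell; to check the daemon itself, substitute its PID (from `pidof transmission-daemon`) for `self`:

```shell
# Per-process resource limits, straight from the kernel
grep 'Max open files' /proc/self/limits
```

This is the surest way to confirm the daemon actually picked up the new limits after a restart.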

Now, configure Transmission to use them: in /etc/transmission-daemon/settings.json, find the open-file-limit option and set it to a larger number (e.g. 16000 or so). When done, restart transmission-daemon:

sudo service transmission-daemon restart

If you’re not running Transmission as a system user, edit the right settings.json and restart the daemon appropriately.

That’s it. Have fun!

Comments on “3 TB disks are Here” from Linux Magazine

Linux Magazine published an article last week, 3 TB Drives are Here. On Twitter I originally said it was wrong, but that’s a bit harsh. Parts of it, however, are very misleading, and parts of it unnecessarily confusing.

The “2.199 TB” limit describes Logical Block Addressing (LBA), the scheme for addressing sectors on modern disks. Sectors are numbered 0 through n−1, where n is the number of sectors on the disk (its size in bytes divided by the sector size). There’s nothing intrinsically limiting about LBA other than how many bits you devote to storing an address. With this in mind, the sentence:

The LBA scheme uses 32-bit addressing under the MBR partitions.

is very misleading. I hate to be a grammar nazi, but this is a misuse of active versus passive voice: the phrasing makes it seem as if LBA is the limitation, when it’s not. The Master Boot Record (MBR) partition table is what limits LBA addresses to 32 bits, and thus what limits partitions to 2.199 TB.
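The 2.199 TB figure falls straight out of that arithmetic, assuming traditional 512-byte sectors:

```shell
# 32-bit LBA field in the MBR partition table × 512-byte sectors
echo $(( (1 << 32) * 512 ))   # 2199023255552 bytes ≈ 2.199 TB
```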

The article then moves on to discuss 4 KB sectors. While nothing here is wrong, it ignores the fact that the “4 KB sector” disks currently on the market (marketed as “Advanced Format”) do not work in the way described.

Most Advanced Format disks continue to report that their sectors are 512 bytes, a mode called 512e. Because of this, your “4 KB sector” disk is still limited to 2.199 TB when using MBR partition tables (the article, confusingly, implies otherwise).

However, they do use 4 KB sectors internally: requests for logical sectors 0 and 3, for example, hit the same internal 4 KB sector, while requests for logical sectors 7 and 8 hit two different ones. This becomes a significant performance problem when your filesystem uses 4 KB blocks (as most modern filesystems do, including NTFS, ext4, and XFS) that are not aligned to these internal boundaries: a single 4 KB read may force the drive to read 8 KB. The article does not mention this sector-alignment problem at all.
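The mapping is plain integer division: eight 512-byte logical sectors fit in each internal 4 KB sector, so logical sector n lives in internal sector n/8. A quick illustration:

```shell
# 512e: eight 512-byte logical sectors share one internal 4 KB sector
for lba in 0 3 7 8; do
  echo "logical sector $lba -> internal 4K sector $(( lba / 8 ))"
done
```

Sectors 0, 3, and 7 all land in internal sector 0; sector 8 starts internal sector 1. A misaligned 4 KB filesystem block is one that straddles such a boundary.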

Discussing other operating systems, the article vaguely mentions that “several operating systems” have switched to GPT (GUID Partition Table). I really hate how vague the article is here: as far as I know, the only OS that does this by default is Apple’s Mac OS X. The article also sells Linux short when it says:

In the consumer world this is a downside since most motherboards don’t have a BIOS that is GPT capable. This can affect all operating systems including Linux.

because, in fact, most motherboards do have a BIOS that can boot from GPT, especially when you use a hybrid MBR. And Linux, with GRUB 2, works fantastically with them. Compatibility is, unfortunately, a crapshoot and never advertised, but every system I’ve experimented on, some as old as 2005, booted fine from GPT. Where Linux definitely falls short is that no distribution (AFAIK) will set up a GPT for you.

With that in mind, it’s difficult to agree when the article says:

Linux is ready for 4KB drive sectors with 64-bit LBA addressing

when it really isn’t. The largest obstacle is the sector-alignment problem the article glosses over, best explained by Theodore Ts’o’s Aligning filesystems to an SSD’s erase block size. His post, in short:

  • Linux partitioning utilities are hard-coded to assume 512-byte sectors, which creates problems for 4 KB-sector disks and for devices with even larger block sizes (e.g. SSD erase blocks)
  • Various filesystem structures are not aligned to 4 KB boundaries (Ts’o points out LVM)

All of which kills performance and, in the case of SSDs, shortens lifespan.

One thing that bothers me about the article: while it tries to explain the issues with 4 KB-sector disks, it does nothing to tell you how to mitigate or avoid them. Stay tuned over the next couple of weeks for a few articles from me explaining how to work around these problems on Linux.