Transitioning to a 4096-bit RSA OpenPGP key

I created a new GnuPG key two months ago (see key ID 0x4A456FBA). Now is as good a time as any to publicly announce it. Information for the key:

pub   4096R/4A456FBA 2009-05-08 [expires: 2015-01-01]
      Key fingerprint = E95D 7465 5B35 C5F6 B3B6  68CC 20C6 F0A6 4A45 6FBA
uid                  Samat K Jain 
uid                  Samat K Jain 
uid                  Samat K Jain 
uid                  Samat K Jain 
sub   4096R/8D18D72F 2009-05-15 [expires: 2015-01-01]

All this information (as well as the downloadable public key itself) is available on my CryptoKeys wiki page.

The new key uses 4096-bit RSA for both digital signatures and encryption. The change is prompted by questions regarding SHA-1’s viability, detailed by Daniel Kahn Gillmor. The concern is not new: Bruce Schneier reported SHA-1 weaknesses back in 2005. The concerns have simply grown worse, and are likely to keep growing. So much so that the US government’s NIST has recommended phasing out SHA-1 by the end of 2010. GnuPG’s maintainers don’t trust SHA-1 either: upstream GnuPG now defaults to RSA as well.

In this space was a paragraph (or four) describing in a little more detail the interaction between encryption algorithms (e.g. RSA, DSA), encryption keys, and hash algorithms (e.g. SHA-1/SHA-160, SHA-512). But as an end user, I don’t care, and I don’t think other end users need to care either. With encryption, I follow this mantra: use the defaults; more than likely, you don’t have a clue what you’re doing if you stray. If you use OpenPGP with an older DSA-based key (2048-bit RSA keys are fine), keep in mind there may soon be issues regarding its security, and you should switch to DSA-2 or RSA (the new default) instead.

Since SHA-1 hasn’t actually been broken yet, I’ve decided to set an expiration date on my old key (0x1A1993D3) rather than outright revoke it.
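For reference, a rough sketch of setting that expiration with GnuPG (this is an interactive session; 0x1A1993D3 is my old key ID from above, and the keyserver is just an example):

```shell
# Open the old key for editing (drops into an interactive gpg> prompt)
gpg --edit-key 0x1A1993D3

# At the gpg> prompt:
#   expire     (choose a new expiration, e.g. 6m for six months)
#   save       (write the change and exit)

# Publish the updated key so others pick up the new expiration date
gpg --keyserver hkp:// --send-keys 0x1A1993D3
```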

Monospaced font for the Firefox AwesomeBar

In the shadow of my Flickr userstyle that adds black borders around photos is another, much simpler one, now on Use a monospaced font for the AwesomeBar (aka the URL bar, URL field, etc).

This isn’t that original or clever: it’s actually included in the userChrome-example.css file found in most older Firefox user profiles. However, that file is no longer included with new profiles as of Firefox 3.5, so it’s a bit more difficult to discover.
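For reference, the whole userstyle amounts to a one-rule userChrome.css (a sketch: #urlbar is the location-bar element ID, the !important overrides Firefox’s built-in styling, and the file goes in the chrome/ directory of your profile):

```css
/* Use a monospaced font in the AwesomeBar */
#urlbar {
  font-family: monospace !important;
}
```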

Quick and easy network bandwidth benchmarking on Linux and MacOS X

A couple of years ago, I set up my first gigabit Ethernet network. I wanted to test just how fast it could go with the equipment I gave it (that is, the NICs, cabling, and switches it ran on). Gigabit Ethernet can theoretically operate at 1000 Mbit/sec. That translates to 119.209 MiB/sec (1000 Mbit/sec ÷ 8 ÷ 2^20), the units your OS typically displays when doing downloads. How close does your network setup come to that maximum? Copying files between PCs, while a very “real world” test, is limited by how fast your disks can read and write. A specialized tool is needed.
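The arithmetic behind that figure is easy to check with a shell one-liner (a sketch; 1000 is the nominal Mbit/sec line rate):

```shell
# Mbit/sec -> MiB/sec: divide by 8 (bits to bytes), then by 2^20 (bytes to MiB)
awk 'BEGIN { mbit = 1000; printf "%.3f MiB/sec\n", mbit * 1000000 / 8 / 2^20 }'
# prints: 119.209 MiB/sec
```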

While many system benchmark suites include network testing tools, most are not easily separable from their suites, nor easy to install and use.

Enter NetStrain. It’s a very simple C application for Linux and MacOS X designed to stress network connections. Unfortunately, it’s not included in most Linux distributions or MacOS X, so you need to download and compile it yourself.

After compiling, use is simple. One machine acts as a server, and another machine acts as a client. Start the server first with:

netstraind -4 9999

This starts a server using IPv4 networking on port 9999 (use a different port if you know this one is in use; remember to pick one above 1024 if you’re not running as root). On your client machine, start the client and point it at the server (shown here as <server-ip>, with the server listening on port 9999):

netstrain -4 <server-ip> 9999 send

NetStrain will then try to send as much data over your network connection as it can, for as long as the client is running. NetStrain is very spartan, so there are not a lot of options. In addition to sending, you may want to test receiving, as well as simultaneously sending and receiving. Check NetStrain’s README for details.

Most likely, you will not get anywhere near 119.209 MiB/sec, but hopefully you’ll get better speeds than a plain 100 Mbit connection, making the upgrade worthwhile.

What if you want to make things faster (without buying newer, better hardware)? There are many parameters you can tune in your operating system’s networking stack. However, on most modern operating systems, most of them are already set sensibly or configured automatically (e.g. TCP window scaling). The one major tunable left is the MTU (Maximum Transmission Unit).

Data is transferred over Ethernet in packets; the MTU defines the size of those packets. A larger packet size means fewer packets are needed to send the same amount of data, reducing the amount of processing that needs to be done by your computer, switches, and routers. Your computer’s NIC, switches, and routers need to support large-size MTUs, a feature often advertised as “Ethernet jumbo frames.” Jeff Atwood wrote an article on the promise and perils of jumbo frames that you may want to read if you’re interested.
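On Linux, inspecting and raising the MTU can be sketched with iproute2 (eth0 and the 9000-byte size are assumptions; every device on the path must support jumbo frames, and changing the MTU requires root):

```shell
# Show the current MTU on the interface
ip link show eth0

# Raise it to a common jumbo-frame size
ip link set dev eth0 mtu 9000

# Verify jumbo frames survive the path end-to-end, with fragmentation
# prohibited: 8972 = 9000 minus 20 (IP header) minus 8 (ICMP header)
ping -M do -s 8972 <server-ip>
```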

Getting on the microblogging bandwagon

I’m usually a luddite when it comes to the latest Internet fads. I didn’t technically start blogging until 2003. I didn’t create a Flickr account or a Facebook account until 2006. I never bothered with MySpace. I didn’t turn my own site into my OpenID until 2008. True to form, I still hate YouTube (and all web video in general), and have yet to create a podcast or upload a video. I usually don’t think lolcats are funny, either.

Joining in the past year’s latest fad, I’ve started microblogging. Also known as “twittering,” microblogging revolves around publishing little 140-character notes. The idea is that via these little notes you share news, thoughts, ideas, or whatever you happen to be doing at the moment. The notes themselves are known as “tweets,” “dents,” etc.

Believe it or not, you’ve probably been doing a form of microblogging for a while. If you use an IM service and set “Away” messages, you’re microblogging. If you set your status on Facebook or LinkedIn, you’re microblogging as well. The currently accepted notion of microblogging, popularized by the start-up Twitter, is a little different. Instead of your messages being available only to a select group of friends, they are global. Anyone in the world can read and respond to what you’re doing (that is, of course, if you have something interesting to say). Microblogging, Twitter-style, could be considered a type of global instant messaging.


Twitter, however, is a closed service. Your posts, lists of friends, etc. live in a silo owned and controlled by them, and it’s difficult to extract data from that silo. They dictate how and when you’ll use their service, as evidenced by the frequent downtimes (it’s been so bad it spawned a meme, “the fail whale”). They’re also, unfortunately, a company out to make a profit, and at this point it’s not clear how they will do that. What if they disappear tomorrow?

Because of these and many other reasons, I’ve eschewed Twitter and gone with instead. In its simplest description, is an open-source Twitter clone, oriented around a new, openly developed standard for microblogging. You can download the software that runs (called Laconica) and run it yourself. Your data is also available in open formats: you can easily take your posts and friend lists with you. Best of all, you can still interact with other open microblogging sites in a large, distributed network, hopefully making reliability problems a thing of the past.

I’ve been microblogging since the beginning of the year. Most of my entries are about the same topics as this blog: Linux, open-source software, etc. I’ve noticed I also tend to write a lot about New York City. If you care about any of these things, please subscribe to me on If you use Twitter, you can read my cross-posts on my Twitter account too.

New Mexico, slowest Internet in the union

PCMag has ranked states according to average Internet speeds (via GigaOM). New Mexico came in last. I can attest to this… my Internet connection in Las Cruces is a crazy fast 144 Kbps IDSL connection, which costs over $120/month. And it’s been the best land-line Internet access I could get for the past 3 years.

Is there a correlation with New Mexico also being one of the lowest-ranked states with regard to IQ (at 95.7, rank 46 of 50)? One has to think about these things…

Adobe releases pre-release Flash 10 for 64-bit Linux

Today, Adobe released 64-bit Flash for Linux. Finally, I can waste time watching ugly, pixelated Internet video on my 64-bit Linux desktop and laptop, just like all of my 32-bit-confined brothers and sisters on the Internet! (Yes, I know about npviewer; let’s not go there.)

What’s really interesting is that this is Adobe’s first 64-bit release of Flash. That is, Linux users got it first, before users of Windows Vista x64 and MacOS X. It probably does not mean anything, especially since Adobe has mentioned 64-bit Flash will be released at the same time across platforms, but you can’t help but feel good inside.

Go download it now and remember to report good bugs.

Update: Some quick notes…

  • The tarball provided on the labs website is not the conventional Adobe Flash installer—it just contains the plugin. To use the plugin, drop the .so file into your ~/.mozilla/plugins/ directory.
  • Make sure to uninstall your npviewer-powered 32-bit Flash completely (disabling the plugin within Firefox is not enough). I personally uninstalled it from my system to prevent any conflict.

Creating your own personal aspell dictionary

Something that has bothered me forever is that applications using GNU aspell for spell checking keep marking my name as a misspelling (I’m looking at you, KMail). Most front-end applications don’t provide a way to add your own custom words.

Apparently, creating your own personal dictionary is ridiculously easy with aspell.

If your language is English, create a file in your home directory called ".aspell.en.pws":

personal_ws-1.1 en 0

The first line is a required header. Every subsequent line is a word you want added to your dictionary. I can’t believe I’ve let this sit for so long. Because it’s a plain text file, syncing it between machines to take your dictionary with you is trivially easy.
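As a sketch, the whole file can be created in one go from the shell (the words below are just examples; substitute your own):

```shell
# Create a personal aspell dictionary for English.
# The first line is the required header: personal_ws-1.1 <language> <count>
cat > ~/.aspell.en.pws << 'EOF'
personal_ws-1.1 en 0
Samat
KMail
EOF
```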

Taking Drupal sites offline via mysql and the command line

Drupal-powered websites can be put into an “offline mode.” This is much better than most alternatives (such as taking the web server down entirely), especially for search engines, because the message and HTTP status code served to users and robots alike tell them to patiently come back later.

I’ve found that putting the site into offline mode makes database backups go much faster on heavily trafficked sites (which is obvious). However, for a particular site I was working with, this needed to be done in an automated manner, and on a dedicated database server that did not have access to the Drupal installation.

Most people take their Drupal sites offline through Drupal’s web-based administration interface. Sites can also be taken offline through the Drupal Shell. Neither was suitable for me: the former cannot be automated easily, and the latter requires access to the Drupal installation. Fortunately, a Drupal site can be taken offline by setting values directly in the database, which is easy to do from a bash script with the command-line MySQL client.

Given your database user is my_db_user, password my_password, and database my_drupal_db, the backup script would look something like this:

#!/bin/sh

# Take site offline
mysql --user my_db_user --password=my_password my_drupal_db << EOF
UPDATE variable SET value='s:1:"1";' WHERE name = 'site_offline';
DELETE FROM cache WHERE cid = 'variables';
EOF

# Do stuff here while the site is offline (e.g. backup)

# Bring site online
mysql --user my_db_user --password=my_password my_drupal_db << EOF
UPDATE variable SET value='s:1:"0";' WHERE name = 'site_offline';
DELETE FROM cache WHERE cid = 'variables';
EOF

Update: The original version of this article had some problems on some setups with the variables table being cached. I added another SQL statement to make sure this cache is flushed so the site actually reflects its configuration.

Update: This method really doesn’t work that well, and the more I think about it, there isn’t a way to get around writing something that interacts with Drupal. I’m working on a script that will be more foolproof.

Python-like tuple unpacking for PHP

Python provides a neat way for functions to return multiple values via “tuple unpacking”. For example:

def blah():
  return ('one', 'two')

rval_1, rval_2 = blah()

The same can be done in PHP relatively easily via the list() construct:

function blah() {
  return array('one', 'two');
}

list($rval_1, $rval_2) = blah();