Deciphering Intel’s new X25-M G2 SSD

My laptop hard disk is beginning to die. In what seems like perfect timing, Intel has released a refresh of their X25-M solid state disk (SSD) lineup (via Engadget and Ars Technica). The new models offer a lot over the old ones:

  • Manufactured on a 34 nm vs 50 nm process
  • Faster seek times, both read and write, leading to more I/O operations per second (IOPS)
  • Significantly less expensive (Cited as a 60% price drop, though that’s comparing at-introduction MSRPs. It’s still at least 25% less.)
  • Greater shock tolerance (1500 G vs 1000 G)
  • Future TRIM command support, via firmware upgrade. The ATA TRIM command mitigates SSD fragmentation problems that have been the cause of many performance issues.

While die shrinks usually lead to parts that consume less power, the new X25-M uses the same amount of power when active (150 mW), and actually more power when idle (75 mW vs 60 mW). Still, it’s significantly less power than most laptop hard disk drives (my Hitachi 7K200 idles at 800 mW). [Source: Intel’s technical specifications]

Of course, with all these changes, Intel decided to name the drives the same as the old ones, making it difficult for people who want to buy one right now to know what device they’re actually getting.

This kind of inane marketing isn’t new; the most infamous example on my mind is the Linksys WRT54G. Linksys has (so far) made 6 different revisions of the exact same model, drastically changing the internal hardware throughout the revisions. While most people don’t care, a few did, such as those in the modder community (like myself) who wanted to run modified firmwares. Buying the right revision took a lot of research on the buyer’s part. Manufacturers really should be in the business of making their products easier to buy, not more difficult.

Fortunately, I’ve done the research for you: the new Intel SSDs do have slightly different part numbers, so you can tell the old parts from the new. For example, the old X25-M 80 GB disk has a part number of SSDSA2MH080G1C1, while the newer model has a part number of SSDSA2MH080G201. That is, the part numbers contain either a “G1” or a “G2” corresponding to the revision.

With the glowing reviews for the X25-M since its introduction a few months ago, its new lower price, and, most importantly, the failure of my current laptop disk, I’m going to pick up one of these drives within a week.

Monospaced font for the Firefox AwesomeBar

In the shadow of my Flickr userstyle that adds black borders around photos is another, simpler one: use a monospaced font for the AwesomeBar (aka the URL bar, URL field, etc.).

This isn’t that original or clever, as it’s actually included in userChrome-example.css contained in most older Firefox user profiles. However, this file is no longer included with new profiles as of Firefox 3.5, so it’s a bit more difficult to discover.
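
If you’d rather not install a packaged userstyle, the idea boils down to a single CSS rule. Here’s a minimal sketch that appends it to userChrome.css from the shell (the #urlbar selector matches Firefox 3.x-era builds, and the profile path is an assumption; adjust both for your setup, and create the chrome directory first if it doesn’t exist):

cat >> ~/.mozilla/firefox/*.default/chrome/userChrome.css << 'EOF'
/* Use a monospaced font for the AwesomeBar */
#urlbar { font-family: monospace !important; }
EOF

Restart Firefox for the change to take effect.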

Quick and easy network bandwidth benchmarking on Linux and MacOS X

A couple of years ago, I set up my first gigabit Ethernet network. I wanted to test just how fast it could go with the equipment I gave it (that is, the NICs, cabling, and switches it operated on). Gigabit Ethernet can theoretically operate at 1000 Mbit/sec. That translates to 119.209 MiB/sec (1000 Mbit/sec / 8 / 2^20), in the units your OS typically displays for downloads. How close is your network setup to that maximum? Copying files between PCs, while a very “real world” test, will be limited by how fast your disks can read or write. A specialized tool is needed.
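
As an aside, you can verify that unit conversion from the shell with bc:

echo 'scale=3; 1000 * 10^6 / 8 / 2^20' | bc
# prints 119.209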

While many system benchmark suites include network testing tools, most are not easily separable from their suites, nor easy to install and use.

Enter NetStrain. It’s a very simple C application for Linux and MacOS X designed to stress network connections. Unfortunately, it’s not included in most Linux distributions or MacOS X, so you need to download and compile it yourself.

After compiling, usage is simple. One machine acts as a server, and another machine acts as a client. Start the server first with:

netstraind -4 9999

This starts a server using IPv4 networking on port 9999 (use a different port if you know this one is in use; remember to pick one above 1024 if you’re not running as root). On your client machine, start the client and connect to the server (replace the placeholder address 192.168.1.10 below with your server’s IP; the port here is 9999):

netstrain -4 192.168.1.10 9999 send

NetStrain will then try to send as much over your network connection as it can for as long as the client is running. NetStrain is very spartan, so there are not a lot of options. In addition to sending, you may want to test receiving, as well as simultaneously sending and receiving. Check NetStrain’s README for details.

Most likely, you will not get anywhere near 119.209 MiB/sec, but hopefully you’ll get better speeds than a plain 100 Mbit connection, making the upgrade worthwhile.

What if you want to make things faster (without buying newer, better hardware)? There are many parameters you can tune in your operating system’s networking stack. However, on most modern operating systems, nearly all of them are already set sensibly or configured automatically (e.g. TCP window scaling). The one major tunable left is the MTU (Maximum Transmission Unit).

Data is transferred over Ethernet in packets; the MTU defines the size of those packets. A larger packet size means fewer packets are needed to send the same amount of data, reducing the amount of processing that needs to be done by your computer, switches, and routers. Your computer’s NIC, switches, and routers need to support large-size MTUs, a feature often advertised as “Ethernet jumbo frames.” Jeff Atwood wrote an article on the promise and perils of jumbo frames that you may want to read if you’re interested.
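
As a concrete sketch on Linux with iproute2 (eth0 and 192.168.1.1 are placeholder names; run as root, and remember that every NIC and switch on the path must support the larger MTU):

# Raise the MTU to 9000 bytes, a common jumbo frame size
ip link set dev eth0 mtu 9000
ip link show eth0    # verify the change took effect

# Test that unfragmented jumbo packets survive the round trip:
# 8972 bytes of payload + 28 bytes of IP/ICMP headers = 9000
ping -M do -s 8972 192.168.1.1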

Getting on the microblogging bandwagon

I’m usually a luddite when it comes to the latest Internet fads. I technically did not start blogging until 2003. I didn’t create a Flickr account or a Facebook account until 2006. I never bothered with MySpace. I didn’t set up my own OpenID until 2008. Even so, I still hate YouTube (and all web video in general), and have yet to create a podcast or upload a video. I usually don’t think lolcats are funny, either.

Joining the past year’s latest fad, I’ve started microblogging. Also known as “twittering,” microblogging revolves around publishing little 140-character notes. The idea is that you share news, thoughts, ideas, or whatever you happen to be doing at the moment via these little notes. The notes themselves are also known as “twits,” “dents,” etc.

Believe it or not, you’ve probably been doing a form of microblogging for a while. If you use an IM service and set “Away” messages, you’re microblogging. If you set your status on Facebook or LinkedIn, you’re microblogging as well. The currently accepted notion of microblogging, started by the start-up company Twitter, is a little different. Instead of messages being available to a select group of friends, your messages are global. Anyone in the world can read and respond to what you’re doing (that is, of course, if you have something interesting to say). Microblogging, Twitter-style, could be considered a type of global instant messaging.


Twitter, however, is a closed service. Your posts, lists of friends, etc. live in a silo owned and controlled by them, and it’s difficult to extract data from that silo. They dictate how and when you’ll use their service, as evidenced most by the frequent downtimes (it’s been so bad they’ve spawned a new meme, “the fail whale”). They’re also, unfortunately, a company out to make a profit, and at this point it’s not clear how they will do that. What if they disappear tomorrow?

Because of these and many other reasons, I’ve eschewed Twitter and gone with identi.ca instead. In its simplest description, identi.ca is an open-source Twitter clone, oriented around a new openly-developed standard for microblogging. You can download the software that runs identi.ca (called Laconica) and run it yourself. Your data is also available in open formats: you can easily take your posts and friends lists with you. Best of all, you can still interact with other open microblogging sites in a large, distributed network, hopefully making reliability problems things of the past.

I’ve been microblogging since the beginning of the year. Most of my entries are about the same topics as this blog: Linux, open-source software, etc. I notice that I also tend to write a lot about New York City. If you care about any of these things, please subscribe to me on identi.ca. If you use Twitter, you can read my cross-postings on my Twitter account too.

New Mexico, slowest Internet in the union

PCMag has ranked states according to average Internet speeds (via GigaOM). New Mexico came in last. I can attest to this… my Internet connection in Las Cruces is a crazy fast 144 Kbps IDSL connection, which costs over $120/month. And it’s been the best land-line Internet access I could get for the past 3 years.

Is there a correlation with New Mexico also being one of the dumbest states with regard to IQ (average 95.7, rank 46 of 50)? One has to think about these things…

Adobe releases pre-release Flash 10 for 64-bit Linux

Today, Adobe released 64-bit Flash for Linux. Finally, I can waste time watching ugly, pixelated Internet video on my 64-bit Linux desktop and laptop, just like all of my 32-bit-confined brothers and sisters on the Internet! (Yes, I know about npviewer; let’s not go there.)

What’s really interesting is that this is Adobe’s first 64-bit release of Flash. That is, Linux users got it first, before users of Windows Vista x64 and MacOS X. It probably does not mean anything, especially since Adobe has mentioned 64-bit Flash will be released at the same time across platforms, but you can’t help but feel good inside.

Go download it now and remember to report good bugs.

Update: Some quick notes…

  • The tarball provided on the labs website is not the conventional Adobe Flash installer—it just contains the plugin. To use the plugin, drop the .so file into your ~/.mozilla/plugins/ directory (see the sketch after this list).
  • Make sure to remove your npviewer-powered 32-bit Flash completely (disabling the plugin within Firefox is not enough); I uninstalled it from my system entirely to prevent any conflicts.
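
In practice, installation amounts to something like this (the tarball name here is illustrative; use whatever file the labs page actually serves):

mkdir -p ~/.mozilla/plugins
tar xzf libflashplayer-10.0-linux-x86_64.tar.gz
cp libflashplayer.so ~/.mozilla/plugins/

Afterwards, restart Firefox and check about:plugins to confirm the 64-bit plugin is the one being loaded.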

Creating your own personal aspell dictionary

Something that has bothered me forever is that applications that use GNU aspell for spell checking keep marking my name as a misspelling (I’m looking at you, KMail). Most front-end applications don’t provide a way for you to add your own custom words.

Apparently, creating your own personal dictionary is ridiculously easy with aspell.

If your language is English, create a file in your home directory called “.aspell.en.pws”:

personal_ws-1.1 en 0

The first line is a required header. Every subsequent line is a word you want to add to your dictionary. I can’t believe I’ve let this sit for so long. Because it’s a nice text file, syncing this file between machines to take your dictionary with you is trivially easy.
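
For example, you could create the file and seed it from the shell (the words below are just placeholders; add your own name and jargon):

cat > ~/.aspell.en.pws << 'EOF'
personal_ws-1.1 en 0
KMail
Drupal
EOF

# Words in the dictionary are no longer flagged; this now prints nothing:
echo "KMail Drupal" | aspell list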

Taking Drupal sites offline via mysql and the command line

Drupal-powered websites can be put into an “offline mode.” This is much better than most alternatives (such as taking the web server offline), especially for search engines, as the message and HTTP status codes given to users and robots alike will tell them to patiently come back later.

I’ve found that putting the site into offline mode makes database backups go much faster on heavily trafficked sites (unsurprisingly, since an offline site isn’t busy writing to the database mid-dump). However, for a particular site I was working with, this needed to be done in an automated manner, and on a dedicated database server that did not have access to the Drupal installation.

Most people take their Drupal sites offline through Drupal’s web-based administration interface. They can also be put offline through the Drupal Shell. Neither was suitable for me: the former cannot be automated easily, and the latter requires access to the Drupal installation. Fortunately, Drupal sites can easily be taken offline by setting values directly in the database, which can be done via bash scripts and the command-line MySQL client.

Given that your database user is my_db_user, your password my_password, and your database my_drupal_db, the backup script would look something like this:


#!/bin/bash

# Take site offline
mysql --user my_db_user --password=my_password my_drupal_db << EOF
UPDATE variable SET value = 's:1:"1";' WHERE name = 'site_offline';
DELETE FROM cache WHERE cid = 'variables';
EOF

# Do stuff here while the site is offline, e.g. the backup itself:
# mysqldump --user my_db_user --password=my_password my_drupal_db > backup.sql

# Bring site online
mysql --user my_db_user --password=my_password my_drupal_db << EOF
UPDATE variable SET value = 's:1:"0";' WHERE name = 'site_offline';
DELETE FROM cache WHERE cid = 'variables';
EOF
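
To confirm the flag actually changed, you can query it straight back out:

mysql --user my_db_user --password=my_password my_drupal_db \
  -e "SELECT value FROM variable WHERE name = 'site_offline';"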

Update: The original version of this article had some problems on some setups with the variables table being cached. I added another SQL statement to make sure this cache is flushed so the site actually reflects its configuration.

Update: This method really doesn’t work that well, and the more I think about it, the less I see a way to get around writing something that interacts with Drupal. I’m working on a script that will be more fool-proof.

Python-like tuple unpacking for PHP

Python provides a neat way for functions to return multiple values via “tuple unpacking”. For example:

def blah():
  return ('one', 'two')

rval_1, rval_2 = blah()

The same can be done in PHP relatively easily via the list() construct:

function blah() {
  return array('one', 'two');
}

list($rval_1, $rval_2) = blah();