Deprecating functions and methods in C++

Refactoring a C++ code base for a binary API, I needed to deprecate some functions and communicate the changes to API users. The changes were noted in the API’s documentation, but people don’t read documentation, so I wanted the compiler to warn users as well (not that people read compiler warnings either!).

C++14 has a [[deprecated]] attribute, but what if you want to use it when supported and fall back to something else on other compilers?

Macros to the rescue! The macro below uses the canonical C++14 attribute when supported, and falls back to the equivalent proprietary mechanisms for GCC and Microsoft Visual C++ respectively.

// Helper to deprecate functions and methods
// See https://blog.samat.io/2017/02/27/Deprecating-functions-and-methods-in-Cplusplus/
// For C++14
#if __cplusplus >= 201402L
    #if defined(__has_cpp_attribute)
        #if __has_cpp_attribute(deprecated)
            #define DEPRECATED(msg, func) [[deprecated(msg)]] func
        #endif
    #endif
// For everyone else
#else
    #ifdef __GNUC__
        #define DEPRECATED(msg, func) func __attribute__ ((deprecated(msg)))
    #elif defined(_MSC_VER)
        #define DEPRECATED(msg, func) __declspec(deprecated(msg)) func
    #endif
#endif

To use it: in the header file for your API, simply wrap a function or method declaration with the macro. From:

void go(size_t goRadius, float one, float two, float three);

wrap it like so:

DEPRECATED("Use goNew()", void go(size_t goRadius, float one, float two, float three));

And you’ll get a warning. Here’s what it looks like with GCC 6.2:

/api.cpp: In member function ‘void SomeClass::go()’:
/api.cpp:104:23: warning: ‘void SomeClass::go(size_t, float, float, float)’ is deprecated: Use goNew() [-Wdeprecated-declarations]
     go(10, 1, 1, 1);
                   ^
In file included from /api.cpp:17:0:
/api.h:135:37: note: declared here
     DEPRECATED("Use goNew()", void go(size_t goRadius, float one, float two, float three));
                                    ^
/api.h:41:63: note: in definition of macro ‘DEPRECATED’
     #define DEPRECATED(msg, func) [[deprecated(msg)]] func
                                                       ^~~~

The macro is not perfect, however (or rather, compilers are not).

The canonical way to check whether [[deprecated]] is supported is with __has_cpp_attribute(deprecated); unfortunately, GCC 6.2 defines __has_cpp_attribute regardless of whether you are in C++14 mode or not, and then prints a warning in -pedantic mode, even though the attribute is supported!

In the above snippet, the C++14 method is only used if the compiler fully supports C++14 and is in C++14 mode. If that’s not important to you, consider removing that extra check and using this instead:

// Helper to deprecate functions and methods
// See https://blog.samat.io/2017/02/27/Deprecating-functions-and-methods-in-Cplusplus/
// For C++14
#if defined(__has_cpp_attribute)
    #if __has_cpp_attribute(deprecated)
        #define DEPRECATED(msg, func) [[deprecated(msg)]] func
    #endif
// For everyone else
#else
    #ifdef __GNUC__
        #define DEPRECATED(msg, func) func __attribute__ ((deprecated(msg)))
    #elif defined(_MSC_VER)
        #define DEPRECATED(msg, func) __declspec(deprecated(msg)) func
    #endif
#endif
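Note that with either snippet, a compiler matching none of the branches leaves DEPRECATED undefined, so the wrapped declarations won’t compile at all. If you’d rather degrade gracefully on such compilers, append a no-op fallback (my suggestion, not part of the original snippet):

// Fallback: if none of the branches above defined DEPRECATED, make it a
// no-op so wrapped declarations still compile, just without any warning
#ifndef DEPRECATED
    #define DEPRECATED(msg, func) func
#endif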

Going HTTPS

This blog, my landing page, wiki, and a few of my other Websites are now being served encrypted over HTTPS (see the lock icon in your address bar?), thanks to Let’s Encrypt. Along with TLS, I’ve enabled HTTP/2.

[Screenshot: blog.samat.org served over HTTPS]

Hackers won’t find anything sensitive on my public Websites, but my private Websites (e.g. my ownCloud and Tiny Tiny RSS instances) have needed more security for a long time.

Enabling HTTP/2 was very easy, as HTTP/2 support is shipped with the ‘http2’ module in Apache 2.4.17 and later. While easy, it wasn’t obvious; I’ve written a tutorial for enabling HTTP/2 on Apache. Redirecting non-HTTPS connections to HTTPS was trickier than I expected, so I’ve written a tutorial for HTTPS redirects with Apache’s mod_rewrite too.

I created my TLS certificates as part of Let’s Encrypt’s closed beta. I have an unconventional and complex Apache setup (something I’ll simplify, one day…) and because of bug 1531, a problem in an upstream library, I can’t use the official client the way it was meant to be used (i.e. “install” or “auth” commands). I don’t think I wanted an automated script editing config files on my servers anyway.

With a lot of fiddling, I’ve figured out how to use the official letsencrypt client reverse proxied through Apache, which will let me update certificates regularly without headache.
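Roughly, the idea looks something like the sketch below (the port, hostname, and exact directives are illustrative assumptions, not my real configuration; it needs mod_proxy and mod_proxy_http enabled): Apache hands ACME HTTP-01 challenge requests off to the letsencrypt client running in standalone mode on a local port, so certificates can be renewed without stopping the Web server.

<VirtualHost *:80>
  ServerName example.com

  # Forward ACME challenge requests to the letsencrypt client listening
  # locally (port 8080 is an arbitrary choice for this sketch)
  ProxyPass /.well-known/acme-challenge/ http://127.0.0.1:8080/.well-known/acme-challenge/
  ProxyPassReverse /.well-known/acme-challenge/ http://127.0.0.1:8080/.well-known/acme-challenge/
</VirtualHost>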

Enabling HTTP/2 on Apache 2.4 on Debian or Ubuntu

Apache 2.4.17 ships with mod_http2. Available in Debian 9 (stretch), Ubuntu 16.04 (Xenial Xerus) (see comments), and Ubuntu 16.10 (Yakkety Yak), it brings HTTP/2 support to one of the Internet’s most popular Web servers. Assuming you’ve already configured an SSL/TLS Website, this quick tutorial will show you how to enable HTTP/2.

Based on mod_h2, the module is still very experimental. It should be enabled manually, on a site-by-site basis, via the Protocols directive. The module’s defaults otherwise don’t need to be changed.

First, enable the module:

sudo a2enmod http2

In the <VirtualHost> stanzas for your Website served over TLS in your Apache configuration, add the Protocols directive:

<VirtualHost *:443>
  ServerName example.com

  Protocols h2 http/1.1

  # Other configuration stuff here…
</VirtualHost>

Restart Apache:

sudo service apache2 restart

If you have curl 7.34.0 or later, you can test whether HTTP/2 is working by running:

curl -k -v --http2 https://example.com/

and looking for mentions of “http2” in the output.

While you’re fiddling with your Web server configuration, consider updating your SSL settings with Mozilla’s great SSL configuration generator.

Getting All Past Events for your Meetup.com group

If you’re an organizer on Meetup.com, it’s a pain to find events that happened in the past in your group. Meetup doesn’t have a way to search only within your group, and paging through the calendar is tedious if your group is old or has a lot of events.

I’ve written a quick Python script that uses Meetup.com’s API to get all of a Meetup group’s past events and generate a report that’s easy to search and view. For the Jornada Hiking & Outdoors Club, you can see a report of all our previous events.

The script is on GitHub as a gist if you’d like to run it against your own Meetup.com groups. You’ll need to know the group’s “custom address” (which is also in the URL for your group) and have a Meetup.com API key.

After installing the dependencies (Python 3.4+, click, jinja2, and requests), usage is pretty easy:

Usage: Meetup-past-events.py [OPTIONS]

Options:
  --groupname TEXT  Name of group in Meetup.com URL, i.e. http://meetup.com/<groupname>/
  --apikey TEXT     Your Meetup.com API key, from https://secure.meetup.com/meetup_api/key/
  --help            Show this message and exit.
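Under the hood, the script just pages through Meetup’s events API and renders the results with jinja2. Here’s a rough sketch of the fetching side (the endpoint and parameter names reflect Meetup’s older /2/events API as I understand it, and aren’t necessarily exactly what the gist does):

import requests

def fetch_past_events(groupname, apikey):
    """Fetch every past event for a Meetup.com group, following pagination."""
    url = "https://api.meetup.com/2/events"
    params = {"group_urlname": groupname, "status": "past", "key": apikey, "page": 200}
    events = []
    while url:
        response = requests.get(url, params=params)
        response.raise_for_status()
        data = response.json()
        events.extend(data["results"])
        # meta.next holds the URL of the next page (empty when done) and
        # already carries the query string, so drop params afterwards
        url = data["meta"].get("next") or None
        params = {}
    return events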

If you’ve found this blog post but downloading and running the script is too much for you, feel free to contact me with a link to your group and I’ll e-mail you a list of your group’s events.

Slimming an existing Raspbian install

Raspbian is the definitive full-featured Linux distribution for the Raspberry Pi. As it is tailored for educational use, there is a lot of software that is unnecessary (i.e. bloat) if you intend to use your Pi headless.

While various slimmed down Linux distributions exist for the Raspberry Pi, what if you want to slim down an existing Raspbian install?

I encountered this problem when I hosted a Raspberry Pi with Raspberry Pi Colocation from PCextreme. You’ll get a Raspberry Pi with stock Raspbian, complete with X11 and a bunch of other software unnecessary for use in a datacenter.

Based on the official Raspbian build scripts (previously asb/spindle), you can copy and paste the commands below to uninstall all the extra educational and GUI packages:
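Roughly, it boils down to something like the sketch below. This is a representative illustration only, not the gist itself; the package names and the exact list may differ from the maintained version linked below.

# Representative sketch only; package names are illustrative.
# Remove (not purge!) the desktop customizations; purging this package can
# delete /etc/network/interfaces (see the update at the end of this post).
sudo apt-get remove -y raspberrypi-ui-mods

# Purge educational and GUI packages (illustrative, not exhaustive)
sudo apt-get purge -y scratch squeak-vm squeak-plugins-scratch wolfram-engine \
    dillo gpicview penguinspuzzle idle idle3 python-pygame

# Drop now-unneeded dependencies and leftover configuration files
sudo apt-get autoremove --purge -y
sudo apt-get purge -y $(dpkg -l | awk '/^rc/ { print $2 }')
sudo apt-get clean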

The full snippet is available on GitHub as a gist—I’ll keep it up to date, and feel free to fork it and add or remove whatever other packages you find necessary.

Download and run it with:

sudo cp /etc/network/interfaces /etc/network/interfaces.bak
wget https://gist.githubusercontent.com/samatjain/4dda24e14a5b73481e2a/raw/5d9bac8ec40b94833b4e9938121945be252fdee1/Slim-Raspbian.sh -O Slim-Raspbian.sh
sh ./Slim-Raspbian.sh | sudo sh

The above backs up /etc/network/interfaces in case it’s deleted (see comments).

Coming soon: how to create your own Raspbian image!

UPDATE [19 Aug 2015]: Robert Ely pointed out that purging raspberrypi-ui-mods will delete your network config—I’ve updated the gist so this package is only removed, not purged. I’ve also added a line from Gareth Jones that will remove the configuration files for packages no longer installed.

UPDATE [06 Jan 2016]: Raspbian now offers a “lite” image that doesn’t include a lot of this bloat. If you’re setting up a new Raspberry Pi, consider using that image.

Realtek 8188eu-based Wi-Fi adapters on the Raspberry Pi

NOTE: If you’re using Raspbian 2015.01 or later, the Raspberry Pi kernel 3.18 now includes this driver—you don’t need to compile anything and this tutorial is now irrelevant!

Raspberry Pi enthusiasts like to use Realtek 8188eu-based Wi-Fi adapters. Known as “micro” Wi-Fi adapters, they are small and inexpensive. For example, the TP-LINK WN725N V2 can be found for under $20 (the V1 and V3 use different chipsets. Thanks for making everything confusing, TP-LINK!).

As of October 2014 and Linux 3.12.29+, the driver for this Wi-Fi adapter isn’t in the Raspberry Pi Linux kernel, so these Wi-Fi adapters are not supported out-of-the-box, despite what you may otherwise read on the Internet. They definitely don’t work in a plug-and-play manner.

You can get these adapters to work by using an out-of-tree driver (i.e. a driver not included with the Linux kernel). People on the Internet have resorted to doing all these silly things like downloading random kernel modules or compiling the vendor Linux kernel trees from scratch trying to get this driver to work. You’re probably reading this article because you, like me, thought it was silly to chase files around on some random Internet forum.

The source code for the 8188eu driver is available on GitHub. Rather than trying to look for random files and hope they work, you can keep the source for the module on your Raspberry Pi and recompile the module as needed. Below are copy-and-pastable instructions to do this on Raspbian.

Make sure you have a compiler, Git, etc. installed:

sudo apt-get install build-essential git

Download the source code for the driver. I prefer to put source code for stuff like this somewhere convenient in my home directory, like ~/src/drivers/.

mkdir -p ~/src/drivers
cd ~/src/drivers/
git clone https://github.com/lwfinger/rtl8188eu.git

Now that you have the source code for the driver, you need the source code for the kernel running on your Raspberry Pi. Finding the source code for the currently-running kernel on the Raspberry Pi can be a pain. I like to use notro’s rpi-source tool, which you can download, install, and run:

sudo wget https://raw.githubusercontent.com/notro/rpi-source/master/rpi-source -O /usr/local/bin/rpi-source && sudo chmod +x /usr/local/bin/rpi-source && /usr/local/bin/rpi-source -q --tag-update
sudo rpi-source

If you get an error about GCC, use the “skip-gcc” flag:

sudo rpi-source --skip-gcc

Now, compile the driver:

cd ~/src/drivers/rtl8188eu
make all
sudo make install

Your Wi-Fi should be working, though you’ll need to configure it (configuring it is out of scope for this article).

You can double-check whether the adapter is recognized by looking at the output of the following command (assuming you have only one Wi-Fi adapter, it will be named wlan0):

sudo iwconfig wlan0

If you see nothing, the driver may have been installed incorrectly or your Wi-Fi adapter may not be working. If you see something like:

wlan0     unassociated  Nickname:"<WIFI@REALTEK>"
          Mode:Master  Frequency=2.412 GHz  Access Point: E8:DE:27:9E:65:D7   
          Sensitivity:0/0  
          Retry:off   RTS thr:off   Fragment thr:off
          Power Management:off
          Link Quality=0/100  Signal level=0 dBm  Noise level=0 dBm
          Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
          Tx excessive retries:0  Invalid misc:0   Missed beacon:0

Then you are good to go!

All of this seemed like a lot of work compared to hunting down the file, right? Except, it isn’t: when you update the kernel on your Raspberry Pi and reboot, the Wi-Fi will stop working because it will no longer have an appropriate driver. Rather than having to go on a witch hunt for files, you can simply recompile the driver again while offline:

cd ~/src/drivers/rtl8188eu
git pull # Run this if you have Internet access even though your Wi-Fi is down
make all
sudo make install

The above is all you need to turn your Raspberry Pi into a working Wi-Fi client.

If you want to use your shiny new Wi-Fi enabled Raspberry Pi as an access point, there’s a gotcha with Realtek Wi-Fi adapters (I’ve found this true of both the 8188eu and 8192cu chips): they won’t work with hostapd as included with Raspbian or Arch Linux. You’ll need to use a custom hostapd, such as this one posted by Jens Segers, to get a working access point.

Setting up package caching for pip and npm

If you’re on a metered Internet connection (like me), downloading and re-downloading packages to setup your Python virtualenvs and Node.js projects isn’t very fun. Besides wasting data, it’s slow.

These package managers actually have caching built-in, though it needs to be configured. Follow the instructions below to configure Python’s pip, Node.js’ NPM, and Twitter’s Bower. I assume you’re on Linux and want to use XDG-compliant directories.

Python pip

Configuring a cache directory for pip will cache both source and wheel-based packages.

mkdir -p ~/.pip ~/.cache/pip
cat >> ~/.pip/pip.conf << EOF
[global]
download_cache = ~/.cache/pip
EOF

pip’s package cache caches things by URL, not by package, which means pip’s caching mechanism doesn’t work with mirrors. The pip manual describes a mechanism for fast and local installs if this is a problem for you.

Node.js NPM

NPM does cache packages by default, but in a non-XDG-compliant directory. Reconfigure it like so:

npm config set cache ~/.cache/npm
mkdir -p ~/.cache/npm
mv ~/.npm/* ~/.cache/npm/
rm -rf ~/.npm/

See NPM’s documentation for package caching for further detail.

Twitter Bower

Bower, like NPM, also caches packages by default in a non-XDG-compliant directory. Reconfigure it, and create a .bowerrc:

bower cache-clean
cat >> ~/.bowerrc << EOF
{
  "cache": "$HOME/.cache/bower/"
}
EOF
mkdir -p ~/.cache/bower

None of pip, npm, or bower purge outdated or unused packages. You will have to manage this yourself, unless you are OK with these caches growing without bound (disk space is cheap these days).

Attempting to manage these caches is tricky: since modern UNIXes no longer update last-accessed times (aka atime) on files, you can’t purge packages based on when they were last used. I leave figuring out the best way to purge these directories as an exercise to the reader (please leave a comment!).
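One crude starting point, by no means the best way, is to purge by modification time instead, e.g. anything untouched in 90 days:

# Delete cached files that haven't been modified in 90 days
find ~/.cache/pip ~/.cache/npm ~/.cache/bower -type f -mtime +90 -delete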

First post!

I’m rebooting this blog. It’s a long time coming.

My first “blog” was back in 2003. As was fashionable back then, I wrote my own, custom blogging software in PHP. In 2005, I ditched the custom software and started using Drupal, starting with Drupal 4.5.

I became fairly skilled with Drupal: I wrote themes and modules, launched several Websites for myself and clients, ran a Drupal-oriented consulting firm and Web hosting company, and managed scalability at a startup built on Drupal.

I, very painfully, upgraded those sites and this blog, which started with Drupal 4.5, all the way up to Drupal 7. It was not easy or fun.

I had to constantly deal with security upgrades. Upgrading to new Drupal versions required a witch hunt to find which 3rd party modules were deprecated, which new modules I should use to replace them, and in what ways those modules worked differently. And then there was all the spam—even with Acquia’s Mollom service, too much spam got through, requiring constant moderation.

The thing that did Drupal in for me was SA-CORE-2014-005, fixed in Drupal 7.32. After over 10 years of following Drupal security updates, I missed one, by just a few hours—and my site was hacked. While restoring from my backups, I made a mistake and deleted the most recent one.

If I wasn’t turned off to PHP Web applications before, I’m really, really turned off to them now. After using Drupal for almost 10 years, I’m switching to something else (at least for my personal stuff).

Because I didn’t want to deal with upgrades or security issues, using something like Ghost or WordPress was out.

The new Samat Says is powered by Nikola, a static site generator. I looked at Jekyll, Blogofile, and Pelican, but Nikola seemed the easiest for me to understand and extend, and, most importantly, it has a vibrant, active community.

For a long time, I cared about owning my comments and not being tied to a 3rd party service. After dealing with all the spam my old Drupal site received, I now care quite a bit less. I’d like to move to a self-hosted, or better yet federated, commenting solution in the future but for the time being I’m using Disqus. I haven’t figured out how I will migrate old comments yet.

Worth mentioning: you may have noticed this isn’t an actual first post. I’m slowly migrating content (probably only the popular bits) from my old Drupal installation over to this Nikola-powered one.

Weekend irony with the University of Florida

Irony: this weekend (Apr 21–22), NPR’s Wait Wait... Don’t Tell Me! re-ran a segment with author Jack Gantos. In it, he makes a crack at the University of Florida:

I drove up to University of Florida. It looked just like my high school — a giant football facility with a small academic institution

Apparently, the University of Florida thought this was a compliment. Forbes reports (via) that the University of Florida has eliminated research & graduate work in its Computer Science department, while significantly increasing funding for athletics.

A clarification missing from the Forbes article: UF is eliminating research and graduate work in its computer science department, just like it did for its nuclear engineering department the year prior. The departments will remain as severely gimped teaching-only undergraduate departments… not unlike a glorified community college.

Keep it real University of Florida!

Spaceport America on OpenStreetMap


Spaceport America is “the world’s first purpose-built commercial spaceport”. Wonder what it looks like? You can now find it on OpenStreetMap, one of the many things I’ve been mapping in New Mexico’s barren & isolated Jornada del Muerto. I’ve indicated various Spaceport America structures, like the state-of-the-art Terminal Hangar Building and Spaceport Operations Center. I’ve yet to accurately locate Spaceport America’s vertical launch pad, which has been in use since 2007.

No, it’s not on Bing Maps, Google Maps, or any of the other Web mapping competitors—just in case you needed a reason why crowd-sourced geodata (or VGI) can’t be beat.

Want an aerial photo of Spaceport America? Over on Flickr I’ve a screencap from the USDA’s public-domain NAIP 2011 release, pretty much the only source for high-resolution imagery of the middle of nowhere.

See Spaceport America on OpenStreetMap