Pop OS/Ubuntu: Detecting USB Security Key Events

Security keys such as the Yubikey from Yubico have become much more popular in recent times, reflecting an increased focus on security best practices. Many people are already familiar with Multi-Factor Authentication (MFA), since many secure online services (e.g. banks, government portals) mandate the use of a physical device or one-time access codes before you can use their services. To learn more, please check out the Wikipedia article on MFA.

Since you’re here, you know what MFA is, so let’s look at what I needed to achieve. Simply:

Upon USB security key insertion or removal, detect that event and “do something” in response.

– My requirements 🙂

My environment:

  • Pop OS 21.04 Linux (based on Ubuntu)
  • Yubikey USB security key

What I’m not going to cover:

  • How to configure your Yubikey for use in Linux. To learn more, please see the official Yubico documentation
  • Which Linux distributions use or don’t use udev – that’s on you

Get Info About Security Key

For all this to work, we’ll rely on udev. Using the built-in tools that are provided with various Linux distributions, we first need to get some info about your security key. Your output will vary greatly depending on events happening in your system and also on the model of security key you are using.

  • Disconnect/remove your security key from your system
  • Open a terminal and start monitoring udev events:
udevadm monitor --property

Successfully starting the monitor will show this:

~ ❯
~ ❯ udevadm monitor --property
monitor will print the received events for:
UDEV - the event which udev sends out after rule processing
KERNEL - the kernel uevent

Note: In the output below, I’ve removed some info and replaced it with [removed].

  • Insert your security key and note the extensive output shown in your terminal window. The specific section looks like this on my system:
UDEV  [52954.592536] add      /devices/pci0000:40/0000:40:01.1/0000:41:00.0/0000:42:08.0/0000:48:00.3/usb9/9-4/9-4:1.0/0003:1050:0407.001C/input/input40/event6 (input)
  • Three things are of particular interest here:
    • ACTION is add
    • SUBSYSTEM is input
    • ID_MODEL is YubiKey_[some_stuff_here] – note down the value of this property now, as you’ll need it shortly
  • Remove your security key and again note the extensive output shown in the terminal. The relevant block looks similar to before but with one main difference:
    • ACTION is remove
  • Hit Ctrl-C to stop the udev monitoring

Now that we have the necessary info about our security key, we can proceed.

The “Do Something” Part

The “do something” part of this article will be different for everyone, meaning the scripts below are examples only. Here are the components of this setup:

  • Script called yubikey, located in my home directory:

#!/bin/bash
if [ "$1" == "removed" ]; then
    echo "Oi, Yubikey has been removed!" > /home/me/yubikey_status.txt
else
    echo "K, Yubikey found.  Secure all the things!" > /home/me/yubikey_status.txt
fi
  • Text file named yubikey_status.txt in my home directory (initially empty)
  • The watch command can be used to monitor file contents:
watch cat ~/yubikey_status.txt

Reacting to a security key removal event will likely be accompanied by an action such as locking the screen, sending an email or displaying a message. For now, this is what we’ll see:

Every 2.0s: cat /home/me/yubikey_status.txt


Detecting Security Key Insertion/Removal

udev will be used to handle the USB insertion/removal events, using custom udev rules stored in /etc/udev/rules.d.

Privilege escalation via sudo (or similar) will be required to create and edit these files.

To learn more about how these files are named, please see the official udev documentation.

  • Create a file named /etc/udev/rules.d/90-yubikey.rules, containing the following content:
ACTION=="add", SUBSYSTEM=="input", ENV{ID_MODEL}=="[ID_MODEL_COPIED_EARLIER_GOES_HERE]", RUN+="/home/me/yubikey inserted"
ACTION=="remove", SUBSYSTEM=="input", ENV{ID_MODEL}=="[ID_MODEL_COPIED_EARLIER_GOES_HERE]", RUN+="/home/me/yubikey removed"

These udev rules complete the following actions:

  • Watches for both add and remove events
  • Matching events must occur in the input subsystem
  • The matching events must occur for devices matching the specified ID_MODEL
  • When a matching event occurs, the ~/yubikey script executes
    • inserted command-line parameter if the device was added
    • removed command-line parameter if the device was removed
  • The ~/yubikey script will write content to the ~/yubikey_status.txt file depending on the value of the $1 parameter i.e. the first parameter on the command line
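One gotcha worth noting: udev executes RUN+= programs as root with a minimal environment, so the handler script must have its executable bit set. A quick sketch (the path matches the example script above):

```shell
# make the handler script executable so udev can run it
chmod +x /home/me/yubikey
```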

Reloading udev Configuration

We can dynamically reload the udev rules by running the following command:

sudo udevadm control --reload


  • Run the watch command above if it’s not already running
  • Insert your security key
  • If everything has been configured correctly and your udev rules have been reloaded, you’ll see the watch command output change as follows:
Every 2.0s: cat /home/me/yubikey_status.txt

K, Yubikey found.  Secure all the things!
  • Conversely, removing the security key will run the script again; as the contents of the ~/yubikey_status.txt file change, the output updates as follows:
Every 2.0s: cat /home/me/yubikey_status.txt

Oi, Yubikey has been removed!

Wrapping Up

That’s all there is to it. Hopefully this was useful for someone.


CentOS 7, DNS and firewalld

Yesterday I needed to set up a local DNS server. Sure, I could’ve used Windows but, mostly for licensing reasons, decided that using a free OS would be a much better idea. For various reasons I chose to go with CentOS 7, the latest version of the CentOS Project Linux distribution.

This particular server is virtual and currently provided by Oracle VirtualBox, another free product. You can see a pattern here, right? 🙂

After installing CentOS, configuring the interfaces & network, then installing & configuring BIND, I found that DNS name resolution worked perfectly while logged into the server itself. However, the CentOS DNS server would not respond to any DNS requests from any other host. I had configured BIND to allow queries from any IP address on my local network and listen on all interfaces. At this point, I thought it should be working.

The first thing I checked was the configuration:

Check BIND configuration
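The check itself is BIND’s bundled syntax checker (a sketch, assuming the default config path):

```shell
# exits silently (status 0) when /etc/named.conf has no syntax errors
named-checkconf /etc/named.conf
```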

If that command returns no output, the /etc/named.conf configuration file contains no syntax errors.

The next check was to verify the syntax of the zone configuration files:

Check BIND zone files
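As a sketch, with a hypothetical zone called example.local (substitute your own zone name and zone file path):

```shell
# confirms the zone file loads cleanly and reports its serial number
named-checkzone example.local /var/named/example.local.zone
```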

Yes, I’m logged in as root. Don’t panic, it’s just for testing.

Anyway, those commands confirm that BIND is configured properly, including the zone files.

I had already checked to make sure iptables wasn’t running:

Check iptables services
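On CentOS 7 that check is a systemd query (unit names assumed here; they vary by setup):

```shell
# neither unit should be loaded or active on a stock CentOS 7 install
systemctl status iptables.service
systemctl status ip6tables.service
```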

Nope, no iptables services running.

What I didn’t realise is that, as of Red Hat Enterprise Linux (RHEL) 7 and CentOS 7, iptables interaction is provided by the dynamic firewall daemon, firewalld. Sure enough, firewalld was definitely running:

Check firewalld service
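Either of the following will confirm that (standard firewalld tooling):

```shell
# prints "running" when the firewalld daemon is active
firewall-cmd --state
systemctl status firewalld.service
```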

Here’s the Red Hat page that confirms it:

Red Hat Enterprise Linux 7 iptables & firewalld

So, what to do? firewalld had to be configured to permanently allow requests on UDP port 53, followed by reloading the firewalld configuration.

Add firewalld rule
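Assuming the default public zone, the commands look like this:

```shell
# permanently open UDP port 53 in the public zone, then reload to apply
firewall-cmd --zone=public --add-port=53/udp --permanent
firewall-cmd --reload
```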

Update: As pointed out by certdepot in the comments below, requests on TCP port 53 should also be allowed in the event that the DNS request or response is greater than a single packet, for example responses that have a large number of records, many IPv6 responses or most DNSSEC responses.

firewall-cmd --zone=public --add-port=53/tcp --permanent

After that, requests from my local laptop to the BIND server running on the CentOS system worked as they should:

Test nslookup
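A sketch of the verification, with a hypothetical hostname and server IP (substitute your own):

```shell
# from the laptop, query the CentOS BIND server (192.168.0.10 here) directly
nslookup host1.example.local 192.168.0.10
```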


During this process I also had the help of friend & total guru – Tim Philips a.k.a @mr_timp. Thanks, Tim! 🙂


PHP 5.4 + suhosin = FAIL

I recently modified my dev environment to allow PHP 5.4 apps, something that involved adding a bunch of Debian Linux “wheezy” sources to my aptitude configuration. No problem in itself, but it did totally wreck my web server’s ability to load the suhosin extension (PHP 5.4 doesn’t support it).

This wasn’t a biggie until I hit up one of my Laravel apps and was greeted with a lovely message that said the following:

 Unhandled Exception Message: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/' - /usr/lib/php5/20100525/ cannot open shared object file: No such file or directory Location: Unknown on line 0 

Not nice. Looking through php.ini, apache2.conf and httpd.conf showed nothing and, since I’m not particularly familiar with suhosin itself, I couldn’t figure out where the extension was being loaded.

 locate suhosin 

That showed me that there was a file on my system called suhosin.ini, located in /etc/php5/conf.d, so I fired up nano (I’m not cool enough to use vi) and, sure enough, there at the top was the line loading the extension.
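Assuming the stock Debian suhosin package, that line is the standard extension loader:

```ini
extension=suhosin.so
```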

I commented out that line and restarted the Apache service:

 service apache2 restart 

… and now Laravel doesn’t throw any horrible unhandled exceptions about suhosin being broken.

Sorted. 🙂


Debian LAMP dev server (and that’s it)

Another one? Really?

I’ll be the first to admit that there is a huge number of documents, websites, tutorials and guides already available for what I’d say is one of the most common questions from web developers: “How do I set up my own Linux web development server?” Read that carefully – development web server. The steps below will fall pretty far short of the mark if you need a production web server. I might cover that later in another article but not in this one.

I’ve built dozens of Linux web servers over the years. Some of them have been for testing, some of them have been for production. The intriguing thing is that, out of all those servers, I don’t think I’ve ever followed the same guide twice. Why do I need a guide if I’ve built so many? Partly because my memory is crap and partly because I don’t build them often enough to remember the steps.

So, here it is – the “Digital Formula No-Bullshit Guide To Setting Up A Debian LAMP Server.”

Question: Why Debian?
Answer: Many reasons, for example:

  • Like most distributions, Debian is easily downloadable (I choose to download distribution ISOs via BitTorrent).
  • There’s so much documentation available that guides like this aren’t even necessary … errr … 🙂
  • From the reading I’ve done, Ubuntu is based on Debian … unless you need something only Ubuntu can provide, why not go straight to the Big Daddy distribution that the other guys copied off? [ start Linux distro war here ].
  • I’ve heard people say, countless times, that for security, Debian is the bee’s knees. Is it true? Until I hear otherwise, I’m rolling with it.
  • It’s just easy. If someone that’s been administering enterprise Windows infrastructure for 15+ years can handle Debian, believe me, anyone can.

Note that this is about as far as you can get from an article about how to manage Debian itself. For that reason, I’m going to assume that you know the basics like CLI navigation, editing text files and using the man command. So, let’s go (you’ll need to run the below commands either as root, or via the sudo command).

The basic foundation

  1. Install Debian Linux. Bet you didn’t see that coming! During installation, try to make sure you enable a network mirror if possible – it’ll save time later. Note that during installation of the latest version (6.0.5 as I write this) there is an option to configure your build as a web server, etc. – I don’t use these; I only select the option to install the core utilities/services.
  2. Once Debian is installed, booted and you’re logged in, disable the CD/DVD source by editing /etc/apt/sources.list and commenting out the line(s) that start with deb cdrom:. This is optional, but I think it makes things easier later as it ensures your update processes won’t try and use the media you installed from, only network sources.
  3. Update your package lists by running aptitude update.
  4. Upgrade the currently installed packages by running aptitude safe-upgrade. If you know what you’re doing, you could run aptitude full-upgrade but, as the man pages say, it is more likely to perform unwanted actions. For those that have done this sort of thing before, this is the same as running aptitude dist-upgrade – the command just got renamed.
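Steps 2–4 can be sketched as shell commands (run as root; the sed pattern assumes the default deb cdrom: line format, so check your sources.list before editing blindly):

```shell
# step 2: comment out the CD/DVD source so only network mirrors are used
sed -i 's/^deb cdrom:/# deb cdrom:/' /etc/apt/sources.list

# steps 3 and 4: refresh package lists, then apply safe upgrades
aptitude update
aptitude safe-upgrade
```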

A couple more BASH prompt examples

Yesterday I published an article called "Pimp My Prompt … like Paul Irish" in which I showed how to make your BASH prompt similar to the one used by Paul Irish.

I also included a couple of sample prompts that you could use for reference so I figured I’d write a follow-up article that shows what they look like.  So, without any further ado, here they are.

Example 1 – Green username, blue host, magenta working directory, white git branch:

PS1='${GREEN}\u${BLACK}@${CYAN}\h:${MAGENTA}\w${WHITE}`__git_ps1 " (%s)"`\$ '

The example above uses the short colour codes outlined in the original article and looks like the screenshot below.

Green username, blue host, magenta working directory

Example 2 – Blue user and host, magenta working directory, white git branch:

PS1='\[\033[0;36m\]\u@\h\[\033[01m\]:\[\033[0;35m\]\w\[\033[00m\]\[\033[1;30m\]\[\033[0;37m\]`__git_ps1 " (%s)"`\[\033[00m\]\[\033[0;37m\]\$ '

The example above uses the built-in colour codes but can be harder to read.  It looks like the screenshot below.

Blue user and host, magenta working directory, white git branch

Don’t forget to read the original article, "Pimp My Prompt … like Paul Irish", if you’re unsure about how to enable the __git_ps1 and short colour code commands.


Pimp My Prompt … like Paul Irish

There’s been a lot of talk lately about those cool colours that Paul Irish uses in his videos.  If you don’t know who Paul Irish is and you dabble in a bit of web design … well … shame on you!  😉  The things referenced in this article, including the prompt, can be found in Paul’s YouTube video entitled "The Build Script of HTML5 Boilerplate: An Introduction".  Anyway, I think the prompt colours are pretty useful, especially if you’re speedy at navigating your way around the CLI/shell in Linux or OS X and want to see where you are very quickly.  They help identify where you are in the file system, whether or not your current working directory is a git branch and, depending on what options you set, whether or not there are untracked files present, etc.

Here’s what Paul’s prompt looks like (screenshot from the video linked above).

Paul Irish’s OS X prompt

I believe Paul is using iTerm, as am I.  Combined with the stuff below, my iTerm configuration looks like this:

iTerm Configuration

Pimp my prompt!

Ok, so you want your prompt to look like that?  It’s not that hard, actually.  You have to grant me a little latitude, though, as I’m guessing what Paul uses the various CLI prefix symbols for.  In this example I’m using an "o" to indicate working outside a git branch and a "+" to indicate that the current working directory is a git branch.

This only works if you’re running an OS X or Linux shell – I’m sure you can do it with Windows but I’m not going to cover that here.

Step 1

If you haven’t got one already, open or create a file called .bash_profile in your home (~) folder.  You might be asking why not use .bashrc?  We want the command we’re adding to apply to interactive login shells – .bashrc only applies to interactive non-login shells.  There is an exception to this if you’re using OS X (like me) – .bash_profile is run for each new window, by default.

If a file with that name already exists, make sure you don’t remove anything from it that you want to keep.  From here I’m going to explain what each part of my .bash_profile does.  The complete file will be shown at the end, including a couple of extra bits.

Note: If you’re going to include the git parts, you’ll need to download the git source and put the file .git-completion somewhere (mine is in my home directory).


Using rsync to synchronise folders

I keep most of my primary files on my iMac. Needless to say, this would be a pretty dumb move if I kept them there, and there only. However, I’m a paranoid freak when it comes to data, having lost 7 years worth of work in 1 go a few years back – I’m not letting that happen again. I use an Apple Time Capsule and Apple Time Machine for my primary backups and it’s saved me once or twice already.


Anyway, because I also have a Macbook Pro, that means I’m sometimes working where my iMac isn’t. I needed a solution to synchronise a specific set of folders on my iMac and my Western Digital portable USB drive. There are a ton of ways to do this but because I’m a geek I chose to use the command line (Terminal, in OS X) and write a small script. I’m using rsync for this – here’s the script, should you need something like it yourself.

The Script

Synchronise with rsync:


# sync the files
rsync ~/Data/Solutions /Volumes/Backup --recursive --verbose --delete --progress --human-readable --exclude="tmp*"

# unmount the external drive
sudo diskutil unmount /Volumes/Backup

What does it do?

Simple! The script above does the following.

  • Synchronises all files in the sub-folder Data/Solutions in my home folder onto my USB drive (called ‘Backup’). All files matching the file pattern tmp* are skipped.
  • Ejects the USB drive when finished.
  • All sub-folders are included.
  • Files that exist in the destination but not in the source are deleted.
  • Progress of each file is displayed as it is copied.
  • All output is in human-readable format … because I’m human.

The eject part prompts for my password before ejecting because the diskutil unmount command is a privileged operation and requires elevated rights before it will be allowed.

Simple, but useful. 🙂


Slow FTP running ProFTPD on Parallels for Mac

My current web development server is a Debian GNU/Linux 5.0.5 machine running under Parallels for Mac (which I’m trialling right now). It was all working great until I tried to access the FTP server running on it from my Macbook Pro over the Airport. It worked but maaaaan was it slow! The Debian server is running ProFTPD, one of the many free FTP server options out there.

Thankfully the solution to the slow FTP access is very simple.  There isn’t a DNS server running here as it’s not something I need for local development (although it would’ve solved this problem, too).  The problem was caused by reverse DNS lookups failing – this is something ProFTPD has enabled by default if you use apt-get install proftpd to install ProFTPD.

Turn off Reverse DNS lookups

Turning off reverse DNS lookups in ProFTPD is as simple as checking /etc/proftpd/proftpd.conf for the following lines.

  • IdentLookups off
  • UseReverseDNS off

The IdentLookups line may already be set to ‘off’ – that’s fine and you can leave it that way.  The UseReverseDNS value may not exist at all – if that’s the case, just add it to the proftpd.conf file and set the value to ‘off’ by following the format of the other configuration lines:

UseReverseDNS in ProFTPD
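With both directives in place, the relevant proftpd.conf lines look like this:

```
IdentLookups off
UseReverseDNS off
```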

Restart ProFTPD

While logged in with appropriate privileges, the following command (from the Terminal) should restart ProFTPD.  Rebooting your Debian GNU/Linux server will also restart ProFTPD, but the whole point of running Linux is so you hardly ever have to reboot … right?

Restart ProFTPD:

/etc/init.d/proftpd restart

Obviously your own configuration may differ so only follow these instructions if it won’t break everything – be warned!