
PHP 5.4 + suhosin = FAIL

I recently modified my dev environment to allow PHP 5.4 apps, something that involved adding a bunch of Debian Linux “wheezy” sources to my aptitude configuration. No problem in itself, but it totally wrecked my web server’s ability to load the suhosin extension (PHP 5.4 doesn’t support it).

This wasn’t a biggie until I hit up one of my Laravel apps and was greeted with a lovely message that said the following:

 Unhandled Exception Message: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/' - /usr/lib/php5/20100525/ cannot open shared object file: No such file or directory Location: Unknown on line 0 

Not nice. Looking through php.ini, apache2.conf and httpd.conf showed nothing and, since I’m not particularly familiar with suhosin itself, I couldn’t figure out where the extension was being loaded. So, I went looking:

 locate suhosin 

That showed me that there’s a file on my system called suhosin.ini, located in /etc/php5/conf.d, so I fired up nano (I’m not cool enough to use vi) and, sure enough, right at the top was the line loading the extension.

I commented that line out and restarted Apache:

 service apache2 restart 

… and now Laravel doesn’t throw any horrible unhandled exceptions about suhosin being broken.
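If you hit the same thing, the whole fix boils down to one sed command. Here’s a sketch that demonstrates it on a throwaway copy – the file contents below are hypothetical (the exact line from my suhosin.ini isn’t shown above), but a typical suhosin.ini starts with an extension = directive:

```shell
# demo on a throwaway copy -- for real, point sed at /etc/php5/conf.d/suhosin.ini
cat > /tmp/suhosin.ini <<'EOF'
extension = suhosin.so
suhosin.session.encrypt = On
EOF

# prefix the extension line with ";" to comment it out
sed -i 's/^extension[[:space:]]*=/;&/' /tmp/suhosin.ini

grep '^;extension' /tmp/suhosin.ini
```

Run the sed against the real file (as root), then restart Apache as shown above.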

Sorted. 🙂


Debian LAMP dev server (and that’s it)

Another one? Really?

I’ll be the first to admit that there is a huge number of documents, websites, tutorials and guides already available for what I’d say is one of the most common questions from web developers: “How do I set up my own Linux web development server?” Read that carefully – development web server. The steps below will fall pretty far short of the mark if you need a production web server. I might cover that later in another article but not in this one.

I’ve built dozens of Linux web servers over the years. Some of them have been for testing, some of them have been for production. The intriguing thing, though, is that out of all those servers, I don’t think I’ve ever followed the same guide twice. Why do I need a guide if I’ve built so many? Partly because my memory is crap and partly because I don’t build them often enough to remember the steps.

So, here it is – the “Digital Formula No-Bullshit Guide To Setting Up A Debian LAMP Server.”

Question: Why Debian?
Answer: Many reasons, for example:

  • Like most distributions, Debian is easily downloadable (I choose to download distribution ISOs via BitTorrent).
  • There’s so much documentation available that guides like this aren’t even necessary … errr … 🙂
  • From the reading I’ve done, Ubuntu is based on Debian … unless you need something only Ubuntu can provide, why not go straight to the Big Daddy distribution that the other guys copied off? [ start Linux distro war here ].
  • I’ve heard people say, countless times, that for security, Debian is the bee’s knees. Is it true? Until I hear otherwise, I’m rolling with it.
  • It’s just easy. If someone that’s been administering enterprise Windows infrastructure for 15+ years can handle Debian, believe me, anyone can.

Note that this is about as far as you can get from an article about how to manage Debian itself. For that reason, I’m going to assume that you know the basics like CLI navigation, editing text files and using the man command. So, let’s go (you’ll need to run the commands below either as root or via sudo).

The basic foundation

  1. Install Debian Linux. Bet you didn’t see that coming! During installation, try to make sure you enable a network mirror, if possible – it’ll save time later. Note that during installation of the latest version (6.0.5 as I write this) there is an option to configure your build as a web server, etc. – I don’t use these. I only select the option to install the core utilities/services.
  2. Once Debian is installed, booted and you’re logged in, disable the CD/DVD source by editing /etc/apt/sources.list and commenting out the line(s) that start with deb cdrom:. This is optional, but I think it makes things easier later as it ensures your update processes won’t try and use the media you installed from, only network sources.
  3. Update your package lists by running aptitude update.
  4. Upgrade the currently installed packages by running aptitude safe-upgrade. If you know what you’re doing, you could run aptitude full-upgrade but, as the man pages say, it is more likely to perform unwanted actions. For those that have done this sort of thing before, this is the same as running aptitude dist-upgrade – the command just got renamed.
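For reference, steps 2–4 can be scripted. The sketch below demonstrates the sources.list edit on a sample file (the entries are made up); the aptitude commands are left as comments since they need root and a live system:

```shell
# demo on a sample sources.list -- the real file is /etc/apt/sources.list
cat > /tmp/sources.list <<'EOF'
deb cdrom:[Debian GNU/Linux 6.0.5 _Squeeze_]/ squeeze main
deb http://ftp.debian.org/debian/ squeeze main
EOF

# step 2: comment out the cdrom source(s)
sed -i 's/^deb cdrom:/# &/' /tmp/sources.list

# steps 3 and 4, run as root against the real system:
#   aptitude update
#   aptitude safe-upgrade

grep '^# deb cdrom:' /tmp/sources.list
```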

JavaScript refactoring, the n00b way

I’m a bit of a n00b when it comes to doing stuff in JavaScript. So naturally, when I find a way of introducing a bit of efficiency into a JavaScript situation, I’m more than happy to throw it into the mix. My only disclaimer for the below ramblings is that if you’re a developer that knows anything more than the basics, everything I’m about to write will be nothing more than a “Yeah, no shit, genius” kinda thing. So, go easy on me, mmmkay? 🙂

So anyway, my current attempt at web hackery requires a bit of JavaScript, AJAX + the usual suspects that often go along with those things i.e. jQuery Core, jQuery UI, Sass … you get the picture. Nothing unusual going on here … did I mention that I love jQuery?

I tend to write stuff, make it work, carry on to other things and then come back later, look at a block of code and throw up in my mouth a little bit when I see how freakin’ ugly and inefficient it is. I mean, seriously, why would you duplicate an almost identical block of JavaScript eleven times when you can do what a real developer would do, and write it properly in the first place?

This example is super-obvious and really aimed at people, like me, who don’t do this stuff professionally but like to learn better ways of doing things. Firstly, here’s a snippet of what I had before.

 $('button.delete').button( { icons: { primary: 'ui-icon-circle-minus' }, text: false }); 

I had that bit of jQuery UI repeated a bunch of times, and had to add another copy whenever a new element came up that required button-ifying … like that word? It’s new. 🙂

Needless to say, that’s horribly inefficient as it requires copy/paste/edit for every occurrence.

Not too long ago, I followed along all doe-eyed while Jeffrey Way (@envatowebdev) taught my fellow n00bs and me how to do things properly with jQuery. Objects came up, and a revelation happened. For me, anyway.

So now, instead of duplicating that block of code (the one up there a bit) many times over, I’ve got all my button definitions in a single object which is then looped over, with the relevant properties turned into the equivalent of the eleven blocks of rubbish I had before. I’m the first to admit that there will definitely be a better way of doing this, still, but for now I’m happy with the changes.

Here’s how it looks now:

 var buttons = {
     b1 : { id : 'button.delete', icon : 'ui-icon-circle-minus', text : false },
     b2 : { id : 'div#password-submit input', icon : 'ui-icon-check', text : true },
     b3 : { id : 'div#profile-submit input', icon : 'ui-icon-check', text : false }
 };

 // loop over the definitions and button-ify each element
 $.each( buttons, function( key, button ) {
     $( button.id ).button( { icons: { primary: button.icon }, text: button.text } );
 });

Plug a new button-ifiable element into the object and it’ll automatically be processed next time the script runs (and isn’t cached – we’ve all fallen into that trap).

The result is exactly the same as I was getting before, but, in my opinion, could be called more elegant, efficient, easier to read, whatever.



VMware vSphere 5 + Remote SQL Server 2012

Most VMware administrators will know that it’s possible to run VMware vCenter on one server and have the vSphere SQL database on another server.  This is a perfectly acceptable configuration and, in anything but smaller environments, is probably the best thing to do.

Anyway, the latest release of Microsoft SQL Server, SQL Server 2012, was released during the first half of 2012 and is, of course, supported for use with VMware vSphere.  That means that your vCenter server has to be able to connect to the remote instance of SQL Server 2012 – here’s where things get tricky.

At the most basic level, connecting to the server is easy, but VMware vSphere 5 running on a 64-bit version of Windows Server 2008 requires that the vCenter server have a 64-bit DSN (data source name) configured to manage the connection.  Easy, right?  You’d just thrash ahead and install the SQL Server 2012 management tools or some other Microsoft package containing the latest SQL Native Client on the vCenter server … yeah?  If you do, vCenter will not be able to connect to the remote SQL Server 2012 instance.  As of the date of this article, May 31st 2012, the latest version of the SQL Native Client is 11.0 – and this is where the problem lies.

There’s something in version 11.0 of the SQL Native Client that prevents vCenter from being able to use the 64-bit DSN you create.  I don’t know or care what the problem is, but I spent the best part of 3 hours trying to get it working, including trawling the VMware community forums, running c:\windows\system32\odbcad32.exe, c:\windows\syswow64\odbcad32.exe and all manner of other things.

The final thing I tried did the trick and that was to simply remove version 11.0 of the SQL Server Native Client and install version 10.0.  Version 10.0 of the SQL Native Client is the version that ships with SQL Server 2008 and the SQL Server 2008 Management Studio.  In my setup, I need to be able to access the remote SQL Server 2012 database from the vCenter server so installing the SQL Server 2008 Management Studio didn’t present an issue.

At the date of writing this article, May 31st 2012, the SQL Server 2008 Management Studio can be downloaded from Microsoft’s download site – this article relates specifically to the 64-bit version of Windows, so please make sure you download the 64-bit version of the management studio.

Hopefully that long-winded explanation helps someone.


Synchronise Coda 2 Configuration with Dropbox

Coda 2

Coda 2, one of my favourite code editors, was released on May 24th 2012 to a somewhat underwhelming reception, in my opinion.  To be 100% honest, I’m a bit disappointed, not for product or feature reasons, but because of some of the responses people have come up with while testing and using a brand-new product.

I’m not going to review Coda 2 myself – plenty of other websites have already done that – but I am going to address one of the questions that seems to have come up a few times, despite Coda 2 still being a relative infant as I write this.

What question?

Synchronisation.  By that I mean the ability to maintain the same settings between multiple Macs running Coda 2.  The first thing I need to make clear is that if you purchase Coda 2 from the Mac App Store, iCloud synchronisation is already built in but, if you buy Coda 2 directly from Panic, you can’t use iCloud to synchronise your configuration.  Panic’s own FAQ page has this to say about it:

At the moment, there is only one difference between the two versions: the Mac App Store version will support iCloud syncing of Sites and Clips, and the direct version will not. This is a restriction imposed by Apple.

So we’re screwed, right?

Well, no, and if we’re a little bit smart about it, the solution is relatively simple – Dropbox.  Given that OS X stores most application configuration in well-known locations like ~/Library/Application Support and ~/Library/Preferences, synchronising those directories with Dropbox is even easier than on competing operating systems that use things like the *gulp* registry.  By the way, in case you’re not sure, ~ means your home directory …

Others have written articles about how some of Coda’s competitors can be synchronised using Dropbox and this solution is really no different – all I’ve done is automate the process a bit.


The trick is to move the following bits to Dropbox:

  • The Coda 2 main configuration directory
  • The Coda 2 recent files list
  • The Coda 2 user preferences file

No really, how?

If you’re familiar with the term “symbolic link”, you’ll know what’s coming.  If you’re not, you can run the script below and everything will be done for you.  That said, you should be familiar with how to grab a script and, if necessary, make it executable – sorry but this isn’t the right place for me to explain how to do that (hint: chmod +x script_name).  Besides, I’d imagine readers of this article are relatively technical anyway, right?  😉

Anyway, grab the script below, edit the paths at the top if you need to, then run it.  A couple of warnings, though:

  • Before running the script on any Mac, please make sure Coda 2 has been run at least once
  • The first Mac you run this on should be the one where Coda 2 is configured how you’d like it
  • On subsequent Macs, you must make sure you edit the script and change the first_run variable to FALSE before you run it … if you don’t, the existing Coda 2 configuration in Dropbox will be overwritten

The script will do a few checks while running, move your Coda 2 configuration to Dropbox, delete the local configuration and then create some symbolic links to the new Dropbox versions.  As far as Coda 2 is concerned, nothing has changed – it still operates as if the files are in the same place they’ve always been.
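I won’t reproduce the full script here, but the core move-then-symlink trick is easy to sketch. This demo uses throwaway paths under /tmp; on a real Mac you’d use the Coda 2 paths under ~/Library and a folder inside ~/Dropbox:

```shell
# demo of the move-then-symlink trick, using throwaway paths
SRC="/tmp/coda-demo/Library/Application Support/Coda 2"
DEST="/tmp/coda-demo/Dropbox/Coda 2"

mkdir -p "$SRC" "$(dirname "$DEST")"
echo "my settings" > "$SRC/config"

mv "$SRC" "$DEST"      # first run: seed Dropbox with this Mac's config
ln -s "$DEST" "$SRC"   # link back; Coda 2 still sees its usual path

cat "$SRC/config"
```

On subsequent Macs the mv would be replaced with an rm of the local config, so the Dropbox copy wins – which is exactly why the first_run flag matters.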


HTML5 Boilerplate – Quick setup script

Recently I’ve been setting up new websites left, right and center.  These have been almost exclusively for testing purposes and, since I’ve been basing them all off Paul Irish’s HTML5 Boilerplate, I needed a quick way to get a new site template up and running.

Enter automation

I set about making a quick script to automate the process of setting up a new website template based on the boilerplate (which I’ll refer to as h5bp from here on).  The steps it goes through are as follows (Bash-specific but could be retro-fitted to Windows without too much trouble).

  • Checks to see if git is installed.  This is required as the latest h5bp version is available from GitHub.
  • If git isn’t installed, the script will fail gracefully and provide instructions on how to install git.
  • Checks for a single parameter.  If found, the script will use this as the name of the directory to create the project in, relative to “.”
  • If no parameter is specified, the destination folder is set to a folder named after the current date and time.
  • Checks to see if the destination folder exists (almost impossible if using the date and time method).  If it’s not found, it is created.
  • Initialises a new local git repository in the new project directory.
  • Clones the latest h5bp version into the project directory.
  • Clones the latest h5bp ant build script into the project directory.
  • Cleans up the directory structure a bit, e.g. renames the ant build script directory from “ant-build-script” to “build”.  I’ve found this to be a required step before the build script will run properly.
  • Converts the h5bp markdown documentation to HTML, if pandoc is found on the local system.
  • Does a bit more cleanup and moves some non-critical files into a directory called “_exclude”.
  • Configures some global git options, i.e. the user name and user email address.
  • Adds all the project files to the new git repository.
  • Does a full commit of the new files.
  • Finishes up with a small message showing some steps that may be helpful to new users.

Care to share?

Sure.  If you’re interested in the script, here it is in its entirety.  On my system I have it installed by putting it into ~/.bash_profile inside a function called h5bp.  If you want to use the script in a stand-alone way, that works fine, too.  For the purposes of this article, I’ll show the code in its function-based form.

The script

 # function to create a new website based on the HTML5 Boilerplate
 # (the git clone URLs were missing from the original listing; the
 # h5bp repositories on GitHub are assumed here)
 function h5bp {
     if [[ -z `which git` ]]; then
         echo " The git binaries were not found on this system and are required before this script will run properly."
         echo " If you are sure git is installed, please investigate why the 'git' command wasn't found, then try again."
         echo " To do this, you can try running \"echo \`which git\`\" (without the double quotes but keep the single ones!)."
         echo " If you don't have git installed, please download and install it by following the instructions on the git website."
     else
         echo "git binary found at `which git`."
         echo "Creating new website."
         if [ -z $1 ]; then
             newFolderName=`date +%Y-%m-%d_%H-%M-%S`
             echo "No folder name specified, using current date and time: $newFolderName."
         else
             newFolderName=$1
             echo "Using '$newFolderName' as destination folder."
         fi
         if [ ! -d $newFolderName ]; then
             echo "Destination folder not found, creating it now."
             mkdir $newFolderName
             cd $newFolderName
         else
             cd $newFolderName
         fi
         echo "Initialising new git repository in `pwd`."
         git init --quiet
         echo "Getting latest h5bp build from github."
         git clone https://github.com/h5bp/html5-boilerplate.git
         echo "Getting latest h5bp build script."
         git clone https://github.com/h5bp/ant-build-script.git
         echo "Moving files into place."
         mv html5-boilerplate/* ./
         rm -Rf html5-boilerplate/
         mv ant-build-script/ build/
         mkdir _exclude/
         if [[ -z `which pandoc` ]]; then
             echo "Unable to convert the readme to HTML for viewing. You can fix this by installing 'pandoc'."
         else
             echo "Converting markdown documentation to HTML."
             pandoc readme.md -o readme.html
             mv readme.html _exclude/
         fi
         mv readme.md _exclude/
         echo "Setting git global options."
         git config --global user.name "Put your name here"
         git config --global user.email "Put your email address here"
         echo "Adding new files to new git repository."
         git add *
         echo "Running initial project commit."
         git commit -m "Initial h5bp project creation" --quiet
         echo ""
         echo "You should edit humans.txt in the '$newFolderName' directory before going any further."
         echo "Also, if you're not familiar with how to use the h5bp, you should go through the readme file in the _exclude/ directory (readme.md, or readme.html if you have pandoc installed)."
         echo ""
         echo "Done!"
     fi
 }

Can we see it in action?

Of course you can.  I’ve recorded the script in action and put it up on YouTube – see below.  To view the video in HD so that your viewing experience doesn’t suffer from small-video-itis, I’d highly recommend watching it on the YouTube website (opens in a new window).


Set network interface IP address with Powershell

The Problem

While setting up our partner technology centre recently, I found myself switching back and forth between networks so often that I was constantly having to change my laptop’s IP address. For reasons that are outside the scope of this article I’m unable to use the option for an alternate network configuration.

Powershell to the rescue

The solution? Two small PowerShell scripts – one to set up my network connection for our corporate LAN, the other for the PTC. The scripts are shown below – feel free to use them in any way you like.

PTC configuration script

 $index = (gwmi Win32_NetworkAdapter | where {$_.NetConnectionID -eq "Local Area Connection"}).InterfaceIndex
 $NetInterface = Get-WmiObject Win32_NetworkAdapterConfiguration | where {$_.InterfaceIndex -eq $index}
 $NetInterface.EnableStatic("", "")   # the static IP address and subnet mask go here
 $NetInterface.SetDynamicDNSRegistration($false)

Corporate LAN configuration script

This script just resets the network adapter back to DHCP

 $index = (gwmi Win32_NetworkAdapter | where {$_.NetConnectionID -eq "Local Area Connection"}).InterfaceIndex
 $NetInterface = Get-WmiObject Win32_NetworkAdapterConfiguration | where {$_.InterfaceIndex -eq $index}
 $NetInterface.EnableDHCP()
 $NetInterface.SetDynamicDNSRegistration($true)

Extra stuff

Although I don’t need them in my PTC environment (it’s internal only, with no internet access), you can also use the snippets below to add some extra functionality.

Set the default gateway:

 $NetInterface.SetGateways($gateway) # where $gateway is the gateway address (or an array of addresses)

Set DNS server search order:

 $NetInterface.SetDNSServerSearchOrder($dns) # where $dns is a single address string for one server, or an array of address strings for multiple servers

Enable dynamic DNS registration (e.g. in an AD environment):

 $NetInterface.SetDynamicDNSRegistration($true)


Getting Hazel for Mac to process subfolders

Back in February 2011 I wrote an article called Running HandBrakeCLI from Hazel that described how to setup Hazel for Mac to automatically run Handbrake from the command line. The idea of this is to run Handbrake automatically when a video file appears in a specific folder and convert the video to .m4v format suitable for iTunes.

This is a quick update to that post that shows how to setup Hazel to also process subfolders as, up until now, the Hazel configuration I had only processed a single level of folders.

The Hazel Rule

The configuration is pretty easy. Simply add a new rule at the top of your Hazel rules, making 100% sure it sits above the rule that automatically converts the video files.

The rule order looks like this:

Hazel rule order

The rule itself looks like this:

Hazel rule to process subfolders

Once that rule is in place and the order is correct, the following rule will run once a new subfolder is found in the appropriate location.


Dynamic HTML grid with ExpressionEngine


This article is going to be a long one so sit back, relax (if you can) and get ready to do some EE hacking … ok, not hacking, creation. 🙂


I’m going to make a few assumptions for this article, as follows.

  1. You know your way around the ExpressionEngine control panel and are familiar with terms like template, etc.
  2. You know what jQuery is and have an idea of what it is used for.

The Situation

Recently I’ve been doing some work on the redesign of the website for Erin King Photographer. Part of the redesign is the requirement for Erin to do 99% of the site management herself, without the need to refer back to me.

I realised that I’d need absolute control over the layout, styling & site architecture for this to work – that meant WordPress wasn’t an option for this version of Erin’s site. Now, I’m not saying WordPress couldn’t do the job (we still run the blog part of Erin’s site on WordPress). In my opinion, though, WordPress forces a shift in focus from design to content management – good for some but not what I wanted.

The Desired Layout

It looks simple (and is, when rendered), but this is the layout we’re trying to get to. Don’t forget this must be built & rendered dynamically, not hard-coded in the page’s markup.

Dynamic HTML grid for displaying products
The desired layout

The Decision

Some time ago I decided the new site should run on ExpressionEngine. The ability to control absolutely everything and yet still have the power of a full CMS made the decision pretty easy (plus, Digital Formula runs on ExpressionEngine).

*cough* I’ll leave this article where it is, but Digital Formula runs on WordPress, now. 😉

The Problem

The first issue I ran into (which I knew was coming) was that ExpressionEngine’s native file module, while powerful in its simplicity, does lack a couple of what you’d expect to be standard features. For example, it’s difficult to control the order that files are displayed, unless they all have a different date and time in the database. It’s also impossible to manually set the file order in the control panel and have that order obeyed during page rendering.


ExpressionEngine Plugin – Entry Age

Today I threw together a quick plugin as I couldn’t find an easy way of doing what I wanted without putting JavaScript in my templates.  The plugin, called ‘Entry Age’, allows you to specify a message that can be displayed if the entry being viewed is older than a certain age.

Why a plugin?

I’m assuming that someone else out there will find this useful.  I’m sure plugins like this exist already but, if they do, I can’t find them.

For an example of what the plugin looks like when it finds an outdated entry, please see this article: Move ExpressionEngine to a different server (and yes, the content there is actually outdated).

Can I get it?

Of course you can.  If you want to download Entry Age and try it out, you can head over to my GitHub page any time. 🙂 Entry Age 1.0 on GitHub.

Hope it helps someone.