Yesterday I needed to set up a local DNS server. Sure, I could’ve used Windows but, mostly for licensing reasons, I decided that using a free OS would be a much better idea. For various reasons I went with CentOS 7, the latest version of the CentOS Project Linux distribution.
This particular server is virtual and currently provided by Oracle VirtualBox, another free product. You can see a pattern here, right? 😉
After installing CentOS, configuring the interfaces & network, then installing & configuring BIND, I found that DNS name resolution worked perfectly while logged into the server itself. However, the CentOS DNS server would not respond to any DNS requests from any other host. I had configured BIND to allow queries from any IP address on my local network and listen on all interfaces. At this point, I thought it should be working.
The first thing I checked was the configuration:
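The check itself is BIND’s built-in syntax checker; the path below is the default on CentOS 7, so adjust it if your named.conf lives elsewhere:

```shell
# prints nothing when /etc/named.conf is syntactically valid
named-checkconf /etc/named.conf
```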
If that command returns no output, the /etc/named.conf configuration file contains no syntax errors.
The next check was to verify the syntax of the zone configuration files:
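The tool for that is named-checkzone. The zone name and file path below are examples from my lab, so substitute your own:

```shell
# prints the zone's loaded serial and "OK" when the zone file is valid
named-checkzone example.local /var/named/example.local.zone
```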
Yes, I’m logged in as root. Don’t panic, it’s just for testing.
Anyway, those commands confirm that BIND is configured properly, including the zone files.
I had already checked to make sure iptables wasn’t running:
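The check is just a status query – on CentOS 7 the old service command redirects to systemctl anyway:

```shell
# reports "Unit iptables.service could not be found." (or inactive) when
# the legacy iptables service isn't in use
service iptables status
```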
Nope, no iptables services running.
What I didn’t realise is that since Red Hat Enterprise Linux (RHEL) 7 and CentOS 7, iptables interaction is provided by the dynamic firewall daemon, firewalld. Sure enough, firewalld was definitely running:
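A quick status check confirmed it – look for “Active: active (running)” in the output:

```shell
# stock systemd status query; nothing here is specific to my setup
systemctl status firewalld
```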
So, what to do? firewalld had to be configured to permanently allow requests on UDP port 53, followed by reloading the firewalld configuration.
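On CentOS 7 that looks like this – note the --permanent flag, without which the rule disappears the next time firewalld reloads:

```shell
# allow DNS queries over UDP, then reload firewalld to apply the rule
firewall-cmd --permanent --add-port=53/udp
firewall-cmd --reload
```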
Update: As pointed out by certdepot in the comments below, requests on TCP port 53 should also be allowed in the event that the DNS request or response is greater than a single packet, for example responses that have a large number of records, many IPv6 responses or most DNSSEC responses.
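Allowing TCP as well is the same command with the protocol changed, followed by another reload:

```shell
# allow DNS queries over TCP too (large responses, DNSSEC, zone transfers)
firewall-cmd --permanent --add-port=53/tcp
firewall-cmd --reload
```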
I’ve got a few PHP libraries and classes that I use regularly and, since making the decision to base all my projects on the Laravel framework, have gotten fairly used to installing packages using Composer.
I battled occasionally with getting my libraries to work with Laravel, mostly because of the way I’d written them, but I figured the smart way forward would be to make them available to other Composer users. For PHP, this means structuring the libraries in a specific way and making them available through Packagist, “The PHP package archivist”. You probably know this already, but when you add a package to your project’s composer.json file, Packagist is where it gets downloaded from.
Anyway, after some searching around, I found a bunch of articles that covered how to do *some* of the things needed to create Packagist packages, but no decent end-to-end guide.
So here we go – I’m going to write one. Hopefully it comes in useful.
Pre-requisites & initial steps
Sign up for Github. It’s free until you need to go beyond their basic account limitations. Packagist packages are based on Github repository *releases* (or tags).
It’s an excellent idea to have the same usernames on Github and Packagist, if possible.
Creating the base package structure
Create the directory structure for your package. There’s no “right” way to do this, but I’m following the structure used by almost every Packagist package I’ve installed so far. The structure below is for the package we’ll be creating in this article.
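Here’s the layout, using the vendor and package names from this article:

```text
digitalformula/
└── hello-world/
    ├── .git/
    ├── .gitignore
    ├── src/
    │   └── DigitalFormula/
    │       └── HelloWorld/
    │           └── HelloWorld.php
    ├── LICENSE
    ├── README.md
    └── composer.json
```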
Here’s a basic explanation of what each part is.
digitalformula – your *vendor* name, your Github username and your Packagist username.
hello-world – the name of the package, following proper directory naming guidelines.
.git – Git repository information. This gets generated by git when you run ‘git init’ below.
.gitignore – list of files that should be ignored during git adds and commits. Each line should contain a filename, path or pattern.
src – the ‘root’ directory for your package. It’s also where we’ll tell the PSR-0 autoloader to look – we’ll get to that in a bit.
DigitalFormula – the first segment of the PHP namespace we’ll use. This must match your vendor name although at this point you’re free to use appropriate capitalisation, if you like.
HelloWorld – the second segment of the PHP namespace we’ll use.
HelloWorld.php – the PHP file containing our package’s code. Obviously more complex packages will have multiple files, but a single file will suit us just fine for now.
LICENSE – the license that covers usage of your package.
README.md – information about your package. Every Github repository should have one.
composer.json – probably the most important file here. It describes your package, what it’s for, who worked on it, dependencies and how PSR-0 autoloading will work. Unless you’re interested in the technicalities (I’m not), don’t worry about how PSR-0 works. It just works, I promise.
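For the package in this article, something along these lines would do the job – the description, license and author details are placeholders you should obviously change:

```json
{
    "name": "digitalformula/hello-world",
    "description": "A simple Hello World package",
    "license": "MIT",
    "authors": [
        {
            "name": "Your Name",
            "email": "you@example.com"
        }
    ],
    "require": {
        "php": ">=5.3.0"
    },
    "autoload": {
        "psr-0": {
            "DigitalFormula\\HelloWorld": "src/"
        }
    }
}
```

The psr-0 mapping is what tells Composer that classes under the DigitalFormula\HelloWorld namespace live beneath src/ – that’s the whole autoloading trick.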
For those wondering, the complete PHP namespace will be *DigitalFormula\HelloWorld*.
Hold on … namespace? What’s that? If you’re not already familiar with PHP namespacing, I highly recommend reading Dayle Rees’ excellent PHP Namespaces Explained article.
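To round the structure off, a minimal HelloWorld.php matching that namespace might look like this – the sayHello method is just an example, not part of any convention:

```php
<?php

namespace DigitalFormula\HelloWorld;

class HelloWorld
{
    /**
     * Return a friendly greeting.
     *
     * @return string
     */
    public function sayHello()
    {
        return 'Hello, world!';
    }
}
```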
Like most websites, my current project requires users to log in. For security reasons, user sessions time out after a while, leaving the user without an authenticated session. While this is fine for me, it’s not so good for users that may have entered something into a form, hoping to come back to it later. In some cases, the text may be lost – annoying, no doubt.
Since my PHP framework of choice is Laravel, it was relatively simple to have a background process check the user’s login status and, if their session has expired, show an informative message.
Since this article is just a quick demo, I’m going to check the user’s login status every 60 seconds and show the message if nobody is logged in. On the production site this will change, although the method will be the same.
Step 1 – Set up the JavaScript timer
Here is the tiny bit of JavaScript you need to set up the timer. Don’t forget to change the element you are binding this to in your application – I’m using <li>, as you can see.
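A sketch of the timer is below – the /session/status route and the li.login-status selector are names I’ve made up for this example, so change them to match your own application:

```javascript
var CHECK_INTERVAL_MS = 60 * 1000; // check every 60 seconds

// The message should appear whenever nobody is logged in.
function shouldShowMessage( loggedIn ) {
    return ! loggedIn;
}

function checkLoginStatus() {
    // In my app this is an AJAX call to a route that returns the
    // result of Sentry::check()
    $.get( '/session/status', function( data ) {
        if ( shouldShowMessage( data.loggedIn ) ) {
            $( 'li.login-status' ).show();
        }
    });
}

// only wire the timer up when jQuery is actually available
if ( typeof $ !== 'undefined' ) {
    setInterval( checkLoginStatus, CHECK_INTERVAL_MS );
}
```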
For the functions above, you may need to alter the way they are called if you aren’t using Laravel in your projects (although I recommend it). You’ll also notice that I’m using Sentry::check() – this is how to check a user’s login status while using the excellent Sentry package from Cartalyst.
Here’s what the <li> element looks like.
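Mine is roughly this – the class name is an assumption for this example, and the element starts hidden so the timer can show it when needed:

```html
<li class="login-status" style="display: none;">
    Your session has expired – please log in again.
</li>
```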
That’s really all you need to do, although you may also want to change when the functions are called so that unregistered visitors don’t get shown the message, too. Easy!
Yesterday I decided to get an action camera. The decision between a Garmin Virb and the latest GoPro is probably a topic for another article, so let’s just get the decision out of the way; I ended up getting the GoPro Hero3+ Black Edition. In fact, if I’d decided to get the Garmin Virb this article wouldn’t even be necessary. Why? Read on.
Firstly, I’ll say that there are various reasons for getting an action camera, one of which is the need to record my daily ride to work. Nearly every day I encounter situations that I’d like to have proof of, should the need arise. Read: motorists that probably shouldn’t be on the road in the first place.
Background
So what is this article about? I work in I.T. for a living and like looking at GPS data for my proper rides. So, that means you may or may not find the stuff below a wee bit technical. That all depends what you’re into. Anyway, for the GPS part of things I use a Garmin Edge 800. With an action camera, it makes sense to have some speed or other performance data on screen with the video – it’s the inner geek in me controlling that, no doubt. The Elite version of the Garmin Virb has built-in GPS, which makes it very easy to overlay GPS data on top of videos. However, because I bought a GoPro, this is an option I don’t have. Not out of the box, at least. But surely it can be done, right?
The short answer: yes!
Desired result
When I look at a cycling video I’ve made, I want to see the current speed & GPS track at the same time. Not in separate windows, either – on top of the video.
The search
I started out looking online for a way to do what I wanted. I found a few ways, including some software that looked promising. Unfortunately, the software is built for Windows only – I use a Mac. Stumped again!
Fortunately for me I found a blog called Syntax Candy, run by a guy called Bartek – a programmer based in Poland. I found that he’d already written a Java application to do exactly what I wanted – score! There are other apps that will do the same thing, but they aren’t free.
Now, I’ve done a fair amount of development in the past but never in Java. That meant my system wasn’t set up for Java development. The next step? Give my system the ability to compile and run Java applications.
The process
Here’s what I had to do to get my system ready to run Bartek’s application. I can only cover what I did on OS X, sorry – YMMV for Windows. Note that ~ refers to your home directory.
Download and install the Java Development Kit. As I write this, version 8 is the current version – click here to download.
Set the JAVA_HOME variable. This step is dependent on your shell – the default on OS X is bash, although I use Zsh. For bash, edit ~/.bashrc and for Zsh edit ~/.zshrc (create the relevant file if it doesn’t exist). Put this line in the file:
export JAVA_HOME=$(/usr/libexec/java_home)
Download Apache Maven – it is required when running Bartek’s application. Click here to download.
You can put Apache Maven anywhere, but put it somewhere that makes sense as you’ll need to reference that location later. On my system, the full path to Apache Maven is ~/Applications/apache-maven-3.2.1.
Get the full path to your Apache Maven location. To do this, open a Terminal window and type the following commands. They will return the full path, substituting ~ for the actual path to your home directory. Note that this example assumes your Apache Maven path is ~/Applications/apache-maven-3.2.1.
cd ~/Applications/apache-maven-3.2.1
pwd
Copy the path that is returned – mine is /Users/chris/Applications/apache-maven-3.2.1.
Add the path from the previous step to your PATH environment variable. This isn’t strictly required, but it means you can run Apache Maven without entering the full path to the binary. For bash, edit ~/.bashrc, for Zsh edit ~/.zshrc. On my system, the PATH declaration is as follows – yours will be different, but you can see where I’ve added the Apache Maven path.
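Mine looks like this – substitute the Maven path you copied earlier:

```shell
# appended to ~/.zshrc (or ~/.bashrc); the bin subdirectory is where the
# mvn binary lives
export PATH="$PATH:$HOME/Applications/apache-maven-3.2.1/bin"
```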
Extract the application’s archive somewhere. It doesn’t matter where, as long as you can access it via the Terminal.
Open a Terminal window and change to the directory you extracted the application to. Type the following command. This might take a while as Apache Maven needs to download a bunch of files that are required to run the application.
mvn package
In the same Terminal window, type the following command. You must be in the GpsVideo application’s directory for this to work.
mvn compile exec:exec
Here is what the contents of the Terminal window should look like (yours may look different depending on themes, etc).
GpsVideo – Terminal Output
And here is what the app should look like, when running correctly.
GpsVideo – Application
That’s it! If you’ve done everything correctly, the GpsVideo application should run after a few seconds.
GPX files
The Garmin Edge cycle computers save GPS and tracking data in FIT format (by default). For the GpsVideo application, the data needs to be in GPX format. This means taking the FIT file and converting it using something like Garmin Connect or rubiTrack. rubiTrack isn’t free, but the trial version’s only limitation is the number of activities you can have. Imported activities can be exported as GPX from there or from Garmin Connect very easily.
What does it look like?
Here’s a screenshot of what a video produced by GpsVideo looks like – not bad at all!
I’ve thought about writing a post like this for a while, but haven’t ever had a really good reason until now. As I write this, I’m sitting in a hotel in Singapore during the final night away from home, right at the tail end of a huge 6 month trip around the world. One of the battles I had during previous trips was “Do I really need to carry a bunch of photography gear with me?” It never really turned into an issue as the trips weren’t long or unique enough to be completely unrepeatable. This one, however, is a trip I’ll almost definitely never get to do again. I’ll also mention that I’ve done a bunch of contract sports photography in the past as well as selling some of my work and having some of it published in a magazine. I like to think I have some idea what I’m doing. 😉
My camera
Ok, that stuff out of the way, so what did I take? Way back in 2007 I bought a Nikon D300. At the time it was Nikon’s top semi-pro body and it has been such a great camera that I’ve simply never had any reason to upgrade it. A full frame camera (FX, in the Nikon world) would be a useful and worthwhile upgrade, but ultimately there’s no reason for someone in my position to really need it. So, I still have the same camera and it still works perfectly. Keep that in mind, as it does mean I’m using a crop-sensor camera (DX, in the Nikon world) that multiplies all focal lengths by ~1.5x. Note: Other manufacturers use their own FX/DX acronyms but, for the sake of simplicity, I’ll use FX and DX throughout this post. It’s worth noting that my partner is a professional photographer who owns all the relevant gear to go with that job. For what it’s worth, that means I’m fully aware of the advantages of FX vs DX and do have access to FX gear if absolutely necessary. The point of all the stuff above is to hopefully tell you that the body you take is of almost no relevance at all … as long as you take appropriate lenses.
Sigma 50mm f1.4
Oh, no … not Sigma? Yes, Sigma. When I bought this lens a while ago, I researched the nuts off it and found that all tests except one found that at f5.6, the aperture I tend to use most, the Sigma outperformed the far more expensive Nikon equivalent by a long way. When it comes to usability, this is a fantastic lens with a focal length that allows for beautiful portrait photography, close-ups and semi-wide shots.
Note that these sample images are low-res for the web.
Sigma 50mm sample shot.
See? Not bad. However, when used for shots that really should be shot wider, the 50mm did frustrate me a lot. Shooting beaches, landscapes, street scenes etc. was difficult and, to be truly effective, needed multiple shots stitched together. Don’t forget that as the lens is being used on a DX camera, the actual focal length was around 75mm.
Sigma 50mm sample shot.
Would I take this lens on a 6 month holiday again? Nope.
70–200 f2.8 VR
Actual focal length: 105–200mm … This has been my favourite lens for a very, very long time. The sharpness of it is absolutely unreal and the VR really does do what the shops say it will, i.e. let you shoot hand-held at shutter speeds almost 3 full stops slower, when needed. I used this lens more than I used the 50mm, simply because it meant scenes could be composed from far away while still allowing a decent selection of what was in the shot and what wasn’t. At locations like the Genbaku Dōmu in the Hiroshima Peace Memorial, the detail apparent in long-distance shots of the dome itself is stunning. The shot below is taken from across the river – a reasonable distance from the dome – and yet the detail is still pretty much perfect.
Nikon 70-200 f2.8 sample shot.
Would I take this lens on a 6 month holiday again? Definitely.
13″ MacBook Air
I really struggled with the decision to take a laptop on holiday this time. However, I now know that leaving it at home would’ve almost been worse than leaving the camera at home with it. This laptop’s extreme light weight combined with the ability to fire up Adobe Lightroom any time meant I could publish photos almost anywhere. If you decide to take a laptop on holiday, my recommendation would be something no heavier than the MacBook Air 13″ or an equivalent model from your manufacturer of choice. Would I take this laptop on a 6 month holiday again? Let’s put it this way … if I forgot to bring it, I’d get the pilots to turn the plane around so I could go home to get it. Seriously, it’s been that useful.
Fujifilm X100
Before this trip we knew that carrying SLR gear around every day and having it as the only option would’ve been quite painful. For that reason, we also took the Fujifilm X100 compact camera with us. It’s got a 23mm (35mm equivalent) fixed lens that turned out to be plenty wide enough to get some great shots. The photo quality is as good as most SLR cameras and there’s enough manual control to handle most situations, although the jumps in shutter speed settings can be limiting. For example, the exposure dial jumps from 1/1000 to 1/2000 to 1/4000 with nothing in between.
Fujifilm X100.
X100 sample shot.
If I could do it all again?
Aside from the compulsory extras like a whole bunch of memory cards and the relevant chargers, my carry-on baggage allowance wouldn’t allow any more than that. I also took an iPad 3 and, combined with the basic gear above, that put my bag over 10kg when everything was packed. If I did this trip again and had more lenses to choose from, I’d still take two lenses. Any more would be overkill and probably weigh too much. However, my lens selection would definitely change. This is what I’d take.
For a DX camera
14–24mm f2.8 super wide angle zoom – NOT fisheye, though.
24–70mm f2.8 zoom
For an FX camera
24–70mm f2.8 zoom
70–200 f2.8 zoom
Why the different kits for different cameras? Simple, really. My partner, Erin, had a 24–70mm f2.8 with her, a lens I used every time she wasn’t using it. I found the resulting shots to be far more satisfying when it came to processing them in Lightroom later. However, if you’ve got a DX camera, I believe it would be a great idea to have a super-wide zoom lens that is still impressively wide when you take the crop sensor into account. For example, the 14–24mm lens becomes 21–36mm, or thereabouts. That’s still plenty wide enough, in my opinion. If you can’t take a shot using those lenses, you either need to rethink your shot … or get closer. 😉
The compulsory stuff
Aside from “actual” camera gear, here’s the gear that I think everyone should take. Most of it is pretty obvious, but I figure I’ll cover all bases (just to be safe).
* Lexar Pro USB 2.0 card reader (CF, CFII, SD, etc)
* 4 x SanDisk 2GB Extreme III memory cards. Why so small? If I lose one, I won’t lose all my shots – simple. Plus, the D300 is 12MP, meaning a super-fast memory card isn’t critical for me. It would be, though, for cameras like the Nikon D800 (a ridiculous 36MP) or Canon 5D MkIII (22MP).
* Black Rapid “Sport” camera strap. Neck straps, in my opinion, are the best way to ruin your neck muscles on a long trip. The Black Rapid straps completely solve this by taking all the weight off your neck.
* Battery charger (for the Nikon and Fujifilm cameras)
* Spare battery
* A “Toddy” cloth, from Toddy Gear. This is seriously the best lens cleaning cloth I’ve ever used. Sounds weird to rave about it, but trust me, it’s that good.
Tripod
One thing that would’ve been useful was a tripod, although the logistics of carrying one around for 6 months were the reason we didn’t take one. Night photography was obviously difficult, but that’s where something like an SLR-weight “flexible” tripod would be perfect. Whenever we needed to take night shots we just used a bit of common sense and found an appropriate surface to put the cameras on. It’s also worth keeping in mind that many tourist spots around the world don’t allow tripods, e.g. the temples of Angkor prohibit tripods at most of their sites. If the staff there see you setting up a tripod, you’ll be immediately asked to take it down. For that reason, carrying one around on a 6 month holiday would be nothing more than a pain in the backside, in my opinion.
Backup
On a trip like this, I would also be very careful about how you back up your images. Gigabytes of RAW files are difficult to back up unless you can afford huge memory cards dedicated to that purpose. For me, I processed my images, exported them from Lightroom, then moved the RAW files to an SD card. Unfortunately, the 128GB SSD in the MacBook Air filled up quickly when being used by two people. The exported JPG files were then backed up by uploading a copy to my Mega account. With 50GB of free storage, I found Mega to be the perfect destination for those files. Another option is a portable backup device, e.g. a HyperDrive ColorSpace. The solid-state drive versions are rugged enough for the average trip and are available in a range of sizes depending on your needs.
Carrying it all
Depending on where you’re going, security may or may not be an issue. If your trip is to resorts around the world, a secure backpack may not be necessary. For a trip like mine, the bag I should’ve carried was the Pacsafe V25. It’s big enough to hold all the gear I had plus a good selection of “day stuff” i.e. water, wallets, etc. The biggest selling point, though, is the bag’s security options. It would be very difficult for anyone to remove anything from one of these bags without you knowing about it. I had the opportunity to buy one in New York, but didn’t. This is a decision I’m still regretting now.
More gear?
I wouldn’t bother taking any more gear than that. Extension tubes, multiple USB cables, external flashes, battery grips etc. all add weight and, unless you’re taking photos with the intention of using them commercially later and have a specific need for equipment like that, there’s a good chance you won’t end up using it.
Anything else?
Not really. These are my personal thoughts on photography while travelling, having done a LOT of it recently. Everyone’s thoughts will be different, no doubt, but that’s how the world works. Hopefully you enjoyed reading these ramblings. 😉
I recently modified my dev environment to allow PHP 5.4 apps, something that involved adding a bunch of Debian Linux “wheezy” sources to my aptitude configuration. No problem in itself, but it did totally wreck my web server’s ability to load the suhosin.so extension (PHP 5.4 doesn’t support it).
This wasn’t a biggie until I hit up one of my Laravel apps and was greeted with a lovely message that said the following:
Unhandled Exception Message: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/suhosin.so' - /usr/lib/php5/20100525/suhosin.so: cannot open shared object file: No such file or directory Location: Unknown on line 0
Not nice. Looking through php.ini, apache2.conf and httpd.conf showed nothing and, since I’m not particularly familiar with suhosin itself, I couldn’t figure out where the extension was being loaded. A quick search helped:
locate suhosin
That showed me that there is a file on my system called suhosin.ini located in /etc/php5/conf.d, so I fired up nano (I’m not cool enough to use vi) and, sure enough, there at the top was the following line:
extension=suhosin.so
I commented that line out, then restarted the Apache service:
service apache2 restart
… and now Laravel doesn’t throw any horrible unhandled exceptions about suhosin being broken.
I’ll be the first to admit that there is a huge number of documents, websites, tutorials and guides already available for what I’d say is one of the most common questions from web developers: “How do I set up my own Linux web development server?” Read that carefully – development web server. The steps below will fall pretty far short of the mark if you need a production web server. I might cover that later in another article, but not in this one.
I’ve built dozens of Linux web servers over the years. Some of them have been for testing, some for production. The intriguing thing is that out of all those servers, I don’t think I’ve ever followed the same guide twice. Why do I need a guide if I’ve built so many? Partly because my memory is crap and partly because I don’t build them often enough to remember the steps off by heart.
So, here it is – the “Digital Formula No-Bullshit Guide To Setting Up A Debian LAMP Server.”
Question: Why Debian?
Answer: Many reasons, for example:
There’s so much documentation available that guides like this aren’t even necessary … errr … 😉
From the reading I’ve done, Ubuntu is based on Debian … unless you need something only Ubuntu can provide, why not go straight to the Big Daddy distribution that the other guys copied off? [ start Linux distro war here ].
I’ve heard people say, countless times, that for security, Debian is the bee’s knees. Is it true? Until I hear otherwise, I’m rolling with it.
It’s just easy. If someone who’s been administering enterprise Windows infrastructure for 15+ years can handle Debian, believe me, anyone can.
Note that this is about as far as you can get from an article about how to manage Debian itself. For that reason, I’m going to assume that you know the basics like CLI navigation, editing text files and using the man command. So, let’s go (you’ll need to run the commands below either as root, or via the sudo command).
The basic foundation
Install Debian Linux. Bet you didn’t see that coming! During installation, try to make sure you enable a network mirror, if possible – it’ll save time later. Note that during installation of the latest version (6.0.5 as I write this) there is an option to configure your build as a web server, etc. – I don’t use these. I only select the option to install the core utilities/services.
Once Debian is installed and booted and you’re logged in, disable the CD/DVD source by editing /etc/apt/sources.list and commenting out the line(s) that start with deb cdrom:. This is optional, but I think it makes things easier later as it ensures your update processes won’t try to use the media you installed from, only network sources.
Update your package lists by running aptitude update.
Upgrade the currently installed packages by running aptitude safe-upgrade. If you know what you’re doing, you could run aptitude full-upgrade but, as the man pages say, it is more likely to perform unwanted actions. For those that have done this sort of thing before, this is the same as running aptitude dist-upgrade – the command just got renamed.
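The two package steps above, in order – run them as root or via sudo:

```shell
aptitude update         # refresh the package lists
aptitude safe-upgrade   # upgrade installed packages, conservatively
```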
I’m a bit of a n00b when it comes to doing stuff in JavaScript. So naturally, when I find a way of introducing a bit of efficiency into a JavaScript situation, I’m more than happy to throw it into the mix. My only disclaimer for the below ramblings is that if you’re a developer that knows anything more than the basics, everything I’m about to write will be nothing more than a “Yeah, no shit, genius” kinda thing. So, go easy on me, mmmkay? 😉
So anyway, my current attempt at web hackery requires a bit of JavaScript, AJAX + the usual suspects that often go along with those things i.e. jQuery Core, jQuery UI, Sass … you get the picture. Nothing unusual going on here … did I mention that I love jQuery?
I tend to write stuff, make it work, carry on to other things and then come back later, look at a block of code and throw up in my mouth a little bit when I see how freakin’ ugly and inefficient it is. I mean, seriously, why would you duplicate an almost identical block of JavaScript eleven times when you can do what a real developer would do, and write it properly in the first place?
This example is super-obvious and really aimed at people, like me, who don’t do this stuff professionally but like to learn better ways of doing things. Firstly, here’s a snippet of what I had before.
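It went something like this – a reconstruction of the pattern, so treat the selectors as examples:

```javascript
$( 'button.delete' ).button( { icons : { primary : 'ui-icon-circle-minus' }, text : false } );
$( 'div#password-submit input' ).button( { icons : { primary : 'ui-icon-check' }, text : true } );
$( 'div#profile-submit input' ).button( { icons : { primary : 'ui-icon-check' }, text : false } );
```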
I had that bit of jQuery UI a bunch of times and had to add a new one whenever a new element came up that required button-ifying … like that word? It’s new. 😉
Needless to say, thatās horribly inefficient as it requires copy/paste/edit for every occurrence.
Not too long ago, I followed along all doe-eyed while Jeffrey Way (@envatowebdev) taught my fellow n00bs and me how to do things properly with jQuery. Objects came up, and a revelation happened. For me, anyway.
So now, instead of duplicating that block of code (the one up there a bit) many times over, I’ve got all my buttons in an array of objects which is then processed, and the relevant properties are turned into the equivalent of the eleven blocks of rubbish I had before. I’m the first to admit that there will definitely be a better way of doing this, still, but for now I’m happy with the changes.
Here’s how it looks now:
var buttons = {
    b1 : { id : 'button.delete', icon : 'ui-icon-circle-minus', text : false },
    b2 : { id : 'div#password-submit input', icon : 'ui-icon-check', text : true },
    b3 : { id : 'div#profile-submit input', icon : 'ui-icon-check', text : false }
};

$.each( buttons, function( key, button ) {
    $( button.id ).button( { icons : { primary : button.icon }, text : button.text } );
});
Plug a new button-ifiable element into the array and it’ll automatically be processed the next time the script runs (and isn’t cached – we’ve all fallen into that trap).
The result is exactly the same as I was getting before, but, in my opinion, could be called more elegant, efficient, easier to read, whatever.
Most VMware administrators will know that it’s possible to run VMware vCenter on one server and have the vSphere SQL database on another server. This is a perfectly acceptable configuration and, in anything but smaller environments, is probably the best thing to do.
Anyway, the latest release of Microsoft SQL Server, SQL Server 2012, was released during the first half of 2012 and is, of course, supported for use with VMware vSphere. That means that your vCenter server has to be able to connect to the remote instance of SQL Server 2012 – here’s where things get tricky.
At the most basic level, connecting to the server is easy, but VMware vSphere 5 running on a 64-bit version of Windows Server 2008 requires that the vCenter server have a 64-bit DSN (data source name) configured to manage the connection. Easy, right? You’d just thrash ahead and install the SQL Server 2012 management tools or some other Microsoft package with the latest SQL Native Client on the vCenter server … yeah? If you do, vCenter will not be able to connect to the remote SQL Server 2012 instance. As of the date of this article, May 31st 2012, the latest version of the SQL Native Client is 11.0 – this is where the problem lies.
There’s something in version 11.0 of the SQL Native Client that prevents vCenter from being able to use the 64-bit DSN you create. I don’t know or care what the problem is, but I spent the best part of 3 hours trying to get it working, including trawling the VMware community forums, running c:\windows\system32\odbcad32.exe, c:\windows\syswow64\odbcad32.exe and all manner of other things.
The final thing I tried did the trick, and that was to simply remove version 11.0 of the SQL Server Native Client and install version 10.0. Version 10.0 of the SQL Native Client is the version that ships with SQL Server 2008 and the SQL Server 2008 Management Studio. In my setup, I need to be able to access the remote SQL Server 2012 database from the vCenter server, so installing the SQL Server 2008 Management Studio didn’t present an issue.
At the date of writing this article, May 31st 2012, the SQL Server 2008 Management Studio can be downloaded by going to http://www.microsoft.com/en-us/download/details.aspx?id=7593 – this article relates specifically to the 64-bit version of Windows, so please make sure you download the 64-bit version of the management studio.
Hopefully that long-winded explanation helps someone.
Coda 2, one of my favourite code editors, was released on May 24th 2012 to a somewhat underwhelming reception, in my opinion. To be 100% honest, I’m a bit disappointed, not with Coda 2 itself for product or feature reasons, but because of the responses it has prompted from some people during their testing and usage of a brand new product.
I’m not going to review Coda 2 myself – plenty of other websites have already done that – but I am going to address one of the questions that seems to have come up a few times, despite Coda 2 still being a relative infant as I write this.
What question?
Synchronisation. By that I mean the ability to maintain the same settings between multiple Macs running Coda 2. The first thing I need to make clear is that if you purchase Coda 2 from the Mac App Store, iCloud synchronisation is already built in but, if you buy Coda 2 directly from Panic, you can’t use iCloud to synchronise your configuration. Panic’s own FAQ page has this to say about it:
At the moment, there is only one difference between the two versions: the Mac App Store version will support iCloud syncing of Sites and Clips, and the direct version will not. This is a restriction imposed by Apple.
So we’re screwed, right?
Well, no, and if we’re a little bit smart about it, the solution is relatively simple – Dropbox. Given that OS X stores most application configuration in well-known locations like ~/Library/Application Support and ~/Library/Preferences, synchronising those directories with Dropbox is even easier than on competing operating systems that use things like the *gulp* registry. By the way, in case you’re not sure, ~ means your home directory …
Others have written articles about how some of Coda’s competitors can be synchronised using Dropbox and this solution is really no different – all I’ve done is automate the process a bit.
How?
By moving the following bits to Dropbox:
The Coda 2 main configuration directory
The Coda 2 recent files list
The Coda 2 user preferences file
No really, how?
If you’re familiar with the term “symbolic link”, you’ll know what’s coming. If you’re not, you can run the script below and everything will be done for you. That said, you should be familiar with how to grab a script and, if necessary, make it executable – sorry, but this isn’t the right place for me to explain how to do that (hint: chmod +x script_name). Besides, I’d imagine readers of this article are relatively technical anyway, right? 😉
Anyway, grab the script below, edit the paths at the top if you need to, then run it. A couple of warnings, though:
Before running the script on any Mac, please make sure Coda 2 has been run at least once
The first Mac you run this on should be the one where Coda 2 is configured how you’d like it
On subsequent Macs, you must make sure you edit the script and change the first_run variable to FALSE before you run it … if you don’t, the existing Coda 2 configuration in Dropbox will be overwritten
The script will do a few checks while running, move your Coda 2 configuration to Dropbox, delete the local configuration and then create some symbolic links to the new Dropbox versions. As far as Coda 2 is concerned, nothing has changed – it still operates as if the files are in the same place they’ve always been.
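Here’s a sketch of the script. The Coda 2 file names and the Dropbox location at the top are assumptions based on my own system – verify them on yours before running anything:

```shell
#!/bin/bash
# Move Coda 2 configuration into Dropbox and leave symbolic links behind.
first_run=TRUE                        # set to FALSE on every Mac after the first
dropbox_dir="$HOME/Dropbox/Coda 2"

sync_item() {
    local src="$1"
    local dest="$dropbox_dir/$(basename "$src")"
    [ -e "$src" ] || [ -L "$src" ] || return 0   # skip anything that isn't present
    mkdir -p "$dropbox_dir"
    if [ "$first_run" = "TRUE" ]; then
        mv "$src" "$dest"             # first Mac: move the real files into Dropbox
    else
        rm -rf "$src"                 # other Macs: drop the local copy
    fi
    ln -s "$dest" "$src"              # point Coda 2 at the Dropbox copy
}

# the three bits listed earlier; the plist names are my best guess, so
# check what actually exists under ~/Library on your Mac
sync_item "$HOME/Library/Application Support/Coda 2"
sync_item "$HOME/Library/Preferences/com.panic.Coda2.plist"
sync_item "$HOME/Library/Preferences/com.panic.Coda2.LSSharedFileList.plist"
```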