Securing a multi-user Apache Web Server

As part of refining my Apache web server, which runs multiple sites, I’ve created a user account, database account and home folder per site. So, for example, the site has a user account example, a database account example and a web folder located at:


The corresponding Apache VirtualHost for this site is:

<VirtualHost *:80>
        ErrorLog /var/log/apache2/
        LogLevel warn
        CustomLog /var/log/apache2/ combined
        DocumentRoot /home/example/public_html
        <IfModule mod_suexec.c>
                SuexecUserGroup example example
        </IfModule>
</VirtualHost>

Previously, to ensure PHP scripts worked, I had a Bash cron job loop over all the users’ public_html folders and set the owner on each public_html folder to the Apache user www-data.

Not ideal.

So after a few hours of digging I managed to deploy a solution that is both secure and flexible, allowing users to log on and edit their web pages without permission headaches.

Assuming a basic Apache setup first install the Apache suPHP and suEXEC modules:

sudo apt-get install libapache2-mod-suphp apache2-suexec

Enable the modules:

sudo a2enmod suexec
sudo a2enmod suphp

The suPHP module replaces the Apache PHP4 and PHP5 modules. Having them active alongside suPHP prevents it from working properly, so you’ll need to disable the PHP4 and PHP5 modules:

sudo a2dismod php4
sudo a2dismod php5

Finally you’ll want to set the permissions on the user folder:

find ~/public_html/ -type f -exec chmod 644 {} \;
find ~/public_html/ -type d -exec chmod 755 {} \;
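The two find commands above cover a single user. As a sketch, a small helper could apply the same permissions across every site in one go, replacing my old cron job (the /home/*/public_html layout is an assumption based on my setup):

```shell
# fix_site_perms: set web-safe permissions (files 644, directories 755)
# under every user's public_html beneath the given base directory.
# The /home default is an assumption -- adjust to your own layout.
fix_site_perms() {
    base="${1:-/home}"
    for dir in "$base"/*/public_html; do
        [ -d "$dir" ] || continue
        find "$dir" -type f -exec chmod 644 {} \;
        find "$dir" -type d -exec chmod 755 {} \;
    done
}
```

Run as root (e.g. `fix_site_perms /home`) so chmod succeeds across all users’ folders.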

To tighten this setup even further I’d ideally like to set those permissions to 600 and 700 respectively, but that’s a job for tomorrow.


Awesome link which covers much of the above and then some.


Spreading your bets on RAID

In the early days of our startup, bubblegum and duct tape seemed to be the order of the day as we struggled to keep things running on cheap-as-chips computers bought off eBay and a ragtag bunch of borrowed Dell Optiplexes.

Developer files sat on their individual machines, source code was scattered across the place and the concept of centralised document storage was a share on one of the developer machines called Common in which everyone dumped their stuff.

A year into this rapidly escalating mess I took matters into my own hands and pestered the boss for a £1500 budget to build a file server. A Supermicro SC-743 Cool & Quiet Case coupled with a top notch Xeon board, 8GB of RAM, Intel Quad Core CPU and a top of the line 3ware 9690SA RAID card (with battery backup no less!) meant we were about to take our file server (the aforementioned developer’s machine) from a mewling kitten to a roaring tiger.

The whole thing was assembled beautifully and worked a treat, with a RAID 1 mirror for the Debian installation and 8x Seagate 7200.11 hard drives for the RAID 10 storage array.

In building this machine I made one and only one mistake. All of the drives were the same make and model and doubtless all manufactured at the same time.

Fast forward 12 months and on coming into work on Monday morning I saw a mail from the 3ware monitoring manager: ‘Drive 4 dropped out of array’. Not a problem I thought, we had a monthly offsite backup in place. I hopped online and ordered a spare disk.

Later that afternoon I received another alert: ‘Drive 6 dropped out of array’.

‘Sh****t’ I (probably) exclaimed realizing that if the second drive had dropped out of the same stripe as the previous drive our array would have been toast. I quickly ordered two more drives.

Making hasty backups and crossing fingers I awaited the arrival of the new drives the following day and on their arrival stuck one in to replace the failed disk. A few hours after successfully rebuilding the array I saw another disk fail.

It was at this point that I got down on my knees and began to pray. (I’m just kidding – I did that the day before).

On a hunch I removed and reinserted the failed drive. It initialized and rebuilt fine. A few hours later one of the new drives dropped out. Over the next few days I could barely keep the RAID array from failing entirely, with drives dropping out once or twice a day and then initializing on reinsertion.

We were making daily backups by now but since this was our main file server and we were going through a pretty lean month it meant that we had zero budget to replace all the disks or get another box.

It was then that I exercised my Google-fu and hit the internet. Turned out Seagate had a bad batch of 7200.11 disks and had issued a firmware update.

The duty of taking the box offline after work and updating the firmware of all 11 drives fell on my shoulders. This ghastly process involved sticking all the disks, one at a time into a desktop and running the firmware update on each one.

Since then the array has run like a champ. We kept it with the original 8 disks and 3 hot spares for good measure…it’s been 7 years and nary a complaint from 3ware’s management tool.

Fast forward to 2013 and our latest storage purchase was a lovely Synology 10 disk NAS. Quick and (very) quiet it came populated by the manufacturer with 10 2TB Seagate disks (Enterprise models no less!). We loaded it up with our data and enjoyed the feel of the new shiny, flashing its pretty lights at us from the equipment rack.

Fast forward 12 months and you guessed it, a drive dropout. Then another, and another, followed by another. Over the course of 6 months we must have replaced more than half of those damned Seagate drives.

Moral of the story? Don’t buy Seagate.

Heheh, just kidding (maybe)…the moral of the story is not to buy the same brand and batch of hard disks when speccing your storage array. Since those early days of scraping by we now build some pretty powerful RAID arrays for our customers, and we always try to use a 50/50 mix of different brands and batches.

(We also make a lot of backups!)

Device Icons

I’m a great believer in having strong visual cues in user interfaces to help a user orient themselves. To this end I think manufacturers of devices like Kingston, LaCie, Sandisk, etc. should step up to the plate more and offer the user quality icons for their devices.

LaCie are actually fairly good at this, although some of their icons leave a little to be desired. Sandisk and Kingston AFAIK don’t provide any icons for their devices which is a great pity.

The benefit of these icons is that a user interface can go from this:


To this:


Now isn’t that much better?


More for my benefit than yours, but I’ve attached/linked to the icons I use here:

Attributed wherever possible to the original author of the icon.

LaCie Little Big Disk Icon

LaCie 2big Icon

Kingston DTSE9 Icon

Sandisk Titanium Icon (Author: iiroku)

Openfire Single Sign On (SSO)

I’m a dabbler, I like to dabble.

While most people are happily using Google Talk, Facebook chat, Skype and the like I’m busy playing around with my own chat server, writing plugins for it and seeing if I can get things like Single Sign On (SSO), DNS Service Records and Federation working. It’s time consuming, frustrating at times but ultimately rewarding. One particularly frustrating problem I recently tackled was single sign on with Openfire (a Jabber/XMPP messaging server).

My basic setup likely mirrors most enterprise-y networks:

  • Windows Active Directory Domain Controller with Windows Support Tools installed
  • Openfire 3.8 bound to the Windows DC
  • Windows XP/Windows Terminal Server Clients running Pandion/Pidgin
  • Mac OS X Clients Running Adium

The first step is to ensure that you have a working Windows AD network alongside a working Openfire installation.

  • AD Domain: EXAMPLE.COM
  • Openfire (XMPP) Domain: EXAMPLE.COM
  • Keytab account: xmpp-openfire

Ensure you have an A record and a reverse DNS record for your Openfire server, then set up the two standard XMPP DNS service records – the client record on port 5222 and the server record on port 5269 – like so:

86400 IN SRV 0 0 5222
86400 IN SRV 0 0 5269
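For illustration, here is roughly how those two SRV records might look in a BIND zone file – the names example.com and openfire.example.com are placeholders for your own domain and server:

```
; Hypothetical zone fragment -- example.com and openfire.example.com are placeholders.
_xmpp-client._tcp.example.com. 86400 IN SRV 0 0 5222 openfire.example.com.
_xmpp-server._tcp.example.com. 86400 IN SRV 0 0 5269 openfire.example.com.
```

You can sanity-check them afterwards with `dig SRV _xmpp-client._tcp.example.com`.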

With DNS done, create two new Active Directory accounts. Account one is for binding the Openfire server to the domain (skip this account if you’ve already bound Openfire to your domain).

Account two is to associate your Service Principal Name (SPN) so Kerberos clients can find and authenticate using SSO with your Openfire server.

On account two, under Account properties, ensure that User cannot change password, Password never expires and Do not require Kerberos preauthentication are checked.

On the Windows Domain Controller you’ll now need to create the SPN and keytab. The SPN (Service Principal Name) is used by clients to lookup the name of the Openfire server for SSO. The keytab contains pairs of Service Principals and encrypted keys which allows a service to automatically authenticate against the Domain Controller without being prompted for a password.

Creating the SPN:

I created two records since it seems some clients look up xmpp/ and some look up xmpp/

setspn -A xmpp/ xmpp-openfire
setspn -A xmpp/ xmpp-openfire

Map the SPN to the keytab account xmpp-openfire; when prompted, enter the xmpp-openfire password:

ktpass -princ xmpp/ -mapuser xmpp-openfire@EXAMPLE.COM -pass * -ptype KRB5_NT_PRINCIPAL

Create the keytab:

I found that the Java keytab didn’t work on my Openfire system, so I used the Windows ktpass utility to create it instead. Some users report the converse, so use whichever works for you:

Java keytab generation:

ktab -k xmpp.keytab -a xmpp/

Windows keytab generation:

ktpass -princ xmpp/ -mapuser xmpp-openfire@EXAMPLE.COM -pass * -ptype KRB5_NT_PRINCIPAL -out xmpp.keytab

Copy the keytab to your Openfire directory, typically /usr/share/openfire or /opt/openfire. The full path will look like this:


Configuring Linux for Active Directory

Configure Kerberos

First we need to install ntp, kerberos and samba:

apt-get install ntp krb5-config krb5-user krb5-doc winbind samba

Enter your workgroup name:


Configure /etc/krb5.conf

[logging]
        default = FILE:/var/log/krb5libs.log
        kdc = FILE:/var/log/krb5kdc.log
        admin_server = FILE:/var/log/kadmind.log

[libdefaults]
        dns_lookup_realm = true
        dns_lookup_kdc = true
        ticket_lifetime = 24h
        forwardable = yes

[appdefaults]
        pam = {
                debug = false
                ticket_lifetime = 36000
                renew_lifetime = 36000
                forwardable = true
                krb4_convert = false
        }

Test the connection to Active Directory by entering the following command:

:~# kinit xmpp-openfire@EXAMPLE.COM

Check whether the request for the Active Directory ticket was successful using the klist command:

:~# klist

The result of this command should be something like this:

Ticket cache: FILE:/tmp/krb5cc_0
Default principal: xmpp-openfire@EXAMPLE.COM

Valid starting       Expires              Service principal
07/11/13 21:41:31    07/12/13 07:41:31    krbtgt/EXAMPLE.COM@EXAMPLE.COM
        renew until 07/12/14 21:41:31

Join the domain

Configure your smb.conf like so:

[global]
   workgroup = EXAMPLE
   realm = EXAMPLE.COM
   preferred master = no
   server string = Linux Test Machine
   security = ADS
   encrypt passwords = yes
   log level = 3
   log file = /var/log/samba/%m
   max log size = 50
   printcap name = cups
   printing = cups
   winbind enum users = Yes
   winbind enum groups = Yes
   winbind use default domain = Yes
   winbind nested groups = Yes
   winbind separator = +
   idmap uid = 600-20000
   idmap gid = 600-20000
   ;template primary group = "Domain Users"
   template shell = /bin/bash

[homes]
   comment = Home Directories
   valid users = %S
   read only = No
   browseable = No

[printers]
   comment = All Printers
   path = /var/spool/cups
   browseable = no
   printable = yes
   guest ok = yes

Join the domain:

:~# net ads join -U administrator

You will be asked to enter the AD Administrator password.

Verify you can list the users and groups on the domain:

:~# wbinfo -u
:~# wbinfo -g

Test that the keytab works:

From your Openfire system, run the following command:

  kinit -k -t /usr/share/openfire/resources/xmpp.keytab xmpp/ -V

You should see:

Authenticated to Kerberos v5

Then create a GSSAPI configuration file called gss.conf in your Openfire configuration folder, normally /etc/openfire or /opt/openfire/conf. Ensure you set the path to your xmpp.keytab file.
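For reference, a gss.conf along these lines has worked in similar setups – the keytab path, realm and principal below are assumptions you’ll need to adapt to your own environment:

```
// Hypothetical example -- adjust keyTab, realm and principal to your setup.
com.sun.security.jgss.accept {
    com.sun.security.auth.module.Krb5LoginModule required
        storeKey=true
        useKeyTab=true
        doNotPrompt=true
        keyTab="/usr/share/openfire/resources/xmpp.keytab"
        realm="EXAMPLE.COM"
        principal="xmpp/openfire.example.com@EXAMPLE.COM"
        debug=true
        isInitiator=false;
};
```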

Ensure the file is owned by the openfire user.

Stop Openfire and enable GSSAPI by editing your openfire.xml configuration file which is found in the openfire conf directory:

<!-- sasl configuration -->
<sasl>
    <mechs>GSSAPI</mechs>
    <!-- Set this to your Kerberos realm name which is usually your AD domain name in all caps. -->
    <realm>EXAMPLE.COM</realm>
    <gssapi>
        <!-- You can set this to false once you have everything working. -->
        <debug>true</debug>
        <!-- Set this to the location of your gss.conf file created earlier -->
        <!-- "/" is used in the path here not "\" even though this is on Windows. -->
        <config>/etc/openfire/gss.conf</config>
        <useSubjectCredsOnly>false</useSubjectCredsOnly>
    </gssapi>
</sasl>

Or add to System Properties:

sasl.gssapi.config /etc/openfire/gss.conf
sasl.gssapi.debug false
sasl.gssapi.useSubjectCredsOnly false
sasl.mechs GSSAPI
sasl.realm EXAMPLE.COM

Restart Openfire

Buying Hi-Def music today is a crapshoot

The loudness war has been going on for some time, with musicians, producers and record companies over the past few decades mastering and releasing their records at ever increasing volume and compression. In the days of vinyl there was a physical limit to how loud you could press a record before the needle would be unable to play it; the advent of Compact Discs, however, changed that. Whilst they boasted a greater dynamic range than vinyl, they also defined a maximum peak amplitude. Through some science and a bunch of signal processing, record engineers could thus push the overall volume of a track so that it became louder throughout, often hitting that peak and compressing the dynamic range of the record. The long and short of this is that nearly all modern records have dynamic range compression applied, and the result is a loss of sound quality in the form of distortion and clipping.

Why do record companies do this? A popular perception (misconception?) is that the louder a record sounds – the better it sounds – and hence the more likely someone hearing it in the record store or over the radio is to buy it.

Note the mediums over which most people traditionally hear new music – inside record stores, over the radio, in the coffee shop, on their phones, tablets and notebooks – none of these mediums are known for high fidelity listening and their poor quality speakers tend to mask the compression in the music. As a result loud sells.

So if that new track by The Killers sounds good to you playing on the cheap speakers at your local coffee shop, just wait until you hear the upcoming Muse single – it’s probably louder, and in a noisy coffee shop it will sound better.

The problem arises when you listen to that record on your nice, shiny headphones or your stereo at home – in a quiet environment, with good audio equipment those distorted, normalised tracks are going to sound noisy, fatiguing and to be perfectly blunt – a bit crap.

A backlash from consumers and high-end audio equipment manufacturers was bound to happen, with the demand for high quality, well mastered records ever increasing. Companies like HDtracks, naimlabel and LINN Records, to name a few, stepped in to fill the gap. Not only do they offer well mastered tracks, they also boast a higher resolution than CD can provide, with quality up to 24-bit and 192kHz. One thing which needs to be stressed however is that mastering matters – in fact it matters more than how much fidelity a record has: a poorly mastered 24-bit 192kHz record is not going to sound any better than a well mastered 16-bit 44.1kHz CD. In fact if it’s very poorly mastered it will almost certainly sound worse than an MP3 rip of the CD.

Take Elton John’s self-titled album, for example. It began life in 1970 on vinyl, with no sign of clipping or compression:

Elton John - Elton John - The King Must Die - 1970 Vinyl

In 1985 it was released on CD, again with no discernible compression:

Elton John - Elton John - The King Must Die - 1985 CD

In 1995 it was re-released as a Remastered Edition on CD. You can see the track is louder but it’s just about acceptable:

Elton John - Elton John - The King Must Die - 1995 Remastered CD

In 2008 it was again re-released as a Deluxe Edition CD. As expected for a modern release, it’s been made loud and sounds compressed and fatiguing as a result:

Elton John - Elton John - The King Must Die - 2008 Deluxe Edition CD

Finally Elton John’s album appears on HDtracks in high definition 24-bit 96kHz. It should offer the best sound quality but to take advantage of the vast dynamic range of those 24-bits it will need to be mastered properly. Here we can see that this is definitely not the case. In fact it suffers from more dynamic range compression than the 2008 CD release:

Elton John - Elton John - The King Must Die - HDtracks 24-bit/96kHz

The HDtracks edition comes from a store which aims to provide high-end audio tracks, yet it suffers from bad mastering. The result is a lot of clipping and excessive loudness, and consequently it sounds worse than the older, less compressed editions.

So what can we surmise from this? Put simply that despite the much touted quality of 24-bit music there’s no certainty that the HD version of the album you’re buying is also mastered properly and free from excessive normalisation and distortion. For those looking to upgrade their album collection it’s clear that there’s no guarantee your new, 24-bit purchases will sound better. This is a great pity since technically the new high definition audio formats offer higher quality than has ever been possible – if only the studios, record producers and artists would oblige. Until then for those seeking quality HD audio tracks it’s a crapshoot.


The background to our lives

John F. Kennedy, Robert F. Kennedy, John Lennon, Martin Luther King Jr., Abraham Lincoln and that legendary queen of Queen; Freddie Mercury – so many showmen have graced the stage of life. They have enlivened our worlds and broadened our horizons. They have pushed us from complacency and made us look at the world with new eyes. They have enriched us and inspired us and they have left us before their time.

It was with heavy heart that I heard of Steven P. Jobs’ passing. He was one of the greats. A man who pushed us and his peers, a man who showed us that there was a better way of doing things. Whilst his passage from this life was expected; his health visibly failing at every public appearance – his loss still came as a blow. Felt keenly around the world, it was a loss for which many of us felt wholly unprepared.

Jobs’ legacy, from the Apple II to the Mac, the iPod, the iPhone and the iPad, has been part of the background of our lives. Those few who haven’t used his products have certainly used those of his competitors – products which borrowed and benefitted from his great designs. He might not have been the sole creator but his influence was evident in the high standards of each.

My heroes in this world have been those lofty individuals who almost canonised have passed into legend – JFK, RFK, Lennon, MLK, Lincoln, Freddie Mercury. These men however are obvious choices. What surprised me about Steve Jobs’ death was not how acutely his passing was felt, but that I hadn’t realised he’d been a hero all along.

Below are a selection of family images from over the years. From a little girl’s first e-mail to cousins communicating across continents Apple have been a valuable part of our lives:

To breaking your fall on expensive gear and happy endings!

It’s every photographer’s worst nightmare (well, apart from that one where you miss the shot of a lifetime): dropping your gear. Ice and cameras don’t mix very well, but in pursuit of that ever elusive perfect shot we push ourselves to extraordinary lengths, into harm’s way if needs be, to satisfy our craft.

In my case, I pushed myself to get up at the crack of well, midday, out into the frosty January afternoon to get some pictures of the snow.

Despite the supposed grip a good pair of hiking boots was supposed to offer, I could get little purchase on the ice. The pavement was covered in a thick layer and it glistened treacherously. It was no wonder the street was deserted.

I moved as carefully as possible, my cat-like reflexes saving me from certain embarrassment. I took as many photos as I could before my fingers began to tingle from the cold seeping its way through my gloves, and managed to make it back to my doorstep in one piece. ‘At last’, I thought, ‘home free’. Such hubris.

I of course fell.

My camera landed lens first with a dull crack, shards of my lovely B+W MRC filter flying everywhere. I cared not a jot about a potential wrist injury as I struggled to get back on my feet and survey the damage. My beloved 24-70mm f/2.8 lens had suffered I knew not what kind of damage, but dragging my crestfallen self back inside I surveyed the devastation. The filter had shattered into a myriad of multi-coated shards, its deformed body fusing itself to the lens thread and making removal impossible.

The lens itself appeared to be relatively undamaged. The front element had a few tiny nicks – the filter thread on the lens however was completely gone on one corner.

Thankfully my camera was without a scratch – the lens took the brunt and surprisingly functioned still. I took a few pictures and despite the broken shards of glass only a small amount of ghosting was visible.

Despite the relatively good prognosis I still felt pretty beat up, but then recalled that I had camera insurance with Photoguard. A quick call to the insurer and, after filling in a form and sending photographic evidence (naturally), I was recommended to send my lens and camera for repair to Fixation in London.

To say I was impressed with my insurer would be putting it mildly. Not only did Fixation turn around my lens within a few days – Photoguard sorted out all payment and even paid for having my camera body checked and cleaned just in case it too was damaged!

Now that’s service.

I’m not a big one for adverts but if you need insurance by all means click on the affiliate link or pick up the phone and give Photoguard a call.

Suffice to say I renewed my insurance!

From viewfinder to wall

From the moment you press the shutter a picture takes on a life which goes from your camera to your darkroom (be it digital or chemical), where after being burnt and dodged (and in some cases bodged) it goes for printing. You do print your pictures, don’t you? I have a hunch that with the advent of digital photography the vast majority of us leave our digital treasures gathering dust on our computer hard drives – I know I certainly do.

With the advent of photo management software, organising this digital soup has become a shedload easier and thanks to the power of programs like Lightroom and Aperture, the digital darkroom has finally come of age (it actually came of age a year or two ago, but I digress…). We are now able to push and tweak our photos so that they go from this:

to this:

The former, whilst nice, hardly represents what I saw and felt when I took the picture – the latter however does, and certainly has the pizazz I need to justify printing the picture and hanging it on my wall. Which neatly segues to my topic – printing. Rather than go through the pain (and it is a pain) of keeping and maintaining a desk-hogging, top notch inkjet that requires the expense of cartridges and special photo paper, I prefer to let someone else deal with all that hassle.

Quite simply – taking pictures is what I like – picture printing – meh, I’ll let someone else do that for me.

Unfortunately being an anally-retentive perfectionist I needed someone truly high calibre to handle my printing (not that my pictures are that high calibre to be honest, but as the saying goes ‘if something’s worth doing, it’s worth doing right’). The big internet printers like Photobox I found to be good on price but not so good on quality so I was rather pleased when I discovered that one of the better printers, theprintspace can be found right here in London, and specialize in a particular type of printing called C-Type (or Type-C), which involves projecting the digital image onto light sensitive photographic paper rather than printing ink on paper. This apparently results in a higher quality, longer lasting print.

I chose to print the above macro image of a daisy as well as the following three pictures:

Flower Anther Macro

Erin Black and White


Leon Black and White


How did they turn out? Good – pretty damn good. (One thing I discovered, and no doubt my lack of experience is showing: images need to be sharpened more for printing than they do for on-screen viewing – and not just a little but a fair amount.) Not only do theprintspace offer printing, they provide color calibrated workstations for you to prepare your print so that you can be sure that what you see on screen is what you get. And if that wasn’t enough they can mount the print onto card, MDF, plastic, aluminium and a bunch of other materials, with prices which aren’t too unreasonable. Now I’m not going to be sending hundreds of holiday snaps through them, sure, but for the pictures I’ve taken which I feel are worth hanging up – only theprintspace will do.



D200 it was good knowing you!

Back in 2003 I was fortunate enough to have a Canon 300D at my disposal. Besides my first real camera, a Nikon F-401 (a generous present which I was only able to use sporadically due to the cost of film and processing), the new Canon 300D was a mind-blowing experience for me. Not only did it offer a price far, far below any other SLR out at the time, it also offered 6 megapixels, great image quality and a chunky plastic body, making it comfortable to use for those who fumble with the tiny buttons compact cameras bristle with.

The 300D, whilst good, maybe even great, was never a camera one could love (and loving the aged F-401 was outta the question!). It wasn’t in fact until I received a Nikon D200 as a very generous wedding present that I fell hard for photography. Coupled with the supremely versatile 18-200mm lens this thing ate through the scenery as fast as I could press the shutter. Gone was the agonising startup delay of the 300D, gone was the plasticky feel of a camera you knew would shatter like a glass bowl full of pinball parts – the Nikon D200 was a brick and it could really take pictures, unless of course you were in poor light. Ah yes, the D200’s Achilles heel: as the light level fell, the D200 would compensate by raising the ISO (sensor sensitivity) and hence the noise.

If I lived somewhere other than Britain (where the sun is harder to find than rocking horse poop) poor light would not have been a problem. But given that on any slightly cloudy day I would see the ISO creeping up to 800 and beyond (and noise permeating my images along with it) I’d clutch my sturdy D200, nod sympathetically at its limitations and soldier on.

Love, mon ami, is what kept me and my D200 together through all that, and I am reluctant ever to sell it. But may she rest in peace, for I have been seduced (and how!) by the delights of Nikon’s latest baby, the D700.

Offering all the qualities of a D200 – superb build quality, the always excellent Nikon ergonomics and a plethora of features – the D700 also offers the one thing I had been longing, nay aching, for: low noise. And if that weren’t enough – a 35mm, ‘full-frame’ sensor. I’d found heaven, and it took far too many pay cheques (and a birthday gift of Bank of England vouchers) to purchase it.

With time I might just find that elusive artist’s eye in my camera bag, but until then with a D700 also in there – I can gladly say that my camera gear does not limit the image inside my head.

Goodbye D200, Hello D700!
Goodbye D200, Hello D700!

Hello world!

After months of procrastinating (and after waving a tearful goodbye to my old site), I have decided upon WordPress for my new photo journal. Combining my irreverent wit and technical prowess I shall be writing about camera tech and hopefully pausing to take a picture or two. No doubt I’ll spend more time fetishizing camera gear than actually showing what it’s capable of, but putting this blog up I feel is accomplishment enough that I can allow myself some bad habits ;)