Getting Things Done (and getting your data out of Cultured Code’s Things)

I’m a big fan of David Allen’s book Getting Things Done. When it comes to effective organization you could probably read this book and be done – it really is that good.

One issue I have with the book though is that it’s very paper centric. As a Yard-O-Led fountain pen and Rhodia notebook loving scribbler I have to say I can dig that. I really can.

But the thing that irks me with a paper based workflow is that eventually, at some point, I know I’m going to have to type my paper notes up!

To that end I need some good software to organize my TODO lists.

For years I’ve turned to Cultured Code’s Things. It’s a beautifully designed application but has not been without its problems. In the early days of productivity software Things was relatively alone in the marketplace. Today things are very different with Asana, Todoist, Wunderlist, TaskPaper and a whole myriad of other entrants crowding up the productivity suite market.

Things hasn’t kept up and it’s high time I switched. Unfortunately Cultured Code hasn’t seen fit to put any sort of decent export functionality into Things. Thankfully however they do offer AppleScript functionality so an evening of hacking led me to produce a script which pulls the data out of their database and sticks it into a nice CSV.

From there you can copy and paste it into the task app of your choice or alternatively import it using your own script-fu.

Benchmarking System Performance

A little knowledge is a dangerous thing. Or so the saying goes. When specifying and buying computer hardware, it saves time and money to know the level of performance you get from your existing equipment and the performance you can expect from a new purchase.

There are numerous metrics to measure but in order to obtain meaningful results (relatively) quickly I personally focus on CPU, memory and file and network I/O.

The key tools I use to measure performance are:

  • dd – file/network I/O
  • SysBench – CPU, Memory and file/network I/O
  • iperf – network I/O
  • IOzone – file/network I/O


dd is a simple command which copies data from an input to an output. By pointing that input and output at various devices and files we can measure their read and write performance.

To measure write performance:

dd if=/dev/zero of=tmp.bin bs=2048k count=5k && sync

To measure read performance:

dd if=tmp.bin of=/dev/null bs=2048k count=5k && sync

Since the block size is 2048k (2MB), your output file tmp.bin will be double your count figure: for example, to test a file size of 10GB, specify a count value of 5k.

Aim to test a file size of at least 2x your system memory. Otherwise much of the test will be served from the cache and you’ll be measuring memory rather than disk.
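If you’d rather not work that out by hand, here’s a minimal sketch (assuming Linux and the 2048k block size used above) which sizes the test file at 2x the installed RAM:

# Total RAM in GB as reported by free
RAM_GB=$(free -g | awk '/^Mem:/ {print $2}')
# Each 2048k block is 2MB, so a count of "<RAM_GB>k" blocks writes a file of 2x RAM
COUNT="${RAM_GB}k"
dd if=/dev/zero of=tmp.bin bs=2048k count=$COUNT && sync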


Example output:

10737418240 bytes transferred in 80.956609 secs (132631769 bytes/sec)

Here we’re observing bandwidth of 132631769 bytes/sec or 132MB/s.

Script It!

The script below takes two arguments: the destination path and the size in GB of the test file.


#!/bin/sh

# Default size in GB (adjust to roughly 2x your system memory)
SIZE=10

if [ "$1" = "" ]; then
 echo "Destination path missing"
 exit 1
fi

DEST=$1

if [ "$2" != "" ]; then
 SIZE=$2
fi

# Block size is 2048k (2MB), so halve the size in GB to get the count in thousands of blocks
COUNT=$(($SIZE / 2))k

echo "Starting Write Test"
dd if=/dev/zero of="$DEST/tmp.bin" bs=2048k count=$COUNT && sync
echo "Completed Write Test"
echo ""
echo "Starting Read Test"
dd if="$DEST/tmp.bin" of=/dev/null bs=2048k count=$COUNT && sync
rm "$DEST/tmp.bin"
echo "Removed test file"
echo "Completed Read Test"


SysBench is a benchmarking application which covers a range of tests measuring CPU, memory, file I/O and MySQL performance.

It can be used with very little setup and allows you to quickly get an idea of overall system performance.



CPU

Execute:

sysbench --test=cpu run

By default the test runs in a single thread. Specify --num-threads=X for multiprocessor systems, where X is the number of CPU cores.
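For example, on a quad-core machine the following should keep all cores busy:

sysbench --test=cpu --num-threads=4 run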


sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing CPU performance benchmark

Threads started!

Maximum prime number checked in CPU test: 10000

Test execution summary:
total time: 10.4933s
total number of events: 10000
total time taken by event execution: 10.4909
per-request statistics:
min: 0.99ms
avg: 1.05ms
max: 2.17ms
approx. 95 percentile: 1.27ms

Threads fairness:
events (avg/stddev): 10000.0000/0.00
execution time (avg/stddev): 10.4909/0.00

The key figure to look out for is total time: 10.4933s.


Memory

Execute (read):

sysbench --test=memory --memory-oper=read run

Execute (write):

sysbench --test=memory --memory-oper=write run


sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing memory operations speed test
Memory block size: 1K

Memory transfer size: 102400M

Memory operations type: write
Memory scope type: global
Threads started!

Operations performed: 104857600 (2187817.58 ops/sec)

102400.00 MB transferred (2136.54 MB/sec)

Test execution summary:
 total time: 47.9279s
 total number of events: 104857600
 total time taken by event execution: 40.6687
 per-request statistics:
 min: 0.00ms
 avg: 0.00ms
 max: 4.36ms
 approx. 95 percentile: 0.00ms

Threads fairness:
 events (avg/stddev): 104857600.0000/0.00
 execution time (avg/stddev): 40.6687/0.00

The key figures to look out for are the transfer rates: the MB/sec and ops/sec values.
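The default 1K block size stresses per-operation overhead as much as raw bandwidth. As a rough sketch (the parameter values here are purely illustrative), a larger block size and more threads give a figure closer to peak memory bandwidth:

sysbench --test=memory --memory-block-size=1M --memory-total-size=10G --num-threads=4 run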

File I/O

Measuring storage performance is a tricky beast. There are many variables at play, from the bandwidth of the interconnect (SATA 3Gb or 6Gb, Ethernet 10Gb or 1Gb etc.) to the amount of memory in the system, which affects how much of the benchmark hits memory instead of disk. On top of that you need to be aware of the type of data you’ll be pushing: does it involve a lot of small, random I/O or larger files with a lot of sequential I/O?

For example a database or virtual machine disk store will have a small block size with a lot of random I/O. Large ISOs or media files will have larger block sizes with a lot of sequential I/O. How you specify your storage server will drastically affect its performance in these cases, particularly with random I/O which is the most demanding case.

If a storage system can handle random I/O well it can certainly handle sequential I/O too, which is why a lot of storage reviews tend to focus on random performance. It also takes significantly less exotic (and expensive) hardware to engineer a storage system that performs well for lots of sequential I/O, so bear this in mind when determining your storage needs. You probably won’t need SSD-backed read/write caches or high-RPM drives if you’ll be serving media.


When using SysBench’s fileio benchmark you will first need to create a set of test files to work on.


sysbench --test=fileio --file-total-size=4G prepare

It is recommended that the size set using --file-total-size is at least 2x larger than the available memory to ensure that file caching does not influence the workload too much.



Execute:

sysbench --test=fileio --file-total-size=4G --file-test-mode=rndrw --max-time=240 --max-requests=0 --file-block-size=4K --num-threads=4 --file-fsync-all run

The I/O operations to use can be specified using --file-test-mode, which takes the values seqwr (sequential write), seqrewr (sequential rewrite), seqrd (sequential read), rndrd (random read), rndwr (random write) and rndrw (random read/write).

Generally, the higher you set --num-threads the higher your result. Beyond a certain point, however, performance will start to level off; this tends to happen at a thread count of around 2x the number of CPUs on the test system.

If testing random I/O, a file block size of 4K is suggested using --file-block-size. For sequential I/O use 1M.

Setting the option --file-fsync-all only affects the rndwr and rndrw tests. It forces a flush to disk after every write before moving on to the next one. You would want to do this to emulate very demanding cases such as VMware and NFS stores which force sync on write; performance is drastically degraded with this option. By default sysbench flushes writes to disk after every 100 writes.

By default sysbench fileio executes 10000 requests. In order to produce comparable benchmarks over a fixed period of time we set --max-requests to 0, which means unlimited.

We then set --max-time to a sensible value based upon the file-total-size value to ensure the test doesn’t run indefinitely. 240 seconds is a good value for sizes of 4G; for larger sizes such as 60G, 720 seconds works well.


sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Extra file open flags: 0
128 files, 32Mb each
4Gb total file size
Block size 16Kb
Number of random requests for random IO: 10000
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!

Operations performed: 6000 Read, 4000 Write, 12800 Other = 22800 Total
Read 93.75Mb Written 62.5Mb Total transferred 156.25Mb (40.973Mb/sec)
 2622.29 Requests/sec executed

Test execution summary:
 total time: 3.8135s
 total number of events: 10000
 total time taken by event execution: 0.3151
 per-request statistics:
 min: 0.00ms
 avg: 0.03ms
 max: 5.88ms
 approx. 95 percentile: 0.02ms

Threads fairness:
 events (avg/stddev): 10000.0000/0.00
 execution time (avg/stddev): 0.3151/0.00

The key figures to look at are the transfer rate (MB/sec) and Requests/sec, which essentially equates to your IOPS figure.

Note that a bug in the fileio output prints the bit abbreviation (Mb) even though the numbers are byte values (MB).
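For a sequential throughput figure the same pattern applies with the 1M block size suggested earlier. Here’s a sketch of a sequential read run (swap in seqwr for writes):

sysbench --test=fileio --file-total-size=4G --file-test-mode=seqrd --max-time=240 --max-requests=0 --file-block-size=1M --num-threads=1 run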



sysbench --test=fileio --file-total-size=4G cleanup

To clean up, simply run the above command and the various temp files used by the fileio test will be removed.

Script It!

Here’s a little script I use to quickly test File I/O performance using sysbench. Simply call it from the folder on the storage device or network share you want to benchmark:


#!/bin/sh

# Set to 2x RAM (example value)
FILE_TOTAL_SIZE=4G

# Set to long enough to complete several runs (example value)
MAX_TIME=240

# For random IO set to 4K, otherwise set to 1M for sequential
FILE_BLOCK_SIZE=4K

logdate=$(date +%F)

echo "Preparing test"
sysbench --test=fileio --file-total-size=$FILE_TOTAL_SIZE prepare

echo "Running tests"
for run in 1 2 3; do
 for each in 1 4 8 16 32 64; do
 echo "############## Running Test - Write - Thread Number:" $each "- Run:" $run "##############"
 sysbench --test=fileio --file-total-size=$FILE_TOTAL_SIZE --file-test-mode=rndwr --max-time=$MAX_TIME --max-requests=0 --file-block-size=$FILE_BLOCK_SIZE --num-threads=$each --file-fsync-all run > log-$logdate-write-${each}T-${run}R.log
 echo "############## Running Test - Read - Thread Number:" $each "- Run:" $run "##############"
 sysbench --test=fileio --file-total-size=$FILE_TOTAL_SIZE --file-test-mode=rndrd --max-time=$MAX_TIME --max-requests=0 --file-block-size=$FILE_BLOCK_SIZE --num-threads=$each run > log-$logdate-read-${each}T-${run}R.log
 done
done

echo "Cleaning up"
sysbench --test=fileio --file-total-size=$FILE_TOTAL_SIZE cleanup


IOzone is an incredibly comprehensive file I/O measurement application. It provides in-depth analysis of filesystem performance, measured across three axes: file size, transfer size and performance.

It also lets you easily produce pretty graphs like this, which show the performance effect of the CPU cache, memory cache and raw disk speed:

IOzone read performance report

With iozone there are two scenarios I typically measure:

  • Direct Attached Storage (DAS)
  • Network Attached Storage (NAS)

To explain the commands below, there are a few variables to set in both scenarios. Firstly I set -g (the maximum file size) to 2x the RAM of the file server being measured. It takes a lot longer to test, especially with large amounts of memory, but the results are much more useful since they give a nice 3D surface chart showing the sustained speeds you can expect for a given file size as it hits CPU cache, memory cache, SSD cache and finally spinning disks.

The -b argument produces a binary-compatible spreadsheet which can be opened in Excel to produce 3D surface charts like the one below. You can see the measured performance decrease as the file size exhausts the CPU cache (top strata, at 7 GB/s), then the buffer cache (the next strata down), before finally hitting spinning disks in the pale blue section at the bottom (450 MB/s). That last figure is our sustained speed under load.

Where the chart flatlines the result is unmeasured. Be sure to set the -z option to avoid that!

IOzone Writer Report (RAID 10 FreeNAS system 64G record size)


Direct Attached Storage


iozone -Raz -g 4G -f /mnt/ZFS_VOL/ZFS_DATASET/testfile -b iozone-MY_FILE_SERVER-local-size-4g.xls

Network Attached Storage

I use NFS for most of my server file stores. As a result these commands are NFS focused but should work on non-NFS storage as well.


iozone -Razc -g 4G -U /mnt/MY_FILE_SERVER -f /mnt/MY_FILE_SERVER/testfile -b iozone-MY_FILE_SERVER-nfs-size-64g.xls


iozone -RazcI -g 4G -f /mnt/MY_FILE_SERVER/testfile -b iozone-MY_FILE_SERVER-nfs-size-64g.xls

For NFS testing you ideally want to use the first command: its -U option unmounts and remounts the NFS share between tests, which removes the effect of caching. This requires an fstab entry so the test can mount and unmount successfully. Unfortunately I often encounter issues with the remount failing after a few tests. If you encounter that (or can’t be bothered to create an fstab entry) use -I instead, which uses Direct I/O for all file operations, telling the filesystem to bypass the buffer cache and go directly to disk.
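For reference, the fstab entry only needs to describe the share so iozone can unmount and remount it. Something along these lines should do, with the export path here being hypothetical:

MY_FILE_SERVER:/export/data  /mnt/MY_FILE_SERVER  nfs  rw,hard,intr  0  0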

With your XLS file in hand, open it in Excel and check out your performance. All figures are in kilobytes.

Producing a graph is pretty simple: select the table, go to Insert and choose a 3D surface chart.

iozone graphing in excel


Securing a multi-user Apache Web Server

As part of refining my Apache web server, which runs multiple sites, I’ve created a user account, database account and home folder per site. So, for example, a site has a user account example, a database account example and a web folder located at:

/home/example/public_html
The corresponding Apache VirtualHost for this site is:

<VirtualHost *:80>
        ErrorLog /var/log/apache2/
        LogLevel warn
        CustomLog /var/log/apache2/ combined
        DocumentRoot /home/example/public_html
        <IfModule mod_suexec.c>
                SuexecUserGroup example example
        </IfModule>
</VirtualHost>

Previously, to ensure PHP scripts worked, I had a Bash cron job which looped over all the users’ public_html folders and set their owner to the Apache user www-data.

Not ideal.

So after a few hours of digging I managed to deploy a solution that is both secure and flexible, allowing users to log on and edit their web pages without permissioning headaches.

Assuming a basic Apache setup first install the Apache suPHP and suEXEC modules:

sudo apt-get install libapache2-mod-suphp apache2-suexec

Enable the modules:

sudo a2enmod suexec
sudo a2enmod suphp

The suPHP module replaces the Apache PHP4 and PHP5 modules. Having both active prevents suPHP from working properly so you’ll need to disable the PHP4 and PHP5 modules:

sudo a2dismod php4
sudo a2dismod php5

Finally you’ll want to set the permissions on the user folder:

find ~/public_html/ -type f -exec chmod 644 {} \;
find ~/public_html/ -type d -exec chmod 755 {} \;
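To apply the same treatment across every site in one go, here’s a minimal sketch (assuming each site lives at /home/<user>/public_html and should be owned by that user):

#!/bin/sh
# Normalise ownership and permissions for each site's public_html
for dir in /home/*/public_html; do
 user=$(basename "$(dirname "$dir")")
 chown -R "$user":"$user" "$dir"
 find "$dir" -type f -exec chmod 644 {} \;
 find "$dir" -type d -exec chmod 755 {} \;
done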

To make this setup even better I’d ideally like to set those permissions to 600 and 700 respectively, but that’s a job for tomorrow.


Awesome link which covers much of the above and then some.


Spreading your bets on RAID

In the early days of our startup, bubblegum and duct tape seemed to be the order of the day as we struggled to keep things running on cheap-as-chips computers bought off eBay and a ragtag bunch of borrowed Dell Optiplexes.

Developer files sat on their individual machines, source code was scattered across the place and the concept of centralised document storage was a share on one of the developer machines called Common in which everyone dumped their stuff.

A year into this rapidly escalating mess I took matters into my own hands and pestered the boss for a £1500 budget to build a file server. A Supermicro SC-743 Cool & Quiet Case coupled with a top notch Xeon board, 8GB of RAM, Intel Quad Core CPU and a top of the line 3ware 9690SA RAID card (with battery backup no less!) meant we were about to take our file server (the aforementioned developer’s machine) from a mewling kitten to a roaring tiger.

The whole thing was assembled beautifully and worked a treat, with a RAID 1 mirror for the Debian installation and 8x Seagate 7200.11 hard drives for the RAID 10 storage array.

In building this machine I made one and only one mistake. All of the drives were the same make and model and doubtless all manufactured at the same time.

Fast forward 12 months and on coming into work on Monday morning I saw a mail from the 3ware monitoring manager: ‘Drive 4 dropped out of array’. Not a problem I thought, we had a monthly offsite backup in place. I hopped online and ordered a spare disk.

Later that afternoon I received another alert: ‘Drive 6 dropped out of array’.

‘Sh****t’ I (probably) exclaimed, realizing that if the second drive had dropped out of the same mirrored pair as the previous drive our array would have been toast. I quickly ordered two more drives.

Making hasty backups and crossing fingers I awaited the arrival of the new drives the following day and on their arrival stuck one in to replace the failed disk. A few hours after successfully rebuilding the array I saw another disk fail.

It was at this point that I got down on my knees and began to pray. (I’m just kidding – I did that the day before).

On a hunch I removed and reinserted the failed drive. It initialized and rebuilt fine. A few hours later one of the new drives dropped out. Over the next few days I was barely playing catch up in ensuring the RAID array didn’t fail entirely with drives dropping out 1-2 times a day and then initializing on reinsertion.

We were making daily backups by now but since this was our main file server and we were going through a pretty lean month it meant that we had zero budget to replace all the disks or get another box.

It was then that I exercised my Google-fu and hit the internet. Turned out Seagate had a bad batch of 7200.11 disks and had issued a firmware update.

The duty of taking the box offline after work and updating the firmware of all 11 drives fell on my shoulders. This ghastly process involved sticking all the disks, one at a time into a desktop and running the firmware update on each one.

Since then the array has run like a champ. We kept it with the original 8 disks and 3 hot spares for good measure…it’s been 7 years and nary a complaint from 3ware’s management tool.

Fast forward to 2013 and our latest storage purchase was a lovely Synology 10 disk NAS. Quick and (very) quiet it came populated by the manufacturer with 10 2TB Seagate disks (Enterprise models no less!). We loaded it up with our data and enjoyed the feel of the new shiny, flashing its pretty lights at us from the equipment rack.

Fast forward 12 months and you guessed it, a drive dropout. Then another, and another, followed by another. Over the course of 6 months we must have replaced more than half of those damned Seagate drives.

Moral of the story? Don’t buy Seagate.

Heheh, just kidding (maybe)…moral of the story is not to buy the same brand and batch of hard disks when speccing your storage array. Since those early days of scraping by we now build some pretty powerful RAID arrays for our customers and we always try and use a 50/50 mix of different brands and batches.

(We also make a lot of backups!)

Device Icons

I’m a great believer in having strong visual cues in user interfaces to help a user orient themselves. To this end I think manufacturers of devices like Kingston, LaCie, Sandisk, etc. should step up to the plate more and offer the user quality icons for their devices.

LaCie are actually fairly good at this, although some of their icons leave a little to be desired. Sandisk and Kingston AFAIK don’t provide any icons for their devices which is a great pity.

The benefit of these icons is that a user interface can go from this:


To this:


Now isn’t that much better?


More for my benefit than yours, but I’ve attached/linked to the icons I use here:

Attributed wherever possible to the original author of the icon.

LaCie Little Big Disk Icon

LaCie 2big Icon

Kingston DTSE9 Icon

Sandisk Titanium Icon (Author: iiroku)

Openfire Single Sign On (SSO)

I’m a dabbler, I like to dabble.

While most people are happily using Google Talk, Facebook chat, Skype and the like I’m busy playing around with my own chat server, writing plugins for it and seeing if I can get things like Single Sign On (SSO), DNS Service Records and Federation working. It’s time consuming, frustrating at times but ultimately rewarding. One particularly frustrating problem I recently tackled was single sign on with Openfire (a Jabber/XMPP messaging server).

My basic setup likely mirrors most enterprise-y networks:

  • Windows Active Directory Domain Controller with Windows Support Tools installed
  • Openfire 3.8 bound to the Windows DC
  • Windows XP/Windows Terminal Server Clients running Pandion/Pidgin
  • Mac OS X Clients Running Adium

The first step is to ensure that you have a working Windows AD network alongside a working Openfire installation.

  • AD Domain: EXAMPLE.COM
  • Openfire (XMPP) Domain: EXAMPLE.COM
  • Keytab account: xmpp-openfire

Ensure you have an A record and a reverse DNS record for your Openfire server, then set up your DNS service records for Openfire like so:

86400 IN SRV 0 0 5222
86400 IN SRV 0 0 5269
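Written out in full the records look like the following, where chat.example.com stands in for whatever hostname your Openfire server actually uses:

_xmpp-client._tcp.example.com. 86400 IN SRV 0 0 5222 chat.example.com.
_xmpp-server._tcp.example.com. 86400 IN SRV 0 0 5269 chat.example.com.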

With DNS done create two new Active Directory accounts. Account one is for binding the Openfire server to the domain (skip this account if you’ve already bound Openfire to your domain).

Account two is to associate your Service Principal Name (SPN) so Kerberos clients can find and authenticate using SSO with your Openfire server.

On account two check under Account properties that User cannot change password, Password never expires and Do not require Kerberos preauthentication are checked.

On the Windows Domain Controller you’ll now need to create the SPN and keytab. The SPN (Service Principal Name) is used by clients to lookup the name of the Openfire server for SSO. The keytab contains pairs of Service Principals and encrypted keys which allows a service to automatically authenticate against the Domain Controller without being prompted for a password.

Creating the SPN:

I created two records, since it seems some clients look up the SPN using the server’s short hostname and some use the fully qualified domain name:

setspn -A xmpp/ xmpp-openfire
setspn -A xmpp/ xmpp-openfire

Map the SPN to the keytab account xmpp-openfire; when prompted, enter the xmpp-openfire password:

ktpass -princ xmpp/ -mapuser xmpp-openfire@EXAMPLE.COM -pass * -ptype KRB5_NT_PRINCIPAL

Create the keytab:

I found that the Java-generated keytab didn’t work on my Openfire system, so I used the Windows ktpass utility to create it instead. Some users report the converse, so use whichever works for you:

Java keytab generation:

ktab -k xmpp.keytab -a xmpp/

Windows keytab generation:

ktpass -princ xmpp/ -mapuser xmpp-openfire@EXAMPLE.COM -pass * -ptype KRB5_NT_PRINCIPAL -out xmpp.keytab

Copy the keytab into the resources folder of your Openfire installation, typically under /usr/share/openfire or /opt/openfire. The full path will look like this:

/usr/share/openfire/resources/xmpp.keytab

Configuring Linux for Active Directory

Configure Kerberos

First we need to install ntp, kerberos and samba:

apt-get install ntp krb5-config krb5-user krb5-doc winbind samba

Enter your workgroup name:


Configure /etc/krb5.conf

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = true
 dns_lookup_kdc = true
 ticket_lifetime = 24h
 forwardable = yes

[appdefaults]
 pam = {
  debug = false
  ticket_lifetime = 36000
  renew_lifetime = 36000
  forwardable = true
  krb4_convert = false
 }

Test connection to Active Directory by entering the following commands:

:~# kinit xmpp-openfire@EXAMPLE.COM

Check that the request for an Active Directory ticket was successful using the klist command:

:~# klist

The result of this command should be something like this:

Ticket cache: FILE:/tmp/krb5cc_0
Default principal: xmpp-openfire@EXAMPLE.COM

Valid starting Expires Service principal
07/11/13 21:41:31 07/12/13 07:41:31 krbtgt/EXAMPLE.COM@EXAMPLE.COM
renew until 07/12/14 21:41:31

Join the domain

Configure your smb.conf like so:

[global]
   workgroup = EXAMPLE
   realm = EXAMPLE.COM
   preferred master = no
   server string = Linux Test Machine
   security = ADS
   encrypt passwords = yes
   log level = 3
   log file = /var/log/samba/%m
   max log size = 50
   printcap name = cups
   printing = cups
   winbind enum users = Yes
   winbind enum groups = Yes
   winbind use default domain = Yes
   winbind nested groups = Yes
   winbind separator = +
   idmap uid = 600-20000
   idmap gid = 600-20000
   ;template primary group = "Domain Users"
   template shell = /bin/bash

[homes]
   comment = Home Directories
   valid users = %S
   read only = No
   browseable = No

[printers]
   comment = All Printers
   path = /var/spool/cups
   browseable = no
   printable = yes
   guest ok = yes

Join the domain:

:~# net ads join -U administrator

You will be asked to enter the AD Administrator password.

Verify you can list the users and groups on the domain:

:~# wbinfo -u
:~# wbinfo -g

Testing the keytab works:

From your Openfire system run the below command:

  kinit -k -t /usr/share/openfire/resources/xmpp.keytab xmpp/ -V

You should see:

Authenticated to Kerberos v5

Then create a GSSAPI configuration file called gss.conf in your Openfire configuration folder, normally /etc/openfire or /opt/openfire/conf. Ensure you set the path to your xmpp.keytab file within it.
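A typical gss.conf is a JAAS login configuration for the Krb5LoginModule, along the lines of the sketch below; substitute your own keytab path, realm and service principal (openfire.example.com is just a placeholder):

com.sun.security.jgss.accept {
    com.sun.security.auth.module.Krb5LoginModule required
    storeKey=true
    useKeyTab=true
    doNotPrompt=true
    keyTab="/usr/share/openfire/resources/xmpp.keytab"
    realm="EXAMPLE.COM"
    principal="xmpp/openfire.example.com@EXAMPLE.COM"
    debug=true
    isInitiator=false;
};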

Ensure the file is owned by the openfire user.

Stop Openfire and enable GSSAPI by editing your openfire.xml configuration file which is found in the openfire conf directory:

<!-- sasl configuration -->
<sasl>
    <mechs>GSSAPI</mechs>
    <!-- Set this to your Kerberos realm name which is usually your AD domain name in all caps. -->
    <realm>EXAMPLE.COM</realm>
    <gssapi>
        <!-- You can set this to false once you have everything working. -->
        <debug>true</debug>
        <!-- Set this to the location of your gss.conf file created earlier -->
        <!-- "/" is used in the path here not "\" even though this is on Windows. -->
        <config>/etc/openfire/gss.conf</config>
        <useSubjectCredsOnly>false</useSubjectCredsOnly>
    </gssapi>
</sasl>

Or add to System Properties:

sasl.gssapi.config /etc/openfire/gss.conf
sasl.gssapi.debug false
sasl.gssapi.useSubjectCredsOnly false
sasl.mechs GSSAPI
sasl.realm EXAMPLE.COM

Restart Openfire
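On a Debian-style install that’s typically:

sudo /etc/init.d/openfire restart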

Buying Hi-Def music today is a crapshoot

The loudness war has been going on for some time, with musicians, producers and record companies over the past few decades mastering and releasing their records with ever increasing volume and compression. In the days of vinyl there was a physical limit to how loud you could press a record before the needle would be unable to play it; the advent of Compact Discs, however, changed that. Whilst they boasted a greater dynamic range than vinyl, they also defined a maximum peak amplitude. Through some science and a bunch of signal processing, record engineers could thus push the overall volume of a track so that it became louder throughout, often hitting that peak and compressing the dynamic range of the record. The long and short of this is that nearly all modern records tend to have dynamic range compression applied, and the result is a loss of sound quality in the form of distortion and clipping.

Why do record companies do this? A popular perception (misconception?) is that the louder a record sounds – the better it sounds – and hence the more likely someone hearing it in the record store or over the radio is to buy it.

Note the mediums over which most people traditionally hear new music – inside record stores, over the radio, in the coffee shop, on their phones, tablets and notebooks – none of these mediums are known for high fidelity listening and their poor quality speakers tend to mask the compression in the music. As a result loud sells.

So if that new track by The Killers sounds good to you playing on the cheap speakers at your local coffee shop just wait until you hear the Muse single coming up  – it’s probably louder and in a noisy coffee shop will sound better.

The problem arises when you listen to that record on your nice, shiny headphones or your stereo at home – in a quiet environment, with good audio equipment those distorted, normalised tracks are going to sound noisy, fatiguing and to be perfectly blunt – a bit crap.

A backlash from consumers and high end audio equipment manufacturers was bound to happen with the demand for high quality, well mastered records ever increasing. Companies like HDtracks, naimlabel and LINN Records to name a few, stepped in to fill the gap. Offering not only well mastered tracks they also boast a higher resolution than CD can provide, with the quality up to 24-bits and 192kHz. One thing which needs to be stressed however is that mastering matters – in fact it matters more than how much fidelity a record has: A poorly mastered 24-bit 192kHz record is not going to sound any better than a well mastered 16-bit 44.1kHz CD. In fact if it’s very poorly mastered it will almost certainly sound worse than an MP3 rip of the CD.
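For context, each bit of resolution adds roughly 6dB of dynamic range, so 16-bit audio offers around 96dB and 24-bit around 144dB of theoretical headroom; headroom which only matters if the mastering actually makes use of it.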

Take Elton John’s self-titled album, for example. It began life in 1970 on vinyl, with no sign of clipping or compression:

Elton John – Elton John – The King Must Die – 1970 Vinyl

In 1985 it was released on CD, again with no discernible compression:

Elton John – Elton John – The King Must Die – 1985 CD

In 1995 it was re-released as a Remastered Edition on CD. You can see the track is louder but it’s just about acceptable:

Elton John – Elton John – The King Must Die – 1995 Remastered CD

In 2008 it was again re-released as a Deluxe Edition CD. As expected for a modern release, it’s been made loud and sounds compressed and fatiguing as a result:

Elton John – Elton John – The King Must Die – 2008 Deluxe Edition CD

Finally Elton John’s album appears on HDtracks in high definition 24-bit 96kHz. It should offer the best sound quality but to take advantage of the vast dynamic range of those 24-bits it will need to be mastered properly. Here we can see that this is definitely not the case. In fact it suffers from more dynamic range compression than the 2008 CD release:

Elton John – Elton John – The King Must Die – HDtracks 24-bit/96kHz

The HDtracks edition should offer the best sound quality – after all it is 24-bit 96kHz and comes from a store which aims to provide high end audio tracks. Sadly it suffers from bad mastering. The result is a lot of clipping and excessive loudness and consequently it sounds worse than the older, less compressed editions.

So what can we surmise from this? Put simply that despite the much touted quality of 24-bit music there’s no certainty that the HD version of the album you’re buying is also mastered properly and free from excessive normalisation and distortion. For those looking to upgrade their album collection it’s clear that there’s no guarantee your new, 24-bit purchases will sound better. This is a great pity since technically the new high definition audio formats offer higher quality than has ever been possible – if only the studios, record producers and artists would oblige. Until then for those seeking quality HD audio tracks it’s a crapshoot.


The background to our lives

John F. Kennedy, Robert F. Kennedy, John Lennon, Martin Luther King Jr., Abraham Lincoln and that legendary queen of Queen, Freddie Mercury: so many showmen have graced the stage of life. They have enlivened our worlds and broadened our horizons. They have pushed us from complacency and made us look at the world with new eyes. They have enriched us and inspired us, and they have left us before their time.

It was with a heavy heart that I heard of Steven P. Jobs’ passing. He was one of the greats. A man who pushed us and his peers, a man who showed us that there was a better way of doing things. Whilst his passage from this life was expected, his health visibly failing at every public appearance, his loss still came as a blow. Felt keenly around the world, it was a loss for which many of us felt wholly unprepared.

Jobs’ legacy, from the Apple II to the Mac, the iPod, the iPhone and the iPad, has been part of the background of our lives. Those few who haven’t used his products have certainly used those of his competitors – products which borrowed and benefitted from his great designs. He might not have been the sole creator but his influence was evident in the high standards of each.

My heroes in this world have been those lofty individuals who almost canonised have passed into legend – JFK, RFK, Lennon, MLK, Lincoln, Freddie Mercury. These men however are obvious choices. What surprised me about Steve Jobs’ death was not how acutely his passing was felt, but that I hadn’t realised he’d been a hero all along.

Below are a selection of family images from over the years. From a little girl’s first e-mail to cousins communicating across continents Apple have been a valuable part of our lives:

To breaking your fall on expensive gear and happy endings!

It’s every photographer’s worst nightmare (well, apart from that one where you miss the shot of a lifetime): dropping your gear. Ice and cameras don’t mix very well, but in pursuit of that ever elusive perfect shot we push ourselves to extraordinary lengths, into harm’s way if needs be, to satisfy our craft.

In my case, I pushed myself to get up at the crack of well, midday, out into the frosty January afternoon to get some pictures of the snow.

Despite the grip a good pair of hiking boots is supposed to offer, I could get little purchase on the ice. The pavement was covered in a thick layer and it glistened treacherously. It was no wonder the street was deserted.

I moved as carefully as possible, my cat-like reflexes saving me from certain embarrassment. I took as many photos as I could before my fingers began to tingle from the cold seeping its way through my gloves, and managed to make it back to my doorstep in one piece. ‘At last,’ I thought, ‘home free.’ Such hubris.

I of course fell.

My camera landed lens first with a dull crack, shards of my lovely B+W MRC filter flying everywhere. I cared not a jot about a potential wrist injury as I struggled back to my feet to survey the damage. My beloved 24-70mm f/2.8 lens had suffered I knew not what kind of damage, but dragging my crestfallen self back inside I surveyed the devastation. The filter had shattered into a myriad of multi-coated shards, its deformed body fusing itself to the lens thread and making removal impossible.

The lens itself appeared to be relatively undamaged. The front element had a few tiny nicks – the filter thread on the lens however was completely gone on one corner.

Thankfully my camera was without a scratch – the lens took the brunt and surprisingly functioned still. I took a few pictures and despite the broken shards of glass only a small amount of ghosting was visible.

Despite the relatively good prognosis I still felt pretty beaten up, but then recalled that I had camera insurance with Photoguard. A quick call to the insurer, and after filling in a form and sending photographic evidence (naturally), I was recommended to send my lens and camera for repair to Fixation in London.

To say I was impressed with my insurer would be putting it mildly. Not only did Fixation turn around my lens within a few days – Photoguard sorted out all payment and even paid for having my camera body checked and cleaned just in case it too was damaged!

Now that’s service.

I’m not a big one for adverts but if you need insurance by all means click on the affiliate link or pick up the phone and give Photoguard a call.

Suffice to say I renewed my insurance!

From viewfinder to wall

From the moment you press the shutter a picture takes on a life which goes from your camera, to your darkroom (be it digital or chemical) where after being burnt and dodged (and in some cases bodged) it goes for printing. You do print your pictures don’t you? I have a hunch that with the advent of digital photography the vast majority of us leave our digital treasures gathering dust on our computer hard drives – I know I certainly do.

With the advent of photo management software, organising this digital soup has become a shedload easier and thanks to the power of programs like Lightroom and Aperture, the digital darkroom has finally come of age (it actually came of age a year or two ago, but I digress…). We are now able to push and tweak our photos so that they go from this:

to this:

The former, whilst nice hardly represents what I saw and felt when I took the picture – the latter however does, and certainly has the pizazz I need to justify printing the picture and hanging it on my wall. Which neatly segues to my topic – printing. Rather than go through the pain (and it is a pain) of keeping and maintaining a desk hogging, top notch inkjet, that requires the expense of cartridges and special photo-paper I prefer to let someone else deal with all that hassle.

Quite simply – taking pictures is what I like – picture printing – meh, I’ll let someone else do that for me.

Unfortunately, being an anally-retentive perfectionist, I needed someone truly high calibre to handle my printing (not that my pictures are that high calibre to be honest, but as the saying goes ‘if something’s worth doing, it’s worth doing right’). The big internet printers like Photobox I found to be good on price but not so good on quality, so I was rather pleased to discover that one of the better printers, theprintspace, can be found right here in London. They specialise in a particular type of printing called C-Type (or Type-C), which involves projecting the digital image onto light-sensitive photographic paper rather than printing ink on paper. This apparently results in a higher quality, longer lasting print.

I chose to print the above macro image of a daisy as well as the following three pictures:

Flower Anther Macro

Erin Black and White


Leon Black and White


How did they turn out? Good – pretty damn good. (One thing I discovered, and no doubt my lack of experience is showing: images need to be sharpened more for printing than for on-screen viewing – and not just a little, but a fair amount.) Not only do theprintspace offer printing, they also provide colour-calibrated workstations for you to prepare your print, so you can be sure that what you see on screen is what you get. And if that wasn’t enough, they can mount the print onto card, MDF, plastic, aluminium and a bunch of other materials, with prices which aren’t too unreasonable. Now I’m not going to be sending hundreds of holiday snaps through them, sure, but for the pictures I’ve taken which I feel are worth hanging up – only theprintspace will do.