An Unfinished Symphony

It's about the internet and stuff.

Time for something new

Once again it’s been a ridiculous amount of time since my last post, and I think that needs to change, along with a number of other changes that I hope will breathe a bit of life into this blog and kick-start my enthusiasm for writing. First up for change is the subject matter, which will be made a little broader. I’ll continue to write the occasional web development post, but rather than limiting myself to that subject I also intend to write about my interest in photography, reviews of products and services that I own or use, and a bit of personal stuff thrown in for good measure every so often.

The second change that I have planned is to the design, once again. I still like the look of this theme but it’s been here a long time and so needs to go. As there have been a fair few people in the past who have expressed an interest in using the theme themselves, along with some who didn’t ask first, I’ve decided to make it available for download via Blog Themes Club where you’ll find it listed as the Artemis theme. Blog Themes Club is operated by Kevin from Blogging Tips and Sarah from Stuff by Sarah. Themes are available after purchasing one of the selection of membership packages that are available, or by purchasing a single use license if you don’t require any support, updates or access to other themes. The service hasn’t been running for long but they’ve already got a good selection of great looking themes available so go take a look if you’re in need of a decent WordPress theme.

Change number three is more of a personal change and involves me making sure that I post to this site on a more regular basis, so keep an eye out for updates and hopefully there’ll be something of interest for everyone.


Naked Day x2

Well, CSS Naked Day has been and gone again this year – at least for me it has. For the last couple of years it’s been on April 5th, and I’ve participated each year. This year was no exception – however I may have celebrated by myself, as it seems they changed the day and forgot to tell me 😉

I have a little PHP set up to automagically strip the CSS links out of my pages whenever it’s April 5th anywhere in the world, and that worked perfectly again this year. The problem is, CSS Naked Day is now on the 9th, so I’m going to have to do it all again. So, here goes …

It’s CSS Naked Day – Go here for more information about why I have no styles on this site, and maybe consider joining in 🙂

For those of you wondering why some sites are doing it on the 8th – the script used checks to see whether it’s the 9th ‘somewhere’ in the world, and if so it then turns off the CSS. It’s automagic!
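
The “somewhere in the world” logic is simple enough to sketch. The original is PHP and I’m not reproducing the actual script here, but the idea, re-expressed in Python, is just to test every time zone offset (they span roughly UTC-12 to UTC+14) against the target date:

```python
from datetime import datetime, timedelta

def is_naked_day(now_utc, month=4, day=9):
    # It's "naked day" if any time zone offset between UTC-12 and UTC+14
    # puts the local date on the target month and day.
    for offset in range(-12, 15):
        local = now_utc + timedelta(hours=offset)
        if (local.month, local.day) == (month, day):
            return True
    return False

# While this returns True, the template would skip outputting
# the stylesheet <link> elements.
```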


A feeling of déjà vu

It’s amazing what you can find from reading through your web stats. Things like who links to you, who visits you, where they come from, how often the search engines visit – and even who is stealing your design. Yes, it seems someone likes this design enough to pirate it.

A few weeks ago I noticed an odd referrer in my stats, http://localhost/mazochova. Anyone who knows much about computers will know that localhost refers to the local computer – ie. the one that you are using is your localhost. So, someone had a web page on their computer, running on a local server, that was directly accessing something on this site. My suspicion was that this person was stealing my design, or some component of it, and was hotlinking my images in the process. This isn’t something I wanted to happen, so I blocked their IP address as a referrer. Today I found that my suspicion was right: they had put up their (part done?) version at (it has now been removed). In case you’re wondering how I knew that they had put it online, they are stupidly still hotlinking some of my images – and are even still linking through to my LastFM account, my SpreadFireFox affiliate account and my GeoURL details; they didn’t bother linking to my TextLinkAds account though.

They say that mimicry is the highest form of flattery, however I’m not particularly flattered that eVisions, a supposedly professional web design company, are intending to profit from my design. While I intend to contact them (and their hosts) and ask them to remove it, I’m not sure what else I can do about this – at some stage this is going to be passed on to someone else to use, and so I doubt they’ll care too much if I post it on (which I still intend to do). I’ll probably have to hope that they do the decent thing and remove it, but my experience of Eastern Europeans on the internet leaves a lot to be desired, and doesn’t fill me with too much confidence.

I can only hope that Anna Mazochová, the woman who is going to be ripped off by these thieves, gets to see this at some point and takes some action for herself. If anyone has any ideas on effective ways to stop these people, I’d be grateful for the heads up. In the meantime, here are a couple of screenshots:

  • The top half of one of the inner pages of the pirate site
  • Screenshot showing the stolen design linking to my LastFM account
  • Full page view of stolen design used at

Advanced spam control with mod_rewrite

I can’t remember where I first got the idea from, but for some time now I’ve been using mod_rewrite to protect against spam and hack attempts, and this has worked quite well for some time. Essentially, I have a number of rules contained in my .htaccess file which are designed to block attacks from “users” displaying common traits – with one of those common traits being the absence of a user-agent string from the request headers.

As was pointed out to me yesterday, there’s no obligation for any user-agent (UA) to send a user-agent string as a part of its request headers. I have no quarrel with that statement at all – except, on this site there is. Every month several thousand bots, spammers and hackers whose request headers lack a user-agent string attempt to directly access my comment and contact forms, or to access non-existent files with random-character file names. Given that, and the fact that in my experience it is very rare for a legitimate visitor not to include one, I decided that a user-agent string was a requirement for visiting here and used the following code to block those without:

The mod_rewrite code used to block visitors without a user-agent string
  1. RewriteEngine On
  2. RewriteCond %{HTTP_USER_AGENT} =""
  3. RewriteRule .* - [F,L]

Line 1 turns the rewrite engine on. Line 2 sets the condition to be checked for, in this case an empty user-agent string (denoted by the absence of content between the double quote marks). Line 3 says what should happen when the condition is met: the F flag states that the request should fail, returning a 403 Forbidden error, and the L flag makes this the last rule processed for the request.
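
As an aside, because the F flag produces a standard 403 response, the page that blocked visitors see can be customised with Apache’s ErrorDocument directive if you’d rather they saw something other than the default error page. A minimal sketch – the path here is just an example, not one from this site:

```apache
# Serve a custom page for 403 (Forbidden) responses
ErrorDocument 403 /errors/forbidden.html
```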

As I said above, that has worked quite well for some time and I’ve been happy with the effect it’s had on the amount of spam I’ve experienced. However, when checking my access logs on a couple of occasions recently I noticed that something had been trying to access a file relating to the Text Link Ads service; in order to check that their adverts are working properly, their server periodically checks publishers’ sites to make sure the adverts are displayed. Whilst this is a reasonable, and sensible, thing to do, it appears that their server fails to include a user-agent string in its request headers – meaning that every attempt to check my site was being rejected by the server, which isn’t so good. Consequently, either I had to stop blocking them, or they had to include a user-agent string in their headers.

As my attempts to explain the situation to their support people were met with misunderstanding, it turned out that I had to stop blocking them. This wasn’t as simple as just removing the code from my .htaccess, though, as that would only result in my being bombarded with spam and hack attempts yet again. Instead I had to check for two conditions instead of one, with the extra condition being that the visitor wasn’t them. To do that I also checked whether the visitor’s IP belonged to their server, like so:

  1. RewriteCond %{REMOTE_ADDR} !^12\.34\.567\.89$

That line of code checks to make sure that the visitor’s IP is not the one listed (nb. that is just a dummy IP address rather than their actual one). If both conditions are met (not the listed IP and no user-agent string) then the visitor gets blocked. When added to the previous code we get the following:

The amended mod_rewrite code
  1. RewriteEngine On
  2. RewriteCond %{HTTP_USER_AGENT} =""
  3. RewriteCond %{REMOTE_ADDR} !^12\.34\.567\.89$
  4. RewriteRule .* - [F,L]

While that snippet of code will allow them to access my site even though they have no user-agent string in their request headers, and while there’s no obligation for one to be included (as mentioned previously), I personally feel it would be wiser for them to fix their software so that it identifies itself when accessing remote servers. Not doing so makes it easy to confuse them with spammers and hackers, who do their best to disguise their actions and methods, and leaves them liable to be blocked by the many other site owners who might take similar measures. Hopefully the support person who responded to my queries will pass the matter on to someone who understands the issue and can do something about it.
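
For what it’s worth, consecutive RewriteCond lines are ANDed together by default, so if another well-behaved-but-anonymous service ever needs exempting it’s just a matter of adding one more negated condition. Both addresses below are dummies, as before:

```apache
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} =""
RewriteCond %{REMOTE_ADDR} !^12\.34\.567\.89$
RewriteCond %{REMOTE_ADDR} !^98\.76\.54\.32$
RewriteRule .* - [F,L]
```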



Regular visitors to the site should notice quite a radical change of design here, which I’m hoping will make it a bit more visually pleasing. When I first started the site it was, almost literally, just thrown together in a hurry and without a finalised design. This was basically because there were a number of people who wanted the blog up and running and, I think, wanted me to start posting on it. Some of you may still remember the original look of the site, which was far from complete and so a little bit broken to say the least. After some feedback I decided not to continue with that design and came up with a working temporary one instead. Even though it provided everything I wanted from it (functionality, flexibility and simplicity) it was a little uninspiring and I was never really happy with it, despite which it managed to survive for a couple of years.

Simplifying the Interface

What I’ve tried to do with the new design is keep the important concepts intact – it’s still, I believe, functional; it’s still flexible, though now within a set limit; and, importantly, it’s still simple. In fact I’ve made an effort to simplify it further. I found certain aspects of the old design slightly cluttered, notably the links sidebar. To de-clutter that link-heavy sidebar I separated the archives and external links sections into their own pages, linked via a new primary menu. I’ve also used a little PHP to hide the MyBlogLog widget from everyone but myself.

All in all I think the sidebar is cleaner and more useful while still sharing things that I want to share. Also, perhaps the most noticeable change is the dropping of the third column from the front page. My original idea for that was to allow the front page of the site to show the latest post in full whilst also showing the next 5 or 6 most recent posts, but rather than overloading the page with too much content these recent posts would just have excerpts showing on the front. The old design did achieve that fairly well, however the extra column meant that the front page would always seem a little cramped. It’s also not necessary to have a third column in order to provide direct access to that content from the front page, so it got dropped.

New Sections

I’ve added a few extra sections to the site where I can provide supplementary information, such as an accessibility statement and privacy policy, as well as an about page. At the moment the about page is a bit bare; I do intend to put something in there at some point, hence its inclusion, however I haven’t decided exactly what I want to say. I’ve also extended the subscription page to include information about my feeds, feed readers and feed aggregators, along with the original email subscription option carried over from the previous version. There’s also a contact form, just in case.

Ironing out the Issues

As with any project, there have been issues to resolve (I’m talking specifically about coding issues at this point).

PHP Issues

One of the things I wanted was for the archives page to show each category with all of its posts, to be able to reorder the categories, to show the date of each post adjacent to its link, and to structure the markup how I wanted. One of the great things about WordPress is its ease of templating, and to that end the codex details the variety of template tags available, however those for templating category listings didn’t quite meet all of my requirements. This is where Sarah came in with a bit of help – well, a lot of help – providing me with the PHP needed to pull the required information from the database in the way that I wanted. She also helped me write the PHP for my contact form, in the process helping me add fields that check for spammers, along with code to check that those fields had been completed appropriately.

The site used to use an old version of the Subscribe2 plugin, which I had held off from upgrading due to the amount of work it would have taken to get a newer version working with the existing theme. With a new theme that was no longer a consideration, so I upgraded to the latest version of Subscribe2. It was only when I uploaded, and set live, the completed theme and upgraded plugins at the weekend that a few issues came to light. Basically the plugin only partially worked on the live site: new subscribers received an email containing a link to confirm the subscription, but the link went to a blank page and the subscription was never confirmed.

After a lot of reading through the plugin code, and with Sarah running a number of tests, we found that three functions were responsible for the confirmation process. Commenting out the call to one of them allowed the confirmation to go through and the appropriate confirmation page to display – just in a very broken state, as a number of scripted components failed to work on the page. By that time it had occurred to us that the problem involved the site’s customised permalinks and the way the plugin takes over an existing page, rewriting it with the confirmation – instead of doing this successfully, the page was being redirected to the real permalinked page, causing the confirmation to fail. Once we had confirmed this, by resetting permalinks to the default method, it didn’t take much to discover the real cause: a conflict between Subscribe2 and the Permalink Redirect plugin. The issue hadn’t existed with the previous version of Subscribe2 I’d been using, so the conflict was caused by something that changed – the new version takes over an existing page from the database, rather than using its own dedicated one like the old version did. It’s probable that the new method makes the plugin easier to integrate with a theme, so it’s an acceptable change in the grand scheme of things, but unfortunate that it results in a conflict. To fix it I dropped the other plugin.

CSS Issues

While differences in browser rendering continue to exist, there will always be display issues when switching between browsers. The star browser was Firefox, getting things right first time and according to standards. Internet Explorer 7 had a few minor issues, and IE 6 a few more, however these were relatively simple to fix with a few changes to margin, width and position settings in their own respective style sheets, linked via conditional comments. Internet Explorer 5.5 looked like a bomb had hit it; that browser couldn’t handle a fair bit of the CSS, and rather than spend too much time on an obsolete browser I changed a number of things and fixed the width. I basically just wanted to make sure the site would be usable rather than trying to precisely reproduce the design.
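
For anyone unfamiliar with conditional comments: they’re an IE-only feature that other browsers treat as ordinary comments, which makes them a safe way to feed IE its own style sheets. The file names below are just examples, not the ones used here:

```html
<!-- Only IE 6 and below read this -->
<!--[if lte IE 6]>
<link rel="stylesheet" type="text/css" href="ie6.css" media="screen" />
<![endif]-->
<!-- Only IE 7 reads this -->
<!--[if IE 7]>
<link rel="stylesheet" type="text/css" href="ie7.css" media="screen" />
<![endif]-->
```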

Quite surprisingly I had a few strange problems in Opera 9.20 – surprisingly because Opera is at least as standards-compliant as Firefox, if not more so. However there were several oddities that didn’t seem to have an easy fix, or at least not one achievable simply by tweaking the CSS without those tweaks affecting other browsers. Those issues included:

  • displaying the sponsor links at the correct size until the page finished loading, then the text mysteriously shrinking. As these used the same markup and CSS as the other links in the sidebar there was no easy or logical way to fix the problem without affecting the other links.
  • text marked up as paragraphs within the content area would magically jump to the right once the page loaded, leaving a small fragment of text behind (a few characters from the last line of each paragraph, chopped off at the top). Again, there was no simple fix without it affecting the positioning in other browsers.
  • text link styles were overridden with the browser default ones, again without any logic that I could see. It’s possible that one of the settings in my browser profile caused this issue, but I’m not familiar enough with the browser to track it down – so I needed to find another fix.

It’s likely that the first two issues were caused by the combination of scripts used on the page – the sponsor links, for example, are generated by scripts. However, as it wasn’t feasible to remove those scripts anyway, I decided not to test the theory by disabling them on a live site (the issues weren’t apparent in my local testing copy). Due to the prospect of a CSS fix affecting other browsers I had to find a hack or filter for Opera, much as it pains me to use them now that the real problem browsers (Internet Explorer 6 and under) can be controlled using conditional comments. These issues needed an Opera-specific fix and, after a bit of searching on Google, I found a way using an @media declaration to target Opera:

  1. @media all and (min-width:0px) {
  2. head~body p {
  3. margin-left : 0;
  4. }
  5. #content a, #content a:link, #content a:visited, #skipper a:link, #skipper a:visited {
  6. color : #00303e !important;
  7. }
  8. #content a:hover, #content a:active, #content a:focus, #skipper a:hover, #skipper a:active, #skipper a:focus {
  9. color : #870000 !important;
  10. }
  11. #sidebar #links55315 a {
  12. font-size : 11px;
  13. }
  14. }

The combination of the “@media all” and the “and (min-width:0px)” in line 1 of the code above targets Opera fairly specifically – apparently it has also been shown to target pre-release versions of Safari. I had no way of testing this, however I felt it was safe to use the method as I was just reinforcing existing styles rather than trying to impose different ones. Line 2 very specifically targets all paragraphs, overriding any other rules that may be conflicting. Line 3 sets the left margin for all paragraphs to zero (even though this had already been done in the main style sheets), while line 4 closes the rule set opened in line 2. A quick test in Opera showed that the method worked. Lines 5 to 7 and lines 8 to 10 fix the link colour change, while lines 11 to 13 fix the resizing of the sponsor link text – using a pixel unit instead of ems was what was needed there. The final line closes the whole thing. That discovery came courtesy of the Tanrei Software blog, where there’s a useful article on using media selectors for browser targeting.

Other Issues

Rather than use a mass of extra markup and images I chose to use Alessandro Fulciniti’s NiftyCorners Cube script for the curved corners, so users without JavaScript enabled will see the site with square corners. Users with JavaScript may also notice some issues: there’s a slight lag in the page load time as the script runs through the targeted elements to set the curved corners. It’s also possible that the script contributes to the paragraph jumping in Opera, though I’ve never seen this happen on other sites where I’ve used the script.
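
For anyone curious, using the script amounts to including it and telling it which elements to round. If I remember the API correctly it looks something like the following – the selector and option string are examples rather than the ones used on this site:

```html
<script type="text/javascript" src="niftycube.js"></script>
<script type="text/javascript">
window.onload = function() {
    // Round the corners of any div with class "box", using big-radius corners
    Nifty("div.box", "big");
};
</script>
```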

I use a plugin that dynamically adds classes to various links in order to add an icon identifying the target type (such as an arrow pointing away from a box to indicate external links). One of the icons, used for links to Wikipedia, was a small version of the Wikipedia logo. It was hard to distinguish from the surrounding text, however, so it was removed.
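
The icon technique itself is plain CSS once the class is in place: pad one side of the link and drop the icon into the padded space as a background image. Assuming the plugin adds a class of external (the class name and image path here are hypothetical, not the plugin’s actual ones), something like:

```css
/* Pad the right of the link and place the icon in the padded space */
a.external {
padding-right : 18px;
background : url(images/external.png) no-repeat right center;
}
```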

I wanted to use an image of an upwards pointing arrow for my back to top links, but without all of the images being background images (otherwise there would have been no actual content for the links). I achieved it using a combination of foreground and background images with the following markup and CSS:

  1. <div class="up">
  2. <a href="#skipper" title="Back to top">
    <img src="" width="16" height="16" alt="Up arrow" /></a>
  3. </div>

The markup simply creates a div as a container element for the image used as the default-state up arrow and the actual link back to the top of the page.

The CSS – part 1
  1. div.up {
  2. float : right;
  3. margin : 5px 10px 0 0;
  4. padding : 0;
  5. width : 16px;
  6. height : 16px;
  7. }

The first part of the associated CSS sets the dimensions of the container div to match the size of the image it contains and positions it where I want it on the page (relative to its own container).

The CSS – part 2
  1. div.up a, div.up a img {
  2. display : block;
  3. padding : 0;
  4. margin : 0;
  5. }

Part 2 of the CSS sets both the image and the anchor around it to be block level. This corrects Internet Explorer 6’s behaviour of adding a 3px space beneath inline images (which is the space preserved for text descenders). It also allows the full area taken up by the image to be a clickable part of the link.

The CSS – part 3
  1. div.up a {
  2. background : url(images/up2.png) no-repeat center bottom;
  3. position : relative;
  4. width : 100%;
  5. height : 100%;
  6. text-decoration : none;
  7. cursor : pointer;
  8. }

This code sets the link to 100% width and height of the container div, completing the process started in the previous rule set and allowing the full area of the image to be clickable. It also sets the positioning to relative, allowing the contained image to be absolutely positioned within it, while the background declaration sets the background to the hover-state version of the image. The final two lines turn off the underline added to links and ensure that the proper cursor style is used.

The CSS – part 4
  1. div.up a img {
  2. position : absolute;
  3. top : -5px;
  4. left : -5px;
  5. border : none !important;
  6. }

Part four positions the foreground image within its container so that it precisely and completely covers the background image. It also makes sure that no border is added.

The CSS – part 5
  1. div.up a:hover img, div.up a:active img, div.up a:focus img {
  2. visibility : hidden;
  3. }

The final rule set is responsible for removing the foreground image when the link is hovered over, revealing the hover state background image. It works as intended in Firefox, IE 7 and Opera 9. In IE 5 and 6 the change in state doesn’t work, and IE 5 also needs a slight change in the positioning to cover the hover state image.

No doubt there’ll be other issues to find, and they’ll be fixed as they’re found. I’m expecting one or two in Mac browsers, but as I haven’t been able to test in them (the Mac testing services I normally use have been broken whenever I’ve visited recently) I haven’t been able to find them. If you’re a Mac user and do find issues, please let me know and I’ll do my best to resolve them. Thanks.
