boogdesign posts

Longer posts on standards based web design, portable web development and Linux, intermingled with some stuff on my other nerd interests.

Rob Crowther, London, UK based Blogger, Web Developer, Web Designer and System Administrator - read my Curriculum Vitae

Buy my book!

Hello! HTML5 and CSS3 available now

Buy my other book!

Early access to HTML5 in Action available now


09:47:37 pm London Web Meetup: Mobile, SEO & the Business Aspects of Web Companies

Categories: Web Develop, Gadgets

Review: London Web April: Mobile, SEO & the Business Aspects of Web Companies at Hoxton Apprentice, 16 Hoxton Square, London. N1 6NT. 19:30 to 21:30

I'm finding my 'spare writing time' increasingly used up these days, more on that in a future post, so this is going to be a much shorter review than I usually manage. There were three talks scheduled, but I only stayed for the first two as it was 10pm by the time they'd finished.

The first talk was by Grapple, who have a platform which takes your HTML, CSS and JavaScript and turns it into a cross-platform mobile app. And by cross-platform they don't just mean iPhone and Android, they mean Blackberry, Palm and almost any variety of J2ME-capable phone. This is a significantly larger potential market than just those phones with a well-known app store. And talking of app stores, Grapple also offer consultancy services for getting your app noticed and installed by users who don't have access to an app marketplace.

Unsurprisingly, one of the first questions asked was how Section 3.3.1 of Apple's most recent iPhone developer agreement affected them. Basically, Apple have decreed that only their approved languages can be used to develop iPhone apps. Since Grapple is basically executing JavaScript inside a WebKit view, they feel that they're fully compliant with section 3.3.1 and, unlike some of their competitors, won't have a problem.

Next up was a talk from Plugin SEO. Although it covered some of the basics, which I'm already fairly familiar with, it was interesting to me because it didn't end at the 'create some great content' line as many SEO introductions do. We were presented with some strategies for actually creating 'great content' and then introduced to a variety of tools to monitor how well our content creation was going. The slides were the same as these, from a March MiniBar, so you can have a look for yourself.

There was quite a bit of background noise, especially early on, and we might well have benefited from a more strict adherence to the schedule, but interesting talks again, and a great bunch of people with plenty to share, so 4 out of 5.



10:11:26 pm London Web Standards: Javascript with Frances Berriman and Jake Archibald

Categories: Front End Web Development, Management and Communication

Review: LWS March: JavaScript – The events that get left behind & Pro-bunfighting at The Square Pig, 30 - 32 Procter Street, Holborn, London, WC1V 6NX 19:00 to 20:30

This event was focussed on JavaScript, specifically the Glow 2 library from the BBC which both speakers are working on. Frances talked about coordinating work on a JavaScript project with a geographically distributed team, discussing some teamwork strategies and demonstrating some useful tools. Jake talked about the nightmare that is DOM Level 2 keyboard events and how he'd worked around the issues in Glow 2. Both of them were funny and engaging speakers, so much so that I feel I could hardly do them justice with a textual repetition of their talks. That and I didn't really take detailed notes this time, and had to leave before the Q&A... So, for a summary of what was said check out the usual live blog of the event from Jeff, meanwhile I'm going to have a play around with three of the JavaScript tools discussed: JSDoc, QUnit and Glow 2 keyboard events.


In team environments developers are supposed to liberally comment their code and also document the requirements before coding, and the APIs afterwards. Inevitably anything that developers are asked to do which is not strictly coding tends to take a back seat, especially when deadlines are approaching. Documentation requirements therefore get skimped or skipped, and often are not kept in sync as the code evolves. In the Enterprise development world this issue led to the development of tools like JavaDoc which, if you write your comments in a particular format, will automatically generate nicely formatted documentation for you when you've finished. This halves the amount of documentation you have to write, increasing the chance developers will do it, and keeps that documentation close to the source code to which it pertains, improving the chance it's kept up to date. JavaDoc has been much imitated, and the JavaScript equivalent is JSDoc.
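To make the idea concrete, here's a made-up example of the comment format - the function itself is invented for illustration, but write your comments like this and the documentation generator does the rest:

```javascript
/**
 * Add two numbers together.
 * (The extra asterisk on the first line and the @-keywords are
 * what the documentation generator looks for.)
 * @param a {Number} The first number
 * @param b {Number} The second number
 * @returns {Number} The sum of a and b
 */
function add(a, b) {
    return a + b;
}
```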

So, how do you use JSDoc? First you have to download and extract the latest version; you'll also need a Java runtime installed, version 1.5 or later. Now you have to adjust your comments slightly, assuming you have comments in your code already :) I didn't, so I grabbed the complex.js file I'd copied out of the rhino book for my post on the canvas element. You can run JSDoc on that file as it stands and it will generate some documentation. Assuming you're in a command prompt in the directory where you extracted JSDoc (and, er, you're on Linux - adjust path separators if you're on Windows), issue a command line like this:

java -jar jsrun.jar app/run.js -a -t=templates/jsdoc/ ../code/complex.js

The command will produce a directory named out containing the generated documentation. This is what the basic documentation looks like:

The results of JSDoc on a standard JavaScript file

As you can see (check out the full results), it's found the constructor function but not much else - everything is in the global namespace. Let's annotate the comment before the constructor function and see what happens. This is what it looked like before:

/*
 * The first step in defining a class is defining the constructor
And here's what it looks like when it's JSDoc enabled:
/**
 * The first step in defining a class is defining the constructor
Can you see the difference? Look closely at the first line - there's one extra asterisk. That extra asterisk is what tells JSDoc to look in this comment for 'special' stuff. At the end of the comment I'm going to add an annotation:
 * @constructor

Run the command again and suddenly there's a whole load more stuff:

The results of JSDoc on a JavaScript file with a single annotation

In the new version of the documentation there's a Complex class, and it's found all the methods. However, your fellow coders may appreciate a bit more than the basics. Perhaps you might want to document what the expected parameters and return values are? Additional annotations follow the same pattern as @constructor - an at sign (@) followed by a keyword. Some of the other keywords let you provide additional information; here's what parameters and return values look like:

 * Add two complex numbers and return the result.
 * @param a {Complex} A complex number
 * @param b {Complex} A second complex number
 * @returns {Complex} The sum of a and b

Now JSDoc adds your comments to the output, and additionally provides links to any other types you've defined:

The results of JSDoc on a JavaScript file with several annotations

You can have a look at the final output here.


Unit testing is something I keep thinking I should learn how to do. The BBC Glow team is using the jQuery unit testing framework QUnit, so this seems like a good excuse to investigate it.

QUnit is very easy to set up, you just need to link to jQuery and the two QUnit files in your document head:

<script src=""></script>
<link rel="stylesheet" href="" type="text/css" media="screen" />
<script type="text/javascript" src=""></script>
Then provide some HTML framework for the results to appear in:
<h1 id="qunit-header">QUnit example</h1>
<h2 id="qunit-banner"></h2>
<h2 id="qunit-userAgent"></h2>
<ol id="qunit-tests"></ol>
Now you need to write some tests. To do this, simply add a function to $(document).ready that calls the test function:
test("Passing tests", function() {
  ok(true, "The truth is true");
  equals(1,1, "One is one");
});

These are obviously quite basic tests, and not actually testing anything, but demonstrate how easy it is. The ok test accepts a 'truthy' value and a string and succeeds if the value is true, the equals test accepts two values and a string and succeeds if they're equal.

Basic QUnit tests

To try some less basic tests I once again dragged out the complex.js file. The test functions can contain more than just the testing functions, you can put as much JavaScript in there as you need to do your test. Here's what I ended up with:

test("Basic class functionality", function() {
  var real = 1.0;
  var imaginary = 1.0;
  var c;
  ok( c = new Complex(real,imaginary), "Class created" );
  equals( c, real, "Simple value comparison" );
  equals( c.toString(), "{" + real + "," + imaginary + "}", "String comparison");
  equals( c.magnitude(), Math.sqrt(2.0), "Magnitude comparison");
});

You can see I've used the ok test to confirm the object gets created, then done some simple comparisons to make sure the object created is what I expect. Unsurprisingly, all my tests pass:

Slightly less basic QUnit tests

Of course, this just demonstrates that QUnit is very straightforward to use rather than that some code I nicked out of a book works perfectly. There's a whole art to writing unit tests over and above the simple mechanics of the framework you're using, but I'm definitely not the person to be telling you about that. If anyone knows of any good, JS oriented tutorials for doing test driven development, please leave a comment.

Glow 2 Keyboard Events

Jake talked about the mess that is keyboard events in current browsers, and how he set about fixing it in Glow 2. The problem with keyboard events in browsers can be summed up by a quick look at the W3C DOM Level 2 Spec:

A visual comparison of the size of the mouse event spec with the much shorter keyboard event spec

Given the lack of spec it's hardly surprising that browsers have all implemented keyboard events slightly differently. Not only have the browsers implemented things differently from each other, there are also differences between the same browser on different operating systems. Throw in the fact that keyboards in different countries have different sets of keys, and it all gets a bit messy. In Glow 2, the keyboard events have been normalised to keydown, keyup and keypress, with the same properties on the event object across browsers.
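I haven't seen Glow 2's internals, but a minimal sketch of the kind of normalisation involved might look like this (my own illustration, not Glow's actual code - the property differences are the classic ones, with older IE only setting keyCode and some browsers only setting which):

```javascript
// Produce a browser-independent view of a raw keyboard event.
function normaliseKeyEvent(event) {
    return {
        type: event.type,
        // older IE sets keyCode; some other browsers only set which
        keyCode: event.keyCode || event.which || 0,
        shift: !!event.shiftKey,
        ctrl: !!event.ctrlKey,
        alt: !!event.altKey
    };
}
```

A library then fires its own keydown/keyup/keypress events carrying this normalised object, so user code never sees the per-browser differences.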

I've downloaded the Glow 2 source to have a go with these keyboard events. After downloading the extra libraries I've managed to get it to build with Ant, but I've not got the build itself working in a browser yet. It could be me, it could be a bug, I haven't figured it out yet - when I do I'll fix up my keyboard event example application and update this post.

Another excellent event, the two best speakers of the three events I've been to, 5 out of 5. Watch out for the next one on 26th April.



03:28:04 pm Adventures in Web 3.0: Part 5 - The HTML5 Canvas Element

Categories: Front End Web Development, Standards, HTML and CSS

This weekend I came across Prez Jordan's post on Julia sets, and followed the trail back to his original post on the Mandelbrot set. I've always loved fractals, since I read Gleick's Chaos back in school, and I used to spend hours generating Mandelbrot images on my Amiga when I should have been learning CS at University. I did at one point try to write my own Mandelbrot generator, but quickly got bogged down in trying to manage enough code to get a basic UI in Workbench and gave up and went to the pub.

Anyway, Prez's post stirred my memories and inspired me to once again try to put together my own Mandelbrot generator. Except this time, instead of having to manage OS libraries and indirect addressing in C just to open a window, I would take advantage of the HTML5 Canvas element:

The canvas element provides scripts with a resolution-dependent bitmap canvas, which can be used for rendering graphs, game graphics, or other visual images on the fly.

Basically, canvas creates an area on your web page on which you can then draw lines, curves, images, and text with Javascript. It allows you to do some pretty crazy things, a lot of the more spectacular early HTML5 examples used it.

Adding a canvas to the page is very simple:

<canvas id="mandelbrot" width="320" height="240">A mandelbrot fractal will appear here.</canvas>

The content of the element will only appear if the user agent does not support canvas - it's similar to a noscript block. If your browser does have support, then all the content the user sees will be drawn on it with Javascript. You'll notice I specified the width and height; if I left that out it would default to 300 pixels wide by 150 pixels high, which has an interesting side effect I'll discuss below. Otherwise, it's much the same as any other element. It doesn't have much in the way of default styling, and it's an inline, rather than block level, element, but you can target it with CSS rules:

canvas { 
    -moz-box-shadow: rgb(0,0,0) 0px 2px 3px 3px; 
    -webkit-box-shadow: rgb(0,0,0) 0px 2px 4px; 
    box-shadow: rgb(0,0,0) 0px 2px 3px 3px;
}

So, we now have an empty rectangle with a nice drop shadow, how do we actually draw something? To draw on a canvas you need to get the context object, which in turn gives you access to all the drawing methods:

var canvas = document.getElementById('mandelbrot');
if (canvas.getContext){
    var ctx = canvas.getContext('2d');
    ctx.fillText('Hello World', 50, 50);
}

To draw a Mandelbrot we need to be able to plot single pixels across the whole element. There isn't a plot method, canvas isn't really intended for pixel by pixel manipulation, but we can pick a colour and plot a one pixel by one pixel rectangle:

var canvas = document.getElementById('mandelbrot');
if (canvas.getContext){
    var x=1, y=1;
    var ctx = canvas.getContext('2d');
    ctx.fillStyle = 'rgb(255,0,0)';
    ctx.fillRect(x, y, 1, 1);
}

So now you know enough about the canvas element to write a Mandelbrot generator, for the rest we can just steal code off the internet :) Here's what we're aiming for (warning - don't click on the link on a slow computer, it'll take several seconds to render):

An image of the Mandelbrot set, generated in the canvas element with Javascript

Going back to Prez's post, his code is in Python, but it's fairly straightforward looking and easy enough to translate into Javascript. I took the Complex number library from "9.3.6. Example: Complex Numbers" of JavaScript: The Definitive Guide, 5th Edition and then used it to reimplement the Python function:

function mandel(c) {
    var cols = ["rgb(255,0,0)", "rgb(255,165,0)", "rgb(255,255,0)", "rgb(165,255,0)", "rgb(0,255,255)", "rgb(0,165,255)", "rgb(165,0,255)", "rgb(0,0,255)"];
    var z = new Complex(0,0);
    for (var i = 0; i <= 20; i++) {
        z = Complex.add(Complex.multiply(z,z), c);
        if (z.magnitude() > 2) { return cols[i % cols.length]; }
    }
    return "rgb(0,0,0)";
}

The function returns a different colour based on how many iterations it takes for z to exceed 2 in magnitude when combined with the input value c in the formula z * z + c (if it never exceeds 2, then c is in the Mandelbrot set, and so it's black). The input value is the pixel position on our canvas element translated into a complex number - the x axis is the real component and the y axis the imaginary one - and the returned colour is used to plot that pixel: ctx.fillStyle = mandel(c);. Check the Wikipedia page for full details of how the Mandelbrot set works; I worked out most of the details through trial and error once I had it functional.
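Putting the pieces together, the rendering loop looks roughly like this. The Complex stand-in here is a minimal version I've written so the snippet is self-contained (the real code uses the class from the rhino book), and the plane bounds are my assumption:

```javascript
// Minimal stand-in for the rhino book's Complex class - just enough
// for the Mandelbrot calculation.
function Complex(real, imaginary) {
    this.x = real;
    this.y = imaginary;
}
Complex.prototype.magnitude = function() {
    return Math.sqrt(this.x * this.x + this.y * this.y);
};
Complex.add = function(a, b) { return new Complex(a.x + b.x, a.y + b.y); };
Complex.multiply = function(a, b) {
    return new Complex(a.x * b.x - a.y * b.y, a.x * b.y + a.y * b.x);
};

// mandel() repeated from above so this snippet runs on its own.
function mandel(c) {
    var cols = ["rgb(255,0,0)", "rgb(255,165,0)", "rgb(255,255,0)", "rgb(165,255,0)", "rgb(0,255,255)", "rgb(0,165,255)", "rgb(165,0,255)", "rgb(0,0,255)"];
    var z = new Complex(0, 0);
    for (var i = 0; i <= 20; i++) {
        z = Complex.add(Complex.multiply(z, z), c);
        if (z.magnitude() > 2) { return cols[i % cols.length]; }
    }
    return "rgb(0,0,0)";
}

// Walk every pixel, translate it to a point on the complex plane
// (real axis -2..1, imaginary axis -1.5..1.5 - assumed bounds) and
// plot a 1x1 rectangle in the colour mandel() returns.
function drawMandelbrot(ctx, width, height) {
    for (var px = 0; px < width; px++) {
        for (var py = 0; py < height; py++) {
            var c = new Complex(-2 + 3 * px / width, -1.5 + 3 * py / height);
            ctx.fillStyle = mandel(c);
            ctx.fillRect(px, py, 1, 1);
        }
    }
}
```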

I mentioned the default size for a canvas element above: it's 300px by 150px. As an experiment I removed the width and height attributes from the element itself and set the size of the canvas in CSS:

canvas { 
    width: 45%;
    height: 45%;
}

You can view the results here (again, watch out if you have a slow computer). The browser renders the canvas at its default size, then scales the results to fit the CSS dimensions. So, if you want your canvas to take up a particular portion of the page (e.g. half of it), you need to set the dimensions of the element with Javascript based on the pixel width of the page.
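A sketch of what I mean - the helper and the 4:3 aspect ratio here are my own invention, but the point is to set the width and height attributes (which size the bitmap) rather than the CSS properties (which just scale it):

```javascript
// Compute bitmap dimensions for a canvas that should occupy a given
// fraction of the page width, keeping a fixed aspect ratio.
function bitmapSizeFor(pageWidth, fraction, aspect) {
    var width = Math.round(pageWidth * fraction);
    return { width: width, height: Math.round(width / aspect) };
}

// In the browser (hypothetical usage):
// var size = bitmapSizeFor(document.documentElement.clientWidth, 0.5, 4 / 3);
// canvas.width = size.width;    // setting the attributes resizes the
// canvas.height = size.height;  // bitmap itself, so nothing gets scaled
```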

Moving on to Prez's second post, which inspired this whole adventure, where he looks at the Julia set. The Julia calculation is very similar to the Mandelbrot one, except instead of starting with z = 0 like the Mandelbrot it starts with z = t, where t is a complex number. The obvious place to get t is the existing plane of numbers on which we've drawn our Mandelbrot set. A canvas is an element like any other, so simply attach an onclick event to it and then work out the value of t from where the click event fired. Here's the Julia set for real 0.4249, i -0.2666:
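The coordinate translation in the click handler looks roughly like this (the -2..2 axis bounds are illustrative - the generator's actual ranges may differ, and the {re, im} pair here stands in for a Complex instance):

```javascript
// Map a click position on the canvas to a point t on the complex plane.
function pointToComplex(px, py, width, height) {
    return {
        re: (px / width) * 4 - 2,   // map 0..width  onto -2..+2
        im: (py / height) * 4 - 2   // map 0..height onto -2..+2
    };
}

// Hypothetical wiring in the browser:
// canvas.onclick = function(e) {
//     var t = pointToComplex(e.offsetX, e.offsetY, canvas.width, canvas.height);
//     drawJulia(t); // redraw using the Julia iteration seeded with t
// };
```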

An image of a Julia set, generated in the canvas element with Javascript

You can take a look at the final Canvas Mandelbrot/Julia Generator here. In performance terms, doing a calculation to render each individual pixel is a pathologically bad case for canvas - it's not something you'd normally do, you would make use of the higher level drawing controls instead. As it stands it does make an interesting performance comparator between browsers. I tested it in Firefox 3.6 (about 2 seconds to generate each image on my machine), Google Chrome 5 beta (about 1 second for each image) and Opera 10.50 (about half a second for each image), it ought to work in Safari too but I didn't try it. I did a version which used Explorer Canvas to try and see how long IE took, but it kept hitting the script timeout before it was even one third of the way through.



04:57:10 pm London Web Meetup: Accessibility in the Days of jQuery, Flash and AJAX

Categories: Usability & Accessibility, Front End Web Development

Review: London Web February: Accessibility in the Days of jQuery, Flash and AJAX at Wahoo Sports Bar, 14 Putney High St, Putney, London, SW15 1SL 19:30 to 21:30

This week, accessibility has been a bit of a theme for me, after LWS Inclusivity on Monday I was at the London Web Meetup on Accessibility on Thursday night.

To start with, Nathan gave a short presentation on HTML5 and CSS3. It was an introductory talk, so nothing new to me, but there was a very interesting open discussion afterwards. The focus was whether we'd even be able to use this fancy new HTML5 and CSS3 stuff while IE6 continued to account for 20% (or more) of the users of any given website. The Yahoo! home page still sees a huge number of IE6 visitors, and people who worked a lot with City clients said IE6 was still the default browser for many of their customers, though the recent security scares do seem to have created an impetus for change among some of the banks. There was also some discussion about whether we even need to provide a pixel for pixel identical experience in every browser, or whether we needed to have the visual bells and whistles at all - apparently a front end engineer at Yahoo! Sports turned off all the rounded corners and showed the result to a designer, and the designer couldn't spot the difference. My contribution to the discussion was that as more and more people use a mobile device to browse the web, and a lot of the browsers on those devices do support HTML5 and CSS3, you may be able to start using these features much sooner if you're targeting those users.

After a short break we moved on to Artur Ortega's demonstration of screen readers and WAI-ARIA. Artur had JAWS, the leading commercial screen reading software, and NVDA, the free and open source alternative. He also mentioned Orca, the Gnome Linux based screen reader, and VoiceOver, which comes free with Mac OSX, including the iPhone version, which Artur used (this reminded me of Sandi's comment at the end of Monday's talk - including the tools in the operating system brings wider benefits).

Artur started off with a discussion of how the needs of screen reader users differ from those of fully sighted users. Although web pages are two dimensional, a screen reader sees them as a one dimensional audio track. This means a screen reader user needs 'timestops' if they are to navigate the page efficiently. These can be provided, in a well structured page, by headings - a screen reader can navigate from heading to heading with a keystroke without reading all the text in between. So the first simple improvement you can make to your pages' accessibility is to make sure they use headings in appropriate places. Another small change which can make a big difference is to indicate language correctly with the lang attribute. This is very important in pages where multiple languages are likely to appear, such as search engine results. Currently, Yahoo! is the only search engine to do this - Artur demonstrated the huge difference it made to the screen reading experience: a set of multi-lingual search results became almost unintelligible when the screen reader was in the wrong language mode. Since search engines already work out the language of a particular page and expose that information, as evidenced by the 'translate this page' links in the results, this ought to be a simple change to make.

Next, Artur moved on to WAI-ARIA. ARIA is Accessibility for Rich Internet Applications, web apps with heavy use of Javascript and AJAX. For a general introduction see the W3C WAI-ARIA Overview, or More Accessible User Interfaces with ARIA which I attended in December. Support for ARIA is available in IE8 (there was some, but not much, support in IE7) and Firefox 3.0 (and up), when used with JAWS 10 or later (or recent releases of NVDA). Artur showed us ARIA landmarks and the aria-required attribute as well as, briefly, ARIA live regions.
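As a rough illustration of the features Artur showed (the form fields here are made up for the example):

```
<!-- ARIA landmark roles give screen reader users 'timestops' to jump between -->
<div role="navigation">...site menu...</div>
<div role="main">
  <form>
    <label for="email">Email address</label>
    <!-- aria-required tells assistive technology the field must be filled in -->
    <input type="text" id="email" name="email" aria-required="true">
  </form>
</div>
```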

There were a lot of questions all through the talk, so we had to cut it short at the end. I think many people, like myself, were totally in awe of Artur and his ability to navigate the web with a screen reader - especially when he demonstrated doing it at 'normal' speed near the end (he'd had it set to slow mode to give us a chance to keep up during the demos). I was inspired to spend a few hours the following morning implementing aria-required on a form in my web app at work :)

Another great event, 4 out of 5. I'm not sure a bar is the most comfortable environment for listening to a presentation; it seems that few in the London web community agree with me there, though. On the plus side, unlike most of the events I attend this one was within ten minutes' walk of where I work, so no need to fight my way through London rush hour to get there. The talks themselves, and the discussion afterwards, were excellent.



05:57:30 pm London Web Standards: Inclusivity with Sandi Wassmer

Categories: Usability & Accessibility, Front End Web Development

Review: LWS February: Move over Web Accessibility, inclusivity is heading straight at you! at The Square Pig, 30 - 32 Procter Street, Holborn, London, WC1V 6NX 19:00 to 20:30

I'm always interested to learn more about accessibility so, after I enjoyed last month's LWS so much, this event was a must attend. As before, Jeff live-blogged the talk and I will be covering most of the same ground, but hopefully with a different enough emphasis and perspective to make this worth reading too.

Picture of the question and answer session at the end of the talk

Sandi started off her talk by discussing the need for the new term, 'inclusivity'. Accessibility has had a lot of powerful advocates in recent years, but that has resulted in a somewhat negative image and a narrow approach. The need for accessible websites has been driven into people's consciousness, but not the underlying principles. Accessibility has become "that thing you have to do to make disabled people happy." So to make people happy developers are resorting to a checklist approach which isn't actually benefiting users. Marketers fear that implementing accessibility will devalue the brand; designers fear it will limit their design options; developers worry that it will reduce functionality. In fact, accessibility, when done right, need not cause any of these issues.

Inclusivity is an effort to repackage accessibility with a more positive spin, by returning to first principles while combining with other elements of usability and remaining practical. The focus needn't be completely on screen reader users - only 3% of the UK's registered blind people are totally blind. Providing an alternate text-only version of your website is not the same thing as having an accessible website; according to Sandi the aim of accessibility should be "an unobtrusive bridge between myself and the world." Inclusive design allows people to have a choice in how they interact with your website. The pay-off to taking an inclusive approach to design can be huge: there are 10.6 million registered disabled people in the UK, 19% of the working population is registered disabled, and they represent approximately £80 billion in annual purchasing power.

Inclusive design is the embodiment of seven principles, it is:

  • Unbiased
  • Flexible
  • Straightforward
  • Clear
  • Sensible about errors
  • Minimises physical effort
  • Emphasises appropriate shape and size
To embrace inclusive design you also need to attack three assumptions:
  • Disabled vs Not disabled - People are people, disability is not a binary attribute, there is a range of abilities
  • Accessible vs Inaccessible - Accessibility is subjective; there is no such thing as an accessible site, sites will always be more or less accessible to different groups of users
  • Disabled people do not appreciate design - Anyone can have bad taste, this is orthogonal to their abilities in other areas

You might worry that you will never get accessibility 'right' - Sandi offers the advice, "just do your best." You just have to keep learning - accessibility is a process, not a finished state - and you have to use that knowledge whenever you can.

Sandi then moved on to how the design process should be structured in order to take account of inclusivity:

  • The Brief should not be brief and should include a discussion of the principles of inclusivity.
  • The Plan should include user testing (with real users) and nominate an inclusivity champion.
  • The Functional Scope is where the real world will impinge, how much and what type of user testing does the budget allow?
  • The Technical Scope has everything nailed down, at this stage you just need to make sure everyone is communicating so that the overall goals are not compromised by a simple misunderstanding.
  • Learning, Designing, Testing, Tweaking and repeat as often as necessary (or you can afford).

Next Sandi discussed the relationship of inclusivity with web standards and best practices. The key misconception many people have about WCAG is that last letter - they are guidelines, not rules. The difference is that while rules are inflexible, they can only be complied with or not, guidelines are a relationship, they guide you on the way to discovering the best interface for your users.

Usability has much crossover with accessibility, though unlike accessibility there are no legal requirements to make your site usable. With usability you're asking yourself how specific users are going to accomplish specific goals in a certain context and evaluating your solution according to its effectiveness, efficiency and (user) satisfaction.

Web standards form the foundations of good accessibility, but they are just the beginning. Having your page pass the W3C validators doesn't guarantee it will be accessible.

The two most popular strategies for delivering good accessibility are progressive enhancement and graceful degradation. Sandi said that while progressive enhancement is a strategy, graceful degradation is an afterthought. Progressive enhancement is the way to go because it allows you to build your web site in layers and so make available a good experience to everyone.

Finally Sandi discussed why user testing is important, even if you have excellent market research and analytics. Consider three users: Peter, George and John. All are marketers in their mid-thirties, with 2.4 children and are demographically identical. They are all using Firefox on the same brand of computer, so are basically indistinguishable from the point of view of market demographics and browser identification data, however:

  • Peter is an internet lover, he's maxed out his browser with nearly every extension he could get his hands on
  • George is a luddite who only uses a browser because he needs it for work, he's turned Javascript off because he heard, some time ago, that it was dangerous to browse the web with it enabled
  • John is technically savvy, like Peter, but is visually impaired and so uses a screen reader

Clearly these three users have very different needs, and yet you're only going to see the difference between them if you do user testing.

After a wrap up, where Sandi re-iterated the need to always keep learning, we moved on to the question and answer session. There were a few questions which stood out for me:

  • Providing mobile access, is this accessibility, inclusivity or usability? - All of the above! Sandi's advice was to just try your best, not all content needs to be available on all devices. While the holy grail may be a site which is completely accessible on desktop and mobile, budgetary constraints will probably limit you before you get there.
  • How can we get the message of inclusivity to banks and other large and slow moving institutions? - Bring it to the mainstream; social change is hard work but it's the path to ultimate victory. Also, challenge people's stereotypes: don't let them think of a small number of completely blind people using screen readers, get them to think more broadly. One of the audience pointed out that one of the best business cases for accessibility had been at Legal & General. Consider too that the people with the most power at these institutions tend to be older; while they may not consider themselves disabled, they are likely to suffer from impaired vision and other ailments simply due to old age, making them a ready-made market if you phrase things well.
  • What's the best way to develop an accessible website - where should you concentrate the effort, on semantic code? - A problem is that technology is always changing, and assistive technology doesn't always keep up, so you can't always provide the best solution now, and often the best solution now won't be the best in the future. This is why the WCAG is not about technical solutions but about guiding you to an understanding of your users.
  • Is there ever a place for exclusive design? - No, stuff is more usable when built for everyone. For instance the Mac accessibility tools built into the OS: now everyone can use them, even if they don't consider themselves 'disabled'.
Another excellent event, 5 out of 5, I'll be watching out for the next one.


03:04:05 am Real World CSS 3

Categories: Web Design, Front End Web Development, Standards, HTML and CSS

I've done a number of posts recently on new features coming in CSS level 3. These posts have mostly been based on features available in the version fours of Safari and Chrome and the Firefox 3.6 release so, while they may be useful if you're producing an iPhone app, they don't seem like they'll be much use on general purpose websites where IE users need to be considered. In this post I'm going to look at how these new CSS features can be made to work, or at least degrade gracefully, in older browsers, including Internet Explorer. I'm going to extend the example from my earlier post on scaling background images and gradients to see what's possible.

So my scaled background image looks OK in Firefox 3.6:

Screenshot of perfectly scaled background image

But it looks a bit crappy in Firefox 3.5, not to mention any other browser which doesn't support background-size:

Screenshot of background image tiling inappropriately

There are going to be some cases where there's nothing you can do, for instance if you want some page elements to be positioned over bits of the background, and you will just need to take another approach (like putting the image in the page and using a negative z-index). However, in my case, I think that if users can't be treated to a nice full screen photo of palm trees then a tiling 'leafy' image would be an acceptable substitute without ruining the ambience. In order to provide one background image to browsers which support sizing and another to those that don't, I'm going to make use of another CSS3 feature, multiple background images. In CSS3 compatible browsers you can just provide multiple images in a comma separated list:

background-image: url('1-rhus-typhena-leaf-tile.jpg'), url('bg-image-1.jpg');

All the other background rules accept a similar comma separated list, including background-size. The nice thing about this is the stacking order of the images is defined in the spec:

The first image in the list is the layer closest to the user, the next one is painted behind the first, and so on. The background color, if present, is painted below all of the other layers.

So I put the tiling image 'closest to the user' then in my background-size set it to be zero size. Browsers which support background-size make that image disappear and show the scaled image behind it, those that don't show the tiling image:

background-size: 0% 0%, 100% 100%;
-moz-background-size: 0% 0%, 100% 100%;           /* Gecko 1.9.2 (Firefox 3.6) */
-o-background-size: 0% 0%, 100% 100%;             /* Opera 9.5 */
-webkit-background-size: 0% 0%, 100% 100%;        /* Safari 3.0 */
-khtml-background-size: 0% 0%, 100% 100%;         /* Konqueror 3.5.4 */

Note: the image doesn't actually disappear in WebKit, but if you set background-repeat: no-repeat it's very hard to spot. Opera 10.10 supports background-size (on Windows, anyway) but not multiple background images, which is unfortunate, but things work well in Opera 10.50. So now all we have to worry about is those browsers which don't support background-size or multiple background images. This is easy enough, as they should ignore all the rules they don't understand, so we just need to precede all of the above with a single background-image rule:

background-image: url('1-rhus-typhena-leaf-tile.jpg');

Now each browser displays a background image according to their abilities, here's Firefox 3.5 after the fix:

Screenshot of background image tiling with multiple background hack
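Pulling the pieces together, the full cascade for the body background - single-image fallback first, then the multiple-image and sizing rules - looks like this (file names as in the example above):

```css
/* Fallback: browsers with no multiple-background support just tile the leaf image */
background-image: url('1-rhus-typhena-leaf-tile.jpg');
/* CSS3 browsers: leaf tile 'closest to the user', scaled photo behind it */
background-image: url('1-rhus-typhena-leaf-tile.jpg'), url('bg-image-1.jpg');
background-repeat: no-repeat;
/* Shrink the tile to nothing so only the scaled photo shows through */
background-size: 0% 0%, 100% 100%;
-moz-background-size: 0% 0%, 100% 100%;           /* Gecko 1.9.2 (Firefox 3.6) */
-o-background-size: 0% 0%, 100% 100%;             /* Opera 9.5 */
-webkit-background-size: 0% 0%, 100% 100%;        /* Safari 3.0 */
-khtml-background-size: 0% 0%, 100% 100%;         /* Konqueror 3.5.4 */
```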

With one further tweak, we can support IE too. The issue with IE is that it parses the multiple background images even though it doesn't understand them, treating them as a single, invalid background image. For this I resort to IE conditional comments:

<!--[if IE]>
<style type="text/css">
body {
    background-image: url('1-rhus-typhena-leaf-tile.jpg');
}
</style>
<![endif]-->

Screenshot of background image tiling in IE8

Something still needs to be done about the menu examples; to refresh your memory, the first menu example used a single 'button' background image:

Links with scaled background image

This is amenable to the same multiple background hack. This time I used a one pixel wide slice of the button image and repeated it across the background:

background: url('bg-image-3.png') repeat-x;
background: url('bg-image-3.png') no-repeat, url('bg-image-2.png') no-repeat;   /* Image courtesy of */
background-size: 0 0, 100% 100%;
-moz-background-size: 0 0, 100% 100%;           /* Gecko 1.9.2 (Firefox 3.6) */
-o-background-size: 0 0, 100% 100%;             /* Opera 9.5 */
-webkit-background-size: 0 0, 100% 100%;        /* Safari 3.0 */
-khtml-background-size: 0 0, 100% 100%;         /* Konqueror 3.5.4 */

Add some rounded corners and you end up with a reasonable approximation of the original buttons in Firefox 3.5:

Links with tiled background image
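For reference, the rounded corners come from the CSS3 border-radius property, which also needed vendor prefixes at the time; a minimal sketch (the 8px radius here is illustrative, not taken from the example):

```css
/* Rounded corners on each menu link; the radius value is illustrative */
-moz-border-radius: 8px;      /* Gecko (Firefox 3.6 and earlier) */
-webkit-border-radius: 8px;   /* WebKit (Safari, Chrome) */
border-radius: 8px;           /* CSS3 standard property */
```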

The next set of buttons all used gradient backgrounds:

Links with CSS gradient background

I'm just going to look at the first and the last one of these, as all the interesting bits will get covered in these two cases. One thing worth noting at this point - the syntax for specifying gradients has changed significantly in Firefox (and the standard) since the last time I discussed them. It's actually now a lot simpler. Old way:

background-image: -moz-linear-gradient(top, bottom, from(#090), to(#060), color-stop(25%, #cfc));

New way:

background-image: -moz-linear-gradient(top, #090, #cfc 25%, #060);

As you can see, the from, to and color-stop tokens have been removed and, instead of stating the start and end colours first and then any steps in between, the colours now appear in the order they'll be painted. The percentage value is optional; if you don't specify it the colour stops are distributed evenly.
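To illustrate that last point, with the same three greens these two declarations should produce the same gradient, since omitted stop positions are distributed evenly:

```css
/* Explicit middle stop at 50% */
background-image: -moz-linear-gradient(top, #090 0%, #cfc 50%, #060 100%);
/* Equivalent: with no positions given, #cfc falls at the 50% mark automatically */
background-image: -moz-linear-gradient(top, #090, #cfc, #060);
```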

Because this example involves no transparency, supporting browsers which don't understand gradients is easy - just provide a background colour:

background-color: #080;
background-image: -moz-linear-gradient(top, #090, #cfc 25%, #060);
background-image: -webkit-gradient(linear, left top, left bottom, from(#090), to(#060), color-stop(25%, #cfc));

And (for a bit of variety), here's what it looks like in Opera 10.50 Alpha (on Linux):

Links with fallback solid background

You might settle for that in Internet Explorer, but you don't need to! IE has supported gradients since 5.5 thanks to the gradient filter. Here's a simple lighter to darker green gradient in IE8 CSS syntax:

-ms-filter: "progid:DXImageTransform.Microsoft.gradient(GradientType=0, startColorstr=#FF009900, endColorstr=#FF006600)";

And this is what it looks like in Internet Explorer 8:

Links in IE with CSS gradient background

While you sit there flabbergasted that, once again, IE seems to have been ahead all along in the race to snazzy CSS3 effects, there are a couple of caveats. First off, like all IE filters, underlying this is an ActiveX control, so expect strange issues related to stacking, clicking and (possibly) problems due to the security settings of extremely paranoid Windows admins. Secondly, you can't specify anything other than a flat gradient, there is no equivalent to colour stops. Finally, unlike the 'true' CSS gradients in Firefox and Safari, a filter in IE applies to the entire element instead of just a single property. So while in Firefox you can happily use a gradient in the background-image property and also specify a separate box-shadow property, in IE you have to use all the filters on a single -ms-filter property; this may not be a problem for you, but I've found that combining more than one filter of a different type on a single element can lead to some rather strange results, more on this below.

In my final example in my earlier post, I tried to do some no Image Aqua Buttons without adding additional markup to my fake menu. You may recall one of the issues I had was simulating the 'glare' with a radial gradient because there wasn't a way to make it anything other than round. In the new gradient syntax supported by Firefox, there is now a way to change the shape of the radial gradient:

background-image: -moz-radial-gradient(center 25%, ellipse farthest-side, rgba(255, 255, 255, 0.7), rgba(255, 255, 255, 0.1) 50%, rgba(255, 255, 255, 0));

You can now specify a shape, circle or ellipse, as well as a number of different 'size constants' - farthest-side in my example above. This allows a lot more flexibility for radial gradients. Compare the original version:

Sort of Aqua buttons with no extra markup

With the new 'ellipse farthest-side' version:

Sort of Aqua buttons now with ellipsoidal glare

Yes, I know, the difference is subtle :)

As I alluded to above, the main issue with these buttons is that we want them to be semi-transparent. Apart from IE, it's been a while since any other major browser didn't support RGBA colour - Opera 9.64 didn't support it, but Firefox 3.0 did. As long as a non-transparent colour is an acceptable fallback we can just rely on normal CSS precedence and specify two background colours:

background-color: rgb(60, 132, 198);
background-color: rgba(60, 132, 198, 0.8);

If you want a completely transparent background where gradients are supported and a coloured background otherwise, you can specify everything in two background rules instead; browsers should ignore the rules they don't understand:

background: rgb(60, 132, 198);
background: -moz-linear-gradient(left, rgba(28, 91, 155, 0.8), rgba(108, 191, 255, 0.9)) transparent;

Once again for IE we turn to the -ms-filter property. In the earlier gradient example you will have seen startColorstr=#FF009900 - the alpha value is that first hexadecimal pair (the rest make up a standard hex colour declaration), if you set it to something less than FF the colour will be partly transparent. Since I have two elements to use, the li and the a within it, I will attach a gradient filter to both:
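As a worked example of this encoding (the colour values match the declarations that follow):

```css
/* IE filter colours are #AARRGGBB (alpha, red, green, blue):
   rgba(28, 91, 155, 0.8) → #CC1C5B9B
     alpha: 0.8 × 255 ≈ 204 → CC
     red:   28 → 1C,  green: 91 → 5B,  blue: 155 → 9B */
```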

div#nav2 ul li {
    -ms-filter: "progid:DXImageTransform.Microsoft.gradient(GradientType=1, startColorstr=#CC1C5B9B, endColorstr=#E56CBFFF)";
}
div#nav2 ul li a {
    -ms-filter: "progid:DXImageTransform.Microsoft.gradient(GradientType=0, startColorstr=#88FFFFFF, endColorstr=#00FFFFFF)";
}

The GradientType is 1 for horizontal and 0 for vertical, so the first declaration is a relatively solid dark blue to light blue horizontal gradient and the second declaration is a white to transparent vertical gradient. Finally in IE, because the -ms-filter property is stand-alone, the normal background colour has to be turned off in conditional comments:

<!--[if IE]>
<style type="text/css">
h1, div#nav1 ul li, div#nav2 ul li, div#nav2 ul li a {
    background-color: transparent;
}
</style>
<![endif]-->

This produces some acceptably 'glassy' buttons:

Sort of Aqua buttons in IE 8

In Gecko and WebKit there is also text and box shadow, these can also be emulated by -ms-filter in IE8. It is possible to tack multiple filters on to the declaration like this:

div#nav2 ul li {
    -ms-filter: "progid:DXImageTransform.Microsoft.gradient(GradientType=1, startColorstr=#CC1C5B9B, endColorstr=#E56CBFFF) progid:DXImageTransform.Microsoft.Shadow(color=#3399EE,direction=180,strength=10)";
}
div#nav2 ul li a {
    -ms-filter: "progid:DXImageTransform.Microsoft.gradient(GradientType=0, startColorstr=#88FFFFFF, endColorstr=#00FFFFFF) progid:DXImageTransform.Microsoft.Shadow(color=#3399EE,direction=180,strength=10)";
}

The filters are applied in the order they appear in the rule. However, as I alluded to above, this doesn't always have the effect you might expect, here's the result of the above rules:

IE 8 multiple filter fail

Not very useful! You may have better luck mixing other filters, or applying multiple filters of the same type, but in general I would recommend a one filter per element approach. In this case, I think the gradient is more important than the shadow for the general feel, so I'll stick with that.

So now the final 'compatible' version of my example page is ready. For some side by side comparisons, here it is at its best in Firefox 3.6:

CSS3 Backgrounds example page in Firefox 3.6

This is the same page in Internet Explorer 8:

CSS3 Backgrounds example page in IE 8

And this is the same page again in the legacy Opera 9.64:

CSS3 Backgrounds example page in Opera 9.64

As you can see, the pages are by no means identical in every browser, but each is displaying the page according to its capabilities, with no need for JavaScript hacks and only a little CSS trickery. So, if you are willing to accept that less capable browsers will show less stylish pages (and are willing to write a few redundant CSS rules), it is possible to use several CSS3 features on websites today.



11:56:15 pm Permalink London Web Standards: Web Fonts with Ben Weiner

Categories: Front End Web Development, Standards, HTML and CSS

Review: LWS January: Web Fonts with Ben Weiner at The Square Pig, 30 - 32 Procter Street, Holborn, London, WC1V 6NX 19:00 to 20:30

I've been experimenting a bit with @font-face recently so I was intrigued when I heard about this talk. It was booked out within hours of announcement but fortunately (for me) a few folk couldn't make it and the people on the waiting list got to attend.

Jeff has already done a 'live blog' of the talk, and there's a full transcript on Ben's site, so I'll just give a brief and slightly out of order outline and concentrate on the bits I found interesting.

Ben is a typography geek who got into web design so he had a good understanding of both the typographical world and the history of web fonts. This he discussed for the first third of the talk - explaining what a font is, why font design is difficult, why typography is important and why, more than twelve years after Microsoft first brought web fonts into the world with IE4 and EOT, we might finally have a chance at a solution that works for users, designers and the font foundries. One of the interesting aspects he discussed here was ligatures - where two letters, when placed consecutively, are melded into a single symbol. This happens in English fonts for things like 'ff' and 'fi', although the individual letter shapes remain, but the effect is even more marked in Arabic scripts where the combined character looks significantly different to the two (or more) which it represents. Having to deal with all this sort of stuff is why font design is so difficult, why even the 'open source' fonts have mostly been paid for (one way or another) and why font foundries have been generally paranoid about the possibility of all their hard work being stolen as soon as it's uploaded on to the internet.

Although most of the excited noises about web fonts are coming from designers, there are a few reasons why they're important. Firstly, while the font support for English and other languages based on the Roman alphabet is good, there are others where it's not so good and some (relatively popular) languages which have no font support at all in most operating systems (Telugu, a language of the Indian subcontinent, has 74 million speakers but no support in Windows). In this situation many sites have resorted to delivering their textual content as images. Delivering textual content as images is also an issue for accessibility as many designers, desperate to use their fonts of choice, resort to image replacement techniques which, if done badly, can result in poor accessibility.

This leads neatly into Ben's discussion of the various hacks people have used because of the lack of working, cross browser web fonts. In order from the stupidest to the cleverest:

  • Image replacement - Basically, write your text in Photoshop and save it as an image, then put it in your page. An accessibility nightmare, particularly if the designer 'forgets' to provide the alt content.
  • Fahrner image replacement - In this technique the normal text is left in the HTML and the 'rendered' font is put in a background image. CSS is then used to move the normal text out of the viewport, leaving only the background image. Less stupid than straight image replacement, but still not perfect - no support for text resizing and you have to generate all your images in advance.
  • sIFR - Moving on to the clever hacks, sIFR works by embedding your font into a Flash movie then using Javascript to replace the headings dynamically. This was the first hack to be CMS-friendly - the text to be displayed is a run time parameter, but it introduced the additional requirements of JS and Flash.
  • Cufón - Getting to the first of the really clever hacks, Cufón uses a generator to convert your font into a set of JSON encoded outlines and then uses canvas to render it in the page.
  • Typeface.js - The cleverest of all: it makes use of a server side component with access to the Freetype library, and then applies the result according to your existing CSS.

While these libraries get the job done, they do have a number of drawbacks - sIFR's use of Flash, Cufón's requirement to specify your fonts outside of CSS, the way it splits the words up into separate elements for each letter, and the general dependence on Javascript. They also have some drawbacks on the typography front, particularly to do with ligatures and letters that change shapes in different contexts - Ben showed us a number of example slides which you can check out on his site. Also, none of them use 'real fonts' - the same things that any other application would recognise as a font file.

We moved on to web fonts and the @font-face rule. Here is what it looks like:

@font-face {
	font-family: 'CantarellRegular';
	src: url('Cantarell-Regular.eot');
	src: local('Cantarell Regular'), 
		url('Cantarell-Regular.ttf') format('truetype');
}

You can then reference your downloaded font like this:

p { font-family: "CantarellRegular"; }

All relatively straightforward, this will work in recent versions of all the major browsers - a standards compliant way to render actual fonts of your own choosing, with no need for any scripting or third party plugins, taking full advantage of typesetting capabilities already available within browsers and operating systems. Of course it's not quite that straightforward, there is some server side setup to consider, and then the ideological and technical hurdles that still need to be overcome before this all becomes practical for mainstream sites.

To deal with the ideological problem first: font foundries are still not playing ball. This is a problem because, unlike music, a font is software rather than data and gets the corresponding legal protections; also, as should be clear from some of the difficulties described above, good font design is difficult, and professional designers realise a good font is worth paying for. So, even though the technology and standards are now getting into place, font foundries are still unwilling to license 'desktop' fonts for distribution with websites.

And it's not as though there are no practical problems. First you've got to consider what adding a few fonts to the mix will do to your bandwidth requirements. Regular desktop fonts, with just normal, bold, italic and bold/italic options, exceed 100k in size, especially if they include a full Unicode character set; a font from a foundry, which is likely to have many more variations (extra heavy, expanded, smallcaps etc.), can be up to 1Mb in size. This gives you another thing to think about - what will the user see while waiting for the font to download? In Firefox they'll see the page rendered in whatever the default font is; this is likely to have a different geometry to the font you're delivering, so the page will have to be reflowed when the font arrives, which causes usability difficulties as elements users have started interacting with suddenly move around. In Safari the user will see nothing until the entire font is downloaded, making it impractical to use a large font for body text. This is the 'flash of unstyled text' problem. You might be able to subset the font to reduce download times, but then you need a font with a permissive license, which leaves you back at the ideological roadblock outlined above. You might have the idea of a single repository for web fonts to increase the chance users already have the font you want to use, similar to Google's Ajax library (and you wouldn't be the first), or even just want to share a font across multiple domains in a portfolio. In either case you've got to set up Cross-Origin Resource Sharing for Firefox.
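A minimal sketch of what that CORS setup might look like on an Apache server (assuming mod_headers is enabled; the file extensions and wildcard origin are illustrative, and you would normally restrict the origin to your own domains):

```apache
# Allow cross-origin requests for font files so Firefox will load them
<FilesMatch "\.(ttf|otf|eot|woff)$">
    Header set Access-Control-Allow-Origin "*"
</FilesMatch>
```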

Although @font-face itself has wide support, there are some differences in the details when it comes to defining families of fonts. Firefox seems to do pretty well and Safari/Chrome behave similarly; there are some issues that crop up in Opera and Konqueror, but most of the problems occur in IE. Here is how you declare a font family according to the spec:

@font-face {
    font-family: "DejaVu Serif";
    src: url("/fonts/DejaVuSerif.ttf") format("TrueType");
    font-weight: 400;
    font-style: normal;
}
@font-face {
    font-family: "DejaVu Serif";
    src: url("/fonts/DejaVuSerif-Italic.ttf") format("TrueType");
    font-weight: 400;
    font-style: italic;
}
@font-face {
    font-family: "DejaVu Serif";
    src: url("/fonts/DejaVuSerif-Bold.ttf") format("TrueType");
    font-weight: 700;
    font-style: normal;
}
@font-face {
    font-family: "DejaVu Serif";
    src: url("/fonts/DejaVuSerif-BoldItalic.ttf") format("TrueType");
    font-weight: 700;
    font-style: italic;
}
Each different weight and style has its own definition and associated font file - multiple font files can be listed and the browser should pick the first format it can handle. However, of the above, this is all IE understands:
@font-face {
    font-family: "DejaVu Serif";
    src: url("/fonts/DejaVuSerif.ttf");
}
@font-face {
    font-family: "DejaVu Serif";
    src: url("/fonts/DejaVuSerif-Italic.ttf");
}
@font-face {
    font-family: "DejaVu Serif";
    src: url("/fonts/DejaVuSerif-Bold.ttf");
}
@font-face {
    font-family: "DejaVu Serif";
    src: url("/fonts/DejaVuSerif-BoldItalic.ttf");
}

Internet Explorer ignores all the font-weight and font-style rules, so essentially all that happens (leaving aside the issue of file format) is that DejaVu Serif gets redefined repeatedly. A common workaround is to define each font style explicitly in its own font family, but this is hardly ideal.
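That workaround looks something like this - each style gets its own uniquely named family, and rules then reference the exact variant (the family names here are illustrative):

```css
@font-face {
    font-family: "DejaVu Serif Bold";
    src: url("/fonts/DejaVuSerif-Bold.ttf");
}
/* Reference the 'bold' family directly and switch off synthetic bolding */
h1 { font-family: "DejaVu Serif Bold", serif; font-weight: normal; }
```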

In the final part of his talk Ben moved on to WOFF (Web Open Font Format) - the new standard which could finally get the font foundries behind web fonts. It was initially worked on by Erik van Blokland and Tal Leming, two guys well respected in the world of type, then by Jonathan Kew and John Daggett of Mozilla, two guys with a lot of respect in the web browser world - so it was able to gain traction in both areas very quickly. The future looks good; in the meantime there are a number of startups and other websites looking to exploit the demand for web fonts in TTF and EOT formats.

This was a very useful talk on what is a hot topic in the web design world right now; I was particularly interested to learn about some of the history behind web fonts and some of the issues surrounding support of non-Latin languages, so 5 out of 5.
