boogdesign blog

Web design, web development, standards compliance, Linux, events I went to related to that, and random things I found on the internet.

Rob Crowther, London, UK based Blogger, Web Developer, Web Designer and System Administrator - read my Curriculum Vitae

Buy my book!

Hello! HTML5 and CSS3 available now

Buy my other book!

Early access to HTML5 in Action available now

08/06/08

12:30:36 am - Amarok Issues with Ogg on Kubuntu

Categories: Linux, Debian / Ubuntu

I just spent an hour fannying around with Amarok because it suddenly refused to play Ogg Vorbis encoded files. The correct solution turned out to be in this thread on the Ubuntu forums. The problem I was getting in Amarok was a message like this every time I tried to add an .ogg file to the playlist:

Some media could not be loaded (not playable)

Several of the solutions I came across recommended removing my Amarok profile, which then meant a good ten minutes waiting for it to re-scan my collection before I could see if the solution worked. Which it didn't.

A few more helpful solutions pointed the finger at Xine, the sound engine which Amarok uses, though most of the solutions were based around installing missing codec packages I already had (I was playing these same files in Amarok with no problems last week). However, when I tried playing the files in Xine directly I got another error message about there being 'no demuxer plugin available'. Again I spent some time trawling through solutions which blamed Grip for using incompatible versions of the Ogg Vorbis codec, or generating incompatible ID3 tags, but these were unhelpful.

The solution, when I found it in the forum thread above (which links to this Fedora-related blog post), was very simple - exit Xine/Amarok, remove the catalog.cache and restart:

rm ~/.xine/catalog.cache



03/06/08

10:28:10 pm - Use the new microformats API in your Firefox 3.0 Extensions

Categories: Semantic Web News

I've had another article published on developerWorks, a short one this time, which looks at using the new Microformats API in the upcoming Firefox 3.0 from within an extension. Since it's purposely a short article I had to gloss over some of the background steps, so if you have any questions ask in the related forum topic and I'll try and point you in the right direction.
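
The article has the full details; this sketch is from memory rather than lifted from it, so treat the specifics (the module path, the options argument, the gBrowser reference) as assumptions to check against the real documentation:

// A minimal sketch of counting and extracting hCards from the current page
// in privileged extension code. The module path and signatures are my
// recollection of the Firefox 3.0 API - verify them before relying on this.
Components.utils.import("resource://gre/modules/Microformats.js");

var doc = gBrowser.contentDocument; // assumes a browser.xul overlay context
alert(Microformats.count("hCard", doc, {}) + " hCards found");
var cards = Microformats.get("hCard", doc, {});
for (var i = 0; i < cards.length; i++) {
  // each result is an object whose properties mirror the microformat schema
  dump(cards[i].fn + "\n");
}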



01/06/08

12:29:23 am - In The Brain of Peter Elst: The evolution of the Flash Platform & ActionScript 3.0

Categories: Front End Web Development

Review: The evolution of the Flash Platform & ActionScript 3.0 at Skills Matter, Sekforde Street, London 18:30 to 20:00

This was my final London Web Week event. My curiosity about Flash had been piqued somewhat by my attendance at the onAIR London event, even though I don't usually follow developments in this area, so I hoped this would be a good talk for getting me more up to speed.

I was late for the start of the talk, but I don't think I missed too much. When I got there Peter was talking about the use of Flash in digital art installations, I assume as part of a segment on 'good' uses of Flash. He then moved on to some of the new features in ActionScript 3, the highlights of which are: a new, more consistent API based on ECMAScript 4; E4X; a new Event model; and support for binary sockets (which allows connections to arbitrary network services). Peter then discussed the many tools available for authoring and delivering Flash content with AS3; one of the refreshing things about the list was the number of things on it which had recently been released as open source.

Next, common Flash myths were addressed. The first was SEO - while it's true that search engines will struggle to crawl an all-Flash site, they can, thanks to the Flash Search Engine SDK, extract the static text and links from Flash movies, and Adobe are currently working with Google and others on ways for search engines to extract dynamic text and understand context. A related myth is that it's impossible to do deep linking on a Flash based site, but with the SWFAddress library it's possible to provide URLs for particular elements of your movie, as well as support for the back and forward buttons. Flash has traditionally required non-validating markup on its underlying HTML page, but with SWFObject that is also a thing of the past. Finally, Peter expressed his frustration at people continually comparing AIR and Silverlight: AIR is a cross platform runtime environment whereas Silverlight is a browser plugin (actually, more like Flash) - they are different things and there's no point comparing them.
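
Since SWFObject came up, the approach is worth a quick illustration because it also bears on the SEO myth - the fallback markup stays in the page for search engines and script-less visitors. A minimal sketch, with invented file and element names:

// SWFObject 2 sketch (movie file and element id invented). If Flash 9+ is
// available, the div's contents are replaced by the movie; otherwise the
// plain HTML fallback - crawlable, and valid markup - is left untouched.
// In the page: <div id="flashcontent">Plain HTML fallback content</div>
swfobject.embedSWF("mymovie.swf", "flashcontent", "640", "480", "9.0.0");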

Having covered the current state of 'stuff based on the Flash runtime', Peter moved on to look at the future directions for the platform. One of the big changes is Adobe's drive towards open source; some of the key projects which have been open sourced are:

  • Tamarin - the virtual machine which underlies ActionScript is now to be the engine for Javascript in Mozilla 2
  • Tamarin Tracing - a version of the virtual machine which uses trace trees to optimize JIT compilation (pdf) and should be suitable for 'constrained environments' such as low end mobile phones and other embedded devices
  • Flex SDK - previously an enterprise application server selling for big bucks, now free to download and plug in to Eclipse
  • BlazeDS and the AMF file format - similarly, an enterprise solution for feeding live data to Flash front ends now freely available
  • The Open Screen Project - possibly the most exciting of all, Adobe have removed licensing restrictions on the SWF and FLV specifications, making it legal for anyone to develop a Flash player for any platform (64bit Flash here we come?), and removed licensing fees on Flash and AIR to encourage adoption in the embedded space

Peter then went on to talk about the next version of Flash (both the authoring environment and the player), which he illustrated with several slightly hard to hear videos shot from his phone at major Adobe events. Some of the neat stuff in the Flash authoring environment was live video on the 'stage' (ie. the bit in Flash where you assemble the content), auto-tweening through drag and drop and path manipulation (no more messing about with keyframes), and inverse kinematics (draw a picture of an arm, then make it move). The player itself will support PixelBender - a language for writing image filters which will then work in Photoshop, After Effects and Flash, in real time on running movies. I thought this might be a cool way to improve accessibility in Flash - add high and low contrast versions of Flash movies at runtime. The other cool new thing we saw was Pacifica, a SIP based VOIP client in Flash (and, in future versions, AIR). It struck me, as we saw presentations on these and other technologies, that Adobe are responding to Ajax and DHTML interfaces moving into Flash's traditional 'rich application' space by heading out in directions that are unlikely to be doable in HTML and Javascript in the foreseeable future. This is definitely a good thing, as it will push the development of the web onwards, but it means that all those people who unconditionally hate Flash on websites are going to have to put up with it for much longer, because there are just going to be some things that can only be done in Flash.

The last demo we saw was from a guy at Adobe who'd written a C/C++ to ActionScript cross compiler, which allowed him to compile popular C libraries, such as libxslt, to ActionScript and make use of them in Flash movies - adding features which are just not available in native Flash. The culmination of the demo was Quake running in Flash after being ported from C to ActionScript. Most of the videos we saw are on Peter's blog if you want to check them out.

Overall this was a pretty good talk, and I got what I wanted out of it. It was quite difficult to hear what was being said on the videos as they were just playing through his laptop speakers, but I grasped enough of what was going on from the video bits. I give it 4 out of 5.


29/05/08

11:59:28 pm - WSG London Findability Meetup

Categories: Usability & Accessibility, Information Architecture, Front End Web Development, Standards, HTML and CSS, Semantic Web and Microformats

Review: Web Standards Group London Meetup at Westminster University, New Cavendish Street campus, London 19:00 to 21:00

Concepts of Findability (Cyril Doussin) - This was a whirlwind tour of the subject of findability, mostly based on Peter Morville's Ambient Findability. Since you can probably just go and read the book, I'm just going to mention a couple of things Cyril talked about that caught my ear. First, what's the difference between data, information and knowledge?

  • Data - an unevaluated set of symbols
  • Information - an evaluated, validated or parsed set of symbols
  • Knowledge - a set of symbols which have been understood

You can easily see this definition in the context of the Semantic Web project - moving the web from data/information into the realm of knowledge. Cyril then discussed several general strategies for making things findable: the "In Your Face" Discovery Principle (basically, traditional advertising); Hand Guided Navigation (web directories and drill-down hierarchical menu systems); Describe and Browse (search engines); and Recommendations (forums, mailing lists, Digg and other interactive systems). Several websites combine two or more of these to improve findability - for example, Yahoo now suggest categories for drill-down with your search results. Cyril then discussed how to measure the relevance of search results, by considering the precision (lack of false positives) and the recall (exhaustiveness) against the requirements for the type of search (for some searches recall is more important than precision and vice versa), before finishing off with a brief chat about content organisation.
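
Precision and recall are easier to get a feel for with some numbers in front of you, so here's a toy calculation (all the figures are invented for the example):

// Toy precision/recall illustration. Suppose a search returns 8 documents,
// 6 of which are actually relevant, and the collection contains 12 relevant
// documents in total.
var truePositives = 6;   // relevant documents returned
var falsePositives = 2;  // irrelevant documents returned
var falseNegatives = 6;  // relevant documents missed

var precision = truePositives / (truePositives + falsePositives); // 0.75
var recall = truePositives / (truePositives + falseNegatives);    // 0.5

// High precision but mediocre recall: fine for "find me a decent answer",
// not so good for "find me everything relevant" (think patent searches).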

Building Websites with Findability in mind (Stuart Colville) - This talk was mostly regular SEO type stuff; five basic requirements were covered:

  • Understand your potential audience - this almost goes without saying, but everything in the design and structure of your website should be driven by who your visitors are and what they're after.
  • Have compelling content - with the wealth of content already available on the web, you've got to find a way to make your content stand out. Either be completely original, which is difficult, or aim at a niche market where there isn't so much competition. This could also come from presenting existing content in a new way to better serve the needs of your target audience. Also, try and keep your content focussed - pictures of your cat might be very cute, but unless they're relevant to the main topics of your website consider hosting them elsewhere.
  • Use quality markup - this was the meaty bit of the talk and ranged across a number of sections:
    • Follow web standards in your markup - while it won't turn poor content into compelling content, it will improve the ratio of content to code on your page
    • Pay attention to your meta tags - while keywords are ignored, the description is often displayed in search engine results, so it should contain something relevant to the current page
    • Titles and headings are important. Titles always appear in search results and will also be the default link text for any of your content in social bookmarking services (ie. they're link juice keywords - check this out for an idea of how bad many people are at this - I got "1 - 10 of about 36,900,000"). Remember your h1 heading is the most important visible text on the page; for most pages on your site this will not be your company name and/or logo
    • Text content should use the semantically correct element; strong and em can be used to give particular phrases higher weight, but use them sparingly. Duplicate content is not the issue it used to be, especially if it's 'natural' (eg. a blog where the same post will normally appear on several pages)
    • Images which are purely design elements should be CSS backgrounds, images which have some data should use CSS image replacement and inline images should always be given correct metadata (where 'correct' sometimes implies 'none')
    • Microformats can improve findability, particularly after Yahoo!'s recent announcement
    • Javascript should always be unobtrusive and you should practice progressive enhancement (see the sketch after this list)
  • Always keep accessibility in mind - remember a search engine has a similar view of the web to someone using a screen reader, improving your accessibility will usually also improve your findability
  • Present no barriers to search engines
    • Website performance affects indexation - search engine spiders only spend a finite time crawling your site, so the quicker you deliver pages the more will get indexed
    • URLs are an important opportunity to add keywords, and remember "A cool URI is one which does not change." Learn to use mod_rewrite and remember to give correct HTTP responses - 301 for content which has moved, 404 for content which you need de-indexed
    • Be careful that you don't block off important parts of your site with the robots.txt file
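
Since 'unobtrusive' is easier to show than describe, here's a minimal sketch of the progressive enhancement idea mentioned above - the class name and the loadPreview helper are invented for illustration:

// Hypothetical markup: ordinary links with class="preview" pointing at real
// pages. Without Javascript the links navigate normally; with it, clicks
// are intercepted and the content is fetched inline instead.
window.onload = function () {
  var links = document.getElementsByTagName("a");
  for (var i = 0; i < links.length; i++) {
    if (links[i].className === "preview") {
      links[i].onclick = function () {
        loadPreview(this.href); // invented Ajax helper, defined elsewhere
        return false;           // suppress navigation only when enhanced
      };
    }
  }
};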

Finding yourself with Fire Eagle (Steve Marshall) - Fire Eagle is a service which helps you manage your location. It sits between your location provider (GPS device, mobile phone etc.) and your location dependent services, and presents a uniform interface to those services while also letting you control your privacy. The key point is that it breaks the tight coupling that usually exists between 'location getting device' and 'location using software', which should facilitate an explosion in location driven websites when it launches (Google are experimenting with a similar thing in Gears, so it's an idea whose time has come). One very nice feature was its use of OAuth to set privacy levels - you can determine for each service the amount of geographical detail it will get, and the API returns to the service a hierarchical object which goes down to the level of accuracy you specify (eg. country -> city -> locality -> postcode -> geo). This talk and its associated demos/examples was very interesting, but you probably need to see it in action to really grasp how cool it could be.
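
To make the hierarchy idea concrete, here's a sketch of that kind of object - the shape is invented for illustration, not taken from the actual Fire Eagle API:

// Invented illustration of a hierarchical location response. A service
// authorised only down to city level would receive the first two entries
// and nothing finer grained.
var locationInfo = {
  levels: [
    { granularity: "country",  name: "United Kingdom" },
    { granularity: "city",     name: "London" },
    { granularity: "locality", name: "Holborn" },
    { granularity: "postcode", name: "WC1V" },
    { granularity: "geo",      name: "51.517, -0.117" }
  ]
};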

Overall I enjoyed this event. Though I was already familiar with a lot of the material in Stuart's talk, it's always good to have a refresher/reminder, and I learned some new things in both the others, so 4 out of 5.


28/05/08

11:30:34 am - Microformats vEvent

Categories: Front End Web Development, Semantic Web and Microformats

It's London Web Week, which means there are lots of events on. I plan to go to three of them myself, the first of which was last night's Microformats vEvent.

Review: Microformats vEvent at The Yorkshire Grey, Holborn, London 19:00 to 21:30

The venue was a function room in a pub which wasn't ideal - there weren't enough seats for everybody and, since I was a bit late arriving, I had to stand through both presentations which made it a bit awkward to make notes.

Putting microformats on the Semantic Web with GRDDL (Tom Morris) - Tom started off by talking about "Descriptive Markup", an alternate term for semantic markup which avoids "semantic" being every third word out of his mouth. He then moved on to GRDDL and its potential for creating a decentralised data web out of any HTML pattern you use on your website. An HTML pattern doesn't have to be a Microformat; it can be any HTML, as long as it's used consistently on your site and you can provide an XSL transform to turn that pattern into useful semantic data. You can then write your own GRDDL profile to allow the data to be automatically extracted. He also pointed out that the output from a GRDDL transform needn't be RDF - it could be anything you want, such as RSS or JSON, which would allow you to very easily create an API for your website with nothing more than a few transforms (there's a sketch of the idea after the list below). In the questions at the end he compared GRDDL to CSS and Javascript: where CSS is for presentation and Javascript is for behaviour, GRDDL would be for data. For an example of the potential he suggested we check out triplr.org. Tom finished off with some "Design Patterns for the Web of Data":

  • Give everything a URI - your website, your house, your car, your pets etc.
  • Things (ie. URIs) linked together with meaning
  • Small vocabularies loosely joined - don't try and describe the whole world, stay focussed on your target domain
  • Do the right thing - practice wisdom
  • "Pragmatic" usually means you're doing it wrong
  • Don't mandate, don't limit - the less you restrict, the more room there is for freedom of expression
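
As an invented illustration of the RSS/JSON point above: a transform run over consistent event markup on a blog like this one might emit something along these lines (the shape is entirely hypothetical):

// Entirely hypothetical output: what a GRDDL-style transform over a
// consistent HTML pattern (say, hCalendar-ish event markup) might emit as
// JSON, turning the page itself into a crude API endpoint.
var events = [
  { summary: "Microformats vEvent",
    location: "The Yorkshire Grey, Holborn, London",
    dtstart: "2008-05-27T19:00" },
  { summary: "WSG London Findability Meetup",
    location: "Westminster University, London",
    dtstart: "2008-05-29T19:00" }
];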

One Big Happy Family: Practical Collaboration on Meaningful Markup (Dan Brickley) - Dan's talk was a bit more political than technical, and addressed the antagonistic attitudes which are sometimes displayed by Semantic Web folk towards Microformats folk and vice versa. Through a discussion of some of the history behind the current Semantic Web specifications at the W3C, a history which is largely obscured by the specs in their current state, he demonstrated that the two communities have far more commonalities than differences. Both communities are converging on the same goal but are taking very different routes to get there, and a bit more respect on both sides could allow each to learn from the other. That's the high level summary; the way Dan said it took longer, but it was also funnier, with a liberal sprinkling of anecdotes :)

The talks themselves were very good, but I didn't enjoy the environment much - apart from the standing, the TV screen was a bit hard to read, so 3.5 out of 5 overall.


27/05/08

05:10:00 pm - Building a compressed prototype + scriptaculous with YUI Compressor

Categories: Web Development, Front End Web Development

A newer version of protoaculous is available from Inderpreet Singh; skip the reading and download protoaculous 1.9 directly.

A while back I started using protoaculous.js, a combined and compressed version of Prototype.js and Scriptaculous in a single file. Unfortunately, as time went by and both Prototype and Scriptaculous got updated, it didn't seem like anyone was updating protoaculous in sync, so a few months ago I decided to build my own.

Of course, a day or so after I did that, John-David Dalton released a more up to date and far more complete set of files in his 'protopack', but I know some folks were already using my version because I got a request the other day to update it to use prototype.js 1.6.0.2. I figured this was a good opportunity to try and be a bit more organised about building it, plus I like not having to rely on anyone else for my updates - hence this mini-tutorial blog post.

Before starting we'll need the YUI Compressor (get the yuicompressor-x.y.z.jar file out of the build sub-directory), which itself needs a version of the Java runtime installed. The below assumes you're running on Windows, have Java set up, and have the YUI Compressor .jar file in your working directory. You'll also need a set of Scriptaculous and Prototype source files (the prototype.js file is in the 'lib' sub-folder, everything else is in 'src').

OK, so assuming we've got all the relevant stuff from the previous paragraph downloaded into our working directory, building the combined and compressed protoaculous.js takes two fairly easy steps. First off, Scriptaculous loads its dependent scripts dynamically by inserting <script> elements into the document head; since we're not going to have the separate scripts in the combined file, we need to stop it doing that. I created a file, v_scriptaculous.js, from which I've removed lines 46 to 54:

    var js = /scriptaculous\.js(\?.*)?$/;
    $$('head script[src]').findAll(function(s) {
      return s.src.match(js);
    }).each(function(s) {
      var path = s.src.replace(js, ''),
      includes = s.src.match(/\?.*load=([a-z,]*)/);
      (includes ? includes[1] : 'builder,effects,dragdrop,controls,slider,sound').split(',').each(
       function(include) { Scriptaculous.require(path+include+'.js') });
    });

And also lines 26-29:

  require: function(libraryName) {
    // inserting via DOM fails in Safari 2.0, so brute force approach
    document.write('<script type="text/javascript" src="'+libraryName+'"><\/script>');
  },

If you end up working with a newer version the procedure should be similar, though the line numbers may be different - just look for any functions which try to dynamically insert script tags. With the above two sections removed I end up with a v_scriptaculous.js file 46 lines long which, disregarding comments, looks like this:

var Scriptaculous = {
  Version: '1.8.1',
  REQUIRED_PROTOTYPE: '1.6.0.2',
  load: function() {
    function convertVersionString(versionString) {
      var v = versionString.replace(/_.*|\./g, '');
      v = parseInt(v + '0'.times(4-v.length));
      return versionString.indexOf('_') > -1 ? v-1 : v;
    }
 
    if((typeof Prototype=='undefined') ||
       (typeof Element == 'undefined') ||
       (typeof Element.Methods=='undefined') ||
       (convertVersionString(Prototype.Version) <
        convertVersionString(Scriptaculous.REQUIRED_PROTOTYPE)))
       throw("script.aculo.us requires the Prototype JavaScript framework >= " +
        Scriptaculous.REQUIRED_PROTOTYPE);
  }
};
 
Scriptaculous.load();

Now we need to combine all the files together and feed them into the YUI Compressor. To do this I've created a batch file build.bat:

@echo off
copy prototype.js + v_scriptaculous.js + builder.js + effects.js + dragdrop.js + controls.js + slider.js + sound.js c.js /b
java -jar yuicompressor-2.3.5.jar -o protoaculous.1.8.1.min.js c.js
del c.js

This creates a temporary file, c.js, which is a combination of all the Javascript files, runs the YUI Compressor to build the output file, then deletes the temporary file. I found I had to use the /b (binary) switch on the copy command, otherwise I got junk at the end of the file which caused errors in the compressor. After running the batch file we should end up with protoaculous.1.8.1.min.js sitting in our working directory.
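
As a quick sanity check - a sketch which assumes the combined file is loaded in an otherwise empty test page - both libraries should announce themselves, eg. when run from the Firebug console:

// Load protoaculous.1.8.1.min.js in a test page, then run this.
// Scriptaculous.load() runs at the end of the combined file, so a Prototype
// version mismatch would already have thrown before we get here.
if (typeof Prototype === 'undefined' || typeof Scriptaculous === 'undefined') {
  alert('Combined file is broken - check the build order');
} else {
  alert('Prototype ' + Prototype.Version +
        ' + script.aculo.us ' + Scriptaculous.Version + ' loaded OK');
}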



21/05/08

12:00:01 am - Implement Semantic Web standards in your Web site

Categories: Semantic Web News

I've had another article published on developerWorks, this time a tutorial, so you'll have to register to view it. It moves on from my previous article with some actual code examples of a lot of the stuff I talked about before. Check it out and let me know what you think!

