I hate the Facebook API

So, my friend is resuming his movie-making projects (site in Italian) and I decided to spend some time making his site better integrated with Facebook, since that is actually his main contact medium (and for what he does, it’s a good tool). The basic integration was, obviously, adding the infamous “Like” button.

Now, adding a button should be easy, no? Doing so with the equivalent Flattr button is almost immediate, so there should be no problem adding the Facebook version. Unfortunately, the documentation tends to be self-referential to the point of being mostly useless.

Reading around Google, it seems like what you need is:

  • add OpenGraph semantic data to the page; while the OpenGraph protocol itself only mandates title, url, type and image, an obscure Facebook doc shows that, for the button to work, it needs one further data type, fb:admins;
  • since we’re adding fb:-prefixed fields, we’re going to declare the namespace when using XHTML; this is done by adding xmlns:fb="http://www.facebook.com/2008/fbml" to the <html> element;
  • at this point we have to add some HTML code to actually load the JavaScript SDK; there are a number of ways to do so asynchronously, but unfortunately, unlike Flattr, the SDK cannot simply load after the rest of the page: it needs to be loaded first, right after the declaration of a <div id="fb-root" /> element that it uses to store the Facebook data.
window.fbAsyncInit = function() {
  // called by the SDK once the asynchronous load below has completed
  FB.init({ cookie: true, xfbml: true, status: true });
};

(function() {
  // inject the SDK script element; it expects the fb-root div to exist already
  var e = document.createElement('script');
  e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js';
  e.async = true;
  document.getElementById('fb-root').appendChild(e);
}());

  • then you can add a simple Like button by adding <fb:like/> in your code! In theory.
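
Putting the pieces together, the resulting page looks more or less like this; the property values are placeholders, and I’m leaving out everything not related to the button:

<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:fb="http://www.facebook.com/2008/fbml">
  <head>
    <meta property="og:title" content="Page title" />
    <meta property="og:type" content="website" />
    <meta property="og:url" content="http://example.com/page" />
    <meta property="og:image" content="http://example.com/logo.png" />
    <meta property="fb:admins" content="1234567890" />
  </head>
  <body>
    <div id="fb-root"></div>
    <!-- the asynchronous SDK loader shown above goes here -->
    <fb:like />
  </body>
</html>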

In practice, the whole process didn’t work at all for me on the FSWS-generated website, in either Chromium or Firefox; the documentation and forums don’t really give much help (most people seem to just forget fields and so on), but as far as I can tell, my page is written properly.

Trying to debug the official JavaScript SDK is something I wouldn’t wish on anybody, especially since it’s minimised. Luckily, they have released the code under the Apache License and it is available on GitHub, so I went on and checked out the code; the main problem seems to happen in this fragment of JavaScript from connect-js/src/xfbml/xfbml.js:

      var xfbmlDoms = FB.XFBML._getDomElements(
        dom,
        tagInfo.xmlns,
        tagInfo.localName
      );
      for (var i=0; i < xfbmlDoms.length; i++) {
        count++;
        FB.XFBML._processElement(xfbmlDoms[i], tagInfo, onTagDone);
      }

For completeness, dom is actually document.body, tagInfo.xmlns is "fb" and tagInfo.localName in this case should be "like". After the first call, xfbmlDoms is still empty. D’uh! So what is the code of the FB.XFBML._getDomElements function?

  /**
   * Get all the DOM elements present under a given node with a given tag name.
   *
   * @access private
   * @param dom {DOMElement} the root DOM node
   * @param xmlns {String} the XML namespace
   * @param localName {String} the unqualified tag name
   * @return {DOMElementCollection}
   */
  _getDomElements: function(dom, xmlns, localName) {
    // Different browsers behave slightly differently in handling tags
    // with custom namespace.
    var fullName = xmlns + ':' + localName;

    switch (FB.Dom.getBrowserType()) {
    case 'mozilla':
      // Use document.body.namespaceURI as first parameter per
      // suggestion by Firefox developers.
      // See https://bugzilla.mozilla.org/show_bug.cgi?id=531662
      return dom.getElementsByTagNameNS(document.body.namespaceURI, fullName);
    case 'ie':
      // accessing document.namespaces when the library is being loaded
      // asynchronously can cause an error if the document is not yet ready
      try {
        var docNamespaces = document.namespaces;
        if (docNamespaces && docNamespaces[xmlns]) {
          return dom.getElementsByTagName(localName);
        }
      } catch(e) {
        // introspection doesn't yield any identifiable information to scope
      }

      // It seems that developer tends to forget to declare the fb namespace
      // in the HTML tag (xmlns:fb="http://www.facebook.com/2008/fbml") IE
      // has a stricter implementation for custom tags. If namespace is
      // missing, custom DOM dom does not appears to be fully functional. For
      // example, setting innerHTML on it will fail.
      //
      // If a namespace is not declared, we can still find the element using
      // GetElementssByTagName with namespace appended.
      return dom.getElementsByTagName(fullName);
    default:
      return dom.getElementsByTagName(fullName);
    }
  },

So it tries to work around a missing xmlns by using the full name fb:like rather than looking for like within their own namespace! To work around it, I first tried adding this code before the FB.init call:

// always look the fb: elements up by namespace URI rather than by prefixed tag name
FB.XFBML._getDomElements = function(a,b,c) {
  return a.getElementsByTagNameNS('http://www.facebook.com/2008/fbml', c);
}

This actually should work as intended, and it finds the <fb:like /> elements just fine. Except that the behaviour of the code then went ape, and the button only worked intermittently.

At the end of the day, I decided to go with the <iframe>-based version, like most of the other websites out there do; it’s quite unfortunate, I have to say. I hope that at some point FBML is replaced with something more along the lines of what Flattr does: simple HTML anchors that get replaced, no extra namespaces to load…
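
For reference, the <iframe> version boils down to a plain element pointing at Facebook’s like.php endpoint, with the target URL and layout passed as query parameters; the exact parameter list below is from memory and should be double-checked against the plugin generator:

<iframe src="http://www.facebook.com/plugins/like.php?href=http%3A%2F%2Fexample.com%2Fpage&amp;layout=standard&amp;show_faces=false&amp;width=450&amp;height=35"
        scrolling="no" frameborder="0"
        style="border: none; width: 450px; height: 35px;"></iframe>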

The bright side is that I can actually use FSWS to replace the XFBML code with the proper <iframe> tags, and thus once it’s released it’ll allow using at least part of the FBML markup to produce static websites that do not require the Facebook JavaScript SDK to be loaded at all… now, if the FSF were to answer me on whether I can use the text of the autoconf exception as a model for my own licensing, I could actually release the code.

Choosing a license for my static website framework

You might remember that some time ago I wrote a static website generator and that I wanted to release it as Free Software at some point in time.

Well, right now I’m using that code to create three different websites – mine, the one for my amateur director friend, and the one for a metal band – which is not too shabby for something that started as a way to avoid repeating the same code over and over again; it has actually grown bigger than I expected at first.

Right now, it not only generates the pages, but also the sitemap and to some extent the robots.txt (by providing, for instance, the link to the sitemap itself). It can generate pages that link to Flickr photos and albums, including providing descriptions and having a gallery-like showcase page, and it also has some limited support for YouTube videos (the problem there is that YouTube does not have a RESTful API; I can implement REST calls through XSLT, but I don’t think I would be able to implement the GData protocol with that).

Last week, I was clearing up the code a bit more, because I’m soon going to use it for a new website (for a game – not a video game – that a friend of mine invented and is producing) and ended up finding some interesting documentation from Yahoo! on providing semantic information for their search engine (and I guess, to some extent, Google as well).

This brought up two questions to me:

  • is it worth keeping on working on this framework based on XSLT alone? As I said, Flickr support was a piece of cake, because the API they use is REST-based, but YouTube’s GData-based API definitely requires something “more”. At the same time, even the Flickr gallery wrapping has been a bit of a problem, because I cannot really paginate properly using XSLT 1.0 (and libxslt does not support XSLT 2.0, where the iteration constructs I need are implemented). While I like the consistent generation of code, I’m starting to feel like it needs something to pre-process the data before sending it out; for instance, I could have some program just filter the references to YouTube videos, write down an XML description of them downloaded via GData, and then let XSLT handle that. Or cache the Flickr photos (which would be very good to avoid requesting all the photos’ details every time the website is updated);
  • I finally want to publish FSWS to the public, even if – or maybe especially if – I then discontinue it or part of it, or morph it into something less “pure” than what I have now. What I’m not sure about is which license to use. I don’t want to make it just GPL, as that implies that you can modify it and never give anything back, since you won’t redistribute the framework, only the results; AGPL-3 sounds more like it, but I don’t want the pages generated by the framework to have to carry that license. Does anybody have an idea?

I’m also open to suggestions on how something like this should work. I really would prefer if the original content were written simply in XML: it’s close enough to the output format (XHTML/HTML5) and shouldn’t be much trouble to write. The least vague idea I have on the matter is to use multiple steps of XML conversion; the framework already uses a somewhat nasty two-pass conversion of the input document (it splits it into one branch per configured language, then processes each branch almost independently to produce the output), and since some content is generated by the first pass, it’s also difficult to make sure that all the references are there for links and the like.

It would be easier if I could write my own XSLT extension functions: I could just replace an element referring to a YouTube video with a reference to a (cached) XML document, and similarly for Flickr photos. But to do so I guess I’ll either have to use JavaScript and an XSLT processor that supports it, or write my own libxslt-based processor that understands some special functions to deal with GData and the like.
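
In the meantime, plain XSLT 1.0 can already get halfway there with the document() function, provided something else has fetched the GData description and written it to disk beforehand; a minimal sketch, where the video element, the cache path and the structure of the cached file are all made up for the example:

<!-- hypothetical <video ref="abc123" /> element in the source document -->
<xsl:template match="video">
  <!-- load the pre-fetched description from a local cache file -->
  <xsl:variable name="data"
      select="document(concat('cache/youtube-', @ref, '.xml'))" />

  <xhtml:div class="video">
    <xhtml:h2><xsl:value-of select="$data/video/title" /></xhtml:h2>
    <xsl:apply-templates select="$data/video/description" />
  </xhtml:div>
</xsl:template>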

The status of some deep roots

While there are quite a few packages that are known to be rotting in the tree, and thus are now being pruned away step by step, there are some more interesting facets to the status of Gentoo as a distribution nowadays.

While the more interesting and “experimental” areas seem to have enough people working on them (Ruby to a point, Python more or less, KDE 4, …), quite a few deeper areas are just left to rot as well, but cannot really be pruned away. This includes, for instance, Perl (for which we’re lagging behind a lot, mostly because tove is left alone maintaining that huge piece of software), and SGML, which in turn includes all the DocBook support.

I’d like to focus for a second on that latter part, because I am partly involved in it: I like using DocBook, and I actually use the stylesheets to produce the online version of Autotools Mythbuster with the packages that are available in Portage. Now, when I wanted to make use of DocBook 5, the stylesheets for the namespaced version (very useful when writing with emacs and nxml) weren’t available, so I added them, adding support for them to the build-docbook-catalog script. With time, I ended up maintaining the ebuilds for both versions of the stylesheets, and that hasn’t always been the cleanest thing, given that upstream dropped the tests entirely in the newer versions (well, technically they are still there, but they don’t work; it seems like they lack some extra stuff that is nowhere documented).

Now, I was quite content with things as they were; I just filed stable requests for the new ebuilds of the stylesheets (both variants) and I could have kept just doing that, but… yesterday I noticed that the list of examples in my guide had broken links, and after mistakenly opening a bug on the upstream tracker, I noticed that the bug was already fixed in the latest version. Which made me smell something: why did nobody complain that the old stylesheets were broken? Looking at the list of bugs for the SGML team, you can see that lots of stuff was actually ignored for way too long a time. I tried cleaning up some of it, duping bugs that were obviously the same, and fixing one in the b-d-c script, but this is one of the internal roots that is rotting, and we need help to save it.

For those interested in helping out, I have taken note of a few things that should probably be done with medium urgency:

  • make sure that all the DTDs are available in the latest release, and that they are still available upstream; I had to seed an old distfile today because upstream dropped it;
  • try to find a way to install the DocBook 5 schemas properly; right now the nxml-docbook5-schemas package installs its own copy of the Relax NG Compact file; on Fedora 11, there is a package that installs more data about DocBook 5, and we should probably use the same original sources; the nxml-docbook5-schemas package could then either be merged into that package or simply use the already-installed copy;
  • replace b-d-c, making it both more generic and based on a framework that already exists (like eselect) instead of reinventing the wheel; the XML/DTD catalog can easily be used for more than just DocBook. While I know the Gentoo documentation team does not want the Gentoo DTD to just be available as a package to install in the system (which would make it much easier to keep the nxml schemas updated, but sigh), I would love to be able to make fsws available that way (once I finish building the official schema for it and publish it; again, more on that in the future);
  • find out how one should be testing the DocBook XSL stylesheets, so that we can run tests for them; it would have probably avoided the problem I had with Autotools Mythbuster in the past months;
  • package the stylesheets for Xalan and Saxon, which are different from the standard ones; b-d-c already has support for them to a point (although not having to spell this kind of thing out in the b-d-c replacement is desirable), but I haven’t had a reason to add them.

I don’t think I’ll have much time to work on them in the future, so user contributions are certainly welcome; if you do open any bugs for these issues, please do CC me directly, since I don’t intend (yet) to add myself to the sgml alias.

Yes, again more static websites

You might remember I like static websites and that I’ve been working on a static website framework based on XML and XSLT.

Out of necessity, I’ve added support to that framework for multi-language websites; this is both because people asked for my website to be translated into Italian (since my assistance customers don’t usually know English, or at least not that well), and because I’ll soon be working on the website for a metal group that is to be available in both languages too.

Now, making this work in the framework wasn’t an easy job: as it is now, there is a single actual XML document that the stylesheet, with all its helper templates, gets applied to; it already applies a two-pass translation, so that custom elements (like the ones that I use for the projects’ page of my site – yes, I know it gets stuck when loading) are processed properly and translated into fsws-proper elements.

To make this work I then applied a similar method (although now I’m starting to feel like I did it in the wrong order): I create a temporary document, keeping only the elements that either have no xml:lang attribute or carry the proper language, once for each language the website is written in. Then I apply the rest of the processing over this data.
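
The filtering pass itself is little more than an identity transform with one override; a simplified sketch of the idea follows (the parameter name is made up, and the real fsws code is more involved):

<xsl:param name="fsws.lang">en</xsl:param>

<!-- identity transform: copy everything through by default -->
<xsl:template match="@*|node()">
  <xsl:copy><xsl:apply-templates select="@*|node()" /></xsl:copy>
</xsl:template>

<!-- keep language-tagged elements only when they match the requested language -->
<xsl:template match="*[@xml:lang]">
  <xsl:if test="@xml:lang = $fsws.lang">
    <xsl:copy><xsl:apply-templates select="@*|node()" /></xsl:copy>
  </xsl:if>
</xsl:template>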

Since all the actual XHTML translation happens in the final pass, this pass becomes almost transparent to the rest of the processing; at the same time, pages like the articles index can share the whole list of articles between the two versions, since I just change the section element of the intro instead of creating two separate page descriptions.

Now, I’ll be opening up fsws one day, once this is all sorted out, documented and so on; for now, I’m afraid it’s still too much in flux to be useful (I haven’t written a schema of any kind just yet, and I want to do that soon so I can even validate my own websites). For now, though, I can share the configuration I’m currently using to handle the translation of the site. As usual, I don’t rely on any kind of dynamic web application to serve the content (which the framework generates in static form), but rather on Apache’s mod_negotiation and mod_rewrite (which ship with the standard distribution).

This is the actual configuration that vanguard is using to do the serving:

AddLanguage en .en
AddLanguage it .it

DefaultLanguage en
LanguagePriority en it
ForceLanguagePriority Fallback

RewriteEngine On

RewriteRule ^(/[a-z]{2})?/$     $1/home [R=permanent]

RewriteCond %{REQUEST_FILENAME} !-F
RewriteRule ^/([a-z]{2})/(.+)$ /$2.$1

(I actually have a few more rules in that configuration file but that’s beside the point now).

Of course this also requires that the MultiViews option is enabled, since that’s what makes Apache pick up the correct file without having map files around. Since the files are all named like home.en.xhtml and home.it.xhtml, requesting the explicit language as a suffix allows Apache to just pick up the correct file, without having to mess with extra per-file configuration.

Right now there are a few more things that I have to work on; for instance, the language selection at the top should really bring you to the other language’s version of the same page, rather than to the homepage. Also, a single-language site only works fine if you never use xml:lang; I should special-case that. For this to work I have to add a little more code to the framework, but it should be feasible in the next few weeks. Then there are some extra features I haven’t even started implementing but have just planned: an overlay-based photo gallery, and some calendar management for ICS and other things like that.

Okay this should be it for the teasing about fsws; I really have to find time to set up a repository for, and release, my antispam rules, but that will have to wait for next week I guess.

More XSL translated websites

I have written before that I prefer static websites over CMS- or Wiki-based ones, and that with a bit of magic with XSL and XML you can get results that look damn cool. I have also worked on the new xine site, which is entirely static and generated from XML sources with libxslt.

When I wrote the xine website, I also reused some of the knowledge from my own website, even though the two of them are pretty different in many respects: my website used one XML file per page, with an index page, and a series of extra stylesheets that convert some even higher-level structures into the mid-level blocks that then translate to XHTML; the xine website used a single XML file with XInclude to merge in many fragments, with one single document for everything, similarly to what DocBook does.
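
For those who haven’t seen XInclude at work, the single-document approach looks roughly like this; the element and file names are made up, only the XInclude namespace is the real one:

<site xmlns:xi="http://www.w3.org/2001/XInclude">
  <xi:include href="pages/home.xml" />
  <xi:include href="pages/releases.xml" />
  <xi:include href="pages/security.xml" />
</site>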

Starting from the same code, but made a bit more generic, I wrote the XSL framework (which I called locally “Flameeyes’s Static Website”, or fsws for short) that is generating the website for a friend of mine, an independent movie director (which is hosted on vanguard too). I chose to go down this road because he needed something cheap, and he didn’t care much about interaction (there’s Facebook for that, mostly). In this framework I implemented some transformation code that covers part of the Flickr REST API, and also a shorthand to blend in YouTube videos.
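
The Flickr part really is as simple as it sounds, because document() can fetch a URL directly, so a gallery element can pull the REST response in at generation time; a rough sketch from memory, with made-up element names and assuming a $flickr.apikey parameter is defined elsewhere:

<xsl:template match="flickr-set">
  <!-- fetch the REST response for the photoset straight from Flickr -->
  <xsl:variable name="rsp"
      select="document(concat('http://api.flickr.com/services/rest/?method=flickr.photosets.getPhotos&amp;api_key=', $flickr.apikey, '&amp;photoset_id=', @id))" />

  <xhtml:ul class="gallery">
    <xsl:for-each select="$rsp/rsp/photoset/photo">
      <xhtml:li>
        <xhtml:img alt="{@title}"
            src="{concat('http://farm', @farm, '.static.flickr.com/', @server, '/', @id, '_', @secret, '_s.jpg')}" />
      </xhtml:li>
    </xsl:for-each>
  </xhtml:ul>
</xsl:template>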

Now, I’m extending the same framework (keeping it abstract from the actual site usage, and allowing different options for setting up the pages) to rewrite my own website with a cleaner structure. Unfortunately it’s not as easy as I thought: while my original framework is extensible enough, and I was able to fold enough of my previous stylesheets’ fragments into it without changing it all over, there are a few things that I could probably share between different sites without recreating them each time, but they require me to make extensive changes.

I hope that once I’m done with the extension, I’ll be able to publish fsws as a standard framework for the creation of static websites; for now I’m going to extend it just locally, and for a select number of friends, until I can easily say “yes, it works” – the first thing I’ll be doing then would be the xine website. But I’m sure that at least this kind of work is going to help me get a better understanding of XSLT that I can use for other purposes too.

Oh, and in the meantime I’d like to give credit to Arcsin, whose templates I’ve been using both for my own and others’ sites… I guess I know who I’ll be contacting if I need some specific layout.

Wondering about feeds

You might have noticed in the past months a series of issues with my presence on Planet Gentoo. Sometimes posts didn’t appear for a few days, then there were issues with entries figuratively posted in the future, and a couple of Planet spam incidents really made my posts quite obnoxious to many. I didn’t like it either; it seems I had some problems with Typo when I moved from lighttpd to Apache, and then there were issues with Planet and its handling of Atom feeds and similar. Now these problems should be solved: Planet has moved to the Venus software, and it now uses the Atom feeds again, which are much more easily kept up to date.

But this is not my topic today; today I wish to write about how you can really mess it up with XML technologies. Yesterday I wanted to prepare a feed for the news on the xine website, so that it could be shown on Ohloh too. Since the idea is to use static content, I wanted to generate the feed with XSLT, starting from the same data used to generate the news page. Not too difficult, actually; I do something similar for my website as well.

But since my website only needs to sort-of work, while the xine site needs to actually be usable, I decided to validate the generated content using the W3C validator; the results were quite bad. Indeed, the content in an RSS feed needs to be escaped or just plain text: no raw XHTML is allowed.

So I turned to check Atom, which is supposedly better at these things, and is already being used for a lot of other stuff as well. That really looks like an XML technology for once, using the things that actually make it work nicely: namespaces. But if I look at my blog’s feed I see a very complex XML file. I gave up on it for a while and went back to RSS, but while the feed is simple around the entries, the entries themselves are quite a bit to deal with, especially since they require the RFC 822 date format, which is not really the nicest thing to handle (for one, it expects day and month names in English, and it’s far from easily parsed by a machine to translate into a generic date that can be rendered in the feed user’s locale).

I reverted to Atom, created a new ebuild for the Atom schema for nxml (which, by the way, fails at allowing auto-completion in XSL files; I need to contact someone about that), and started looking at what is strictly needed. The result is a very clean feed which should work just fine for everybody. The code, as usual, is available in the repository.
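
To give an idea of how little is strictly needed, a valid Atom feed is not much more than this skeleton (URLs and dates are obviously placeholders):

<feed xmlns="http://www.w3.org/2005/Atom">
  <title>xine news</title>
  <id>http://example.org/news</id>
  <link rel="self" href="http://example.org/news.atom" />
  <updated>2009-07-01T00:00:00Z</updated>
  <author><name>the xine project</name></author>

  <entry>
    <title>New release</title>
    <id>http://example.org/news/new-release</id>
    <updated>2009-07-01T00:00:00Z</updated>
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml"><p>Raw XHTML content is allowed here.</p></div>
    </content>
  </entry>
</feed>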

As soon as I have time I’ll look into switching my website to also provide an Atom feed rather than an RSS feed. I’m also considering the idea of redirecting the requests for the RSS feed on my blog to Atom, if nobody gives me a good reason to keep RSS. I have already hidden them from the syndication links on the right, which now only present Atom feeds, and those are already requested more than the RSS versions. For the ones who can’t see why I’d like to standardise on a single format: I don’t like redundancy where it’s not needed, and in particular, if there is no practical need to keep both, I can reduce the amount of work done by Typo by just hiding the RSS feeds and redirecting them from within Apache rather than letting them hit the application. Considering that Typo creates feeds for each one of the tags, categories and posts (the latter I already hide and redirect to the main feed, since they make no sense to me), that’s a huge number of requests that would be merged.

So if somebody has reasons for which the RSS feeds should be kept around, please speak now. Thanks!

The xine website: design choices

After spending a day working on the new website, I’ve been able to identify some of the problems and design some solutions that should produce good enough results.

The first problem is that the original site not only used PHP and a database, but also misused them a lot. The usual way to use PHP to avoid duplicating the style is to have a generic skin template, and then one or more scripts per page that get included in the main one depending on parameters. This usually results in a mostly-working site that, while doing lots of work for nothing, still does not bog down the server with unneeded work.

In the case of xine’s site, the whole thing loaded either a static HTML page that got included, or a piece of PHP code that would define variables which, courtesy of the main script, would then be substituted into a generic skin template. The menu was also not written once, but generated on the fly for each page request. And almost all the internal links in the pages were generated by a function call. Adding the padding to the left side of the menu entries for sub-pages was done by creating (with PHP functions) a small table before the image and text that formed the menu link. In addition to all this, the SourceForge logo was cycled on a per-second basis, which meant that a user browsing the site would load about six different SourceForge images into the cache, and that no two requests would have produced the same page.

The download, release, snapshots and security pages loaded their data on the fly from a series of flat files that contained some metadata about them, and then produced the output you’d have seen. And to add client-side waste to what was already a waste of time on the server side, the changes in shade of the left-hand menu were done using JavaScript rather than the standard CSS2 :hover option.

Probably because of the bad way the PHP code was written, the site had all crawlers stopped by robots.txt, which is a huge setback for a site aiming to be public. Indeed, you cannot find it in Google’s cache because of that, which meant that last night I had to work with the Wayback Machine to see how the site appeared earlier. And that snapshot was from one year ago, not what we had a few weeks ago. (This has since stopped being a problem, as Darren gave me a static snapshot of the thing as seen on his system.)

To solve these problems I decided a few things for the new design. First of all, as I’ve already said, it has to be entirely static after modification, so that the files served are exactly the same for each request. This includes removing the visit counters (who cares nowadays, really) and the changing SourceForge logo. This ensures that crawlers and users alike will see the exact same content over time if it doesn’t change, keeping caches happy.

Also, all the pages will have to hide their extensions, which means that I don’t have to care whether a page is .htm, .html or .xhtml. Just like on my site, all the extensions will be hidden, so even a switch to a different technology will not invalidate the links. Again, this is for search engines and users alike.

The whole generation is done with standard XSLT, without implementation-dependent features, which means it’ll work with libxslt just as well as with Saxon or anything else; although I’m going to use libxslt for now, since that’s what I’m using for my site as well. By using standard technologies it’s possible to reuse them in the future without depending on particular versions of libraries and the like. And thanks to the way XSLT has been designed, it’s very easy to decouple the content from the style, which is exactly what a good site should do to be maintainable for a long time.

Since I dislike custom solutions, I’ve been trying very hard to avoid using custom elements and custom templates outside the main skin; the idea is that XHTML usually works by itself, and adding proper CSS will take care of most of the remaining stuff. This isn’t too difficult once you get around the problem that the original design was entirely based upon tables rather than proper div elements; the whole thing has been manageable.

Besides, with this method, adding a dynamically-generated (but statically-served) sitemap is also quite trivial, since it’s just a different stylesheet applied over the same general data used for the rest of the site.
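
To give an idea, the sitemap stylesheet is little more than the following skeleton; the page element and the way the URL is built are made up for the example, since they depend on how the site data is structured:

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes" />

  <xsl:template match="/">
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <!-- one <url> entry per page found in the site description -->
      <xsl:for-each select="//page">
        <url>
          <loc><xsl:value-of select="concat('http://example.org/', @name)" /></loc>
        </url>
      </xsl:for-each>
    </urlset>
  </xsl:template>
</xsl:stylesheet>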

Right now I’m still working on fixing up the security page, but the temporary, not-yet-totally-live site is available for testing, and the repository is also accessible if you wish to see how it’s actually implemented. I’ve actually added some refinements to the xine site that I didn’t use for my own, but those will come with time.

The site does not yet validate properly, but the idea is that it will once it’s up; I “just” need to get rid of the remaining usage of tables.

Modernising XML technologies

While I’m at the hospital, I decided I won’t be wasting my time all day; although the article on daydreaming that Donnie linked is quite interesting indeed, staying here costs. It’s not the cost of the hospitalisation per se, because the health service (SSN) pays for it, but there are added costs: from the puny crossword magazines, to the phone calls (which we try to share in the family, calling in turn), to the hostel fees for my family to stay here in Verona for the weekend (and then, once the surgery is near, for my mother to stay here until they release me). And you can guess that I’ll spend my convalescence buying stuff to pass the time.

So, while I cannot feasibly work on my main programming jobs (the hard disk of the laptop is not big enough to keep all the development tools in the Windows partition on Boot Camp, nor the virtual machine with the data in it), I’m working on writing, which is something I can easily do from here even without a network connection (the article I wrote for the Italian edition of Linux Journal, two years ago, I wrote entirely offline on the iBook, and refined just a few days before sending it off to the editor).

While I used to use LaTeX for my articles, it’s not really a good option when the article has to be published online rather than printed; indeed, even tex4ht, which is not a bad tool at all, does not make it pleasant to translate LaTeX documents to HTML. For this reason, a few months ago, starting with Implications of pure and constant functions, I switched to DocBook. I’m still learning to use it, and I’m not really good at it yet, but it’s nice. And with the recent release of the fifth major version, it also starts to feel much more like XML than it did before. Which is something bad for many people, but good for me; I do like XML when it’s used in the right way for the right idea.

But the new DocBook release made me think quite a bit. Right now, the documentation for Gentoo is written using GuideXML, which is neither a subset nor a superset of DocBook; it’s a totally standalone custom format that has just a few similarities to DocBook. It would be nice if we could implement a new format for documentation as an extension to DocBook 5 (which, through the use of namespaces, would be quite easy to implement and maintain, in my opinion); we could keep all the positive sides of DocBook, including its widespread usage and knowledge, while still adding those things we need that are present in GuideXML.
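
Since DocBook 5 is defined as a Relax NG schema in its own namespace, an extension could simply add elements in a separate namespace and allow them in a customised schema; something along these lines, where the g: namespace and element are purely hypothetical:

<article xmlns="http://docbook.org/ns/docbook"
         xmlns:g="http://example.org/ns/gentoo-doc"
         version="5.0" xml:lang="en">
  <title>Example guide</title>
  <para>
    Standard DocBook content, plus
    <g:keyword>a Gentoo-specific element</g:keyword> where needed.
  </para>
</article>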

This of course would require a fair amount of work, as it means writing new stylesheets, new schemas, new conversion code, and converting a huge amount of documentation. Why should it be considered (in my opinion) if there are these drawbacks? Well, I have a few reasons.

The first is that, as I said before, DocBook is much more widely used than GuideXML, which in turn means there are many more people used to working with DocBook than with GuideXML. This makes it interesting because, even if it’s more complex, it would require less Gentoo-specific knowledge to write Gentoo documentation, and that would allow more contributions. I know that GuideXML is much less complex than DocBook, but human nature is lazy, and if you have users who already know DocBook, it’s more likely that they’d contribute if the format used in Gentoo were an extension of it rather than an independent standalone format.

The second is that, honestly, I find GuideXML too limited in some regards; in the guides I have written, like the as-needed fixing guide and the backtracing guide, I ended up using the <c> element for command parameters. The problem here, in my opinion, is that it tries to be both meat and fish: it’s semantic in some aspects but stylistic in others (<e>), it’s HTML-compatible in some regards (<p> and <pre>) but not in others (again, <e>). The result is that I honestly feel like I’m missing something from time to time; a feeling I suppose I would have with standard DocBook too, but this is why I would expect us to use an extension to it.
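
To give an idea of the difference, the same sentence marked up in the two formats would look roughly like this (from memory, so double-check against the respective references before taking it literally):

<!-- GuideXML -->
<p>Run <c>emerge --ask world</c> and read the <e>entire</e> output.</p>

<!-- DocBook -->
<para>Run <command>emerge</command> with the <option>--ask</option> option
and read the <emphasis>entire</emphasis> output.</para>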

Also, there’s an extra advantage in this: with due work it would be quite easy to have a PDF version of the guides, which would probably be quite welcome to people interested in a printed version (printer-friendly versions aren’t that printer-friendly after a while).

It would be nice to look at this together with the previous proposal from Tiziano to move the definition of the XML formats we use from DTDs to Relax NG. It’s important to remember that XML technologies are designed to be extensible, and that they have not stopped evolving: in particular, some of them (like DTDs) are by now obsolete, as they are not expressive enough and were kept mostly for compatibility with previous SGML-based markup languages; the same holds true for the older versions of DocBook.

New site style… XSL help, please :)

Okay, I wanted to blog about this yesterday, but I finished the site restyle at 4am, and since I only managed to fall asleep at 6am, I wasn’t going to blog with that insomnia. I wish to thank haran, who doesn’t know me ;) He wrote some of the most beautiful web designs for OSWD, and most of the ones I used for my site are his (the one used for the Ziotto website is a heavily reworked one of his too – note: it’s a crazy and fun Italian site by some friends of mine, and me too :P).

This time, using XSL, I was able to abstract most of the work so that I just had to change a base template file to do most of the job. I still have some problems with XSL that I want to nail down, though; for whoever wants to help me, below I’ll show what I’m trying to do that does not work as I expected… if you’re an XSL expert, please help me :‘)

Along with the main template that renders the base page layout, I also have a template that renders the “blocks” of the site (the boxes you see with content in them). They are declared this way:

<!-- simple block -->
<xsl:template name="block" match="block">
  <xsl:param name="id" select="@id" />
  <xsl:param name="title" select="@title" />
  <xsl:param name="content" select="./" />

  <xhtml:div class="darkerBox">
    <xsl:attribute name="id"><xsl:value-of select="$id" /></xsl:attribute>
    <xhtml:h1><xsl:value-of select="$title" /></xhtml:h1>
    <xsl:apply-templates select="$content" />

    <xhtml:br />
  </xhtml:div>
</xsl:template>

Originally, I just used a matching template for <block> tags and then repeated the <div> markup on every page, but that was no good, as I had to change all the stylesheets every time I changed the template.
With the params, I can just call the named template; for example, to get the articles block I use:

<xsl:template match="articles">
  <xsl:call-template name="block">
    <xsl:with-param name="id">articles</xsl:with-param>
    <xsl:with-param name="title">Articles' list</xsl:with-param>
    <xsl:with-param name="content" select="./" />
  </xsl:call-template>
</xsl:template>

and that works fine, as long as the <articles> tag contains block-like formatting.

For the downloads page, instead, I wanted to do something different, and have the content parameter be created not only by applying templates:

<xsl:with-param name="content">
  <xhtml:ul>
    <xsl:apply-templates />
  </xhtml:ul>
</xsl:with-param>

but this does not work at all.

Am I missing something basic here? Every suggestion is welcome :)