Today I finally put online my new website based on the fsws framework. While fsws is still not ready for release, it can already generate the whole site in a single call (but with a dual pass!), together with the sitemap (compliant with the specification) and even the robots.txt file. My reason for generating robots.txt with the rest of the site is twofold: it keeps a pointer to the sitemap, and it makes it much easier to exclude a whole subtree from crawling, by just setting parameters on the various pages.
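To give an idea of the shape of it (the URLs here are placeholders, not my actual site), such a robots.txt carries the sitemap pointer and blocks off a subtree at the same time:

```
# Sketch of the generated robots.txt; placeholder URLs.
Sitemap: http://example.com/sitemap.xml

User-agent: *
# A whole subtree excluded by setting parameters on its pages.
Disallow: /drafts/
```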
The nice thing about fsws is its very lightweight output: the whole site I wrote for my friend is less than 300KB, and requires almost no server-side handling at all. The only thing I'm forced to do is some playing with Apache's mod_rewrite to change the content type of the pages, because Internet Explorer (who else?) fails to handle properly-served XHTML content, and asks to save the pages instead of opening them.
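Something along these lines does the trick; take it as a minimal sketch of the idea, not the actual rules fsws spits out:

```apache
# Serve XHTML with its proper MIME type by default...
AddType application/xhtml+xml .xhtml

RewriteEngine On
# ...but downgrade it to text/html for Internet Explorer, which would
# otherwise offer the pages for download instead of rendering them.
RewriteCond %{HTTP_USER_AGENT} MSIE
RewriteRule \.xhtml$ - [T=text/html]
```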
But together with this particular quirk, I also keep another piece of code that works rather like a web application, even though it's self-contained inside the webserver configuration: a sanity check on the browser, based on the user agent, just like the antispam filter on this blog. It checks both for older browser versions and for particular user agent signatures that indicate the presence of adware, spyware, or viruses on the requesting user's system.
When these signatures are identified, all requests for actual pages are redirected to an error-like page that warns users about the problem and asks them to update or change their browser, or to install and run an antivirus. Now, since the site is entirely static and there is no user interaction with server-side components besides the HTTP server itself, there is no real need for me to discard requests coming from unsafe clients; my only reason to implement this kind of code is public service.
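Again in mod_rewrite terms, the check boils down to something like this; the signatures and the warning page path are illustrative, not my actual list:

```apache
RewriteEngine On
# Ancient Internet Explorer versions...
RewriteCond %{HTTP_USER_AGENT} "MSIE [1-5]\." [OR]
# ...or markers that adware toolbars inject into the IE user agent
# (FunWebProducts is the classic MyWebSearch one).
RewriteCond %{HTTP_USER_AGENT} FunWebProducts
# Don't loop on the warning page itself.
RewriteCond %{REQUEST_URI} !^/unsafe\.html$
RewriteRule \.xhtml$ /unsafe.html [R=302,L]
```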
I haven't implemented the same trick on my own website yet; I'm still a bit conflicted about it. On one side, applying it means that part of the internet users will be unable to even view my site, which, given that it's also my professional site, might not be a sound business move; on the other side, if most of the sites out there (with the obvious exception of those providing tools like browsers and antivirus software) were to refuse requests from IE6 and other old browsers, maybe their widespread use would finally come to a stop.
And to what extent should I (we) refuse requests? Requiring a minimum version for any browser is a good start, but there is more to it. As I noticed, quite a few pieces of Windows spyware, adware, and trojans (especially dialers) register themselves as part of the Internet Explorer user agent string. I have no idea why they do that, maybe it's to pay some kind of commission to the trojans' authors, but we could be using this kind of information to notify users of malware present on their systems.
Unfortunately, there doesn't seem to be a comprehensive database of user agent identifiers, although with a bit of searching over a sample of logs you can easily find a lot of useful data. Also, since the whole check is currently handled through a simple redirection, I have no way to give the user any feedback about which kind of malware is on their system. I guess some quick JavaScript inside the error page itself could solve this.
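Something as simple as this would probably do; the signature table and the placeholder element are made up for the sake of the example:

```javascript
// Old-school DOM code on purpose: it has to run on the very browsers
// the page is warning about.  The signature list is illustrative only.
var signatures = {
  "FunWebProducts": "the MyWebSearch adware toolbar",
  "MSIE 5": "a hopelessly outdated version of Internet Explorer"
};

window.onload = function () {
  var ua = navigator.userAgent;
  for (var marker in signatures) {
    if (ua.indexOf(marker) !== -1) {
      // "reason" is a hypothetical placeholder element in the page.
      document.getElementById("reason").appendChild(
        document.createTextNode("Your browser identifies itself with \"" +
          marker + "\", which suggests " + signatures[marker] + "."));
      break;
    }
  }
};
```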
The only reason I wouldn't disable IE6 support yet is this (you'll have to wait a bit for it to become apparent): HR has software that it needs. HR's software only works with IE6. IT would like to push IE7 or IE8 to everyone to lessen their headache. IT cannot push IE7 until everything works with it. HR will not (or cannot) spend the money to upgrade. So until HR upgrades, no one does.
That to me sounds like a perfectly good reason _to_ exclude IE6: once HR understands that _nothing_ works with that piece of bad history anymore, they'll find the money to upgrade…
If only it worked that way. Also, lots of businesses still have internal web apps that only work right in IE6. It's a nightmare. I know that if Microsoft tanked tomorrow and all the copies of Office stopped working, we would probably go out of business before we could recover.