My ideal editor

[Image: Notepad Art, photo credit: Stephen Dann]

Some of you have probably read me ranting on G+ and Twitter about blog post editors. I have been complaining about them since at least last year, when Typo decided to start eating my drafts. After that near-meltdown I decided to look for alternatives for writing blog posts, first with Evernote – until they decided to reset everybody’s password and required you to type some content from one of your notes to get the new one – and then with Google Docs.

I kept using Google Docs until recently, when it started having issues with dead keys. I have been using the US International layout for years, and I’m too used to it even when I write in English; if I use a keyboard layout without dead keys, I end up adding spaces where they shouldn’t be. So even though switching the layout would work around the problem, I wouldn’t want to write a long text that way.

Then I decided to give Evernote another try, especially as the Samsung Galaxy Note 10.1 I bought last year came with a yet-to-be-activated twelve-month subscription to the Pro version. Not that I find anything extremely useful in it, but…

It all worked well for a while, until they decided to throw me into the new beta editor, which follows all the newest trends in blog editors. Yes, because there are trends in editors now! Away goes full-width editing; instead you get a limited-width editing space on a mostly-white canvas with a disappearing interface, like node.js’s Ghost, Medium, and now Publify (the new name of what used to be Typo).

And here’s my problem: while I understand that they are trying to make things that look neat and that supposedly help you “focus on writing”, they miss the point with me quite a bit. Rather than having a fancy editor, I think Typo needs a better drafting mechanism that does not puke on itself when you start playing with dates and other similar details.

And Evernote’s new editor is not much better; last week, while I was in Paris, I took half an afternoon to write about libtool – mostly because J-B had been facing some issues and I wanted to document the root causes I encountered – and after two hours of heavy writing I went back to Evernote, and the note was gone. It had asked me to log back in, even though I had logged in that same morning.

When I complained about that on Twitter, the amount of snark and backward thinking I got surprised me. I was expecting some trolling, but I had people seriously suggesting to me that you should not edit things online. What? In 2014? You’ve got to be kidding me.

But just to make that clear: yes, I have used offline editing for a while in the past, as Typo’s editor has been overly sensitive to changes too many times. But it does not scale. I’m not always on the same device: not only do I have three computers in my own apartment, I have two more at work, and then there are the tablets. It is not uncommon for me to start writing a post on one laptop, then switch to the other – for instance because I need access to the smartcard reader to read some data – or to start writing a blog post on my work laptop while at a conference, and then finish it in my room on the personal one, and so on.

Yes, I could use Dropbox for out-of-band synchronization, but its handling of conflicts is not great if you end up having one of the devices offline by mistake — better than its effects on password syncs, but not by much. Indeed I have had bad experiences with that, because it makes it too easy to start working on something completely offline, and then forget to resync it before editing it from a different device.

Other suggestions included (again) the use of statically generated blogs. I have said before that I don’t care for them and I don’t want to hear them as suggestions. First, they suffer from the same offline-working problems stated above; second, they don’t really support comments as first-class citizens: they require services such as Disqus, Google+ or Facebook to store the comments, embedding them in the page as an external iframe. Not only do I dislike the idea of farming out the comments to a different service in general, I would also be losing too many features: search within the blog, fine-grained control over commenting (all my blog posts are open to comments, but they are filtered through my ModSecurity rules), and I’m not even sure those services would allow me to import the current set of comments.

I wonder why, instead of playing with all the CSS and JavaScript needed to make the interface disappear, the editors’ developers don’t invest time in making drafts bulletproof. Client-side offline storage should allow preserving data even when you get logged out or lose network connectivity. I know it’s not easy (or I would be writing it myself), but it shouldn’t be impossible either. Right now it seems the bling is what everybody wants to work on, rather than the functionality — it probably is easier to put in your portfolio, and that is as good an explanation as any.
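
To give an idea of what I mean by bulletproof drafts, here is a minimal sketch of the client-side half: persist every edit under a draft id, so a forced logout or a dropped connection never loses text. The function names are mine, and the storage argument is anything with getItem/setItem (in a browser you would pass window.localStorage); this is an illustration, not any editor’s actual implementation.

```javascript
// Persist a draft to client-side storage, keyed by draft id, together
// with a timestamp so stale copies can be detected later.
function saveDraft(storage, draftId, text) {
  storage.setItem('draft:' + draftId, JSON.stringify({
    text: text,
    savedAt: Date.now()
  }));
}

// Recover the saved text for a draft, or null if nothing was stored.
function restoreDraft(storage, draftId) {
  var raw = storage.getItem('draft:' + draftId);
  return raw === null ? null : JSON.parse(raw).text;
}
```

Hooked up to the editor’s input events, this survives exactly the failure mode I hit with Evernote: the session dies server-side, but the text is still sitting in the browser.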

Diabetes control and its tech; should I build a Chrome app?

Now that I can download the data from three different glucometers, of two different models, with my glucometer utilities, I’ve been pondering ways to implement a print-out or display mode. The text output is fine if you’re just glancing over your readings, but when you have more than one meter it gets difficult to follow them across multiple outputs, and it’s not something your endocrinologist can usefully read.

I have a working set of changes that adds support for an SQLite3 backend for the tool, so you can download from multiple meters and then display the readings as belonging to a single series — which works fine if you are one person with multiple meters, rather than multiple users each with their own glucometer. On the other hand, this made me wonder whether I’m spending time working in the right direction.
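
Language aside (the tool itself is Python), the single-series idea boils down to something very simple, sketched here in JavaScript with made-up field names: tag each reading with the meter it came from, then merge everything into one time-ordered list.

```javascript
// Merge per-meter reading lists into a single chronological series.
// perMeter maps a meter serial to an array of { time, value } records;
// the shape of the records is illustrative, not the tool's real schema.
function mergeReadings(perMeter) {
  var all = [];
  Object.keys(perMeter).forEach(function (serial) {
    perMeter[serial].forEach(function (r) {
      all.push({ meter: serial, time: r.time, value: r.value });
    });
  });
  return all.sort(function (a, b) { return a.time - b.time; });
}
```

Keeping the meter id on each record is what makes the difference between “one person, many meters” and “many users, one meter each”: the former just ignores the tag when displaying, the latter filters on it.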

Even just adding support for storing the data locally had me looking into adding two more dependencies: SQLite3 support (which, yes, comes with Python, but needs to be enabled) and pyxdg (with a fallback) to make sure the data goes in the right folder, to simplify backups. Having a print-out or a UI that can display the data over time would add even more dependencies, meaning that the tool would only really be useful if packaged by distributions, or if binaries were distributed. While this would still give you a better tool on non-Windows OSes, compared to having no tool at all if left to LifeScan (the only manufacturer currently supported), it is still limiting.

In a previous blog post I mused about the possibility of creating an Android app that implements these protocols. This would mean reimplementing them from scratch, as running Python on Android is difficult, so the code would have to be rewritten in a different language, such as Java or C# — indeed, that’s why I jumped on the opportunity to review PacktPub’s Xamarin Mobile Application Development for Android, which will be posted here soon; before you fret: no, I don’t think using Xamarin for this would work, but it was still an instructive read.

But after discussing with some colleagues, I had an idea that is probably going to give me more headaches than writing an app for Android, while at the same time being much more useful. Chrome has a serial port API – in JavaScript, of course – which can be used by app developers. I don’t really look forward to implementing the UltraEasy/UltraMini protocol in JavaScript (that protocol is binary, struct-based, while the Ultra2’s should actually be easy, as it’s almost entirely 7-bit safe), but admittedly it solves a number of problems: storage, UI, print-out, portability, ease of use.
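
The struct-based part is less scary than it sounds, though: typed arrays make binary parsing in JavaScript perfectly workable. A sketch, stressing that the field layout below is invented for illustration and is NOT the real UltraEasy/UltraMini record format:

```javascript
// Decode one fixed-layout binary record from an ArrayBuffer using
// DataView. Hypothetical layout: a 32-bit little-endian timestamp
// followed by a 16-bit little-endian glucose value.
function decodeRecord(buffer, offset) {
  var view = new DataView(buffer, offset);
  return {
    timestamp: view.getUint32(0, true), // seconds, little-endian
    value: view.getUint16(4, true)      // mg/dL reading
  };
}
```

DataView takes care of endianness and alignment explicitly, which is exactly what you want when transcribing a C-style struct definition.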

Downloading the data to a device such as the HP Chromebook 11 would also be a terrific way to make sure you can show it to your doctor — probably more so than on a tablet, and definitely more than on a phone. And I know for a fact that ChromeOS supports PL2303 adapters (the chipset used by the LifeScan cable that I’ve been given). The only problem with the idea is that I’m not sure how HTML5 offline storage is synced with the Google account, if at all — if I were to publish the Chrome app, I wouldn’t want to have to deal with HIPAA.

Anyway, for now I’m just throwing the idea around; if somebody wants to start before me, I’ll be happy to help!

I hate the Facebook API

So, my friend is resuming his movie-making projects (site in Italian) and I decided to spend some time making the site better integrated with Facebook, since that is actually his main contact medium (and, for what he does, a good tool). The basic integration obviously meant adding the infamous “Like” button.

Now, adding a button should be easy, no? Doing so with the equivalent Flattr button is almost immediate, so there should be no problem adding the Facebook version. Unfortunately, the documentation tends to be self-referential to the point of being mostly useless.

Reading around Google, it seems what you need is:

  • add OpenGraph semantic data to the page; while the OpenGraph protocol itself only mandates title, url, type and image, an obscure Facebook doc shows that for the button to work it needs one further property, fb:admins;
  • since we’re adding fb:-prefixed fields, we need to declare the namespace when using XHTML; this is done by adding an xmlns:fb attribute (pointing at Facebook’s FBML namespace) to the <html> element;
  • at this point we have to add some HTML code to actually load the JavaScript SDK; there are a number of ways to do so asynchronously, but unfortunately, unlike Flattr’s, it cannot simply load after the rest of the page: it needs to be loaded first, following the declaration of a <div id="fb-root" /> that it uses to store the Facebook data.
window.fbAsyncInit = function() {
  FB.init({ cookie: true, xfbml: true, status: true });
};

(function() {
  var e = document.createElement('script');
  e.src = document.location.protocol + '//';
  e.async = true;
  document.getElementById('fb-root').appendChild(e);
}());

  • then you can add a simple Like button by adding <fb:like/> to your code! In theory.
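
Put together, the OpenGraph half of the recipe above amounts to a handful of <meta> elements in the page’s <head>; the values here are placeholders (fb:admins takes your numeric Facebook user id), so adapt them to your own page:

```html
<meta property="og:title" content="Page title" />
<meta property="og:url" content="http://example.com/page" />
<meta property="og:type" content="website" />
<meta property="og:image" content="http://example.com/image.png" />
<meta property="fb:admins" content="1234567890" />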

In practice, the whole process didn’t work at all for me on the FSWS-generated website, with neither Chromium nor Firefox; the documentation and forums don’t really give much help (most people seem to forget fields and so on), but from what I can tell, my page is written properly.

Trying to debug the official JavaScript SDK is something I wouldn’t wish on anybody, especially since it’s minimised. Luckily, they have released the code under the Apache License, and it is available on GitHub, so I went ahead and checked out the code; the main problem seems to happen in this fragment of JavaScript from connect-js/src/xfbml/xfbml.js:

      var xfbmlDoms = FB.XFBML._getDomElements(
        dom,
        tagInfo.xmlns,
        tagInfo.localName
      );
      for (var i=0; i < xfbmlDoms.length; i++) {
        FB.XFBML._processElement(xfbmlDoms[i], tagInfo, onTagDone);
      }

For completeness: dom is actually document.body, tagInfo.xmlns is "fb", and tagInfo.localName in this case should be "like". After the first call, xfbmlDoms is still empty. D’uh! So what is the code of the FB.XFBML._getDomElements function?

  /**
   * Get all the DOM elements present under a given node with a given tag name.
   *
   * @access private
   * @param dom {DOMElement} the root DOM node
   * @param xmlns {String} the XML namespace
   * @param localName {String} the unqualified tag name
   * @return {DOMElementCollection}
   */
  _getDomElements: function(dom, xmlns, localName) {
    // Different browsers behave slightly differently in handling tags
    // with custom namespace.
    var fullName = xmlns + ':' + localName;

    switch (FB.Dom.getBrowserType()) {
    case 'mozilla':
      // Use document.body.namespaceURI as first parameter per
      // suggestion by Firefox developers.
      return dom.getElementsByTagNameNS(document.body.namespaceURI, fullName);
    case 'ie':
      // accessing document.namespaces when the library is being loaded
      // asynchronously can cause an error if the document is not yet ready
      try {
        var docNamespaces = document.namespaces;
        if (docNamespaces && docNamespaces[xmlns]) {
          return dom.getElementsByTagName(localName);
        }
      } catch (e) {
        // introspection doesn't yield any identifiable information to scope
      }

      // It seems that developers tend to forget to declare the fb namespace
      // in the HTML tag, and IE has a stricter implementation for custom
      // tags. If the namespace is missing, the custom DOM node does not
      // appear to be fully functional: for example, setting innerHTML on it
      // will fail. If a namespace is not declared, we can still find the
      // element using getElementsByTagName with the namespace appended.
      return dom.getElementsByTagName(fullName);
    default:
      return dom.getElementsByTagName(fullName);
    }
  },

So when the xmlns declaration is missing, it tries to work around that by looking for the full name fb:like, rather than looking for like within their own namespace! To work around the workaround, I first tried adding this code before the FB.init call:

FB.XFBML._getDomElements = function(a,b,c) {
  return a.getElementsByTagNameNS('', c);
};

This actually works as intended, and finds the <fb:like /> elements just fine. Except that the behaviour of the rest of the code then went ape, and the button only worked intermittently.

At the end of the day, I decided to go with the <iframe>-based version, like most of the other websites out there; it’s quite unfortunate, I have to say. I hope that at some point FBML will be replaced with something more along the lines of what Flattr does: simple HTML anchors that get replaced, with no extra namespaces to load…
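
The <iframe> version is refreshingly dumb by comparison: the button is just an iframe pointed at Facebook’s like.php plugin endpoint, with the page URL passed as a query parameter. A sketch of building that src attribute (the href parameter is the essential one; the plugin accepts further layout parameters I’m not listing here):

```javascript
// Build the src for an <iframe>-based Like button: no SDK, no XFBML,
// just the plugin endpoint with the target page URL encoded into it.
function likeIframeSrc(pageUrl) {
  return 'https://www.facebook.com/plugins/like.php?href=' +
    encodeURIComponent(pageUrl);
}
```

This is what makes the approach suitable for static generation: the markup can be emitted once at build time, with no script loaded in the visitor’s browser.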

The bright side is that I can actually use FSWS to replace the XFBML code with the proper <iframe> tags, so once it’s released it will allow using at least part of the FBML markup to produce static websites that do not require the Facebook JavaScript SDK to be loaded at all… now, if the FSF were to answer me on whether I can use the text of the autoconf exception as a model for my own licensing, I could actually release the code.