Facebook, desktop apps, and photography

This is an interesting topic, particularly because I had not heard anything about it up to now, despite having many semi-pro and amateur photographer friends (I’m a wannabe). It appears that starting August 1st, Facebook will stop allowing desktop applications to upload photos to albums.

Since I have been uploading all of my Facebook albums through Lightroom, that’s quite a big deal for me. On Jeffrey Friedl’s website, there’s this note:

Warning: this plugin will likely cease to work as of August 1, 2018, because Facebook is revoking photo-upload privileges for all non-browser desktop apps like this.

As of June 2018, Adobe and I are in discussions with Facebook to see whether something might be worked out, but success is uncertain.

We are now less than a month from the deadline, and there appears to be no update on this. Is it Facebook trying to convince people to just share all their photos as they were shot? Is it Adobe not paying attention, preferring to push people onto their extremely expensive Creative Cloud products? (I have shot over 1TB of pictures; I can't use their online service, it would cost me far too much in storage!) I don't really know, but it clearly seems that my workflow is being deprecated.

Leaving aside the impact on me alone, I would expect that most pro and semi-pro photographers want to be able to upload their pictures without having to manually drag them through Facebook's flaky interface. And it feels strange that Facebook wants to stop "owning" those photos altogether.

But there's a bigger impact in my opinion, one that should worry privacy-conscious users (as long as they don't subscribe to the fantasy ideal of people giving up on sharing pictures): this move erodes the strict access controls over picture publishing that have defined social media up to now, for any user who has been relying on offline photo editing.

In my case, the vast majority of the pictures I take are landscapes, flowers, animals, or in general not private events. There's the odd conference or con I bring my camera to (or should I say used to bring it to), or a birthday party or other celebration. Up to now, I have been uploading all the non-people pictures as public (and copied to Flickr), and everything that involves people as friends-only (and only rarely uploaded to Flickr with "only me" access). Once the changes go into effect, I lose the ability to make those simple access control decisions.

Indeed, if I were to upload the content to Flickr with friends-only access, very few people would be able to see any of the pictures: Flickr lost whatever pretension it had to being a social media platform once Yahoo stopped being relevant. And I doubt the acquisition by SmugMug will change that, as it would require duplicating a social graph that Facebook already owns. So I'm fairly sure a very common solution is going to be to make the photos public, and maybe the account not discoverable. After all, who would ever be mining the Web for unlisted accounts of vulnerable people? (That's sarcasm, if it wasn't clear.)

In my case it's just going to be a matter of not bringing my camera to private events anymore. Not the end of the world, since I'm not particularly good at portrait photography anyway, and it's not my particular area of interest. But I do think this is going to cause quite a few problems in the future.

And if you think this is not going to be a big deal at all, because most parties have pictures uploaded by people directly from their mobile phones… I disagree. Weddings, christenings, cons, sports matches: all these events usually have their share of professional photographers, who need a way to share their output not only with the people who hired them, but also with those people's friends and guests, like the invitees at a wedding.

And I expect that for many professionals, it's going to be a matter of finding a new service to upload the data to. Mark my words: I expect that in the future we'll see leaks of wedding pictures used to dox notable people, and those will be due to insecure, or badly secured, photo-sharing websites meant to replace Facebook after this change in terms.

And you write a streaming server?

One of the things I have to curse my current job for is having to deal with Adobe Flash, and in particular with the RTMP protocol. The RTMP implementation we're using on our server is provided by the Red5 project — and they are the ones I'm going to write about now.

Last July I spent days and days looking up documentation about Red5 itself, as we couldn't reach our resident expert; at the time their whole website was unavailable and simply timing out. Yesterday they told me that this was caused by some kind of DDoS, but even if that's the case, something doesn't feel right. Especially because, when I came back from VDD12 at the beginning of September, the website was actually reachable, but serving the default configuration of a CentOS 5 system, which makes me think of a hardware failure rather than a DDoS.

Right now the website is available, but the Trac instance that should host the documentation is unreachable; a different website (Update (2016-07-29): that website is gone, sigh!) still has some documentation, but for the most part it hasn't been updated in over two years. There is also a company behind the project which, on its team page, lists their dogs among the members. Much as I appreciate companies with a funny side, it stops being funny when the project looks almost entirely dead.

But why am I complaining here? Well, what I gathered from the #red5 channel is that they blame the situation on a DDoS against their website, and on the fact that every time they try to put the wiki back online it goes down again. Uhm, okay…

Now, there are fairly decent ways to handle a DDoS that don't require spending two months changing your setup… and in general it seems very flimsy that this kind of DDoS keeps going after two months and you still can't get your documentation up. Besides, all your user and admin documentation (i.e. anything that is not developer-oriented) is only available on said wiki? Really?

So here I am, trying to figure out what to do with this hot potato of an install, with server software that is, simply put, completely unreliable (software is only as reliable and trustworthy as the people who write it, which is why you often see what look like "ad hominem" attacks against particular authors' software — it's not a fallacy, because you have to trust the author if you run the software). I'm honestly not amused.

Why do FLOSS advocates like Adobe so much?

I'm not sure how this happens, but more and more often I see FLOSS advocates supporting Adobe, and in particular Flash, in almost any context out there, mostly because Adobe now looks a lot like an underdog, with Microsoft and Apple picking on them. Rather than welcoming the idea of Flash, a proprietary software product, being cornered out of the market, they seem to cheer any time Adobe gains a little more advantage over the competition, and cry foul when someone else tries to ditch them:

  • Microsoft released Silverlight, which is evil – probably because it's produced by Microsoft, or alternatively because it uses .NET, which is produced by Microsoft. We have a Free-as-in-Speech implementation of it in Novell's Moonlight, but FLOSS advocates dump on that too: it's still evil, because there are patents on .NET and C#. Please note that the only FLOSS implementation of Flash I know of is Gnash, which is not exactly up to speed with the kind of Flash applets you find in the wild;
  • Apple's iPhone and iPad (or rather, all the Apple devices based on iOS, formerly iPhone OS) don't support Flash, and Apple pushes content publishers to move to "modern alternatives" starting from the <video> tag; rather than, for once, agreeing with Apple and supporting that idea, FLOSS advocates resort to name-calling because Apple lacks support for a ubiquitous technology such as Flash — the fact that Apple's <video> tag suggestions were tied to the use of H.264 shouldn't have made any difference, since Flash does not support Theora either, so, excluding the WebM support recently added in the 10.1 release of the Flash Player, there wouldn't be any support for "Free formats" in either case;
  • Adobe stirs up a lot of news by declaring support for Android; Google announces Android 2.2 Froyo, supporting Flash; rather than declaring Google an enemy of Free Software for helping Adobe spread their invasive and proprietary technology, FLOSS advocates start issuing "take that" comments toward iPhone users, since "their phone can see Flash content";
  • Mozilla refuses to provide any way at all to view H.264 files directly in their browser, leaving users unable to watch YouTube without Flash unless they resort to a ton of hacky tricks to convert the content into Ogg/Theora files; FLOSS advocates keep supporting them because they haven't compromised.

What is up here? Why should people consider Adobe a good friend of Free Software at all? Maybe because they control formats that are usually considered "free enough": PostScript, TIFF (yes, they do), PDF… or because some of the basic free fonts used by TeX implementations and the original X11 came from them. But none of this really sounds relevant to me: they don't provide a Free Software PDF implementation, rather they ship their own PDF reader, while the Free implementations have to chase the format, with mixed results, to keep opening new PDF files. As much as Mike explains the complexity of it all, the Linux Flash Player is far from being a nice piece of software, and their recent abandonment of the x86-64 version of the player makes it even more sour.

I'm afraid the only explanation I can give for this phenomenon is that most "FLOSS advocates" align themselves straight with, and only with, the Free Software Foundation. And the FSF seems to have a very personal war against Microsoft and Apple; probably because the two of them actually show that in many areas Free Software is still lagging behind (and if you don't agree with this statement, please have a reality check and come back again — this is not to say that Free Software is not good in many areas, or that it cannot improve to become the best), which goes against their "faith". Adobe, on the other hand, while not really helping Free Software out (sorry, but Flash Player and Adobe Reader are not enough to say that they "support" Linux; and don't try to sell me that they are not porting Creative Suite to Linux just so people would use better Free alternatives), doesn't threaten that faith so visibly, and so it seems to get a pass.

Why do I feel like taking a shot at the FSF here? Well, I have already said multiple times that I love the PDFreaders.org site from the FSFE; as far as I can see, the FSF only seems to link to it from one lost and forgotten page, just below a note about Coreboot… which doesn't make it prominent at all. Also, I couldn't find any open letter of theirs calling out PDF as a patent-risky format, a warning that is present on the PDFreaders site:

While Adobe Systems grants a royalty-free use of any patents to the PDF format, in any application that adheres to the PDF specifications, other companies do hold patents that may limit the openness of the standard if enforced.

As you can see, the first part of the sentence admits that there are patents covering the PDF format, with royalty-free use granted… by Adobe at least, but nothing is said about the other parties that might hold them.

At any rate, I feel there is a huge double standard here: anything that comes out of Microsoft or Apple, even with Free Software licenses or patent pledges, is evil; but proprietary software and technologies from Adobe are fine. It's silly, don't you think?

And for those who would still like to complain about websites requiring Silverlight to watch content, I'd like to propose asking for a different solution: don't ask them to provide the content with Flash, but rather over a standard protocol for which we have a number of Free Software implementations, and which is supported on the mainstream operating systems for both desktops and mobile phones: RTSP is such a protocol.

HTTP-like protocols have one huge defect

So you might or might not remember that my main paid job in the past months (and right now as well) has been working on feng, the RTSP server component of the lscube stack.

The RTSP protocol is based on HTTP, and indeed uses the same message format defined by RFC 822 (the same one used for email messages), and a request line "compatible" with HTTP's.
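To make the similarity concrete, here is roughly what a minimal request looks like in each protocol (host names and paths are just placeholders):

    DESCRIBE rtsp://media.example.com/stream RTSP/1.0
    CSeq: 1
    Accept: application/sdp

    GET /index.html HTTP/1.1
    Host: www.example.com
    Accept: text/html

Both consist of a request line (method, resource, protocol version) followed by RFC 822-style header fields and a terminating blank line.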

Now, it's interesting to know that this similarity between the two has been used, among other things, by Apple to implement so-called HTTP tunnelling (see the QuickTime Streaming Server manual, Chapter 1 "Concepts", section "Tunneling RTSP and RTP Over HTTP", for the full description of the procedure). This feature allows clients behind standard HTTP proxies to access the stream, creating a virtual full-duplex channel between client and server. Pretty neat stuff, even though Apple recently superseded it with the pure HTTP streaming implemented in QuickTime X.
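As a rough sketch of how the tunnelling works, as I understand it from Apple's documentation (the header values here are only illustrative): the client opens one HTTP GET connection, which the server answers and keeps open as the server-to-client channel, plus a separate HTTP POST connection on which it sends base64-encoded RTSP requests; the two halves are tied together by a shared x-sessioncookie value:

    GET /stream HTTP/1.0
    x-sessioncookie: 8e1bfa3d
    Accept: application/x-rtsp-tunnelled
    Pragma: no-cache

    POST /stream HTTP/1.0
    x-sessioncookie: 8e1bfa3d
    Content-Type: application/x-rtsp-tunnelled
    Content-Length: 32767

    <base64-encoded RTSP requests, e.g. DESCRIBE/SETUP/PLAY, follow as the POST body>

As I understand it, the large Content-Length on the POST is an arbitrary placeholder, since the client cannot know in advance how many encoded requests it will end up sending over that connection.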

For LScube we want to implement at the very least this feature, both server and client side, so that we can get on par with QuickTime's features (implementing the new HTTP-based streaming is part of the long-haul TODO, but that's beside the point now). To do that, our parser has to be able to accept HTTP requests and deal with them appropriately. For this reason, I've been working on replacing the RTSP-specific parser with a more generic parser that accepts both HTTP and RTSP. Unfortunately, this turned out not to be an easy task.

The main problem is that we wanted to make as few passes over the request line as possible to get the data out; when we only supported RTSP/1.0 this was trivial: we knew exactly which methods were supported, which ones looked valid but weren't supported (like RECORD), and which ones were simply invalid to begin with, so we set the method's value as we passed over it and then moved on to check the protocol. If the protocol was not valid we didn't care about the method anyway; at worst we had to pass through a series of states for no good reason, but that wasn't especially bad.

With the introduction of a simultaneous HTTP parser, the situation became much more complex: the methods are parsed right away, but the two protocols have different method sets: GET is supported in HTTP but is merely valid-yet-unsupported in RTSP, and vice versa for PLAY. With a simple union of the two state machines, the actions handling the parsed method for the two protocols would end up executing simultaneously, and that, quite obviously, couldn't be the right thing to do.

Now, it's easy to see that what we needed was a way to discern which protocol we're parsing first, and then proceed to parse the rest of the line accordingly. But this is exactly what I think is the main issue with HTTP and all the protocols, like RTSP or WebDAV, that derive from or extend it: the protocol specification sits at the end of the request line. Since you usually parse a line in Latin reading order (left to right), you read the method before you know which protocol the client is speaking. This is easily handled by backtracking parsers (I guess LALR parsers is the correct term, but parsers aren't usually my field of work, so I might be mistaken), since they first scan the text to identify which grammar to apply, and then apply it; Ragel is not such a parser, while kelbt (by the same author) is.
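Absent a proper backtracking parser, the brute-force way to express the idea is to peek at the end of the request line first, and only then interpret the method. This is just a minimal sketch of the approach in plain C, with hypothetical names (identify_protocol, protocol_t), not the actual feng/Ragel code:

    #include <string.h>
    #include <stdio.h>

    typedef enum { PROTO_UNKNOWN, PROTO_HTTP, PROTO_RTSP } protocol_t;

    /* Look at the *end* of the request line for the protocol token,
     * e.g. "DESCRIBE rtsp://host/stream RTSP/1.0" -> PROTO_RTSP.
     * The line is assumed NUL-terminated, without the trailing CRLF. */
    static protocol_t identify_protocol(const char *request_line)
    {
        const char *last_space = strrchr(request_line, ' ');
        if (last_space == NULL)
            return PROTO_UNKNOWN;

        const char *proto = last_space + 1;
        if (strncmp(proto, "RTSP/", 5) == 0)
            return PROTO_RTSP;
        if (strncmp(proto, "HTTP/", 5) == 0)
            return PROTO_HTTP;
        return PROTO_UNKNOWN;
    }

    int main(void)
    {
        /* Once the protocol is known, the method at the start of the line
         * can be checked against the right table (PLAY for RTSP, GET for HTTP). */
        printf("%d\n", identify_protocol("DESCRIBE rtsp://example.com/stream RTSP/1.0")); /* 2 = RTSP */
        printf("%d\n", identify_protocol("GET /tunnel HTTP/1.1"));                        /* 1 = HTTP */
        return 0;
    }

The obvious cost is the extra pass over the line, which is exactly what the single Ragel machine was trying to avoid.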

Time constraints, and the fact that kelbt is even more sparsely documented than Ragel, mean that I won't be trying kelbt just yet; for now I have settled on an overcomplex and nearly unmaintainable workaround to have something working (since the parsing is going to be a black-box function, the implementation can easily change in the future once I learn a decent way to do it).

This whole thing would definitely have been simpler if the protocol specification were at the start of the line! Then we could simply have chosen how to parse the rest of the line depending on the protocol.
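Purely as a hypothetical illustration (no such variant of either protocol exists), a protocol-first request line would let a single-pass parser commit to the right method table immediately:

    RTSP/1.0 DESCRIBE rtsp://media.example.com/stream
    HTTP/1.1 GET /index.html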

At this point I'm definitely not surprised that Adobe didn't use RTSP and instead invented their own Real-Time Messaging Protocol, which is not based on HTTP but is rather a binary protocol (which should also make it easier to parse, to an extent).