Updated “Social” contacts

Given the announcement of the Google+ shutdown (for consumer accounts, which mine actually was not), I decided to take some time to clean up my own house, and thought it would be good to provide an update on where you can find me, and why.

First of all, you won’t find me on Google+ even during the next few months of transition: I fully deleted the account after using the Takeout interface that Google provides. I had not been using it except for a random rant here and there, or to reach some of my colleagues from the Dublin office.

If you want to follow my daily rants and figure out what I actually complain most loudly about, you’re welcome to follow me on Twitter. Be warned that a good chunk of it might just be first-world London problems.

The Twitter feed also gets the auto-share of whatever I share on NewsBlur, which is, by the way, what I point everyone to when they keep complaining about Google Reader. Everybody: stop complaining and just see how much more polished Samuel’s work is.

I have a Facebook account, but I have (particularly in the past couple of years) restricted it to the people I actually interact with heavily, so unless we know each other (online or in person) well enough, it’s unlikely I would accept a friend request. It’s not a matter of privacy, given that I have written about my “privacy policy”; it’s more about wanting a safe space where I can talk with my family and friends without discussions veering towards nerd-rage.

Also, a few years ago I decided that most of my colleagues, awesome as they are, should stay at arm’s length. So with the exception of a handful of people I do go out with outside the office, I do not add colleagues on Facebook. Former colleagues are more likely.

If you like receiving your news through Facebook (a bad idea to most of the tech people I know, but something the non-tech folks still seem to widely prefer), you can “like” my page, which is just a way for WordPress to be able to share the posts to Facebook (it can share to pages, but not to personal accounts, in line with what I already complained about regarding photos). The page also gets the same NewsBlur shared links as Twitter.

Talking about photos: when Facebook removed the APIs, I started posting only on Flickr. This turned out to be a bit annoying for a few of my friends, so I also set up a page for that. You’re welcome to follow it if you want random pictures from my trips, or squirrels, or bees.

One place where you won’t see me is Mastodon or other “distributed social networks”. The main reason is that I already got burnt by Identi.ca back in the day, and I’m not looking forward to a repeat of the absolute filter bubble there, or of the fact that, a few years later, all those “dents” got lost. As much as people complain that Twitter is ephemeral, I can still find my first tweet, while identi.ca just disappeared, as far as I can tell, into the middle of nowhere.

And please stop even considering following me on Keybase.

Choosing a license for my static website framework

You might remember that some time ago I wrote a static website generator and that I wanted to release it as Free Software at some point in time.

Well, right now I’m using that code to create three different websites – mine, the one for my amateur director friend, and the one for a Metal band – which is not too shabby for something that started as a way to avoid repeating the same code over and over again, and it actually grew bigger than I expected at first.

Right now, it not only generates the pages, but also the sitemap and, to some extent, the robots.txt (by providing, for instance, the link to the sitemap itself). It can generate pages that link to Flickr photos and albums, including providing descriptions and a gallery-like showcase page, and it also has some limited support for YouTube videos (the problem there is that YouTube does not have a RESTful API; I can implement REST calls through XSLT, but I don’t think I would be able to speak the GData protocol with that).
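
To give an idea of the kind of transformation involved, this is roughly what the sitemap generation could look like in XSLT 1.0. It is only a sketch: the fsws:site/fsws:page input elements and the base-url parameter are made up for the example, not the framework’s real vocabulary.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- sitemap.xsl: minimal sketch, not the framework's actual stylesheet -->
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:fsws="urn:fsws:example">

      <xsl:output method="xml" indent="yes"/>
      <!-- hypothetical parameter: the site's base URL -->
      <xsl:param name="base-url" select="'https://www.example.com/'"/>

      <!-- turn every (hypothetical) page declaration into a sitemap entry -->
      <xsl:template match="/fsws:site">
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <xsl:for-each select="fsws:page">
            <url>
              <loc><xsl:value-of select="concat($base-url, @name, '.html')"/></loc>
            </url>
          </xsl:for-each>
        </urlset>
      </xsl:template>
    </xsl:stylesheet>

Run through xsltproc with --stringparam base-url set to the real domain, this would emit a standard urlset document alongside the generated pages.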

Last week, I was cleaning up the code a bit more, because I’m soon going to use it for a new website (for a game – not a video game – that a friend of mine invented and is producing), and I ended up finding some interesting documentation from Yahoo! on providing semantic information for their search engine (and, I guess, to some extent for Google as well).

This brought up two questions for me:

  • is it worth it to keep working on this framework based on XSLT alone? As I said, Flickr support was a piece of cake, because their API is REST-based, but YouTube’s GData-based API definitely requires something “more”. At the same time, even wrapping Flickr galleries has been a bit of a problem, because I cannot properly paginate using XSLT 1.0 (and libxslt does not support XSLT 2.0, where the iterators I need are implemented). While I like the consistent generation of code, I’m starting to feel like it needs something to pre-process the data before sending it out; for instance, I could have some program filter the references to YouTube videos, write down an XML description of them downloaded through GData, and then let XSLT handle that (see the sketch right after this list). Or cache the Flickr photos (which would be very good to avoid requesting all the photos’ details every time the website is updated);
  • I finally want to publish FSWS to the public; even if – or maybe especially because – I want to discontinue it or part of it, or morph it into something less “pure” than what I have now. What I’m not sure about is which license to use. I don’t want to make it just GPL, as that implies that you can modify it and never give anything back, since you won’t redistribute the framework itself, only its results; AGPL-3 sounds more like it, but I don’t want the pages generated by the framework to fall under that license. Does anybody have an idea?
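
As a rough illustration of the pre-processing idea in the first point: an external tool could leave behind a cached XML file, and the stylesheets would only need document() to pull from it. Everything here is hypothetical – the videos.xml cache, its layout, and the <youtube> shorthand element – it is just meant to show the shape of the approach.

    <!-- videos.xml: hypothetical cache, written by a separate tool that speaks GData -->
    <videos>
      <video id="example-id">
        <title>Title fetched out of band</title>
        <description>Description fetched out of band</description>
      </video>
    </videos>

    <!-- template fragment (to sit in one of the existing stylesheets):
         expand a hypothetical <youtube ref="example-id"/> shorthand -->
    <xsl:template match="youtube">
      <xsl:variable name="video"
          select="document('videos.xml')/videos/video[@id = current()/@ref]"/>
      <div class="youtube-video">
        <h3><xsl:value-of select="$video/title"/></h3>
        <p><xsl:value-of select="$video/description"/></p>
      </div>
    </xsl:template>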

I’m also open to suggestions on how something like this should work. I really would prefer if the original content were written simply in XML: it’s close enough to the output format (XHTML/HTML5) and shouldn’t be much trouble to write. The least vague idea I have on the matter is to use multiple steps of XML conversion; the framework already uses a somewhat nasty two-pass conversion of the input document (it splits it into one branch per configured language, then processes the branches almost independently to produce the output), and since some content is generated by the first pass, it’s also difficult to make sure that all the references for links and the like are in place.
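
For what it’s worth, libxslt’s EXSLT support already makes multi-pass processing possible inside a single stylesheet, by storing the intermediate result in a variable and converting it back to a node set. A minimal sketch of the language-split idea, with a made-up <page>/<content> vocabulary rather than the framework’s real one:

    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:exsl="http://exslt.org/common"
        exclude-result-prefixes="exsl">

      <xsl:param name="lang" select="'en'"/>

      <xsl:template match="/page">
        <!-- pass 1: keep only the branch for the requested language -->
        <xsl:variable name="branch">
          <xsl:copy-of select="content[@xml:lang = $lang]"/>
        </xsl:variable>
        <!-- pass 2: render the filtered branch; exsl:node-set() makes it navigable.
             The "render"-mode templates would live in the rest of the framework. -->
        <html xmlns="http://www.w3.org/1999/xhtml">
          <body>
            <xsl:apply-templates select="exsl:node-set($branch)/content/node()"
                mode="render"/>
          </body>
        </html>
      </xsl:template>
    </xsl:stylesheet>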

It would be easier if I could write my own XSLT functions: I could just replace an element referring to a YouTube video with a reference to a (cached) XML document, and do the same for Flickr photos. But to do so, I guess I’ll either have to use JavaScript and an XSLT processor that supports it, or write my own libxslt-based processor that understands some special functions to deal with GData and the like.
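
Short of a custom processor, libxslt does implement EXSLT’s user-defined functions, which at least lets the stylesheets wrap this kind of lookup behind something that reads like a native function. A sketch, where the function name and the cached photo attributes are invented, and the URL layout assumes Flickr’s classic farm-based static photo URLs:

    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:func="http://exslt.org/functions"
        xmlns:fsws="urn:fsws:example"
        extension-element-prefixes="func"
        exclude-result-prefixes="fsws">

      <!-- hypothetical helper: build a static image URL from a cached <photo> element -->
      <func:function name="fsws:flickr-photo-url">
        <xsl:param name="photo"/>
        <func:result select="concat('https://farm', $photo/@farm,
            '.staticflickr.com/', $photo/@server, '/',
            $photo/@id, '_', $photo/@secret, '.jpg')"/>
      </func:function>

      <!-- usage elsewhere: <img src="{fsws:flickr-photo-url($photo)}" alt=""/> -->
    </xsl:stylesheet>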

Facebook, usefulness of

It seems to me like Facebook is seen in either of two ways: either the coolest website in the world, or the most useless one. Given that I am subscribed too, one would expect me not to be in the latter category; but I really would take offence if you put me in the former one!

Indeed, I actually fall into a “third way” category: I find it useful up to a point, but not very much. I do use it, and I have both friends and a few work-related contacts (not many); I also have Gentoo users and developers, but I tend to select whose requests I accept (so if I spoke with you once or twice, it’s rare; if you’re somebody I happened to collaborate with for a while at least, then I’ll probably accept). I don’t feel like it’s an invasion of my privacy, to be honest, since my statuses are almost the same there as they are on identi.ca and Twitter; my notes are usually just my own blog posts, I might do some non-idiotic meme from time to time (more on that later), and I don’t really do quizzes or use strange applications. I might have some not-so-public photos, but those are really nothing you’d have fun seeing me in, since they are usually nights out with friends; and if it were just up to me they could be public too, I don’t care. I do have my full details there: phone numbers, street address, email and IM usernames, but they are not really private details, given that my phone numbers and addresses correspond to my work details, and for the rest, well, let’s just say I don’t have much imagination and you can find me as Flameeyes almost everywhere.

So what’s the usefulness of Facebook for me at this point in time? Well, I do aggregate my blog there, to show it to my friends and let them share it so that others can read what I write (I hoped my post about Venetian public offices would end up shared more, but my friends don’t seem to be interested in real politics beyond that of parties); I reach more people who don’t follow identi.ca or Twitter, and I follow them too, so it really does not add much there either. When somebody I know, and have as a contact on Facebook, asks me for my details, well, my answer is just “Look at my Facebook profile” (it’s there for a reason). In general, it’s just another medium like this blog, like planet aggregators and so on. It does not really add much. It’s little more than an overhyped address book.

One note that is often made is that the idea of finding “people you haven’t seen in years” is pointless because… you haven’t seen them in years for a reason. Sometimes, though, it might just be that you lost contact and went different ways, but are still interested in getting back in touch and hearing from each other from time to time, so it works as a medium for that too.

And on a similar note, why do I find memes interesting, or even useful? Well, sometimes you do know somebody, or at least have met them, but don’t know them well enough to know the small personal details; memes might strengthen a bond between people by providing a chance to compare and identify similar tastes and other things. In particular, note-based (or blog-based) memes don’t require you to use stupid third-party applications to do that. Yes, I know it might sound silly, but I can use the example of an ex-classmate of mine whom I hadn’t seen in almost ten years for various reasons, until Facebook came along and we actually found we now have common interests; people grow up and change.

Unfortunately, in all this I don’t see anything that can save Facebook from its financial problems: it really does not work for advertising, most of the applications seem to be on the verge of fraud, and there is no entry fee, nor does there seem to be any particularly interesting or important paid service (as a counter-example, Flickr’s paid version, with no limit on photo uploads and access to the original images, is a service that even I pay for!). For this reason, I really don’t rely (or relay – sorry for the wordplay) on Facebook to store important information (so yes, I do keep my address book updated outside of it), and I wouldn’t be surprised if next month they started charging for the service, or if in four they closed down entirely. Nor would I miss them.

And to finish, why on earth am I writing about Facebook now? Well, I just want to warn my readers that in the next few days they might find some Italian posts talking about Facebook; that, in turn, is part of my plan to try to instruct my friends and acquaintances on how to behave on the network, and with a computer. Hopefully that will allow me to write it down once rather than re-explain it every time I have to take over a PC to clean it up from viruses and other issues.

More XSL translated websites

I have written before that I prefer static websites over CMS- or Wiki-based ones, and that with a bit of magic with XSL and XML you can get results that look damn cool. I also worked on the new xine site, which is entirely static and generated from XML sources with libxslt.

When I wrote the xine website, I reused some of the knowledge from my own website, even though the two are pretty different in many aspects: my website uses one XML file per page, with an index page, and a series of extra stylesheets that convert some even higher-level structures into the mid-level blocks that then translate to XHTML; the xine website uses a single XML file with XInclude to merge in many fragments, ending up with one single document for everything, similarly to what DocBook does.
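
For reference, the XInclude approach boils down to a single top-level document along these lines (the file names are invented), which xsltproc can expand with its --xinclude switch before applying the stylesheets:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- site.xml: one document for the whole site, pulling in per-page fragments -->
    <site xmlns:xi="http://www.w3.org/2001/XInclude">
      <xi:include href="pages/index.xml"/>
      <xi:include href="pages/about.xml"/>
      <xi:include href="pages/releases.xml"/>
    </site>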

Using the same framework, but made a bit more generic, I wrote the XSL framework (which I called, locally, “Flameeyes’s Static Website”, or fsws for short) that is generating the website of a friend of mine, an independent movie director (which is hosted on vanguard too). I chose to go down this road because he needed something cheap, and he didn’t care much about interaction (there’s Facebook for that, mostly). In this framework I implemented some transformation code that covers part of the Flickr REST API, and also a shorthand to blend in YouTube videos.
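
The Flickr part really is just the XPath document() function pointed at the REST endpoint. A minimal sketch of what such a template could look like – the <flickr-photo> shorthand and the flickr-api-key parameter are made up, and the response layout is the flickr.photos.getInfo one as I remember it:

    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

      <!-- hypothetical parameter: API key passed in by the build -->
      <xsl:param name="flickr-api-key" select="''"/>

      <!-- expand a hypothetical <flickr-photo id="..."/> shorthand -->
      <xsl:template match="flickr-photo">
        <xsl:variable name="rsp" select="document(concat(
            'https://api.flickr.com/services/rest/?method=flickr.photos.getInfo',
            '&amp;api_key=', $flickr-api-key,
            '&amp;photo_id=', @id))"/>
        <div class="flickr-photo">
          <h3><xsl:value-of select="$rsp/rsp/photo/title"/></h3>
          <p><xsl:value-of select="$rsp/rsp/photo/description"/></p>
        </div>
      </xsl:template>
    </xsl:stylesheet>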

Now I’m extending the same framework, keeping it abstracted from the actual site usage and allowing different options for setting up the pages, to rewrite my own website with a cleaner structure. Unfortunately it’s not as easy as I thought: while my original framework is extensible enough, and I was able to fold enough of my previous stylesheets’ fragments into it without changing it all over, there are a few things that I could probably share between different sites without recreating them each time, but which require me to make extensive changes.

I hope that once I’m done with the extension, I’ll be able to publish fsws as a standard framework for the creation of static websites; for now I’m going to extend it just locally, and for a selected number of friends, until I can easily say “yes, it works” – the first thing I’ll be doing then would be the xine website. But I’m sure that at the very least this kind of work is going to help me get a better understanding of XSLT that I can use for other purposes too.

Oh, and in the meantime I’d like to give credit to Arcsin, whose templates I’ve been using both for my own and others’ sites… I guess I know who I’ll be contacting if I need some specific layout.

The importance of little things

A word of warning: this post might sound totally unrelated to Gentoo; it is really meant as a metaphor for Gentoo, so if you don’t get it, please don’t rush to say this shouldn’t be on Planet Gentoo. Thanks.

There are many little things in the world that count when the numbers pile up; for instance, the energy and water problem is such that little things like not keeping the water running while brushing your teeth or shaving are very common suggestions from Greens, together with turning off the lights when leaving a room, even for a few minutes.

Little things that count

But these already look big enough to be logical; in my opinion there are even smaller things, less obvious, less “logical”, that should be considered to save energy and water. The photo above pictures one of them: it’s a shower gel bottle – a used shower gel bottle (I actually finished it up yesterday). I think it’s pretty emblematic of the problem, since it is left squeezed.

This kind of container seems quite nice: it’s usually recyclable, it holds plenty of gel so that you don’t have to buy lots of smaller containers, and so on. On the other hand, it has one huge defect in my opinion: when you’re almost done, you have a hard time getting the last of the gel out. How is that a problem? When you’re in the shower you’re not likely to turn the water off while you squeeze out the gel, and that is a waste of (hot) water.

Sure, it’s possible to just turn the bottle upside down and keep it that way when the gel is running low, but this can be quite tricky, especially with bottles like that one that have larger bottoms: the centre of gravity of that bottle, once turned upside down, is quite high, and it tends to fall over rather easily.

For this reason I prefer two other types of bottles: the ones that have the opening at the bottom, and in particular the ones that are soft and squeezable. Both types reduce the amount of work, and time, needed to get the final part of the shower gel out. Unfortunately these seem to be much less common form factors for shower gel; I don’t know why, maybe it’s a storage problem. On the other hand, using these types of bottles doesn’t require any extra time to squeeze out the last of the gel, and doesn’t require you to leave the water running any longer than at the start of the bottle.

But of course, looking at the problem from this side alone is not correct: there are more variables in play. For instance, I don’t know what the reason for not using more bottom-opening bottles is (as I said, I can guess it might be a storage problem, but how does that affect the greater scheme?), nor do I know whether the softer material has different emergent properties. Indeed, it might be that upside-down bottles get wasted more often in storage, to an extent that makes the amount of water wasted look puny, or the production line for the softer material might waste much more energy. These are the non-obvious things that, most likely, somebody is weighing behind the scenes.

So why do I call this a metaphor? Well, it can be a metaphor for quite a few things: the small and big gains that need to be weighed against the effort required along the software production line, or the linear versus proportional time problem, and so on. In general, I think it’s just one of the little annoyances of life, and one that can make you think about lots of other issues when you think about it more seriously than you’d normally do.

How can someone miss a meeting?


Well, shit happens, people, and it seems like the extraordinary meeting that was supposedly scheduled for yesterday night found Donnie and Wernfried (amne) alone in the channel.

As people seem to look at this either as a sign of council misbehaviour, or just as an escape route from a hostile council, I’d like to let people know how it is possible at all for this to happen.

Let me start by saying that yes, this was a bad mistake, and that it’s all the council members’ fault; that includes both Donnie and Wernfried as well as the five of us who missed the meeting. This does not mean, though, that it was done intentionally or that the council doesn’t care.

During the March meeting we already saw that it’s sometimes difficult for all of us to remember the exact day of the meeting. Personally, I find it much easier to remember an appointment when it happens on the same day every month (like my credit card bill, due on the 5th), rather than on the same weekday of a given week every month, which is the schedule Gentoo Council meetings have had ever since the council was established almost three years ago.

For this reason we took action in two directions: first, the council members exchanged contact information (mostly phone numbers), and second, a calendar was created on Google Calendar for us to get reminders of meetings and other such things. Although many people may dislike the choice of Google – I also dislike it, as it doesn’t work with Konqueror – it’s the only easy thing we have available at the moment, so that will be it.

Now back to last week’s meeting: with the long list of topics to discuss, we were still going two and a half hours after the meeting started. In the timezone Luca, Wernfried, Markus and I are in, that was midnight. I think Petteri is even further along the timezone line, so that would be 1 or 2 AM for him. I was quite sleepy already, and I sincerely expected the meeting to end sooner. I left basically at the time of my last line in the log (22:32 < FlameBook+> [reschedule to special I mean]), went to the bathroom, came back, checked whether somebody had called me, saw Donnie saying I was okay with the reschedule, didn’t pay much attention, closed the laptop and went to sleep. Wernfried had gone to sleep earlier, and Luca, I suppose, left at about the same time as me. Nobody else from the council but Betelgeuse and Donnie seems to be around in the log.

Did I read the summary or the log the day after? Honestly, no. I thought I had been there till the end, as we were already running late, and I don’t usually read them anyway: I’m there during the meeting, why should I read the summary? I usually read the replies but not the summaries themselves; I can tell you that Duncan on -dev made the point about the log being missing, and that I surely read. I didn’t even remember to ask what we had to do for the special meeting, and that’s entirely my fault: I should have asked. But I barely remembered saying I was okay with rescheduling, considering the lateness of the whole thing.

Now of course the problem is that life goes on, and if you don’t do something right away you most likely never will. That particular something would have been writing the meeting down in the calendar. I’ve made a habit of writing down appointments, meetings, calls, everything, in a calendar, even if it means having many calendars to keep synchronised. But as I said, it was late, and I never quite registered that the meeting had already been set for yesterday. Nobody else added the meeting to the calendar either.

This should have been a task for the council of course, but the calendar is set up in such a way that anybody can add an event and invite other people to it. At any rate, nobody did, and it’s still the council’s fault that none of the seven members wrote it down.

Ordinary meetings are announced by Mike’s mailer, but this didn’t happen this time as it wasn’t an ordinary meeting. There was no traffic on the council alias or on the mailing list reminding us of the meeting. Life goes on, the time of the extraordinary meeting arrives, and Donnie and Wernfried are the only ones to show up.

I received exactly one reminder of the meeting, from Sput (Quassel’s author), who sent me a message on Jabber. Unfortunately I’m working against a deadline and I’m working almost all day long – I’m taking the time to write this while I have dinner. When I’m working against a deadline I tend to keep IMs on a different desktop, and limit interruptions to filtered email messages and SMS. Neither arrived, so I kept going until late and then decided to sleep.

Now of course, technically the fault lies entirely with the council members. I still argue this does not trigger the 50% attendance rule, as we did hold a meeting this month anyway, and the date of the new meeting wasn’t agreed upon by all the members at the end of last week’s.

But still, am I the only one who thinks it sounds tremendously like a last exit to get rid of the council, when people point everybody at the rules and ask for a new election without having tried a single thing to make the meeting actually happen? It reminds me of somebody watching from a window while someone else gets beaten, and after the fact screaming that the police should have been there to save the victim. It’s true that the police should have been there, but are you sure you couldn’t have done something about it yourself?

Now that I’ve explained how it is possible that the meeting was missed by almost everybody, let me reiterate my official position here (and I repeat, mine, not the council’s as a whole; I’m writing as a single developer here): the 50% attendance rule does not apply to this case, as the meeting wasn’t officially scheduled through a voting process. Since some people like to bring in government legislation and other legal terms, I’d say there was no meeting of the minds on this meeting at all – sorry, pun almost certainly intended :)

And, Donnie, Wernfried, I’m not blaming you, I hope you understand. As I said, we all screwed up, and I probably deserve extra blame, as I was the one who started the idea of the calendar and should have at least remembered to ask. I wanted to document the whole thing also so that the next council, which will be elected this summer anyway, will know what not to do to make sure that people show up to meetings.

My take on compression algorithms

Biancospino - Hawthorn

I just read Klausman’s entry comparing compression algorithms, and while I’m no expert at all in the field, I wanted to talk a bit about it myself, from a power user’s point of view.

Tobias’s benchmarks are quite interesting, although quite similar in nature to many others out there comparing lzma to gzip and bzip2. One thing I found nice of him to make explicit is that lzma is good when you decompress more than you compress. This is something a lot of people tend to skip over, leading to some quite catastrophic (in my view) results.

Keeping this in mind, you can see that lzma is not really good when you compress as many times as (or more than) you decompress. When that happens is the central point here. You certainly expect a backup system to compress a lot more than it decompresses, as you want to take daily (or more frequent) backups, but hope never to need to restore one of them. For Gentoo users, another place where they compress more than they decompress is man pages and documentation: they are compressed every time you merge something, but you don’t read all the man pages and all the documentation every day. I’m sure most users never read most of the documentation that is compressed and installed. Additionally, lzma does not seem to perform as well on smaller files, so I don’t think it’s worth the extra time needed to compress the data.

One thing that Tobias’s benchmark has in common with the other lzma benchmarks I’ve seen is that it doesn’t take memory usage much into consideration. Alas, valgrind removed the massif graph that gave you the exact memory footprint of a process; it would have been quite interesting to see those graphs. I’d expect lzma to use a lot more memory than bzip2 in order to be so quick at decompression. This would make it particularly bad on older systems and in embedded use cases, where one might be interested in saving flash (or disk) space.

As for GNU’s choice of no longer providing bzip2 files, and just providing gzip- or lzma-compressed tarballs, I’m afraid the choice has been political as much as technical, if not more. Both zlib (for gzip) and bzip2 (with its libbz2) have very permissive licenses, and that makes them ideal even for proprietary software, or for free software with, in turn, permissive licenses like the BSD license. lzma-utils is still free software, but with a more restrictive license, LGPL-2.1.

While the LGPL still allows proprietary software to link the library dynamically, it is more restrictive, and will likely turn away some proprietary software developers. I suppose this is what the GNU project wants anyway, but I still find it a political choice, not a technical one. It also has an effect on users, as one has to either use the bigger gzip version or also install lzma-utils to be able to prepare a GNU-like environment on a proprietary system, for instance Solaris.

I’m sincerely not convinced by lzma myself. It takes way too much time during compression for me to find it useful for backups, which are my main compression task, and I’m uncertain about its memory use. The fact that bsdtar doesn’t support it directly yet is also a bit of a turn-off for me, as I’ve grown used to not needing three processes to extract a tarball. Doug’s concerns about the on-disk format also make it unlikely that I’ll start using it.

Sincerely, I’m more concerned about the age of tar itself: while there are ways to add information to tar that it wasn’t originally designed for, the fact that to change an archive you have to fully decompress and then re-compress it makes it pretty much impossible to use as a desktop compression format the way rar, zip and ace (and 7z, somewhat – as far as I can see you cannot remove a file from an archive) are used on Windows. I always found it strange that the only widespread archive format supporting Unix information (permissions, symlinks and so on) is the one that was designed for magnetic tapes and is thus sequential by nature…

Well, being sequential probably makes it more interesting for backing up to a flash card (and I should be doing that, by the way), but I don’t see it as very useful for compressing a bunch of random files with data in them… Okay, one of the most common use cases for desktop compression has been compressing Microsoft Office’s hugely bloated files, and both OpenOffice and, as far as I know, newer Office versions use zip files to put their XML into, but I can still see a couple of things I could use a desktop compression tool for from time to time…

Get the thorn out

Papavero - Poppy

Sometimes it’s necessary to stand by your choices even when they are controversial. We all know that by now. One nice thing about volunteer Free Software is that if you don’t like a controversial decision, you can just leave, or fork, or in any case get away from whoever made the decision you don’t like.

I left when a decision was made that I didn’t like, I came back when the situation was, to my eyes, corrected. It was and is my freedom.

It so happens that the council made a decision – probably the only real decision since the Council was formed – the first time the council actually grew the balls to do something even though it wasn’t going to please everyone.

Am I happy about the decision? Well, not really, as it seems somewhat silly to me that we had to go down the road of actually making it. But I’m not displeased by the outcome either; I think we should have taken this decision a long time ago, actually.

To stop speaking like an abstract class in C#: the decision was to retire a number of developers who, from what the Council could gather and agree upon, are all considered poisonous to the project. Poisonous does not mean they have made zero contributions, just that their contributions are overshadowed by the disruption they cause to the wellbeing of the project. This disruption is made up of a lot of actions, not just one or two. They might not even be huge by themselves, but when there are a lot of them, their individual size stops mattering (the so-called death of a thousand cuts).

This is not meant as a signal that you shouldn’t be criticising Gentoo. Criticism is welcome if it is constructive. You can also work in parallel on competing products (hell, Greg KH is listed as a Gentoo developer but works for Novell!), just as long as you don’t start using your rights as a Gentoo developer to force people to move to something else, I’d suppose.

It doesn’t even relate only to actions on the Forums, as our Forums admins are able to tackle those problems on their own (and I do trust them with it). It relates, once again, to a lot of small things.

In general, the signal we’re trying to send is “don’t poison your contribution to Gentoo”. You can criticise, you can joke, but if the people you joke about don’t laugh with you, then apologise and stop! Otherwise you’re just walking poison and we’re going to get rid of you, sooner or later. Hopefully sooner next time, before developers resign or reduce their involvement because of your actions.

For the Italian readers who read my political rant from yesterday (for those who can’t read Italian, it’s a piece about job politics, what Italian unionists and politicians do and how it harms the system), you can see a slight similarity between the two issues. In both cases you have to get rid of some people to avoid everybody being left out at some point.

Oh, and if we had wanted to get rid of the people working on Paludis, you could expect all of them to be gone, so no, that’s not the cause either; it’s just incidental.

And for what it’s worth, nobody is trying to get rid of everybody they disagree with. Otherwise Donnie and I would be trying to get rid of each other ;) As I said before, we don’t always agree on how to proceed with things, and we can often be found on opposite sides of an argument. Still, we work together, and I’d say quite happily, precisely because of our difference in views: it usually stops us from going to extremes. But you’ll never find me and Donnie exchanging snide comments, or insulting each other.

In Italy it’s officially spring, it seems; this spring cleaning was long overdue.

A few words from the cabling ladder

Don’t worry: to Ciaran’s disappointment, I haven’t fallen off the face of the Earth just yet. Although I’ve been keeping a lower profile over the last few days, there is no doubt I’ll soon be back in full force!

What I’m taking care of right now is mostly fixing my house’s wiring. It seems like all the light switches in the house are connected to the return line rather than to the phase line, so they aren’t really suitable for my idea of moving to LEDs as soon as that becomes feasible. Also, I’ve been replacing some 20-year-old power sockets with a decent, more recent design (for those interested, I’m using Vimar Idea everywhere inside the house; I haven’t yet chosen the series to use for the outdoor ones, but they also need to be replaced).

Also, not sure if you’ve seen it, but I’ve had an article published on LWN! I have to say this is as exciting as the first day of school. I hope I’ll be able to write something more after this. Thanks to the users who suggested I try, and to Donnie who actually convinced me :)


My current working setup… temporary of course!

As I said, I’m limited to the laptop right now, which not only limits my activity in Gentoo, but also makes it harder for me to write on the blog, as it keeps me from leaving a Konqueror window open with my current draft when I’m still half-thinking about something. Only two days ago did I manage to find a way to share data between my Boot Camp Windows XP installation and my Parallels one, by using iSCSI on Enterprise. Getting Parallels to access the external hard drive in a stable fashion was probably asking too much. I have to say iSCSI looks seriously cool over Gigabit.

For the time being, I’d suggest you try out FriendFeed; you’ll find me there as Flameeyes. My reason for saying this is that I’m using Google Reader to keep myself updated on Planets and blogs in general, and I’ve started “sharing” interesting posts that I’d like to write about once Enterprise is back online with its screens. You’ll also see the photos I upload to Flickr (if only it had Anobii support…).

Oh, and I’ve almost finished Ratchet and Clank: Tools of Destruction in Challenge mode (that is, the second time you play it, with extra weapons and tougher enemies). I suppose I’ll be buying Devil May Cry 4 sooner than expected. Playing some games at night, when I’m just too tired to work but don’t feel like reading, is quite relaxing indeed; I just wasn’t used to it.

Migrating from iPhoto to DigiKam

For my photos I’ve been using iPhoto up to now. The reason for this is that the biggest part of my photo collection is actually made up of my sister’s photos, which I downloaded directly with the MacBook Pro when she asked me to.

On the other hand, I use my Linux box far more often, and while iPhoto is a nice tool, DigiKam is not bad either, so I’d gladly move everything to Enterprise; the mobility advantage for the photos was lost anyway once I moved everything onto the external hard drive (as I now have more than 10GB of photos).

Unfortunately this migration is not going to be painless, I’m afraid, especially because there are a few features I might lose unless I can spend some time writing stuff on my own.

The first problem would be having a way to protect the photos from the risk of data loss; the quick way would be to put them on RAID. At the moment the photos are backed up by Time Machine too, so I don’t risk losing them. I don’t think putting them in my /home directory would be a good idea: it’s just 14GB and the photos will soon outgrow that. I wonder if LVM can mirror partitions easily.

Then there’s the nice option of storing the photos on the iPod; I don’t think Amarok can load them, can it? I have to check that out. And having them on the AppleTV too.

On the Wiki there are instructions on how to set up a DPAP (Digital Photo Access Protocol; I suppose it’s a relative of DAAP) server to share the photos with iPhoto and the AppleTV. I have to check that out, maybe by writing an ebuild.

Of course the easiest way to handle this would be to have DigiKam provide DPAP support itself ;)

Another point I’d like to address is Flickr uploading. To upload to Flickr, at the moment I’m using FlickrUploadr, as the only plugin that allows it in iPhoto is proprietary commercial software. kFlickr is better than FlickrUploadr, but still isn’t well integrated (you can’t just select an album and say “upload to Flickr”).

Probably some of these concerns will be addressed by the KDE 4 version of DigiKam – I expect that to happen – but now I’m wondering whether I should migrate already or wait for those to be done…

On the other hand, I wonder if anybody is working on reverse engineering the protocol iTunes uses to talk to the AppleTV; it would be nice to be able to control it through Amarok or DigiKam.

/me adds stuff to his TODO, which is probably way too big for this to ever happen.

But I certainly hope someone will write this stuff for me :) After all, we all do our part in the greater Free Software plan!