I’m in my network, monitoring!

While I originally came here to Los Angeles to work as a firmware development engineer, I’ve ended up doing a bit more than I was called for: in particular, it seems I’ve been enlisted to work as a system/network administrator as well. That’s not so bad, to be honest, even though it still means I have to deal with a number of old RedHat and derivative systems.

As I said before this is good because it means that I can work on open-source projects, and on Gentoo maintenance, during work hours, as the monitoring is done with Munin and, lately, Icinga, running on Gentoo. The main issue is of course having to deal with so many different versions of RedHat (there is at least one RHEL3, a few RHEL4, a couple of RHEL5, almost all of them without subscriptions, plus some CentOS 5; luckily, the new servers are Gentoo), but there are other issues as well.

Last week I started looking into Icinga to monitor the status of services: while Munin is good for seeing how things change over time and getting an idea of “what happened at that point”, it’s not very good if you just want to know “is everything okay right now?”. I also find most Munin plugins simpler to handle than Nagios’s (which are what Icinga would be using), and since I already want the data available on graphs, I might just as well forward the notifications from Munin. This of course does not apply to boolean checks, which would be pretty silly to implement in Munin.
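The Munin side of that forwarding boils down to defining a contact in munin.conf that pipes limit breaches through send_nsca; a minimal sketch, where the monitoring hostname and the path to the send_nsca client configuration are placeholders:

```
# /etc/munin/munin.conf — forward warnings/criticals to the monitoring host
contact.nagios.command /usr/bin/send_nsca -H monitor.example.com -c /etc/nagios/send_nsca.cfg
contact.nagios.always_send warning critical
```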

There is some documentation on the Munin website on how to set up Nagios notifications, and it mostly works flawlessly for Icinga, with one difference: you have to change the NSCA configuration, as Icinga uses a different command file path and a different user, so both have to be set up accordingly.

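Something along these lines in nsca.cfg should do the trick; the exact paths and names are assumptions that depend on how Icinga is packaged on your system:

```
# nsca.cfg — point NSCA at Icinga instead of Nagios
# (paths and user/group below are examples, check your installation)
command_file=/var/lib/icinga/rw/icinga.cmd
nsca_user=icinga
nsca_grp=icinga
```
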
I’m probably going to make the init script use a selectable configuration file and install two pairs of configuration files, one in /etc/icinga and the other in /etc/nagios, so that each user can choose which ones to use. This should make it easier to set up.
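To give an idea, the OpenRC side of that could look like the following conf.d snippet; the variable name and its default are hypothetical, since none of this is implemented yet:

```
# /etc/conf.d/nsca (hypothetical sketch)
# Select which configuration NSCA should load:
#   /etc/icinga/nsca.cfg for Icinga
#   /etc/nagios/nsca.cfg for Nagios
NSCA_CONFIG="/etc/icinga/nsca.cfg"
```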

So while I don’t have much to say for now, and I have had little time to post in the past few days, my plan, in regard to Icinga and Munin, consists primarily of cleaning up the nagios-plugins ebuild (right now it just dumps all the contrib scripts without caring about them at all, let alone about their dependencies), and writing documentation on the wiki about Icinga, the way I cleaned up the one about Munin. Speaking of which, Debian decided to disable CGI in their packages as well, so now the default is to keep CGI support disabled unless required, and it’s provided “as is”, without warranties it ever works. I also have to finish setting up the Munin async support, which will certainly be useful at this point.

I’m also trying to fit in Ruby work as well as the usual Tinderbox mangling, so… please bear with my lack of updates.

Too much web content

I’m not much of a web development person. I try to keep my fiddling with my site to a minimum and I focus most of my writing on this blog, so that it’s all kept in the same place. I also try not to customise my blog too much, besides making sure it doesn’t look like every other Typo-based blog (the theme is actually mostly custom). For the design of both the site and the blog I relied on OSWD and adapted the designs found there.

I also tend not to care about web services, web applications and all the related stuff; it’s outside my sphere, and I try not to comment on web-centric news since I sincerely don’t care. But unfortunately, like most developers out there, I often get asked about the possibilities of web applications, site development, and so on.

For this reason, I have come to be quite opinionated, and probably at odds with the majority of the people who “shape” the net as it is now.

One of my opinions is that you shouldn’t use on-request generated pages for static content, which is what most sites do, with CMSs, wikis, no-comment blogs and the like. The only reasons I’m using a web application for my blog are that, first of all, I happen to write entries while I’m on the go, and second, I allow user comments, which is what makes it a blog rather than a crappy website. If I didn’t allow comments, I would have no reason to use a web application and could probably get by with a system fetching the entries from an email account.
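To make the point concrete, here is a rough sketch of what “static content without a web application” can look like: entries pre-rendered to HTML once, so any plain web server can serve the result with nothing running behind it. The file layout (first line of an entry is its title) and the page template are made up for illustration:

```python
#!/usr/bin/env python3
"""Sketch: pre-render plain-text entries into static HTML pages once,
instead of generating each page on request."""

import html
from pathlib import Path

# Made-up minimal template; a real site would use its own design.
TEMPLATE = """<html><head><title>{title}</title></head>
<body><h1>{title}</h1><p>{body}</p></body></html>"""

def render_entry(src: Path, out_dir: Path) -> Path:
    """Turn one text entry (first line = title) into a static page."""
    lines = src.read_text().splitlines()
    title = html.escape(lines[0])
    body = "<br/>".join(html.escape(line) for line in lines[1:])
    out = out_dir / (src.stem + ".html")
    out.write_text(TEMPLATE.format(title=title, body=body))
    return out
```

Run something like this once per new entry, from cron or whenever a new mail arrives, and the output is plain files that any web server can serve without executing code per request.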

Another opinion is that you shouldn’t reinvent the wheel just because it’s cool. I’m sincerely tired of the number of sites that include social networking features like friendship and similar. I can understand it when it’s the whole idea of the site (Facebook, FriendFeed), but do I care about it on sites like Anobii? (On the other hand, I’m glad that Ohloh does not have such a feature.)

I’ve been asked at least three times about developing a website with social networking features, friendship and the like, and two out of three times the goal of the project was “making money”. Sure, okay, keep on trying.

Every other site out there has a CMS to manage the news entries, which could also be acceptable when you have a huge archive and the ability to search through it, but do I need to know what time it is right now? I have a computer in front of me, I can check it there (unless of course I’m trying to find out whether it’s actually correctly synchronised). Does every news or group site have to have a photo gallery with its own software on it? There are things like Picasa and Flickr for that.

But one thing I sincerely loathe is all the sites that are set up with Trac or MediaWiki to provide some bit of content that rarely needs to be edited. Even the FreeDesktop.org site is basically one big huge wiki with the developers having write access. Why, I don’t know, since you could easily write the content in DocBook and process the files with a custom stylesheet to produce the pages shown to the user. It’s not like this is overly complex, especially when just a subset of the people browsing the site have access to edit it.

Similarly, I still wonder why every other WordPress blog requires me to register with the main WordPress site to leave comments. I can understand Blogger and LiveJournal requiring a login either with them or with OpenID (and I use my Flickr/Yahoo OpenID for that), but why should I do that on a per-site basis, repeatedly?

But even counting that in, I’m tired of the number of sites that just duplicate information. Why did xine’s site have its own “security advisory” kind of thing? It’s not like we’re a distribution. Thankfully, Darren started just using the assigned CVE numbers a few years ago, so there is no further explosion of pages. Hopefully, I can cut out some of the pointless content of the site to reduce it.

In the day of I-do-everything sites, I’m really looking forward to smaller, tighter sites that only provide the information they have to, instead of duplicating it over and over again. The good web is the light web.