This Time Self-Hosted

Maybe not-too-neat Apache tricks; Part 1

In general, I’m not an expert sysadmin; my work usually involves development rather than administration, but like many other distribution developers I had to learn system administration to make sure that the packages actually work on users’ systems. This gets even messier when you deal with Gentoo and its almost infinite number of configuration combinations.

At any rate, I end up administering not only my local systems, but also two servers (thanks to IOS Solutions, who provide xine with its own server for the site and Bugzilla). I started out with lighttpd, but through a long series of circumstances I ended up moving to Apache (mostly for content negotiation). I had to learn the hard way about a number of issues; luckily, security was never among them.

My setup grew from a single lighttpd instance, to one Apache running two static websites, a Bugzilla instance and a Typo instance, to two Apaches on two servers: one running a static website and a Bugzilla instance, the other running a few static websites and a Typo instance via Passenger. The latter is more or less what I have now.

On one side, Midas keeps up the xine website (static, generated via XSLT after commit and push); on the other, Vanguard (the one I pay for) keeps this blog, my website and a few more running. I used to have a gitweb instance (and Gitarella before that), but I stopped hosting the git repositories myself; it’s much easier to push them to Gitorious or GitHub as needed.

The static websites use my own generator, for which I still have to find a proper license. Most of these sites are mine or belong to friends of mine, but with things changing a bit for me, I’m going to start offering this as a service package to my paying customers (you have no idea how many customers would be interested in just having a simple, static page updated once every few months… as long as it looks cool).

But since I have to stop Apache from time to time to make changes to my blog (and in the past Passenger has gone crazy, leaving Apache unable to answer requests at all), I’m not very convinced about running the two side by side for much longer. I’ve therefore decided it was a good idea to figure out an alternative approach. The solution I’m thinking of requires two Apache instances on the same machine; since I cannot use different ports for them (well, I could run my blog over 443/SSL, but I don’t think that would be a good idea for a read-only site), I’ve now requested a second IP address (the vserver plan I’m renting should support up to four), and I’ll run the two instances on the two different IP addresses.
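A minimal sketch of what the split could look like, with one configuration file per instance; the file names and addresses here are purely illustrative, not my actual setup:

```
# /etc/apache2/httpd-static.conf -- instance serving the static sites
Listen 192.0.2.10:80
PidFile /var/run/apache2-static.pid

# /etc/apache2/httpd-dynamic.conf -- instance running the blog via Passenger
Listen 192.0.2.11:80
PidFile /var/run/apache2-dynamic.pid
```

Each instance binds only its own address, so the two never compete for port 80.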

Now, one of the nice things about splitting the two instances this way is that I don’t even need ModSecurity on the instance that only serves the static sites; while they are not really as static as a stone (I use content negotiation to support multiple languages on the same site, and mod_rewrite to force a particular language), I cannot think of any way a security issue could be triggered while serving them. I could even use something other than Apache to serve them, but the few advanced features I rely on don’t make it easy to switch (content negotiation is one; another is RewriteMaps, to recover moved or broken URLs). And obviously, that instance wouldn’t need Passenger either.
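As a sketch of the two features just mentioned, something along these lines would do; the paths and the map file are hypothetical, not my actual configuration:

```
# Content negotiation: with MultiViews, a request for /about is served
# as about.html.en or about.html.it depending on Accept-Language.
<Directory "/var/www/example/htdocs">
    Options +MultiViews
</Directory>

# RewriteMap to recover moved/broken URLs from a plain-text map of
# old-path -> new-path pairs.
RewriteEngine On
RewriteMap moved txt:/etc/apache2/moved-urls.txt
RewriteCond ${moved:$1} !=""
RewriteRule ^/(.*)$ ${moved:$1} [R=301,L]
```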

But all the other modules? Those I’d need; and since by default they are built as shared modules (I ranted about that last November), loading two copies of them means duplicating .data.rel and the other copy-on-write sections. Not nice. So I finally bit the bullet and, knowing that Apache upstream supports building them in, set out to find whether the Gentoo packaging allows for that. Indeed it does, but it mishandles the static USE flag, which made this quite a bit harder to figure out. After enabling that flag, disabling the mem_cache, file_cache and cache modules (which are not loaded by default but are still built, and would become built-ins with the static USE flag), and restarting Apache, the process map looked much better: the apache2 processes now have far fewer files open (and thus a much neater memory map).
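In Portage terms the result is roughly the following package.use entry; the exact flag names depend on how the ebuild expands APACHE2_MODULES, so treat this as a sketch rather than a recipe:

```
# /etc/portage/package.use
# Build the modules into the httpd binary, and leave the cache modules
# out so they don't become built-ins:
www-servers/apache static -apache2_modules_cache -apache2_modules_file_cache -apache2_modules_mem_cache
```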

One thing that is interesting to note: right now, I’ve not been using mod_perl for Bugzilla because of the configuration trouble; one day I might actually try that. Possibly with a second Apache instance on Midas, open only on SSL, with a CACert certificate.

Now it might very well be that you need a particular module in only one of the two instances, such as mod_ssl for a separate SSL-enabled Apache 2 process. In that case, one possible solution, though not an especially nice one, is the EXTRA_ECONF trick I have already described: you could create a /etc/portage/env/www-servers/apache file with this content:

export EXTRA_ECONF="${EXTRA_ECONF} --enable-ssl=shared"

On a separate note, I think one of the reasons our developers left the default at dynamic modules is the psychology of calling them “shared”. It makes it sound as if a “non-shared” module wastes memory when multiple processes use it, when in reality you create far more private memory mappings with the shared version. Oh well.

Unfortunately, as it happens, the init system we have in place does not allow more than one Apache instance to run; it really requires separate configuration files and probably a new init script, so I’ll have to come back to the remaining parts of this in the coming days.
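The rough shape of it would be a symlinked init script (say /etc/init.d/apache2-static) with its own conf.d file; note that this assumes the stock script can handle per-instance settings, which, as I said, it currently cannot without changes:

```
# /etc/conf.d/apache2-static (sketch; variable names follow the stock
# Gentoo apache2 conf.d file)
APACHE2_OPTS="${APACHE2_OPTS} -f /etc/apache2/httpd-static.conf"
PIDFILE="/var/run/apache2-static.pid"
```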

There are though three almost completely unrelated notes that I want to sneak in:

  • I’m considering a USE=minimal (or an inverse, default-enabled, USE=noisy) for pambase; it would basically disable modules such as pam_mail (tells you whether you have unread mail in your mailbox; only useful if you have a local MDA), pam_motd (shows you the system’s Message of the Day) and pam_tally/pam_lastlog (keep track of login/su requests). The reason is that these modules are kept loaded in memory by, among others, sshd sessions, and I can’t find any use for them on most desktop systems, or on single-user managed servers (I definitely don’t leave a motd for myself).
  • While I know Nathan complained to me about this, I think I’m starting to understand why the majority of websites stick with www or some other third-level domain: almost no DNS service seems to allow a CNAME on the origin record (that is, the second-level domain itself); this means the second-level domain ends up pointing directly to an IP address, and changing a lot of those is no fun when you switch hosting from one server to another.
  • CACert and Google Chrome/Chromium don’t seem to get along at all. Not only have I been unable to get it to accept the CACert root certificate, but when I try to generate a new client certificate with it, the page freezes solid. And if I try to install a certificate after generating it with Firefox, well… it errors out entirely.
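To illustrate the second note above, here is a hypothetical zone fragment showing why the apex has to stay an A record while third-level names are free to be CNAMEs:

```
; example.org zone fragment (addresses are illustrative)
example.org.      IN  A      192.0.2.10     ; apex: must be an address record,
                                            ; a CNAME would clash with SOA/NS
www.example.org.  IN  CNAME  example.org.   ; third-level name: CNAME is fine
```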
Comments 7
  1. CaCert works fine with Chromium here. The command to import the root cert is: certutil -d sql:$HOME/.pki/nssdb -A -t "C,," -n CAcert -i root.crt

  2. Exactly like last time:<typo:code>% certutil -d sql:$HOME/.pki/nssdb -A -t “C,,” -n CAcert -i root.crt
certutil: function failed: security library: invalid arguments.</typo:code>Not sure what might be off there…

  3. As far as I know, setting a CNAME record for a whole domain is not allowed by DNS itself. The RFC does not allow it because of the side effects and implications it would have. Looking at what a CNAME means, you end up with the fact that all properties of the domain entry are copied from the domain you point to, including the IPv4 and IPv6 records, but also MX records and so on. Thus the domain itself always has to have an A record, according to the RFC.

  4. My understanding is that at least one nameserver is required that has authority to delegate the CNAMEs.

  5. I really wish people would stop using CACert certificates. CACert truly doesn’t pass the audit requirements of a proper certification authority (and yes, they are working on it, but there are valid security reasons to reject them from your root store), and there is good reason that Mozilla rejected them. Now, that said and done: you can get free SSL certificates, which are widely accepted (including by, of all things, MSIE!), from StartSSL ( http://www.startssl.com/ ). An added bonus is that they are heavily involved in the Mozilla security policy discussions.

  6. Recently I had a lot of trouble with phusion-passenger 2.2.15 and Apache 2.2, as Rails started to throw a lot of EPIPE errors (broken pipes) and stopped serving its applications; after discussing the problem with upstream, it turned out to be… a *WebKit/Safari issue* that, on a Hardened Linux kernel 2.6.3x (GrSec+PaX), triggered connection time-outs where there were none, making Apache unresponsive and freezing Rails. The solution was to downgrade to kernel 2.6.29 (which has a double, and redundant, check of the TCP connection time-out that correctly handles connections wrongly marked as dead; that check was removed in the early 2.6.3x releases as part of a clean-up/optimization for the upcoming new network stack in 2.6.35) until the WebKit issue is solved upstream. @owen thank you very much for the info, very useful! I’ll try a StartSSL certificate to secure my main email account. @diego: as you are using OVH for domain name registration, there is also an OVH service for free SSL certificates: http://www.ovh.it/ssl/ssl_s… (much the same as StartSSL, though I didn’t like the site as much); yeah, I know, the content of OVH’s site is very chaotic and not easily searchable.
