For my work I store a huge amount of data in my systems, starting with a number of customers’ systems’ backups, which take the best part of 1TB of storage and which I preserve for a year or two (customers are very happy that way). This unfortunately means that even the 5TB of space present in Yamato is quite restrictive for my business. It’s for this reason, and others, that I ordered the components to build myself a 12TB storage server.
For said server I decided to go with an AMD Fusion board by Sapphire, both because I’ve been disappointed by Intel’s Atom (and nVidia’s ION), and because the price was right for a board with five SATA ports and an eSATA one — enough for four 3TB Western Digital Caviar Green disks, one Kingston SSD I still had lying around, plus the WD MyBook Studio II that is currently connected to Yamato. In particular, for those asking, the fact that most of the “high end” Atoms – such as the D510 I have in my frontend – do not support CPU speed throttling is appalling; the AMD Fusion board I have does not have that problem.
Anyway, the board itself looks good and works fine — and it was interesting for me to notice that it uses UEFI 2.1, rather than a classic BIOS, as its firmware… it’s the first desktop-class board I’ve found with such a setup (admittedly I haven’t bought many lately). Also, while the IOMMU feature is disabled by default, its help text in the configuration pages states that it is useful to translate I/O operations under Linux… yes, it explicitly mentions Linux in the help text! Kudos to them!
Out of experience with Yamato, I know that when you have this many disks in a complex setup (RAID1 and LVM), booting can be troublesome; in Yamato’s case a bit more so, as the order the disks are listed in depends, among other things, on the order in which they are detected by the kernel, which can change from version to version since the box uses three different controllers and three different drivers. For this reason I was hoping to use UEFI-style booting through grub2 rather than the classic old BIOS boot…
Turns out it wasn’t that good an idea; even with scarabeus’s guide, I was unable to get it to work. It seems that to set it up properly you need to boot an EFI-capable kernel (which the one in SysRescueCD, that I usually use to install my systems, is not), and boot it EFI-style. If you were to boot it without EFI support, you wouldn’t be able to find the EFI variables in the /sys hierarchy anyway. I ended up discarding the whole idea, since in the end grub2 can make sense of the RAID devices and find my rootfs just fine.
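A quick way to see which way the running kernel was booted is to check for that /sys entry — a minimal sketch, keeping in mind that the directory only appears when the kernel was booted EFI-style *and* has CONFIG_EFI enabled:

```shell
# The kernel creates /sys/firmware/efi only when it was itself started
# EFI-style with EFI support compiled in; booting the same machine via
# the BIOS compatibility path (or with CONFIG_EFI=n) leaves it absent.
if [ -d /sys/firmware/efi ]; then
    echo EFI
else
    echo BIOS
fi
```

So an empty /sys is not by itself proof that the firmware lacks EFI — it may just mean the current boot went through the BIOS path.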
But there is one important thing to note here. When I configured my kernel for the first time, I enabled EFI support; then I enabled the hardened features (grsecurity and PaX)… and when I booted it the first time I went looking for the EFI variables without finding them — I didn’t yet understand that booting via the PC BIOS wouldn’t have shown them anyway! Turns out that the KERNEXEC option that is so troublesome with virtualisation… is also incompatible with EFI. If you enable that option (which both the server and workstation configurations in hardened-sources enable), the EFI support in the kernel is entirely shut off.
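In .config terms this is roughly what you end up with — a sketch, as the exact Kconfig dependency wiring may differ between hardened-sources versions:

```
# Hardened kernel .config excerpt (sketch): with KERNEXEC enabled,
# EFI support ends up forced off by the Kconfig dependencies.
CONFIG_GRKERNSEC=y
CONFIG_PAX_KERNEXEC=y
# CONFIG_EFI is not set
```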
So it appears it’s still too soon for EFI: we lack stable enough tools, to begin with, and documentation (I had to rely on ArchLinux’s documentation for the most part, but even that is not very clear and refers to black-magic tricks; I refrain from writing documentation on the topic myself because I don’t fully understand it, even after reading Matthew Garrett’s posts on the topic). And it seems to require the kernel to perform unsafe operations (RWX mappings), which is definitely not a good thing.
I’m starting to understand Matthew’s feelings with regard to EFI: it might end up providing a more normal boot process once the transition is complete, but between the limitations of backward compatibility and the lack of any really good new feature – with the exception of nvram support – it doesn’t look like something we should be paying much attention to, for now.
Your ‘nvram support’ link is broken because it does not have a leading http://
Hi Diego,

I plan to work more on that Gentoo GRUB2 guide that scarabeus has on his devsite (in fact I’m already an author of the EFI portion) – it is largely incomplete now.

And yes, you can boot into grub2 in EFI mode even when you cannot run efibootmgr (thus breaking the endless loop): just copy grub.efi into [EFI System Partition]/EFI/BOOT/BOOTX64.EFI and be sure to pass something like -p /grub2/grub.cfg to grub2-mkimage when creating that image. You also need to include some crucial grub modules such as part_gpt and msdos.

So the procedure should look something like this:

1. create EFI system partition and mount it to /boot
2. cp /lib/grub2/x86_64-efi/* /boot/grub2
3. grub2-mkimage -p /grub2/grub.cfg -o grub.efi part_gpt fat (or part_msdos if you use this)
4. cp grub.efi /boot/EFI/BOOT/BOOTX64.EFI
5. grub2-mkconfig …

(This is obviously for a 64-bit system; AFAIK all UEFI systems now are 64-bit, except old Intel Macs.)

You might even be able to boot a kernel with CONFIG_EFI=n, but you’ll probably have problems with video (as neither VGA text console, VGA fb nor EFI fb will be available).
If you are having trouble with the detection order of disks, try identifying them by UUID instead of /dev/sdX letters. For computers with one hard drive, it makes no difference. When you start getting a 3U box with 16 drives like, say, http://www.supermicro.com/p… , it pays to use something which doesn’t rely on their power-up order.
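This suggestion can be sketched with udev’s by-uuid symlinks — `list_by_uuid` is a hypothetical helper name here, and on a live system the directory argument would be /dev/disk/by-uuid:

```shell
# list_by_uuid DIR: print each UUID symlink in DIR together with the
# device node it currently points at. udev keeps these symlinks in
# /dev/disk/by-uuid, so the same UUID names the same filesystem no
# matter which /dev/sdX letter the disk got this boot.
list_by_uuid() {
    for link in "$1"/*; do
        [ -e "$link" ] || continue
        printf '%s -> %s\n' "${link##*/}" "$(readlink -f "$link")"
    done
}

# Usage: list_by_uuid /dev/disk/by-uuid
```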
Well, it seems you’ve been just a little too early, if you look at what appeared on the kernel mailing list yesterday. My impression is that with that patch set applied the booted image can handle both BIOS and UEFI mode.

Ref.: http://article.gmane.org/gm…
Matěj, thanks for the instructions! That sounds _much_ more useful than what I have read up to now; but I guess I’ll rather wait for you to update the whole guide before trying this out on my laptop ;)

Bryan, I can reference them by UUID when using the kernel @root=@ parameter, but in the case of Yamato that does not work because root is on a mdraid device; and in the case of Archer that’s not where the problem lies, as the issue is rather with the BIOS numbering scheme of drives… but a non-issue with that board, it seems, luckily!

Andy, I don’t think that does what you expect it to: it simply means that the Linux kernel can boot without actually having to involve a boot loader (such as grub or elilo); this is still good news, as it means a more straightforward boot process, but it’s still limited (no choice of fallback kernels; no ability to edit the kernel’s command line on the fly). And it doesn’t solve the conflict with hardened.
When I use mdraid+LVM devices for boot, I use a genkernel initramfs and say e.g. real_root=/dev/mapper/&lt;vgname&gt;-&lt;lvname&gt;. Then you just need to check:

1. That the MD array gets assembled
1a. Make sure all partitions are the “Linux RAID autodetect” type
1b. Embed an mdadm.conf which uses UUIDs to name the disks into your initramfs
2. That “lvm pvscan” finds your MD arrays as PVs
2a. Ensure lvm.conf causes the MD devices to be scanned, and that this happens after they are assembled

Of course you need to put LVM and dmraid into the initramfs by giving genkernel the appropriate options. I configure my kernels by hand, but I don’t make any direct modifications to the initramfs genkernel gives.

So I’m not entirely sure what the problem would be. For GRUB, just install the bootloader into the MBR of every drive and it won’t matter what your device.map file contains.

I’ve used this technique on LVM+MD setups, on LVM+MD+dm_crypt (with the encryption above the mirroring but below the LVM PV), and on LVM+hwraid (MegaRAID).
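The mdadm.conf step (1b above) might look something like this — the array UUIDs below are made-up examples; the real ones come from `mdadm --detail --scan`:

```
# mdadm.conf fragment embedded in the initramfs: arrays are identified
# by UUID, so assembly no longer depends on controller probe order.
# (UUIDs below are made-up examples.)
DEVICE partitions
ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
ARRAY /dev/md1 UUID=f1b6a2c4:0d9e8b7a:4c3d2e1f:a5b6c7d8
```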
in the latest PaX (and now grsec) patch i tried to accommodate EFI under KERNEXEC but i can’t test it myself, so feel free to give it a try and let me know the results.
I don’t have an EFI board to test it with, but I have been playing with syslinux lately. (I prefer it over both grub legacy and grub2.) It seems it would play quite well in the EFI System Partition.