Both Ted Tso on Kernel Planet and LWN refer to an interview with Knuth, the author of TeX, in which he seems to criticise the choice of multi-core processors.
One quote that I find very short-sighted is:
Let me put it this way: During the past 50 years, I’ve written well over a thousand programs, many of which have substantial size. I can’t think of even five of those programs that would have been enhanced noticeably by parallelism or multithreading. Surely, for example, multiple processors are no help to TeX….
While I agree that parallelism doesn’t help TeX, it is implicit in almost all modern desktop use (and as far as servers are concerned, I think there is little doubt about its usefulness). Find me a modern desktop where only a single thread requires CPU time.
Most people (at least the ones I know) use their desktop for multiple tasks: reading mail, surfing the web, listening to music, watching videos. Some of these tasks are inherently well-suited to multithreading, but there is another thing to consider: you usually do all of them at once.
Well, maybe not all at exactly the same moment, though that is true of multimedia, which you rarely consume exclusively: you leave mail loading while you surf, and thanks to tabbed browsing you often leave multiple pages loading at once. Even when you’re writing something with (La)TeX, you usually have at least the editor and the TeX environment running together.
Then start considering other things. You’re connected to the network (in some cases, almost always), you’ve got disks that need to be synced properly, and you have processes running in the background, keeping things in cache, working for you without you ever seeing them. You’ve also got a mouse that you use, which needs to move a cursor. And guess what? Xorg is trying to give that its own thread.
All of this leads me to one conclusion: even if your main applications are not designed to work in parallel, most of the time adding cores will make things run more smoothly. And I’m sure we’ll see more multithreaded applications in the future anyway, as multithreading, while not suitable for text processing (by design), works pretty well in a lot of other fields.
Myself, I’m waiting for the new Ruby with actual multithreading to port my ruby-elf tools: most of that kind of processing can run in parallel. I’d just need a decent new box. I’m looking for an EU supplier carrying Opterons, but it seems difficult to find one as a private person rather than as a company.
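To illustrate why that kind of work parallelises well: each file can be analysed independently, so a pool of threads can chew through the list with no coordination beyond a shared queue. This is only a sketch, assuming a Ruby with real thread parallelism; `analyze_file` is a made-up placeholder, not the actual ruby-elf API.

```ruby
require 'thread'

# Hypothetical stand-in for the per-file work a ruby-elf tool
# would do on each ELF object; here it just returns the length
# of the path so the sketch stays runnable.
def analyze_file(path)
  path.length
end

# Farm the files out to a small pool of worker threads.
def parallel_analyze(paths, workers = 4)
  jobs    = Queue.new
  results = Queue.new
  paths.each { |p| jobs << p }

  threads = workers.times.map do
    Thread.new do
      loop do
        begin
          path = jobs.pop(true) # non-blocking; raises when drained
        rescue ThreadError
          break
        end
        results << [path, analyze_file(path)]
      end
    end
  end
  threads.each(&:join)

  out = {}
  until results.empty?
    path, res = results.pop(true)
    out[path] = res
  end
  out
end

p parallel_analyze(%w[libfoo.so libbar.so.1 app])
```

Since no file depends on any other, the speedup should scale roughly with the number of cores, which is exactly the case Knuth’s quote leaves out.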
“… as multithreading, while not suitable for text processing (by design) …” I don’t agree with this statement. When you type, the word processor can check your grammar, save your work, and so on…
I meant the kind of text processing that TeX does; I suppose I should have said text parsing instead. Indeed, almost any kind of sequential parsing can’t properly be multithreaded. Multimedia formats tend not to be entirely serial (you have the demuxer splitting video and audio, then the two decoders working in parallel, extra filters, and so on, all working on different “rails”), but text processing like TeX, or XML/HTML parsing, usually isn’t multithreaded.

But you can multithread what happens after the parsing: as you said, one piece of software could check the grammar while another renders the result.

As I said in another post (https://blog.flameeyes.eu/2…), I don’t like word processors like Word or OpenOffice Writer, but those are things that can very much be multithreaded (and they probably are).
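The split described above, one necessarily sequential parser feeding independent consumers, can be sketched with Ruby’s `Queue`. The stage names (`tokenize`, `check_grammar`, `render`) are invented for the example, not taken from any real word processor:

```ruby
require 'thread'

# The parsing stage has to walk the text strictly in order.
def tokenize(text)
  text.split
end

# Hypothetical downstream stages that are independent of each
# other and can therefore run on separate threads.
def check_grammar(word)
  word.capitalize
end

def render(word)
  "<w>#{word}</w>"
end

to_check  = Queue.new
to_render = Queue.new

checker = Thread.new do
  checked = []
  while (w = to_check.pop) != :eof
    checked << check_grammar(w)
  end
  checked
end

renderer = Thread.new do
  rendered = []
  while (w = to_render.pop) != :eof
    rendered << render(w)
  end
  rendered
end

# The sequential parser fans each token out to both consumers.
tokenize("hello parallel world").each do |w|
  to_check  << w
  to_render << w
end
[to_check, to_render].each { |q| q << :eof }

p checker.value  # grammar-checked tokens, in input order
p renderer.value # rendered tokens, in input order
```

The parser itself stays single-threaded, but the two consumers run concurrently, which is all the comment above is claiming.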
Arguing against multiprocessors on the basis that they do not speed up all sequential programs is like arguing against a processor having a multiply instruction on the basis that it will not speed up all additions.
Dude, none of the tasks you mention needs any serious CPU time; they are exclusively I/O bound. Drawing a mouse cursor is not exactly CPU intensive, and the same goes for web browsing, mail reading, or decoding a tiny bit of music. Machines from the ’80s could do that without any problems (well, except the music decompression: these days that is completely I/O bound, but it used to be CPU bound on a 486).

What Knuth is talking about is CPU-bound processes, like compiling a TeX document, building a program, or simulating atomic bombs.
If you think that audio and video decoding are not CPU bound, you have probably never worked on multimedia-related code, I’m afraid.

As for their intensity, a lot of things happen at the same time, so network-bound processes also need CPU time: the most common modern network cards, at least on desktops, leave some of the data verification to the kernel rather than handling it in hardware. VoIP calls, while also network bound, use a lot of CPU because they have to use the best compression to reduce the network load, which increases the CPU load.

If you look at it that way, as you said there is little difference between today and ten years ago (there is a huge difference from the ’80s, for sure!), since raw processor speed per se makes little difference. It would also make very little difference for desktop users to increase the _speed_ of the CPU, so it’s a good improvement to increase the _number_ of CPUs instead. But you should also adjust the specs over time: a machine from the ’80s could probably have lynx and mutt open at once, probably swapping, but it couldn’t run Konqueror and KMail for sure.

Moving a cursor is not CPU bound, but handling it? The cursor moves over windows, and the windows have to react. Look at the CPU time Xorg takes on your system, then think again about whether nothing I said is CPU bound. Sure, it’s not _purely_ CPU bound, but it’s not just I/O bound either.
All of the things you mention can run in parallel on my current laptop and only use a fraction of the CPU. I haven’t run Linux on the desktop for a couple of years (I currently run OS X), but if Xorg takes that much CPU when moving a mouse pointer, that just proves Xorg is ridiculously inefficient: moving my mouse pointer wildly still leaves my MacBook Pro 80% idle.

Knuth is talking about scaling, about improving performance. Most of these tasks are NOT very parallel. They may be able to use multiple threads, but they very quickly hit a plateau beyond which adding more threads doesn’t make them faster, and may indeed make them slower due to the added synchronization cost.

For a word processor, for example, where are the real benefits of multiple threads? There are hardly any, unless you take the considerable effort to do the document rendering in parallel, which is bound to be extremely error-prone. For a combination such as the ones you describe, most consumers will hit that plateau with two cores, _maybe_ four. Beyond that point only very specific apps will benefit: things like raytracing and other heavily parallelizable tasks, but not typical productivity apps.

And a machine from the ’80s, namely my Amigas, could very well run a graphical mail app, a web browser, a drawing program, play music, and assorted other things at once, all on a 7.16MHz 68000 in less than 2MB of RAM. Yes, the UI had less flashy graphics and a lower resolution, but that should still show that the performance of current apps has much more to do with bloat than with real needs; the word processors available certainly had all the functionality I’d need, for example.

You say it would make very little difference to increase the speed of the CPU versus the number, but you are wrong. Doubling the speed of the CPU doubles the performance of anything that’s CPU bound.
Doubling the number of CPUs at the same per-core performance increases the performance of sufficiently multithreaded applications by _up to_ the same amount. The incremental benefit per extra CPU diminishes even assuming a 100% loaded environment, whereas the incremental benefit of a faster CPU grows linearly in the same environment. In other words, increasing the number of cores benefits you less than speeding up the CPU. In fact, that holds even for perfectly parallelisable tasks, because you get the overhead of handling cache coherency and so on.
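The diminishing-returns argument above is essentially Amdahl’s law. A quick Ruby check, using an assumed example workload that is 90% parallelisable, shows how fast the per-core benefit decays:

```ruby
# Amdahl's law: overall speedup on n cores when a fraction p of
# the work can run in parallel and the rest stays serial.
def speedup(p, n)
  1.0 / ((1.0 - p) + p / n)
end

# Even a task that is 90% parallelisable sees diminishing returns:
p speedup(0.9, 2)    # ~1.82x with a second core
p speedup(0.9, 4)    # ~3.08x with four cores
p speedup(0.9, 100)  # never exceeds 1/(1-p) = 10x, however many cores

# Doubling the clock, by contrast, doubles the throughput of any
# CPU-bound task, parallelisable or not.
```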
Sure, you can run them in parallel on a single-core system; that still doesn’t mean they can’t make good use of parallel CPUs. As I said, any network-bound job will make use of the CPU on modern desktops to process network data, for instance.

Also, I didn’t mean that _in theory_ adding more CPU power has no effect; I said it makes little difference for a desktop user. A desktop user would very rarely be using 100% of his CPU time. What a desktop user wants is cool graphics and responsiveness, and neither can be given by a single highly powerful CPU; both are well suited to multi-core systems.

Dig up Lennart’s comments about audio mixing in userspace: while it doesn’t need _a lot_ of CPU, it usually needs a CPU ready to take the task. In this case increasing the power of a single CPU is close to useless. If you think about it, we’ve been using parallel computing for advanced graphics functions for quite a few years now, and it seems to be working very well…
> I’m looking for an EU supplier carrying Opterons, but it seems to be difficult to find them, as a person rather than a company.

Have a look at http://www.alternate.eu/

HTH