GROKLAW
When you want to know more...
The Daemon, the GNU and the Penguin, Ch. 19 - Dr. Peter Salus
Thursday, October 13 2005 @ 11:59 PM EDT

Here is our next installment of The Daemon, the GNU and the Penguin, by Dr. Peter Salus, Chapter 19: "Tanenbaum and Torvalds".

For the observant, I made a mistake in numbering earlier, calling Excursus: The GPL and Other Licenses Chapter 18, and that threw off the numbering. I have now corrected the error. I have also set up a permanent page for Dr. Salus' book, with all the chapters linked, with their titles, so you can find all the chapters easily. Just look for Salus Book as a permanent link on the left any time you wish to read or check a fact from the book.

********************

The Daemon, the GNU and the Penguin

~ by Dr. Peter H. Salus

Chapter 19. Tanenbaum and Torvalds

Linus posted his queries, his information and his work on comp.os.minix beginning in mid-1991. But on 29 January 1992, Andy Tanenbaum posted a note with the line:

Subject: LINUX is obsolete [1]

After a few introductory paragraphs, Tanenbaum got to his real criticism of Linux:

As a result of my occupation, I think I know a bit about where operating systems are going in the next decade or so. Two aspects stand out:

Microkernel vs Monolithic System

Most older operating systems are monolithic, that is, the whole operating system is a single a.out file that runs in 'kernel mode.' This binary contains the process management, memory management, file system and the rest. Examples of such systems are UNIX, MS-DOS, VMS, MVS, OS/360, MULTICS, and many more.

The alternative is a microkernel-based system, in which most of the OS runs as separate processes, mostly outside the kernel. They communicate by message passing. The kernel's job is to handle the message passing, interrupt handling, low-level process management, and possibly the I/O. Examples of this design are the RC4000, Amoeba, Chorus, Mach, and the not-yet-released Windows/NT.

While I could go into a long story here about the relative merits of the two designs, suffice it to say that among the people who actually design operating systems, the debate is essentially over. Microkernels have won. The only real argument for monolithic systems was performance, and there is now enough evidence showing that microkernel systems can be just as fast as monolithic systems (e.g., Rick Rashid has published papers comparing Mach 3.0 to monolithic systems) that it is now all over but the shoutin'.

MINIX is a microkernel-based system. The file system and memory management are separate processes, running outside the kernel. The I/O drivers are also separate processes (in the kernel, but only because the brain-dead nature of the Intel CPUs makes that difficult to do otherwise). LINUX is a monolithic style system. This is a giant step back into the 1970s. That is like taking an existing, working C program and rewriting it in BASIC. To me, writing a monolithic system in 1991 is a truly poor idea. . . .

Linus responded the same day with: "Well, with a subject like this, I'm afraid I'll have to reply. Apologies to minix-users who have heard enough about linux anyway. I'd like to be able to just 'ignore the bait', but ... Time for some serious flamefesting!" and a long (somewhat intemperate, but this was a 22-year-old student) response.

There was a good deal of going back and forth, and even Brian Kernighan put in a few lines. But the result was that Andy remains to this day a committed microkernel devotee and Linus has continued with a largely monolithic system. (Of course, this generalization is inaccurate, but it serves.)

And, on a certain level, there is no question in my mind but that Andy's position is right: microkernels are "better" than monolithic systems. But, on the other hand, I find both Andy's original posting unnecessarily rebarbative and Linus' "serious flamefesting" inappropriate.

Over a decade later, I find it hard to discern any anger or resentment on either side. I asked Andy about the exchange, but he just shrugged me off. "In a straight test," he later remarked, "Linux loses by about 8% in speed." That may well be true. But it's not much of an epitaph.

However, I think the microkernel (as evidenced in Mach [and in the Hurd], in Chorus, in Amoeba) is superior to the monolithic kernel, as long as the message-passing is efficient.

I guess I'll now be subject to a flame war.


[1] A large collection of the correspondence -- or at least that of the "major" contributors -- can be found here. As I am interested in discussing this, I will refrain from extensive citation. A version of much of the discussion is available as Appendix A of Open Sources: Voices from the Open Source Revolution (O'Reilly, 1999; ISBN 1565925823).


Dr. Salus is the author of "A Quarter Century of UNIX" (which you can obtain here, here, here and here) and several other books, including "HPL: Little Languages and Tools", "Big Book of Ipv6 Addressing Rfcs", "Handbook of Programming Languages (HPL): Imperative Programming Languages", "Casting the Net: From ARPANET to INTERNET and Beyond", and "The Handbook of Programming Languages (HPL): Functional, Concurrent and Logic Programming Languages". There is an interview with him, audio and video,"codebytes: A History of UNIX and UNIX Licences" which was done in 2001 at a USENIX conference. Dr. Salus has served as Executive Director of the USENIX Association.

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/2.0/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.


  


The Daemon, the GNU and the Penguin, Ch. 19 - Dr. Peter Salus | 147 comments
Comments belong to whoever posts them. Please notify us of inappropriate comments.
Thanks
Authored by: Anonymous on Friday, October 14 2005 @ 12:34 AM EDT
Thank you. I look forward to the latest chapter of this fascinating book every
week.

Larry N.


Corrections
Authored by: TimMann on Friday, October 14 2005 @ 12:37 AM EDT
On the "Salus Book" page, the chapter 19 link points back to itself,
not to chapter 19.


The Daemon, the GNU and the Penguin, Ch. 19 - Dr. Peter Salus
Authored by: dmarker on Friday, October 14 2005 @ 12:40 AM EDT

Peter,
Again, thanks for the ongoing effort.

To pre-empt the obvious debate that will ensue, I want to go on record as saying
I too was totally committed to the concept of micro-kernel designs & so were
many OS architects in IBM when WPOS was being developed. But even back then
there were IBM internal wars that got quite heated about just how effective
micro-kernels really could and would be.

WPOS appears to have got derailed on the issue of device drivers and what space
they ran in & how well they ran in that space :)

I believe graphics support in the early days also militated against true u-k
designs.

In retrospect I will say "thank you that Linus turned up" as, elegant or not,
Linux has done for Open Systems & F/OSS what Unix couldn't, and Linus's
efforts have succeeded in surviving where Rashid's IBM WPOS etc. have basically
not.

Let the debate rage :)

Doug Marker


8%???
Authored by: mrcreosote on Friday, October 14 2005 @ 01:13 AM EDT
"In a straight test," he later remarked, "Linux loses by about 8%
in speed."

Test of what?! I haven't seen many Minix results on TPC.

Sounds like Andy needs to find a better place to get his grapes.

---
----------
mrcreosote


The Daemon, the GNU and the Penguin, Ch. 19 - Dr. Peter Salus
Authored by: billposer on Friday, October 14 2005 @ 01:24 AM EDT
Is Windows/NT really a micro-kernel, or was that a rumor that turned out not to
be true? I've never heard it called a micro-kernel - I thought it was modelled
after VMS. And MS doesn't exactly have a reputation for modularity.


20-20 hindsight
Authored by: mrcreosote on Friday, October 14 2005 @ 01:57 AM EDT
AT believed (and still believes) that microkernel is the way to go.

But he also believed, and I quote

"Once upon a time there was the 4004 CPU. When it grew up it became an
8008. Then it underwent plastic surgery and became the 8080. It begat the
8086, which begat the 8088, which begat the 80286, which begat the 80386, which
begat the 80486, and so on unto the N-th generation. In the meantime, RISC
chips happened, and some of them are running at over 100 MIPS. Speeds of 200
MIPS and more are likely in the coming years. These things are not going to
suddenly vanish. What is going to happen is that they will gradually take over
from the 80x86 line. They will run old MS-DOS programs by interpreting the
80386 in software..... I think it is a gross error to design an OS for any
specific architecture, since that is not going to be around all that
long."

I wonder what else he was wrong about?


---
----------
mrcreosote


MicroKernels and Linus
Authored by: Anonymous on Friday, October 14 2005 @ 02:00 AM EDT
Interesting to read the 13-year-old Usenet postings.
1) Even though Linus' original post was hot-headed (Jan 29, 1992), he
apologized a day or so later (Jan 30, 1992), and in the typical
self-deprecating way we have come to know and respect. You don't see that very
often in flame wars.
2) It is slightly amusing to read the 10-year-old predictions of the demise of
the 80x86 architecture, and its inevitable replacement by RISC processors. It is
not the first time the 'experts' have been wrong, is it?
3) There seems to be some misunderstanding of just what is a microkernel. One
poster says he wants a microkernel because he wants loadable device drivers.
Heck RSX-11 (and Microware's OS-9 RTOS) had those (loadable AND UNloadable) back
in the 80's. I wonder how many other 'microkernel' features are just
implementation details that can be done equally well in monolithic kernels?
4) Good thing Linus didn't wait for the GNU-HURD isn't it?


OT threads
Authored by: John_Doe#1 on Friday, October 14 2005 @ 02:46 AM EDT
<a href="http://www.example.com/">Text for link</a>

Choose HTML from drop down box

Preview post and open the link in another window/tab
to make sure it works


Microkernel vs. Monolithic
Authored by: Anonymous on Friday, October 14 2005 @ 03:23 AM EDT
The basic problems I have with microkernels are:

1. They're not "micro" - the printout of Mach 3.0, which
I used as a starting point for (hoping to) work on the Hurd, is
400+ double-sided sheets, two pages per side.

2. They're not "simpler". Mach 3.0 is excruciatingly
complex and in Minix, too many "subsystems" know
things about the interiors of other subsystems.

Toon Moene (not logged in while at "work")


Disingenuous
Authored by: Anonymous on Friday, October 14 2005 @ 04:26 AM EDT
This installment is nothing except a statement of faith. There is no
information whatsoever to back anything up.

Anyway, Linux is split into manageable subsystems. That the borders are softer
means that the programmers need to learn more than just the part they are
working on, but it also means that the borders can be shifted to accommodate the
growth of the tasks and the capabilities of the workers.

The microkernel idea breaks down where the hard-drawn intra-system borders
become so complex that it takes away most of the manageable mindshare just for
maintaining the artificial borders.

In short: Linux has been shown to lend itself well to self-organizing social
processes of handling complexity. Microkernels are about _designed_ models of
handling complexity. For the "Linus at helm" situation, the former
approach might have been more effective because the massive parallelism of
developers without dependence on central interface authorities manages
backtracking out of dead alleys faster than if they were painstakingly avoided
in design.

And blanket declarations of faith in design principles without an actual track
record or even a plausible theory backing them up sound very, very weak.

In this manner, Peter Salus is actually making a weaker case for microkernels
than they deserve.


Microkernels
Authored by: pds on Friday, October 14 2005 @ 09:10 AM EDT
There's no question that microkernels are a more elegant design. That doesn't mean that every implementation of a microkernel will be more elegant: Mach has obviously picked up a lot of baggage over the years. But, if you want to see what a microkernel CAN be you should check out QNX, which is a simply amazing OS (yes, I know it's not even close to being free). In QNX even the process handler is separate from the kernel proper. The kernel itself does two things: message passing and scheduling threads (I can't remember if it does VM as well). Processes in QNX are just bundles of threads. All drivers, the network stack, filesystems, everything else is just a userspace process.

But, more elegant doesn't always mean more successful. I've been using Linux for 12 years and other UNIX versions starting 9 years before that: I think they're great. Microkernels appeal to the software designer in me; Linux is what I use every day.


OT: SCO, IBM Battle Over Software Rights
Authored by: Anonymous on Friday, October 14 2005 @ 10:26 AM EDT
Boston College IP
Forum

http://www.bc.edu/bc_org/avp/law/st_org/iptf/headlines/index.html


The Daemon, the GNU and the Penguin, Ch. 19 - Dr. Peter Salus
Authored by: Groklaw Lurker on Friday, October 14 2005 @ 10:51 AM EDT
Yes, pound for pound, quality for quality, a well designed microkernel is
considered slightly superior to a well designed monolithic kernel by the kernel
engineers I've spoken with on this subject. (Mind you, at least one of these
kernel engineers used to be a SCO kernel engineer).

The difference isn't enough (less than 10%) for anyone to get their panties in a
bundle over it. I've been told it depends more on what is done with the design.
For instance, in a side by side comparative benchmark, all things being equal,
which is most likely to be faster in a 'common' work environment, XP or Linux?
Hands down, the winner is Linux I think!



---
(GL) Groklaw Lurker
End the tyranny, abolish software patents.


Micro vs. monolithic == program(s) vs library
Authored by: pmk on Friday, October 14 2005 @ 11:27 AM EDT
I've worked on both monolithic kernels and microkernel-based systems (a port of
system V, a system based on Chorus, and a port of Linux) on several generations
of Cray systems.

My perspective on this old argument is that it comes down to what you believe
the operating system's kernel is. If you believe that the kernel is a program,
a multithreaded program, or a collection of programs, the structure of a
microkernel will have more appeal to you and seem "elegant" and well
structured. If you believe that the kernel is a big library of code shared by
all processes, including lots of signal handlers and a few background threads,
then a modern monolithic kernel like Linux will have more appeal.

I do not contend that the first view is wrong or naive, but it is certainly more
common among the vast majority of computer users whose only knowledge of the
existence of the kernel is as a "program" that's secretly in charge of
the computer. But I do think that the second view corresponds better to the
reality within the machines, and that a monolithic kernel maps more efficiently
to it. The kernel is a library shared by all processes, not a program or
collection of programs. The fact that a good shared preemptive multiprocessor
library is nontrivial to write or understand doesn't mean that it's the wrong
approach.

It's a safe bet that this article is going to touch off another round of
"Software engineering is good vs. needless overhead and complexity are
bad" arguments. But really, unless you actually want to work on the guts
of a kernel, this is a tired old debate that you can and should safely ignore.
(Though it's always fun to watch academic types argue with practitioners...)

(I do admit that part of my bias against microkernels stems from my experience
with Chorus in UNICOS/mk, a microkernel that's written in C++ and badly so, IMO.
Complicated constructors and destructors make systems code very hard to follow
and debug...)


My 2 cents on microkernel vs monolith
Authored by: Anonymous on Friday, October 14 2005 @ 01:51 PM EDT
The microkernel is a very elegant and attractive approach to system
architecture. However, it is difficult to carry the elegant design through to a
performance optimized system. Most of the really difficult issues that the
designers have to address have to do with limitations of the physical hardware.


As with all operating systems, the real rub comes when you have to make it run
fast in pretty much everything that it does and in pretty much all supported
configurations. No matter what you think about performance requirements, sooner
or later someone is going to benchmark the systems using what they think is a
reasonable load. Microkernel based systems rarely reach the performance level of
monolithic kernels. And worse, the attempt makes them large and complex.


A real problem for the designers is the collection of standards that must be
supported. Many of these systems are defined with requirements that are hard to
fit into the elegant design. For example a function that fits into one module
may need data that is in another module.


Switching execution from the kernel to user space is difficult and time
consuming on most systems. Getting from user space back into the kernel is also
time consuming. Every operation that crosses the boundary pays a time penalty. The
microkernel takes subsystems out of the kernel, among other reasons, to
protect its memory space from corruption caused by defective code. That memory
protection is not free; it takes time to set up the protection when entering a
module. The microkernel has to make up for the lost time through other
efficiencies.


A second difficult issue is copying data. It takes time to copy data. When you
have to craft a message and copy data into the message it takes time. The
components that are communicating through messages have to make up for the lost
time by other efficiencies in the code.


A third difficult issue to address is context switching. When you are in a
monolithic kernel you typically have kernel threads that all share the same
memory context and have a private stack. Switching among these threads is
relatively cheap (not free) both in the amount of context that must be preserved
and the limited need to flush memory translation lookaside buffers. When you
switch context to a user process there is a lot more time consuming work to
perform.


Take an example: How would you design SCSI support for multiple hosts and a
variety of devices into a microkernel?
One approach would be to use the existing approach of a three level environment.
At the top is SD, ST, SG to permit applications to access disks, tapes, and
other SCSI devices. The middle layer funnels the requests through to the low
level drivers and takes care of error management. At the low level there is a
driver for each different adapter. One problem comes in how you manage passing
large volumes of data through this design. In the monolithic kernel all modules
can directly access the data and control information. With a microkernel other
mechanisms must be used to avoid copying data through each module.

philc - not logged in


Yay! Let the debates rage!
Authored by: Anonymous on Friday, October 14 2005 @ 05:22 PM EDT
Open, accessible systems let folks tinker and experiment and argue about the results in an open manner. It's great! It's stimulating! How much more boring and intellectually dead it is where there is unanimity of agreement.

Or, much worse, where there is no possibility of disagreement. Imagine a world where there was only Windows. Take what you've been given (at quite a price!) and live with it. If you do manage to get a look at the internals in an industrial or academic setting, better watch your step with that NDA!! What? Your license says it's illegal to benchmark?? Better to just shut up!


Link to the USENET Discussion cached on Google Groups
Authored by: TAZ6416 on Friday, October 14 2005 @ 06:08 PM EDT
http://groups.google.com/groups?threadm=12595%40star.cs.vu.nl

Jonathan

Oscar The Grouch Does America


So which is better? I think ...
Authored by: tanstaafl on Friday, October 14 2005 @ 06:19 PM EDT
... it depends upon what your task is. If U need to run on minimal hardware, or
consume the least possible resources, your best bet is a microkernel. QNX is a
good example of an efficient microkernel system; the last time I looked, the
kernel itself was only about 32K, with everything (and I mean _*/everything/*_)
- filesystems, I/O, even processes - except message passing (and maybe ISRs)
outside the kernel. U can mix and match as U please, and for assembling
embedded systems, it just doesn't get any better.

If, on the other hand, U want a system that always looks the same, with lots and
_lots_ of hooks built in to the kernel, then a monolithic system is your best
bet. If U strip down a system, it can be painful performing the simplest tasks,
and if U're approaching an unknown micro-kernel system for the first time,
getting up to speed on the way things work can take almost as long as writing
it. Even a highly-functional micro-kernel-based system need not have any
resemblance to a 'standard' distribution; it can look just enough like a
'standard' distro to drive U crazy trying to figure it out.

Each is good for certain situations; let us hope both continue to be developed.


The best point about the Micro-Kernel/Monolithic Kernel argument is...
Authored by: The Mad Hatter r on Friday, October 14 2005 @ 08:01 PM EDT


That the competition between two separate programming models is driving both
into making enhancements and improvements that wouldn't occur if there were only
one kernel type.

Diversity drives innovation (I hated using that word, but it's the only one I
can think of that fits.)



---
Wayne

telnet hatter.twgs.org


Microkernels and Java
Authored by: Anonymous on Friday, October 14 2005 @ 11:54 PM EDT
The discussion about microkernels is like the discussion about Java. It goes
something like: "Today's CPUs are so fast that a few percentage points of
speed loss is acceptable."

Yeah, I just spent X thousands of dollars on the latest hardware to run my
programs at the speed of the _obsolete_ hardware. Now that's smart... NOT!

Building monolithic kernels like Linux is hard(er). Yes, but the system runs
faster. Writing programs in C/assembly is hard(er), but the programs run faster.
In order to do similar things with microkernels and Java, you have to have a
machine with 3 times more CPU grunt and 5 times more memory. And money grows on
trees too ;-)


The Daemon, the GNU and the Penguin, Ch. 19 - Dr. Peter Salus
Authored by: Anonymous on Monday, October 17 2005 @ 12:18 AM EDT
<yawn />

Next we will be arguing over which is "better": big-endian or
little-endian? Here's an example of a pointless argument with a stupid,
unfounded outcome:

"It should be "obvious" to even the most dim-witted individual
that big-endian is superior because it makes sense when multi-byte numbers are
laid out in memory or spoken over a wire. This is backed up by the fact that
big-endian byte order is the standard for the Internet.

"Therefore, PowerPC (big-endian) is better than Intel (little-endian) and
Apple should never have switched."

Of course, both the "argument" and the "conclusion" above
are nonsense.

Same goes for the macro/micro-kernel argument, and the conclusion that Mach is
somehow "better" than Linux. You can make the same arguments about
gender, or skin colour too, by the way....


The Daemon, the GNU and the Penguin, Ch. 19 - Dr. Peter Salus
Authored by: Anonymous on Monday, October 17 2005 @ 09:06 AM EDT
Whilst my heart may belong to micro kernels, my mind really just wants the job
done... and what it turns out I really want is a modular system, however it's
actually implemented. Most in-use OSs are a bit of a hybrid these days anyway
(leaning more to monolithic by and large).

It was a slightly academic (hah...) argument then, it's even more so now.


Groklaw © Copyright 2003-2013 Pamela Jones.
All trademarks and copyrights on this page are owned by their respective owners.
Comments are owned by the individual posters.

PJ's articles are licensed under a Creative Commons License. ( Details )