It is hybrid-microkernel insanity (the kind top management shows when it does not realize how inferior its product is, yet still goes with it because of the marketing mantra). Can you hear anyone at the top of Microsoft yelling "Windows, Windows, Windows, we are Windows, so we will do Windows, no matter what, we love Windows..."?
If I were MS, I would have pulled a BSD kernel, or their own UNIX (meaning, a stable monolithic kernel), out of the hat and run legacy apps in a virtual machine environment.
The NT kernel? As a typical hybrid microkernel, it is just a big mess (kind of like a huge ball of string that everyone keeps patching by tying on more string), hard to make work the same on all devices (and save battery power at the same time). And who knows what mess some patch fixed five years ago that will pop up and create problems if you try to patch again, without the staff around who did the patching before?
http://en.wikipedia.org/wiki/Kernel_(computing)#Monolithic_kernels_vs._microkernels
"Monolithic kernels vs. microkernels
As the computer kernel grows, a number of problems become evident. One of the most obvious is that the memory footprint increases. This is mitigated to some degree by perfecting the virtual memory system, but not all computer architectures have virtual memory support.[32] To reduce the kernel's footprint, extensive editing has to be performed to carefully remove unneeded code, which can be very difficult with non-obvious interdependencies between parts of a kernel with millions of lines of code.
By the early 1990s, due to the various shortcomings of monolithic kernels versus microkernels, monolithic kernels were considered obsolete by virtually all operating system researchers. As a result, the design of Linux as a monolithic kernel rather than a microkernel was the topic of a famous debate between Linus Torvalds and Andrew Tanenbaum.[33] There is merit on both sides of the argument presented in the Tanenbaum–Torvalds debate.
Performance
Monolithic kernels are designed to have all of their code in the same address space (kernel space), which some developers argue is necessary to increase the performance of the system.[34] Some developers also maintain that monolithic systems are extremely efficient if well written.[34] The monolithic model tends to be more efficient[citation needed] through the use of shared kernel memory, rather than the slower IPC system of microkernel designs, which is typically based on message passing.[citation needed]
The performance of microkernels constructed in the 1980s and early 1990s was poor.[35][36] Studies that empirically measured the performance of these microkernels did not analyze the reasons for such inefficiency.[35] The explanations of this data were left to "folklore", with the assumption that they were due to the increased frequency of switches from "kernel mode" to "user mode",[35] to the increased frequency of inter-process communication[35] and to the increased frequency of context switches.[35]
In fact, as guessed in 1995, the reasons for the poor performance of microkernels might as well have been:
-(1) an actual inefficiency of the whole microkernel approach,
-(2) the particular concepts implemented in those microkernels, and
-(3) the particular implementation of those concepts.[35]
Therefore it remained to be studied if the solution to build an efficient microkernel was, unlike previous attempts, to apply the correct construction techniques.[35]
On the other end, the hierarchical protection domains architecture that leads to the design of a monolithic kernel[30] has a significant performance drawback each time there's an interaction between different levels of protection (i.e. when a process has to manipulate a data structure both in 'user mode' and 'supervisor mode'), since this requires message copying by value.[37]
By the mid-1990s, most researchers had abandoned the belief that careful tuning could reduce this overhead dramatically,[citation needed] but recently, newer microkernels, optimized for performance, such as L4[38] and K42, have addressed these problems.[verification needed]
Hybrid (or modular) kernels
Main article: Hybrid kernel
Hybrid kernels are used in most commercial operating systems such as Microsoft Windows NT, 2000, XP, Vista, and 7. Apple Inc's own Mac OS X uses a hybrid kernel called XNU which is based upon code from Carnegie Mellon's Mach kernel and FreeBSD's monolithic kernel. They are similar to microkernels, except they include some additional code in kernel space to increase performance. These kernels represent a compromise that was implemented by some developers before it was demonstrated that pure microkernels can provide high performance. These types of kernels are extensions of microkernels with some properties of monolithic kernels. Unlike monolithic kernels, these types of kernels are unable to load modules at runtime on their own. Hybrid kernels are microkernels that have some "non-essential" code in kernel space in order for the code to run more quickly than it would were it to be in user space. Hybrid kernels are a compromise between the monolithic and microkernel designs. This implies running some services (such as the network stack or the filesystem) in kernel space to reduce the performance overhead of a traditional microkernel, but still running kernel code (such as device drivers) as servers in user space.
Many traditionally monolithic kernels are now at least adding (if not actively exploiting) the module capability. The most well known of these kernels is the Linux kernel. The modular kernel essentially can have parts of it that are built into the core kernel binary or binaries that load into memory on demand. It is important to note that a code-tainted module has the potential to destabilize a running kernel. Many people become confused on this point when discussing microkernels. It is possible to write a driver for a microkernel in a completely separate memory space and test it before "going" live. When a kernel module is loaded, it accesses the monolithic portion's memory space by adding to it what it needs, therefore opening the doorway to possible pollution.
A few advantages of the modular (or hybrid) kernel are:
-Faster development time for drivers that can operate from within modules.
-No reboot required for testing (provided the kernel is not destabilized).
-On-demand capability versus spending time recompiling a whole kernel for things like new drivers or subsystems.
-Faster integration of third-party technology (related to development but pertinent unto itself nonetheless).
Modules, generally, communicate with the kernel using a module interface of some sort. The interface is generalized (although particular to a given operating system) so it is not always possible to use modules. Often the device drivers may need more flexibility than the module interface affords.
Essentially, it is two system calls, and often the safety checks that only have to be done once in the monolithic kernel now may be done twice.
Some of the disadvantages of the modular approach are:
-With more interfaces to pass through, the possibility of increased bugs exists (which implies more security holes).
-Maintaining modules can be confusing for some administrators when dealing with problems like symbol differences."
Hmmm, maybe the delay is that they are doing a monolithic kernel after giving up on NT, but the marketing folks can still call it Windows?