Andrew Morton's Speaker Notes from SDForum
Monday, November 22 2004 @ 06:00 AM EST

Andrew Morton saw Groklaw's coverage of the "Linux is not forking like Unix" article, and he has now graciously provided his speaker's notes from SDForum, on the theme of "the interface between open source software development and the software-using business world." He says, "It's very close to what was said." I know, knowing you like I do, that you will enjoy it much more than any third-party report about what he allegedly said. I found it fascinating reading, and I'm happy I can share it with you now.

UPDATE:

You might like to check out Lamlaw's November 22 article on this speech.

*********************************

SDForum, 16 Nov 2004.

Will today talk about several issues related to the interface between open source software development and the software-using business world. These two big and quite different communities have come together in recent years with considerable success and surprisingly little friction. I'll be looking at matters related to the development, or creation, of open source software rather than, say, the adoption of open source software.

I'll talk for maybe 20 minutes and will leave 15 minutes or so for a q-n-a discussion. All this is just one guy's opinion and I can and do make mistakes. So I'd really seriously ask that if people have disagreements with what I say or if they perceive insufficiencies in it, please let's bring those up in the discussion -- otherwise I'll just end up spouting the same garbage next time I stand up in front of some long-suffering people such as yourselves.

- software engineer, working on Linux kernel.

- Along with LT, have overall responsibility for development, delivery and quality of the public Linux kernel, available from kernel.org. Mainly do that by collecting, integrating and re-releasing the work of the many members of the kernel development team.

The public kernel is an input to the kernel which is released by Linux distributors such as Red Hat and Suse/Novell.

- It takes thousands of software packages to make up a distribution, and the kernel is just one of them. But it is the single thing which defines that distribution as being a "linux" distribution.

- My words are of course most applicable to the kernel project, but can be generalised to many of the most important open source software projects.

- It's interesting to note that the most important and successful open source products implement what one could describe as "legacy infrastructure".

Let's look at those two words:

Legacy:

- These products are implementing something which has been done many times before: operating system kernels, runtime software libraries, window managers, http servers and their variously-tiered tools, mail servers, various forms of file server, image manipulation programs, programming language compilers and interpreters, word processors, spreadsheets, database management, etc.

Many of the above are thirty or more years old: legacy stuff which everyone knows how to implement. All the intellectual property value was wrung out of these technologies years ago, and anyone who ships such products commercially is, to a large extent, providing to their customers a low-margin maintenance and support function.

Why did I describe it as "infrastructure"?

- Many of these successful open source products are implementing functions which other, higher-level software builds upon. The operating system, the libraries, the low-level network servers, the database tools, etc.

All of these provide basic infrastructure which will sit underneath non-open-source software products which are developed and marketed in the conventional commercial manner. ISVs are concentrating their investment and their innovation on higher-level customer-facing products while open source provides the legacy backend of the software stack.

- So the term "legacy infrastructure" places successful open source software into its commercial, historical and IT engineering context.

The rule is not universally true, of course. There are some open source products which are indeed at the state of the art in their fields and which are competitive with commercial products. Examples of this would include projects such as valgrind (a form of software debugging tool) and the Ogg Vorbis project, which continues to deliver world-class media streaming codecs.

But such projects are the exception in the open-source world: frankly, if an open-source team is working well together, developing and delivering leading-edge software which others find valuable, then that team should go and form a company and take a shot at getting rich with it -- this is not the space where open source licensing makes sense.

Let's look briefly at the resourcing for open source development.

- In the Linux kernel project, pretty well all of the main developers are working fulltime for technology companies of one form or another. The days of the bearded geek working in his basement purely for his own satisfaction are long gone on such projects.

Companies pay staff engineers to work on open source software products for several reasons.

- because they have a commercial interest in the quality of those products.

- so they have some leverage on the product's future feature set.

- so they have staff at hand who understand and can support the product.

- if they manufacture hardware: so the product supports that hardware well.

- Rather than directly hiring their own engineers, companies will also fund open source development by entering into contractual arrangements with other parties, mainly Linux distributors, for all of the same reasons. The main distributors of Linux are employing a large number of world-class engineers who work purely on open source products.

- Companies who contribute to open source projects place their engineers into a peer-to-peer relationship with the rest of the development team, thus gaining influence over the project and total visibility into the development and planning processes.

All these good things do not come for free. They only really come when the company (or, more specifically, individuals within that company) become recognised contributors to the project.

It's a sad fact of life that if someone pops up out of nowhere with a question or a contribution, they will have a hard time getting attention. This is not because of spite. We're not saying "you haven't helped us before so we're not talking to you". The reason why newcomers tend to face barriers is derived from the open source trust model -- if a contributor has a track record then we can take their changes with a degree of comfort. But if we've never heard from the contributor before then we basically need to go through their code line-by-line, and it's a ton more work for already busy people.

One of the things which I do is to try to prevent such things from simply falling on the floor. If someone is having procedural or process problems with the kernel team then they should contact me directly and I can generally offer advice, grease wheels, make things happen, kick heads or whatever else needs doing.

- Companies contribute engineering resources to open source projects for two strategic reasons:

- Firstly: resource pooling. Maintaining an entire OS is expensive, but with open source you get to pool development resources with the other users of the product while retaining many of the benefits of an in-house development project.

- And the second main reason why companies contribute to open source is to avoid vendor lock-in. One way to obtain your low-level software is to simply license it from another IT vendor, and the cost of this could well be similar to the cost of using and contributing to an open source equivalent. But with open source you get full access to all the technology, you get access to the product's key developers, you get full rights to modify the product if you need to do so, and you get good visibility into the product's roadmap. In fact, you can to some extent control that roadmap if you're prepared to put appropriate resourcing into it.

So the one-sentence summary of open source from a technology businessperson's point of view would be: a source of legacy infrastructure software whose development is cost-optimised via resource pooling and which naturally provides protection against vendor lock-in.

How do new features find their way into "legacy infrastructure" open source projects such as the Linux kernel? In other words, what is the requirements analysis and planning process?

- First up, with a legacy project, the feature set tends to be well understood.

We're implementing 30-year-old technology, so we're working to all that prior understanding of how these things should traditionally operate. This removes a lot of uncertainty from the design process.

And to a large extent we're strongly guided by well-established standards: POSIX, IEEE, IETF, PCI, various hardware specs, etc. There's little room for controversy here.
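As an illustration (not from the talk itself): the file API below is exactly the kind of decades-old, standards-specified territory being described. A tiny user-space C sketch, assuming nothing beyond the POSIX calls themselves:

    /* Minimal illustration: the POSIX file API was specified long before
     * Linux existed, so a kernel implementing it faces little design
     * controversy. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[256];
        int fd = open("/etc/hostname", O_RDONLY);    /* POSIX open() */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        ssize_t n = read(fd, buf, sizeof(buf) - 1);  /* POSIX read() */
        if (n > 0) {
            buf[n] = '\0';
            printf("%s", buf);
        }
        close(fd);                                   /* POSIX close() */
        return 0;
    }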

- Generally, new features are small (less than one person-year) and can be handled by one or two individuals. This is especially true of the kernel which, although a huge project, is really an agglomeration of thousands of small projects. Linus has always been fanatical about maintaining the quality and sanity of interfaces between subsystems, and this stands us in good stead when adding new components.

This agglomeration of many small subsystems fits well into the disconnected, distributed development team model which we use.
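To sketch what sane interfaces between subsystems look like in practice (an illustration, not from the talk; the names are simplified, hypothetical stand-ins modelled loosely on the kernel's real operations tables such as struct file_operations):

    /* Hedged sketch: a subsystem plugs into the core through a small
     * table of function pointers, so the core never depends on the
     * implementation behind the interface. */
    #include <stdio.h>

    struct blockdev_ops {                 /* hypothetical interface */
        int (*read_block)(void *dev, unsigned long blockno,
                          char *buf, size_t len);
    };

    /* One component's private implementation, hidden behind the interface. */
    static int ramdisk_read_block(void *dev, unsigned long blockno,
                                  char *buf, size_t len)
    {
        (void)dev;
        snprintf(buf, len, "ramdisk block %lu", blockno);
        return 0;
    }

    static const struct blockdev_ops ramdisk_ops = {
        .read_block = ramdisk_read_block,
    };

    /* The core sees only the interface, which is why thousands of small
     * components can be developed and merged independently. */
    int main(void)
    {
        char buf[64];
        if (ramdisk_ops.read_block(NULL, 7, buf, sizeof(buf)) == 0)
            printf("%s\n", buf);
        return 0;
    }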

If the project was a large greenfield thing, such as, say, an integrated security system for the whole of San Jose airport, then open source development methodologies would, I suspect, simply come undone: the amount of up-front planning and the team and schedule coordination needed to deliver such a greenfield product are much higher than with "legacy infrastructure" products.

The resourcing of projects in the open source "legacy infrastructure" world is interesting. We find that the assignment of engineering resources to feature work is very much self-levelling, in that if someone out there has sufficient need for a new feature, then they will put the financial and engineering resources into its development. And if nobody ends up contributing a particular feature, well, it turns out that nobody really wanted the feature anyway, so we don't want it in the kernel. The system is quite self-correcting in that regard.

Of course, the same happens in conventional commercial software development: if management keeps on putting engineers onto features which nobody actually wants then they won't be in management for very long. One hopes. But in the open source world we really do spend zero time being concerned with programmer resource allocation issues -- the top-level kernel developers never sit around a table deciding which features deserve our finite engineering resources for the next financial year. Either features come at us or they do not. We just don't get involved at that level.

And this works. Again, because of the nature of the product: a bundle of well-specified and relatively decoupled features. If one day we decided that we needed to undertake a massive rewrite of major subsystems which required 15 person years of effort then yes, we'd have a big management problem. But that doesn't happen with "legacy infrastructure" projects.

Development processes and workflow

- All work is performed via email. Preferably on public mailing lists, so a record of discussions is available on the various web archives. I dislike private design discussions because they cut people out of the loop, reduce the opportunity for others to correct mistakes, and you just end up repeating yourself when the end product of the discussion comes out.

- Internet messaging via the IRC system is used a little bit, but nothing serious happens there -- for a start it's unarchived so for the previously mentioned reasons I and others tend to chop IRC design discussions off and ask that they be taken to email.

- We never ever use phone conferences.

- The emphasis upon email is, incidentally, a great leveller for people who are not comfortable with English -- they can take as much time as they need understanding and composing communications with the rest of the team.

- Contributors send their code submissions as source code patches to the relevant mailing lists for review and testing by other developers. The review process is very important. Especially to top-level maintainers such as myself. I don't understand the whole kernel and I don't have the time or expertise to go through every patch. But I very much like to see that someone I trust has given a patch a good look-over.
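For readers unfamiliar with the mechanics, a posted patch is simply a unified diff in the body of an email, normally with a short explanation and a Signed-off-by line. The example below is purely illustrative -- the file, function and contributor are invented:

    Subject: [PATCH] example driver: fix off-by-one in name length check

    The bound check allowed writing one byte past the end of w->name.

    Signed-off-by: A. Contributor <contributor@example.org>

    --- a/drivers/example/widget.c
    +++ b/drivers/example/widget.c
    @@ -42,7 +42,7 @@ static int widget_set_name(struct widget *w, const char *src, size_t len)
     {
    -	if (len > sizeof(w->name))
    +	if (len >= sizeof(w->name))
     		return -EINVAL;
     	memcpy(w->name, src, len);
     	w->name[len] = '\0';
     	return 0;
     }

A reviewer can see at a glance what changed and why, which is what makes the line-by-line review of a stranger's code (mentioned earlier) bearable.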

- When a patch has passed the review process it will be merged into one of the many kernel development trees out there. The USB tree, the SCSI tree, the ia64 tree, the audio driver tree, etc. Each one of these trees has a single top-level maintainer.

I run an uber-tree called the "mm kernels" which integrates the latest version of Linus's tree with all the other top-level trees (32 at the last count). On top of that I add all the patches which I've collected from various other people or have written myself -- this ranges from 200 to 700 extra patches. I bundle the whole lot together and push it out for testing maybe twice a week.

When we're confident that a particular set of patches has had sufficient test and review we will push that down into Linus's tree, which is the core public kernel, at kernel.org.

- Vendors such as Red Hat and Suse will occasionally take a kernel.org kernel and will add various fixes and features. They will go through a several-month QA cycle and will then release the final output as their production kernel.

- The preferred form of bug reports from testers is an email to the relevant mailing list. We go through a diagnosis and resolution process on the public list, hopefully resulting in a fix. This whole process follows a many-to-many model: everyone gets to see the problem resolution in progress and people can and do chip in with additional suggestions and insights.

This process turns out to be quite effective.

- We do have a formal web-based kernel bug reporting system, using bugzilla. But the bugzilla process is one-to-one rather than many-to-many and ends up being much less effective because of this. I screen all bugzilla entries and usually I'll bounce them directly out to email if I think the problem needs attention.

The mailing lists are high volume and it does take some time to follow them. But if a company wishes their engineering staff to become as effective as possible with the open source products, reading and contributing to the development lists is an important function and engineer time should be set aside for this.

The new kernel development model

People may have heard about this, and it does significantly affect consumers of the public kernel.

For previous kernel releases over the past ten years we've followed a model of a 2-, 3- or 4-year development cycle, after which the kernel is declared "stable". Linus hands that stable kernel off to a lieutenant and then forks off a new unstable kernel for ongoing development. The forked-off stable kernel is the prime source base for kernel consumers for the next several years and we are very cautious and conservative about what changes are made to it. This had the downside that the stable kernel tended to lag in features and device support, so vendors ended up adding a lot of their own patches before releasing that kernel in distributions. It had the upside that the codebase was as reliable as we could make it.

In July of this year we tossed all that out the window, because we'd discovered that we could keep a massive rate of change flowing into the tree without destabilising it too much. This is partly because we've improved our processes and tools (the adoption of the BitKeeper revision control system helped here). It is also a reflection of the increasing number and skill of the kernel development team. And it is a reflection of the increased quality of the kernel itself: the need for massive destabilising rewrites which break the whole world simply isn't there any more, so we don't need those long development cycles.

So we're currently running on a roughly two-month development cycle. We pack a whole lot of new features and enhancements into each release, but they're months apart rather than years apart. So the kernel continues to make good progress, but at a steadier rate.

This does mean that the production kernel is not as bug-free as it would be if we were concentrating only upon stability, but we do continue to modify these new processes and the kernel's quality does continue to improve even as we add new features to it.


  


Andrew Morton's Speaker Notes from SDForum | 75 comments
Comments belong to whoever posts them. Please notify us of inappropriate comments.
Andrew Morton's Speaker Notes from SDForum
Authored by: Anonymous on Monday, November 22 2004 @ 06:19 AM EST

I think the core problem we have here is a simple misunderstanding of AM's words.

In many cases open-source projects "fork" when something dramatic happens; a good example is the license change introduced into XFree86, which triggered a fork into X.Org. However the Linux kernel is different; development streams have always been forked from the stable series (with a certain amount of back-porting going on by the distro suppliers) until they are fit to be considered stable themselves. In addition there are a large number of kernel forks that are used as staging posts for the mainstream kernel (-ac, -mm, etc). Finally the distro manufacturers (Redhat, Mandrake, SuSE etc) all maintain their own kernel forks.

To summarise, forks for most projects are often A Bad Thing. For the kernel it's the normal way of doing things. The confusion is understandable given this.


OT Here Please
Authored by: fudisbad on Monday, November 22 2004 @ 06:41 AM EST
For current affairs, legal filings and CC10 rulings.


Andrew Morton's Speaker Notes from SDForum
Authored by: TToni on Monday, November 22 2004 @ 07:42 AM EST
Very very interesting. Thanks Andrew!

I especially liked the parts with "this works, because". It all makes a lot of sense and I really admire you guys.

TToni


Interesting comment. "getting rich"
Authored by: Anonymous on Monday, November 22 2004 @ 08:28 AM EST
"...frankly, if an open-source team is working well together, developing
and delivering leading-edge software which others find valuable then that team
should go and form a company and take a shot at getting rich with it...".

He doesn't mention whether it should become closed source or not, but it does echo my own thoughts in that the only really saleable software is the truly innovative. This also provides some answers to the software patents problem. Don't patent it, just be innovative and make money out of it.

I've thought this about databases, which AM considers legacy infrastructure. Oracle and IBM make money out of their databases because they have benefits that the OSS rivals cannot match... yet. The only way for IBM and Oracle to make money out of database software (I'm ignoring support at the moment) is to keep innovating. When the innovation stops, OSS software will catch up.

This does raise the spectre of "timed innovation", or releasing new innovations on a timescale that keeps OSS just enough behind to keep the proprietary software profitable. However, I think the law of diminishing returns will probably come into effect and the new innovations will be required by smaller and smaller numbers of users; essentially the proprietary software in these situations will become pretty much bespoke software. At this point I guess the most profitable thing to do is to open source it and let the support and any further development costs be shared by the community.


Perhaps Steve Ballmer...
Authored by: Anonymous on Monday, November 22 2004 @ 08:35 AM EST
... can be persuaded to send us his speaker's notes, so we can see what he really said in that Asia speech.


Socialization
Authored by: rsteinmetz70112 on Monday, November 22 2004 @ 10:44 AM EST
I am struck by how much he defines the development process as a social one, built on individual trust and cooperation, requiring that people be socialized into the process.





---
Rsteinmetz

"I could be wrong now, but I don't think so."


Andrew Morton's Speaker Notes from SDForum
Authored by: Latesigner on Monday, November 22 2004 @ 10:52 AM EST
A good article.
He puts Open Source just where it should be.


Microsoft's Motive thread
Authored by: Anonymous on Monday, November 22 2004 @ 12:25 PM EST
More on the Burst case. From El Reg

El Reg

Other links

link

link

link

--

MadScientist


.."Uh-oh, Microsoft! And Longhorn is due in how many months? Or was it years?" ;-)
Authored by: Anonymous on Tuesday, November 23 2004 @ 12:07 AM EST
.."So we're currently running on a roughly two-month
development cycle. We pack a whole lot of new features and
enhancements into each release, but they're months apart
rather than years apart. So the kernel continues to make
good progress, but at a steadier rate."

..heh. ;-)

