Groklaw member PolR sent me some observations on Microsoft's Bilski amicus brief [PDF; text] submitted to the US Supreme Court in the case In re Bilski. Oral argument will be on November 9th, and presumably the brief's arguments will be before the Court then. But are they technically accurate? PolR thinks they are not, and he decided to correct some of the material in it, both historical facts and the description of how computers work today.
Is it true, as Microsoft wrote in its brief, that computers are at heart just a "collection of tiny on-off switches--usually in the form of transistors"? Or that "The role of software is simply to automate the reconfiguration of the electronic pathways that was once done manually by the human operators of ENIAC"? Are computers just a modern equivalent to the telegraph or the Jacquard loom, a series of on-off switches, as the brief asserts?
Or is that hyperbole, and technically inaccurate hyperbole at that?
How do modern computers really work? What impact did the discovery of the universal Turing machine have on how computers work, compared to prior special-purpose computers like ENIAC? What are the differences between how analogue and digital computers work?
We have heard from the lawyers, but what about from those whose area of expertise is the tech? I think you'll see how this technical information ties in with the questions the US Supreme Court would like answered -- presumably accurately -- as to whether or not software should be patentable and whether computers become special-purpose machines when software is run on them. PolR has collected some very useful references from experts. Feel free to add more references in your comments.
Groklaw user fncp sent us URLs demonstrating that the ENIAC was doing symbolic manipulations. It didn't perform calculations in the manner of a differential analyzer:
Official history of the ENIAC
The ENIAC on a chip project
The ENIAC was one of the first electronic devices able to do symbolic manipulations. What it didn't do is store programs in the same memory as data. This is the crucial step that was missing to incorporate the principles of a universal Turing machine into an actual device. Only when this step was accomplished did it become possible for programs to manipulate or generate other programs. Research on how to implement this possibility led to the development of what is now known as the von Neumann computer architecture.
The Stanford Encyclopedia of Philosophy writes on this history:
In 1944, John von Neumann joined the ENIAC group. He had become ‘intrigued’ (Goldstine's word, p. 275) with Turing's universal machine while Turing was at Princeton University during 1936–1938. At the Moore School, von Neumann emphasised the importance of the stored-program concept for electronic computing, including the possibility of allowing the machine to modify its own program in useful ways while running (for example, in order to control loops and branching). Turing's paper of 1936 (‘On Computable Numbers, with an Application to the Entscheidungsproblem’) was required reading for members of von Neumann's post-war computer project at the Institute for Advanced Study, Princeton University (letter from Julian Bigelow to Copeland, 2002; see also Copeland, p. 23). Eckert appears to have realised independently, and prior to von Neumann's joining the ENIAC group, that the way to take full advantage of the speed at which data is processed by electronic circuits is to place suitably encoded instructions for controlling the processing in the same high-speed storage devices that hold the data itself (documented in Copeland, pp. 26–7). In 1945, while ENIAC was still under construction, von Neumann produced a draft report, mentioned previously, setting out the ENIAC group's ideas for an electronic stored-program general-purpose digital computer, the EDVAC (von Neumann). The EDVAC was completed six years later, but not by its originators, who left the Moore School to build computers elsewhere. Lectures held at the Moore School in 1946 on the proposed EDVAC were widely attended and contributed greatly to the dissemination of the new ideas.
Von Neumann was a prestigious figure and he made the concept of a high-speed stored-program digital computer widely known through his writings and public addresses. As a result of his high profile in the field, it became customary, although historically inappropriate, to refer to electronic stored-program digital computers as ‘von Neumann machines’.
Update 2: A comment by Groklaw member polymath is worth highlighting, I think:
Reducto ad absurdum
Authored by: polymath on Sunday, November 01 2009 @ 11:04 AM EST
The last extract reveals that Microsoft's brief fails even on its own terms.
While Morse's telegraph was patentable the sequence of 1's and 0's used to send any given message was not patentable. While Jacquard's loom was patentable the arrangement of holes on the cards that produced cloth was not patentable. While the machines for manipulating Hollerith cards were patentable the arrangement of holes representing the information was not patentable. Even a printing press is patentable subject matter but the arrangement of type to produce a story is not patentable. Likewise computer hardware may be patentable subject matter but the patterns of transistor states that represent programs and data are not patentable.
All of those patterns are the proper subject matter of copyright law. Neither patents nor copyright protect ideas or knowledge and we are free to create new implementations and/or expressions of those ideas.
To allow a patent on a program is akin to allowing a patent on thank you messages, floral fabric designs, census information, or vampire stories. It is simply nonsense.
An Observation On the
Amicus Curiae Brief from Microsoft, Philips and Symantec
~ by PolR
I have noticed something about the amicus brief [PDF] from Microsoft, Philips and Symantec submitted to the US Supreme Court in the In re Bilski case. This amicus brief
relies on a particular interpretation of the history of computing and on its own description of
the inner workings of a computer to argue that software should be
patentable subject matter. I argue that both the
history and the description of the actual working of a computer are inaccurate.
I note that the authors of the brief are lawyers. Presumably, then, they are not experts in the history of computing. The statements from the brief directly contradict information found in the expert sources I've collected here.
How Do Computers Work
According to the Brief?
Here is how the brief describes how computers work:
The fantastic variety in which computers are
now found can obscure the remarkable fact that
every single one is, at its heart, a collection of tiny
on-off switches--usually in the form of transistors.
See generally David A. Patterson & John L.
Hennessy, Computer Organization and Design (4th
ed. 2009); Ron White, How Computers Work (8th ed.
2005). Just as the configuration of gears and shafts
determined the functionality of Babbage's computers,
it is the careful configuration of these on-off switches
that produces the complex and varied functionality of modern computers.
Today, these on-off switches are usually found in
pre-designed packages of transistors commonly
known as "chips." Thin wafers of silicon, chips can
contain many millions of transistors, connected to
one another by conductive materials etched onto the
chip like a web of telephone lines. They are organized such that they can be turned on or off in patterned fashion, and by this method, perform simple
operations, such as turning on every transistor
whose corresponding transistor is off in the
neighboring group. From these building blocks,
mathematical and logical operations are carried out.
Patterson & Hennessy, supra, at 44-47 & App. C.
The challenge for the inventor is how to use
these transistors (and applying the principles of
logic, physics, electromagnetism, photonics, etc.) in a
way that produces the desired functionality in a useful manner. Computer programming is an exercise
in reductionism, as every feature, decision, and
analysis must be broken down to the level of the rudimentary operations captured by transistors turning on and off. This reductionism is matched by the
detail with which transistors must be configured and
instructed to carry out the thousands or millions of
operations required by the process.
Early electronic computers were "programmed"
by laboriously rewiring their electrical pathways so
that the computer would perform a desired function.
ENIAC--the first general-purpose electronic digital computer, functioning at the mid-point of the Twentieth Century--could take days to program, with operators physically manipulating switches and cables. Patterson & Hennessy, supra, at 1.10. [ed: graphic of ENIAC]
Fortunately, this is no longer the case. Transistors, packaged onto silicon chips, permit electronic
manipulation of the pathways between them, allowing those pathways to be altered to implement different processes without direct physical manipulation.
The instructions for this electronic reconfiguration
are typically expressed in computer software. See
Microsoft Corp. v. AT&T Corp., 550 U.S. 437, 445-46
(2007) (noting that, inter alia, Windows software renders a general-purpose computer "capable of performing as the patented speech processor").
To allow more sophisticated control over the millions of transistors on a chip, inventors rely on a
multi-layered scheme of pre-designed software "languages" that help bridge the gap between the on-off
language of the transistor and the words and grammar of human understanding. These allow control of
the transistors on a chip at various levels of specificity, ranging from "machine language," which allows
transistor-level control, to "programming languages,"
which allow operations to be defined through formal
syntax and semantics that are more easily understood by humans. Each language pre-packages the
mathematical and logical operations that are most
useful for the users of that particular language. See
Patterson & Hennessy, supra, at 11-13, 20-21, 76-80.
Using these languages, the inventor can create "software" that
defines the operations of semiconductor chips and other hardware.
These operations are the steps of a computer-implemented process. The
role of software is simply to automate the reconfiguration of the
electronic pathways that was once done manually by the human
operators of ENIAC.
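Before turning to what the brief leaves out, it is worth seeing what its description, taken entirely on its own terms, amounts to. Here is a minimal sketch in Python -- all the names and the modeling of switch states as true/false values are mine, for illustration only -- of the kind of "simple operation" the brief describes, and of how such operations compose into arithmetic:

    # A toy model of the brief's description: transistors as on-off
    # switches, modeled here as Python booleans.

    def invert(group):
        # "Turning on every transistor whose corresponding transistor
        # is off in the neighboring group" is the logical NOT,
        # applied element by element.
        return [not s for s in group]

    def xor_gate(a, b):
        return a != b

    def and_gate(a, b):
        return a and b

    def half_adder(a, b):
        # Two such operations composed yield one-bit binary addition.
        return xor_gate(a, b), and_gate(a, b)  # (sum bit, carry bit)

    print(invert([True, False, True]))  # [False, True, False]
    print(half_adder(True, True))       # (False, True), i.e. binary 10

Note that nothing in this sketch says anything about where the pattern of operations comes from -- which is exactly the question the brief glosses over.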
What is the Glaring Error in this Brief?
The brief fails to mention a most
important mathematical discovery -- the universal Turing machine -- and
how it influenced the development of the computer. The ENIAC didn't
use this mathematical discovery, while computers built afterwards use
it. Because of Turing's discovery, the programming of modern
computers doesn't operate under the same principles as the ENIAC.
An article titled "Computability and Complexity", found on The Stanford Encyclopedia of
Philosophy's website, describes
the contribution of universal Turing machines to the development of the modern computer:
The construction of a universal machine gives the most fundamental
insight into computation: one machine can run any program whatsoever.
No matter what computational tasks we may need to perform in the
future, a single machine can perform them all. This is the insight
that makes it feasible to build and sell computers. One computer can
run any program. We don't need to buy a new computer every time we
have a new problem to solve. Of course, in the age of personal
computers, this fact is such a basic assumption that it may be
difficult to step back and appreciate it.
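That insight can be made concrete in a few lines of code. The sketch below is a minimal Turing-style machine in Python; the "flip" program and all the names are mine, chosen for illustration. The machine itself is fixed; the program it runs is just data handed to it:

    # One fixed machine, any program. The program is ordinary data:
    # a table mapping (state, symbol) to (new symbol, move, new state).

    def run(program, tape, state="start"):
        tape = dict(enumerate(tape))  # sparse tape; blanks are "_"
        pos = 0
        while state != "halt":
            symbol = tape.get(pos, "_")
            new_symbol, move, state = program[(state, symbol)]
            tape[pos] = new_symbol
            pos += 1 if move == "R" else -1
        return [tape[i] for i in sorted(tape)]

    # An example program (data, not machinery): flip every bit,
    # halting at the first blank.
    flip = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }
    print(run(flip, "10110"))  # ['0', '1', '0', '0', '1', '_']

To make the same machine do something else, you change the table, not the machine. That is the sense in which one computer can run any program.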
What, In Contrast, Was the Operating
Principle of the ENIAC?
According to Martin Davis, Professor Emeritus, Department of Computer Science, Courant Institute of Mathematical Sciences, New York University, described here as one of the greatest living mathematicians and computer scientists, and who is interviewed here,
in his book Engines of Logic: Mathematicians and the Origin of the
Computer, in Chapter 8, page 181, here's how ENIAC worked:
An enormous machine, occupying a large room, and programmed by
connecting cables to a plugboard rather like an old-fashioned
telephone switchboard, the ENIAC was modeled on the most successful
computing machines then available -- differential analyzers.
Differential analyzers were not digital devices operating on numbers
digit by digit. Rather numbers were represented by physical
quantities that could be measured (like electric currents or
voltages) and components were linked together to emulate the desired
mathematical operations. These analog machines were limited in their
accuracy by that of the instruments used for measurements. The ENIAC
was a digital device, the first electronic machine able to deal with
the same kind of mathematical problems as differential analyzers. Its
designers built it of components functionally similar to those in
differential analyzers, relying on the capacity of vacuum-tube
electronics for greater speed and accuracy.
Davis, according to Wikipedia, is also the co-inventor of the Davis-Putnam and the DPLL algorithms. He is a co-author, with Ron Sigal and Elaine J. Weyuker, of Computability, Complexity, and Languages, Second Edition: Fundamentals of Theoretical Computer Science, a textbook on the theory of computability (Academic Press: Harcourt, Brace & Company, San Diego, 1994, ISBN 0-12-206382-1; first edition 1983). He is also known for his
model of Post-Turing machines. Here is his
Curriculum Vitae [PDF], which lists his many other papers, including his famous article, "What is a Computation?" published in Mathematics Today, American Association for the Advancement of Science, Houston, January 1979 [elsewhere referenced as "What is a Computation?", Martin Davis, Mathematics Today, Lynn Arthur Steen ed., Vintage Books (Random House), 1980].
What is a Differential Analyser?
Here is an explanation,
complete with photographs at the link:
There are two distinct
branches of the computer family. One branch descends from the abacus,
which is an extension of finger counting. The devices that stem from
the abacus use digits to express numbers, and are called digital
computers. These include calculators and electronic digital computers.
The other branch descends from the
graphic solution of problems achieved by ancient surveyors. Analogies
were assumed between the boundaries of a property and lines drawn on
paper by the surveyor. The term "analogue" is derived from the
Greek "analogikos" meaning by proportion. There have been many
analogue devices down the ages, such as the nomogram, planimeter,
integraph and slide rule. These devices usually perform one function
only. When an analogue device can be "programmed" in some way to
perform different functions at different times, it can be called an
analogue computer. The Differential Analyser is such a computer as it
can be set up in different configurations, i.e. "programmed", to
suit a particular problem.
In an analogue computer the process of calculation is replaced by the
measurement and manipulation of some continuous physical quantity
such as mechanical displacement or voltage, hence such devices are
also called continuous computers. The analogue computer is a powerful
tool for the modelling and investigation of dynamic systems, i.e.
those in which some aspect of the system changes with time. Equations
can be set up concerned with the rates of change of problem
variables, e.g. velocity versus time. These equations are called
Differential Equations, and they constitute the mathematical model of
a dynamic system.
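The analogue machine solves such an equation by letting a physical quantity evolve and measuring it. A digital machine approximates the same equation by manipulating symbols in discrete steps. Here is a rough Python sketch of the digital counterpart -- the equation dy/dt = -y and the step size are illustrative choices of mine:

    # Digital (symbolic) approximation of what a differential analyser
    # does by physical measurement: Euler integration of dy/dt = -y.

    def integrate(dy_dt, y0, t_end, dt=0.001):
        y, t = y0, 0.0
        while t < t_end:
            y += dy_dt(y) * dt  # a discrete symbolic step, not a measured voltage
            t += dt
        return y

    print(integrate(lambda y: -y, y0=1.0, t_end=1.0))  # ~0.368, i.e. e**-1

The digital version's accuracy is limited by arithmetic precision and step size, not by the quality of measuring instruments.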
How Do Universal Turing Machines Differ from the ENIAC?
The previous explanation spelled out the differences between a digital
computer and an analog computer. But what is the unique
characteristic of a digital computer? The Stanford Encyclopedia of
Philosophy's The Modern History of Computing describes
the programming of a universal Turing machine as the manipulation
of symbols stored in readable and writable memory:
In 1936, at Cambridge University, Turing invented the principle of
the modern computer. He described an abstract digital computing
machine consisting of a limitless memory and a scanner that moves
back and forth through the memory, symbol by symbol, reading what it
finds and writing further symbols (Turing). The actions of the
scanner are dictated by a program of instructions that is stored in
the memory in the form of symbols. This is Turing's stored-program
concept, and implicit in it is the possibility of the machine
operating on and modifying its own program.
The ENIAC manipulated currents and voltages to perform its calculations in the manner of a differential analyzer. But symbols are different. They are the 0s and 1s called bits. They are like letters written on paper. You don't measure them. You recognize them and manipulate them with precisely defined operations. This is the fundamental insight: the program is data. It is what makes it possible to have operating systems and programming languages, because when a program is data, you can have programs that manipulate or generate other programs.
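Here is that insight in miniature, as a Python sketch -- the generated program is invented for illustration. One program builds the text of another program and then runs it, something no plugboard-programmed machine could do:

    # Program-as-data in miniature: one program generates another
    # program as ordinary text, then executes it.

    source = "\n".join(
        f"print('2 to the power {n} is', 2 ** {n})" for n in range(4)
    )
    print(source)  # the program, sitting in memory as plain data
    exec(source)   # the same text, now running as a program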
Davis, in the same book previously quoted, on page 185, explains the role of Turing's universal machine in post-war computers:
It is well understood that the computers developed after World War II
differed in a fundamental way from earlier automatic calculators. But
the nature of the difference has been less well understood. These
post-war machines were designed to be all-purpose universal devices
capable of carrying out any symbolic process, so long as the steps of
the process were specified precisely. Some processes may require more
memory than is available or may be too slow to be feasible, so these
machines can only be approximations to Turing's idealized universal
machine. Nevertheless it was crucial that they had a large memory
(corresponding to Turing's infinite tape) in which instructions and
data could coexist. This fluid boundary between what was instruction
and what was data meant that programs could be developed that treated
other programs as data. In early years, programmers mainly used this
freedom to produce programs that could and did modify themselves. In
today's world of operating systems and hierarchies of programming
languages, the way has been opened to far more sophisticated
applications. To an operating system, the programs that it launches
(e.g. your word processor or email program) are data for it to
manipulate, providing each program with its own part of the memory
and (when multitasking) keeping track of the tasks each needs carried
out. Compilers translate programs written in one of today's popular
programming languages into the underlying instructions that can be
directly executed by the computer: for the compiler these programs are data.
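Python's standard library makes Davis's point easy to see directly. In this small sketch, which uses only the built-in compile and dis facilities, a program's source text is handed to the compiler as data, and the executable instructions come back as data that can be printed or run:

    # To a compiler, a program is data: Python's own compiler takes
    # source text in and hands back bytecode we can inspect.
    import dis

    source = "total = sum(range(10))"
    code = compile(source, "<example>", "exec")  # source string -> code object
    dis.dis(code)      # the compiled instructions, printed out as data
    exec(code)         # ...and the same object, executed
    print(total)       # 45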
Does This Mean That It Is
Possible for Computer Algorithms to be Generated by Programs as
Opposed to Being Written by Humans?
Yes. For example, consider the language SQL that is used to access information stored in a relational database. We can write in SQL a statement like:

    select name from presidents where birthdate < '1800-01-01'

This statement may be used to retrieve the names of all presidents whose date of birth is prior to January 1, 1800, assuming the database contains such information. But SQL doesn't specify the algorithm to use to do so. The implementation is free to use the algorithm of its choice. It may read all the presidents in the database one by one, test their birth dates, and print those that pass the test. Or it may instead read an index that lists the presidents in order of birth date and print the presidents at the head of the list, stopping when it finds one born after January 1, 1800. Which algorithm is best will depend on how the database has been structured, and the choice is left to the database engine.
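The same point can be sketched in a few lines of Python, assuming a toy presidents table of my own invention. Both strategies below answer the same SQL query, and nothing in the query text says which one to use:

    # Two different algorithms for one SQL query:
    #   select name from presidents where birthdate < '1800-01-01'

    presidents = [
        ("Washington", "1732-02-22"),
        ("Lincoln", "1809-02-12"),
        ("Jefferson", "1743-04-13"),
    ]

    def full_scan(rows, cutoff):
        # Read every row and test it.
        return [name for name, born in rows if born < cutoff]

    def index_scan(rows, cutoff):
        # Walk an index sorted by birthdate; stop at the first miss.
        names = []
        for name, born in sorted(rows, key=lambda r: r[1]):
            if born >= cutoff:
                break
            names.append(name)
        return names

    cutoff = "1800-01-01"
    assert full_scan(presidents, cutoff) == ["Washington", "Jefferson"]
    assert index_scan(presidents, cutoff) == ["Washington", "Jefferson"]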
Why Does a Program
Require Setting Transistors on a Modern Computer?
It is because the RAM memory of modern computers, where the program symbols are stored, is made of transistors. Setting them is required only because the symbols will not be remembered by the computer if the transistors are not set. If the computer memory is not made of transistors, a program can be loaded into memory without setting any transistors. This was the case with some early models of computers, as is reported by Martin Davis, in the same book, on page 186:
In the late 1940s, two devices offered themselves as candidates for
use as computer memory: the mercury delay line and the cathode ray
tube. The delay line consisted of a tube of liquid mercury; data was
stored in the form of an acoustic wave in the mercury bouncing back
and forth from one end of the tube to another. Cathode ray tubes are
familiar nowadays in TVs and computer monitors. Data could be stored
as a pattern on the surface of the tube.
Nowadays the symbols may be represented by transistors in silicon chips, but also as groove patterns on an optical disk, magnetic patterns on hard drives or tapes, wireless electromagnetic signals, optical waves, etc. A diversity of media is possible. Symbols are information, like ink on paper, except that computers use media other than ink. Programs may be downloaded from the Internet, and in transit the same symbols may take any of these forms at one point or another. The symbols are translated from one form to another as the information is transferred from one piece of equipment to another. At each step the meaning of the information is preserved despite the change in physical representation. It is this symbolic information that is used to program the computer.
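That independence from any particular medium is easy to demonstrate. In this small Python sketch, with invented example data, the same symbolic information survives several changes of representation and still means the same thing at the end:

    # The same symbols, preserved across changes of representation.
    import base64

    program_text = "print('hello')"              # symbols as text
    as_bytes = program_text.encode("utf-8")      # symbols as bytes (e.g., RAM)
    as_base64 = base64.b64encode(as_bytes)       # symbols re-encoded for transit
    as_bits = "".join(f"{b:08b}" for b in as_bytes)  # symbols as 0s and 1s

    # Translate back: the meaning survives every change of form.
    recovered = base64.b64decode(as_base64).decode("utf-8")
    assert recovered == program_text
    exec(recovered)  # still the same program: prints hello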
Why Does the Amicus Brief
Argue That the Setting of Transistors is Relevant to Patentability of Software?
It is because they argue that it makes software similar or comparable to some industrial-age patentable devices. From the brief, pp. 14-17:
In this respect, modern computer-related inventions are no different
from other patent-eligible innovations that have produced a new and
useful result by employing physical structures and phenomena to
record, manipulate, or disseminate information.
Perhaps the most celebrated example of such technological innovation
is Samuel Morse's invention of the electric telegraph, which (like
modern computers) employed binary encoding in conjunction with the
sequential operation of switches. Although petitioners focus almost
exclusively on the Court's rejection of his eighth claim (on which
more below), the Court allowed a number of other claims, including
the fifth. O'Reilly v. Morse, 56 U.S. 62, 112 (1854). That claim
was for "the system of signs, consisting of dots and spaces, and of
dots, spaces and horizontal lines." Id. at 86. This system, an
early version of Morse Code, was nothing other than a system for
manipulating an on-off switch -- the telegraph key -- in a prescribed
manner to produce the useful result of intelligible communications
between two parties. Indeed, although much less complex, the
telegraph system -- a web of interconnected switches spreading around
the globe, enabling binary-encoded communication -- was comparable to
the modern Internet.
The Industrial Age also knew software and hardware in a literal
sense; the core concepts in computer design and programming were
developed in this period. The principle of encoded instructions
controlling a device found application at the opening of the
Nineteenth Century, with the famous Jacquard loom, a device (still in
use today) that adjusts the warp and weft of a textile in response to
"programming" contained on punch cards. The loom's control
apparatus consists of a series of on-off switches which are
controlled by the pattern of holes punched in the cards, just as the
pattern of microscopic pits and lands on the surface of a CD can be
used to control the transistor switches inside a computer. Hyman,
supra, at 166; Patterson & Hennessy, supra, at 24.
Inventors soon seized on the "programming" principle applied in
the Jacquard loom. A defining characteristic of Babbage's
Analytical Engine, for example, was the use of punch cards, adopted
from the Jacquard loom, to store the programs run by the machine.
"Following the introduction of punched cards early in 1836 four
functional units familiar in the modern computer could soon be
clearly distinguished: input/output system, mill, store, and
control." Hyman, supra, at 166. Babbage's close friend, Ada
Lovelace (the daughter of Lord Byron), is now recognized as "the
first computer programmer" for her work developing software
programs for the Analytical Engine. Nell Dale et al., Programming and
Problem Solving with C++, 406-407 (1997).
Later in the Nineteenth Century, Herman Hollerith, a U.S. Census
Office employee, developed a means of tabulating census results using
punch cards and mechanical calculation. His method allowed the
country to complete the 1890 census two years sooner and for five
million dollars less than manual tabulation. William R. Aul, "Herman
Hollerith: Data Processing Pioneer," Think, 22-23 (Nov. 1972). The
company he founded became the International Business Machines Corp.,
and the once-prevalent IBM punch-cards were both the direct
descendent of the means used to program a Jacquard loom and the
immediate predecessor to today's CDs and other media, which contain
digitized instructions for modern computers to open and close
millions of switches.
As has been often noted by historians of technological
development, our perceptions of innovation and modernity are often
misguided -- the roots of technological change are deep, and run
farther back in our history than we perceive. See Brian Winston,
Media Technology & Society: A History 1 (1998) (arguing that
current innovations in communications technology are "hyperbolised
as a revolutionary train of events [but] can be seen as a far more
evolutionary and less transforming process"). That is certainly
true with respect to computer-related inventions.
While the hardware and software implemented by a modern e-mail
program may be orders of magnitude more complex than the dot-dash-dot
of a telegraph key, the underlying physical activity that makes
communication possible -- the sequential operation of switches -- is
fundamentally the same.
Are Modern Computers
Really Similar to These Industrial Age Devices?
No, because none of these devices implements the principle of a
universal Turing machine. For example, Andrew Hodges, a well-known
Alan Turing historian and maintainer of the Alan Turing Home Page, describes the difference
between Babbage's analytical engine and computers as follows:
So I wouldn't call Charles Babbage's 1840s Analytical Engine the
design for a computer. It didn't incorporate the vital idea which is
now exploited by the computer in the modern sense, the idea of
storing programs in the same form as data and intermediate working.
His machine was designed to store programs on cards, while the
working was to be done by mechanical cogs and wheels.
The cards storing programs for the Babbage engine were punched cards like
those used for the Jacquard loom mentioned in the brief. Martin
Davis, in the same book on pages 177-178, also mentions the Jacquard loom:
The Jacquard loom, a machine that could weave cloth with a pattern
specified by a stack of punched cards, revolutionized weaving
practice first in France and eventually all over the world. With
perhaps understandable hyperbole, it is commonly said among
professional weavers that this was the first computer. Although it is
a wonderful invention, the Jacquard loom was no more of a computer
than is a player piano. Like a piano it permits a mechanical device
to be controlled automatically by the presence or absence of punched
holes in an input medium.
The amicus brief did not explain that modern computers are programmed from symbolic information. Likewise it doesn't discuss the difference in nineteenth-century patent law between patenting an Industrial Age device that manipulates information and patenting the *information* in the device. Had they explored this difference, they would have found that the hole patterns in Jacquard punched cards and piano rolls are a much closer equivalent to computer software than any Industrial Age physical apparatus.