
What Does "Software Is Mathematics" Mean? Part 1: Software Is Manipulation of Symbols ~ by PolR

Saturday, October 13 2012 @ 08:34 PM EDT

This article is licensed under a Creative Commons License. [Article as PDF.]
You probably have heard computer professionals say that software is mathematics. You've certainly read it on Groklaw more than once. But is it true? What does that statement mean? I want to show you, first, why it's true, and I will also answer some typical criticisms. My purpose, however, is to suggest a way to develop a test for when a patent involving software is or is not
patent-eligible, now that the Federal Circuit has granted an en banc review of CLS Bank International v. Alice Corporation. The questions the court would like answered are:
a. What test should the court adopt to determine whether a computer-implemented invention is a patent-ineligible "abstract idea"; and when, if ever, does the presence of a computer in a claim lend patent eligibility to an otherwise patent-ineligible idea?
b. In assessing patent eligibility under 35 U.S.C. § 101 of a computer-implemented invention, should it matter whether the invention is claimed as a method, system, or storage medium; and should such claims at times be considered equivalent for § 101 purposes?
I suggest that a test based on manipulations of symbols would work. I'd like to explain why I think this might be the right place to draw the line.
Mathematics is a language^{1}. Software is mathematics because, according to the principles of computation theory, the execution of the software is always a mathematical computation according to a mathematical algorithm. That is, the execution of a computer program is the computer making utterances in mathematical language.
Some people think the sentence "software is mathematics" means software is described by mathematics, and then they say that everything could be described by mathematics. In that sense, they say, everything would be mathematics. If we push that logic to its conclusion nothing would be patentable, because mathematics is not supposed to be patentable subject matter. They say this is an absurd result contrary to law.
This is not what we mean when we say software is mathematics. If we want, we can describe software with mathematics. Sometimes we do. This is not the reason we say software is mathematics. We say the execution of the software is the utterance in mathematical language as opposed to the meaning of the utterance. An utterance is mathematics. The meaning of the utterance is described by mathematics. This is one important difference between these two phrases as I hope to now show you.^{2} This point is related to the distinction between three types of mathematical entities: formulas, algorithms and computations.
I'll show you that using E=mc^{2}.
The Overview
The famous equation E=mc^{2} is a mathematical formula. It is text written with mathematical symbols in mathematical language. This is an utterance in mathematical language. It is analogous to what we call a (declarative) sentence in English grammar. This formula is mathematics.
The meaning of this formula is a law of nature, actually a law of physics. It is a statement relating the mass of an object at rest with how much energy there is in this object. The energy of the object and its mass are described by the utterance in the language of mathematics. The object, its mass and its energy, are not mathematics. They are described by mathematics because they are what the formula means.
The formula implies a procedure to compute the energy when the mass is known. Here it is:

Multiply the speed of light c by itself to obtain its square c^{2}.

Multiply the mass m by the value of c^{2} obtained in step 1.

The result of step 2 is the energy E.
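The two steps above can be sketched in a few lines of Python (the function name and the one-kilogram example are mine, for illustration only):

```python
# A sketch of the algorithm implied by E = mc^2.
def energy_from_mass(m):
    c = 299_792_458        # speed of light in meters per second
    c_squared = c * c      # step 1: multiply c by itself
    return m * c_squared   # step 2: multiply the mass m by c^2

# For a mass of 1 kg the energy is about 9 x 10^16 joules.
print(energy_from_mass(1))
```

The point is not the arithmetic itself but that the formula implies this fixed sequence of steps.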
This kind of procedure is known in mathematics as an algorithm. The formula is not the algorithm. This procedure is the algorithm. Someone with sufficient skills in mathematics will know the algorithm simply by looking at the formula. This is why it is often sufficient to state a formula when we want to state an algorithm.
This algorithm is a procedure equivalent to making a logical deduction. Suppose we know the mass, how can we know the energy? We should be able to deduce it from the relationship stated by the formula. The algorithm is how we proceed with this deduction. When we use an algorithm to solve a problem we actually use a form of logic.^{3} Algorithms too are part of the language of mathematics.
The task of carrying out the algorithm is called a computation. When carrying out the algorithm with pencil and paper we have to write mathematical symbols, mostly digits representing numbers but also other symbols such as the decimal point. These writings too are utterances in mathematical language. In the example, the meanings of the utterances are the numbers representing the speed of light, its square, and the mass and energy of an object. Carrying out the algorithm is mathematics because it is making these utterances.
But what if we use a machine? We may use a digital device such as a pocket calculator or a computer to carry out the calculation. Then the symbols are no longer marks of pencil on paper. They are bits in some digital electronics circuit. These too are utterances in mathematical language, but now the machine, not the human, is making the utterances. According to the mathematical principles underlying the stored program architecture, the execution of all computer programs is the execution of a mathematical algorithm, therefore this execution is a mathematical computation. This is what "software is mathematics" means.^{4}
This was an overview of the key ideas. Let's now elaborate.
Mathematics Is a Language
Some people, I am sure, will wonder what I mean when I say mathematics is a language. For them mathematics is about mathematical subject matter like numbers, geometric figures or abstract set theory.^{5} This view is correct, but mathematics is more than this. Serious mathematical work requires writing symbols on paper or bits in a computer. The linguistic aspect is unavoidable. On the other hand these symbols are pointless without their mathematical meanings. These are two sides of the same coin. One cannot exist without the other. In this sense mathematics is a language even though mathematics is also the study of mathematical subject matter.
There is also the issue of foundations. There is a branch of mathematics called mathematical logic. This is where mathematicians define the foundations of their discipline. What is a mathematical formula? What is a theorem? What are the criteria of logic that must be met for a mathematical proof to be valid? These are some of the foundational questions answered by mathematical logic. There is also a branch of mathematics called the theory of computation, which is actually a sub-branch of mathematical logic. This is where mathematicians define what an algorithm is and what a computation according to an algorithm is. If you read these definitions in textbooks from competent authors you will find they are all elements of the language of mathematics. The definitions expressly refer to the symbols, their arrangement in syntax, and their semantics. All of mathematics ultimately relies on these foundations. In this sense, mathematics is indeed a language.
Symbols Are Abstract Ideas
Languages are written with symbols, typically letters, digits and punctuation marks. Mathematical language adds to this list various mathematical symbols.
There is a difference between the symbols and their physical representations. Think of letters for example. We could say letters are marks of ink on paper but we would be wrong. If we use a pencil they are marks of lead on paper. Or when you use a computer they are arrangements of pixels on the screen. If you walk down a city street you may see on buildings neon signs and carvings in stone. Symbols are abstract ideas. We can recognize them when we see their physical representations. But still, the symbols are not the representations.
Bits are symbols. Like letters they can be represented multiple ways. In pathways between transistors they are voltages. In main memory they are electrical charges stored in capacitors. On hard disks they are magnetic moments. On some optical disks they are cavities in the surface. There are many other forms bits can take. Like letters, bits are abstract ideas.
Symbols Need Not Be Watched by Humans to Have Meanings
Some people expect symbols to be something visible to humans. This view is too strict. Symbols need not be visible to the human eye to carry their meanings.^{6} While bits are not normally visible, they may be indirectly observed by a programmer using debugging tools if he so wishes. Then the programmer can read them and understand their meanings.
An extreme example is the cuneiform tablets from the ancient Mesopotamian civilizations. Thousands of these tablets were buried in the sands of the Middle East for centuries, unknown to everyone living during this period. The ancient languages were forgotten. But still, archaeologists were able to find the tablets and decipher them. What happened to their meanings in all those years when no living human either knew the language or was even aware that there were tablets buried there? The meanings didn't disappear. They were patiently waiting for the archaeologists.
Something similar happens to Groklaw comments. After they are typed and the commenter clicks on the "submit" button, a series of electronic adventures begins. The bits are sent over the Internet and must go through countless communication wires and pieces of equipment before reaching the Groklaw database in some data center. Then the comments are stored in a database kept on magnetic storage. When a reader clicks on the link to display the comment, more electronic adventures occur as the comment is transferred across the Internet from the Groklaw database to the user's screen. The comments are unobserved when they are stored in the database and they are unobserved when they travel across the Internet. But, when they reach their destination, the comments still have their meanings.
Computations Don't Process Electrons, Computations Process Symbols
Let me tell you some fundamentals of digital electronics and the underlying principles of mathematics. I hope this will clarify the difference between hardware and symbols.
Bits are from a two-symbol alphabet, written as 0 and 1. The relevant branch of mathematics is boolean algebra, which is a part of logic. The bits stand for truth values: 0 stands for false and 1 stands for true. Boolean algebra also recognizes three operators, and, or and not, which can be applied to truth values. These operators correspond to the ordinary operations of logic of the same name.
Let me describe an algorithm for each boolean operator.
The and operator accepts two arguments. It is true when both arguments are true. It is false otherwise. The corresponding algorithm is:

Read two symbols as input.

If both symbols are 1 then the answer is 1.

Otherwise the answer is 0.
The or operator also accepts two arguments. It is true when either or both arguments are true. It is false otherwise. The corresponding algorithm is:

Read two symbols as input.

If both symbols are 0 then the answer is 0.

Otherwise the answer is 1.
The not operator reverses the truth value of its sole argument. The corresponding algorithm is:

Read one symbol as input.

If the symbol is 1 then the answer is 0.

Otherwise the answer is 1.
There are many more operations permitted in boolean algebra. They are all computed by combining the three elementary operators. Everything in boolean logic can be done by assembling multiple instances of these three simple algorithms into a bigger algorithm.
Please look at these algorithms attentively. You will see that they are all operations on the symbols alone. They can be executed without referring to their meanings. All that is needed is to identify whether the symbols are 1 or 0 and act accordingly. There is no need to decide whether they correspond to any truth value. This is typical of all mathematical algorithms. Mathematicians have defined their criteria for what is a mathematical algorithm as opposed to some other kind of procedure. This observation is a consequence of some of these criteria.^{7}
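As an illustration, the three recipes above can be written in Python as operations on the characters '0' and '1' alone (a sketch of mine; notice the code never consults any notion of truth or falsehood):

```python
# Each function inspects only which symbols it received and answers
# with a symbol, exactly as the three algorithms in the text.

def and_op(a, b):
    # If both symbols are 1 the answer is 1, otherwise 0.
    return '1' if a == '1' and b == '1' else '0'

def or_op(a, b):
    # If both symbols are 0 the answer is 0, otherwise 1.
    return '0' if a == '0' and b == '0' else '1'

def not_op(a):
    # If the symbol is 1 the answer is 0, otherwise 1.
    return '0' if a == '1' else '1'

print(and_op('1', '0'))  # 0
print(or_op('1', '0'))   # 1
print(not_op('1'))       # 0
```

The functions work on uninterpreted symbols; nothing in them refers to the meanings "true" or "false".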
Boolean operators may be implemented by means of digital electronics precisely because there is no need to refer to the meanings of the bits. If electronic signals are treated as representations of symbols, the activity of the circuit is the mirror image of the algorithms.
For example, let's assume an engineer decides that a voltage of 0V means the bit 0 and a voltage of 0.5V means the bit 1. He also decides he will not use other voltages. He can build circuits called logic gates which correspond to the boolean operators.
The AND gate implements the boolean and operator:

The circuit takes two voltages as input on two incoming wires and produces one voltage as output on one outgoing wire.

If both incoming voltages are 0.5V the outgoing voltage is 0.5V.

Otherwise the outgoing voltage is 0V.
The OR gate implements the boolean or operator:

The circuit takes two voltages as input on two incoming wires and produces one voltage as output on one outgoing wire.

If both incoming voltages are 0V the outgoing voltage is 0V.

Otherwise the outgoing voltage is 0.5V.
The NOT gate implements the boolean not operator:

The circuit takes one voltage as input on one incoming wire and produces one voltage as output on one outgoing wire.

If the incoming voltage is 0.5V the outgoing voltage is 0V.

Otherwise the outgoing voltage is 0.5V.
Please take the time to check that the operations on the voltages are indeed the mirror image of the algorithms. When voltages are interpreted as bits as decided by the engineer, the action of the circuit is to carry out operations of boolean logic. More complex algorithms may be implemented by assembling multiple logic gates in a more complex circuit. This is one of the principles of digital electronics. Circuits manipulate symbols by manipulating their physical representations as voltages.
Now ask yourself, what happens if the engineer reverses the convention? What happens if he decides that 0V means 1 instead of 0? Of course this implies that 0.5V would mean 0 instead of 1. Would the circuits still be the mirror image of the algorithms? The answer is "yes", except that the AND gate is now computing the or operator while the OR gate now computes the and operator. The NOT gate still computes the not operator. In short, the AND and OR gates have swapped their roles. Please look again at the gates and see for yourself.
What does this mean? It means the bits are not the voltages. One cannot tell which algorithm is computed by a circuit just by looking at what the circuit does to the voltages. The same circuit will implement two different algorithms depending on how the voltages are interpreted as symbols.
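A small simulation makes this concrete. The sketch below (all names mine) models one fixed circuit behavior on voltages, then reads the voltages under the two opposite conventions; the same behavior computes and under one convention and or under the other:

```python
# One "circuit": outputs 0.5V only when both input voltages are 0.5V.
def gate(v1, v2):
    return 0.5 if v1 == 0.5 and v2 == 0.5 else 0.0

# Convention A: 0V means bit 0, 0.5V means bit 1.
to_volts_a = {0: 0.0, 1: 0.5}
to_bits_a = {0.0: 0, 0.5: 1}

# Convention B: 0V means bit 1, 0.5V means bit 0.
to_volts_b = {1: 0.0, 0: 0.5}
to_bits_b = {0.0: 1, 0.5: 0}

for x in (0, 1):
    for y in (0, 1):
        a = to_bits_a[gate(to_volts_a[x], to_volts_a[y])]
        b = to_bits_b[gate(to_volts_b[x], to_volts_b[y])]
        assert a == (x & y)  # under convention A the gate computes and
        assert b == (x | y)  # under convention B the gate computes or

print("same circuit, two algorithms")
```

The function `gate` never changes; only the dictionaries mapping voltages to bits do. That is the whole point: the algorithm computed is determined by the interpretation, not by the physics.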
As I said before, bits are symbols. They are abstract ideas. They can be represented by any voltages an engineer may choose. The pair 0V and 0.5V is a common industry standard but other standards exist. The pair 0V and 0.35V is another common one. Bits may also be represented by things other than voltages. In main memory they are electric charges in capacitors.^{8}
Some people think a computation is the electronic process, the transistor activity manipulating electricity which occurs inside the circuit. They are wrong. A computation is about processing bits. It is about the symbols. It is not about voltages or electrons.
Bits Must Be Organized Into Syntax
One cannot randomly put together a bunch of symbols and expect them to mean something. Symbols must be grouped together according to rules of syntax. Something like E=mc^{2} is a mathematical formula while =e)Ru+ is not. This is partly because E=mc^{2} complies with the rules of mathematical syntax while =e)Ru+ does not.
Similarly bits in a computer must be organized according to rules of syntax. Often they are grouped to represent numbers. There are several ways to do this.
One possibility is called "unsigned integers". Under this convention bits represent the natural numbers 0, 1, 2, 3 … up to the maximum permitted by the available quantity of bits. If we have 8 bits at our disposal, unsigned numbers are in the range 0 to 255. Another possibility is called "2's-complement^{9} format", which allows negative numbers. The same 8 bits in 2's-complement format can represent numbers in the range -128 to 127.
Notice how the two syntaxes represent different ranges of numbers. This means that sometimes the same series of bits may mean two different numbers depending on the choice of syntax. For example 11111111 means 255 as an unsigned integer and it means -1 in 2's-complement format. It is not possible to tell which it is just by examining the bits because they are the same. To tell the difference we must know which syntax the one who wrote the bits has used.
This is a key idea. The one who writes the bits gets to choose the syntax. If you don't know which one he chose you can't read the bits.
This is another reason why data is not the same thing as its electrical representation. Let's suppose an engineer aligns 8 voltages in a row and tells you which voltage corresponds to a 0 or a 1. He asks you to read the number. Can you read it? You can't unless the engineer tells you his choice of syntax. Just knowing the voltages and the bits they stand for isn't good enough.
This ambiguity may also apply to circuits. For example there are circuits called adders for adding binary numbers. The relevant algorithms for addition are such that the same adder which adds unsigned numbers also correctly adds numbers in 2's-complement format.^{10} For example when this circuit adds 10000000+01111111, resulting in 11111111, it could mean either 128+127=255 in unsigned integer format or -128+127=-1 in 2's-complement format. It is not possible to know which it is from an examination of the bits and the circuit structure alone. We must also know which convention on the representation of numbers is used.
The circuit is identical. The bits are identical. The meaning is different because the syntax is different. The user of the circuit gets to choose whether he adds unsigned integers or 2's-complement numbers because the same circuit does both. The computation is not the electronic activity of the circuit. It is an abstract idea involving symbols and a choice of syntax to represent meanings.
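This can be checked in a few lines of Python (a sketch of mine, with Python integers standing in for the 8 bits):

```python
# The same 8 bits read under two syntaxes.
bits = 0b11111111
unsigned = bits                                  # unsigned reading: 255
signed = bits - 256 if bits >= 128 else bits     # 2's-complement reading: -1
print(unsigned, signed)  # 255 -1

# The same adder serves both syntaxes: 8-bit addition is modulo 256.
a, b = 0b10000000, 0b01111111
total = (a + b) % 256    # the bits 11111111
# unsigned reading:        128 + 127 = 255
# 2's-complement reading: -128 + 127 = -1
print(bin(total))  # 0b11111111
```

Nothing in the addition itself tells us which reading is intended; the choice of syntax lives outside the circuit.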
There are lots of rules of syntax for various types of data. There are rules for floating point numbers, which are needed when a computation requires fractions. Other rules are character encodings such as ASCII or Unicode. There are standards for very complex data like video and audio files. Many of these rules are international standards. Others are defined by the authors of particular programs for internal use.
Data Has Meaning
Symbols have meanings. In the case of boolean algebra, 0 means false and 1 means true. In the case of arithmetic, series of bits mean numbers. Sometimes the numbers encode letters, and the resulting text, such as a legal brief or a contract, has meaning. Sometimes the bits are stored in files or databases. This data also has meaning.
Mathematical symbols may simultaneously have two types of meanings. There is the abstract mathematical meaning, like truth values and numbers. Then there may be some non-mathematical interpretation given to the abstract mathematical meaning. For example a number may be a count of apples in a grocery inventory, or it may be the speed of a rocket in flight.
Like symbols and syntax, the meanings of the symbols are not a physical component of the circuit doing the computation. When you use a computer to maintain the inventory of a grocery, there are bits representing information about lemons and other food. These bits are descriptions of the food; they are not the food. The lemons are not electronic components of the machine.
Data and Computations Are Contents
To sum up what we have seen so far, we have a symbolic language defined by a series of syntactic and semantic relationships as set forth below:

There is some physical substrate, often elements of a physical computer, which represents symbols.

There are conventions of syntax on how the symbols are organized.

When symbols have the proper syntax, they have meaning in mathematics, such as boolean or numerical values.

The mathematical values may be used, alone or in combination, to represent other entities such as letters of the alphabet or the complex structures found in files and databases, such as videos and database records.

Then the mathematical language may also be given some non-mathematical interpretations.
All these relationships are defined by various conventions. There is the convention which identifies which voltage represents which bit. There are conventions on the format of numbers. There are conventions on how to use numbers to represent letters, digits and other characters. There are conventions on file formats and data structures. There are many more conventions. This is not an exhaustive list.
These conventions come from diverse sources. Some of these conventions are defined by the computer engineers who have designed the computer components. Others are industry standards. Some more are part of culture, like the meanings of the English words we find in textual information. Other conventions are defined by the programmer when he defines the data used by his program. Regardless of the source, all these conventions are intangible. They are neither physical elements of circuitry nor physical phenomena. They are elements of knowledge which are required to be able to read and write the data.
Where do algorithms fit in this picture? Algorithms are independent from hardware and meanings. They lie in the middle; they are about the symbols and their syntax.
Algorithms, in the mathematicians' sense of the word, are methods for manipulating the symbols. A computation is the manipulation carried out according to the algorithm. Symbols are not something physical like voltages as the discussion of boolean gates and voltages has shown. Computations and algorithms are not hardware processes because they manipulate the symbols and not the voltages.
An algorithm must be machine executable. It must be the type of procedure which could be executed by logic gates which may react only to voltages representing symbols.^{11} No interpretation of the meanings is possible. The gates can't do that. The meanings of the symbols are understandable to someone who can read the bits, but they are not used by the computing circuit when carrying out a computation. Therefore an algorithm doesn't depend on the meanings to be executed.
If algorithms are neither a hardware process nor an operation on meaning, what are they? Algorithms are manipulations of the uninterpreted symbols and their syntactic arrangement.
Please think of a legal brief. You need a physical stack of paper with marks of inks. Or you need electrons for your electronic file. But the brief is not a stack of paper with marks of inks and it is not electrons in a file. Like a legal brief, algorithms and computations are not physical entities. They are contents. They require a physical representation but still remain different from the representation.
But unlike a legal brief, algorithms are limited to the symbols and their syntax. They must be executable without having to interpret the semantics. In this respect, algorithms are more like a printing press which is able to print the letters without referring to their meanings. But unlike the printing press, algorithms are not machines, because they are not physical entities.
How Computer Hardware Carries Out Computations
With this background information, we may ask: how does a computer carry out a computation? We have already seen part of the explanation in the discussion of the boolean gates. When the voltages are used to represent bits, the gates implement boolean operators. If we give an algorithm to a competent engineer, he will find an arrangement of gates that will carry out the corresponding computation. When the engineer etches the gates on an integrated circuit he makes a dedicated circuit, because this circuit can carry out only the computation for which it has been designed. But a general purpose computer can carry out any computation as required by the software. Making a general purpose computer requires something more.
Mathematicians have discovered a special category of algorithms called universal algorithms. Several universal algorithms are known.^{12} Each of these algorithms can compute every function which is computable, provided we give them a corresponding program as input. In effect, any universal algorithm can emulate the behavior of every other algorithm. This is why we call them universal. A universal algorithm is like a glorified Swiss army knife for computing: a single algorithm suffices to serve the purposes of all of them.
A dedicated circuit for a universal algorithm may be implemented with boolean gates. This circuit can only compute the universal algorithm for which it has been designed. This is exactly like a dedicated circuit for any other algorithm, but with one difference: the chosen algorithm is universal. This makes the circuit a programmable general purpose computer. The reason is that the universal algorithm will carry out any computation when it receives the corresponding program as input. The task of writing such a program and giving it to the computer is called "programming the computer".
Please note that a program is data. It is made of bits. This is like giving a video as input to a video player. In both cases we are giving data as input to an algorithm.
Most modern general purpose computers are built according to a common engineering pattern called the stored program computer architecture. These computers are dedicated to carrying out a universal algorithm called the instruction cycle.^{13} A program for this algorithm is a series of instructions. Each instruction corresponds to an operation which must be performed whenever the instruction is executed.
A computer has many parts. Two of them stand out as the most important in the execution of instructions.^{14} The first one is main memory. This component is exactly what its name implies. It is a container for information, a place where bits can be read and written. The instructions must be stored in the computer's main memory before they can be executed. The other important computer component is called the processor, or the CPU.^{15} This component too is what its name implies. It reads the instructions from memory and carries out the corresponding operations.
The instruction cycle works as follows:^{16}

The CPU reads an instruction from main memory.

The CPU decodes the bits of the instruction.

The CPU executes the operation corresponding to the bits of the instruction.

If required, the CPU writes the result of the instruction in main memory.

The CPU finds out the location in main memory where the next instruction is located.

The CPU goes back to step 1 for the next iteration of the cycle.
As you can see, the instruction cycle executes the instructions one after another in a sequential manner. In substance the instruction cycle is a recipe to "read the instructions and do as they say".^{17}
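As a rough illustration, the cycle above can be sketched as a toy interpreter. Everything here (the three-instruction language, the instruction names, the memory layout) is invented for illustration and is vastly simpler than a real CPU:

```python
# A toy stored-program machine. Memory holds both the program and the
# data; the loop below is the "read the instructions and do as they
# say" recipe of the instruction cycle.
def run(memory):
    pc = 0                        # location of the next instruction
    while True:
        op, arg = memory[pc]      # steps 1-2: fetch and decode
        if op == "HALT":
            return memory
        if op == "INC":           # step 3: execute the operation
            memory[arg] += 1      # step 4: write the result back
        elif op == "JMP":
            pc = arg              # step 5: jump to another instruction
            continue
        pc += 1                   # step 5: otherwise, next in sequence

# A two-instruction program that increments the cell at address 100 twice.
memory = {0: ("INC", 100), 1: ("INC", 100), 2: ("HALT", None), 100: 0}
print(run(memory)[100])  # 2
```

Feeding a different program into the same `run` loop makes it carry out a different computation; the loop itself never changes. That is the universality of the instruction cycle in miniature.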
This is an algorithm in the sense mathematicians give to this word. It is a series of steps for performing a task by manipulating symbols which meet the requirements mathematicians have defined for a procedure to be a mathematical algorithm. We say software is mathematics because the execution of all computer programs is the execution of a universal mathematical algorithm. In this sense software is making utterances in the language of mathematics.
Please recall the objection mentioned at the beginning of this article, that some people think we say software is described by mathematics. You should now see more precisely why we are not arguing that. The instruction cycle is not a description of the computer. It is what the computer does. We are arguing that software is mathematics because what the computer does is a mathematical computation according to a mathematical algorithm.
Sources
The mathematical model of computation corresponding to the stored program computer is called the RASP or Random Access Stored Program. It has been documented in chapter 1 of The Design and Analysis of Computer Algorithms [Aho 1974, see references, below]. This is how mathematicians and theoretical computer scientists state in the mathematical language the algorithm implemented as the instruction cycle.
The RASP is a universal version of a family of algorithms called register machines. This family is described in Models of Computation and Formal Languages [Taylor 1998, see references, below] chapter 5.
I have explored how the algorithm of the instruction cycle can be stated in the language of lambda-calculus. This statement is more detailed because it covers a wider range of features commonly found in modern computers. This file can be downloaded here (PDF).
Some people think that the presence of random elements brings an algorithm outside the scope of mathematics. This is incorrect. Mathematics, including probability theory, is able to handle randomness. In particular there are in mathematics random processes called stochastic processes. Physicists routinely use probability theory to state some of the laws of quantum mechanics. In computer science a normal deterministic algorithm can be transformed into a randomized algorithm if a source of random numbers is treated as an input from which data is read. Then the procedure of a probabilistic algorithm is no different from a deterministic one; the difference is in the input. This is precisely how operating systems such as Linux treat hardware random number generators. From a software perspective they are system files from which random data is read.
The semantics of some programming languages have been defined mathematically. The text of a program written in one of these languages must be the description of a mathematical algorithm as defined by the language semantics. An example of such a language is Standard ML. The definition of the language is found in The Definition of Standard ML (Revised) [Milner 1997, see references, below]. See Commentary on Standard ML [Milner 1991, see references, below] for a commentary and Concurrent Programming in ML [Reppy 2007, see references below] for the mathematical definition of I/O and concurrent programming libraries. Another example of a programming language with a mathematically defined semantics is Coq. This language is documented in Interactive Theorem Proving and Program Development, Coq'Art: The Calculus of Inductive Constructions [Bertot 2004, see references below].
More mathematical references may be found in previous articles published on Groklaw.
A Remark on Algorithms and Abstract Ideas
Mathematical algorithms are a subcategory of procedures for manipulating symbols. The instruction cycle is also a procedure for manipulating symbols.
Some legally skilled people say that all procedures for manipulating symbols are abstract ideas. This is interesting. The question of whether a particular computation is a mathematical algorithm may be superfluous. As soon as we show a procedure is manipulating symbols we don't need to determine whether this manipulation is mathematical. We already have proof that it is an abstract idea.
I think this circumstance may be used to develop a test for when a patent involving software is patent eligible. Case law from the Federal Circuit suggests that the legal difficulty is to find a workable definition of some key terms.
The Federal Circuit had problems agreeing on what the term "mathematical algorithm" means. From In re Warmerdam:
One notion that emerged and has been invoked in the computer-related cases is that a patent cannot be obtained for a "mathematical algorithm." See In re Schrader, 22 F.3d 290, 30 USPQ2d 1455 (Fed. Cir. 1994) and cases discussed therein. That rule is generally applied through a two-step protocol known as the Freeman-Walter-Abele test, developed by our predecessor court, the first step of which is to determine whether a mathematical algorithm is recited directly or indirectly in the claim, and the second step of which is to determine whether the claimed invention as a whole is no more than the algorithm itself. See Schrader, 22 F.3d at 292, 30 USPQ2d at 1457.
The difficulty is that there is no clear agreement as to what is a "mathematical algorithm", which makes rather dicey the determination of whether the claim as a whole is no more than that. See Schrader, 22 F.3d at 292 n. 5, 30 USPQ2d at 1457 n. 5, and the dissent thereto.
Trying to define what is an abstract idea doesn't seem to be working either. From MySpace, Inc. v. GraphOn Corp.:
When it comes to explaining what is to be understood by "abstract ideas" in terms that are something less than abstract, courts have been less successful. The effort has become particularly problematic in recent times when applied to that class of claimed inventions loosely described as business method patents. If indeterminacy of the law governing patents was a problem in the past, it surely is becoming an even greater problem now, as the current cases attest.
In an attempt to explain what an abstract idea is (or is not) we tried the "machine or transformation" formula—the Supreme Court was not impressed. Bilski, 130 S.Ct. at 3226-27. We have since acknowledged that the concept lacks a concrete definition: "this court also will not presume to define `abstract' beyond the recognition that this disqualifying characteristic should exhibit itself so manifestly as to override the broad statutory categories of eligible subject matter… ." Research Corp. Techs., Inc. v. Microsoft Corp., 627 F.3d 859, 868 (Fed. Cir. 2010).
Our opinions spend page after page revisiting our cases and those of the Supreme Court, and still we continue to disagree vigorously over what is or is not patentable subject matter. See, e.g., Dealertrack, Inc. v. Huber, ___ F.3d ___ (Fed. Cir. 2012) (Plager, J., dissenting-in-part); Classen Immunotherapies, Inc. v. Biogen IDEC, 659 F.3d 1057 (Fed. Cir. 2011) (Moore, J., dissenting); Ass'n for Molecular Pathology, 653 F.3d 1329 (Fed. Cir. 2011) (concurring opinion by Moore, J., dissenting opinion by Bryson, J.); see also In re Ferguson, 558 F.3d 1359 (Fed. Cir. 2009) (Newman, J., concurring).
This effort to descriptively cabin § 101 jurisprudence is reminiscent of the oenologists trying to describe a new wine. They have an abundance of adjectives—earthy, fruity, grassy, nutty, tart, woody, to name just a few—but picking and choosing in a given circumstance which ones apply and in what combination depends less on the assumed content of the words than on the taste of the tongue pronouncing them.
I think the concept of manipulation of symbols is well-defined. It should be possible to develop a workable legal test on this basis. One obvious possibility is to modify the Freeman-Walter-Abele test that was rejected in Warmerdam, testing for manipulations of symbols instead of mathematical algorithms. Another possibility is a literal application of Mayo v. Prometheus: "to transform an unpatentable law of nature into a patent-eligible application of such a law, one must do more than simply state the law of nature while adding the words 'apply it'." This alternative test would compare the utility of the claim taken as a whole with the utility of the manipulations of symbols taken alone, with the words "apply it" added. There must be a substantial difference between the two; otherwise the claim is not patent-eligible according to Mayo.
I think the courts should be able to apply this kind of test because the fundamental problem with the definition of terms is solved.
In connection with the en banc review by the Federal Circuit of CLS Bank International v. Alice Corporation, let's think now about symbols in that context.
It goes without saying that symbols must have a physical representation. This doesn't make a procedure for manipulating symbols any less abstract. The digital equivalent of processes for pushing a pencil and making marks of lead on paper should not be patent eligible.
Arguing that patenting such a process is not patenting mathematics in the abstract is a charade. There is no way of carrying out a mathematical computation without physically representing the symbols. Therefore it should not matter whether the manipulation of symbols is claimed as a method, a system, a storage medium or any other physical device. It should not be patent eligible in any of these scenarios.
The meanings given to the symbols should make no difference. I suggest this as the line: As long as the procedure manipulates symbols their meanings will never confer patent eligibility.
I have a riddle that may help convince you why this is a desirable result.
Please take a pocket calculator. Now use it to compute 12+26. The result should be 38. Now give some non-mathematical meanings to the numbers; say they are counts of apples. Use the calculator to compute 12 apples + 26 apples. The result should be 38 apples. Do you see a difference in the calculator circuit? Here is the riddle: what kind of non-mathematical meanings must be given to the numbers to make a patent-eligible difference in the calculator circuit?
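The riddle can be restated as a short Python sketch. The function below is a hypothetical stand-in for the calculator circuit; the "apples" interpretation exists only in the comments and variable names, which is the point.

```python
def add(a, b):
    # The "circuit": it manipulates numerals and nothing else.
    return a + b

# Pure arithmetic:
print(add(12, 26))  # → 38

# Now reinterpret the numbers as counts of apples. The meaning lives
# in the reader's head; the procedure executed is exactly the same.
apples_in_basket = 12
apples_in_crate = 26
print(add(apples_in_basket, apples_in_crate))  # → 38 again; the "apples" meaning changed nothing
```

No renaming of the inputs can produce a different body for add; that is the sense in which the meanings of the symbols cannot make a difference in the circuit.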
This is the kind of question the Federal Circuit is asking about computers. When I read case law about section 101 patentable subject matter, I see the court analyze the meanings of the bits to determine whether the invention is abstract. But at the same time I see the court working from a legal theory where a software patent is actually a hardware invention. Can't they see this is a contradiction? This is the point of the riddle. The meanings of the bits are not a hardware component of the machine, and they do not influence the steps of the computation.
Activities like input, output and the mental steps of reading, writing and thinking about the meaning of symbols should not confer patent eligibility on a manipulation of symbols. These steps are just more manipulations of symbols.
Data gathering which achieves nothing more than obtaining the data to be manipulated should not confer patent eligibility on a manipulation of symbols.
_____________
References
[Aho 1974] Aho, Alfred V., Hopcroft, John E., and Ullman, Jeffrey D., The Design and Analysis of Computer Algorithms, Addison-Wesley Publishing Company, 1974
[Ben-Ari 2001] Ben-Ari, Mordechai, Mathematical Logic for Computer Science, Second Edition, Springer-Verlag, 2001
[Bertot 2004] Bertot, Yves, Castéran, Pierre, Interactive Theorem Proving and Program Development, Coq'Art: The Calculus of Inductive Constructions, Springer, 2004
[Devlin 2000] Devlin, Keith, The Language of Mathematics, Making the Invisible Visible, W.H. Freeman, Henry Holt and Company, 2000
[Epstein 1989] Epstein, Richard L., Carnielli, Walter A., Computability: Computable Functions, Logic, and the Foundations of Mathematics, Wadsworth & Brooks/Cole, 1989
[Hamacher 2002] Hamacher, V. Carl, Vranesic, Zvonko G., Zaky, Safwat G., Computer Organization, Fifth Edition, McGraw-Hill Inc., 2002
[Kleene 1967] Kleene, Stephen Cole, Mathematical Logic, John Wiley & Sons, Inc., New York, 1967. I use the 2002 reprint from Dover Publications.
[Kluge 2005] Kluge, Werner, Abstract Computing Machines, A Lambda Calculus Perspective, Springer-Verlag Berlin Heidelberg, 2005
[Milner 1991] Milner, Robin, Tofte, Mads, Commentary on Standard ML, The MIT Press, 1991
[Milner 1997] Milner, Robin, Tofte, Mads, Harper, Robert, MacQueen, David, The Definition of Standard ML (Revised), The MIT Press, 1997
[Reppy 2007] Reppy, John H., Concurrent Programming in ML, Cambridge University Press, first published 1999, digitally printed version (with corrections) 2007
[Taylor 1998] Taylor, R. Gregory, Models of Computation and Formal Languages, Oxford University Press, 1998
Footnotes
1 There are actually several mathematical languages because each branch of mathematics has its own requirements for notation. But for simplicity let's say it is a language.
2 There is more to mathematics than utterances in mathematical language. Mathematical entities like numbers and geometrical figures are also mathematics. The article mentions the utterances in mathematical language because this is the part of mathematics the sentence "software is mathematics" refers to.
Some people try to draw a contradiction between being described by mathematics and being mathematics. They argue that when something is described by mathematics it is not mathematics, because the thing being described is not the description. There is no such contradiction. It is possible and common to describe entities like numbers, geometric figures and algorithms with mathematical formulas. For example a circle centered at the origin with a radius of 1 is described by the equation x^{2}+y^{2}=1. Therefore mathematical entities can both be mathematics and be described by mathematics.
Similar situations occur outside of mathematics. For example it is possible to describe the syntax of an English sentence using English. Then the sentence is both English and described by English.
3 This is a fact of mathematics. One mathematically precise statement of the connection between algorithms and logical deductions is called the Curry-Howard correspondence. Another mathematically precise statement of this connection is the resolution algorithm used in mathematical logic. This discovery has led to the development of logic programming. For the more familiar family of imperative programming languages, the mathematically precise statement of this connection is called Hoare logic.
4 There is more to software than program execution. But according to patent law, the aspect of software which is patented is the program execution. We say "software is mathematics" as a slogan to remind people that the aspect of software which is patented is entirely a mathematical computation.
5 I am sure our mathematician friends will have something to say about what counts as mathematical subject matter. I chose to keep this discussion simple. If you need to explore the question I suggest you read [Devlin 2000], especially the prologue. This book discusses what mathematics is in language easily understandable by non-mathematicians.
6 Bits are copyrightable whether or not they are watched by humans.
7 See [Kleene 1967] p. 223, where Stephen Kleene defines the notion of algorithm as mathematicians see it. You will find several requirements. Among them there is this one:
In performing the steps we have only to follow the instructions mechanically, like robots; no insight or ingenuity or intervention is required of us.
If the meanings of the symbols are ambiguous it is impossible to execute the algorithm in this manner. Resolving the ambiguity is an intervention that requires insight or ingenuity, so this requirement would not be met. On the other hand, if the symbols are unambiguous, then for purposes of executing an algorithm mechanically, like robots, their meanings are superfluous. Why should we need the step of noticing that the symbol means true when we already know it is 1? There is no reason. The algorithm works just as well when we act on the symbols alone, as these boolean algorithms do.
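As a hypothetical illustration of acting on the symbols alone, the lookup table below defines AND purely as a rule for rewriting the characters "0" and "1". Nothing in it refers to truth, falsehood, or voltages; those meanings can be layered on afterward without changing the procedure.

```python
# AND defined as a pure symbol-rewriting table: a pair of characters
# in, one character out. No notion of "true" or "false" appears.
AND_TABLE = {
    ("0", "0"): "0",
    ("0", "1"): "0",
    ("1", "0"): "0",
    ("1", "1"): "1",
}

def and_gate(x, y):
    """Follow the instructions mechanically, like a robot: look up
    the pair of symbols and emit the symbol the table lists."""
    return AND_TABLE[(x, y)]

print(and_gate("1", "1"))  # → 1
print(and_gate("1", "0"))  # → 0
```

Whether "1" is later read as true, as 5 volts, or as an apple makes no difference to any step the procedure performs.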
This logic applies to all algorithms in the sense mathematicians give to this word. This point has been noticed by Richard Epstein and Walter Carnielli. See [Epstein 1989] p. 70, where they describe a series of models of computation used to define classes of algorithms:
What all of these formalizations have in common is that they are all purely syntactical despite the often anthropomorphic descriptions. They are methods for pushing symbols around.
8 This assumes the memory technology is DRAM. Other technologies may represent bits differently.
9 This is pronounced two's complement.
10 See [Hamacher 2002] page 368.
11 I am referring to the criteria mathematicians have defined for a procedure to be an algorithm in the sense of mathematics. See footnote 7 above. While mathematicians have not stated their criteria in these terms, they are equivalent in practice to requiring that algorithms be machine executable manipulations of symbols. In the case of digital electronics this means the algorithm must be computable by a circuit made of logic gates.
12 A theoretically important universal algorithm is called the universal Turing machine. Other universal algorithms are used in practice for programming purposes. Instruction cycles are the preferred universal algorithm for imperative programming, which is the most widely used programming paradigm. In logic programming, algorithms such as SLD resolution are used to solve problems described in logical terms using Horn clauses. See Mathematical Logic for Computer Science, Second Edition [Ben-Ari 2001] chapters 7 and 8 for a complete discussion. In functional programming, various implementations of normal order β-reduction are used. See [Kluge 2005] for a book dedicated to several implementations of normal order β-reduction. It should be noted that normal order β-reduction is a universal algorithm which modifies its program as the computation progresses. It cannot be assumed that the program is always unchanged by the computation, as is usually (but not always) the norm in imperative programming.
13 Other universal algorithms exist as indicated in footnote 12. However the instruction cycle is the chosen one when building a stored program computer.
14 Main memory is the equivalent of a pad of paper in a pencil and paper calculation. The CPU is the electronic equivalent of whoever handles the pencil. We may view the stored program computer as an automated version of a human carrying out a pencil and paper calculation, but much faster.
Other components of stored program computers are the peripheral devices for input and output, communication components called buses that interconnect everything, and other parts such as the power supply, casing, etc.
15 The acronym CPU stands for Central Processing Unit.
16 This is a simplified explanation for readability. Here is how [Hamacher 2002] p. 43 describes the hardware implementation of an instruction cycle. (emphasis in the original)
Let us consider how this program is executed. The processor contains a register called the program counter (PC) which holds the address of the instruction to be executed next. To begin executing a program, the address of its first instruction (i in our example) must be placed into the PC. Then, the processor control circuits use the information in the PC to fetch and execute instructions, one at a time, in the order of increasing addresses. This is called straight-line sequencing. During the execution of each instruction, the PC is incremented by 4 to point to the next instruction. Thus, after the Move instruction at location i + 8 is executed, the PC contains the value i + 12, which is the address of the first instruction of the next program segment.
Executing a given instruction is a two-phase procedure. In the first phase, called instruction fetch, the instruction is fetched from the memory location whose address is in the PC. This instruction is placed in the instruction register (IR) of the processor. At the start of the second phase, called instruction execute, the instruction in IR is examined to determine which operation is to be performed. The specified operation is then performed by the processor. This often involves fetching operands from the memory or from processor registers, performing an arithmetic or logic operation, and storing the result in the destination location. At some point during this two-phase procedure, the contents of the PC are advanced to point at the next instruction. When the execute phase of an instruction is completed, the PC contains the address of the next instruction, and a new instruction fetch phase can begin.
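The two-phase fetch/execute procedure Hamacher describes can be sketched as a toy interpreter in Python. This is an illustration only: the instruction set (LOAD/ADD/STORE/HALT), the encoding as (opcode, operand) pairs, and the single accumulator register are invented for the sketch; no real CPU is built exactly this way.

```python
def run(memory):
    """Toy instruction cycle: fetch the instruction at PC, decode it,
    execute it, advance PC. Instructions live in the same memory as
    the data they manipulate, as in a stored program computer."""
    pc = 0    # program counter
    acc = 0   # accumulator register
    while True:
        opcode, operand = memory[pc]  # phase 1: instruction fetch via PC
        pc += 1                       # advance PC to the next instruction
        if opcode == "LOAD":          # phase 2: instruction execute
            acc = memory[operand]     #   fetch an operand from memory
        elif opcode == "ADD":
            acc += memory[operand]    #   perform an arithmetic operation
        elif opcode == "STORE":
            memory[operand] = acc     #   store the result at the destination
        elif opcode == "HALT":
            return memory

# Program: memory[6] = memory[4] + memory[5], then halt.
mem = {0: ("LOAD", 4), 1: ("ADD", 5), 2: ("STORE", 6), 3: ("HALT", None),
       4: 12, 5: 26, 6: 0}
result = run(mem)
print(result[6])  # → 38
```

The loop never inspects what the numbers mean; it mechanically fetches, decodes, and executes symbols, which is the universal algorithm the article describes.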
17 Every stored program computer contains a hardware implementation of this algorithm. This is not the only possible way to implement it. Programs like the virtual machine of a Python bytecode interpreter are software implementations of the instruction cycle. There are also universal algorithms which are not instruction cycles. These algorithms too may be implemented in software. Ultimately the software universal algorithms are executed by the hardware instruction cycle.
Programs intended to be given as input to software implementations of some universal algorithm are not instructions for the hardware instruction cycle. This is a point to remember, because some people think all software is instructions for the CPU. Sometimes they build legal arguments based on this idea. These arguments are often wrong because they don't take into account all the possible forms software can take.



Authored by: Anonymous on Saturday, October 13 2012 @ 09:18 PM EDT 
Will you be filing this as a brief? Because you really should; the judges need to see this.

Wayne
http://madhatter.ca


Authored by: rsteinmetz70112 on Saturday, October 13 2012 @ 10:41 PM EDT 
It does not matter what mathematicians, software engineers, or programmers define as "mathematics"; it only matters what judges or legislators define as "mathematics".

Additionally, I think (but cannot be sure) that the case law generally refers to "algorithms" or "equations" as legally defined, not "mathematics" in general.

Finally, this whole line of reasoning seems to be advancing a technical (or theological) argument against a point of public policy, which in our society should be decided by the legislative branch of government.

Rsteinmetz - IANAL therefore my opinions are illegal.
"I could be wrong now, but I don't think so."
Randy Newman - The Title Theme from Monk


Authored by: Anonymous on Saturday, October 13 2012 @ 11:23 PM EDT 
"An algorithm must be machine executable. It must be the type of procedure which could be executed by logic gates which may react only to voltages representing symbols"

This is your problem. I doubt any computer has ever existed that consisted only of logic gates. I know modern computers have resistors, capacitors, oscillators, fans, mechanical storage devices, environmental sensors, keyboards, mice, monitors, Ethernet ports, etc. All of these can and do impact calculations.

For example, one of the first things you learn about Turing machines is that one Turing machine can emulate two separate Turing machines. The problem is, those Turing machines are assumed to stay in lock step. Two separate CPUs with two separate oscillators will never stay in lock step, so you need a random factor for one CPU to emulate two. Furthermore, the proof that one Turing machine is as good as two assumes that there is no data corruption. In the real world, this isn't true either.

If the goal is to argue that software that runs on real hardware is mathematics, you're going to have to abandon Turing machines, Turing computability, and the definition of algorithm you learned in CS.

I believe that a Turing machine with an oracle that provides a random symbol on demand might be enough to model real hardware that doesn't interact with people. I don't believe you can argue that software that interacts with people is mathematics without arguing that people are mathematics. I could be wrong; it might be possible to claim people are just another source of entropy.

To switch to a Turing machine with a random oracle, I think you need to change your definition of algorithm to something like this (I should do this in TeX, but I'm way too rusty):

Let S be a finite set of symbols, and let X(S) be the set of all finite strings of symbols from S. An algorithm F is a map F: X(S) × X(S) → [0,1] such that for every a in X(S), the function F_a: X(S) → [0,1] with F_a(x) = F(a,x) is a probability distribution, and there exists a Turing computable function H: X(S) × X(S) → X(S) and a probability distribution G: X(S) → [0,1] such that for every a, b in X(S) there is a c in X(S) with G(c) = F(a,b) and H(c,a) = b.

Sorry about the run-on, but math doesn't lend itself to periods, at least not without typesetting. What all that is trying to say is that there is a Turing machine that, with the right random input, does what you want. The point of F is that the first argument is the input, the second is the output, and the result is the probability of getting the given output from the given input. If you require the range of F to be {0,1} then you should get the standard definition of algorithm.

Anything that satisfies the above definition of algorithm is clearly math. The hard part is to show that the above yields anything interesting. It's clearly more than Turing computable because it can generate random numbers. I think this will give you enough, but someone needs to check to make sure this definition of algorithm is inclusive enough to eliminate most software patents.

None of this is really new or hard. I would be stunned if there wasn't some mention of this somewhere in the literature. It's possible the hard work has already been done. Someone who can search the appropriate journals should do so. I think the keywords Turing, random and oracle will get some good hits if you can eliminate things related to the database company.


Authored by: Anonymous on Saturday, October 13 2012 @ 11:29 PM EDT 
There are good ideas in here, but most lawyers will not pay
attention because the writing is not structured in a way
that they can follow.
1. Give an outline of your argument. This is part 1,
apparently. Why is it divided into two parts? What will be
in part 2? For that matter, what do you consider the most
important conclusion of part 1?
1a. Tell 'em what you're gonna say, then say it, then tell
'em what you said. Don't just start talking about
mathematical utterances without explaining why a lawyer
should care. And don't assume that when a reader reaches
the end of the page they understand what the main conclusion
should be. Tell them again.
2. Complete your argument. Engineers often respond to a
question by giving conclusive evidence but fail to draw a
conclusion. Asked "what is the capacity of this hard drive?"
they may say, "It's a Model 1234 from IBM", which is correct
but not sufficiently helpful: sure, that's enough
information with which to obtain the desired answer (e.g.,
from Google), but if you are being asked a particular
question, give the answer to that question. Once you
identify the question you're trying to answer (maybe it's
"what is a test that judges could apply to exclude software
from patentability that would be mathematically correct"),
you can organize your writing around giving a clear answer
to that question.
3. Don't bury the lede. The paragraph where you suggest a
new test (or two) is underneath the heading "A remark on
algorithms and abstract ideas." Headings are supposed to
help the reader follow the argument, not to hide the main
idea!
4. Give examples and anticipate counterarguments. Maybe
this will be in your part two, but let's say we adopt your
test: what result do we get if the software is implemented
as a hardware circuit instead?
Long tangent related to number 4:
I think if it uses memory at all, then it is "writing
symbols" by your definition, so not patentable. Is that a
correct interpretation of your test, and if so, is that a
desirable result?
(I believe it's possible to implement a calculation
without using memory. I'd guess I/O buffers don't count,
or the result would be that almost nothing digital is
patentable even if everything is implemented in hardware.
In any case, you could implement a special-purpose
calculator in hardware (maybe analog hardware) without
buffering the I/O. Would such a piece of hardware be
non-algorithmic and therefore patentable? Example: the
Enigma machine. The intended use is to manipulate symbols,
but is that what the Enigma machine itself does?)
5. Keep it simple. My tangent to number 4 probably doesn't
belong in this piece, but would be good for a follow-up.


Authored by: Anonymous on Sunday, October 14 2012 @ 12:23 AM EDT 
I agree with previous commenters that this is pure ontology. It will convince
neither judges nor even programmers. Worse, it looks a daunting read; I couldn't
even get halfway through.
It would be so much easier to point out that
all software is recursively definable functions, as explained in the Church-Turing
thesis. Or that it can be expanded into boolean circuits. This
way even the intellectually challenged would have to recognize this is
mathematics. And they wouldn't even have to read anything, since not only can
the above be illustrated to neophytes with PowerPoint crayons, but it's been
extensively researched by illustrious mathematicians (some of whom they might
even know, such as Alan Turing) since before computers existed.


Authored by: Anonymous on Sunday, October 14 2012 @ 01:46 AM EDT 
As someone who spent a third of his long career in engineering and who has
obtained an advanced degree in that field, and who has also spent two-thirds of
his career in the patent field, I think I am qualified and entitled to ask:
Why is it that the arguments presented in this article sound so much
like this to me?


Authored by: Anonymous on Sunday, October 14 2012 @ 02:59 AM EDT 
Neither is patentable.


Authored by: Anonymous on Sunday, October 14 2012 @ 05:54 AM EDT 
Software is not mathematics.
Mathematics itself is a broad discipline, from pure mathematics concerned with
the logical consequences derivable from sets of axioms, to applied mathematics
concerned with how the results and methods of pure mathematics can be applied to
practical problems.
Software can to an extent be analysed by mathematics, and all software depends on
algorithms, often trivial, obvious algorithms, which are a part of applied
mathematics, but software itself is more than this.
There is a massive freedom in how software is structured and how it performs its
functions. The design of software is critical, and this design is not principally
concerned with algorithmic matters but with how to make the software easy to
understand, how to make it easy to extend, and how to manage the complexity
inherent in whatever task is being performed. Software engineering is a
massively practical and creative discipline with only occasional need for
significant mathematical skills.
A good example of the distinction is work I did in medical imaging. At the time,
some complex front-end signal processing was performed by banks of laser-trimmed,
matched analog electronics. This was expensive. I derived some mathematical
relations that made it possible to achieve a similar or better result digitally
with the practical components available at the time. This is perhaps mathematics,
but what is interesting is that two years later, working with another company, I
saw they were using my idea to process signals but with no understanding at all
of the underlying maths. They had decided there must be a relationship between
the signals concerned and had simply measured it empirically and encoded it in
look-up tables. They then implemented it in software. The real driver for the
idea was not the mathematics but the state of electronics at the time in terms
of ADCs, multipliers and the performance of low-cost DSPs.
The point is that software is not mathematics but engineering. It is concerned
with solving practical problems and not concerned with formal logic or proofs
except as tools.
A second point is that there are many routes to a solution, and what drives the
types of solutions is the technical and commercial environment more than the
ability of individual engineers or developers.
There is a real problem with patents, especially SW patents, in the US, but the
problem is not that software is mathematics; it is the practical
consequences of the current system. The mathematical argument is false and in
any case has the character of religious dogma. The exclusion of mathematics from
patentable material was surely itself based on a practical assessment of the
consequences.


Authored by: Ian Al on Sunday, October 14 2012 @ 06:02 AM EDT 
I read that in some of PolR's previous pieces. I was sure he would be right, but
I did not really get it.
A while ago I thought of a simple
demonstration of math being a manipulation of symbols. It's as easy as 1, 2,
3.
I start by counting up from zero:
zero
1
The
colour, red, ^{1}
I
II
0100
The colour, green
six
.......
1000
The
initial numeric value is 0 and the math algorithm is to obtain the next numeric
value in the series by adding one, 1 or the colour brown.
Obviously,
trying to write counting on paper forces one to use symbols. The math algorithm
just becomes the manipulation of the symbols. Let's avoid all that confusing
symbolic stuff by actually uttering the math. Of course, my utterances are the
words in the English language that symbolise numeric values: one, two, three,
four, five, six, seven, eight. (Uhm... un, deux, trois... forget
it!)
It changes nothing. The stuff on paper is just another way of
uttering the math. It's the same with a calculator and a
computer.
Let's try PolR's suggestion of hiding the utterances of
counting in a computer so that we don't see or hear it.
Let Count=0
Let Brown=1
Do Loop
  Print Count
  Let Count=Count plus Brown
End Loop
If I really did not want to see what was happening
to my manipulation of symbols in the computer, I should not have printed out the
values of Count.
There must be an exception to the rule that every
processor instruction is an instruction to manipulate the symbols it loads from
memory. In fact, there isn't. The only things that the processor can get from
and put to memory are the binary symbols '0' and '1'. The only thing that the
processor can do is execute a symbol manipulation on the symbols it gets from
memory.
Even the loading and saving of symbols from and to memory is a
computer version of writing and rubbing out symbols on the paper tape of
Turing's hypothetical math machine.
In summary, math is the
manipulation of symbols. The computer processor is only capable of manipulating
symbols. Any software that is executed by a computer processor is only
manipulating symbols because that is all the processor can do. Ergo, software is
math.
 Regards
Ian Al
Software Patents: It's the disclosed functions in the patent, stupid!
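Ian Al's counting demonstration can be pushed one step further. Here is a sketch of the same idea (mine, not his; the names are invented for illustration) in which counting is carried out purely by rewriting digit symbols according to fixed rules. The "increment" never treats the string as a number; it manipulates symbols the way a person carries digits on paper.

```python
# Rewrite rules for a single decimal-digit symbol.
NEXT_DIGIT = {'0': '1', '1': '2', '2': '3', '3': '4', '4': '5',
              '5': '6', '6': '7', '7': '8', '8': '9'}

def increment(symbols: str) -> str:
    """Rewrite a string of digit symbols into the 'next' string."""
    digits = list(symbols)
    i = len(digits) - 1
    while i >= 0:
        if digits[i] == '9':           # carry: rewrite '9' as '0', move left
            digits[i] = '0'
            i -= 1
        else:                          # rewrite this digit and stop
            digits[i] = NEXT_DIGIT[digits[i]]
            return ''.join(digits)
    return '1' + ''.join(digits)       # all nines: prepend a new symbol

count = '0'
for _ in range(12):
    count = increment(count)
print(count)  # '12'
```

The point is the same one Ian Al makes: at no step does anything "know" these are numbers; the loop only rewrites marks according to rules, yet the arithmetic comes out right.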


Authored by: jesse on Sunday, October 14 2012 @ 07:10 AM EDT 
"3. The result of step 2 is the energy E."
I believe this is usually referred to as the "energy equivalent E",
rather than "the energy E". But I'm not sure of the current thinking on
that.


Authored by: Anonymous on Sunday, October 14 2012 @ 09:19 AM EDT 
The problem with the "Software is Mathematics" argument
is that it is counterproductive when it comes to arguing
against software patents. The reason is that patents are a
legal construct, and must therefore be countered in legal
terms, by comparing them with practice and limitations on
patentability in other areas, such as patents on hardware,
chemical and manufacturing processes, and accepted
non-patentability in other areas, e.g. novels and other
creative content. The use of abstractness as a reason for
non-patentability of software, which was why software was
not patentable in the past, also needs to be explained in
clear legal terms and resurrected to block patentability of
software. The non-patentability of scientific principles,
and the non-patentability of specifications which have
limited options, or are arbitrary and do not correspond to a
real optimum or benefit, should also be emphasised.
I fear that introducing a new basis for patentability based
on "Software is Mathematics" (which lawyers and the
public in general will not understand) will simply expand
the argument for even wider patentability: because
public-key encryption patents have already been
granted, there is precedent for the patenting of
mathematics in certain contexts; at least, that is how
lawyers will see it.
I think it is far more productive to explain things to
lawyers, law makers, and the public in terms of accepted
limits to patentability that they do understand from other
areas. For example, allowing the patenting of novel plots is
absurd, so why is the analogue of this in software being
allowed? You can't patent a method of fast
multiplication that involves looking up log tables, adding
the values and looking up antilogs on paper, so why was
Intel granted a patent to use this on a computer? You need
to explain that for patents on alloys, you cannot be granted
a patent if you cannot show that the specific values or
ranges you are patenting are optimal in a stated way. So,
for example, you can't claim ownership of, say, steel-chrome
alloy specifications with between 2% and 5% chrome and 1%
molybdenum just by quoting the percentages you wish to
claim; you need to identify the optimum values and explain
exactly how they are optimal. It is also not possible to
patent the use of things that have a limited number of
options, for example the use of symbols, cyphers, names etc.
Why, then, is it possible to claim ownership of an arbitrary
range of parameters in the case of standards and protocols
required for interoperability simply by quoting a set of
parameters?
The reasons why we have patents should also be made clear in
the simplest terms  it isn't something given to us by god
or a natural right, but an artificial limited monopoly
granted by the state for the sole purpose of promoting
innovation for the benefit of the consumer/public. Ideas are
something that everyone should be able to have and
implement. The blocking and monopolization of ideas by state
decree is a very unnatural thing, and should be limited to
only those cases where a definite benefit to the public and
the consumer can be shown.


Authored by: BitOBear on Sunday, October 14 2012 @ 09:31 AM EDT 
A good example of the contextual significance of symbolism is the two oldest
(?) known scripts, called "Linear A" and "Linear B". We are
able to translate Linear B, but Linear A remains a complete mystery.
So the symbols exist, but lacking context, they have been deprived of meaning
for us. In theory some day we will rediscover the context for those symbols and
therefore the meaning(s) and therefore the useful data still stored in Linear
A.
The thing is, we have that data. It exists. We can and do manipulate it, but only
to store it, retrieve it, stare at it, and wonder.
This means, by definition, a computer (just like a person) can acquire, control,
and transmit "real" data while maintaining zero comprehension of any
_meaning_ for that data. We know it exists and it happens every day.
Homomorphically (is that a real word?), computers don't typically even
"care" about things like the signed or unsigned nature of a domain of
bits. Take two operands of eight bits and apply any of the mathematical
operators (and, not, or, add, subtract, multiply, divide, etc.); you will, by
design, execute exactly the same transformation whether one or both of the
operands are signed or unsigned. That is, the bit patterns that go in and come
out of the transformation are identical. (This is _why_ "two's
complement" was chosen to represent signed numbers in modern computers.)
(ASIDE: two's complement also removes some "unfortunate" problems with
the older representations that did care... things like "negative zero"
in one's-complement representations. These were removed _because_ they were
inferior mathematical representations. Computers have been, over their
developmental lifecycle, "scrubbed clean" of as many of the
mathematically problematic features as possible.)
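The signed/unsigned point is easy to check directly. A minimal sketch (mine, not the commenter's; it assumes 8-bit patterns with wrap-around addition) showing that one add produces one bit pattern, which the two interpretations merely read differently:

```python
def add8(a_bits: int, b_bits: int) -> int:
    """Add two 8-bit patterns, discarding any carry out of bit 7."""
    return (a_bits + b_bits) & 0xFF

def as_signed(bits: int) -> int:
    """Interpret an 8-bit pattern as a two's-complement value."""
    return bits - 256 if bits & 0x80 else bits

a, b = 0b11111111, 0b00000010   # 255 unsigned, -1 signed; 2 either way
result = add8(a, b)

print(f'{result:08b}')          # 00000001 -- one bit pattern...
print(result)                   # 1  (unsigned reading: 255 + 2, mod 256)
print(as_signed(result))        # 1  (signed reading: -1 + 2)
```

The adder performed one transformation on symbols; "signed" and "unsigned" exist only in how we choose to read the result.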
The real implication of this whole context-deprived data processing is that any
declaration of "purpose" in a program, or in a patent for same, is not
material to the technology or "invention".
That is, data doesn't have any meaning to the computer, which is
demonstrated by the fact that computers assign and require no _meaning_ for the
symbols at all.
So all of the cases of "transferring data between a client and a server for
the purposes of dividing the video stream decoding work", or whatever,
should be stripped of words like "client" and "server" and of all
the words after "for the purpose of". Such declarations of
"invention" read "transferring data between computers" and
nothing more. The computer doesn't know _why_, and there is no actual distinction
between "client" and "server" when it comes to moving data.
There is just the computer that has data and the computer that doesn't. An HTTP
request, for instance, involves data moving from the "client" to the
"server" before the data moves the other way. The "client"
sends data to get the "server" to send data, and that data in turn
usually gets the "client" to send more data to get a
"server" to send more data in turn. The only real distinction is
"we call the thing that goes first the client" or "the server is
the one that waits in standby for another machine". Once a connection is
made things are much less clear cut than most people understand.
Those semantics really don't exist as such.
Software, and so software patents, are chock-a-block full of distinctions that
make no difference, particularly once you understand that the contexts imposed
for convenience in the minds of the various programmers and writers are stripped
away utterly before the computer starts pushing symbols around.


Authored by: BitOBear on Sunday, October 14 2012 @ 10:18 AM EDT 
The contents of footnote 16 are wholly dependent on the "PC" (e.g.
Intel 8086/8088 architecture) and not computing in general. As such the six
steps outlined in the main text are false. The following is a "better"
definition of what happens in a modern computer:
(0) The computer always has a concept of "the next instruction", which
is located by a value stored in a known location; this location is usually
called "the instruction pointer".
(1) The computer reads the "next instruction" from the memory pointed
to by the instruction pointer into an electrical buffer inside the chip. This
_always_ changes the instruction pointer deterministically, advancing it
"by one instruction".
(2) The act of putting a bit pattern into that buffer electromechanically
activates subsections of the chip, each section of which has a fixed
electromechanical function. (The onness and the offness of the bits literally
turns parts of the CPU on or off.) These sections will be called
"domains" for a moment.
(3) Each electromechanical domain performs a simple action, such as reading
or writing memory by a particular means (directly, through a register,
etc.) or doing arithmetic (add, subtract, compare, etc.).
NOTE 1: One of the things that can be read or (over)written to is the
instruction pointer. Writing it will change the designated instruction.
Once all of the functional domains are finished, the processor then goes back to
step one (1).
NOTE 2: "instruction decoding" is something of a misnomer, and it only
takes place on Complex Instruction Set Computer (CISC) machines. When a complex
instruction is encountered, say a "long multiply", the CPU will replay
a known series of simple instructions that are "built into" the chip.
This replay is electromechanical itself. So there is no thought, or really any
actual "decoding", taking place. This is a convenience for the
architecture so that "long or hard" operations like multi-byte
"multiplication" don't have to be done "longhand".
There is a real electrical and mechanical reason that executing one instruction
is called "turning the crank": executing one instruction is
identical to pulling the handle on a mechanical adding machine. Indeed, in
Babbage's original difference and analytical engine designs there was a literal
crank and each full turn was one instruction.
For reference see http://www.retrothing.com/2006/05/curta_miniature.html which
is a primary example of a good mechanical adder. Imagine functional domains that
set the various value levers and then one domain to turn the crank. A different
instruction might not turn the main crank, but turn the clear-accumulator
(outer) crank. The selection of which crank to turn might be the lowest bit of
the instruction for example.
Computational execution of each instruction inside a computer is factually
electromechanical and functionally atomic.
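The cycle described in steps (0) to (3) can be sketched as a toy register machine. This is my own illustration, not a model of any real CPU; the instruction names and register names are invented:

```python
def run(program, registers):
    ip = 0                                # (0) the instruction pointer
    while ip < len(program):
        op, *args = program[ip]           # (1) fetch the next instruction...
        ip += 1                           # ...advancing "by one instruction"
        if op == 'LOAD':                  # (2)-(3) the opcode selects which
            registers[args[0]] = args[1]  # fixed, simple function fires
        elif op == 'ADD':
            registers[args[0]] += registers[args[1]]
        elif op == 'DEC':
            registers[args[0]] -= 1
        elif op == 'JNZ' and registers[args[0]] != 0:
            ip = args[1]                  # NOTE 1: overwriting the pointer
    return registers                      # changes the designated instruction

# 5 * 3 computed by repeated addition.
program = [('LOAD', 'acc', 0), ('LOAD', 'x', 5), ('LOAD', 'n', 3),
           ('ADD', 'acc', 'x'), ('DEC', 'n'), ('JNZ', 'n', 3)]
print(run(program, {})['acc'])  # 15
```

Everything the loop does is "turning the crank": fetch, advance the pointer, fire one fixed function, repeat.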
== Now to Opine ==>
The real reason software should not be patentable is that using a machine
exactly how it was designed to operate (and likely already patented to operate
that very way) isn't eligible for patent _again_.
I can't patent "only sewing straight lines using a sewing machine that can
do more". Why should I be able to patent "repeatedly comparing numbers
to a known number to find a 'good' match"?
The computer is already patented. Patenting its application to a particular
purpose that it, as manufactured, is capable of doing, is not inventing
anything. It's using something.
Software can only make a computer do what it can already do by design, and what
it is already patented to do (or not) by the designer.


Authored by: Anonymous on Sunday, October 14 2012 @ 10:42 AM EDT 
PolR,
If you are auditing the main "REPLY" comment strings only, then
please go back to your "THANKS" string above, as there are
questions and suggestions (read all posts, thank you).
Clicky back to your "Thanks" link, with the following "new" comments.


Authored by: sciamiko on Sunday, October 14 2012 @ 11:18 AM EDT 
It's all a matter of levels of indirection. Lewis Carroll understood it well, as
he wrote in Through the Looking-Glass, and What Alice Found There (Ch
8):
"... The name of the song is called 'Haddocks'
Eyes'"
"Oh, that's the name of the song, is it?" Alice said, trying to
feel interested.
"No, you don't understand," the Knight said, looking a
little vexed. "That's what the name is called. The name really is 'The
Aged Aged Man'"
"Then I ought to have said 'That's what the song
is called'?" Alice corrected herself.
"No, you shouldn't: that's quite
another thing! The song is called 'Ways and Means' but that's only
what it's called, you know!"
"Well, what is the song, then?"
said Alice, who was by this time completely bewildered.
"I was coming to
that," the Knight said. "The song really is 'A-sitting On A Gate', and
the tune's my own invention."
Talking of inventions, Alice later
recognises the tune as another.
s.


Authored by: Anonymous on Sunday, October 14 2012 @ 11:28 AM EDT 
you're a [redacted] lawyer.
End of story.


Authored by: Anonymous on Sunday, October 14 2012 @ 11:31 AM EDT 
Unfortunately, the problem is not a lack of understanding (though there
certainly is that). The fundamental problem is that Rader and his clowns have
decided that software must be patentable. That is a religious position on their
part. You cannot argue it. They have decided that they are going to make sure
that software is patentable and they will make any ruling, no matter how
illogical or irrational, that is required to achieve that result.


Authored by: JamesK on Sunday, October 14 2012 @ 12:21 PM EDT 
Actually, you can get by with two operators: NOT, and either AND or OR. In
logic integrated circuits, the two most common functions are NAND (NOT-AND) and
NOR (NOT-OR). You can build *EVERYTHING* in logic using just one of those two
functions and nothing else.

The following program contains immature subject matter. Viewer discretion is
advised.
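JamesK's claim about NAND is easy to verify. A small sketch (mine; Python functions stand in for the logic gates) building NOT, AND and OR out of NAND alone:

```python
def nand(a: int, b: int) -> int:
    """The one primitive: output 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

def not_(a):    return nand(a, a)              # NOT from NAND
def and_(a, b): return not_(nand(a, b))        # AND from NAND
def or_(a, b):  return nand(not_(a), not_(b))  # OR from NAND (De Morgan)

# Print the truth tables to confirm they match the usual gates.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', and_(a, b), or_(a, b))
```

With NOT, AND and OR in hand, every other Boolean function follows, which is why a chip full of nothing but NAND gates suffices.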


Authored by: Anonymous on Sunday, October 14 2012 @ 12:21 PM EDT 
But is it all avoidable if they include
"a system including a computer program where an input device translates
data to symbols and an output device translates the processed symbols back to
data"?
Sure the computer is just doing symbol manipulation, aka math mapping from one
domain to another. But when you wrap it with an input device and output device
you get real world effect.
And that is where the patent is. Real world effect of idea space inventions.
The point then is that if simply adding the input and output to the
software makes it a real-world machine, you might as well just patent the
software, since to be useful any software is going to have inputs and outputs.


Authored by: Anonymous on Sunday, October 14 2012 @ 12:30 PM EDT 
If the invention contains software and the software can be
written in any computer language, then that aspect of the
invention is protected by Copyright law only.


Authored by: JamesK on Sunday, October 14 2012 @ 12:35 PM EDT 
There was (is?) a logic display at the Ontario Science Centre in Toronto. In
that display, the bits were balls falling down clear tubes to the logic
function. You could see what happened when one or two balls entered an AND or
OR gate. There were more complex functions too.

The following program contains immature subject matter. Viewer discretion is
advised.


Authored by: Anonymous on Sunday, October 14 2012 @ 12:39 PM EDT 
Mathematics is a language as are FORTRAN, C, French, German, English, etc.
Mathematics is a precise means of expressing certain concepts.
The fact that something can be expressed in a language doesn't make that thing a
language.
If being able to express an idea in a language makes it unpatentable, then there
can be no patents. Language is how we communicate.
The real problem is absurdly broad claims that lack novelty, such as the LED PWM
patent.


Authored by: Anonymous on Sunday, October 14 2012 @ 01:01 PM EDT 
Hi  I've never gotten an actual Groklaw account, but have
been a daily visitor since very near the beginning.
Anyway  *please* consider adding a permanent link to a
collection of all of PolR's excellent essays. I don't see
anything like that in the sidebar currently.
These concepts really need to be made clear to our decision
makers, and I have never seen anyone lay out the issues with
clarity and precision anywhere approaching our worthy
contributor. We ought to make it as easy as possible to
locate these writings.
David Bruce


Authored by: Anonymous on Sunday, October 14 2012 @ 01:21 PM EDT 
Both the program and the data are linguistically
encoded information. No lawyer or jurist could make the negation
of this statement stand in court.
Inside any executing computer, programs input information
commonly referenced as data, but look at what is actually
happening. The distinction between program and data is
arbitrary and recursively layered like pancake stacks. In
reality, there is no absolute and enforceable difference
between program and data. One's current perspective and
motivation informs one's use of natural language, but
experts understand that there is no theoretical difference,
just linguistic shorthand. Any program is merely data to
another program encoded as hardware circuits, firmware
controllers, or software statements.
Show me any U.S. jurist so incompetent on the bench as to
adjudicate for the patenting of data. Oh! Crap! There is
the DC Court of Appeals turning the 21st Century into
the Dark Ages.
U.S. Copyright law governs intellectual property (IP) rights
for data, not U.S. Patent law. Let them patent hardware.
Whether firmware is patentable depends, in my humble opinion,
on its context within the system hardware.


Authored by: Teran on Sunday, October 14 2012 @ 01:37 PM EDT 
Where is the mathematical proof?
Words, words, words.
Lazy. Not fit for peer review.


Authored by: OpenSourceFTW on Sunday, October 14 2012 @ 02:18 PM EDT 
Suppose I am given a program to "run", with the instructions on a
sheet of paper (memory), a stack of blank paper (to use as registers), a pencil
(data paths, perhaps), a metronome to represent the clock, and some switches and
indicator lights (I/O). Why can't it be done?
I don't need a computer to run a program. I could conceivably "run" it
by hand, deciding what the instructions mean, copying data from the
"memory" to my "registers", performing ADD/SUB/logic-gate
operations on paper, flipping switches, and reading indicator lights.
There is nothing inherently special about software, it is simply a type of
mathematics that involves Boolean algebra and a truly massive number of
operations. That is why no one will bother doing it by hand, but that doesn't
mean it couldn't theoretically be done.
If most programs only involved a few operations, no one would venture to say
they were patentable. It is only because they involve
millions/billions/trillions of operations that this argument seemingly (but
incorrectly) is said to have merit.
Note: you might say my setup could be patented (perhaps a special station with
the switches and lights arranged in a certain fashion), but you would only be
patenting the HARDWARE, not the software.
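The hand execution described above can be made concrete. Here is a sketch (my own; the table and names are invented for illustration) in which ADD is carried out exactly as a person would do it on paper: one column of '0'/'1' symbols at a time, by table lookup alone, with no arithmetic beyond the rules of Boolean algebra:

```python
# Rules for one column: (a, b, carry_in) -> (sum, carry_out).
FULL_ADDER = {
    ('0', '0', '0'): ('0', '0'), ('0', '0', '1'): ('1', '0'),
    ('0', '1', '0'): ('1', '0'), ('0', '1', '1'): ('0', '1'),
    ('1', '0', '0'): ('1', '0'), ('1', '0', '1'): ('0', '1'),
    ('1', '1', '0'): ('0', '1'), ('1', '1', '1'): ('1', '1'),
}

def add_on_paper(a: str, b: str) -> str:
    """Add two equal-length bit strings right to left, like hand arithmetic."""
    carry, out = '0', []
    for x, y in zip(reversed(a), reversed(b)):
        s, carry = FULL_ADDER[(x, y, carry)]   # pure symbol lookup
        out.append(s)
    return carry + ''.join(reversed(out))      # carry-out becomes the top bit

print(add_on_paper('0110', '0111'))  # 6 + 7 -> '01101' (13)
```

A real program is just a truly massive number of steps like this; nothing about scale changes what kind of operation each step is.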


Authored by: Anonymous on Sunday, October 14 2012 @ 02:52 PM EDT 
Thanks for sharing your ideas. I believe we had a previous discussion
regarding software / computation theory / mathematics as they relate to patents
a while back.
I continue to make the same basic affirmation: mathematics does not process
anything. But I do believe there is space for patenting software even while
adhering to the idea that software is mathematics. I will _try_ to share my
idea of why, and then I will suggest a simple way to reform software
patents in court by suggesting that presumptions be clarified and/or changed.
Background:
Software/mathematics is a language, and so is English.
They are means by which ideas can be communicated. A computer is a system which
understands the processor instructions used in software and interprets those
instructions by performing their natural operations (within the context of the
symbols establishing their domain).
The algorithm example you created for e=mc^2 is an example of a mathematical
formula translated into one possible English version understood even by people
not familiar with mathematical equations. Those instructions could
theoretically be followed and some meaningful answer may emerge (within the
context of the symbols established by the domain encompassing the English
language).
Analysis:
So how can algorithms and patents relate? I'm going to simplify so that the
complexity of the multi-layer modeling of modern software and computer hardware
doesn't interfere with my analogy.
The simplest computer and requisite hardware for my example is an abacus. My
software is going to be instructions describing bead manipulations on the
abacus, however it's written on a sheet of paper. The operator of the abacus
takes responsibility for loading (reading) and interpreting the instructions, as
well as changing the state (by means of moving beads) of our computer.
At this point we have a complete system, all of which can be described by
computation theory as well as mathematics. I may even be able to patent the
instructions I've written on the paper, but only if I could create a presumption
that the algorithm would be implemented on the hardware. Of course, the main
issue with creating any such presumption is that it would rely on the
presumption of the presence of the operator and, more importantly, his or her
volition to process my algorithm on the machine.
It's this "where the tire meets the road" point where we need to
examine the patentability of software.
So what about those presumptions as they apply to modern computers and software?
Well, the operator-related function of loading (reading) has been automated into
the CPU and chipset. The only dependency is whether or not the algorithm is
available to the CPU (whether the software is on disk). However, there are still
operator-related functional requisites that need to be addressed.
The main issue, and the one I believe is interesting to examine, is the
volition of the operator to process the algorithms available. When
Samsung sells a smartphone, it is delivered in the off state. All the software
loaded onto the device might be there, but it's undeniably inaccessible because
of the machine-state prerequisites that executing those algorithms requires. There
is no useful work or processing that may be performed. Patent infringement is
impossible at this point.
When the owner of the shiny smartphone plugs the phone in and turns it on, is
there an issue where the volition of the operator comes into play? Does the
operator not provide the genesis of all the volitions that caused the machine to
start processing the available software? The same can be said about starting
the execution of a program.
Obviously software can load other software and execute it according to the
design or idea the programmer may have in mind. Does providing such instructions
in software constitute evidence of some volition of the software maker? Perhaps
it does. But how does that volition compare to that of the operator who turns
the device on?
Conclusion:
Most cases of software patent infringement simply presume that software is
executed merely because it exists within the memory systems of a device.
Furthermore, the volition of the software maker and that of the device operator
are often joined as though they were one. But the underlying point where
"the rubber meets the road", the software instructions actually being performed
by the hardware, is where the validity of software patents rests. It also
happens to be the area where there is no established legal precedent and
everything seems to be presumed.
So obviously one way of reforming software patents is to establish or change
these presumptions. Perhaps create some tests, or assign some weight to operator
participation in the model. Certainly, by the establishment of a judicial
presumption that the operator who turns the device on is the genesis of all
operator volition contained in the device, manufacturers of the systems could
enjoy the protection of not willfully infringing any patents. Such a
presumption might even be enough to exempt them completely. The establishment of
such a presumption would obviously make software patents worthless, although
they might still technically be valid.


Authored by: Anonymous on Sunday, October 14 2012 @ 03:29 PM EDT 
My apologies, I did not read very far. The abstract and first sentence
presented enough issues that I stopped there.
First, if mathematics is a language, then what does language mean? As languages
may be described via mathematical notation, and we know as an absolute that
natural languages cannot be described formally, while mathematics requires
formal definitions, mathematics is a formal language described mathematically.
That seems to be a loop. Or, looking at it another way, if mathematics is a
language, what is the grammar? What are the words and sentences that are
provably in and out of syntax? Mathematics is... well, I can't finish that
sentence, but the definition would have to account for axioms and extensions
built on logic and proof, just as a definition of science would have to include
observation and hypotheses.
I think a more promising way to look at software is to understand that all
machine code is comprised of instructions which come from a finite set of
possible instructions. Enumerate the steps and use concatenation, and one may
convert any program into a giant number, and each program will be uniquely
numbered. The number is an integer and thus an element of a countably infinite
set. Therefore, all programs exist, just as any number we can think of exists.
Unfortunately for that argument, non-determinism, concurrency and parallelism
issues interfere with the pure, single-threaded algorithm. Some problems are
NP. Sometimes choices have to be made which are optimal for one set of inputs
and an anti-pattern for another set. We can also easily imagine a program that
writes a future step as it's executing.
The other issue that I noted at the outset is that software, hardware, and
firmware are all, in a very real sense, interchangeable. Any program could be
replicated with hardware. And indeed, for a dedicated embedded device, it may
be better to avoid software. Our power grid is at risk of cyberattack because
control devices have an OS and software in order to facilitate remote
monitoring and, very important, control. Note, I'm not second-guessing the
engineers by mentioning the point. The inconvenience of replacing software with
hardware is that then the device does only one thing and in one way. Still, if
hardware and software are interchangeable, it makes problematic the assertion
that software patents are bad but hardware patents acceptable.
In mathematics, where one "gets" is based on the assumptions, i.e., where one
starts. The patent system has been fitfully adapted from the steam age into the
Information Age. 20 years seems too long. Innovation is perverted from progress
to a search for the patentable. Patents don't always work out as well for the
inventors as advertised. Mutual Assured Destruction is as fragile a
preventative against destructive strife as were the alliances of 1910 Europe.
I appreciate the effort you put into writing this and I apologize that by
disengaging after reading the first sentence and footnote I may have missed the
convincing argument that transcends my comments on semantics. I'm thinking that
liberating software from the patent system isn't enough and that to focus on
that implicitly endorses our current patent system. We must choose a different
starting point.
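The program-numbering idea in the comment above can be sketched concretely. This is an illustrative Python fragment; the byte values are arbitrary placeholders, not real machine code:

```python
# Sketch of the argument: any program is a finite sequence of
# instructions drawn from a finite set, so it can be read as one
# (very large) integer, and the mapping can be made one-to-one.

def program_to_number(machine_code: bytes) -> int:
    # Concatenate the instruction bytes in base 256.  A leading 1
    # preserves leading zero bytes, keeping the encoding invertible.
    n = 1
    for byte in machine_code:
        n = n * 256 + byte
    return n

def number_to_program(n: int) -> bytes:
    # Invert the encoding: strip the leading 1 and re-emit the bytes.
    out = []
    while n > 1:
        n, byte = divmod(n, 256)
        out.append(byte)
    return bytes(reversed(out))

code = bytes([0x48, 0x89, 0xE5, 0xC3])  # placeholder "program"
n = program_to_number(code)
assert number_to_program(n) == code  # every program gets a unique number
```

Each distinct byte sequence maps to a distinct integer, which is the comment's point that programs form a countable set; it says nothing about the later objections (non-determinism, concurrency), which concern behaviour rather than encoding.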


Authored by: Anonymous on Sunday, October 14 2012 @ 04:21 PM EDT 
Mathematics and abstract thought exceptions to patentability
in the United States are constructs of the courts.
The reasoning for the exception appears to be that patents on
abstract thought and mathematics would grant an exclusive right
that is much too broad.
Making the breadth of the patent the determining factor
seems much more useful.
The Church-Turing thesis roughly states that anything that is
effectively computable can be computed by a Turing machine, or
equivalently by the lambda calculus.
So from the perspective of scope of a patent grant any
general result in software or mathematics should be denied
patent protection because it is much too general.
Any patent that will not allow someone skilled in the art to
read the patent and create the machine should be denied
patentability because it is not clearly interpretable and
much too general.
Any application of a general result to a specific problem
should be denied patentability because the utility of the
general case is already well understood, and nothing
interesting is added.
So for a test I would propose the question: how broad is the
applicability of this patent? If the answer is not obviously a
small, narrow field, there is a problem. If the patent cannot be
worked around by a small modification, I suggest that the patent
is too broad.


Authored by: ikh on Sunday, October 14 2012 @ 06:36 PM EDT 
An excellent article as usual from PolR.
First, some clarifications of the usual misunderstandings that are littered
throughout the comments.
A computer is just the memory and the processor. These two elements are what
make a Turing machine. Keyboard, mouse, network and all other input or output
devices are referred to as peripherals. They work by writing to or reading from
particular memory locations and then signaling the processor that data has been
read or written. Peripherals form a part of a computer system, but they are not
part of the computer itself. They are not part of the computational system that
is a realisation of a Universal Turing Machine.
We don't have to argue that the source code of a computer program is maths or
abstract thought, because the courts have already clearly stated this to be the
case, and patent lawyers have not found a weakness that they feel they can
argue against with source code.
This is why patent lawyers argue that loading a program into a computer makes a
new patentable machine, and why PolR concentrated on the representation of
software in a running computer.
Software, whether in source code or when running on a computer, is an algebra.
We can mathematically prove a one-to-one mapping between source code and the
running representation of the program on a processor. As PolR has said, modern
digital computers use bits (short for binary digits) as the representation. But
nothing in the maths of computation requires this. It is just the most
practical method of realising a computer.
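The one-to-one mapping between source code and the running representation is a formal result; as a mere illustration (not a proof), CPython will happily show that the compiled, "running" form of a program is itself just a string of symbols:

```python
# Illustration: the "running representation" of a program is itself
# a sequence of symbols.  CPython compiles source text to bytecode,
# which we can inspect as a plain bytes object.
source = "def add(a, b):\n    return a + b\n"
module = compile(source, "<example>", "exec")

namespace = {}
exec(module, namespace)          # define add() from the compiled form

bytecode = namespace["add"].__code__.co_code
print(type(bytecode))            # <class 'bytes'>: just numbers
print(list(bytecode)[:6])        # the first few opcode bytes
```

The exact byte values vary between Python versions, which is why none are shown; the point is only that the executable form is a symbolic object like the source text, not something of a different kind.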
There are three different mathematical systems that define the axioms and rules
for computers. PolR favours the lambda calculus, I am more familiar with Turing
machines, and others may prefer the theory of recursive functions. It does not
matter which is used, because they have been mathematically proven to be
equivalent to each other. They all produce machines with identical properties.
I need to say a little about proof here. Proof is used in different ways in
different subjects. E.g. in law, in a civil case in the U.S. and U.K., legal
proof is about evaluating the evidence on the "balance of probabilities". In a
criminal case the evidence is evaluated against the criterion of "beyond a
reasonable doubt". In science (physics, chemistry, biology) the standard of
evidence is even higher. When General Relativity replaced Newtonian gravity
there was just one piece of evidence that physics could not explain: the orbit
of the planet Mercury. General Relativity also made a prediction of
gravitational lensing. General Relativity produced results that predicted both
of these and matched all the results from Newtonian gravity, and therefore it
replaced Newtonian gravity as the model we use for gravity.
Mathematical proof is a very different beast. It is an absolute. If I prove
something mathematically there will, by definition, never be a counter-example.
Andrew Wiles in 1995 proved Fermat's Last Theorem. It says that there are no
positive integers a, b and c satisfying a^n + b^n = c^n for any integer n
greater than 2, where a^n means raising a to the power n. In all of known
history, there has never been a well-formed mathematical proof where evidence
can produce a counter-example.
Now, you cannot mathematically prove a law of physics. In fact, laws of
physics, or of any other science, are really models that are the most
predictive we have been able to create so far. To be able to prove something
mathematically you have to be doing maths.
So what can we say mathematically about software or computer programs? Well, we
can write the specification of a computer program in maths. We can then
implement the specification in the programming language of our choice. Let's
say I implement it in C++. We can then mathematically prove that the
implementation is exactly equivalent to the specification at the source code
level. We can then implement the specification in BASIC and prove that it is
equivalent. We can do the same thing with assembler language. We can also prove
that the assembler language is one-to-one equivalent to the binary
representation of the running program.
If software were not maths, these proofs would not be possible.
Software is mathematics. Sadly, there are highly trained engineers, physicists
and applied mathematicians out there who have not had training in the relevant
areas of mathematics and computer science, and who do not understand this
relationship. This does not change the truth. As anyone with a reasonable
mathematical training will tell you, there are many mathematical results that
are counter-intuitive and go completely against common sense. Common sense has
nothing to do with mathematical results.
Let me give a very simple example of unintuitive maths (others may come up with
better examples). Division by zero in maths is an illegal operation. By common
sense, a lot of people would expect division by zero to produce infinity.
Instead, it is absolutely forbidden in maths. Why? It is simple. If you allow
division by zero you can prove a theorem to be true, and you can prove the same
theorem to be false. Let me make that a little more concrete. Most of us
remember Pythagoras' theorem from school: the square of the hypotenuse is equal
to the sum of the squares of the other two sides. There are literally hundreds
of proofs of this theorem. After all, a theorem means that there is at least
one mathematical proof. If we allow division by zero then it becomes possible
to prove the theorem wrong! This breaks mathematics. That is why the operation
is forbidden.
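The contradiction alluded to here can be made explicit with the classic bogus derivation, in which the single illegal step is a division by zero:

```latex
\begin{align*}
\text{Let } a &= b, \quad a, b \neq 0.\\
a^2 &= ab \\
a^2 - b^2 &= ab - b^2 \\
(a-b)(a+b) &= b(a-b) \\
a + b &= b \qquad \text{(dividing both sides by } a-b = 0\text{: the illegal step)}\\
2b &= b \\
2 &= 1
\end{align*}
```

Every line except the marked one is valid algebra; permitting that one division lets you "prove" 2 = 1, and from there any theorem and its negation, which is exactly why the operation is excluded.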
Software is mathematics, not because I say so, but because it is an algebra.
Because software makes theorems. Because I can make proofs. It is irrelevant
whether I use maths formulas and equations in software. They are not important.
Diamond v. Diehr says that using equations in an invention does not stop it
being a patentable invention. But software itself can never be patentable, nor
can software alone ever infringe a patent.
This leads on to the questions asked by the Court of Appeals.
a. What test should the court adopt to determine whether a computer-implemented
invention is a patent ineligible "abstract idea"; and when, if ever,
does the presence of a computer in a claim lend patent eligibility to an
otherwise patent-ineligible idea?
It's simple. In Diamond v. Diehr a computer controlled the process of curing
rubber. Ignore how the computer does what it does, but not what it does. Allow
the computer to be substituted by a human doing what the computer does. Do you
still have an invention? In Diamond v. Diehr, if the computer were replaced by
a human making the calculations, you would still have a new process for curing
rubber. That is reasonably patentable. Does it patent the computer program? No!
I could use exactly the same program in a simulator of rubber curing and it
should not infringe. The computer program is not protected. The mathematical
equation used to decide how long to cure the rubber is not protected. Only the
whole process of curing rubber is protected.
The second question asked by the Court of Appeals was:
b. In assessing patent eligibility under 35 U.S.C. § 101 of a
computer-implemented invention, should it matter whether the invention is
claimed as a method, system, or storage medium; and should such claims at times
be considered equivalent for § 101 purposes?
Absolutely not. The Supreme Court has ruled time and time again that the
formulation of the claim, the words used, should be irrelevant. All such claims
should be equivalent.
/ikh


Authored by: jjon on Sunday, October 14 2012 @ 09:49 PM EDT 
The article briefly mentioned printing presses, and that got me thinking:
computer hardware should be treated like printing presses. I.e.:
You can patent an improved printing press; you should be able to patent
improved computer hardware.
You can copyright (but not patent) a book that's printed with a printing press;
you should be able to copyright (but not patent) a computer program that runs
on a computer.
You can't patent "a printing press configured to print a specific type of
book", since loading the printing plates into the press clearly doesn't make a
different machine for patent purposes. So you shouldn't be able to patent "a
computer configured to run a specific algorithm", since it clearly doesn't make
a different machine.
Question: Given the current patent law, as used for software patents, why CAN'T
I patent "a printing press configured to print <some new book>"? Is this a way
to show the absurdity of the current law?


Authored by: webster on Monday, October 15 2012 @ 12:16 AM EDT 
The computer, since it is electrical, can basically only recognize current and
no current. It can therefore only read numbers: binary numbers that reflect its
binary nature, on and off. There is an on and off switch on the outside, but
multitudes more on the inside. They go to things like disk drives, speakers,
ports, peripherals, accessories, etc.
To make it all work one has to give the computer instructions, but the computer
can only read the numbers. Numbers that correspond to hardware in the computer
and set circuits to do work are translated to words that can be used to
manipulate the computer. That becomes programming language. You have to use
these words in a specified way, and you can't make anything up unless you
define it to the computer with words it already knows or the right binary
number.
So instructing a computer is a specialized "language" which underneath has to
be numbers and math.
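The point that the "words" sit on top of binary numbers can be shown in one line of code: every character a programmer types is, underneath, a pattern of current/no-current.

```python
# Illustration: each character of a command is, underneath, an
# 8-bit pattern of current/no-current that the machine reads.
for ch in "Hi":
    print(ch, ord(ch), format(ord(ch), "08b"))
# prints:
# H 72 01001000
# i 105 01101001
```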
A computer can do everything you tell it to do. A patent on speech is patenting
an abstraction. Expressions of ideas and writings enjoy copyright protection.
Imagine if there were patents on poems of love.
Could I send thee messages on the wings of a dove?


Authored by: cricketjeff on Monday, October 15 2012 @ 07:29 AM EDT 
The "software is maths" argument is actually too deep; the patents
should have been rejected simply because they did not describe anything novel.
Computers are general-purpose machines; they are supposed to be able to do
anything physically within their compass. I can't patent undoing crosshead
screws with a flathead screwdriver that happens to fit. If you can find a way
to make a computer produce toffee apples without adding anything to its normal
peripherals, that would be patentable, but sorting or processing data is what
it was designed to do!

There is nothing in life that doesn't look better after a good cup of tea.


Authored by: Anonymous on Monday, October 15 2012 @ 11:08 AM EDT 
If the universe is a simulated reality, the question of patentability via
this test becomes muddled.
Background: It is possible that the universe
is merely a simulation generated by some extremely complex computing force. Our
being simulated is difficult to test, but is nonetheless a possibility which
cannot be ruled out, and which has implications for the question "what is math".
By the reasoning in this article, all products of the simulation are expressions
of some mathematical framework. In what cases are patents valid for objects
which, in the context of the simulation are nonmathematical, but from outside
the simulation are math?
You might argue that descriptions within the
simulation may hold to objects without the simulation, and therefore they are
described by mathematics rather than deduced. It is possible, however,
that descriptions within the simulation are meaningless when applied outside of
the simulation. If the laws of physics significantly differ outside the
simulation visavis our own, one might conclude that our description describes
a purely mathematical object with no "real world" equivalent.
We might
find, rather, that our patents hold only within the confines of our own
universe. This makes some sense, but begs the question: what if the description
provided by the patent holds outside of our own universe? Does the fact that the
idea was generated within the simulation somehow invalidate its patentability
without? Perhaps the patent should hold everywhere it is valid. But how do you
distinguish between the patent being used within the model, and the patent being
used by some outside agent who interacts with the model?
It seems that one
possible test would be whether the description generates the object, or
merely describes it. That is, despite the fact that an object is generated by
some mathematical process, manipulation of that object by subsequent
mathematical process may still be patentable. If the object, and therefore its
manipulation, hold no meaning outside of the mathematical structure, then its
patentability is moot outside of said structure. Within the simulation, however,
the manipulation could be just as "real" as the operation of a cotton gin, and
may therefore be patentable.
So if one piece of software defines a
symbol, and another manipulates it, where does this leave us? And how do we
define where one algorithm ends and another begins? For that matter, do our own
patents hold in our own simulations when those simulations become sufficiently
complex that the patent might be implemented within? [ Reply to This  # ]


Authored by: Anonymous on Monday, October 15 2012 @ 11:33 AM EDT 
This rolls around every few months. What's the point?
Mathematics is used as the building blocks in a lot of different fields
(including music).
That doesn't make all those fields "mathematics".


Authored by: Anonymous on Monday, October 15 2012 @ 02:54 PM EDT 
...you might end up getting mathematics patented.
In fact it looks like that has already happened for "hard" math:
Does not compute: court says only hard math is patentable


Authored by: Anonymous on Monday, October 15 2012 @ 05:01 PM EDT 
A procedure can manipulate symbols without being mathematics. To be
mathematics, the manipulations have to be selected from a set that is defined
clearly enough that they can be carried out identically by different people
following the same instructions. The rules must be unambiguous.
Law and art can involve manipulation of abstract symbols, but they're not
math.


Authored by: Maot on Monday, October 15 2012 @ 07:55 PM EDT 
If software is maths, is maths software? Is software just a proper subset of
maths? I believe not.
I'm not disagreeing with the view presented here I just believe there are other,
equally valid views. Some may be based on intangibles but are not invalid views.
All viewpoints should be able to coexist.
Aside from trying to fit a taxonomy in order to apply a silly law, or for the
purposes of pure computer science or pure math research, who normally cares?
Yes, using the perspective that software is just maths can be helpful in some
circumstances; in others, such a view does not help.
I've made a living as a software engineer (or teacher of software engineering)
now for many years.
I am not, never have been, and never will consider myself a mathematician. I
know plenty of great mathematicians who would never consider themselves
programmers or software engineers. Can they write software? Some of them! Can I
do math? Sure; however, why bother? That's what my computer is good at :)
So from this simple perspective, software is not math. It's a different
discipline done by different people with different aptitudes. Math is done by
mathematicians, software is done by software engineers, or if you're unlucky,
programmers :D
I'm not arguing that all software can't inherently be represented as maths,
broken down into its subcomponents, which is the view presented in this
article.
Sure, it can; however, to the layman and also to the professional, this
shouldn't matter (at least not often).
Nor should it matter to the lawyer; like I say, in this context we're
discussing a silly law that can be interpreted many ways by many experts.
Next: what does software do? Does all software solve mathematical problems? Not
as far as the user is concerned and often, not as far as the programmer is
concerned. We build abstractions on the shoulders of giants. With the
abstractions sufficiently far removed from the maths, it really is no longer
maths. The sum is greater than the parts.
That's the value we add.
Are we just big bags of minerals and water? No.
When we build a machine with a consciousness (and it's a matter of when, not if,
in my opinion) is that maths? No, of course not, no more than you or I are just
maths.
How we define and draw the line between software and maths is always going to
be open to interpretation, a grey area; however, I believe at some point
something becomes more than just maths and becomes software.


Authored by: Anonymous on Monday, October 15 2012 @ 09:26 PM EDT 
If you can copyright it, you can't patent it.
The problem we face is that legal boundaries and definitions are fluid and can
be pushed. And where those boundaries and definitions are consistently pushed in
only one direction over time they tend to move.
It isn't enough to have a clear logical argument that everyone agrees makes
sense. That clear argument will be twisted into a pretzel over time by the
pressure. Even clear, simple language like "Software as such cannot be
patented" can be twisted over time to mean the exact opposite. What hope then
does your more complicated quasi-philosophical argument have of withstanding
such pressure?
The ONLY way to win a long-term victory here is to create a boundary that
CANNOT move because it will be pushed from both sides. All it takes to do this
is the bedding in of a very simple principle: things which can be copyrighted
cannot be patented.
If this is established in law then courts cannot give in to the pressure to
patent software without immediately invalidating copyright on that same type of
software.
People are going to want to push on both sides of that boundary. People will in
fact be reluctant to push that boundary at all, as in doing so they may just
hurt themselves.
That is the only way to win here: to turn the enemy's strength against himself.

 Makes zero sense  Authored by: Anonymous on Monday, October 15 2012 @ 11:06 PM EDT

Authored by: celtic_hackr on Monday, October 15 2012 @ 10:06 PM EDT 
To me it's obvious.
Software is to computers what a perforated roll of paper is to a player piano.
The persons who invented the player piano and the computer were entitled to
patent protection. The people who make the "sheet music
program"/"computer program" which the player piano/computer plays are entitled
to copyright protection.
End of story.
You can't patent a work of art. That's all a computer program is: a work of
multimedia, interactive art, normally reduced to an electronic digital
representation. But it can just as easily be reduced to a perforated roll of
paper fed into a specific reading device, or a stack of perforated cardstock.


Authored by: Anonymous on Monday, October 15 2012 @ 11:22 PM EDT 
Has anyone, anywhere, ever experimented with interfacing an ordinary computer
to an electric piano keyboard for the purpose of obtaining character inputs
(particularly commands and source code)? I envisage a courtroom demonstration
where a computer-illiterate musician is handed an original score and is able to
play it with sufficient accuracy that the result is plausibly music, but has
the side effect of programming the computer to do something obviously
intentional (display "Hello World!" perhaps). With suitably sparse sampling of
the fairly rich available input, it ought to be possible to embed the
programming activity in the music, not so much hidden as an extra layer of
functionality available to more sophisticated interpretations. It may be a very
inefficient programming mechanism, but I see no loss in generality and nothing
to suggest patentability in comparison with any other musical composition where
copyright is the only available protection. Even as a thought experiment it
might have some value, but I would not be surprised if the necessary
ingredients are already available or within easy reach by adaptation.
jjb~49
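The thought experiment above can be sketched in a few lines. The note names and the note-to-character mapping here are entirely invented for illustration; the point is only that a sparse mapping lets a melody carry program text while unmapped notes stay purely musical:

```python
# A toy sketch of the idea: assign characters to certain piano
# notes so that playing a melody also spells out program text.
# The mapping below is invented purely for illustration.
NOTE_TO_CHAR = {
    "C4": "p", "D4": "r", "E4": "i", "F4": "n", "G4": "t",
    "A4": "(", "B4": ")",
}

def melody_to_text(notes):
    # Only mapped notes contribute characters; any others (here C5
    # and E5) are purely musical and are ignored, so the "program"
    # can be embedded sparsely in a richer performance.
    return "".join(NOTE_TO_CHAR.get(n, "") for n in notes)

melody = ["C4", "D4", "E4", "F4", "G4", "C5", "E5", "A4", "B4"]
print(melody_to_text(melody))  # -> print()
```

With enough mapped notes the scheme is fully general (any character sequence can be spelled out), which matches the comment's observation that inefficiency costs nothing in generality.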


Authored by: Anonymous on Monday, October 15 2012 @ 11:29 PM EDT 
Just a suggestion: when writing this sort of piece, you should always come
right out and announce your conclusion right in the beginning. You talk about a
test that you are proposing right in the beginning, but by the time I finish
reading, I am not really sure what the test is. Don't hide the ball; just come
out and tell us what you want to convince us of.
Is it "As long as the procedure manipulates symbols their meanings will never
confer patent eligibility."? Every computer-implemented method manipulates
symbols, right? So what is left as being patentable?
Is it "Activities like input, output and mental steps of reading, writing and
thinking about the meaning of symbols should not confer patent eligibility to a
manipulation of symbols."? If none of these things confer patent eligibility
and manipulation doesn't confer patent eligibility, what confers eligibility?
Are you just arguing against software patents altogether? That isn't really a
test, unless your test is to determine whether something is a
computer-implemented method.
Is it "Data gathering where nothing more is achieved but obtaining the data
which will be manipulated should not confer patent eligibility to a
manipulation of symbols."?
Is it . . .????
This entire piece seems to be about what is not patent eligible. You never talk
about what is a patent-eligible computer-implemented method. In other words,
how do you apply your test (whatever it is) to find that a computer-implemented
method is patentable? The Federal Circuit isn't asking whether any
computer-implemented methods can be patented (this is already well settled).
The Federal Circuit is asking how to determine whether a given
computer-implemented method is patentable. Can you apply your test both ways?


Authored by: dio gratia on Tuesday, October 16 2012 @ 02:46 AM EDT 
The meanings given to the symbols should make no difference. I suggest this as
the line:
As long as the procedure manipulates symbols their meanings will never confer
patent eligibility.
There's this issue of abstracts that goes beyond signals and symbols. For
instance, court systems allow electronic filing, in general allowing PDF or
other document formats. There's a point of philosophy that became prominent
during the middle ages on 'the name is the thing', where you treat the named
object as the object itself.
A file in a particular format (e.g. Word .doc) is not a document. It's software
for a virtual machine to be able to effect the printing of a document. WYSIWYG
editors just happen to allow us to manipulate the result as if it were real, in
the abstract. As for the rest of us, we happily exchange document format files
and treat them as if they were equivalent to printed documents. The definition
of document has been extended to embrace the virtual machine program
representation (see 3:).
We can manage to understand that even in computer storage it's still a symbolic
representation (see 2 b:). Yet we allow patents on symbolic representation
manipulations (i4i v. Microsoft, US Patent 5,787,449) where we equate the
symbolic representation (document file) with "an original or official paper
relied on as the basis, proof, or support of something" (1 b:).
All when those symbolic representations in computer memory are comprised of
signals ("a detectable physical quantity or impulse (as a voltage, current, or
magnetic field strength) by which messages or information can be transmitted").
MPEP
2106 Patent
Subject Matter Eligibility directs us to the distinction as to when signals
are not patentable eligible:
Nonlimiting examples of claims that
are not directed to one of the statutory categories:
i.
transitory forms of signal transmission (for example, a propagating electrical
or electromagnetic signal per se), In re Nuijten, 500 F.3d 1346, 1357, 84 USPQ2d
1495, ___ (Fed. Cir. 2007);
ii. a naturally occurring organism,
Chakrabarty, 447 U.S. at 308;
iii. a human per se, The LeahySmith
America Invents Act (AIA), Public Law 11229, sec. 33, 125 Stat. 284 (September
16, 2011);
iv. a legal contractual agreement between two parties, see In
re Ferguson, 558 F.3d 1359, 1364, 90 USPQ2d 1035, ___ (Fed. Cir. 2009) (cert.
denied);
v. a game defined as a set of rules;
vi. a computer
program per se, Gottschalk v. Benson, 409 U.S. at 72;
vii. a company,
Ferguson, 558 F.3d at 1366; and
viii. a mere arrangement of printed
matter, In re Miller, 418 F.2d 1392, 1396, 164 USPQ 46, ___ (CCPA
1969).
A claim that covers both statutory and nonstatutory
embodiments (under the broadest reasonable interpretation of the claim when read
in light of the specification and in view of one skilled in the art) embraces
subject matter that is not eligible for patent protection and therefore is
directed to nonstatutory subject matter. Such claims fail the first step and
should be rejected under 35 U.S.C. § 101, for at least this reason.
It's this intersection of symbol ("something
that stands for or suggests something else by reason of relationship,
association, convention, or accidental resemblance", 2 b:) and signal that
suggests the key to understanding the limits of symbols in patents. We look at
In re Nuijten, which, despite Judge Linn's minority opinion, found that signals
aren't articles of manufacture (the closest of the four categories of patentable
subject matter described in MPEP 2106, namely Process, Machine,
Manufacture, and Composition of Matter). We have seen various District Court decisions
promoting non-transitory signals to compositions of matter.
We see the word
transitory in the MPEP quote above, and the Nuijten decision depended on non-transitory
to define an article of manufacture. We see that the definition of manufacture in MPEP
2106 ("an article produced from raw or prepared materials by giving to
these materials new forms, qualities, properties, or combinations, whether by
hand-labor or by machinery") is likely broad enough to include changes in
magnetic permeability in storage media, or perhaps even FLASH or other
non-volatile memory not based on magnetic hysteresis.
This suggests that
patentability hinges on imperative versus declarative forms (methods versus
abstractions), which should also track non-transitory versus transitory. We see
software patents embracing methods of producing non-transitory effects
representing signals (symbols), but generally not the abstractions themselves
(Bilski).
For software patents there's this intersection between copyright
and patent articles of manufacture that might be governed by In re Miller, which
cites a reliance on printed matter.
Software per se isn't patentable. Gottschalk v. Benson addresses for the most
part the four categories of patentable subject matter under § 101 and shows
that transformation of signals by itself is abstract: the transitory or
declarative side, versus the imperative or method side. The Supreme Court also
throws the issue over the fence to Congress:
If these programs
are to be patentable,^{[6]} considerable problems are raised which only
committees of Congress can manage, for broad powers of investigation are needed,
including hearings which canvass the wide variety of views which those operating
in this field entertain. The technological problems tendered in the many briefs
before us^{[7]} indicate to us that considered action by the Congress is
needed.
That ambiguity has never been resolved by Congress.
It
looks like, without Congressional action, the only chink in the armor for software
patents is either strong enforcement of transitory versus non-transitory
transformation or the printed matter doctrine.
See Semiotics
101: Taking the Printed Matter Doctrine Seriously, Kevin Emerson Collins,
85 Indiana Law Journal 1379 (2010):
The printed matter doctrine is
a branch of the section 101 doctrine of patent eligibility that, among other
things, prevents the patenting of technical texts and diagrams. The contemporary
formulation of the doctrine is highly problematic. It borders on incoherency in
many of its applications, and it lacks any recognized grounding in the Patent
Act. Yet, despite its shortcomings, courts have not abandoned the printed matter
doctrine, likely because the core applications of the doctrine place limits on
the reach of the patent regime that are widely viewed as both intuitively
“correct” and normatively desirable. Instead of abandoning the doctrine, courts
have marginalized it. They have retained the substantive effects of the printed
matter doctrine but avoided analyzing it whenever possible. This Article adopts
a different approach: it takes the printed matter doctrine seriously. It
reinterprets the printed matter doctrine as the sign doctrine, revealing both
the conceptual coherence hidden in the doctrine’s historical applications and
the doctrine’s as-of-yet unnoticed statutory grounding. The key to this
reconceptualization is recognizing that the printed matter doctrine is in effect
already based on semiotic principles. The printed matter doctrine purports to be
about information, but it is actually about signs. It purports to curtail the
patenting of worldly artifacts, but it actually curbs the reach of patent
protection into mental representations in the human mind. To support these
arguments, this Article offers a course in “Semiotics 101”: a semiotics primer
strategically targeted on the principles that prove to be relevant to the
section 101 doctrine of patent eligibility. This Article also examines one
unexpected consequence of taking the printed matter doctrine seriously and
adopting a semiotic framework. It reconsiders the patentability of a class of
software inventions which are defined here as “computer models.” As a matter of
semiotic logic, the routine patentability of newly invented computer models
under the contemporary patent eligibility doctrine cannot be reconciled with the
categorical unpatentability of mechanical measuring devices with new labels
under the printed matter doctrine.
The actual article can be found
here
(PDF, 972 KB). Its gist is that we don't disambiguate types of
software patents sufficiently to avoid those dealing purely in signs (symbols,
semiotics). Fixing that would likewise appear to fall under the remit of
Congress. In the meantime we get both patentees (and their practitioners) and
courts relying on semantic distinctions that can't always be supported,
compounded by a Patent Office more than willing to let the courts sort it all
out.
The question is whether the Supreme Court or the CAFC can provide a decision
that limits the excesses. What is needed immediately is guidance preventing
contrasting court decisions based on conflicting interpretations.
I'd
personally like to see disk storage of signals held to be transitory, though
that's not likely to happen. There's also a collision with the printed matter
doctrine that perhaps should occur. Storage is the media upon which a narrative is
written.
There's this distinction in copyright between authorship and
copies that storage media sneak under the tent skirt into patents by equating
manufacture with fixation.
And this says pure software ought to pick one:
copyright or patents. The issue is that it may be easier to avoid copyright than
a patent, because copyright protection does not extend to the underlying idea.
Well, guess what: it doesn't in patents either, other than in process patents, and
certainly not as far as the domain of printed matter.
What's needed, from the looks of things, is a
greater confluence of legal and technological expertise in making legal
decisions. What we get is semantic analysis without
understanding the semiotics (signs). The slightest language shift leads one
audience or another astray.
In the meantime it may be sufficient to make
the distinction between process (imperative) and math (declarative), while noting
that programs can contain both. It sort of makes you wonder if you could hang up a
patent claim based on the software source language. You might make a plaintiff
prove what happens by expert witness. We've already seen expansive
interpretations there.
Then again, there are those of us who question whether
software patents address the Constitutional purpose, which doesn't appear to provide
a short cut either.
In the meantime it might be safe to say that as long as the
manipulation of symbols doesn't produce an actual machine, a manufacture of one
or more articles, or a composition of matter, symbols themselves don't confer
patent eligibility.
I enjoyed reading the article.


Authored by: Anonymous on Tuesday, October 16 2012 @ 04:53 AM EDT 
When is the deadline to submit amicus briefs to the Federal
Circuit?


Authored by: Anonymous on Tuesday, October 16 2012 @ 08:04 AM EDT 
In the sentence "If you go read these definitions in textbooks from
competent authors you will find they are all elements of the language of
mathematics.", you should drop the word "go" since you have two
conjugated verbs side-by-side.


Authored by: Anonymous on Tuesday, October 16 2012 @ 08:49 AM EDT 
Sounds like an excuse for more frivolous lawsuits: be a lawyer, win the lottery
(or several...).
One would wonder how many inventions were being built for hundreds of years before
the patent office existed (Da Vinci???)


Authored by: Anonymous on Tuesday, October 16 2012 @ 11:31 AM EDT 
For precision:
E = mc²
calculates the energy of the "rest mass": the energy of a particle at
rest. This is useful in calculating the energy liberated when a particle and an
instance of its antiparticle annihilate each other.
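The rest-energy arithmetic above can be checked numerically. A minimal sketch, using standard CODATA constants and assuming an electron/positron pair as the example particles:

```python
# Rest-mass energy via E = m * c**2, and the energy liberated when a
# particle and its antiparticle (here: electron and positron) annihilate.

c = 299_792_458.0           # speed of light in m/s (exact by definition)
m_e = 9.109_383_7015e-31    # electron rest mass in kg (CODATA value)
MEV_PER_JOULE = 1.0 / 1.602_176_634e-13  # 1 MeV = 1.602...e-13 J

# Rest energy of a single electron at rest.
E_rest_MeV = m_e * c**2 * MEV_PER_JOULE

# An electron and a positron at rest annihilate into photons,
# liberating the rest energy of both particles.
E_annihilation_MeV = 2 * E_rest_MeV

print(f"Electron rest energy:  {E_rest_MeV:.3f} MeV")       # ~0.511 MeV
print(f"Annihilation energy:   {E_annihilation_MeV:.3f} MeV")  # ~1.022 MeV
```

This reproduces the familiar 511 keV figure for the electron, and twice that for the annihilation of the pair at rest.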


Authored by: Anonymous on Wednesday, October 17 2012 @ 09:03 AM EDT 
I believe you are correct in showing that software is
mathematics. Applying the rule of syllogism to the generalization that
"mathematics is abstract", one finds that software algorithms are abstract
ideas and therefore should not be patentable.
However, the same facts and concepts that you use to show that
software is mathematics can be used to show directly that
software is abstract. You could then go directly to saying that software should
not be patentable.
By doing that you eliminate the syllogism above,
simplifying the argument. You also short-circuit all the fallacious
arguments (tedious and time-consuming to refute) in prior replies about
the precise boundaries of mathematics.
Why is this not a way to strengthen the argument?


Authored by: Anonymous on Wednesday, October 17 2012 @ 02:09 PM EDT 
Recommendation One: Every patent must identify how future infringements are
detected and characterized, as to the requested monopoly.
Recommendation Two: No activity that eludes all the patent's stated detection
methods is liable for infringing that patent.
Recommendation Three: All detection methods and characterizations of
infringements are restricted to direct observations and
physical measurements (e.g., voltmeter, spectrometer).
The patents I have been involved with are shockingly
murky. Even the patent attorneys decided one granted
patent has two completely antithetical interpretations.
The adversary was banking on this flaw in their patent
to win in court.
It is not sufficient to claim what the patent covers. The
patent should clearly state, in legal, operational steps,
precisely how the world will determine infringement in any
future legal proceeding.




