I doubt it
Authored by: Anonymous on Friday, January 04 2013 @ 03:59 PM EST
> I'm not a mathematician, as previously stated, but I'd disagree since I think you're taking what PolR said out of context. Taken on their own they'd always be pure maths. But at least you now acknowledge that adding two numbers is abstract, which is in contrast to your earlier position.
I haven't changed my position. My position is still that when software (which was the context, after all) adds two numbers, that is not pure math -- it is applied to something. Now, it may be applied to something completely silly, or the application may be in aid of further pure math, such as searching for primes, or it may be applied to something physical, like signal processing, but the addition is a process that is happening, with, e.g., voltages changing in the real world.

But note that this is the software when executing. The program -- the static text we stare at all day long -- is obviously an abstraction of what we are going to make happen. But I can write such an abstraction for a digital circuit, or even sometimes an analog one, press a few buttons, and wait a few weeks for my IC to come back. It's no different to me.

> I'd argue that a computer is not doing applied maths. It doesn't assign meanings to the numbers that it manipulates.
And neither does an amplifier assign meanings to the signal it multiplies. The only difference is that one is analog and the other is digital. From my perspective, the difference between an analog amplifier and a digital multiply, whether done directly in hardware or done in hardware as directed by a program, is that one can suffer from calibration error and various types of analog noise, and the other suffers from quantization error.
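To make that concrete, here is a small sketch (mine, not part of the original exchange; the Q8.8 fixed-point format, the gain of 1.37, and the sample input are illustrative assumptions) showing the quantization error a digital multiply introduces relative to the ideal result:

    /* Sketch: the same "multiply by a gain" done with full precision
     * and with a quantized (fixed-point) representation, to show the
     * quantization error a digital implementation introduces.
     * The Q8.8 format and the sample values are illustrative only. */
    #include <stdio.h>
    #include <stdint.h>

    /* Convert a real value to Q8.8 fixed point (8 integer, 8 fractional bits). */
    static int16_t to_q8_8(double x)   { return (int16_t)(x * 256.0 + (x >= 0 ? 0.5 : -0.5)); }

    int main(void)
    {
        double gain  = 1.37;    /* an analog amplifier would apply this, plus noise and drift */
        double input = 0.123;

        double ideal = gain * input;

        /* digital path: quantize both operands, multiply, rescale */
        int16_t g = to_q8_8(gain);
        int16_t x = to_q8_8(input);
        int32_t y = (int32_t)g * (int32_t)x;   /* Q8.8 * Q8.8 gives a Q16.16 intermediate */
        double digital = y / 65536.0;

        printf("ideal   = %.6f\n", ideal);
        printf("digital = %.6f (quantization error %.6f)\n", digital, digital - ideal);
        return 0;
    }

An analog amplifier implementing the same gain would instead drift with temperature and add thermal noise, which is the calibration/noise side of the same trade-off.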
> It just adds, subtracts, stores, etc. It doesn't look at a formula and say that pressure = force/area, which is the result of a mass under acceleration exerting a force which is applied over a certain area, which can be measured in N/m².
And the same is true of any analog circuit.
> When a computer is computing the next highest prime to be discovered, is that applied maths or is it pure maths? From what you're saying so far I think you think it is applied maths.
Yes, applied math (numerical methods) is being used to discover something about our universe.
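As a toy illustration of that kind of search (my own sketch; real prime hunts use far more specialized tests such as Lucas-Lehmer, and the starting value here is arbitrary), a trial-division routine that finds the next prime after a given number looks like this:

    /* Toy illustration of "searching for primes" as a computation:
     * find the next prime after n by trial division. Record-prime
     * searches use specialized tests (e.g. Lucas-Lehmer); this only
     * shows the kind of algorithm being applied. */
    #include <stdio.h>
    #include <stdint.h>

    static int is_prime(uint64_t n)
    {
        if (n < 2)      return 0;
        if (n % 2 == 0) return n == 2;
        for (uint64_t d = 3; d * d <= n; d += 2)
            if (n % d == 0)
                return 0;
        return 1;
    }

    static uint64_t next_prime(uint64_t n)
    {
        do { n++; } while (!is_prime(n));
        return n;
    }

    int main(void)
    {
        /* prints 1000003 */
        printf("next prime after 1000000 is %llu\n",
               (unsigned long long)next_prime(1000000));
        return 0;
    }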
> Congratulations on hiding your secret from your employer ;-)
Thanks!
> I apologise for being a little insulting earlier; I could have worded things better. So you understand how computers work down to the nitty gritty, yet you're still unwilling to accept that software is just mathematics and that we shouldn't argue that it is therefore unpatentable. C'est la vie.
Apology accepted. Sometimes people get indignant when others cannot see what they perceive as the truth. Then insults happen. I'm getting used to it, and getting better at expressing myself so that the insults are milder than they used to be. As I have expressed before, most software patents are utter garbage, and I don't really have a problem with the idea that we should make software unpatentable -- I think it could be a huge net gain for the economy.

As to what I accept, I can easily accept that software is just mathematics. But then, by my logic (and of course, YMMV) most hardware becomes "just mathematics" as well. A lot of commentators here argue that hardware is different because atoms are involved, but that's really neither here nor there, especially when considering processes.

I actually have a lot of sympathy for the judges who are grappling with trying to treat software as math, because the edge cases are extremely difficult. I really don't see it as clear-cut as PolR and others make it out to be.

> I was trying to point out how utterly daft the doctrine of equivalents is when applied to software patents. I think that lawyers use it when it suits them but ignore it when it doesn't.
Some applications of it are. It's lawyering that will make your head spin, but that's not a really good application of it in my book. I think it only works that way when the judges get confused, and it's easy to see how that happens in this context.
> Now, as both a processor and software developer, you can probably help me straighten my thoughts. What comes first? The hardware or the software (i.e. the algorithms you need your hardware to run)?
The algorithms come first, but it's an iterative process, and the actual meandering path taken depends on how much weight you give to different goals (size, clock speed, power consumption, execution speed of various benchmarks). A lot of embedded processors are application-specific, and hardware/software co-design is often used to help with the tradeoffs.

For stock processors, the iteration happens over _years_. I don't know if you remember the RISC vs. CISC wars, but we were always promised that RISC would win hands-down. There are a lot of reasons why that didn't happen, but the actual instruction decode for an X86 processor, for example, is a tiny portion of the chip, and decodes the instruction into RISC-like micro-operations. Anyway, someone developing an X86 processor these days has a lot of preexisting code (algorithms) to benchmark and run, and a lot of design decisions to make. The same thing is true, at a higher level, if you are building a brand new CPU architecture -- decide what kind of applications you want to support, find relevant benchmarks, on the web, from EEMBC, wherever, and try to build a useful machine.

> Has that always been the case ever since the first computing engine was designed?
I think so, for the most part. But it's very similar to pure software development, where the best developers alternate between building reusable tools (libraries, compilers, etc.) and building things that use those tools. Often, a non-reusable tool is built first, then there is an insight about how to generalize it. Lather, rinse, repeat.

For a pure math :-) perspective, look at Lisp. McCarthy conceived of a Turing-complete language as a tool for humans to communicate precise program information, and was reportedly quite surprised when Steve Russell made it run on real hardware. The same thing happens at lower levels. Lisp Machines Inc developed machines to run Lisp. Western Digital actually reduced the UCSD Pascal P-Machine to a chip. But due to Moore's law and the economics of semiconductor manufacturing, machines get more general purpose every year, and it's hard to successfully build and sell a processor that supports a non-mainstream architecture.

> I may have just changed my mind on which is more complex to make, but I'm not sure, because, after working things out in my head, if I were making a processor I'd be working on the algorithms first.
If you're building a new processor, that's exactly what you would do. Of course, as in anything else, an experienced engineer might make one that is more future proof than a junior engineer, because he can extrapolate from what he is doing today to what he might need to do next year. And after you figured out your algorithms, you might even use a specialized toolkit to help you partition your design into hardware and software, allowing you to trade off speed vs. size vs. power consumption vs. reprogrammability.
> I'm still leaning toward the hardware, because you'd need the know-how of the maths+engineering for the hardware.
OTOH, once you learn a bit about hardware, you might form the opinion, as I have, that it's pretty much all the same, and that to draw an arbitrary distinction about what is patentable from how you decided to partition your system doesn't really make all that much sense :-)
> I think PolR means that the output data wasn't created solely by the programmer, just as the (unbound) book wasn't created solely by the novel writer. The new invention is the input data, not the output data, just as the novel is the new invention, not the (unbound) book.
But in the argument about whether or not software is patentable, there is no room for the discussion of the results of running the software. This is one of the reasons why this analogy is so confusing.
> Without the book in which the novel is recorded there is no invention to claim. Software patents, written as broadly as they sometimes are, make their claims on the results of running the software, not the actual invention, which is the software itself. I think.
Ah, I see what you're getting at. I view software/hardware as a continuum, but obviously it is not viewed that way legally. You wouldn't want your invention to be disallowed, so you add junk until it gets allowed. And the results are ugly and confusing. Personally, I think the best way to view software is to think of the process. You are not really patenting the book; just one part of a particular process for creating the book. So when a software patent claims the results of running the software, you have to think of that as a patent on the process of the creation of the book, not as a patent on the new physical book itself.
> I'll probably give up on this discussion for now. I'm sure you have already. My brain hurts now.
My brain has been hurting for years. But consider this. There are functions that can be done, essentially identically, in software, or in digital hardware or in analog hardware. (For example, an amplifier multiplies an input by a gain, which might be constant, or might itself be another variable, depending on the amplifier.) If one of those functions is new and non-obvious, why would it be patentable in one or two of those cases and not all of them?
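A minimal sketch of that point (mine, not from the thread): written as software, the variable-gain amplifier is just a multiply per sample, and exactly the same function could instead be an op-amp stage or a multiplier block in an FPGA or ASIC.

    /* The amplifier example as software: output = gain * input.
     * The same function could be an op-amp circuit (analog), a
     * multiplier block in an FPGA/ASIC (digital hardware), or this
     * loop running on a CPU -- the partitioning choice is an
     * engineering trade-off, not a change in what the function is. */
    #include <stddef.h>

    /* Apply a (possibly time-varying) gain to a block of samples. */
    void amplify(const double *in, const double *gain, double *out, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            out[i] = gain[i] * in[i];   /* variable gain: the gain is itself a signal */
    }

Which of those three forms you pick is a partitioning decision driven by cost, speed, and power, not a change in the underlying function.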

From my perspective, PolR appears to be the consummate mathematician. He constructs worlds that are utterly internally consistent, but that don't mesh with the real world. The patent lawyers are consummate human beings (though tending towards the psychopathic side, no doubt). They work with the messy real world with its inconsistent treatment of the same new useful invention being patentable or not depending on how the system is partitioned and they do what humans do -- come up with workarounds. It is -- dare I say it -- engineering. So I am not at all happy with the lawyers who patent silly simple things (but historically that happened on lots of things before it happened on software) but I actually admire the lawyers who can somehow get the courts to view similar things similarly, even if it takes the absurd magical faerie pixie dust incantation of "plus a computer" to do so.

The only problem is that the number of truly new unobvious things in software is minuscule, completely dwarfed by all the junk patents. So that's a baby I'd certainly be willing to throw out with the bath water. I'm just not sure how it can be done logically.

And I certainly don't envy the supremes their task of trying to straighten this all out.

