GROKLAW
When you want to know more...
The Philosophical Perspective | 756 comments | Create New Account
Comments belong to whoever posts them. Please notify us of inappropriate comments.
Emergent Properties and Patents
Authored by: Anonymous on Friday, July 20 2012 @ 09:51 PM EDT
Let's suppose that Searle is right, and that the process he describes (and
implies) is successfully embodied in software. At that point we would have a
device that to all external appearance "understands" Chinese. The
(appearance of) understanding is an emergent property of the device. Should
this be patentable?

I raise this question because an emergent property argument for software patents
is a real possibility. I'm sure we'd all agree that the "emergent
property" of displaying a line as a result of moving a mouse (think
graphics application) is not worthy of a patent. I doubt that anyone will try
to argue forward from this to the Chinese device and identify a magic transition
point. I'm concerned about arguing backward from the device to claim that a
"simpler" emergent property is patentable. And then a still simpler
one. And then....

I suggest that this might be worth exploring with an eye to heading such
arguments off at the pass.


The Philosophical Perspective
Authored by: Anonymous on Friday, July 20 2012 @ 11:07 PM EDT
The Stanford page lists a few kinds of "critics", some of
whom take rather bizarre-sounding positions. My favorite is
the systems view: the man in the room doesn't understand
Chinese, but the "system" consisting of the man + the rules
he follows, does understand Chinese for all intents and
purposes. Upon reflection, this is not far from my own
view.
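
The systems view can be put in concrete terms with a toy sketch (the rule book and symbols are invented for illustration): no individual part of the program "understands" anything, yet the whole produces appropriate replies.

```python
# A toy "Chinese room": the rule book is a plain lookup table.
# Neither the table nor the loop that consults it "understands" the
# symbols it shuffles, yet the system as a whole replies appropriately.
# (Rules and symbols are invented for illustration.)

RULE_BOOK = {
    "你好": "你好！",            # a greeting gets a greeting back
    "你会说中文吗？": "会。",     # "Do you speak Chinese?" -> "Yes."
}

def room(symbols: str) -> str:
    """Follow the rule book mechanically; no step interprets meaning."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(room("你好"))
```

From the outside, the replies are all an interlocutor ever sees; on the systems view, that is where the "understanding" resides.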

The Stanford page omits the most fundamental criticism: it
takes for granted that the Chinese interlocutors understand
Chinese, and it assumes that this understanding is
fundamentally different from what the man in the room does.

Of course, the process used by the Chinese speakers is
different in some ways: I presume that the Chinese
speakers can produce results faster - that's evidence that
they're not consciously repeating to themselves the sorts of
rules that the man in the room would have to follow.
They've learned an unconscious, possibly different
algorithm, one that makes better use of the massively
parallel human brain.

But just because there may be more than one algorithm, or
because one algorithm is better suited to a particular machine
(brain) architecture, doesn't by itself justify the
assertion that the difference is particularly important. I
just read the article "what color are your bits" (find the
link in the comment with that title), and what Searle is
doing is assigning Colour to what the Chinese speakers do vs
what the man in the room does. If you look at the output,
they're indistinguishable. To my mind, that makes the
processes fundamentally equivalent. Searle is saying the
opposite: that the Chinese speakers have "understanding" and the man
in the room does not. But that's circular; all he's shown
(or I should say illustrated, since it's only a thought
experiment) is that there are some differences in the
processes, and he's choosing to define "understanding" on
the basis of those differences. That seems backwards to me,
because he hasn't shown that the differences are important
or fundamental. If we can't recognize understanding by
"looking at the bits" (in the sense described in the article
on Colour), maybe we should consider that "understanding"
isn't real.


The Philosophical Perspective
Authored by: PolR on Saturday, July 21 2012 @ 12:27 AM EDT
My view is more pragmatic. No hardware component in a computer is able to act on
the semantics of the bits; the components can only manipulate raw, uninterpreted
bits. This forces the programmer to write mathematical algorithms in the
mathematicians' sense of the word, since otherwise the program is not machine
executable.
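
A small sketch of that constraint (the byte pattern is arbitrary): the same raw bits yield different "meanings" depending on the decoding the program chooses, while a hardware-level operation like XOR acts on the bits alone.

```python
import struct

# Four raw bytes; the hardware attaches no meaning to them.
raw = b"\x00\x00\x80\x3f"

# Only the program's chosen decoding gives the bits a "meaning":
as_int = struct.unpack("<i", raw)[0]    # read as a little-endian integer
as_float = struct.unpack("<f", raw)[0]  # read as a 32-bit float

# A hardware-level operation (XOR with 0xFF) acts on the raw bits alone;
# it neither knows nor cares which interpretation we intend.
flipped = bytes(b ^ 0xFF for b in raw)

print(as_int, as_float)
```

The semantics live entirely in the `unpack` calls the programmer wrote, not in anything the machine does, which is the sense in which the program must be a mathematical algorithm over uninterpreted bits.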

I must confess I didn't put much thought into the human brain chemistry aspect of
the problem. I only thought of it in practical terms. We can teach a human a
procedure which requires them to act on the meanings of the data they are given.
Judges and juries routinely do that in a courtroom. Legal tests and jury
instruction forms are not mathematical algorithms. From this perspective there is
a real difference between the steps of a computer program and the mental steps of
a human. Programmers are bound by constraints which don't affect those who teach
procedures to human beings.

Taking human brain chemistry into consideration doesn't change the programmer's
constraints one bit. When programming, he still must abstract the meanings of the
bits away in order to get a machine-executable procedure, because the machine
still can't act on the meanings of the data. We can still teach a human to take
meanings into account. The only difference lies in how we interpret this
contrast. Now we have a credible way to argue that inside the human brain there
is a mathematical algorithm which seems non-algorithmic when viewed from the
outside. This is an argument that the legal distinction between mental steps and
computer steps is arbitrary.


Groklaw © Copyright 2003-2013 Pamela Jones.
All trademarks and copyrights on this page are owned by their respective owners.
Comments are owned by the individual posters.

PJ's articles are licensed under a Creative Commons License. ( Details )