Authored by: Anonymous on Friday, July 20 2012 @ 09:51 PM EDT
Let's suppose that Searle is right, and that the process he describes (and
implies) is successfully embodied in software. At that point we would have a
device that to all external appearances "understands" Chinese. The
(appearance of) understanding is an emergent property of the device. Should
this be patentable?
I raise this question because an emergent property argument for software patents
is a real possibility. I'm sure we'd all agree that the "emergent
property" of displaying a line as a result of moving a mouse (think
graphics application) is not worthy of a patent. I doubt that anyone will try
to argue forward from this to the Chinese device and identify a magic transition
point. I'm concerned about arguing backward from the device to claim that a
"simpler" emergent property is patentable. And then a still simpler
one. And then....
I suggest that this might be worth exploring with an eye to heading such
arguments off at the pass.
Authored by: Anonymous on Friday, July 20 2012 @ 11:07 PM EDT
The Stanford page lists a few kinds of "critics", some of
whom take rather bizarre-sounding positions. My favorite is
the systems view: the man in the room doesn't understand
Chinese, but the "system" consisting of the man + the rules
he follows, does understand Chinese for all intents and
purposes. Upon reflection, this is not far from my own
view.
The Stanford page omits the most fundamental criticism: it
takes for granted that the Chinese interlocutors understand
Chinese, and it assumes that this understanding is
fundamentally different from what the man in the room does.
Of course, the process used by the Chinese speakers is
different in some ways: I presume that the Chinese
speakers can produce results faster - that's evidence that
they're not consciously repeating to themselves the sorts of
rules that the man in the room would have to follow.
They've learned an unconscious, possibly different
algorithm, one that makes better use of the massively
parallel human brain.
But the fact that there may be more than one algorithm, or
that one algorithm is better suited to a particular machine
(brain) architecture, doesn't by itself justify the
assertion that the difference is particularly important. I
just read the article "what color are your bits" (find the
link in the comment with that title), and what Searle is
doing is assigning Colour to what the Chinese speakers do vs
what the man in the room does. If you look at the output,
they're indistinguishable. To my mind, that makes the
processes fundamentally equivalent. Searle is saying the
opposite: that the Chinese speakers have "understanding" and the man
in the room does not. But that's circular; all he's shown
(or I should say illustrated, since it's only a thought
experiment) is that there are some differences in the
processes, and he's choosing to define "understanding" on
the basis of those differences. That seems backwards to me,
because he hasn't shown that the differences are important
or fundamental. If we can't recognize understanding by
"looking at the bits" (in the sense described in the article
on Colour), maybe we should consider that "understanding"
isn't real.
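The point about indistinguishable outputs can be made concrete. Here's a
minimal sketch (the Fibonacci example and function names are mine, not from
the Colour article): two internally different procedures whose outputs are
bit-for-bit identical, so nothing in the output itself records which process
produced it.

```python
def fib_recursive(n):
    # Follows the defining rules step by step, like the man in the room.
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # A different algorithm, better suited to the machine at hand.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The two processes differ, but their outputs are indistinguishable.
outputs_a = [fib_recursive(n) for n in range(15)]
outputs_b = [fib_iterative(n) for n in range(15)]
assert outputs_a == outputs_b
```

Given only the output list, no examination of the bits can tell you which of
the two procedures ran.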
Authored by: PolR on Saturday, July 21 2012 @ 12:27 AM EDT
My view is more pragmatic. There are no hardware components in a computer able
to act on the semantics of the bits. They can only manipulate raw, uninterpreted
bits. This forces the programmer to write mathematical algorithms in the
mathematicians' sense of the word; otherwise the program is not machine-
executable.
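A small sketch of what "raw uninterpreted bits" means in practice (the byte
values here are an arbitrary illustration of my own): the same four bytes
yield entirely different "meanings" depending on how the program chooses to
decode them, and nothing in the hardware privileges one reading over another.

```python
import struct

# Four raw bytes with no intrinsic meaning of their own.
raw = b"\x42\x28\x00\x00"

# The machine never acts on what the bits "mean"; an interpretation
# is imposed only by how the program decodes them.
as_int = struct.unpack(">i", raw)[0]    # big-endian 32-bit integer
as_float = struct.unpack(">f", raw)[0]  # IEEE-754 single-precision: 42.0
as_text = raw.decode("latin-1")         # the characters 'B', '(' and two NULs

print(as_int, as_float, repr(as_text))
```

The semantics — integer, temperature reading, character data — live in the
programmer's head and in the decoding steps he writes, never in the hardware.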
I must confess I didn't put much thought into the human brain chemistry aspect
of the problem. I only thought of it in practical terms. We can teach a human a
procedure which requires them to act on the meanings of the data they are given.
Judges and juries routinely do that in a courtroom. Legal tests and jury
instruction forms are not mathematical algorithms. From this perspective there
is a real difference between the steps of a computer program and the mental
steps of a human. Programmers are bound by constraints which don't affect those
who teach procedures to human beings.
If we want to take human brain chemistry into consideration, this doesn't
change the constraints on the programmer a bit. When he programs he still must
abstract the meanings of the bits away in order to get a machine-executable
procedure, because the machine still can't act on the meanings of the data. We
can still teach a human to take meanings into account. The only difference is in
how we interpret this difference. Now we have a credible way to argue that
inside the human brain there is a mathematical algorithm which seems
non-algorithmic when viewed from the outside. This is an argument that the legal
distinction between mental steps and computer steps is arbitrary.