Authored by: PolR on Monday, July 23 2012 @ 12:41 AM EDT
I think we have talked past each other.
There is a purely pragmatic point of view to consider. It is possible to write a
set of rules which are not algorithms because they require the agent that
carries out the rules to make judgment calls. You will find such rules in legal
processes. You may say this can be emulated with weighted probabilities, but I
can tell you that humans applying such systems don't use weighted
probabilities. They inject information and decisions from outside of
the rules, basically complementing the system in their own ways. Different
humans will inject different information and decisions based on their own
knowledge, experience, convictions and biases. These systems based on judgment
calls expect humans to inject information in this manner. This is what I mean by
"incompletely specified". People are free to fill in the missing
parts.
On the other hand computers require that everything is spelled out
algorithmically. Computers are not capable of injecting information which is not
explicitly present either in the inputs or in the algorithm. Even a random
decision is an input from a source of random numbers. So the practical
difference between computers and humans is real. When presented with this
difference, why wouldn't a judge consider it? This is surely a distinction he
will be able to understand. I am sure a judge will frown at the suggestion that
his use of discretion is equivalent to a throw of the dice.
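The point that even a "random" decision is just another explicit input can be made concrete. Below is a minimal sketch (the rule set, function names, and threshold are my own illustration, not anything from the comment above): a toy decision procedure whose only discretionary step is a coin flip, where the randomness enters as a seeded generator passed in from outside. Given the same seed and the same inputs, the machine reaches the same decisions every time, which is exactly what distinguishes it from a human exercising discretion.

```python
import random

def decide(case_merit, rng):
    """A toy rule set. The 'discretionary' branch consumes a number
    from rng, an explicit input, rather than injecting anything of
    the machine's own."""
    if case_merit > 0.5:
        return "grant"
    # "Discretion" here is nothing but reading the next value
    # from the supplied source of random numbers.
    return "grant" if rng.random() < 0.5 else "deny"

# Same seed, same inputs -> identical decisions on every run.
cases = (0.2, 0.7, 0.4)
run1 = [decide(m, random.Random(42)) for m in cases]
run2 = [decide(m, random.Random(42)) for m in cases]
assert run1 == run2
```

The sketch only illustrates the narrow claim in the paragraph above: whatever looks like a judgment call inside a program traces back to the inputs or the algorithm itself.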
Returning to the Chinese room, did anyone actually implement one? Did anyone
try to debug this kind of rule set to see whether everything actually occurs as
predicted by the thought experiment? What if the Chinese speakers could tell
the man in the room doesn't speak Chinese because no one is able to actually
write a set of rules with the desired properties? Perhaps Chinese culture is
such that no such set of rules is possible in the first place and the thought
experiment is just a work of fiction.
That being said, I admit I don't know whether the Chinese room experiment leads
to valid conclusions. It is quite possible that someday someone will be able to
write an artificial intelligence program allowing a robot to interact with
humans as in a science fiction movie. Or perhaps research on the human brain
will prove it is nothing more than a biochemical Turing-equivalent computer.
So it is possible that things turn out as you see them. My objections are
two-fold:
1- This scenario is not yet certain; too many unknowns remain.
2- The pragmatic point of view also matters: non-algorithmic procedures exist
and are used in real life.
I will also point out that the argument of this article doesn't depend on how
these philosophical speculations are resolved. The argument is that the
capabilities of the computer are limited at the hardware level in such a manner
that programs can only be algorithms and symbols must be processed without using
their meanings. Whether the human brain has a similar limitation is unimportant
to my argument because the answer to this question won't change the facts that
programs can only be algorithms and the computer hardware can only process
symbols without using their meanings.