Authored by: Anonymous on Thursday, July 19 2012 @ 10:10 PM EDT |
Actually, the confusion was over the phrase "what
computers can do". The hardware of a general-purpose
computer does not indulge in semantics, but a computer, with
a little programming, *can* act as if it recognizes
semantics [which I argue is no different from recognizing
semantics] - as in your example of a computer treating some
input (a bitstream originally created by a keystroke / visual
pattern / radio signal / etc.) as meaning the letter "F".
The question of what a computer is "capable of" is an
important one in this debate, as we saw when whatsisname
wrote that guest article. I say that a computer "can" do
anything it can be programmed to do. I conclude that
computers can do semantics, just as humans can.
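To make the keystroke-to-"F" point concrete, here is a minimal sketch (my own illustration, not from the original example): the hardware layer just hands over a bit pattern, and the "semantics" exist only in the mapping the program applies to it.

```python
# Hypothetical sketch: the hardware only moves bits around;
# the meaning "letter F" is imposed entirely by the program's mapping.

def hardware_level(raw: bytes) -> int:
    """The hardware just delivers a number - no meaning attached."""
    return raw[0]

def program_level(code: int) -> str:
    """The program supplies the interpretation: 0x46 'means' F."""
    return chr(code)  # the ASCII/Unicode mapping is a programmed convention

scancode = hardware_level(b"\x46")  # a keystroke arrives as a bit pattern
letter = program_level(scancode)
print(letter)  # F
```

The hardware function would return 70 just as happily for a radio signal or a pixel pattern; only the programmed convention makes it "F".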
In common usage, what "the computer does" is mostly what the
programming does, not what the hardware does.
All I'm saying is that when you say that a computer "can't"
do something, you need to be clearer that you're talking
about the hardware level in a general-purpose computer, and
it's not so much that it "can't", it's that it doesn't need
to. You previously gave a really clear explanation about
why symbolic processing (that was blind to semantics at the
hardware level) was important - I'm afraid I can't recall it
now, but I'll try to find it.
Authored by: Imaginos1892 on Friday, July 20 2012 @ 02:27 PM EDT |
Does the brain operate entirely on chemistry? I saw a program
on Discovery last week where they explored the possibility
that the microtubules within neurons are small enough that
quantum effects become significant; that each nerve cell is
a massively parallel quantum processor. That would add a
whole new dimension to understanding consciousness, and to
the attempt to duplicate it with computers.
------------------------
Nobody expects the Spanish Inquisition!!