
Artificial Intelligence: Revisited




About two months ago, Methrage posed a question on these forums about artificial intelligence. I tried to get a good discussion going, but it didn't work out well. Over the last few days I've run into nearly the same argument in several other places (IRC, a philosophy forum, and a book), so I decided to try and set the internets straight on why scaling up an AI does not produce consciousness. I'm going to use Methrage's post as an example (and hopefully he'll respond), but that doesn't mean his idea is a poor one. It was an intelligent, interesting question.

Let's start by examining consciousness. If we produced a supercomputer that could think faster than us, reason better than us, and hold more knowledge than us, it would be conscious, right? To answer this question, I'm going to call upon a thought experiment.

Imagine for a moment that you were born deaf. To make matters worse, your parents sell you to a scientist in a third-world country so he can run experiments on you. To ensure you hear no sound, the scientist puts you in a soundproof room. He then gives you all existing knowledge about hearing and sound. You read all the information, grasp it, and understand it. Still you do not hear; you do not experience sound. The scientist then develops a way to increase your IQ by thousands of points. You now know more about all sorts of things, including everything about human hearing and sound, and you reason better than even the scientist himself. Still you have not heard sound. Finally the scientist develops a cochlear implant and implants you with it, and for the first time... you hear his voice. You experience sound.

You can easily see the parallels between our subject and a robot. Even if an AI is given all the information in the world, if it does not have consciousness it won't spontaneously develop it. Consciousness is not something that can be produced by knowledge or reason. There are several theories on how, what, and even whether it is produced; but one thing we can say for sure is that you cannot reason yourself into consciousness.

Now to take on Methrage's claim:

For a computer to be self aware, it needs to constantly adjust its programming and make new decisions on what to do next based on previous decisions its made and information it comes upon.

So let's take this on. First we have to ask: what is self-awareness? Does self-awareness imply consciousness? Of course it does. In order to reflect on the self, you need something to do the reflecting. In fact, consciousness is often described simply as "awareness". So according to the example above, Methrage and anyone in support of a "scaling up" argument is wrong.

In order for a computer to be "self-aware" it must be able to reflect upon itself. In order for it to reflect upon itself, it must recognize itself as separate from nature. And in order to recognize itself as separate from nature, it must have consciousness.


Knowledge and reasoning do not imply consciousness.

You must be conscious in order to be self aware.

Methrage's claim, and all "scaling up" arguments like it, are wrong.



Recommended Comments

i don't want to be a dick but this is a pretty naive piece and probably not enough to be able to go ahead and call someone's ideas wrong

you're essentially talking in absolutes about fields that you don't seem to know much about. methrage's post doesn't really make sense though, to be fair.


if you really care about this topic... and have not seen the Star Trek TNG episode about whether or not Data is a machine or a sentient being... watch it.. you would find it interesting in light of this post.


I'm certainly interested in hearing why you believe I'm wrong if you have a reason.

Also, I'll check it out, Rush.


We do not yet understand how our brains work, but we do know that neurons are not just biological transistors. A transistor can only respond to an input by switching itself on or off; a neuron has up to 50 different responses to any given electrochemical input from another neuron.

Computers and organisms were built for fundamentally different things. A computer is built as a mathematical machine; its most basic functions are mathematical. An organism, on the other hand, is primarily concerned with survival and reproduction, which are its most basic functions. All organisms can consume and reproduce, all computers can do math, and these functions form the very core of what each is.

Fundamentally, computers cannot think like living organisms because computers and brains are different things that function in fundamentally different ways, though they complement each other very well. A computer might be built sophisticated enough to achieve basic consciousness, but it would do so inefficiently compared to an organism and would always struggle against these basic differences in design.


Self-awareness is unprovable. The appearance of self-awareness (i.e., "intelligence" as we humans understand it) is provable, and Alan Turing developed a test for it back in 1950.

Methrage is clearly wrong -- for a computer to give the appearance of being intelligent/self-aware, it doesn't need to do anything besides fooling a person. That's the whole point of the Turing test... If it walks like a duck and quacks like a duck, it is a duck.

The technology behind the AI is irrelevant. AI doesn't NEED to do anything except fool its audience.



Sound is just a sensory input that we evolved. There is no reason why a computer/robot/AI/what have you couldn't be given the same sensory input in the form of detecting vibrations in the air. Even if we couldn't design it ourselves, an AI that could reproduce in a manner similar to humans would undergo evolution and could adapt itself to hearing if that were beneficial to its survival.

Consciousness is just the ego that your brain sticks on top of all the different senses, inputs, and chemical messages you're processing, to simplify things and unite them so that we can function. We're just a bunch of cells that have given up a certain degree of autonomy to be part of a larger, more fit organism. There is no fundamental reason why an AI couldn't exist the same way we do, other than what Ogaden was talking about in regards to us being made from different parts.


Consciousness is fantasy. Humans, like rocks, and like [the set of atoms containing both rocks and humans], are just a collection of matter and energy responding to other matter and energy in the environment. Just because we make decisions through analysis doesn't mean we have free will; we simply have very advanced conditioning exactly like your hypothetical AI.

tl;dr: You have nothing on that AI.
