About two months ago, Methrage posed a question on these forums about Artificial Intelligence. I tried to get a good discussion going, but it didn't work out well. Over the last few days I've run into almost the same argument in several other places (IRC, a philosophy forum, and a book). So I decided to try to set the internets straight on why scaling up in AI does not produce consciousness. I'm going to use Methrage's post as an example (and hopefully he'll respond), but that doesn't mean his idea is poor; it was an intelligent, interesting question.
Let's start by examining consciousness. If we produced a supercomputer that could think faster than us, reason better than us, and contained more knowledge than us, it would be conscious, right? In order to answer this question, I'm going to call upon a thought experiment.
Imagine for a moment that you were born deaf. To make matters worse, your parents sell you to a scientist in a third world country so he can do experiments on you. The scientist, in order to ensure you hear no sound, puts you in a soundproof room. He then gives you all knowledge on hearing and sound. You read all the information, grasp it, and understand it. Still you don't hear; you do not experience sound. The scientist then develops a way to increase your IQ by thousands. You now know more about all sorts of things, including everything about human hearing and sound, and reason better than even the scientist himself. Still you have not heard sound. Now the scientist develops a cochlear implant and implants you with it, and for the first time... you hear his voice. You experience sound.
You can very easily see the parallels between our subject and a robot. Even if the AI is given all the information in the world, if it does not have consciousness then it won't spontaneously develop it. Consciousness is not something that can be produced by knowledge or reason. There are several theories on how, what, and even if it is produced; but one thing we can say for sure is you cannot reason yourself into consciousness.
Now to take on Methrage's claim:
For a computer to be self aware, it needs to constantly adjust its programming and make new decisions on what to do next based on previous decisions its made and information it comes upon.
So let's take this on. First we have to ask, "what is self-awareness?" Does self-awareness imply consciousness? Of course it does. In order to reflect on the self, you need something to be able to do the reflecting. In fact, consciousness is often described as "awareness". So according to our above example, Methrage and anyone in support of a "scaling up" argument is wrong.
In order for a computer to be "self-aware" it must be able to reflect upon itself. In order for it to reflect upon itself, it must recognize itself as separate from nature. In order to recognize itself as separate from nature, it must have consciousness.
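To see why the mechanism in that quote doesn't get us there on its own, here's a toy sketch (all names hypothetical, not anyone's actual proposal) of a program that "constantly adjusts its programming and makes new decisions based on previous decisions it's made": a simple learning loop. It does exactly what the quote describes, yet it is plainly just symbol-shuffling, with no experience of anything.

```python
import random

random.seed(0)  # make the toy run repeatable

class SelfAdjustingAgent:
    """A program that adjusts its own decision rule based on the
    outcomes of its previous decisions. (Hypothetical toy example.)"""

    def __init__(self):
        self.history = []                    # record of (decision, reward)
        self.value = {"A": 0.0, "B": 0.0}    # estimated payoff of each action

    def decide(self):
        # New decision based on previous decisions: usually pick the
        # action whose past outcomes looked best, sometimes explore.
        if random.random() < 0.1:
            return random.choice(["A", "B"])
        return max(self.value, key=self.value.get)

    def learn(self, decision, reward):
        # "Adjust its programming": nudge the rule toward what worked.
        self.history.append((decision, reward))
        self.value[decision] += 0.2 * (reward - self.value[decision])

agent = SelfAdjustingAgent()
for _ in range(200):
    d = agent.decide()
    reward = 1.0 if d == "A" else 0.0    # the environment secretly favors "A"
    agent.learn(d, reward)

# The rule has drifted toward the rewarded action, driven entirely by
# the agent's own past decisions.
print(agent.value["A"] > agent.value["B"])
```

The loop satisfies the quoted description word for word, but nothing in it recognizes itself as separate from nature, just as the subject in the soundproof room knew everything about sound without ever hearing it.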
Knowledge and reasoning do not imply consciousness.
You must be conscious in order to be self-aware.
Methrage and all "scaling up" arguments are wrong.