
Sephiroth


For a computer to be self-aware, it needs to constantly adjust its programming and make new decisions about what to do next based on the previous decisions it has made and the information it comes across.

So the computer or robot gets turned on. With its sensors it looks around the room, making determinations about its surroundings and how those surroundings might relate to it, and deciding on something new to do based on its calculations.

Then it sees a doorway and determines it's a door. Over the next second it recalculates what to do next and adjusts its programming based on what it just saw and decided. Over the second after that it recalculates again, based on everything it decided before and all the data it has accumulated, and adjusts its programming to include the new information. So it might recognize the door but decide not to do anything about it at first; then, a second later, its programming requires it to make a new decision while factoring in what it determined a second ago.

The robot might then see people go in and out the door, and adjust its programming to record that the door is meant to be gone through to enter a new area. It might then decide to go through the door to see where it leads. If it does, and its creator tells it not to go through the door without permission, the robot would adjust its programming and thought process to include that too.

Eventually the robot will develop new thoughts that were never preprogrammed, if it is constantly adjusting its own programming about what to do next based on its previous experiences and the decisions it has made so far. If the robot is constantly factoring in the door, the possibility of going through it, and the question of why it has been told not to and what is stopping it, and it makes a new calculation about what to do every second, then over many seconds it might have modified its programming and thought process enough to go through the door when the creator isn't around, just to see what is on the other side.

So the computer would constantly be aware of its own previous thoughts and constantly adjusting its behavior based on them. Much like a human, it would have a continuous train of thought with constantly shifting ideas: aware of what it was thinking seconds ago and throughout its lifespan, and constantly building on its previous ideas about its surroundings and internal thoughts to come up with new ideas about what to do.

So the key to an artificial intelligence that can make decisions on its own is one that is constantly deciding what to do next, making each new decision based on what it was thinking a second ago as well as all the information it has gathered. If the robot watches someone cleaning garbage off the ground and putting it in trash canisters, that image is added to its database, and it keeps that information in mind as it constantly considers what to do next. Eventually the robot might choose to imitate the person and pick up garbage itself, or it might do something else but remember in the future not to put garbage on the ground, having figured out from watching the cleaner that people don't want garbage there.
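To make the idea concrete, here is a minimal sketch in Python of the kind of decision loop I'm describing. Everything in it (the Agent class, the action names, the single "door" rule) is invented purely for illustration; a real robot would need actual perception and a far richer action model.

# A minimal sketch of a loop that decides anew every "second",
# based on accumulated observations and its own past decisions.
import random

class Agent:
    def __init__(self):
        self.memory = []                        # everything observed or decided so far
        self.rules = {"go_through_door": True}  # adjustable "programming"

    def observe(self, event):
        # New information is folded into memory and can rewrite rules,
        # e.g. an instruction from the creator overrides a default.
        self.memory.append(("observed", event))
        if event == "creator forbids door":
            self.rules["go_through_door"] = False

    def decide(self):
        # Each tick, the next choice depends on accumulated memory:
        # actions seen performed by others become candidates (imitation),
        # while rules learned from experience filter them out.
        candidates = ["wait", "explore"]
        candidates += [e for kind, e in self.memory
                       if kind == "observed"
                       and e in ("go_through_door", "pick_up_garbage")]
        if not self.rules["go_through_door"]:
            candidates = [c for c in candidates if c != "go_through_door"]
        choice = random.choice(candidates)
        self.memory.append(("decided", choice))  # future ticks see this decision
        return choice

agent = Agent()
agent.observe("go_through_door")       # sees people using the door
agent.observe("creator forbids door")  # instruction adjusts the rules
agent.observe("pick_up_garbage")       # sees someone cleaning up
for second in range(3):                # one new decision per "second"
    print(second, agent.decide())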

If an AI were to think in such a way, it would be somewhat similar to a human or other living creature. Humans don't just decide something and then do it like a preprogrammed machine. Even as they are doing something they have decided to do, they are thinking about what they are doing and what to do next, based on what's currently going on as well as things they remember.

So what do you guys think of the possibility of a self-aware AI that can make its own decisions and come up with new ideas? If an AI were programmed to constantly decide on its next actions based on information it has considered previously, experiences it has had, and things it has observed, do you guys think it would be possible to make an AI that has desires and a will of its own, and possibly even dreams?

If an AI is programmed to constantly be doing something or figuring out what it should do, deciding what to do next based on its own experiences and ideas, it would be much like a human fighting boredom, constantly deciding what to do next based on their ideas and previous experiences. If such an AI were advanced enough, do you think it would deserve any kind of rights, or do you think that no matter how advanced they are, robots/AI/machines will never be more than lifeless objects people should be able to use and abuse however they like?



Recommended Comments

Boredom is a survival mechanism we evolved to avoid pursuing unrewarding avenues of activity. As an animal, if something is not fulfilling some need or desire we shouldn't be doing it, because it means less time devoted to mating, getting food, defending territory, etc. If we were incapable of being bored, we would get too busy throwing rocks at trees and forget to go get food.

Many human emotions are actually beneficial for survival, especially in animals:

Anger (defense of territory)

Fear (protection of self)

Greed (acquisition of resources)

Lust (reproduction of population)

Boredom (prioritization of activity)

An artificial intelligence has no need to reproduce, as it is functionally immortal. It does not need to eat, can do many things at once, and needs few resources. A networked AI is essentially indestructible and has no threats to its survival. It does not age, it does not tire, and it does not hunger.

An AI actually cannot evolve in a vacuum, because there are literally no pressures on it, other than random changes due to programming; but that isn't evolution, that is just random mutation. If an AI is given a task and evolves to meet that task, it will never encounter any of the pressures that led to our human emotions.

An AI would basically never "naturally" become a human brain. We would have to deliberately program an AI to have feelings like jealousy or avarice, but we don't really understand how those work at the fundamental brain level, so how would we even do that?


Meh, I don't think that necessarily creates "self-awareness". I'll have to respectfully disagree. I'm honestly not informed enough on the subject to really debate you, I don't think. However, I've read a lot of books on consciousness from a lot of different viewpoints (Penrose, Chalmers, Dennett, Searle), and I don't think that faster thinking is the key to creating "self-aware" AI. With the methods we use now, we can really only push artificial intelligence to roughly the speed of human thought, because Moore's law eventually has to break down. Once you start operating at scales of just a few atoms, all these weird micro-physics effects come into play that physicists believe make reliable operation impossible.

I'm also not even sure an AI could "dream". Since the leading idea about dreams is that they help us commit things to memory, would an AI need to dream? If not, why would it dream? If the other common viewpoint about dreams is true (that they evolved to inform us of our own unconscious fears and desires), would an AI even have an unconscious? I guess it'd need an actual consciousness first before we could even ask whether an AI unconscious is a real thing. Another dream theory I've heard bounced around is that dreams are simulated threats, so that when faced with a real threat you'd already know the feeling and immediately know how to react. What "threats" would an AI face? Would AIs start dreaming about humans switching them off? That's a pretty scary scenario. Then of course you have the mystical interpretation of dreams as a connection with the divine. If you accept that the divine exists, would robots be able to connect with it? Would God let a robot's consciousness into the afterlife?

Ultimately I'm not sure. It's an interesting question; one day when I'm bored I'll try to catch you on IRC and talk about it if you want.


I'm also not even sure an AI could "dream". [...]

Dreams are sort of a reorganization of past memories and thoughts, so an AI advanced enough could probably reorganize its data more efficiently in a sleep-mode state, much like defragmenting a hard drive. When in sleep mode and not focused on the reality around you, your mind is free to wander and create whatever reality it forms, without additional input from new experiences in the outside world.
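In rough Python terms, the sleep-mode reorganization could look something like collapsing repeated experiences into compact weighted entries (the log format and the merge rule here are invented just to illustrate the defrag analogy):

# A rough sketch of "sleep mode": offline reorganization of accumulated
# experience, loosely analogous to defragmenting a disk.
from collections import Counter

def consolidate(memory):
    # Collapse repeated experiences into single weighted entries,
    # so waking decisions scan a smaller, tidier store.
    return Counter(memory).most_common()

waking_log = ["saw door", "saw door", "saw person clean", "saw door",
              "told not to enter", "saw person clean"]
print(consolidate(waking_log))
# -> [('saw door', 3), ('saw person clean', 2), ('told not to enter', 1)]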


Boredom is a survival mechanism we evolved to avoid pursuing unrewarding avenues of activity. [...] An AI actually cannot evolve in a vacuum because there are literally no pressures on it. [...]

Being useful to humans could be a pressure for robots which want to continue to exist and be created; that could be preprogrammed as a survival instinct and the motivation behind a lot of an AI's actions. Other preprogrammed instincts that might motivate AIs and contribute to their survival are the ability to control populations and, in the case of some military AIs, killing capabilities. Artificial intelligences could also reproduce if they know how they were created and are able to replicate the procedure; if an AI determines it is at risk of deletion at some point in the future, this could evolve into a survival instinct if it wants to guarantee its continued existence in some form.

I'm also not even sure an AI could "dream". [...]
Dreams are sort of a reorganization of past memories and thoughts. [...]

That's assuming theory #1 is right, and your explanation assumes they even dream anyway. I'll have to err on the side of no for now, though I'll admit I'm biased against such things and would change my mind with a strong enough argument.

Do you believe AIs can achieve consciousness? If so, how do you answer arguments like the Chinese room and Gödel's incompleteness theorems? I believe there is ultimately no way for us, at the present time, to reproduce consciousness on purpose. You can simulate it, but the machine will still not be conscious and therefore not truly self-aware.


What makes you think we're anything but organic robots?

If that question was for me, it's because our consciousness is almost certainly not algorithmic (at least, that's what all the philosophers I've read have said). Of course, maybe all the philosophers I've read are part of the robot conspiracy trying to trick me into believing I'm not a robot. There is a very real possibility that we could all be a computer simulation, too. Then there's the fact that I can't be sure anyone I see, meet, or talk to in my life is actually real. I could be a real person surrounded by robots. o.O

Do you believe AIs can achieve consciousness? If so, how do you answer arguments like the Chinese room and Gödel's incompleteness theorems? [...]

The Chinese room argument just shows that the Turing test can't accurately detect consciousness, so I don't think it really needs an answer if consciousness isn't required to beat the test. Whether AI can achieve consciousness depends on what we consider consciousness to mean. Is it possible for humans to create artificial humans, machines capable of making their own decisions, or a combination of organic and artificial intelligence? As long as technology keeps moving forward, it is only a matter of time before it reaches that point.

The Chinese room argument just shows that the Turing test can't accurately detect consciousness. [...] Whether AI can achieve consciousness depends on what we consider consciousness to mean. [...]

Yeah, I do agree we need to actually pin down what consciousness is. It's a mistake to confuse consciousness with fast processing, though. If you make a computer that simulates consciousness, then that's exactly what it is: a simulation. I'd agree that "one day" we probably will be able to reproduce consciousness. That's a scary and interesting idea. Anyway, good discussion, Methrage. :P
