Topic
Many threads regarding dualism, theism, and philosophy in general often run into this topic: is it even hypothetically possible to build a computer that will be sapient -- i.e., an artificial being that will think, feel, and socialize as humans do? Well, here's our chance to resolve this debate once and for all! Smoke 'em if you got 'em, people; this post is gonna be a long one.
I claim that creating Strong AI (which is another name for a sapient computer) is possible. We may not achieve this today or tomorrow, but it's going to happen sooner rather than later.
First, let me go over some of the arguments in favor of my position.
Pro: The Turing Test
Alan Turing, the father of modern computing (well, one of them), proposed this test in ages long past, when computers as we know them today did not yet exist. So, let me re-cast his argument in modern terms.
Turing's argument is a thought experiment, involving a test. There are three participants in the test: subject A, subject B, and the examiner E. A and B are chatting on AIM, or posting on this forum, or text-messaging each other on the phone, or engaging in some other form of textual communication. E is watching their conversations, but he doesn't get to talk. E knows that one of the subjects -- either A or B -- is a bona-fide human being, and the other one is a computer, but he doesn't know which one is which. E's job is to determine which of the subjects is a computer, based on their chat logs. Of course, in a real scientific setting, we'd have a large population of test subjects and examiners, not just three beings, but you get the idea.
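To make the protocol concrete, here's a minimal sketch. It's a toy of my own devising, not anything from Turing's paper: the subject functions, their canned replies, and the examiner's crude all-caps heuristic are all illustrative assumptions, and a real test would use live, unscripted conversation.

[code]
# Toy Python sketch of the test protocol described above.
import random

def human_subject(question: str) -> str:
    """Stand-in for the human subject typing answers."""
    return f"Hmm, tough one. My honest take on '{question}': it depends."

def machine_subject(question: str) -> str:
    """Stand-in for the computer subject (a deliberately weak chatbot)."""
    return "INTERESTING. PLEASE TELL ME MORE ABOUT " + question.upper()

def examiner(transcripts: dict) -> str:
    """E reads both chat logs and must name the machine. This toy E
    flags robotic all-caps answers; if nothing stands out, E must guess."""
    for label, log in transcripts.items():
        if all(line.isupper() for line in log):
            return label
    return random.choice(list(transcripts))

def run_test(questions: list) -> bool:
    """One round: True if E correctly picks out the machine, 'B'."""
    transcripts = {
        "A": [human_subject(q) for q in questions],
        "B": [machine_subject(q) for q in questions],
    }
    return examiner(transcripts) == "B"

trials = [run_test(["What did you eat today?", "Can a fish feel pain?"])
          for _ in range(1000)]
# Turing's criterion: against a good enough machine, E's accuracy stays
# near 50% (pure chance). This weak bot is caught every time.
print(f"Examiner accuracy: {sum(trials) / len(trials):.0%}")
[/code]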
Turing's claim is that if E cannot reliably determine which being -- A or B -- is human, then they both are. Let me say this again: if E can't tell which of the subjects is a computer, then they're both human, with all rights and privileges and obligations that humanity entails.
This seems like a pretty wild claim at first, but consider: how do you know that I, Bugmaster, am human? And how do I know that you're human, as well? All I know about you is the content of your posts; you could be a robot, or a fish, it doesn't really matter. As long as you act human, people will treat you as such (unless, of course, they're jerks who treat everyone like garbage, but that's another story). You might say, "well, you know I'm human because today's computers aren't advanced enough to post intelligently on the forums", but that doesn't prove much, since our technology is advancing rapidly (and we're talking about the future, anyway).
So, if you're going to deny one of Turing's subjects his humanity, then you should be prepared to deny that humanity to everyone, which would be absurd. Therefore, a computer that acts human should be treated as such.
Pro: The Reverse Turing Test
I don't actually know the proper name for this argument, but it's sort of the opposite of the first one, hence the name.
Let's say that tomorrow, as you're crossing the street to get your morning coffee, you get hit by a bus. Your wounds are not too severe, but your pinky is shattered. Not to worry, though -- an experimental procedure is available, and your pinky is replaced with a robotic equivalent. It looks, feels, and acts just like your pinky, but it's actually made of advanced polymers.
Are you any less human than you were before the treatment?
Let's say that, after getting your pinky replaced, you get hit by a bus again, and lose your arm... which gets replaced by a robo-arm. Are you still human? What if you get hit by a bus yet again, and your left eye gets replaced by a robotic camera -- are you less human now? What if you get a brain tumor, and part of your brain gets replaced? And what if your tumor is inoperable, and the doctors (the doctors of the future, of course) are forced to replace your entire brain, as well as the rest of your organs? Are you human? If so, then how are you different from an artificial being that was built out of the same robotic components that your entire body now consists of?
Note that this isn't just idle speculation. People today already have pacemakers, glasses, prosthetic limbs, and yes, even chips implanted in their brains to prevent epileptic seizures (and soon, hopefully, Alzheimer's). Should we treat these people as less human than their all-natural peers? I personally don't think so.
Ok, I know that many of you are itching to point out the flaws in these arguments, so let me go over some common objections.
(to be continued below)
Is it possible to build a sapient machine?
Post #131 (harvey1)
Bugmaster wrote: My argument was never solely about pain -- it was your choice to focus on it. Video tapes of people in pain do not feel pain, and neither do animatronic heads, because they are not fully interactive -- as I've pointed out in my previous post.

Okay, but this is a major admission by you after I spent over 30 posts trying to get you to see this very simple fact. And what it means is that the majority of us will have a really good reason to be skeptical that a human reaction of pain coming from an artificial lifeform is actually the gadget experiencing pain. The "Turing Test" applied to qualia has failed miserably at determining if an artificial creature is experiencing pain. Do we agree?
Post #132 (Bugmaster)
harvey1 wrote: Okay, but this is a major admission by you after I spent over 30 posts trying to get you to see this very simple fact.

Wuh? Have you been reading my posts? I've been saying this from the very beginning! In fact, my very opening statement says:

Bugmaster wrote: There are three participants in the test: subject A, subject B, and the examiner E. A and B are chatting on AIM, or posting on this forum, or text-messaging each other on the phone, or engaging in some other form of textual communication. E is watching their conversations, but he doesn't get to talk. E knows that one of the subjects -- either A or B -- is a bona-fide human being, and the other one is a computer, but he doesn't know which one is which. E's job is to determine which of the subjects is a computer, based on their chat logs. ... Turing's claim is that if E cannot reliably determine which being -- A or B -- is human, then they both are.

Interactive conversation is the key. I've been saying this from day one, and you've just been replying with, "rocks don't feel pain! neither do tape recorders!" We agree there.

harvey1 wrote: And, what it means is that the majority of us will have a really good reason to be skeptical that a human reaction of pain coming from an artificial lifeform is actually the gadget experiencing pain.

It depends. Is the gadget interactive? Or is it a recording? If it is interactive, and it acts just as a human in pain would act, then I'd argue that it feels pain. Of course, in order to fully act as a human in pain would act (begging for mercy, fighting back, whatever), it would have to engage in a wide variety of human behaviors... which is what the Turing Test is all about.
Post #133 (harvey1)
Bugmaster wrote: If it is interactive, and it acts just as a human in pain would act, then I'd argue that it feels pain. Of course, in order to fully act as a human in pain would act (begging for mercy, fighting back, whatever), it would have to engage in a wide variety of human behaviors... which is what the Turing Test is all about.

If you've been agreeing with me all along, then why do I still disagree with this paragraph? If the animatronic head is linked by RF to a human in a SWAT van, and someone (call them person X) is talking to the animatronic head (but doesn't know a human in a SWAT van is monitoring and controlling the animatronic head via data feeds and video inputs), isn't it true that person X can be easily fooled into believing the animatronic head (whom person X thinks is a human being) is experiencing pain when the human in the SWAT van presses button "4878"? There really is no pain experienced by anyone, despite what person X honestly and wholeheartedly believes. In that case, your "Turing Test" fails as a valid test for determining if an animatronic head experiences pain.
Post #134 (Bugmaster)
harvey1 wrote: If the animatronic head is linked by RF to a human in a SWAT van, and someone (call them person X) is talking to the animatronic head (but doesn't know a human in a SWAT van is monitoring and controlling the animatronic head via data feeds and video inputs), isn't it true that person X can be easily fooled into believing the animatronic head (whom person X thinks is a human being) is experiencing pain when the human in the SWAT van presses button "4878"?

In practice, what would the head have to do to "easily" fool our subject, assuming that he (the subject) possesses at least average intelligence? It's the "easily" part that frustrates your argument.
I claim that, in order to fool our subject, the head would have to perform an extremely human-like set of behaviors, which it would be unable to perform unless something within the "head/controller" system was actually conscious. It could be the SWAT guy, or it could be an electronic computer; it doesn't matter at that point.
Post #135 (harvey1)
Bugmaster wrote: In practice, what would the head have to do to "easily" fool our subject, assuming that he (the subject) possesses at least average intelligence? It's the "easily" part that frustrates your argument. I claim that, in order to fool our subject, the head would have to perform an extremely human-like set of behaviors, which it would be unable to perform unless something within the "head/controller" system was actually conscious. It could be the SWAT guy, or it could be an electronic computer; it doesn't matter at that point.

The Chinese animatronic head already demonstrated the "in principle" ability to fool someone. It is now just a matter of improving the technology that controls the head. There's no reason to doubt that this technology will develop; nothing in principle forbids it. At the end of the day, the animatronic head experiences no pain, and neither does anyone that controls the head.
Post #136 (Bugmaster)
harvey1 wrote: The Chinese animatronic head already demonstrated the "in principle" ability to fool someone.

Sorry, I believe this is false. Where in the article about that head does it say that the head could hold a fully interactive, intelligent conversation with an observer? Remember that it's the conversational ability that is under question here, not appearance (see my previous posts). Heck, if we judged humanity based solely on appearance, then I myself would probably be classified as inhuman by some people (heh) -- not to mention Stephen Hawking!
Returning to your original argument about pain: I think you've missed an important point that I was trying to raise. I grant you that a robot may fool someone that it is in pain. However, in order to do that, the robot would need to be at least as good as a human actor who fools someone that he's in pain -- or, it needs to be remote-controlled by such an actor, which amounts to the same thing. And, in order to be that good at acting, the robot would have to be able to reproduce all of the behaviors that we traditionally associate with consciousness, which would make it conscious in my book.
I find it somewhat... discouraging... that you repeatedly bring up K-bots (as defined in my previous posts) as evidence against my position, when I have indicated repeatedly that they would fail the Turing Test, and are thus not applicable as evidence. Are you really not able to tell the difference between a head that can talk in a free, unscripted, intelligent, and emotionally aware manner -- and a head that just sits on the shelf looking pretty?
Post #137 (harvey1)
Bugmaster wrote: Sorry, I believe this is false. Where in the article about that head does it say that the head could hold a fully interactive, intelligent conversation with an observer?

BM, you're forgetting the guy in the SWAT van. He can watch the conversation and control the head to do, and say, exactly what is required to fool anybody. The key is, the SWAT van guy wants you to believe the animatronic head is feeling pain. He does so by pushing his nifty "4878" button.
Bugmaster wrote: I grant you that a robot may fool someone that it is in pain. However, in order to do that, the robot would need to be at least as good as a human actor who fools someone that he's in pain -- or, it needs to be remote-controlled by such an actor

Which is a given for this thought exercise...
Bugmaster wrote: And, in order to be that good at acting, the robot would have to be able to reproduce all of the behaviors that we traditionally associate with consciousness, which would make it conscious in my book.

The animatronic head just has very good controls that only make it appear to be human and have qualia; it is the guy in the SWAT van that is controlling what it says and how it says it. (He has a whole bunch of people in the van to help him, if that makes it easier for you to visualize.)
Bugmaster wrote: I find it somewhat... discouraging... that you repeatedly bring up K-bots (as defined in my previous posts) as evidence against my position, when I have indicated repeatedly that they would fail the Turing Test, and are thus not applicable as evidence. Are you really not able to tell the difference between a head that can talk in a free, unscripted, intelligent, and emotionally aware manner -- and a head that just sits on the shelf looking pretty?

You see, the reason I ignore that part of your argument is that you cannot tell whether the animatronic head is just a fancy toaster controlled by some clever people and nifty machinery and software, or is a human being. Once we reach the point where this is technically possible (and the Chinese animatronic head shows that this day is not so far in the future...), the Turing-like test that you devised would be irrelevant when it comes to knowing if machines feel pain. If there is really any question, then we'd have to take the thing apart (i.e., if the designer was being uncooperative by not providing the mechanism of why the animatronic head looked like it was feeling pain).
This shows what I've said all along. The Turing test is a distraction when it comes to knowing if machines have qualia. Looking at their behavior is not a sufficient means to know (or believe), and is therefore irrelevant. It just seems to me that you want to resist this conclusion, yet you "grant [me] that a robot may fool someone that it is in pain." I really see no place where you can go with your argument as far as qualia are concerned. You've effectively admitted that the Turing test is irrelevant for robots with respect to qualia.
Post #138 (Bugmaster)
Bugmaster wrote: Sorry, I believe this is false. Where in the article about that head does it say that the head could hold a fully interactive, intelligent conversation with an observer?

harvey1 wrote: BM, you're forgetting the guy in the SWAT van...

How does this relate to the very real Chinese animatronic head that you're offering up as evidence for your position?
harvey1 wrote: He can watch the conversation and control the head to do, and say, exactly what is required to fool anybody. The key is, the SWAT van guy wants you to believe the animatronic head is feeling pain. He does so by pushing his nifty "4878" button... The animatronic head just has very good controls that only make it appear to be human and have qualia; it is the guy in the SWAT van that is controlling what it says and how it says it.

Right, so I conclude, based on the Turing Test, that the animatronic head is powered by a human consciousness. In reality, it's powered by the SWAT guy, who... is, in fact, a human consciousness! In what way is my conclusion wrong?
harvey1 wrote: You see, the reason I ignore that part of your argument is that you cannot tell whether the animatronic head is just a fancy toaster controlled by some clever people and nifty machinery and software, or is a human being.

As I've repeatedly said, this doesn't matter. I'm concerned with the entire system -- the head and whatever it is that powers it -- and, as long as it acts conscious, I'm going to treat it as such. The entire system, not just the head itself.
Essentially, what you're saying here is, "Stephen Hawking's life-support rig is not conscious, therefore Stephen Hawking is not conscious". That's false.
harvey1 wrote: It just seems to me that you want to resist this conclusion, yet you "grant [me] that a robot may fool someone that it is in pain."

Are you saying that a human being cannot fool you into thinking he or she is in pain? That can't be right.
Remember, again, the two important facts about the Turing Test (a toy sketch follows the list):
1). It uses an unscripted, fully interactive conversation, and
2). It tests the entire system, not just its user interface device.
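Here's the toy sketch of requirement 1) -- every class, reply, and probe in it is made up purely for illustration. The point: a scripted device cannot refer back to the conversation, while even a crude interactive system can.

[code]
# Toy Python sketch: a scripted device replays canned output no matter
# what you say; an interactive system's replies depend on the history.

class ScriptedHead:
    """Like a tape recorder or K-bot: a fixed script, input ignored."""
    def __init__(self):
        self.script = iter(["Hello!", "Ouch, that hurts!", "Goodbye."])

    def reply(self, message: str) -> str:
        return next(self.script, "...")

class InteractiveSystem:
    """Keeps conversational state, so replies can depend on it."""
    def __init__(self):
        self.history = []

    def reply(self, message: str) -> str:
        self.history.append(message)
        if "first thing I said" in message:
            return f"You first said: {self.history[0]!r}"
        return f"Noted. That's {len(self.history)} messages so far."

def probe(device) -> bool:
    """An unscripted follow-up exposes whether anything is listening."""
    device.reply("The sky is green today.")
    answer = device.reply("Repeat the first thing I said to you.")
    return "sky is green" in answer.lower()

print(probe(ScriptedHead()))       # False: the script can't refer back
print(probe(InteractiveSystem()))  # True: state makes interaction work
[/code]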
If you go back and re-read my opening statement (really!), you'll see how I have underscored these requirements.
In my opening statement, I set up a conversation between several entities on an online forum (such as this one). The conversation is your only evidence for or against their humanity, and you don't know a priori which of them are biologically human. In a sense, all of them are animatronic heads that are run by remote control. Yet you know, for a fact, that some of them are remote-controlled by humans, and some of them are remote-controlled by AIs.
If your notion of consciousness does not allow you to distinguish between the two, then it has no explanatory power, and thus no value.
Note that I have asked you repeatedly to offer some evidence for your position; i.e., offer me some way of detecting consciousness in a given entity. You dodged that question for a while, by appealing to some mysterious future Theory of Qualia, until finally admitting that you cannot build a "consciousness detector" to my very modest specifications.
So, if your notion of consciousness has no supporting evidence, and no explanatory power... then what good is it?
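To put that last point in code (a toy of my own, with a made-up detection rule): any detector that sees only behavior is a function of the chat log alone, so two entities that produce identical logs must receive identical verdicts -- no matter how clever the rule.

[code]
# Toy Python sketch: a behavior-only "detector" is a function of the
# transcript alone, so behaviorally identical entities always get the
# same verdict. The rule below is an arbitrary, made-up example.

def behavior_only_detector(log) -> bool:
    """Declares 'conscious' if the transcript contains pain-talk."""
    return any("ow" in line.lower() for line in log)

human_log = ["Hello there.", "Ow, that really hurt!"]
robot_log = list(human_log)  # behaviorally identical, by hypothesis

# Identical input -> identical output, for every detector of this kind:
assert behavior_only_detector(human_log) == behavior_only_detector(robot_log)
print("Behavior-only detectors cannot split behaviorally identical entities.")
[/code]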
Post #139 (harvey1)
Bugmaster wrote: How does this relate to the very real Chinese animatronic head that you're offering up as evidence for your position?

The real Chinese animatronic head demonstrates that the movements are very close to being human, and therefore a guy in the SWAT van controlling the head with this technology could fool people, as you granted in your last post.
Bugmaster wrote: Right, so I conclude, based on the Turing Test, that the animatronic head is powered by a human consciousness. In reality, it's powered by the SWAT guy, who... is, in fact, a human consciousness! In what way is my conclusion wrong?

Because in this example, there is no feeling of pain. The human(s) controlling the animatronic head from the SWAT van might be having a good ole' time as the animatronic head tells you how much pain it is in.
Bugmaster wrote: As I've repeatedly said, this doesn't matter. I'm concerned with the entire system -- the head and whatever it is that powers it -- and, as long as it acts conscious, I'm going to treat it as such. The entire system, not just the head itself.

But wait, you said before that you should treat the feeling of pain as real for bots as well. Do you agree now that the feeling of pain must be substantiated by other means than just behavior for animatronic heads?
harvey1 wrote: It just seems to me that you want to resist this conclusion, yet you "grant [me] that a robot may fool someone that it is in pain."

Bugmaster wrote: Are you saying that a human being cannot fool you into thinking he or she is in pain? That can't be right.

No. I'm saying that an animatronic head showing the behavior of pain when button "4878" is pushed is totally irrelevant to whether the animatronic head feels pain.
Bugmaster wrote: Note that I have asked you repeatedly to offer some evidence for your position; i.e., offer me some way of detecting consciousness in a given entity. You dodged that question for a while, by appealing to some mysterious future Theory of Qualia, until finally admitting that you cannot build a "consciousness detector" to my very modest specifications. So, if your notion of consciousness has no supporting evidence, and no explanatory power... then what good is it?

Because, as it would take another large number of posts to show, the fact that animatronic heads can behave as if they are in pain without feeling pain is a basis for rejecting the argument that animatronic heads that behave consciously are in fact conscious. That being the case, we are only justified in believing what bot designers can demonstrate in their algorithms. If they can't demonstrate it, then parsimony forces us to believe that the bots are not feeling pain and not conscious, but are just mimicry devices. Mimicry is not sufficient to show a cause for the feeling of pain or consciousness.
Post #140 (Bugmaster)
harvey1 wrote: The real Chinese animatronic head demonstrates that the movements are very close to being human...

If that's your only point, then why all the complexity? The SWAT guy could just post "ow ow it hurts me" on some Internet forum, as per my opening statement. We don't even need the robo-head.
Bugmaster wrote: Are you saying that a human being cannot fool you into thinking he or she is in pain? That can't be right.

harvey1 wrote: But wait, you said before that you should treat the feeling of pain as real for bots as well. Do you agree now that the feeling of pain must be substantiated by other means than just behavior for animatronic heads? ... No.

Bingo. When you see a human being acting as though he were in pain, you assume that he is, most likely, in pain. But in fact, that human could be a trained actor having a grand joke at your expense -- you have no way of knowing.
All I'm saying is that, should you see an AI acting exactly as humans act when they're in pain, you still have no way of knowing what the AI feels. All you have to go on is its behavior.
In both cases -- the human actor fooling you, and the AI fooling you -- you'd be logically justified in concluding that both entities do, in fact, feel pain. I realize that you really, really don't want to acknowledge this, which is why I've offered several ways out of this situation:
* Devise a "consciousness detector" that can reliably tell you which entity is conscious and which isn't (thus, essentially, offering supporting evidence for dualism). You admitted you cannot do this.
* Prove that it is logically impossible for anything but a human to feel pain. You have denied this explicitly (and we agree on this point, at least).
* Prove that it is logically impossible for anything but the process of evolution to produce machines (biological or otherwise) that can feel pain. You haven't offered any evidence for this proposition, which sounds preposterous unless you accept dualism, which brings me to the next prospect:
* Prove that it is logically impossible for materialism to be true. So far, I'm not convinced.
* State that your belief in what amounts to an immaterial soul is based on faith (as per my opening argument). There's nothing wrong with this position, logically speaking, but it precludes any possibility of rational debate with the unbelievers.
I honestly can't think of anything else you can do to get out of the absurdities that I've highlighted in your argument. I'm open to suggestions, of course.
harvey1 wrote: Because, as it would take another large number of posts to show, the fact that animatronic heads can behave as if they are in pain without feeling pain is a basis for rejecting the argument that animatronic heads that behave consciously are in fact conscious.

Sorry, this does not follow. Human actors can act as though they are in pain without being in pain -- and yet, I think you'd agree that human actors are conscious. In fact, as I've said earlier, an actor would have to be conscious in order to properly simulate the feeling of pain.