Is it possible to build a sapient machine?


Post #1

Post by Bugmaster »

Topic

Many threads regarding dualism, theism, and philosophy in general often run into this topic. Is it even hypothetically possible to build a computer that will be sapient -- i.e., an artificial being that will think, feel, and socialize as humans do? Well, here's our chance to resolve this debate once and for all! Smoke 'em if you got 'em, people, this post is gonna be a long one.

I claim that creating Strong AI (which is another name for a sapient computer) is possible. We may not achieve this today, or tomorrow, but it's going to happen sooner rather than later.

First, let me go over some of the arguments in favor of my position.

Pro: The Turing Test

Alan Turing, the father of modern computing (well, one of them), proposed this test in ages long past, when computers as we know them today did not yet exist. So, let me re-cast his argument in modern terms.

Turing's argument is a thought experiment, involving a test. There are three participants in the test: subject A, subject B, and the examiner E. A and B are chatting on AIM, or posting on this forum, or text-messaging each other on the phone, or engaging in some other form of textual communication. E is watching their conversations, but he doesn't get to talk. E knows that one of the subjects -- either A or B -- is a bona-fide human being, and the other one is a computer, but he doesn't know which one is which. E's job is to determine which of the subjects is a computer, based on their chat logs. Of course, in a real scientific setting, we'd have a large population of test subjects and examiners, not just three beings, but you get the idea.
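
To make the setup concrete, here's a rough sketch in Python of how such a trial could be scored. The names (Subject, run_trial, pass_rate) are invented for illustration; Turing's paper specifies no such code, and a real study would need far more care with prompts and examiners.

import random

class Subject:
    """A participant the examiner only ever sees as text."""
    def __init__(self, name, respond):
        self.name = name
        self.respond = respond  # function: prompt -> reply string

def run_trial(human, machine, examiner, prompts):
    """One trial: E reads both transcripts and guesses which subject is the machine."""
    a, b = random.sample([human, machine], 2)  # randomly label the subjects A and B
    transcripts = {
        "A": [(p, a.respond(p)) for p in prompts],
        "B": [(p, b.respond(p)) for p in prompts],
    }
    guess = examiner(transcripts)  # the examiner returns "A" or "B"
    truth = "A" if a is machine else "B"
    return guess == truth

def pass_rate(human, machine, examiner, prompts, trials=1000):
    """Fraction of trials in which E correctly picks out the machine."""
    hits = sum(run_trial(human, machine, examiner, prompts) for _ in range(trials))
    return hits / trials

If pass_rate hovers around 0.5, E is doing no better than a coin flip -- which is exactly what "cannot reliably determine" means in the claim that follows.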

Turing's claim is that if E cannot reliably determine which being -- A or B -- is human, then they both are. Let me say this again: if E can't tell which of the subjects is a computer, then they're both human, with all rights and privileges and obligations that humanity entails.

This seems like a pretty wild claim at first, but consider: how do you know that I, Bugmaster, am human? And how do I know that you're human, as well? All I know about you is the content of your posts; you could be a robot or a fish, it doesn't really matter. As long as you act human, people will treat you as such (unless, of course, they're jerks who treat everyone like garbage, but that's another story). You might say, "well, you know I'm human because today's computers aren't advanced enough to post intelligently on the forums", but that doesn't prove much, since our technology is advancing rapidly all the time (and we're talking about the future, anyway).

So, if you're going to deny one of Turing's subjects his humanity, then you should be prepared to deny it to everyone, which would be absurd. Therefore, a computer that acts human should be treated as such.

Pro: The Reverse Turing Test

I don't actually know the proper name for this argument, but it's sort of the opposite of the first one, hence the name.

Let's say that tomorrow, as you're crossing the street to get your morning coffee, you get hit by a bus. Your wounds are not too severe, but your pinkie is shattered. Not to worry, though -- an experimental procedure is available, and your pinkie is replaced with a robotic equivalent. It looks, feels, and acts just like your pinkie, but it's actually made of advanced polymers.

Are you any less human than you were before the treatment?

Let's say that, after getting your pinkie replaced, you get hit by a bus again, and lose your arm... which gets replaced by a robo-arm. Are you still human? What if you get hit by a bus again, and your left eye gets replaced by a robotic camera -- are you less human now? What if you get a brain tumor, and part of your brain gets replaced? And what if your tumor is inoperable, and the doctors (the doctors of the future, of course) are forced to replace your entire brain, as well as the rest of your organs? Are you still human? If so, then how are you different from an artificial being that was built out of the same robotic components that your entire body now consists of?

Note that this isn't just idle speculation. People today already have pacemakers, glasses, prosthetic limbs, and yes, even chips implanted in their brains to prevent epileptic seizures (and soon, hopefully, Alzheimer's). Should we treat these people as less human than their all-natural peers? I personally don't think so.

Ok, I know that many of you are itching to point out the flaws in these arguments, so let me go over some common objections.

(to be continued below)


Post #143

Post by harvey1 »

Bugmaster wrote:
You backed off your assertion that bots that display painful responses must feel pain...
That was never my assertion. I merely pointed out that we're logically justified in believing that bots that display a full range of human responses (including those we associate with pain) are human (and that they're experiencing pain). Note that, in your SWAT example, our conclusion is correct (the SWAT guy is human), despite the fact that the bot is not, in fact, a bot. Our conclusion may still be wrong in some cases, of course, but, without being omniscient, we have no intellectually honest choice but to accept it.
Bugmaster wrote:I think that a machine that could duplicate all of my motions and actions (in the non-trivial sense, we're not talking about videotapes here) would need to be executing the same algorithm that I am. How else is it going to replicate all my actions?
Okay, I'm confused by the apparent contradiction here. How can you say that it was never your assertion that bots that display painful responses must feel pain, and yet assert, "I think a machine that could duplicate all of my motions and actions would need to be executing the same [pain] algorithm"? How is that not a contradiction between these two statements? Do you still think that a machine must execute the same internal algorithms as you in order to duplicate your motions and actions?


Post #144

Post by Bugmaster »

harvey1 wrote:Do you still think that a machine must execute the same internal algorithms as you in order to duplicate your motions and actions?
I claim that a machine that duplicates all of my motions and actions perfectly would need to be executing the same algorithm as I am.

However, no two human beings express or experience pain the same way; therefore, if you merely want a machine that experiences pain (or mimics the experience of pain, if you prefer) in some reasonably human fashion, then any old algorithm would do, as long as it produces the desired result.

I don't see a contradiction here.


Post #145

Post by harvey1 »

Bugmaster wrote:
harvey1 wrote:Do you still think that a machine must execute the same internal algorithms as you in order to duplicate your motions and actions?
I claim that a machine that duplicates all of my motions and actions perfectly would need to be executing the same algorithm as I am.
This is why I want to focus on this issue, because I don't agree. If the animatronic head is controlled by the guy in the SWAT van, and the guy in the SWAT van pushes the "4878" button, then the animatronic head will exactly duplicate your outward indications of pain. However, at no point is the animatronic head executing an algorithm that gives it the feeling of pain. So, how can it be executing the same algorithm as you if you feel pain and the animatronic head doesn't?


Post #146

Post by QED »

Duplication of the "outward signs" is obviously not what this is all about. What we are looking for is something that can be inwardly duplicated. This duplication normally takes place through reproduction. If reproduction has passed the ability to perceive pain down through all the generations of life, then that ability must once have had humble beginnings. Can we not then describe how it might have started out as a neural state created by damaged nerves?

Despite Harvey's objection, the state might be just like a flag after all -- one that the organism learns to avoid. So what is the motive for avoidance? Well, sensations are constructed as neural states and can be ascribed internal values. These values act as attractors or repellers, and just as we can "drum up" a tune in our heads on demand, so, I suggest, we can drum up a discord -- a state of pain. It is an essential state, and it must modify our behaviour. But I suggest that's all it takes for the whole thing to work. As such it could readily be implemented in robotics.
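
To illustrate the kind of thing I mean (only a toy sketch of my own, with invented names, not a claim about how nervous systems actually do it): pain here is nothing but a scalar state whose negative value steers the agent away from whatever produced it.

import random

class TinyAgent:
    """Toy agent in which 'pain' is just a valued internal state acting as a repeller."""
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}  # learned value of each action
        self.pain = 0.0                          # current level of 'discord'

    def sense_damage(self, amount):
        # Damaged 'nerves' do nothing more mysterious than set an internal state.
        self.pain = amount

    def act(self):
        # Mostly prefer the highest-valued action, with a little exploration.
        if random.random() < 0.1:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action):
        # Pain lowers the value of the action that led to it, so it gets avoided later.
        self.values[action] -= self.pain
        self.pain = 0.0

# Touching the 'hot' thing hurts, so the agent quickly learns to stop doing it.
agent = TinyAgent(["touch_hot", "stay_away"])
for _ in range(50):
    choice = agent.act()
    if choice == "touch_hot":
        agent.sense_damage(1.0)
    agent.learn(choice)
print(agent.values)  # 'touch_hot' ends up with a strongly negative value

Nothing in this sketch is meant to capture the subtleties of human pain; it only shows that a state which must be avoided can be implemented with very little machinery.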

But then Harvey would still be shaking his head and saying the robot isn't feeling anything. Well, that's a semantic issue as far as I can see. Apart from the infinite subtleties of human pain, I think it's just as valid to say that we don't feel anything either. Alternatively, and with just as much truth, we might also say that we do feel things and, to an infinitely lesser degree, so do robots.

I can still sense some incredulity about this, as I have described it all before with no success. First, I would repeat that pain has evolved, and that it (or something along the same effective lines) would have been essential from pretty much day one of multicellular life. So this tells me that it needn't be algorithmically sophisticated, and that we can expect it to be implementable in very simple neural setups. Second, to try and get over the incredulity that something so apparently tangible and significant as pain might simply be reduced to flags and values, I mentioned our ability to create other mental states on demand. That sound of scraping chalk on the blackboard that somehow seems to come in through our teeth makes me think that pain bears a significant relation to auditory stimulus, and may point to the internal source of the feeling Harvey is so earnestly seeking.


Post #147

Post by Bugmaster »

harvey1 wrote:This is why I want to focus on this issue, because I don't agree. If the animatronic head is controlled by the guy in the SWAT van, and the guy in the SWAT van pushes the "4878" button, then the animatronic head will exactly duplicate your outward indications of pain.
Right, so would a videotape of me. However, you are once again missing the two important points:

* Interactivity
* Identical behavior

If you put me side by side with the Chinese Room (for that is what your SWAT head really is), and subjected us both to the same series of arbitrary stimuli, and we reacted in identical ways -- including pain response -- then I'd argue that the Chinese Room executes the same exact algorithm as me.

At this point, you can look inside the Chinese Room, point at the various components, and exclaim, "Look! This gerbil on the treadmill doesn't experience pain!", or "Look! This series of gears and pulleys doesn't understand Chinese!", but you'd be missing the point: it's the entire room that's conscious (or, in this case, the entire room is Bugmaster 2.0), not any of its components, regardless of whether they're made of gerbils or humans or electronics.

Again: I claim that the Chinese Room could not "exactly duplicate my outward indications of pain" unless it was running a full copy of me, because my outward indications of pain vary greatly, and are ultimately tied in to the rest of my behavior. Sometimes, I talk about my pain (when the doctor asks me about it, for example), sometimes I just wince, sometimes I scream out profanities, sometimes I mutter profanities under my breath, sometimes I get violent on the human who's causing me pain, etc. etc. That's consciousness for you.
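
The criterion I keep appealing to -- identical behavior under the same arbitrary stimuli -- can be written down very plainly. This is only a sketch of my own criterion, with made-up names; nothing here says how you would actually enumerate the stimuli:

def behaviorally_identical(system_a, system_b, stimuli):
    """Feed the same stimulus sequence to both systems, in order, and check that
    every response matches. Both systems may carry internal state between steps,
    which is exactly what a fixed table of canned responses cannot do."""
    for stimulus in stimuli:
        if system_a.respond(stimulus) != system_b.respond(stimulus):
            return False
    return True

My claim is that if this returns True for every sequence anyone cares to try -- pinpricks, doctors' questions, falling desks -- then the two systems are running the same algorithm in every sense that matters.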


Post #148

Post by harvey1 »

Bugmaster wrote:
harvey1 wrote:This is why I want to focus on this issue, because I don't agree. If the animatronic head is controlled by the guy in the SWAT van, and the guy in the SWAT van pushes the "4878" button, then the animatronic head will exactly duplicate your outward indications of pain.
the entire room that's conscious (or, in this case, the entire room is Bugmaster 2.0), not any of its components, regardless of whether they're made of gerbils or humans or electronics... I claim that the Chinese Room could not "exactly duplicate my outward indications of pain" unless it was running a full copy of me, because my outward indications of pain vary greatly, and are ultimately tied in to the rest of my behavior. Sometimes, I talk about my pain (when the doctor asks me about it, for example), sometimes I just wince, sometimes I scream out profanities, sometimes I mutter profanities under my breath, sometimes I get violent on the human who's causing me pain, etc. etc. That's consciousness for you.
What is to prevent the SWAT guy from having 5,000 pain buttons, with each button controlling a known reaction that you have given in the past to pain? The SWAT guy even becomes very familiar with how you react in any given situation, so he knows what buttons to press so that the animatronic head appears like it is really you in pain. It perfectly duplicates your reactions to pain. Are you saying that the animatronic head feels pain or that the whole system feels pain? That seems absurd. Nothing feels pain since the animatronic head is controlled by miniature gears and such.


Post #149

Post by Bugmaster »

harvey1 wrote:What is to prevent the SWAT guy from having 5,000 pain buttons, with each button controlling a known reaction that you have given in the past to pain?
Ah, this sounds like Searle's initial version of his Chinese Room argument, where he had a fixed table of responses to pick from.

The problem is, this isn't how humans work, and it's not even how simple computer programs work.

Humans, and most computer programs, are stateful. Their responses change over time. This means that a list of 5000 pain buttons will not suffice.

Let's go even further, though. Let's say that the SWAT guy has an infinitely long table of all possible responses to stimuli. In this case, he would still need an algorithm to determine which response to pick when, wouldn't he?
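
To spell out the difference (a sketch of my own with invented names, not any real program), compare a fixed table of pain buttons with even the crudest stateful responder:

class ButtonBox:
    """The SWAT-van approach: a fixed table mapping each stimulus to one canned response."""
    def __init__(self, table):
        self.table = table

    def respond(self, stimulus):
        return self.table.get(stimulus, "...")

class StatefulResponder:
    """A still-crude stateful approach: the reply depends on what has already happened."""
    def __init__(self):
        self.history = []

    def respond(self, stimulus):
        self.history.append(stimulus)
        if stimulus == "pin prick":
            # The second prick in a row draws a different response than the first.
            if self.history.count("pin prick") > 1:
                return "Ow! Quit it already!"
            return "Ow!"
        if stimulus == "how does it feel?":
            # The answer refers back to the history -- something no fixed table can do.
            return "Still sore from %d pin prick(s)." % self.history.count("pin prick")
        return "..."

box = ButtonBox({"pin prick": "Ow!"})
me = StatefulResponder()
for s in ["pin prick", "pin prick", "how does it feel?"]:
    print(box.respond(s), "|", me.respond(s))
# The button box repeats itself and has nothing to say about its own past;
# the stateful responder's answers change with its history.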
harvey1 wrote:Are you saying that the animatronic head feels pain or that the whole system feels pain? That seems absurd. Nothing feels pain since the animatronic head is controlled by miniature gears and such.
Now it sounds like you're saying, "we know exactly how the Chinese Room works, therefore it can't be conscious, because consciousness is mysterious". I've already covered this in my opening statement.


Post #150

Post by harvey1 »

Bugmaster wrote:Their responses change over time. This means that a list of 5000 pain buttons will not suffice.
Why, do you think anyone you know would look at you odd if you only had 5,000 expressions of pain? I think if you only had 100 expressions of pain, no one would think anything odd.
Bugmaster wrote:In this case, he would still need an algorithm to determine which response to pick when, wouldn't he?
The SWAT guy knows you, that's why he picks certain responses for the animatronic head when the situation is apt. For example, if someone sticks the head with a pin, the SWAT guy presses button "243" because he knows that this is the same expression you gave when you were once stuck with a pin.
Bugmaster wrote:Now it sounds like you're saying, "we know exactly how the Chinese Room works, therefore it can't be conscious, because consciousness is mysterious". I've already covered this in my opening statement.
No. I'm not talking about a Chinese room, you are. I'm saying the guy in the SWAT van is pressing buttons to show different expressions of pain on the animatronic head. No one can tell the difference from a live human. You are saying the animatronic head is in pain, and that seems to contradict what you said earlier when you said it was not in pain. Which is it: is the animatronic head in pain or not? If the "system" is in pain, then what is causing the feeling of pain in the system?


Post #151

Post by Bugmaster »

harvey1 wrote:Why, do you think anyone you know would look at you odd if you only had 5,000 expressions of pain?
Yes, actually. Here's an example:

Bugmaster: Ow ow my pinkie hurts!
Examiner: How did you hurt your pinkie?
Bugmaster: I was moving my desk to the opposite corner of the room, and accidentally crushed my pinkie with it. Now it hurts like a bitch.

Chances are, that particular expression of pain wasn't on the list.
harvey1 wrote:
Bugmaster wrote:In this case, he would still need an algorithm to determine which response to pick when, wouldn't he?
The SWAT guy knows you, that's why he picks certain responses for the animatronic head when the situation is apt.
That seems like a "yes" answer to me. So, ultimately, the SWAT guy would have to emulate my response to any situation, in order to pass as me. I would argue that a system that acts identically to myself in every situation is, functionally, myself.
No. I'm not talking about a Chinese room, you are.
Your robo-head is the exact same thing. Only you've replaced the I/O slot with the head, the room with a SWAT van, the intern inside with the SWAT guy, and the task of speaking Chinese with the task of feeling pain. You've dressed up the argument in different clothing, but it's still the same argument.

If you disagree, then please show me how your robo-head differs substantially from the Chinese Room, as I have presented it in my opening argument.
You are saying the animatronic head is in pain...
Wrong, I have repeatedly said that the entire system (head, SWAT guy, van, connecting wires, whatever) is in pain. There's a big difference. Read my opening statement, please.


Post #152

Post by harvey1 »

Bugmaster wrote:Here's an example... Chances are, that particular expression of pain wasn't on the list.
So, if a conversation was not included -- just the verbal and physical expressions of pain -- are you saying that intelligent observers cannot be fooled? This seems to contradict what you said earlier, where you acknowledged that an animatronic system could fool us into believing it is in pain. Which is it? Can the animatronic system fool observers, or can't it?
Bugmaster wrote:
harvey1 wrote:
Bugmaster wrote:In this case, he would still need an algorithm to determine which response to pick when, wouldn't he?
The SWAT guy knows you, that's why he picks certain responses for the animatronic head when the situation is apt.
That seems like a "yes" answer to me. So, ultimately, the SWAT guy would have to emulate my response to any situation, in order to pass as me. I would argue that a system that acts identically to myself in every situation is, functionally, myself.
Are you saying the SWAT guy is in pain just because he pushes button "4878"? If so, then why must the SWAT guy be in pain in order to push that button?
Bugmaster wrote:
No. I'm not talking about a Chinese room, you are.
Your robo-head is the exact same thing. Only you've replaced the I/O slot with the head, the room with a SWAT van, the intern inside with the SWAT guy, and the task of speaking Chinese with the task of feeling pain. You've dressed up the argument in different clothing, but it's still the same argument. If you disagree, then please show me how your robo-head differs substantially from the Chinese Room, as I have presented it in my opening argument.
I fail to see the similarities. For one, pain is a feeling, and the Chinese room does not talk about the cause of feelings; it is based on what it means to understand something. You already admitted that the animatronic head does not feel pain, but the analogy of a Chinese room argument seems to say that "something" does feel pain -- which you say is the "system." How can a system of mechanical parts feel pain? Are you saying that factories, companies, and organizations actually feel pain even though they lack neural connections? Does the term "feeling" have any real meaning to you?
Bugmaster wrote:
You are saying the animatronic head is in pain...
Wrong, I have repeatedly said that the entire system (head, SWAT guy, van, connecting wires, whatever) is in pain. There's a big difference. Read my opening statement, please.
Okay, BM, why do you think the entire system feels pain? Earlier you said that the AI community has not yet discovered the algorithms for the feeling of pain. The system suggested here is rather simple (head, SWAT guy, van, wireless connection, mechanical stuff inside the head), so what mysterious algorithm is present in this rather simple architecture that has eluded the AI community?
