Is it possible to build a sapient machine ?

For the love of the pursuit of knowledge


Is it possible to build a sapient machine ?

Post #1

Post by Bugmaster »

Topic

Many threads regarding dualism, theism, and philosophy in general often run into this topic. Is it even hypothetically possible to build a computer that will be sapient -- i.e., an artificial being that will think, feel, and socialize as humans do ? Well, here's our chance to resolve this debate once and for all ! Smoke 'em if you got 'em, people, this post is gonna be a long one.

I claim that creating Strong AI (which is another name for a sapient computer) is possible. We may not achieve this today, or tomorrow, but it's going to happen sooner rather than later.

First, let me go over some of the arguments in favor of my position.

Pro: The Turing Test

Alan Turing, the father of modern computing (well, one of them), proposed this test in ages long past, when computers as we know them today did not yet exist. So, let me re-cast his argument in modern terms.

Turing's argument is a thought experiment, involving a test. There are three participants in the test: subject A, subject B, and the examiner E. A and B are chatting on AIM, or posting on this forum, or text-messaging each other on the phone, or engaging in some other form of textual communication. E is watching their conversations, but he doesn't get to talk. E knows that one of the subjects -- either A or B -- is a bona-fide human being, and the other one is a computer, but he doesn't know which one is which. E's job is to determine which of the subjects is a computer, based on their chat logs. Of course, in a real scientific setting, we'd have a large population of test subjects and examiners, not just three beings, but you get the idea.
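If it helps, here's the protocol in rough code form (purely illustrative; all the names are mine, and the hard parts are obviously hidden inside the examiner and the machine):

```python
import random

def run_turing_test(human, machine, examiner, questions):
    """A toy sketch of Turing's protocol: two hidden subjects answer the
    same questions, and the examiner must guess which one is the machine,
    using nothing but the transcript."""
    # Randomly assign the two subjects to the labels A and B.
    labels = {"A": human, "B": machine}
    if random.random() < 0.5:
        labels = {"A": machine, "B": human}

    transcript = []
    for q in questions:
        for label, subject in labels.items():
            transcript.append((label, q, subject(q)))

    guess = examiner(transcript)            # examiner returns "A" or "B"
    actual = "A" if labels["A"] is machine else "B"
    return guess == actual                  # did the examiner spot the machine?

# The Strong AI claim, restated: for a good enough machine, no examiner
# wins this game significantly more often than chance.
```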

Turing's claim is that if E cannot reliably determine which being -- A or B -- is human, then they both are. Let me say this again: if E can't tell which of the subjects is a computer, then they're both human, with all rights and privileges and obligations that humanity entails.

This seems like a pretty wild claim at first, but consider: how do you know that I, Bugmaster, am human ? And how do I know that you're human, as well ? All I know about you is the content of your posts; you could be a robot, or a fish, it doesn't really matter. As long as you act human, people will treat you as such (unless, of course, they're jerks who treat everyone like garbage, but that's another story). You might say, "well, you know I'm human because today's computers aren't advanced enough to post intelligently on the forums", but that doesn't prove much, since our technology is advancing rapidly all the time (and we're talking about the future, anyway).

So, if you're going to deny one of Turing's subjects his humanity, then you should be prepared to deny this humanity to everyone, which would be absurd. Therefore, a computer that acts human should be treated as such.

Pro: The Reverse Turing Test

I don't actually know the proper name for this argument, but it's sort of the opposite of the first one, hence the name.

Let's say that tomorrow, as you're crossing the street to get your morning coffee, you get hit by a bus. Your wounds are not too severe, but your pinky is shattered. Not to worry, though -- an experimental procedure is available, and your pinky is replaced with a robotic equivalent. It looks, feels, and acts just like your pinky, but it's actually made of advanced polymers.

Are you any less human than you were before the treatment ?

Let's say that, after getting your pinky replaced, you get hit by a bus again, and lose your arm... which gets replaced by a robo-arm. Are you still human now ? What if you get hit by a bus again, and your left eye gets replaced by a robotic camera -- are you less human now ? What if you get a brain tumor, and part of your brain gets replaced ? And what if your tumor is inoperable, and the doctors (the doctors of the future, of course) are forced to replace your entire brain, as well as the rest of your organs ? Are you still human ? If so, then how are you different from an artificial being that was built out of the same robotic components that your entire body now consists of ?

Note that this isn't just idle speculation. People today already have pacemakers, glasses, prosthetic limbs, and yes, even chips implanted in their brains to prevent epileptic seizures (and soon, hopefully, Alzheimer's). Should we treat these people as less human than their all-natural peers ? I personally don't think so.

Ok, I know that many of you are itching to point out the flaws in these arguments, so let me go over some common objections.

(to be continued below)


Post #161

Post by Bugmaster »

harvey1 wrote:The brain of the person having conscious thought. That's what the Chinese Room is about. You're in effect saying that the feeling of pain does not occur in the brain but can occur with non-embodied minds. Is that what you believe, that non-embodied minds exist once we start making animatronic heads that mechanically imitate expressions of pain?
What does this have to do with anything I've said so far ???

Let's say that we build a Chinese Room, with the intern inside. The Chinese Room appears to be able to "speak" (actually, read and write) Chinese. Searle claims that, since the person inside does not speak Chinese, the Chinese Room is just faking.

Let's say we take a look at the human brain, with the amygdala inside. The amygdala cannot speak Chinese; therefore, the human brain must be just faking.

Is there a principal difference between these two arguments ? If so, then tell me what it is. If not, then whether dualism is true or not doesn't matter, for the reasons I've stated in my previous post.
When humans feel pain, it is the human that is feeling the pain. In this thought experiment, who in your opinion is feeling the pain?
The Chinese Room. Duh.

I am really puzzled by your questions... "geographic location" ? What the heck is that ? How does it relate to anything I've said ?

Remember, I don't believe in a separate "self" that resides in a human; in my worldview, the "self" is just another process. You, however, seem to believe that the "self" is a sort of homunculus that lives inside my head, and who is completely undetectable (until your Grand Theory of Pain (tm) comes along). Why must you go to such lengths to maintain your obviously faulty beliefs ?
5000 behaviors of the feeling of pain is sufficient to convince someone that an individual (e.g., animatronic head) is in pain.
I've denied that this is true, and merely repeating it won't convince me. Lookup tables are very bad at any kind of computation, consciousness included.
Your response was, "give me the table so I can figure out how to defeat the convincing animatronic head as feeling pain," but we've already agreed that we can only go based on responses, not what is happening internally (i.e., the premise of the Turing Test).
Yes, but you seem to be saying, "You can simulate pain with a lookup table, therefore the Turing Test argument is false". However, I claim that you cannot simulate pain with a lookup table, and thus your counter-argument fails.
If you secretly get the table info that the SWAT guy has memorized..., then are you saying there is no longer a feeling of pain going on?
This is another instance of an ontology/epistemology confusion. Let's say that you meet a very skilled, yet relatively unknown human actor (you've never heard of him). He acts as though he's in pain in a flawlessly convincing manner -- let's say that he's simulating a broken leg. Will you, just by looking at his behavior, assume that the actor is in pain ?

I claim that a negative answer to this question is absurd; if you do choose to answer "no", I'll explain why. Otherwise, let's pretend you answered "yes", and move on.

You take the actor to the doctor, who will gladly cure the actor for a low, low price of $500k. At this point, the actor stops squirming in pain, and says, "ha ha, fooled you !". The doctor walks away, dejected.

So, the actor was not truly feeling any pain, but you were justified in assuming that he was, based solely on his behavior, until some additional evidence came along (i.e., the actor came clean).

All I'm saying is that you're justified in assuming that the Chinese Room understands Chinese, or feels pain, or whatever, until some additional evidence comes along.

You have admitted already that you have no access to the actor's feelings of pain; you cannot build a consciousness detector. Thus, you're relying solely on his behavior to determine whether he's in pain or not. But, if behavior is your only criterion, then you're forced to assume that the Chinese Room is as conscious as the actor. That's all the Turing Test is saying, it's as simple as that.
But, if you use the Turing Test to establish that something indeed feels pain, then you must also say that the machine feels the same intensity of pain as we do since you said that the algorithm must be the same if the expressions are the same, right?
That depends on what you mean by "the same". If the Chinese Room acts identically to me in all respects, then I'd agree that it must be using the same algorithm. On the other hand, the expressions of pain vary amongst individuals, who are all executing their own, different algorithms; the Chinese Room is not an exception.


Post #162

Post by harvey1 »

Bugmaster wrote:What does this have to do with anything I've said so far???... Let's say we take a look at the human brain, with the amygdala inside. The amygdala cannot speak Chinese; therefore, the human brain must be just faking. Is there a principal difference between these two arguments? If so, then tell me what it is.
Well, for one thing, the Systems Reply does not entail the Turing Test being true. You seem to me to be making this false assumption.
Bugmaster wrote:I am really puzzled by your questions... "geographic location"? What the heck is that ? How does it relate to anything I've said?
If the "system" is feeling pain, and the SWAT guy and buttons are part of the system, then this "system" is spreadout over a geographical area. (Versus inside someone's brain, or inside the animatronic head's mechanical apparatus.)
Bugmaster wrote:
5000 behaviors of the feeling of pain is sufficient to convince someone that an individual (e.g., animatronic head) is in pain.
I've denied that this is true, and merely repeating it won't convince me. Lookup tables are very bad at any kind of computation, consciousness included.
Are you saying that the SWAT guy cannot in principle memorize 5000 reactions as long as the buttons are properly organized so as to make the reactions believable? Why would you think that? Also, I recall that you did say that an animatronic head could fool us into believing that it is in pain. Don't you recall that?
Bugmaster wrote:However, I claim that you cannot simulate pain with a lookup table, and thus your counter-argument fails.
Basically you are saying that the SWAT guy can't push the buttons. That's what it amounts to, but you've given no reason why the SWAT guy cannot push these buttons. It seems you want me to believe on faith that the SWAT guy is somehow handicapped in this regard. Why?
Bugmaster wrote:
If you secretly get the table info that the SWAT guy has memorized..., then are you saying there is no longer a feeling of pain going on?
This is another instance of an ontology/epistemology confusion. Let's say that you meet a very skilled, yet relatively unknown human actor (you've never heard of him). He acts as though he's in pain in a flawlessly convincing manner -- let's say that he's simulating a broken leg. Will you, just by looking at his behavior, assume that the actor is in pain?
Yes. If the acting is good, I could believe that.
Bugmaster wrote:You take the actor to the doctor, who will gladly cure the actor for a low, low price of $500k. At this point, the actor stops squirming in pain, and says, "ha ha, fooled you !". The doctor walks away, dejected. So, the actor was not truly feeling any pain, but you were justified in assuming that he was, based solely on his behavior, until some additional evidence came along (i.e., the actor came clean).
Yes, exactly. I was justified. However, this contradicts your point. You said that if an expression is identical to a behavior, then the behavior requires the execution of the same algorithm that the real feeling of pain requires in order to generate those expressions. That's why you said that you feel justified in using the Turing Test: if you can perfectly fool people, then it goes without saying that the person is in pain. That's why I introduced the animatronic head, to demonstrate that outward expressions can be mechanically depicted without the internal feelings of pain. Then you took the unprecedented and, I'd say, over-the-boardwalk approach of saying that the animatronic head (+ SWAT van, SWAT guy, RF signal) as a whole is feeling real pain. That's why I've been hammering away at this notion that identical outward expressions of pain do not entail identical inward feelings of pain.
Bugmaster wrote:All I'm saying is that you're justified in assuming that the Chinese Room understands Chinese, or feels pain, or whatever, until some additional evidence comes along.
Evidence has come along. We found out that the animatronic head that fooled us is not a person, it is a mechanical device being controlled by RF with a guy in a SWAT van. Now that you have that evidence can we safely say that you are no longer justified in believing the animatronic joke was a situation where real pain was actually felt?
Bugmaster wrote:You have admitted already that you have no access to the actor's feelings of pain; you cannot build a consciousness detector. Thus, you're relying solely on his behavior to determine whether he's in pain or not. But, if behavior is your only criterion, then you're forced to assume that the Chinese Room is as conscious as the actor. That's all the Turing Test is saying, it's as simple as that.
And, my rebuttal to the Turing Test is saying that if a mechanical apparatus is set up to fool you (i.e., pull a joke on you), then you have the right to take the famous principle to heart: "Fool me once, shame on you. Fool me twice, shame on me." That principle applies in the real world, and it is why the Turing Test is not really that much help in determining whether a device actually feels pain.


Post #163

Post by Bugmaster »

harvey1 wrote:Well, for one thing the Systems Reply does not entail the Turing Test being true. You seem to me to make this false assumption.
Oh, no, of course not. But the Systems Reply does shut down your only objection to the Turing Test, other than faith of course. If you have no other objections, then you're not justified in believing that I'm wrong.
If the "system" is feeling pain, and the SWAT guy and buttons are part of the system, then this "system" is spreadout over a geographical area...
Oh, of course. If we took the two hemispheres of your brain, cut the connecting tissue between them, and replaced it with some loooong wiring, then your brain would be spread over a geographical area, too. I don't see why this matters, though.
Are you saying that the SWAT guy cannot in principle memorize 5000 reactions as long as the buttons are properly organized so as to make the reactions believable? Why would you think that?
Yes, I am saying that. To demonstrate why I think that, I'll need a sample table from you. As I said, let's stick with something simple, like a dog or even a mouse; we shouldn't need the full 5000 reactions then, should we ?
Also, I recall that you did say that an animatronic head could fool us into believing that it is in pain. Don't you recall that?
Yes. But I doubt that it can do so using a lookup table. Fortunately, there are other, much better tools at our disposal, such as stateful algorithms.
It seems you want me to believe on faith that the SWAT guy is somehow handicapped in this regard. Why?
I really don't know how to explain the difference between lookup tables and state machines, other than by example. So... I'll wait until you provide one. Until then, you can refer to Knuth or Dijkstra.
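In the meantime, here's the kind of toy example I have in mind (everything below is invented by me, and the "dog" is laughably oversimplified):

```python
# A stateless lookup table: the same stimulus always gets the same
# response, no matter what happened before.
PAIN_TABLE = {
    "poke": "yelp",
    "pat": "wag tail",
    "loud noise": "flinch",
}

def lookup_dog(stimulus):
    return PAIN_TABLE.get(stimulus, "stare blankly")

# A (very crude) stateful algorithm: the response depends on accumulated
# state, so repeated pokes escalate and petting calms the dog back down.
class StatefulDog:
    def __init__(self):
        self.irritation = 0

    def react(self, stimulus):
        if stimulus == "poke":
            self.irritation += 1
            return "growl and back away" if self.irritation > 3 else "yelp"
        if stimulus == "pat":
            self.irritation = max(0, self.irritation - 1)
            return "wag tail"
        return "stare blankly"
```

The lookup table can never produce the "growls after the fourth poke" behavior unless you add a separate entry for every possible history of stimuli, which is exactly the combinatorial explosion I keep going on about.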
Yes, exactly. I was justified. However, this contradicts your point. You said that if an expression is identical to a behavior, then the behavior requires the execution of the same algorithm that the real feeling of pain requires in order to generate those expressions. That's why you said that you feel justified in using the Turing Test since if you can perfectly fool people then it goes without saying that the person is in pain.
No, that's not what I said. I said that we are already using the Turing Test, implicitly, in order to determine whether our fellow humans are conscious (unless you believe in telepathy, which I do not, or unless you can present me with a consciousness detector, which you cannot). If behavior is our only criterion for determining if someone is conscious, then we are forced to conclude that a computer that acts as though it is conscious, is conscious. We are also forced to conclude that the Chinese room understands Chinese, and that the actor is in pain. We could always be wrong in our conclusion, but we're not omniscient, so it's still justified.

Furthermore, I claim that the only way to properly simulate the understanding of Chinese is to... actually understand Chinese. This is the difference between your pain example, and the Chinese Room. I can envision an actor (human or robotic, doesn't matter) who acts as though he is in pain, but is not; I cannot, however, envision an actor who acts as though he understands Chinese -- enough to fool a fluent Chinese speaker -- but does not.

I challenge you to describe to me how a human (or robotic, whatever) actor would fake the understanding of Chinese well enough to engage in a philosophical discussion on this forum (just to use an example), without actually understanding Chinese.
I'd say, over-the-boardwalk approach of saying that the animatronic head (+ SWAT van, SWAT guy, RF signal) as a whole is feeling real pain.
Can you tell me precisely where, under your worldview, the feeling of pain occurs in humans ? I claim that you can't. You can say, "oh, such and such area of the brain lights up when humans feel pain", but I can always counter with, "that's just the nerve signals moving around, that's not where the self feels pain", as you keep doing.
Evidence has come along. We found out that the animatronic head that fooled us is not a person, it is a mechanical device being controlled by RF with a guy in a SWAT van.
Woah ! When did we find that out ? In your original experiment, the SWAT guy was explicitly hidden. Please don't change your experiment in mid-stream.


Post #164

Post by QED »

Bugmaster wrote:
harvey1 wrote:That is, it operates on the same mechanical principles that allow all mechanical devices to operate. Are you saying that all mechanical devices experience the feeling of pain?
That's a good question. I think I agree (*) with QED's point of view on this subject: consciousness (or pain, whatever) is not a boolean thing, but a continuum. So, all mechanical devices (including human brains) feel pain, just to different degrees. Keep in mind, though, that when I say "X feels pain", I'm merely describing my model of X. I don't believe that there's an actual, dualistic pain for X to feel.
Harvey, once again it seems to me as though this all hinges on something akin to what I see as the arbitrary division between the material and immaterial. Here it looks like you start out with the basic premise that there are tangible things like nerves and intangible things like sensations. I would argue that as observers of such things, we are very poorly placed to be objective about this judgment. This led me to suggest in post 143 and others that we consider the evolutionary path towards this phenomenon. I'm wondering if you're dismissing this approach as being in the category of behaviourism?


Post #165

Post by Bugmaster »

QED wrote:I'm wondering if [Harvey is] dismissing this approach as being in the category of behaviourism?
Yeah, which is odd, because I think there are at least two forms of behaviorism. Harvey is right in assuming that "strong behaviorism" -- the notion that all systems, including conscious humans, are stateless lookup tables -- is absurd; in fact, I have been denying it for like 5 posts now. However, I see nothing wrong with "weak behaviorism" -- the notion that behavior (not mental states, which may or may not exist) is what we can justifiably use to determine who's human and who isn't.

I think one of the reasons we can't reach consensus on this thread is the disparity in our education.

Harvey's advanced training in philosophy has essentially optimized his thought process for the purposes of debate. So, when he sees something similar to behaviorism, he automatically assumes that it's wrong and moves on, without having to consider the details. My own CS training, on the other hand, has blinded me to the fact that not everyone understands the difference between an FSM and a hashtable (even though you can sometimes express one as the other). I don't really see a good solution to this... Any advice ?
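To illustrate the "express one as the other" bit, here's a minimal sketch (states and responses made up by me): an FSM's transition function can be stored in a hashtable, but the table is keyed on (state, input) pairs rather than on bare inputs, and the current state is what gives the machine its memory.

```python
# The transition table is itself a hashtable, but it maps (state, stimulus)
# pairs to (next_state, response); the state is what carries the history.
TRANSITIONS = {
    ("calm",    "poke"): ("annoyed", "yelp"),
    ("annoyed", "poke"): ("angry",   "growl"),
    ("angry",   "poke"): ("angry",   "snap"),
    ("calm",    "pat"):  ("calm",    "wag tail"),
    ("annoyed", "pat"):  ("calm",    "wag tail"),
    ("angry",   "pat"):  ("annoyed", "wary wag"),
}

def step(state, stimulus):
    return TRANSITIONS.get((state, stimulus), (state, "stare blankly"))

state = "calm"
for stimulus in ["poke", "poke", "poke", "pat"]:
    state, response = step(state, stimulus)
    print(stimulus, "->", response)   # the same "poke" gets three different responses
```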


Post #166

Post by harvey1 »

No, BM. QED is providing an "orthodox" perspective very similar to Andy Clark's notion of microcognition. I doubt QED is committed to your version of the Turing Test where it is also applied to the issue of feeling pain and other qualia.
Bugmaster wrote:However, I see nothing wrong with "weak behaviorism" -- the notion that behavior (not mental states, which may or may not exist) is what we can justifiably use to determine who's human and who isn't.
I wouldn't classify you as a weak behaviorist since you stated quite emphatically that identical behavior requires an identical algorithm inside the brain. In addition, you've really taken the leap with your statement that animatronic systems as simple as a guy in a van pushing RF control buttons to control an animatronic head have real feelings of pain being felt by the system (which are not felt by the guy in the van, the animatronic head, or the van). Feelings of pain feel as real as any feeling we have, and to suggest that "something" has these intense feelings for some mystical reason (just to maintain your hardcore behaviorist outlook) is really over the top. I think common sense should play a part in what you say before you say it, but hey, it's a free country.


Post #167

Post by Bugmaster »

I wouldn't classify you as a weak behaviorist since you stated quite emphatically that identical behavior requires an identical algorithm inside the brain.
Your earlier (and still current, come to think of it) statements lead me to believe that you're not quite certain what an algorithm is. What's an algorithm, according to your definition ?
In addition, you've really taken the leap with your statement that animatronic systems as simple as a guy in a van pushing RF control buttons to control an animatronic head have real feelings of pain being felt by the system (which are not felt by the guy in the van, the animatronic head, or the van).
In fact, I have denied this multiple times in the past, but I guess I'll deny it again, just for completeness:

* I claim that a simple lookup-table type of system, such as the one you propose, will not work. This does not mean that some other, more sophisticated system (such as an FSM) will not work, and therefore your insistence on a lookup table is nothing but a straw man argument. I will need a sample lookup table from you (or just a snippet of one, big enough to emulate a dog) in order to demonstrate why it will not work.

* I never claimed that the system you describe actually feels pain, which is, to me, irrelevant. I only claimed that if we interacted with this system, we'd be justified in thinking that it feels pain -- in the same way that interacting with a very skilled actor would lead us to believe that he feels pain.

Please, Harvey, read the actual words I've posted, not the words you think I would've posted if I were a strict behaviorist or whatever.

I would also like to point out, again, that there's a very big difference between feeling pain, and understanding Chinese (as per my original example). I can easily imagine a human actor who can act as though he is in pain without actually feeling it; I can not imagine a human actor who can act as though he understands Chinese (well enough to fool a native Chinese speaker) without actually understanding Chinese.


Post #168

Post by harvey1 »

Bugmaster wrote:Your earlier (and still current, come to think of it) statements lead me to believe that you're not quite certain what an algorithm is. What's an algorithm, according to your definition?
I feel that is quite insulting, BM. I've known what an algorithm is since you were in diapers, so please don't come to me and say that. A much more polite way of asking that question would be for you to provide your definition and ask if I agree with it. If I agree, then you should show how that definition doesn't agree with my choice of words. That is not insulting, and it is the way a proper debate should proceed.
Bugmaster wrote:I will need a sample lookup table from you (or just a snippet of one, big enough to emulate a dog) in order to demonstrate why it will not work.
And, what if you are not given a table? Can you be fooled?
Bugmaster wrote:I never claimed that the system you describe actually feels pain, which is, to me, irrelevant.
You said this, right?
I think that a machine that could duplicate all of my motions and actions (in the non-trivial sense, we're not talking about videotapes here) would need to be executing the same algorithm that I am. How else is it going to replicate all my actions?
If it is duplicating your exact expressions of pain, then you said you are justified in believing this. Which is it?
Bugmaster wrote:Please, Harvey, read the actual words I've posted, not the words you think I would've posted if I were a strict behaviorist or whatever.
I think those were your words.
Bugmaster wrote:I can not imagine a human actor who can act as though he understands Chinese (well enough to fool a native Chinese speaker) without actually understanding Chinese.
Delegates to the U.N. do so all the time with someone translating into their ear piece.


Post #169

Post by Bugmaster »

harvey1 wrote:I've known what an algorithm was when you were in diapers...
Hm, you must be pretty ancient, then.
A much more polite way of asking that question is for you to provide your definition and ask if I agree with that definition.
I've already mentioned earlier that, for the purposes of this discussion, we can view consciousness as a finite state machine, which may or may not be deterministic. An FSM is a pretty crude model, of course (it's single-threaded, for one thing), but I think it will suffice for our purposes.
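For concreteness, here's roughly what I mean by "algorithm" in this context (a bare-bones sketch, with all names mine): a set of states plus a rule mapping (state, input) to candidate (next state, output) pairs; if the rule ever returns more than one candidate, the machine is nondeterministic.

```python
import random

class FSM:
    """A finite state machine: a current state plus a rule that maps
    (state, input) to a list of (next_state, output) candidates.
    One candidate per pair = deterministic; several = nondeterministic."""
    def __init__(self, start, rule):
        self.state = start
        self.rule = rule

    def step(self, symbol):
        candidates = self.rule(self.state, symbol)
        self.state, output = random.choice(candidates)
        return output
```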
Bugmaster wrote:I will need a sample lookup table from you (or just a snippet of one, big enough to emulate a dog) in order to demonstrate why it will not work.
And, what if you are not given a table? Can you be fooled?
Why are you being so evasive ? All I want is to show you how your lookup table approach is a straw man argument, because lookup tables cannot be used to adequately emulate consciousness (or anything else of value, really). You can show me a lookup table that will adequately emulate a dog's behavior (just as an example, you can pick your favorite mammal), or you can concede the point, but please stop evading.
You said this, right?
I think that a machine that could duplicate all of my motions and actions (in the non-trivial sense, we're not talking about videotapes here) would need to be executing the same algorithm that I am. How else is it going to replicate all my actions?
If it is duplicating your exact expressions of pain, then you said you are justified in believing this. Which is it?
I think you're getting confused. Again:

A device that completely duplicates all of my behaviors is, for all intents and purposes, me, and is thus running whatever algorithm I'm running.

Your animatronic head, however, does not duplicate all of my behaviors; thus, it is not running whatever algorithm I'm running. This does not automatically mean that the head does not possess consciousness. For example, you, Harvey, also do not emulate all of my behaviors exactly, and yet I think we both agree that you possess consciousness.

You seem to be under the impression that everyone feels pain in exactly the same way, and thus, there exists only one qualia for pain that everyone is experiencing. I have explicitly denied this in the past. I claim that even different human beings feel pain in different ways, not to mention animals or AIs; therefore, it is pointless to base your argument on comparing feelings of pain to one another.

For the sake of clarity, let me repeat my original assertions:

1). When we see someone (a human actor, an alien, a robot head, whatever) acting as though they are in pain, we are justified in assuming that they are, in fact, in pain -- assuming they're acting in a sufficiently believable manner.

1a). This "sufficiently believable" level of acting cannot be achieved by using lookup tables alone. This does not mean that this level of acting cannot be achieved at all, since we have many better tools at our disposal (FSMs being one example).

2). Similarly, when we see someone act as though he understands Chinese, we are justified in believing that he does, in fact, understand Chinese.

3). In general, when we see someone (or something, whatever) act as though he's conscious, we are justified in believing that he is, in fact, conscious.

4). When you deploy your Consciousness Detector (tm) that can verify the presence of pain, understanding of Chinese, or consciousness, without relying on observable behavior, then I will concede points 1..3.
Delegates to the U.N. do so all the time with someone translating into their ear piece.
In this case, they're understanding English, not Chinese.


Post #170

Post by scorpia »

Okay, this thread is already 17 pages long, and I've hardly read that much. But I'll try to add my own thoughts to one of the points made, if it is of any relevance.
Another version of this argument simply states, "computers can't think, they can only do what they're programmed to do". But, if that's true, then we should be able to program the computer to live, grow and learn just as humans do.
But do humans think, or do they just do "What they're programmed to do?"

What makes a robot a robot? The fact that it is made out of metal and has silicon chips instead of a brain? What stops an organic being from being a robot? The fact that we are organic and not metallic? Why argue about whether it is possible to build a sapient machine? IMO, we already are sapient machines. But can a robot, or an organic being, do anything other than "what it is programmed to do"? I was already arguing "Do we have free will?" Why not also ask "Is it possible for a computer to have free will?" Or would that be too far-fetched?
'Belief is never giving up.' - Random footy advertisement.

Sometimes even a wise man is wrong. Sometimes even a fool is right.
