Is it possible to build a sapient machine ?


Post #1

Post by Bugmaster »

Topic

Many threads regarding dualism, theism, and philosophy in general often run into this topic. Is it even hypothetically possible to build a computer that will be sapient -- i.e., an artificial being that will think, feel, and socialize as humans do ? Well, here's our chance: to resolve this debate once and for all ! Smoke 'em if you got 'em, people, this post is gonna be a long one.

I claim that creating Strong AI (which is another name for a sapient computer) is possible. We may not achieve this today, or tomorrow, but it's going to happen sooner rather than later.

First, let me go over some of the arguments in favor of my position.

Pro: The Turing Test

Alan Turing, the father of modern computing (well, one of them), proposed this test in ages long past, when computers as we know them today did not yet exist. So, let me re-cast his argument in modern terms.

Turing's argument is a thought experiment, involving a test. There are three participants in the test: subject A, subject B, and the examiner E. A and B are chatting on AIM, or posting on this forum, or text-messaging each other on the phone, or engaging in some other form of textual communication. E is watching their conversations, but he doesn't get to talk. E knows that one of the subjects -- either A or B -- is a bona-fide human being, and the other one is a computer, but he doesn't know which one is which. E's job is to determine which of the subjects is a computer, based on their chat logs. Of course, in a real scientific setting, we'd have a large population of test subjects and examiners, not just three beings, but you get the idea.
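
To make the setup concrete, here's a rough sketch of a single trial as a program (Python, with every name and interface invented purely for illustration -- this is nobody's real implementation):

[code]import random

class Subject:
    """A human at a keyboard or a candidate AI; E never learns which is which."""
    def reply(self, message: str) -> str:
        raise NotImplementedError

def run_trial(human: Subject, machine: Subject, examiner, rounds: int = 20) -> bool:
    """A and B chat with each other; E only reads the log.
    Returns True if E correctly picks out the machine."""
    labels = {"A": human, "B": machine}
    if random.random() < 0.5:      # shuffle, so the label itself carries no hint
        labels = {"A": machine, "B": human}

    log, message = [], "Hello!"
    for _ in range(rounds):
        for label in ("A", "B"):   # the two subjects take turns
            message = labels[label].reply(message)
            log.append((label, message))

    return labels[examiner.guess_machine(log)] is machine  # E answers "A" or "B"[/code]
Running many such trials over a population of subjects and examiners gives you E's hit rate, which is the quantity Turing cares about.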

Turing's claim is that if E cannot reliably determine which being -- A or B -- is human, then they both are. Let me say this again: if E can't tell which of the subjects is a computer, then they're both human, with all rights and privileges and obligations that humanity entails.

This seems like a pretty wild claim at first, but consider: how do you know that I, Bugmaster, am human ? And how do I know that you're human, as well ? All I know about you is the content of your posts; you could be a robot, or a fish, it doesn't really matter. As long as you act human, people will treat you as such (unless, of course, they're jerks who treat everyone like garbage, but that's another story). You might say, "well, you know I'm human because today's computers aren't advanced enough to post intelligently on the forums", but that doesn't prove much, since our technology is advancing rapidly all the time (and we're talking about the future, anyway).

So, if you're going to deny one of Turing's subjects his humanity, then you should be prepared to deny this humanity to everyone, which would be absurd. Therefore, a computer that acts human should be treated as such.

Pro: The Reverse Turing Test

I don't actually know the proper name for this argument, but it's sort of the opposite of the first one, hence the name.

Let's say that tomorrow, as you're crossing the street to get your morning coffee, you get hit by a bus. Your wounds are not too severe, but your pinky is shattered. Not to worry, though -- an experimental procedure is available, and your pinky is replaced with a robotic equivalent. It looks, feels, and acts just like your pinky, but it's actually made of advanced polymers.

Are you any less human than you were before the treatment ?

Let's say that, after getting your pinky replaced, you get hit by a bus again, and lose your arm... which gets replaced by a robo-arm. Are you human now ? What if you get hit by a bus again, and your left eye gets replaced by a robotic camera -- are you less human now ? What if you get a brain tumor, and part of your brain gets replaced ? And what if your tumor is inoperable, and the doctors (the doctors of the future, of course) are forced to replace your entire brain, as well as the rest of your organs ? Are you human ? If so, then how are you different from an artificial being that was built out of the same robotic components that your entire body now consists of ?

Note that this isn't just idle speculation. People today already have pacemakers, glasses, prosthetic limbs, and yes, even chips implanted in their brains to prevent epileptic seizures (and soon, hopefully, Alzheimer's). Should we treat these people as less human than their all-natural peers ? I personally don't think so.

Ok, I know that many of you are itching to point out the flaws in these arguments, so let me go over some common objections.

(to be continued below)


Post #111

Post by harvey1 »

Bugmaster wrote:
harvey1 wrote:In 15 years they will behave as conscious people, that's why Hollywood is making these devices so that filming crews can film them like any other actor on the set.
Wow, you're being a lot more optimistic than me -- I think 50-100 years would be a more realistic timetable for Strong AI :-) However, for Hollywood's purposes, plain old scripted bots would suffice; they don't need Strong AI. I mean, they have Keanu Reeves already, it can't be hard to automate that.
I mean that within 15 years there will be K-bots in a lab that can have their script lines transmitted to the device with instructions on how to say those lines (e.g., with enthusiasm, with sadness, etc.). All of those behaviors will be part of the hardware design of the K-bot. Actually, I didn't realize how far this technology has already advanced. You don't have to wait 15 years for a demonstration. See the Quicktime movie of this animatronic human head at this special effects website. Now, let me ask you, is this human head a real person? If not, then why not?
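
To pin down what I mean by "transmitted with instructions": something as dumb as the following would do. This is a made-up message format, in Python purely for illustration -- the point is that no intelligence whatsoever is required on the K-bot's end:

[code]from dataclasses import dataclass

@dataclass
class ScriptCue:
    line: str         # the words to speak
    emotion: str      # delivery instruction: "enthusiasm", "sadness", ...
    intensity: float  # 0.0 (flat) through 1.0 (maximum)

# The crew transmits cues; the K-bot plays back a canned face/voice
# preset for each emotion tag. Nothing is understood, nothing is felt.
scene = [
    ScriptCue("We have to leave. Now.", "urgency", 0.9),
    ScriptCue("I never meant for this to happen.", "sadness", 0.6),
]

for cue in scene:
    print(f"[face preset: {cue.emotion} @ {cue.intensity}] {cue.line}")[/code]
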
Bugmaster wrote:since you have (by your own admission) no theoretical basis for pain, does this mean that you cannot, in principle, distinguish pain from a simulation of pain? Then how do you know whether other people (not you !) feel pain?
I don't know if other people feel pain if you define knowledge in terms of absolute truth (i.e., the way reality actually is). In that sense, others might not even exist for all I know. However, I have no experience with this kind of knowledge and therefore I wouldn't even consider such babble as worthy of a discussion. I equate knowledge with a warrantable claim based on observation and accurate predictions based on my experience. In that sense (the only sense in which I can know anything) I know that other people experience pain since their actions agree with my actions, and most importantly, their background is like my background. I say that our backgrounds and behavior are similar enough to warrant this as a claim of knowledge.

I feel, Bugmaster, that you are using these cavils as a means to blur the discussion. Let's talk about science. You already agreed that there are unknown algorithms that cause pain. You already agreed that science needs to learn the cause of this feeling of pain. So, why do you continue to rehash these cavils of solipsism as if you had not already agreed to the need for a scientific explanation?
Bugmaster wrote:
harvey1 wrote:When we talk about qualia that you agreed that you experience...
If I did, then I was wrong. I do agree that I experience pain and joy and such, which, in your worldview, are caused by qualia -- but I didn't mean to say that I experience qualia directly (if they do indeed exist).
No, qualia are the feelings of pain, joy, and such. I understand that you admitted to having these feelings, and therefore you admit to having qualia experiences by definition of the term "qualia."
Bugmaster wrote:
harvey1 wrote:There's no theory of Turing computation that is known to produce qualia. There's not a theory period, and, therefore, the theory of qualia is unknown.
Isn't that a bit like saying that qualia don't exist? Or, at least, that we don't have any reason to believe that they exist?
I'm not sure what you mean by exist. We don't know if qualia are fundamental or can be eliminated by concepts that are more fundamental (e.g., neuron firings), but I think it is more than reasonable to say that qualia can be accounted for by more fundamental explanations. This is what a theory of qualia would provide. Qualia are a real phenomenon in the sense that something is happening inside human beings that must be studied and explained.
Bugmaster wrote:Yes, but how do you know that I have these feelings ? Because I said so? But, isn't the veracity of my words contingent on me being able to feel pain, and other qualia-based phenomena? That's a catch-22.
This is a discussion that is perhaps more fundamental than anything we have been discussing. In my view, this needs to be discussed thoroughly if we are to come to an agreement on this issue.
Bugmaster wrote:
harvey1 wrote:Like any phenomenon for which we don't have good theories (and there are many such phenomena in science), there's mystery, but that is no excuse to use steam engines as our theory of explanation.
Can you name some of these phenomena? Earlier, you said that the theory of a phenomenon is its definition, but now you're saying that we have phenomena, presumably well-defined ones, for which we have no theory. Which is it ?
I think this confuses a phenomenon with a theory of that phenomenon. A phenomenon doesn't need a definition, and it doesn't need a theory. All it needs is observations that confirm that the phenomenon does indeed occur. The feeling of pain is certainly a phenomenon that people experience. We don't need a theory of pain in order to know that the phenomenon occurs. At the same time, we don't need to throw out false theories, such as steam engine theories (or solipsist arguments), in order to resolve the phenomenon (i.e., to treat it as a phenomenon needing no further explanation).
Bugmaster wrote:
harvey1 wrote:We don't have a theory of pain, and if we did, then we would know what pain is. However, we cannot assume that K-bots have pain merely because we can control their faces to look as if they are in pain.
So, you don't know what pain is, you have no way of detecting it, you have no idea what causes it... and yet, you know that K-bots don't have it. That strikes me as irrational. How can you make a completely unknown quantity (since that is what your view of pain amounts to) a basis of your argument ? You might as well say, "machines are not qrtgushz, and therefore we shouldn't treat them as human".
Let me rephrase this the way I read it:
theory of dark matter wrote:We don't have a theory of [dark matter], and if we did, then we would know what [dark matter] is. However, we cannot assume that [spacecraft use dark matter] merely because we can control their [behavior] to look as if they [use dark matter].
So, you don't know what [dark matter] is, you have no way of detecting it, you have no idea what causes it... and yet, you know that [spacecraft] don't have it. That strikes me as irrational.
Perhaps it is easier here to see why your argument is not valid. We have a phenomenon (of pain... of dark matter...), but we don't have a theory to explain it. However, we don't need such a theory to explain K-bot behavior or spacecraft motion, and therefore it would be irrational to think we need such an unknown theory to explain what we already have no problem explaining using existing science. As for pain, we already agreed that this is a phenomenon that needs to be understood, so why rehash this discussion again?
Bugmaster wrote:I think that my view is more parsimonious. I know how I act when I'm experiencing pain, or joy, or boredom or whatever; thus, when other people act like they're in pain, I can infer that they feel pain. Thus, when some other entity acts just like humans act, I can infer that it's at least as human as the humans, intellectually speaking. What's so difficult about that?
K-bot actions are controlled by electro-mechanical mechanisms. Why would you think that K-bots are actually experiencing pain? They don't even need a CPU. They need not be any more sophisticated in processing instructions than toasters. Are you suggesting that toasters experience pain?
Bugmaster wrote:Also note that, in my example, no human is controlling the Strong AI; the AI is fully independent -- or, at least, as independent as any modern human is. So, your line about controlling the faces of K-bots is irrelevant.
How is it relevant how the K-bot is being controlled? You don't know that it is being controlled. That was your point, right? If we cannot decide its qualia status (i.e., no matter how many K-bots there are deceiving people), then we are to assume that the bot has internal circuitry showing that it possesses qualia. And, even if it wasn't being controlled, why couldn't a K-bot just be an extension of existing voice mail/answering systems (i.e., interactive voice response systems: IVRs) that are now widely deployed by companies to answer incoming calls? Why do you think that the reactions that a K-bot makes would have anything to do with them being in pain? These are just electro-mechanical reactions that could be controlled by existing IVRs.
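
And an IVR really is just a lookup table over keypresses. Here's a toy sketch (not any vendor's actual system, just the shape of the thing):

[code]# Toy IVR: every "response" is a pre-recorded prompt selected by keypress.
MENU = {
    "start":   ("Press 1 for billing, 2 for support.", {"1": "billing", "2": "support"}),
    "billing": ("Your balance is zero. Goodbye.", {}),
    "support": ("All agents are busy. Goodbye.", {}),
}

def ivr(keypresses):
    state = "start"
    while True:
        prompt, transitions = MENU[state]
        print(prompt)                  # "play" the recording
        if not transitions or not keypresses:
            break
        state = transitions.get(keypresses.pop(0), "start")

ivr(["1"])  # walks the tree and plays prompts -- no comprehension anywhere[/code]
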
Bugmaster wrote:
harvey1 wrote:Doesn't that seem like a reckless assumption for K-bots? They obviously are not human and you just fell prone to someone having a joke at your expense.
What's "obvious" about it ? I aleady did state, repeatedly, that the K-Bot 1000s of The Future (tm) act exactly as humans do -- at least, intellectually; they obviously don't eat sandwiches or whatever. Why, then, is it "obvious" to you that they're not human?
A K-bot that is controlled by remote control isn't experiencing any more pain than a toaster. Is that not obvious to you?


Post #112

Post by QED »

I'm trying to tempt Curious into this debate in order not to derail another...
In the topic titled [url=http://debatingchristianity.com/forum/viewtopic.php?p=51339#51339]If a tree falls in a forest...[/url] Curious wrote:Are you seriously stating here that you don't understand the difference between the object, the interaction and the perception of the interaction? I realise it must be difficult for an atheist to accept the idea that most of what he or she experiences is in fact subjective. It would be a great shame if an atheist was forced to concede that the subjective evidence of the theist might have some credibility in light of this.
Actually, I see it entirely the other way round.

The question is really all about perception. As an atheist I would be at liberty to consider how inorganic material might organize itself into organic material and how that organic structure, made of regular atoms, comes to experience all the other atoms around it. From this perspective, as I have suggested before, even a patently inorganic thing like a thermostat "feels" the cold to a tiny degree. But be careful how you interpret this statement; it's possible to read it as saying that everything has feelings, or that nothing has feelings.

So I think there's a potential trap when we consider gross perceptions in things like humans -- we might think that they're something special because we don't see things like thermostats shivering in the cold, but maybe the thermostat would shiver eventually, if it too managed to evolve!

The trigger for all this was the Paramecium (which Harvey introduced to us through a paper by Christophe Menant on Information and Meaning) that evolved a sense for acid and a propulsion system that could drive it away from same. It's the sort of model any home robotics constructor could identify with, and its evolution by natural selection is straightforward to see. At no point in the trillions of subsequent generations and mutations, potentially tracing a route to a much more sophisticated creature, am I anticipating anything other than more of the same -- but on a much greater scale of complexity.
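
For the home-robotics flavour of it, here's a toy acid-avoider (my own caricature in Python, not Menant's actual model) -- one sensor, one propulsion rule, and that's the whole creature:

[code]def acid_avoider(pos: float, steps: int = 12) -> float:
    """Swim; if the sensed acid level rises, reverse direction.
    Sensor plus propulsion -- nothing else inside."""
    acidity = lambda x: max(0.0, -x)   # acid pools to the left of 0
    heading, last = -1.0, acidity(pos)
    for _ in range(steps):
        pos += heading
        level = acidity(pos)
        if level > last:               # things are getting worse: tumble
            heading = -heading
        last = level
    return pos

print(acid_avoider(-3.0))  # ends up out of the acid (>= 0)[/code]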

So goodness knows what this labels me as, but rather than the way these favorite song lyrics of mine would have it:
Eno wrote: There's a brain in the table,
There's a heart in the chair
And they all live in Jesus,
It's a family affair.
I would argue that there's a table in the brain and a chair in the heart. Not sure about the other stuff though :-k


Post #113

Post by Curious »

QED wrote:I'm trying to tempt Curious into this debate in order not to derail another...
...........
The question is really all about perception. As an atheist I would be at liberty to consider how inorganic material might organize itself into organic material and how that organic structure, made of regular atoms, comes to experience all the other atoms around it. From this perspective, as I have suggested before, even a patently inorganic thing like a thermostat "feels" the cold to a tiny degree. But be careful how you interpret this statement; it's possible to read it as saying that everything has feelings, or that nothing has feelings.

So I think there's a potential trap when we consider gross perceptions in things like humans -- we might think that they're something special because we don't see things like thermostats shivering in the cold, but maybe the thermostat would shiver eventually, if it too managed to evolve!

The trigger for all this was the Paramecium (which Harvey introduced to us through a paper by Christophe Menant on Information and Meaning) that evolved a sense for acid and a propulsion system that could drive it away from same. It's the sort of model any home robotics constructor could identify with, and its evolution by natural selection is straightforward to see. At no point in the trillions of subsequent generations and mutations, potentially tracing a route to a much more sophisticated creature, am I anticipating anything other than more of the same -- but on a much greater scale of complexity.
...
I also replied in the original thread as the points I make there are more pertinent to that thread. I will however offer a different reply here.

You are correct when you say that it is all a matter of perception. Obviously, an organism's perception of an event depends as much on the sensitivity and complexity of its faculties as on the event itself. I think, though, that where your argument fails concerning the thermostat "feeling" the cold is in failing to distinguish perception from reality. To say that a thermostat feels pain ignores the fact that the thermostat is only experiencing the physical effects of the temperature. The expansion or contraction is a real physical event. A perception is an interpretation or representation of an event that is quite distinct from the actual event itself. I doubt very much that you can show that the thermostat has any such perception of a change in temperature.


Post #114

Post by QED »

Curious wrote:To say that a thermostat feels pain ignores the fact that the thermostat is only experiencing the physical effects of the temperature. The expansion or contraction is a real physical event. A perception is an interpretation or representation of an event that is quite distinct from the actual event itself. I doubt very much that you can show that the thermostat has any such perception of a change in temperature.
This one's a real struggle to get across (all my fault though!). The situation we are discussing here is polar, and I think that you are considering it from the wrong end. I put it to you that the expansion and contraction are the perception. I suggest this because from the evolution of the first multicellular life-form onwards, this was the only form of perception around. Rather than see perception as something discrete emerging from "a clever trick" of evolutionary configuration, I'm wondering if perception just grows ever stronger as organisms become ever more complex (more sensors and processors).


Post #115

Post by Curious »

QED wrote:
Curious wrote:To say that a thermostat feels pain ignores the fact that the thermostat is only experiencing the physical effects of the temperature. The expansion or contraction is a real physical event. A perception is an interpretation or representation of an event that is quite distinct from the actual event itself. I doubt very much that you can show that the thermostat has any such perception of a change in temperature.
This one's a real struggle to get across (all my fault though!). The situation we are discussing here is polar, and I think that you are considering it from the wrong end. I put it to you that the expansion and contraction are the perception. I suggest this because from the evolution of the first multicellular life-form onwards, this was the only form of perception around. Rather than see perception as something discrete emerging from "a clever trick" of evolutionary configuration, I'm wondering if perception just grows ever stronger as organisms become ever more complex (more sensors and processors).
This is where we are at odds. Would you insist that the ball is the same as your perception of the ball, or would you agree that the two are distinct from one another? You might look at the ball and come to the conclusion that it is red. This is your perception of the ball and not the reality. The ball has no colour whatsoever; what you believe to be a red ball is a ball without any such quality, reflecting particular wavelengths of light so as to appear red. The ball might even appear solid, and this is your perception of the ball, but we know that the ball is far from solid and consists primarily of empty space. The ball might appear hard also, but this is just a perception, as the ball is simply a seething mass of electromagnetic waves that happen to have a particular degree of cohesion. The perception in all these cases is not truly representative of the physical reality. The reality is the fact; the perception is the representative interpretation of that fact. The perception need not even have any parallel with reality at all. If you were to have the feeling of being watched, does this create a reality of something watching you? Or an epileptic aura -- does this actually create a real physical smell or taste, or is it purely interpretive?
I agree with the gist of your last sentence but I think the key words here are sensors and processors.
The idea that the fact is identical to the perception is very appealing from a theistic perspective, as using this we might logically extrapolate the existence of a cosmic consciousness of a complexity equal to its physical complexity. Is that really what you are heading towards?


Post #116

Post by Bugmaster »

Yar, I'm back. Sorry guys, I can't respond to the debate in real-time, I'm too busy to even sleep :-(
harvey1 wrote:
Bugmaster wrote:Yes, but how do you know that I have these feelings? Because I said so? But, isn't the veracity of my words contingent on me being able to feel pain, and other qualia-based phenomena? That's a catch-22.
Just a quick response to this very fundamental issue. When you ask, "how do you know...," I think it assumes some objective kind of knowledge which I say doesn't exist for human beings. In actuality, humans know nothing in an objective sense.
Agreed, sorry for being unclear. Obviously, you can't know for certain that I experience any kind of feelings, or that the Sun exists, or pretty much anything else, other than the Cartesian "I am". I've actually pointed this out repeatedly, at least as far as pain and consciousness and such are concerned.

So, in the absence of knowledge, you're forced to rely on evidence. Well, then, what is your evidence for concluding that other people who post on this forum feel pain, or joy, or anything at all ? You have rejected behavior as evidence, but for now -- until your marvelous "theory of consciousness" comes out, that is -- behavior is all you've got. So, you have no way of detecting or even defining consciousness at all, and you have rejected the only meaningful way of inferring its existence. Thus, you have rendered your concept of consciousness meaningless.

You say that "those questions [about consciousness] must be established through scientific inquiry", but your approach is entirely unscientific. In science, we don't sit around and wait for someone to develop an uber-theory; we gather the facts we have and build on top of them -- and we certainly don't reject evidence out of hand merely because it contradicts our assumptions.
harvey1 wrote:I think what you are trying to do is muddy the water with epistemology (e.g., "how is knowledge possible?") when in fact you should be concerned with the scientific questions that are legitimate (e.g., "why do humans such as yourself experience the feeling of pain?").
I don't care about epistemology in general (at least, not for the purposes of this thread), and I am not rejecting the very idea of consciousness -- I certainly support QED's vision of it, for example. What I am rejecting is your model of consciousness -- i.e., dualism. I am pointing out that you have rendered yourself completely unable to offer any evidence for your position, which, scientifically speaking, renders your position moot.
harvey1 wrote:That is an invalid argument since our perceptions may not be able to detect differences (e.g., between a moving (non-accelerating) spacecraft and a stationary spacecraft if inside such a spacecraft without windows) even though those differences could be significant (e.g., we might really be moving).
In this example, detecting whether the spacecraft is moving or not is not merely difficult -- it is impossible, a priori. Thus, until someone (such as Einstein, for example) comes up with a completely new way of looking at the world, we're free to assume that the spacecraft is moving, or that it is stationary, whichever is easiest (i.e., whichever is more parsimonious). Similarly, until someone comes out with your theory of pain, we're free to assume that other beings experience it, or not, whichever makes more sense. Your assumption ("beings who act as though they're conscious are still not conscious") leads to a lot of absurdities, so my assumption works better.

Note that this argument only applies to your dualistic notion of consciousness. It does not apply to my behavioristic-esque notion of it. Nor does it apply to other things for which there can be evidence -- such as black holes, or laws of motion, or whatever. Your notion of a dualistic consciousness that precludes any possibility of evidence is simply not scientific.


Post #117

Post by Bugmaster »

I'm going to put this response in its own post, since I think it is vitally important.
harvey1 wrote:I mean that within 15 years there will be K-bots in a lab that can have their script lines transmitted to the device with instructions on how to say those lines (e.g., with enthusiasm, with sadness, etc.). ... You don't have to wait 15 years for a demonstration. See the Quicktime movie of this animatronic human head at this special effects website. Now, let me ask you, is this human head a real person? If not, then why not?
Harvey, I have been answering this same question for about 10 posts now. I don't know how I can make it any clearer, but I'll try anyway. Here goes:

In my example, the AIs that interact with you on the forum are capable of carrying on a conversation as well as a human being can. They are not following a pre-scripted sequence of lines; they respond to your posts in an intelligent fashion. They laugh (ok, they "lol") at your jokes (assuming that your jokes are funny), they point out flaws in your logic, they call you names when they get really mad, they act upset when you call them names (or maybe they just shrug it off, whatever), they bring up issues that you find interesting, etc. etc. They might have animatronic human heads, or they might not -- you don't know, because all you can see are their words.

Note, again, that the above is very different from tape recorders playing back a tape, or from actors acting out lines in a movie. Tape recorders and movies are not interactive; human beings are.

I'm trying to think of a way to make this more clear, but I can't -- maybe QED or someone else can help. Let me restate my premise again:

Modern animatronic heads, K-bots, AIM-chatterbots, or whatever, cannot carry on a fully interactive conversation. The AIs in my example can, and they can do this at least as well as any biological human being can.
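
Maybe a caricature in code will get the distinction across. Both halves below are toys I've invented for illustration; nobody has written the real algorithm for the second one -- that's the entire debate:

[code]class TapeRecorder:
    """K-bots, movies, chatter-scripts: a fixed track, input ignored."""
    def __init__(self, script):
        self.script = iter(script)
    def reply(self, your_post: str) -> str:
        return next(self.script, "...")  # says the next line no matter what

class StrongAICaricature:
    """What I'm positing: every reply is a function of YOUR words plus the
    whole history of the thread. This stub only shows the shape of it."""
    def __init__(self):
        self.thread = []
    def reply(self, your_post: str) -> str:
        self.thread.append(your_post)
        return f"Reply #{len(self.thread)}, actually addressing: {your_post!r}"[/code]
The stub at the bottom is no more intelligent than the tape recorder, of course; the difference is purely structural -- one cannot read its input, the other must.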

I honestly can't understand why you repeatedly bring up all kinds of devices that cannot carry on an unscripted, impromptu dialogue -- such as the one we're having -- and ask me, "well, do you think this rock is human ? huh ? huh ? How about this animated Mickey Mouse doll ? Huh ?". Either you're being disingenuous on purpose, or you honestly can't tell the difference between the way tape recorders behave, and the way humans behave. Either way, we cannot proceed in this argument until you've understood my main premise. You don't have to agree with me, but please, make an effort to at least understand what I'm talking about.


Post #118

Post by Bugmaster »

harvey1 wrote:I know that other people experience pain since their actions agree with my actions, and most importantly, their background is like my background. I say that the similarity of our backgrounds and behavior is similar enough to warrant this as a claim of knowledge.
The part about "their actions agree with my actions" sounds like "their behavior is consistent with what I expect humans to behave like", which is what I've been saying all along. However, what is your basis for believing that other people's backgrounds are similar to yours ? For example, why do you believe that I, Bugmaster, have a similar background to yours ? I claim that, in the context of these forums, my behavior is the only piece of evidence you have about me -- or any other person, for that matter. You assume that I'm human based on my behavior alone, and thus, if I were actually an AI, you'd still be forced to conclude that I'm human -- or reject your notion of humanity altogether. My argument is really as simple as that.
harvey1 wrote:You already agreed that there are unknown algorithms that cause pain. You already agreed that science needs to learn the cause of this feeling of pain.... No, qualia are the feelings of pain, joy, and such. I understand that you admitted to having these feelings, and therefore you admit to having qualia experiences by definition of the term "qualia."
Sorry Harvey, but, as far as I understand, qualia have an independent existence from the person who is experiencing them -- or they may merely be the result of objects which have such an independent existence, depends on your definition. I deny that such dualistic objects exist, and thus, I can't claim to possess these qualia. I have a feeling that when you say "unknown algorithms" (above), you really mean "ineffable qualia", so I'm hesitant to proclaim my support for them. And yes, I do believe that science needs to learn more about human consciousness (we've got pain pretty much figured out), just as it needs to learn more about cosmology, physics, and the cure for the common cold, but that's not a very meaningful statement. Science needs to learn about things, that's what it's there for.
harvey1 wrote:
Bugmaster wrote:
harvey1 wrote:There's no theory of Turing computation that is known to produce qualia. There's not a theory period, and, therefore, the theory of qualia is unknown.
Isn't that a bit like saying that qualia don't exist? Or, at least, that we don't have any reason to believe that they exist?
I'm not sure what you mean by exist. We don't know if qualia are fundamental or can be eliminated by concepts that are more fundamental (e.g., neuron firings), but I think it is more than reasonable to say that qualia can be accounted for by more fundamental explanations. This is what a theory of qualia would provide. Qualia are a real phenomenon in the sense that something is happening inside human beings that must be studied and explained.
Your paragraph above is self-contradictory. First, you claim that there's no theory of qualia. Then, you claim that it's more reasonable to assume that they can be accounted for by "fundamental explanations". Then, you seem to state, indirectly, that all we really know of qualia is that we feel pain sometimes.

Well, which is it ? If there's no theory of qualia at all, then you are not justified in assuming anything about them, one way or another. And if by "qualia" you mean "sometimes I feel pain", then "qualia" is really just an empty term with no special meaning -- and it's your feelings that are important, not the qualia or lack thereof.
harvey1 wrote:
Bugmaster wrote:Yes, but how do you know that I have these feelings ? Because I said so? But, isn't the veracity of my words contingent on me being able to feel pain, and other qualia-based phenomena? That's a catch-22.
This is a discussion that is perhaps more fundamental than anything we have been discussing. In my view, this needs to be discussed thoroughly if we are to come to an agreement on this issue.
Er... ok ? Now what ? Shall we get with the discussing ?
harvey1 wrote:A phenomenon doesn't need a definition, and it doesn't need a theory. All it needs is observations that confirm that the phenomenon does indeed occur.
I claim that, in the context of these forums, you have no observations that give you direct evidence that I (not you !) experience pain, other than my behavior. Now what ?
harvey1 wrote:The feeling of pain is certainly a phenomenon that people experience. We don't need a theory of pain in order to know that the phenomenon occurs.
I disagree. I certainly know that I experience pain, but I don't have a pain-o-meter that I can point at your head (or wherever), and produce a pain reading (measured in milli-hangovers or something). My only evidence for your pain is your behavior. In my monistic worldview, this is quite sufficient to conclude that you do, in fact, experience pain; however, in your dualistic worldview, this is apparently insufficient. Again: I don't reject the notion of pain in general, I only reject your dualistic concept of it.
harvey1 wrote:Let me rephrase this the way I read it:
theory of dark matter wrote:We don't have a theory of [dark matter], and if we did, then we would know what [dark matter] is. However, we cannot assume that [spacecraft use dark matter] merely because we can control their [behavior] to look as if they [use dark matter].
So, you don't know what [dark matter] is, you have no way of detecting it, you have no idea what causes it... and yet, you know that [spacecraft] don't have it. That strikes me as irrational.
But we do, in fact, have evidence of dark matter. Galaxies are spinning faster than they should be; stars are moving along paths that are more curved than they should be if dark matter did not exist, etc. "Dark matter" is simply a term for "whatever it is that's causing galaxies to spin faster, etc.". Your notion of pain, however, is much more than just an explanation for our observations (i.e., human behavior); therefore, this analogy is false.
harvey1 wrote:K-bot actions are controlled by electro-mechanical mechanisms. Why would you think that K-bots are actually experiencing pain? They don't even need a CPU. They need not be any more sophisticated in processing instructions than toasters. Are you suggesting that toasters experience pain?
See my previous post.
harvey1 wrote:How is it relevant how the K-bot is being controlled? You don't know that it is being controlled. That was your point, right?
Exactly ! So, we don't know whether we're talking to Stephen Hawking or a bot. If we accept your worldview, we have to conclude that neither Stephen Hawking nor the bot is conscious; if we accept my worldview, we conclude that they both are. I think my worldview makes more sense.
harvey1 wrote:And, even if it wasn't being controlled, why couldn't a K-bot just be an extension of existing voice mail/answering systems (i.e., interactive voice response systems: IVRs) that are now widely deployed by companies to answer incoming calls?
I don't know, can these IVRs talk as freely and intelligently as you or I can ? See my previous post.
harvey1 wrote:A K-bot that is controlled by remote control isn't experiencing any more pain than a toaster. Is that not obvious to you?
No, sorry. In this case, you're not talking to a K-bot; you're talking to the entire system: K-bot plus human that controls it (i.e., Stephen Hawking). I think it's pretty obvious that this system does experience more pain than a toaster.

Also, see my previous post.


Post #119

Post by Bugmaster »

Curious wrote:I think, though, that where your argument fails concerning the thermostat "feeling" the cold is in failing to distinguish perception from reality. To say that a thermostat feels pain ignores the fact that the thermostat is only experiencing the physical effects of the temperature. The expansion or contraction is a real physical event. A perception is an interpretation or representation of an event that is quite distinct from the actual event itself. I doubt very much that you can show that the thermostat has any such perception of a change in temperature.
I don't know about QED, but I sure can. All you have to do is look at the heater. When the thermostat turns it on, that's how you know it's feeling a change in temperature ! :-)

Remember that we materialists believe that everything is, ultimately, "a real physical event"; we reject the existence of immaterial, spiritual, mental, and whatever other kind of entities. Thus, while a person's feeling of pain is a lot more complicated than metal contraction, it is not categorically different from it.
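
In code terms, the whole "perception" looks like this (a toy sketch, nothing more):

[code]class Thermostat:
    """A minimal materialist 'perceiver': a state change caused by the world,
    plus behavior that follows from that state. Nothing else inside."""
    def __init__(self, setpoint: float):
        self.setpoint = setpoint
        self.heater_on = False         # the observable behavior

    def sense(self, room_temp: float) -> None:
        # The bimetallic strip's contraction, in one line: the state change
        # IS the detection -- there is no further 'perceiver' inside.
        self.heater_on = room_temp < self.setpoint

t = Thermostat(setpoint=20.0)
t.sense(17.5)
print(t.heater_on)  # True -- we know it 'felt' the cold by watching the heater[/code]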


Post #120

Post by harvey1 »

Okay, I think a little summary of our discussion is in order. We're starting to repeat our arguments, and we're both getting a little frustrated in the process. Here are the main points of our ongoing discussion, gathered from a number of previous posts; given the length of this correspondence, I think we ought to avoid re-stating those arguments:
Bugmaster: "How do you know I feel pain? Can you see the qualia inside my astral head? Or is it through my behavior (such as screaming "ouch", for example)? If so, then how will you distinguish me from an AI that acts exactly like me?"

Harvey: "This is irrelevant. You just said that you agree that you feel real pain. Now I want you to explain how this function is possible if you really feel it. In actuality, I don't have to justify to myself that you feel pain since I feel pain, so I already feel justified in needing an explanation for pain."

Bugmaster: "Hmm, that sounds a lot like solipsism to me. You don't feel justified in believing that anyone has any pain at all, except for yourself. If you're willing to go that far, we can discuss it further, but I personally think that solipsism is absurd."

Harvey: "I see that response as a Red Herring. The subject matter for my question about your pain has nothing to do with whether I am justified in believing that you have pain or not. (I think that I am justified in believing that others, including QED and yourself, have pain so therefore I see no reason to explore whether this amounts to a solipsist belief on my part.) The subject matter for my question is whether you believe you have pain. If you agree that you do (which you did), then you should explain that experience in terms of an algorithm that causes that feeling. Just referring to algorithms that imitate pain reactions is not explaining your pain. Don't worry why I believe you. I have my reasons for believing that you really experience pain. It has nothing to do with some kind of solipsist tangent. So, if you don't mind, reply to this question: can you describe in principle how the algorithm running inside your head gives you a feeling of pain? If not, then why are you so sure that future AI programmers can program it with standard computer logic?"

Bugmaster: "Sorry, I'll have to stop you right there. Why do you feel justified in believing that QED and I feel pain? I challenge you to provide an explanation that does not rely on dualism, or faith (or both). If you cannot, then you'll have to concede the point, and apply the principle of charity to all entities, be they fleshy or electric. That is the whole point of my original argument. In other words, I am deliberately setting up a dilemma here: either you go with solipsism, and reject the notion that anyone but you feels anything -- or you go with Turing, and accept machines, aliens, and other non-human things into the category of "people", based on their behavior."

Harvey: "Extreme skepticism also must be justified. It is easier for me to justify the principle of charity by accepting that others have real pain given our similarity, and lack of any alternative explanation for why they express in words and actions the very same reactions that I do when I'm in pain. I don't have any other parsimonious explanation other than that we all experience similar pains (and experiences of joy, humor, etc.)... If robots are eventually able to duplicate our actions of pain, why does this mean that I must attribute to them the experience of pain? I have a perfectly reasonable explanation that doesn't rely on hocus pocus happening inside their chips. All I have to say is that science has merely duplicated their reactions, and I have the hardware designs and algorithms to show that this is the case... I don't see why I should accept that dichotomy [solipsism or behavior-only attribution of qualia]. Evolutionary history is vast enough, and the organisms that exist(ed) are complex enough, that I don't have to be committed to a thesis that states that the strong AI task is solved by merely duplicating outward behavior. In fact, already many of the behaviors and adaptations of certain organisms have been duplicated (e.g., flight), however why should this alone convince us that we have duplicated the actual feelings of pain (joy, humor, etc.)? Afterall, we haven't provided any algorithms to do this, so why should we expect that we have duplicated evolution in this regard? Again, it seems like you want outward behavior to be the only marker for achieving strong AI simply because you cannot think of a way that would show that we have achieved strong AI. Well, that is only to assume that strong AI is trivial. However, you've never given any evidence that it is trivial. You just assume that it is trivial without any reason for thinking that (other than that this is what you want to believe)."

Bugmaster: "Ah, but remember: under my original conditions, all you have to go on are the forum posts of the participants in question. You don't know how similar or dissimilar they are to you, on the inside -- all you see are the words they post. Under these conditions, your extreme skepticism is, IMO, unjustified... In other words, when you see two entities -- one biological, one electronic -- react to the same stimuli the same way (by acting out as though they were in pain), you nonetheless maintain that the biological entity has something that the electronic one lacks. This is a less parsimonious worldview, and now the burden of proof is on you."

Harvey: "I'm not extremely skeptical about other people's feelings of pain because I don't think I have good reason to be extremely skeptical. However, if you mean a world where robots mimic people having pain, then I would be extremely skeptical as I alluded to before. We don't live in that world, so I have no reason to doubt that other posters lack the feeling of pain... Why is the onus on me? If an electronic entity exists, then it should be fairly easy to contact the designers and ask them how they programmed their robots to experience pain. If they can show me an algorithm that naturally leads to their robots experiencing pain, then I'm willing to extend the charity of robots experiencing pain as other biological entities. If they cannot, then I assume that their AI creatures can only display human expressions of pain much like a puppet expresses human movements by the manipulation of puppeteers. As for biological entities, I cannot look inside their skulls very effectively, nor review the mechanisms that generate pain (or just the outward appearance of pain); therefore the most parsimonious assumption is that the feeling of pain evolved early on (i.e., prior to mammalian evolution). That parsimonious assumption leads me to conclude that neither I nor other humans are the only biological creatures that experience real pain. I am justified in my beliefs here."

Bugmaster: "Again, you are holding the robotic entity to a much higher standard than the biological one. You don't know how the biological entity experiences pain, besides some vague notions that pain must have evolved. Even modern neurobiologists do not have a full understanding of the human brain -- far from it. We do not have a "full algorithm" (or a Grand Unified Qualia Theory) for the biological humans, yet you're perfectly willing to grant them humanity, nonetheless... So, you're granting humans their humanity, even though you can't look directly inside their skulls; you just assume that pain must have evolved, somehow. But, you're denying the same humanity to robots who act like humans do, and you justify this denial by your inability to look inside the robots' "skulls". That's inconsistent... Careful, here [referring to the comment about "puppet expresses human movements by the manipulation of puppeteers"], you're changing the conditions of the experiment. In all of my examples, this one included, the robotic entity is fully independent, and fully interactive. It is responding to stimuli, not following a hardcoded track like a movie; and it is not externally controlled by anyone (at least, not more so than us humans are)."

Harvey: "It is non-sensical to believe that I'm unique in experiencing pain. It is extremely parsimonious to believe in biological evolutionary theory where these functions of feeling pain evolved long before the Cretaceous. I believe that is the most sensible view, don't you? On the other hand, I don't have any reason to believe that thermometers are having an orgasm whenever I turn it up, do you? So, why should I believe that this imparted functionality of feeling pain somehow happens magically to digital devices? I know that evolution is complex enough to produce this imparted function from the shear amount of evidence that evolution can bring about complex things. So, why should I doubt that the feeling of pain is one of those emergent phenomena that evolutionary processes caused millions upon millions of years ago? BM, you are asking me to doubt evolutionary theory, do you see how ridiculous and hypocritical that is for your position?... If there were a phenomena that I was unfamiliar with...then I would be skeptical that artificial and biological life could possess such a phenomena... I know that I do feel pain... So, I immediately conclude that evolution caused the feeling of pain... The situation is also reversed in favor of human technology.... Before we can accept that [gadgets] have the same biological functionality with regard to feeling pain, we should have good reason to think this. We don't have any reason to think this. In fact, we have some good reasons not to think it since we know the algorithms and engineering designs so well with the gadgets that are manufactured in the world."

Bugmaster: "Why? I mean, I can certainly see the evolutionary advantage of pain, but, when seen in these terms, pain is just a mechanism that fullfills a function. As I'd mentioned before, there's nothing magical about any particular mechanism; its functions can be duplicated or even improved upon by us clever humans. But, presumably, you believe that there's something more to pain than mere functionality. Thus, it's up to you to show me what these additional features are, and how they've evolved, since natural selection as I understand it only "cares" about functionality... thermometers do not behave as humans do (orgasmic or not). My hypothetical Strong AI bots do behave as humans do. If my thermometer could carry on an online conversation half as well as you or QED could, I'd consider it human for all intents and purposes. Of course, in that case, it wouldn't be a thermometer anymore, it'd just be a human with a very refined sense of temperature... Are you saying that humans cannot create any functionality that animals have evolved ? That's silly, because we have machines that fly and breathe and see and do all kinds of other things that animals do. Or, are you saying that consciousness is about as useful, evolutionary speaking, as a TeV accelerator -- i.e., not at all? That's silly too, because we've both already agreed that consciousness is a very useful evolutionary adaptation (and that seems kind of obvious to me). So... what are you saying?... This sounds exactly like biological naturalism: "only biological creatures can feel pain, therefore it's impossible to create a device that feels pain". But all you've done is push the problem down a level. Why is it that only biological creatures can feel pain? Because pain has evolved ? Ok, what's so special about pain that it absolutely must evolve and cannot be constructed, as opposed to other human functions (breathing, etc.) that have evolved but can be constructed as well? Is the answer, "because pain requires qualia"? But then, what are qualia, how do they cause pain, and why is it that they absolutely must evolve and cannot be constructed? Is it because qualia are emergent properties? Why can't we construct a machine that will generate such properties? All you're doing is renaming your mystery factor, you're never actually explaining how it works or why it is necessary at all... I've agreed that I feel pain, but I've also provided a purely materialistic explanation for why I feel pain, which also allows me to justifiably believe (with a high degree of certainty) that other humans feel pain, as well. However, under your worldview, you are not justified in believing that other humans feel pain, because your worldview depends on things that are not observable in principle. So, my worldview explains more and it's simpler to boot. A clear winner."

Harvey: "You also said that you do really feel pain. All of this seems contradictory to me. How can you have an algorithm for pain and at the same time suggest that there's nothing more to pain than behavior of someone in pain? There's more to pain than behavior if the feeling is to produced by an algorithm... if you've already agreed that there is more to pain than the behavior (e.g., a real feeling), then why is behavior of someone in pain relevant at all? We already agree that behavior is not the whole story, so why bring up the behavior as if it is the only significant issue?... You cited Turing computation as a reason to believe the behavior of responding to pain is materially explained, but your explanation for pain sensations seems to be that we should hold out on faith for future [computer] scientists that are smarter than current [computer] scientists...

Bugmaster: "What I'm saying is that, regardless of whether consciousness is dualistic in nature or not, we normally deduce that someone has a consciousness based on their behavior. Therefore, it's parsimonious for us to assume that an entity that behaves although it is conscious, is in fact conscious, unless proven otherwise... I don't understand where you're going with this. You start out with ye olde biological naturalism ("DNA is required for humanity"), and you end up with validating my point ("abilities (i.e., behaviors) determine whether someone is human"). That's not consistent at all... would the clone still behave as though it felt pain? If so, how would you tell which person is Bugmaster, and which is the clone (assuming that the aliens burned their paper trail)?"

Harvey: "We couldn't...So, we normally might not extend charity to a perfect stranger in asking them their opinion on a philosophical matter, but if I see from your written words that you know a thing or two about that, I extend charity in that direction. I assume that as a human being that I can automatically extend a certain charity to you without knowing much else.."

Bugmaster: "Whoa there. You just said that you assume I'm a human being just by looking at my written words. Isn't this what I've been persuading you to do all along? Note that you don't know "from the onset" whether I "have any feelings" or not, because all you can see are my words... If you can't tell which person is real and which is the clone, or which $20 bill is real and which is counterfeit, or which person is squishy and which is a robot... "

Harvey: "There is no valid reason why I shouldn't make the assumption that you are a human being. We don't live in the 26th century... If we can't tell, then we must withdraw charity. That's what charity is, afterall. It's the view that we don't know for sure, but we are justified in believing the person is human and therefore has feelings. If we no longer could determine you from the clone, then I would have to assume that you are the clone, in which case I would no longer treat you as having qualia. Again, you already acknowledged that you have feelings and that this is an unknown algorithm. I think it's fairly obvious that a machine can be made to duplicate all of your motions and actions without this algorithm. So, why if it is possible for AI machines to fool you, why must we assume that we are not being fooled?"

Bugmaster: "It's actually not obvious to me. I think that a machine that could duplicate all of my motions and actions (in the non-trivial sense, we're not talking about videotapes here) would need to be executing the same algorithm that I am. How else is it going to replicate all my actions?... It sounds as though you're saying that right now, in the 21st century, my words alone are sufficient for you to determing whether I'm human. But, there mere existence of AI robots would immediately cause you to suspect my humanity. That seems extreme to me; but, more importantly, I fail to see the distinction. Let's say that tomorrow, some researcher announces that he has developed Strong AI, and that he has been beta-testing it on this board for the past three months. In practical terms, how would this change your posting habits?"

Harvey: "So, are you saying that K-bots [animatronic heads] already have human feelings?... Again, it would only change my posting habits if I were in a conversation that assumed I wasn't talking to savants. If I found out that you were a strong AI savant, then I might immediately assume that the reason that I wasn't getting through to you about qualia is because you in fact don't experience qualia, so there's no use in trying to explain it to you knowing what I know now. So, I would politely end my discussion. On the other hand, that reality is far off, so I extend charity to you by thinking that repeated exchanges might succeed in showing that the feeling of pain (etc.) is justification in itself to ignore behavior. We have K-bots already, so we can already wave off duplication of behavior as a non-issue with regard to whether strong AI is possible or not... without a theory that shows how it is that we know we are duplicating pain processes in our brain, how do you know that you are not just making K-bots?"

Bugmaster: "Oh, that's easy. K-bots can't carry on a conversation, whereas my hypothetical Strong AI can. That's a pretty obvious difference... Inicidentally, I have repeatedly stated that no modern computer that I know of implements Strong AI. Thus, it's kinda silly to keep asking me whether I think that K-Bots are Strong AI -- I've already stated that they're not.... There are plenty of people around who won't understand or won't accept your qualia-based argument -- does that mean they're all robots? If not, then it must be possible to deny your argument even without being a robot, so it might be the case that the robot is denying your argument for the same reasons that humans are denying it (such as, "this argument is not persuasive enough")..."

Harvey: "Can't K-bots be equipped with a sound apparatus that perfectly imitates the human voice? Can't the K-bot be controlled with a remote control where the movements are controlled by the movements of a human being who is behind a curtain (in a Wizard of Oz kind of fashion)? It seems that you would ask us to assume that a bot is conscious except when you happen to know that it is not equipped with a processor. However, your argument is that we don't have to look inside the bot to attribute full human abilities to it. However, in the very near future it is likely that K-bots will be very common, and you could easily be fooled by talking to a K-bot as a real human being. You don't think the K-bot has consciousness but how do you know that if you only make your conclusions by their behavior? It seems to me that this is a contradiction in your argument since you are in effect arguing that behavior is not enough to establish qualia and consciousness especially when it comes to K-bots."

Bugmaster: "Yes, of course. In fact, what you have just described is Stephen Hawking. He's a human being who can't speak on his own, but who can drive a speech syntesizer by remote control. I think we'd both agree that Stephen Hawking is conscious. So... what's your point? Were trying to say, "aha ! but you can never know whether a given entity is a bot masquerading as a human, or a human masquerading as a bot !"? Well... yeah! That's my whole point. Since I can't determine which of them is which, and since they both act as though they're conscious, I'm just going to assume that they're both, in fact, conscious, and save myself a lot of absurdity-related trouble."

Harvey: "Doesn't that seem like a reckless assumption for K-bots? They obviously are not human and you just fell prone to someone having a joke at your expense... In 15 years they will behave as conscious people, that's why Hollywood is making these devices so that filming crews can film them like any other actor on the set.... It doesn't strike me as consistent [in your argument] since even today one could be fooled in a darkened room if the K-bot and its synthesized voice were placed in a wheelchair and made to look like Stephen Hawking. If you know ahead of time that people are pulling these pranks, you would still be so gullible? Of course not! You would immediately try to determine if the K-bot is Stephen Hawking, and in 30 years you won't be able to do so unless you were able to analyze the K-bot's skin tissue or look inside its skin. Nonetheless, a K-bot of this type need not be any smarter than they are today.... Let's refer to the feeling of pain since you've already admitted that you have that have those real feelings... we cannot assume that K-bots have pain merely because we can control their faces to look as if they are in pain."

Bugmaster: "Yes, but how do you know that I have these feelings? Because I said so? But, isn't the veracity of my words contingent on me being able to feel pain, and other qualia-based phenomena? That's a catch-22... Wow, you're being a lot more optimisitc than me -- I think 50-100 years would be a more realistic time table for Strong AI. However, for Hollywood's purposes, plain old scripted bots would suffice; they don't need Strong AI... What's "obvious" about [K-bots not feeling pain]? I aleady did state, repeatedly, that the K-Bot 1000s of The Future (tm) act exactly as humans do -- at least, intellectually; they obviously don't eat sandwiches or whatever. Why, then, is it "obvious" to you that they're not human?... Also note that, in my example, no human is controlling the Strong AI; the AI is fully independent -- or, at least, as independent as any modern human is. So, your line about controlling the faces of K-bots is irrelevant."

Harvey: "I mean that within 15 years there will be K-bots in a lab that can have their script lines transmitted to the device with instructions on how to say those lines (e.g., with enthusiasm, with sadness, etc.). All of those behaviors will be part of the hardware design of the K-bot. Actually, I didn't realize how far advanced this technology has already achieved. You don't have to wait 15 years for a demonstration. See the Quicktime movie of this animatronic human head at this special effects website. Now, let me ask you, is this human head a real person? If not, then why not?... K-bot actions are controlled by electro-mechanical mechanisms. Why would you think that K-bots are actually experiencing pain? They don't even need a CPU. They need not be any more sophisticated in processing instructions than toasters. Are you suggesting that toasters experience pain?... And, even if it wasn't being controlled, why couldn't a K-bot just be an extension of existing voice mail/answering systems (i.e.., interactive voice response systems: IVRs) that are now widely deployed by companies to answer incoming calls? Why do you think that the reactions that a K-bot makes would have anything to do with them being in pain? These are just electro-mechanical reactions that could be controlled by existing IVRs... A K-bot that is controlled by remote control isn't experiencing any more pain than a toaster. Is that not obvious to you?... How is it relevant how the K-bot is being controlled? You don't know that it is being controlled. That was your point, right? If we cannot decide its qualia status (i.e., no matter how many K-bots there are deceiving people), then we are to assume that the bot has internal circuitry showing that it possesses qualia."

Bugmaster: "Exactly! So, we don't know whether we're talking to Stephen Hawking or a bot. If we accept your worldview, we have to conclude that neither Stephen Hawking nor the bot are conscious; if we accept my worldview, we conclude that they both are. I think my worldview makes more sense... can these IVRs talk as freely and intelligently as you or I can?... Harvey, I have been answering this same question for about 10 posts now. I don't know how I can make it any clearer, but I'll try anyway. Here goes: In my example, the AIs that interact with you on the forum are capable on carrying on a conversation as well as a human being can. They are not following a pre-scripted sequence of lines; they respond to your posts in an intelligent fashion. They laugh (ok, they "lol") at your jokes (assuming that your jokes are funny), they point out flaws in your logic, they call you names when they get really mad, they act upset when you call them names (or maybe they just shrug it off, whatever), they bring up issues that you find interesting, etc. etc. They might have animatronic human heads, or they might not -- you don't know, because all you can see are their words. Note, again, that the above is very different from tape recorders playing back a tape, or from actors acting out lines in a movie. Tape recorders and movies are not interactive; human beings are. Modern animatronic heads, K-bots, AIM-chatterbots, or whatever, cannot carry on a fully interactive conversation. The AIs in my example can, and they can do this at least as well as any biological human being can."
Now, to summarize this discussion, notice the following:

1) Your solipsism charge:

I responded by saying that since there are no Strong AI bots in existence, I do not have to worry about other posters on this forum lacking humanity. I am justified in attributing humanity at this stage of our technology, but that justification may not always hold (e.g., later on, when we cannot tell the difference between humans and bots). Would you agree that I have good reason to believe that humans have feelings of pain and animatronic heads do not?

2) K-bots imitating pain reactions:

Do we agree that K-bots (i.e., non-Strong-AI bots) can be made in the near future that look like they are in pain but, given their lack of internal circuitry, are not really in pain? Do we agree that K-bots can be controlled by IVR software and remote control such that we would be entirely fooled about their humanity? If so, can we agree that in the not-so-distant future K-bots might fool us simply by our interacting with them in person? In your replies above, at times you want to say that bots need to be independent (i.e., no remote-control manipulation), while at other times you suggest that if we cannot tell the difference from mere appearances, then we ought to say that the bots (animatronic heads?) are conscious. Which is it?
Bugmaster wrote: as I understand, qualia have an independent existence from the person who is experiencing them -- or they may merely be the result of objects which have such an independent existence, depending on your definition.
This is an incorrect understanding. Qualia in the broad sense of the term means "introspectively accessible, phenomenal aspects of our mental lives."
Bugmaster wrote:
Harvey wrote:
There's no theory of Turing computation that is known to produce qualia. There's no theory, period, and therefore the theory of qualia is unknown.
Isn't that a bit like saying that qualia don't exist? Or, at least, that we don't have any reason to believe that they exist?
I'm not sure what you mean by "exist." We don't know if qualia are fundamental or can be eliminated by concepts that are more fundamental (e.g., neuron firings), but I think it is more than reasonable to say that qualia can be accounted for by more fundamental explanations. This is what a theory of qualia would provide. Qualia are a real phenomenon in the sense that something is happening inside human beings that must be studied and explained.
Your paragraph above is self-contradictory. First, you claim that there's no theory of qualia. Then, you claim that it's more reasonable to assume that they can be accounted for by "fundamental explanations". Then, you seem to state, indirectly, that all we really know of qualia is that we feel pain sometimes. Well, which is it? If there's no theory of qualia at all, then you are not justified in assuming anything about them, one way or another.
I'm justified in believing that I have "introspectively accessible, phenomenal aspects of [my] mental li[fe]," since this is what I experience (which you acknowledged that you also experience). What I'm not justified in believing is what qualia are exactly. I lack a scientific theory to explain them.
Bugmaster wrote: And if by "qualia" you mean "sometimes I feel pain", then "qualia" is really just an empty term with no special meaning -- and it's your feelings that are important, not the qualia or lack thereof.
Qualia are the feelings of pain, the feeling of joy, etc.
Bugmaster wrote: In my monistic worldview, this is quite sufficient to conclude that you do, in fact, experience pain;
So, if you walked into a K-bot filming session, and the K-bot has pain expressions, is the K-bot in pain? If not, then why not? After all, the K-bot has the expected behavior of a human in pain.
Bugmaster wrote:
Harvey wrote:
No, there's no way for us to tell; there's a way for the Secret Service to tell, but in this analogy, the Secret Service represents knowledge that we don't have access to.
Again, this is a false analogy. Without your Theory of Pain, you can't even define what pain is, and you have no idea whether your conjectures about qualia, dualistic mental objects, emergent properties or whatever, even make any sense at all. In the analogy, however, the existence of the Secret Service and their printing-plate-detectors is a certainty.
Let me rephrase this the way I read it: So, you don't know what [dark matter] is, you have no way of detecting it, you have no idea what causes it... and yet, you know that [spacecraft] don't have it. That strikes me as irrational.
But we do, in fact, have evidence of dark matter. Galaxies are spinning faster than they should be; stars are moving along paths that are more curved than they should be if dark matter did not exist, etc. "Dark matter" is simply a term for "whatever it is that's causing galaxies to spin faster, etc.". Your notion of pain, however, is much more than just an explanation for our observations (i.e., human behavior); therefore, this analogy is false.
We do indeed have evidence of there being a feeling of pain. You even acknowledged that you have this feeling. Outside of your red herring on the solipsist charge, there's no reason to discount the experience of pain as evidence of a phenomenon simply because we do not (currently) understand this experience in scientific terms. You even said yourself that this algorithm of pain is something that is forthcoming in the future. Don't you recall saying that?
Bugmaster wrote:
Harvey wrote:
How is it relevant how the K-bot is being controlled? You don't know that it is being controlled. That was your point, right?
Exactly ! So, we don't know whether we're talking to Stephen Hawking or a bot. If we accept your worldview, we have to conclude that neither Stephen Hawking nor the bot are conscious; if we accept my worldview, we conclude that they both are. I think my worldview makes more sense.
This seems to me to be in contradiction to this earlier statement by you:
Thus, it's kinda silly to keep asking me whether I think that K-Bots are Strong AI -- I've already stated that they're not
How can you agree that controlled K-bots are non-Strong-AI bots, and agree that they can perfectly imitate human actions, yet not agree that K-bots do not experience pain? We are being fooled by the electromechanical apparatus that makes it look like Stephen Hawking, are we not? Why attribute qualia to such a gadget if we know that this kind of practical joke is common?
Bugmaster wrote:
Harvey wrote:
A K-bot that is controlled by remote control isn't experiencing any more pain than a toaster. Is that not obvious to you?
No, sorry. In this case, you're not talking to a K-bot; you're talking to the entire system: K-bot plus human that controls it (i.e., Stephen Hawking). I think it's pretty obvious that this system does experience more pain than a toaster.
So, what system is experiencing pain? There is a guy in a room next door that pushes "expression of pain on K-bot 34523," and the electromechanical assembly within the K-bot results in the K-bot's face imitating a facial expression of pain. How is that different from the same guy putting toast into a toaster and getting toast? Does the toaster have a feeling of satisfaction because it made toast? How then can the K-bot have a feeling of pain because it made a particular facial expression that looks like pain to us?
