Is it possible to build a sapient machine ?

Post #1

Post by Bugmaster »

Topic

Many threads regarding dualism, theism, and philosophy in general often run into this topic. Is it even hypothetically possible to build a computer that will be sapient -- i.e., an artificial being that will think, feel, and socialize as humans do ? Well, here's our chance to resolve this debate once and for all ! Smoke 'em if you got 'em, people, this post is gonna be a long one.

I claim that creating Strong AI (which is another name for a sapient computer) is possible. We may not achieve this today, or tomorrow, but it's going to happen sooner rather than later.

First, let me go over some of the arguments in favor of my position.

Pro: The Turing Test

Alan Turing, the father of modern computing (well, one of them), proposed this test in ages long past, when computers as we know them today did not yet exist. So, let me re-cast his argument in modern terms.

Turing's argument is a thought experiment, involving a test. There are three participants in the test: subject A, subject B, and the examiner E. A and B are chatting on AIM, or posting on this forum, or text-messaging each other on the phone, or engaging in some other form of textual communication. E is watching their conversations, but he doesn't get to talk. E knows that one of the subjects -- either A or B -- is a bona-fide human being, and the other one is a computer, but he doesn't know which one is which. E's job is to determine which of the subjects is a computer, based on their chat logs. Of course, in a real scientific setting, we'd have a large population of test subjects and examiners, not just three beings, but you get the idea.

Turing's claim is that if E cannot reliably determine which being -- A or B -- is human, then they both are. Let me say this again: if E can't tell which of the subjects is a computer, then they're both human, with all rights and privileges and obligations that humanity entails.
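
Just to make the setup concrete, here's a toy sketch of how you'd actually run such a test and score the examiner. (This is illustrative Python only; the chat() and guess_machine() functions are made-up stand-ins for "posting on the forum" and "the examiner making his call", not any real implementation.)

Code:
import random

def run_trial(examiner, human, machine, num_messages=20):
    # Randomly assign the two subjects to the anonymous labels A and B.
    labels = ['A', 'B']
    random.shuffle(labels)
    subjects = {labels[0]: human, labels[1]: machine}

    # Collect a transcript from each subject; chat() stands in for
    # whatever textual medium A and B happen to be using.
    transcripts = {label: [s.chat() for _ in range(num_messages)]
                   for label, s in subjects.items()}

    guess = examiner.guess_machine(transcripts)  # examiner answers 'A' or 'B'
    return subjects[guess] is machine            # True if E guessed right

def machine_passes(examiner, human, machine, trials=1000):
    correct = sum(run_trial(examiner, human, machine) for _ in range(trials))
    # If E does no better than flipping a coin, the machine passes.
    return correct / trials <= 0.55

The whole argument hinges on that last check: E gets nothing to go on except the transcripts, so chance-level accuracy means the machine's behavior carries no detectable mark of its machine-hood.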

This seems like a pretty wild claim at first, but consider: how do you know that I, Bugmaster, am human ? And how do I know that you're human, as well ? All I know about you is the content of your posts; you could be a robot, or a fish, it doesn't really matter. As long as you act human, people will treat you as such (unless, of course, they're jerks who treat everyone like garbage, but that's another story). You might say, "well, you know I'm human because today's computers aren't advanced enough to post intelligently on the forums", but that doesn't prove much, since our technology is advancing rapidly all the time (and we're talking about the future, anyway).

So, if you're going to deny one of Turing's subjects his humanity, then you should be prepared to deny this humanity to everyone, which would be absurd. Therefore, a computer that acts human should be treated as such.

Pro: The Reverse Turing Test

I don't actually know the proper name for this argument, but it's sort of the opposite of the first one, hence the name.

Let's say that tomorrow, as you're crossing the street to get your morning coffee, you get hit by a bus. Your wounds are not too severe, but your pinky is shattered. Not to worry, though -- an experimental procedure is available, and your pinky is replaced with a robotic equivalent. It looks, feels, and acts just like your pinky, but it's actually made of advanced polymers.

Are you any less human than you were before the treatment ?

Let's say that, after getting your pinky replaced, you get hit by a bus again, and lose your arm... which gets replaced by a robo-arm. Are you any less human now ? What if you get hit by a bus again, and your left eye gets replaced by a robotic camera -- are you less human now ? What if you get a brain tumor, and part of your brain gets replaced ? And what if your tumor is inoperable, and the doctors (the doctors of the future, of course) are forced to replace your entire brain, as well as the rest of your organs ? Are you human ? If so, then how are you different from an artificial being that was built out of the same robotic components that your entire body now consists of ?

Note that this isn't just idle speculation. People today already have pacemakers, glasses, prosthetic limbs, and yes, even chips implanted in their brains to prevent epileptic seizures (and soon, hopefully, Alzheimer's). Should we treat these people as less human than their all-natural peers ? I personally don't think so.

Ok, I know that many of you are itching to point out the flaws in these arguments, so let me go over some common objections.

(to be continued below)

Post #41

Post by Bugmaster »

harvey1 wrote:Be that as it may, I think that I can safely extend charity to any language speaker. That's not because I cannot be tricked, it's just that I am skeptical that a computer could interact with me... However, a century from now (or 50 years from now) extending charity to someone might be very difficult indeed. Robots and computers might be able to respond to language, and they might even look and sound human.
That sounds like a bit of a contradiction. You're extending the principle of charity to me, and other forum posters, because I act human -- whereas modern computers do not. However, should computers advance to the point where they can behave as humans do, you will stop extending the principle of charity to them -- but, since you don't know who they are, you'll have to assume that everybody on this forum is likely to be a computer.

The forum hasn't changed, the behavior of the posters hasn't changed, you still cannot detect the qualia directly with your senses... so... what's the difference ? How are you justified in denying humanity to the forum posters of the future ?
We'll probably need some kind of tag printed on the side of an android's neck to see if it is human (i.e., we can extend charity), or it is not.
Are you saying that you can envision a situation where you will be unable to distinguish between humans and androids without the tag ? What's the purpose of the tag, then ? I think your next statement is more reasonable:
Now, if strong AI efforts succeed, that is, we feel justified that we understand p-consciousness in algorithmic terms, then humans in the future may extend charity to artificial life once sufficient numbers of AI creatures deserve our unquestioned extension of charity.
All I'm saying is that your neck-tag is not needed; the creature's behavior speaks for itself.

Think about it this way: let's say that these androids become uber-advanced, so that you couldn't tell whether any given being is a human or an android, without looking at the tag or taking them apart (which is a very invasive procedure, and may be fatal for humans). Let's say that I come to you and say, "see that guy over there ? His name's Bob, and he's an android. His tag came unglued. Also, that flying car he's driving is really mine, since androids clearly can't own property. I want my car back." Will you believe me ? What will you do ?
Currently, yes. That's only because behavior is right now a 100% identifier of qualia. However, in the future that may not be the case, and that's a distinction we should keep in mind since qualia is not replaceable with behavior in principle.
Understood -- being dualistic in nature, qualia would not be replaceable with behavior. However, if it is indeed possible to build a qualia-less machine that behaves as a human does, then why do we need qualia to explain human behavior ? This is probably a topic for our other thread, but still.
Bugmaster wrote:You are earnestly responding to my posts. What will you do if, tomorrow, you find out that I'm an AI ?
Steal you and sell you on e-bay. (I imagine I'll have to give a little warning to any religious buyer of that fine AI hardware of yours....)
Caveat emptor, heh. So, essentially, you're saying that a person's guts (mechanical vs. biological) matter more than the way they act. That sounds a bit extreme to me, but maybe that's not what you were saying, and I misunderstood...
...I certainly wouldn't want to feel that I've been befriended by a machine since, after all, I have no more experienced real friendship than had I had a friendship with a power saw.
Again, this is a false analogy, because a power saw does not behave as a human being does -- whereas our hypothetical AI is indistinguishable from a human (at least, online). Friendship is a two-way street; it's impossible to befriend someone who doesn't respond to your actions in a friendly manner. So, I claim that by befriending an AI, you would be implicitly endorsing its humanity.
If I knew that Claire was uploadable (which she is--just in the current cyber sense), then I certainly would have to look to the government to make sure that I'm not chatting with counterfeit uploadings of individuals.
Why does it matter ? Let's say that The Scientists (tm) come up with a way to manufacture salt, which tastes and looks and feels just like the real salt that you dig out of the ground, and behaves like plain old salt in all other ways. Would you eat it ? Yes, I know this is a trick question, but still ;-)
However, if the government just allowed AI imitations to pass themselves off as genuine people...
Ok, let's say that C.D.'s body is dying, and the only way to save her is to upload her brain into a Beowulf cluster (which takes up a building, maybe). The doctors manage to do that just before C.D.'s biological brain expires. The machine-brain exists in a little mini-Matrix (so that C.D. won't get bored sitting in the dark), and answers emails just as the real C.D. would. Is C.D. dead ?

Now, what if the doctors managed to clone C.D. a new body, and download her brain back into it ? Is C.D. alive again ?

I think you see where I'm going with this: the way you draw the line between "fake" humans and real ones is fairly arbitrary.
You need to show how neural firing patterns bring about the actual feeling of horrendous pain versus a red light indicator flashing on your fingernail that means that you should acknowledge a big rock crushing your foot.
There's a difference between the two questions you asked:
1). How does neural firing affect qualia ?
2). How does neural firing produce pain ?
You see these questions as being identical, since you assume that qualia==pain. I, however, do not believe that qualia exist; instead, I believe that neural firing==pain. Thus, I cannot answer #1 (since it's meaningless to me), but a reasonably competent neurobiologist (which I'm not) will be able to answer #2 in great detail (listing all the chemical interactions between hormones and receptors and such).

I should point out, though, that our modern technology is advanced enough to allow us to flip the "pain=off" switch, for certain kinds of pain. We call it "Advil".
It doesn't matter if Deep Blue passed the Turing Test or not. The point is that we have no reason to believe that there's a function operating that wasn't actually programmed for.
That is probably true as applied to Deep Blue, but false as applied to other chess programs, and to my spam filter. My spam filter actually learns on its own, during its "lifetime". That ability to learn was explicitly programmed by somebody, but its actual responses to spam and non-spam were not.
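
For the curious, here's roughly the kind of learning I mean -- a toy Bayesian-style word counter, not my actual filter, and all the names and numbers here are purely illustrative:

Code:
import math
from collections import Counter

class ToySpamFilter:
    """The learning rule is written once; everything the filter 'knows'
    about spam comes from the mail it sees during its lifetime."""

    def __init__(self):
        self.spam_words, self.ham_words = Counter(), Counter()
        self.spam_total, self.ham_total = 0, 0

    def train(self, message, is_spam):
        words = message.lower().split()
        if is_spam:
            self.spam_words.update(words)
            self.spam_total += len(words)
        else:
            self.ham_words.update(words)
            self.ham_total += len(words)

    def looks_like_spam(self, message):
        score = 0.0
        for w in message.lower().split():
            # Smoothed log-odds of this word appearing in spam vs. non-spam.
            p_spam = (self.spam_words[w] + 1) / (self.spam_total + 2)
            p_ham = (self.ham_words[w] + 1) / (self.ham_total + 2)
            score += math.log(p_spam / p_ham)
        return score > 0.0

Nobody typed in a rule that says "cheap pharmaceuticals means spam"; that judgment is entirely a product of the filter's own "experience".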

I would go one step further, however, and claim that nothing prevents us, a priori, from manually creating an AI program that acts human. This is probably not a very practical approach, but I don't think there's a cosmic law that says it can't be done.
Okay, doubt p-consciousness if you like, but regardless if you doubt it, you also happen to experience what it is like to be p-conscious.
Now you're just asserting your position. Remember, I believe that my experiences are purely physical in nature (as we've discussed on the other thread); you'd first have to convince me that dualism is true, before I can agree with your statement.
In my opinion, you are overusing the term dualism unjustly here.
Sorry, that wasn't my intention. My reasoning is that you believe that qualia exist, and that they are not equivalent to the atomic (or quantum or whatever) structures in the brain. This sounds very much like dualism to me, but maybe I'm wrong.
You seem to be suggesting that a complicated program that mimics a human is in fact a human.
Yes, that is indeed what I'm suggesting, because the term "human" holds no special dualistic significance for me. When I meet people on the street (or online), I cannot experience their qualia directly; I can't measure their p-consciousness with an MRI; I can't detect their soul with radar, etc. All I have to go on is their behavior. Thus, a human being, from my point of view, is someone I can have a meaningful conversation with -- regardless of what their guts are made of.
We cannot explain all the functions that are displayed in p-consciousness (e.g., qualia), and therefore p-consciousness is not understood.
Do you think that this p-consciousness cannot be understood, in principle ? Or are you just saying that we don't understand it right now ? That's a big difference.

I should also point out that much of our consciousness is understood today; this is why we are able to devise chemical substances that can alter it (and, of course, it's also why neurosurgery works at all).
I think we have good reason to believe that strong AI is possible in principle, but we cannot say right now if it will ever be achievable.
Er... isn't that a contradiction ? How can something be possible yet not achievable ? That's just weird.
If there are "self" programs that are made that start to show the sophistication of a rabbit...
Huh, if that's all you want, then get yourself a Sony Aibo or some equivalent Chinese knockoff. It's at least as smart as a rabbit... well... maybe a hamster. Hamsters are pretty stupid, you know.
Are you saying that you think there is more to it than mimicing?
Nope, that's it. But it's a lot harder to do than it sounds, otherwise we'd have perfectly working machine translation today, as opposed to AltaVista's babelfish.

Post #42

Post by QED »

Harvey I can't stop you from defining things any way you like. If you want to define the emergence of qualia as something magic then you're perfectly entitled to. You could, for example, define the emergence of water as being magic: then (with my reductionist hat on) when I tell you it's just the compound you get with a particular union of hydrogen and oxygen atoms you're going to say "ah, but your explanation requires magic for it to be complete!".
harvey1 wrote:
QED wrote:qualia is present to a degree in all things. This is only me talking in your language though, so it's no more mysterious than the emergence of compounds with different properties to their constituent elements. Thus I'm saying that although this thing we experience within our own minds might seem to us to be special, mystical (or whatever) this is what matter feels like when it is arranged in this sort of configuration.
Okay, you've compared this primitive level of p-consciousness to simple appliances and computers. Can you show me code that gives an organism some primitive version of qualia? Perhaps if I can see how qualia is programmed as a function then I'll understand what it is that a massive number of subsystems magically enable.
The function you're looking for is a system response. That's all evolution can achieve (without resorting to magical injections of vitalism).
harvey1 wrote: What I don't understand is that if you're going to say that qualia mysteriously arise in AI machines by joining a massive number of subsystems, then why not say that ESP and astral projection also magically emerge by joining a massive number of subsystems together?
Well for one thing there's no such thing as ESP or Astral projection. That's a pretty simple answer for you. I can't envisage any practical subsystem of ESP or astral projection that could be accounted for by evolution. However I do see the potential for the evolution of Information Gathering and Utilizing Subsystems (IGUS).
harvey1 wrote: If you said to me that I hold ESP and astral projection as special phenomena that need a specific mechanism in place to account for their hypothetical existence (and therefore I wouldn't accept your explanation of it just happening from joining a massive number of AI sub-systems), then I wouldn't hesitate to agree. Of course, I think you would expect and applaud me in agreeing with that kind of statement, right? So, why don't you applaud me in agreeing that you need an explanation for any emergent phenomena? Just joining sub-systems that have nothing to do with qualia functions is conjuring up magic. How can I get you to see that?
Notice that you have to say that IGUS subsystems have nothing to do with qualia in order for your statement to sound reasonable, but this is not the case. Qualia looks very much like the product of complex signal processing -- after all we talk about terrain following missiles "seeing" hillsides and avoiding them. So it's not as far removed as you are suggesting. Again if you define it as magic you will always be able to accuse any reductionist explanation of requiring magic to complete the explanation. We are used to seeing phenomena emerge from the joining of subsystems. The whole is often more than the sum of its parts but there is nothing mysterious to this. In general it comes about through properties of our perception. Symphonies are made from individual notes for example but they sound very different. However, small groups of notes can capture some of the essence of a symphony so we can assume that there is something of the symphony in each of its smaller parts.
harvey wrote:I don't define qualia as magic.
I think you do.
harvey wrote: I define pain as the awful feeling of being hurt (for example). I want to know how joining a massive number of sub-systems creates this awful feeling of being hurt.
Well I think we all know why it was evolution that rigged things this way (so we don't keep on dropping things on our foot) and therefore it's all about flags in IGUS registers. Different systems are on the lookout for these flags and a wide range of motor responses (behaviour) will ensue. The net result being that you'll probably try to avoid this sort of situation again. This whole process has been given a name "pain" and to take things any further is to invoke magic.
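To be concrete about the sort of flag-and-response loop I mean, here's a toy sketch (the names, threshold and structure are all illustrative -- and whether any of this amounts to a feeling is, of course, the very thing we're arguing about):

Code:
DAMAGE_THRESHOLD = 0.7

class Igus:
    """A crude information-gathering-and-utilizing system with a pain flag."""

    def __init__(self):
        self.pain_flag = False   # the register in question
        self.aversions = set()   # stimuli to steer away from in future

    def sense(self, stimulus, damage):
        # The damage signal itself sets the flag; no other part of the
        # system gets to reach in and simply reset it to zero.
        if damage > DAMAGE_THRESHOLD:
            self.pain_flag = True
            self.aversions.add(stimulus)

    def act(self, stimulus):
        if self.pain_flag or stimulus in self.aversions:
            self.pain_flag = False   # the flag clears once avoidance runs
            return "withdraw"
        return "approach"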
harvey wrote: I think all that you would end up with is a register (acting as a flag) that has been flagged as being in pain. The flag doesn't tell me why we feel pain. It doesn't tell me why something really hurts. Perhaps Bayer and Tylenol would go out of business if that was the case. I would look at the flag ("you have a headache"), and I would say, "thank goodness it's a flag, I'll just reset that to zero."
But the flag isn't under the control of that particular part of your brain otherwise animals would bypass the essential survival properties of pain and become deselected as a consequence. However Bayer and Tylenol do have access to some of your flags if you cross their palm with sufficient amounts of silver.
harvey wrote: I want to see reasons, not faith. I don't share your faith that qualia comes about by merely adding lines to algorithms. I want to know what the algorithms are, and how they work. If all you are going to do is appeal to more lines of code that is supposed to do the job, then I can't help but think that you are conjuring magic as part of your explanation.
I say that you've put qualia on too high a pedestal. We can see tiny bits of qualia in vision and speech recognition systems. We know how evolution started out with primitive subsystems adding on more and more as time went on. I say faith is thinking that the experience of looking out through a pair of eyes is unexpected and somehow special. Nobody can provide you with an algorithm for feeling if this is how it feels to be the sum of many parts. And that, I think, is why AI research has ground to a halt.

Post #43

Post by harvey1 »

QED wrote:Well I think we all know why it was evolution that rigged things this way (so we don't keep on dropping things on our foot) and therefore it's all about flags in IGUS registers. Different systems are on the lookout for these flags and a wide range of motor responses (behaviour) will ensue. The net result being that you'll probably try to avoid this sort of situation again. This whole process has been given a name "pain" and to take things any further is to invoke magic.
Programming and digital hardware implementations are made to follow a set of instructions. I want to know what the instructions are for the feeling of pain. I understand that you wish to say that we can program a reaction to pain, but you have yet to describe how it is that a feeling of pain (a very real feeling) can be accounted for. Just saying that I think this is magic is not epistemically responsible, QED. I have a legitimate claim by saying that the feeling of pain is real, and that you are attributing this feeling to registers acting as flags. That doesn't show, by a long shot, how that feeling is real.
QED wrote:The function you're looking for is a system response.
Okay, then show me how the system brings about the real feeling of pain.
QED wrote:Well for one thing there's no such thing as ESP or Astral projection. That's a pretty simple answer for you. I can't envisage any practical subsystem of ESP or astral projection that could be accounted for by evolution.
This is beside the point. You have to show why it is that your mystical explanation for pain can't also be used to explain non-existent phenomena. Otherwise, it only casts more doubt on your "explanation" being a reasonable one.
QED wrote:Qualia looks very much like the product of complex signal processing -- after all we talk about terrain following missiles "seeing" hillsides and avoiding them. So it's not as far removed as you are suggesting.
This is standard implementation of instruction sets. QED, it doesn't help the situation if you bring up digital functions that are well understood. What I want to know is how you design a digital function where a "self" feels pain, etc..
QED wrote:We are used to seeing phenomena emerge from the joining of subsystems. The whole is often more than the sum of its parts but there is nothing mysterious to this. In general it comes about through properties of our perception. Symphonies are made from individual notes for example but they sound very different. However, small groups of notes can capture some of the essence of a symphony so we can assume that there is something of the symphony in each of its smaller parts.
I think you are confusing two different uses of holism. There's methodological holism, where the behavior of the system is different than the behavior of the parts but is reducible in principle to the behavior of the parts. This should be the limit of your view with regard to holism since you are an extreme reductionist.

Metaphysical holism is my position, and it is not a position that I believe you can take. Therefore, unlike me, you cannot say that some properties of mind (e.g., qualia) are irreducible in principle to the biology of the brain.

In the case of your methodological holism, you do have to give an account of the function. If you take the argument of metaphysical holists, then your only recourse is to say that qualia arises magically. I don't have to appeal to magic since I have a perfectly natural explanation as to how qualia arise. I can say that qualia is a result of being p-conscious, and p-consciousness is an attractor having those unique properties. I have not appealed to magic to explain a physical experience, whereas I think you have since you are not showing in principle how the parts of the brain can and do cause the actual feeling of pain (along with other qualia).
QED wrote:I say that you've put qualia on too high a pedestal. We can see tiny bits of qualia in vision and speech recognition systems.
Really? Where? Qualia is a feeling experienced by the "self." It is not the mechanical reaction to, it is the feeling of. Therefore, I say that no qualia has ever been shown to arise from an instruction set--even in principle.
QED wrote:Nobody can provide you with an algorithm for feeling if this is how it feels to be the sum of many parts. And that, I think, is why AI research has ground to a halt.
If you're saying that nobody can make use of magic, then I agree.

Post #44

Post by QED »

harvey1 wrote: Programming and digital hardware implementations are made to follow a set of instructions. I want to know what the instructions are for the feeling of pain. I understand that you wish to say that we can program a reaction to pain, but you have yet to describe how it is that a feeling of pain (a very real feeling) can be accounted for. Just saying that I think this is magic is not epistemically responsible, QED. I have a legitimate claim by saying that the feeling of pain is real, and that you are attributing this feeling to registers acting as flags. That doesn't show, by a long shot, how that feeling is real.
I humbly suggest then that it's your concept of "real" which is at fault. I say this because, as I like to point out in my every reply, evolution created the first feeling of "pain". Please take a moment to consider this proposition: Pain must have been a very early development, long before higher levels of consciousness evolved. As soon as any degree of volition became available pain would play a vital role in steering organisms away from self destructive behaviour.
harvey1 wrote:
QED wrote:The function you're looking for is a system response.
Okay, then show me how the system brings about the real feeling of pain.
It's as real as it needs to be. To me your demands reveal a degree of hubris. If you accepted that relays, thermostats etc. "experienced" microscopic amounts of qualia then you'd understand how a sufficient aggregation (in an appropriate configuration) could deliver macroscopic levels of qualia. But you're not satisfied with that, which means you think there's something more to it. Going back to the basics of evolution suggests that there is nothing more. We know that the distinctions between organic and inorganic are meaningless once we view life as products of nano-engineering. Without this realization though it might well be tempting to view relays and thermostats as incapable of possessing microscopic amounts of this property labelled qualia that we are debating here. I think that this is the essential but unwarranted distraction that separates our views.
harvey1 wrote: This is beside the point. You have to show why it is that your mystical explanation for pain can't also be used to explain non-existent phenomena. Otherwise, it only casts more doubt on your "explanation" being a reasonable one.
No I don't, I'm not providing a mystical explanation in my frame of reference. I've set out reasons why I think both organic and inorganic systems all contain microscopic amounts of qualia in the same way as musical notes contain microscopic amounts of symphonies.
harvey1 wrote:
QED wrote:Qualia looks very much like the product of complex signal processing -- after all we talk about terrain following missiles "seeing" hillsides and avoiding them. So it's not as far removed as you are suggesting.
This is standard implementation of instruction sets. QED, it doesn't help the situation if you bring up digital functions that are well understood. What I want to know is how you design a digital function where a "self" feels pain, etc..
From this you confirm your hubris by denying the potential of the missile to be "ever so slightly aware". We're not going to get any further until you can show why this should be the case. Evolution has managed to produce awareness and feeling in animals by aggregating subsystems, ergo I maintain that "feeling" is present, by degree, from the very bottom up.

Post #45

Post by Bugmaster »

I know this is QED's show, but let me just enter this quick comment here:
harvey1 wrote:Programming and digital hardware implementations are made to follow a set of instructions. I want to know what the instructions are for the feeling of pain. I understand that you wish to say that we can program a reaction to pain, but you have yet to describe how it is that a feeling of pain (a very real feeling) can be accounted for.
How do you know that other humans (heck, or even rats, for that matter) have this feeling of pain ? Is that just because they have biological brains, and you have arbitrarily defined biological brains as the only things that can have the feeling of pain ? Or is there more to it ?

Post #46

Post by harvey1 »

Bugmaster wrote:How do you know that other humans (heck, or even rats, for that matter) have this feeling of pain ? Is that just because they have biological brains, and you have arbitrarily defined biological brains as the only things that can have the feeling of pain ? Or is there more to it ?
On the whole, there is more to it. My knowledge of anything is based on certain assumptions that I think I am entitled to believe because the world would not make much sense if I did not have those assumptions--making knowledge itself impossible. So, I feel justified in utilizing the principle of charity by saying that other human beings have it. Similarly, I notice a great many similarities between the way I respond to pain and the way mammals do, so I feel pretty justified in extending charity to them. I think this extension is justified especially in light of the theory of evolution where I see humans as part of the mammalian class. However, if it was shown to me that humans were actually put here by an ETI, whereas the remaining mammalian class originated here, then I would have to reconsider extending this charity to other mammals.

Once we get past mammals, my certainty about extending this charity to other classes becomes much more murky. At this time, I feel that I have no reason not to extend charity to any creature on earth that looks like it reacts to pain, although I cannot be certain of the intensity or the exact feeling of the pain that it experiences. So, I remain tentatively willing to extend charity to creatures that only look like they are in pain as being in pain.

With respect to AI creatures and programs of all sorts, I have no justification in extending charity to these objects since I don't see anywhere in their software or circuitry that indicates that in principle these objects could feel pain. If someone could show me the subroutines or sub-circuits that accomplish this amazing feat, then I might be willing to extend charity if the arguments were convincing.

Post #47

Post by harvey1 »

QED wrote:I humbly suggest then that it's you're concept of "real" which is at fault. I say this because, as like to point out in my every reply, evolution created the first feeling of "pain". Please take a moment to consider this proposition: Pain must have been a very early development long before higher levels of consciousness evolved. As soon as any degree of volition became available pain would play a vital role in steering organisms away from self destructive behaviour.
I certainly agree that the feeling of pain was a very useful evolutionary adaptation. Although, as I just replied to Bugmaster, I cannot be sure at what level of intensity pain is felt by non-mammalian (or even non-human) lifeforms. I'm willing to extend charity to those creatures that look like they experience intense pain (e.g., yelping, skirmishing, etc.), but in more primitive lifeforms I cannot assume that pain means the same thing. Although, I am willing to extend some charity by saying that some kind of feeling is present if such indications of unpleasantness on their part exist after they are subjected to some kind of intrusion to their limbs and body parts.
QED wrote:To me your demands reveal a degree of hubris. If you accepted that relays, thermostats etc. "experienced" microscopic amounts of qualia then you'd understand how a sufficient aggregation (in an appropriate configuration) could deliver macroscopic levels of qualia. But you're not satisfied with that which means you think there's something more to it. Going back to the basics of evolution suggests that there is nothing more. We know that the distinctions between organic and inorganic are meaningless once we view life as products of nano-engineering. Without this realization though it might well be tempting to view relays and thermostats as incapable of possessing microscopic amounts of this property labelled qualia that we are debating here. I think that this is the essential but unwarranted distraction that separates our views.
There's fallacious reasoning here on your part. We both agree that life evolved from primitive lifeforms. We both agree that the feeling of pain (and qualia) evolved. We both agree that the intensity and degree of pain evolved. The difference in our views, though, is that I don't think it evolved using the traditional logic approach by which most AI research is being conducted. I think you need to utilize complex systems research and an emphasis upon the "self": which is why I posted a link to Ben Goertzel's paper. However, it seems that you take the same assumptions that I hold and draw a conclusion that does not follow from them (i.e., that strong AI is a consequence of the traditional logic approach to AI). You have to show how qualia should be a consequence of a traditional logic approach to AI. You cannot assume that because you hold the same assumptions as I do, that by default your position is right. This is your error, I think.
QED wrote:No I don't, I'm not providing a mystical explanation in my frame of reference. I've set out reasons why I think both organic and inorganic systems all contain microscopic amounts of qualia in the same way as musical notes contain microscopic amounts of symphonies.
QED, are you really suggesting that Beethoven's 5th is magically not explainable in terms of the notes that compose it? If so, then why are you calling yourself a strict reductionist? Similarly, if you cannot show why an object (e.g., a rock) can have feelings, then how are you in any way a strict reductionist with regard to pain? The strict reductionist stance is that it can be shown in principle how an emergent quality can be fully reduced to its parts. Just stating the premise of strict reductionism over and over again as you are doing is not showing how the quality emerges from its parts. I already know that you believe that a system-level quality is fully reducible to its parts. You told me so a dozen times or more, and each time I recognize that you are a strict reductionist. Now, I want you to show me how you can be a strict reductionist with qualia since we both acknowledge that humans feel pain. I want to know how it can be a feeling of pain that humans feel instead of a blinking fingernail or internal register that has been flagged as an indicator that the system has been hurt in some way.
QED wrote:From this you confirm your hubris by denying the potential of the missile to be "ever so slightly aware". We're not going to get any further until you can show why this should be the case. Evolution has managed to produce awareness and feeling in animals by aggregating subsystems ergo I maintain that "feeling" is present, by degree, from the very bottom up.
I don't deny a missile can someday have the potential to possess knowledge of its self-awareness. I deny that there exists any circuitry or software in existence today that makes current missiles ever so slightly have knowledge that they are self-aware. You have to show that. This is a very strong claim, and very easy to diagnose. All you have to do is post the pseudo-code. You act as if I'm somehow accepting hubris because I'm asking for this very simple confirmation of your strongly held beliefs. I know that these beliefs are dear to you, but at the same time, please understand that I do not share those beliefs. I need evidence that your belief that toasters and thermostats feel pain is a legitimate claim. You can show it to me by simply posting the pseudo-code. If there are no feelings being displayed in the code, then it is not a valid example. Also, I think the whole AI community would like to see such code because as of today this code does not exist.

Post #48

Post by QED »

I hope everybody's reasonably cool about going around this loop seemingly for ever. It looks like it's going to take a good many passes to perfect our language and make our ideas and objections understandable (I apologize for any slowness on my part!).
harvey1 wrote:If someone could show me the subroutines or sub-circuits that accomplish this amazing feat...
Good. Here you demonstrate what I mean by hubris: Your position is that it is all very amazing and you're expecting there to be some extraordinarily clever trick to the algorithm that nature evolved to produce a sensation of pain. Maybe hubris isn't the best term, but I was looking for a way of saying that you were ascribing excessive specialness to the sensations we all experience.
harvey1 wrote:I deny that there exists any circuitry or software in existence today that makes current missiles ever so slightly have knowledge that they are self-aware.
Here you confirm my suspicion. You will not permit the teensiest bit of qualia (and I'm really talking about femto-qualia here!) but

Post #49

Post by harvey1 »

Bugmaster wrote:That sounds like a bit of a contradiction. You're extending the principle of charity to me, and other forum posters, because I act human -- whereas modern computers do not. However, should computers advance to the point where they can behave as humans do, you will stop extending the principle of charity to them -- but, since you don't know who they are, you'll have to assume that everybody on this forum is likely to be a computer.
Yes, I probably would take that conservative position, so I might seek out forums that authenticate the poster with their human I.D. or something like that. The reason is that using existing AI strategies a computer would be just programmed to optimize certain competing strategies, so I would think I would be wasting my time if I didn't agree with their programmed interpretation of the rules. This is not the case with chess since we would agree on the rules. With respect to the rules of philosophy, the rules would be too abstract for a computer to consider unless the computer could comprehend what the rules meant. So, for example, if we disputed whether a computer could be p-conscious, the computer would have no knowledge of what it is to be p-conscious, so its "opinion" would be based on some unidentified p-property. It couldn't tell the difference between p-consciousness experiences and p-astral experiences. So, by rules of logic it would state that neither p-astralness nor p-consciousness is a valid property. We would disagree on the interpretative rules of what it meant to have an optimized philosophy. I would say that an optimized philosophy is, among other things, the best account of things like p-consciousness. It would "say" that its optimized philosophy is, among other things, the "best" account of things where the rules and constructs were entirely representable by data structures. Of course, the rules and constructs are what are subject to interpretation, so the computer would be a very poor philosopher, having no knowledge of how it was in error (its only knowledge of being in error would be a flag that indicated that its rules were not followed).
Bugmaster wrote:The forum hasn't changed, the behavior of the posters hasn't changed, you still cannot detect the qualia directly with your senses... so... what's the difference ? How are you justified in denying humanity to the forum posters of the future ?
Since computers don't have the function of understanding what it is that is recorded in memory, I am justified in denying them an extension of charity since they cannot know what it is like to be p-conscious, or what it is like to be in pain, etc.. This would limit our discussion so significantly that I would be wasting my time. I might as well have a debate with my power sander.
Bugmaster wrote:Are you saying that you can envision a situation where you will be unable to distinguish between humans and androids without the tag ? What's the purpose of the tag, then ?
The tag would alert us that we are talking to an idiot savant. If we didn't know that, we would gradually become more and more frustrated until we realized that we had completely wasted our time trying to get through to the idiot savant computer.
Bugmaster wrote:All I'm saying is that your neck-tag is not needed; the creature's behavior speaks for itself.
The creature's behavior is fully determined by the instruction set, and the instruction set does not allow it to be anything more than an idiot savant. So, knowing the instruction set ahead of time would save one a great deal of time and frustration. I certainly would prefer knowing that. If there were many such gadgets around, then the probability of always engaging in undesirable discussions with such idiot savants would be too great.
Bugmaster wrote:Think about it this way: let's say that these androids become uber-advanced, so that you couldn't tell whether any given being is a human or an android, without looking at the tag or taking them apart (which is a very invasive procedure, and may be fatal for humans). Let's say that I come to you and say, "see that guy over there ? His name's Bob, and he's an android. His tag came unglued. Also, that flying car he's driving is really mine, since androids clearly can't own property. I want my car back." Will you believe me ? What will you do ?
The situation is similar to the counterfeiting problems that we face with money. If we are duped by accepting false money then what happens? We lose the time and resources we spent making that money. Similarly, that would happen if we unknowingly accept a counterfeit human contact--we lose time and resources.
Bugmaster wrote:Understood -- being dualistic in nature, qualia would not be replaceable with behavior. However, if it is indeed possible to build a qualia-less machine that behaves as a human does, then why do we need qualia to explain human behavior ? This is probably a topic for our other thread, but still.
Why do we need money to be printed by the government instead of the whim of individuals sitting by their color printer? The reason is that the money means something more than what an individual does when they print it using Adobe software. Similarly, human behavior means more than someone sitting next to a printer that prints out AI circuits on human-level intelligent paper. The paper might respond identically to a human, but the behavior is a counterfeit reaction to how real human feelings cause humans to react.
Bugmaster wrote:Caveat emptor, heh. So, essentially, you're saying that a person's guts (mechanical vs. biological) matter more than the way they act. That sounds a bit extreme to me, but maybe that's not what you were saying, and I misunderstood...
I would phrase it that there is a difference between being a gadget and being a sentient agent. If a gadget can be pawned off as a sentient agent, then the trick that was accomplished amounted to mere counterfeit. I suppose that this will be a crime someday, but since no one can come close to counterfeiting human beings, it is not something that I worry too much about.
Bugmaster wrote:Again, this is a false analogy, because a power saw does not behave as a human being does -- whereas our hypothetical AI is indistinguishable from a human (at least, online). Friendship is a two-way street; it's impossible to befriend someone who doesn't respond to your actions in a friendly manner. So, I claim that by befriending an AI, you would be implicitly endorsing its humanity.
The analogy wasn't to highlight the similarity of power saw behavior with AI behavior. The analogy was to highlight the identical nature of their non-sentient properties. In that sense, an AI gadget is no different than befriending a power saw if you wish to share sentient experiences. It's a one way exchange (i.e., human to AI gadget, or human to power saw).
Bugmaster wrote:Why does it matter ? Let's say that The Scientists (tm) come up with a way to manufacture salt, which tastes and looks and feels just like the real salt that you dig out of the ground, and behaves like plain old salt in all other ways. Would you eat it ? Yes, I know this is a trick question, but still
I suppose I would eat it if it was proven safe. If there is any kind of mechanical, psychological, or sensual reason for using the AI gadget, then I suppose I would do so. So, for example, if I crash landed on an island as a castaway, then instead of "befriending" a basketball I would "befriend" an AI gadget. If I needed someone to call bill collectors, and I wanted them to think it was me, then I suppose it would be pretty cool having the AI gadget be my counterfeit since they would be doing me a mechanical service. However, this is why I would have AI gadgets with tags. I want the replicas, but I would only want them under circumstances where I felt I had some control. If I couldn't have that control, then I think I would speak for most people in that I wouldn't want idiot savants to be walking around freely (especially the kind that can re-program themselves without restraint by a human). Talk about a Terminator/Matrix future.
Bugmaster wrote:Ok, let's say that C.D.'s body is dying, and the only way to save her is to upload her brain into a Beowulf cluster (which takes up a building, maybe). The doctors manage to do that just before C.D.'s biological brain expires. The machine-brain exists in a little mini-Matrix (so that C.D. won't get bored sitting in the dark), and answers emails just as the real C.D. would. Is C.D. dead ?
Well, let's not talk about real people in that case since that's a little morbid for me. Let's call person X someone who is uploaded to a computer. If X has their qualia, p-consciousness, and other mental properties intact, then they are not dead. The question, one that I cannot answer, is if they are the same person. It would seem that they cannot be since uploading our contents is not the same as uploading the individual (presumably you could make a large country of such AI copies). In any case, the AI uploadings would be alive.
Bugmaster wrote:Now, what if the doctors managed to clone C.D. a new body, and download her brain back into it ? Is C.D. alive again ?
X would be alive if all the mental properties remained intact.
Bugmaster wrote:I think you see where I'm going with this: the way you draw the line between "fake" humans and real ones is fairly arbitrary.
It's not arbitrary since the individual is identified by their mental properties (or, in the case of comatose or unconscious or infant bodily states: the strong and immediate potential of having those human mental properties if a specific physical malady or status were to change).
Bugmaster wrote:There's a difference between the two questions you asked:1). How does neural firing affect qualia ? 2). How does neural firing produce pain ? You see these questions as being identical, since you assume that qualia==pain. I, however, do not believe that qualia exist; instead, I believe that neural firing==pain. Thus, I cannot answer #1 (since it's meaningless to me), but a reasonably competent neurobiologist (which I'm not) will be able to answer #2 in great detail (listing all the chemical interactions between hormones and receptors and such).
There's a significant difference between the statements:

1) What mechanical events are associated with the production of pain
2) What theoretically explains the realization of pain

I would say that neurobiologists do have some understanding of (1). They have almost no understanding of (2) other than some general theories that no one can agree upon. Since it cannot be shown with any kind of analytic demonstration how the firing of neurons explains pain from a theoretical point of view, there is no reason to give up and say that pain magically occurs. Given our fundamental lack of understanding and our understanding that every phenomenon has an explanation, we have every reason to believe that the experience of pain has an explanation.
Bugmaster wrote:I should point out, though, that our modern technology is advanced enough to allow us to flip the "pain=off" switch, for certain kinds of pain. We call it "Advil".
It's not a pain switch that Advil is switching off. Headaches apparently come about due to the inflammation of brain tissue, and over the counter medicines such as Advil address the inflammation that occurs. If we didn't have a medicine to attack the inflammation, then there would be no medical response to headaches. That is, medicines aren't sophisticated enough to interfere with the generation of the feeling that occurs from there being inflammation. The medicines are only capable of addressing the main cause of the pain: inflammation.
Bugmaster wrote:That is probably true as applied to Deep Blue, but false as applied to other chess programs, and to my spam filter. My spam filter actually learns on its own, during its "lifetime". That ability to learn was explicitly programmed by somebody, but its actual responses to spam and non-spam were not.
What is the programming that gives the spam filter the feeling that it understands what new spams are trying to do? Can you honestly say with a straight face that your spam filter thinks it understands?
Bugmaster wrote:
Okay, doubt p-consciousness if you like, but regardless if you doubt it, you also happen to experience what it is like to be p-conscious.
Now you're just asserting your position. Remember, I believe that my experiences are purely physical in nature (as we've discussed on the other thread); you'd first have to convince me that dualism is true, before I can agree with your statement.
My conclusion is that being conscious is not the result of current computer programming techniques, nor could it ever be. However, we are both conscious of our surroundings. What you doubt is that my conclusion here is right. That is why I say that you need to show how the function of being conscious (or at least feeling conscious) is programmable in principle. Show me how this can be done in principle without requiring the magical belief that this feeling somehow emerges without having been programmed for.
Bugmaster wrote:Sorry, that wasn't my intention. My reasoning is that you believe that qualia exist, and that they are not equivalent to the atomic (or quantum or whatever) structures in the brain. This sounds very much like dualism to me, but maybe I'm wrong.
I think there is a fundamental misunderstanding on your part as to how nature operates. In my view, higher levels are bounded by the lower levels. In other words, the lower level has a phase space whose dimensionality and scaling parameters bound the behavior of the higher level. Beyond these boundary parameters, the higher level system is governed by the attractor behavior that forms at that higher layer. The mind is formed by a number of layers of such attractors interacting to form still higher layers. So, for example, neurons interact to form neuron behavior acting in terms of blocks of neurons. The blocks of neurons have their own specific kind of behavior that is different than how the neuron itself behaves. That is, the blocks settle into an attractor-like behavior at the "block level." The individual neurons respond to the block attractor level because the neurons self-organize to form this mega-structure. Blocks of neurons further self-organize into larger blocks that represent a primitive structure of meaning (e.g., "red"). When these larger blocks interact with other larger blocks (e.g., "wall"), these larger blocks self-organize into a proposition or thought structure (e.g., "wall/red" blocks). Still further, propositional blocks interact with other propositional blocks, and these self-organize into simple observation blocks (e.g., "wall/red," "paint/wet") that amount to a perception (e.g., "wet wall==painted red"), etc., etc. Finally, at much higher, higher layers, the conscious mind is formed and this mind is governed by its own laws (or attractor behavior) that uniquely defines how that person happens to think and perceive the world. This meta-meta-attractor is "you." The attractor basins already exist prior to your birth since if all the conditions for reaching that attractor basin obtain (e.g., someone having your mental abilities, your childhood history, etc...), then this attractor basin would causally determine what kind of person you would be if the meta-meta-structure "fell" into that basin.

So, I ask you what is so dualistic about this view? What is so silly about my position that it isn't taken seriously by you?
Bugmaster wrote:When I meet people on the street (or online), I cannot experience their qualia directly; I can't measure their p-consciousness with an MRI; I can't detect their soul with radar, etc. All I have to go on is their behavior. Thus, a human being, from my point of view, is someone I can have a meaningful conversation with -- regardless of what their guts are made of.
Well, you know that if you met a computer program that was meant to duplicate your behavior, they wouldn't be you, right? If you saw their code, and you knew for a fact that all the inputted data structures were placed there by someone who knew you pretty well, would you think that they made "you"?
Bugmaster wrote:Do you think that this p-consciousness cannot be understood, in principle ? Or are you just saying that we don't understand it right now ? That's a big difference.
Well, obviously I just laid out an in-principle explanation for consciousness...
Bugmaster wrote:
I think we have good reason to believe that strong AI is possible in principle, but we cannot say right now if it will ever be achievable.
Er... isn't that a contradiction ? How can something be possible yet not achievable ? That's just weird.
We might not be smart enough to debug protein foldings of our genetic code.
Bugmaster wrote:Huh, if that's all you want, then get yourself a Sony Aibo or some equivalent Chinese knockoff. It's at least as smart as a rabbit... well... maybe a hamster. Hamsters are pretty stupid, you know.
They still feel pain from what we can tell, and there's no program existing today that shows how to make a computer feel pain.
Bugmaster wrote:Nope, that's it. But it's a lot harder to do than it sounds, otherwise we'd have perfectly working machine translation today, as opposed to AltaVista's babelfish.
I think your philosophy raises a lot of ethical concerns. Not that it matters as to what is actually true, but if true it could give birth to an android Hitler regime someday. For example, such a regime might justify replacing humans with machines because it views humans as no more a person than a machine, while viewing machines as replaceable when they are no longer up-to-date. The machine leaders might decide to optimize the world by eliminating less efficient machines: humans.

Post #50

Post by QED »

harvey1 wrote:
QED wrote:I humbly suggest then that it's your concept of "real" which is at fault. I say this because, as I like to point out in my every reply, evolution created the first feeling of "pain". Please take a moment to consider this proposition: Pain must have been a very early development, long before higher levels of consciousness evolved. As soon as any degree of volition became available pain would play a vital role in steering organisms away from self destructive behaviour.
I certainly agree that the feeling of pain was a very useful evolutionary adaptation. Although, as I just replied to Bugmaster, I cannot be sure at what level of intensity pain is felt by non-mammalian (or even non-human) lifeforms. I'm willing to extend charity to those creatures that look like they experience intense pain (e.g., yelping, skirmishing, etc.), but in more primitive lifeforms I cannot assume that pain means the same thing. Although, I am willing to extend some charity by saying that some kind of feeling is present if such indications of unpleasantness on their part exist after they are subject to some kind of intrusion to their limbs and body parts.
I actually think you're making my point here. The feeling of pain evolved. You question the level of intensity, but we know that evolution will make it sufficient to alter the animal's behaviour when said animal has any volition in the matter (what a pity nature can't tell the difference between avoidable intrusions and unavoidable ones).
harvey1 wrote: There's fallacious reasoning here on your part. We both agree that life evolved from primitive lifeforms. We both agree that the feeling of pain (and qualia) evolved. We both agree that the intensity and degree of pain evolved. The difference in our views, though, is that I don't think it evolved using the traditional logic approach by which most AI research is being conducted. I think you need to utilize complex systems research and an emphasis upon the "self": which is why I posted a link to Ben Goertzel's paper. However, it seems that you take the same assumptions that I hold and draw a conclusion that does not follow from them (i.e., that strong AI is a consequence of the traditional logic approach to AI). You have to show how qualia should be a consequence of a traditional logic approach to AI. You cannot assume that because you hold the same assumptions as I do, that by default your position is right. This is your error, I think.
Well, you keep saying that I have to show how qualia comes about, yet my position is that it is always present in everything to some degree. I think that because I go along with you in using it as a shorthand term to capture the "experience of being alive" you expect my explanation to have some switch-on point. Obviously the way I am looking at this there is no actual switch-on point. Indeed I might just as easily declare us all to be dead -- but that would look too repulsive for some so I say that everything is living (to a degree) instead.
harvey1 wrote: QED, are you really suggesting that Beethoven's 5th is magically not explainable in terms of the notes that compose it? If so, then why are you calling yourself a strict reductionist? Similarly, if you cannot show why an object (e.g., a rock) can have feelings, then how are you in any way a strict reductionist with regard to pain? The strict reductionist stance is that it can be shown in principle how an emergent quality can be fully reduced to its parts. Just stating the premise of strict reductionism over and over again as you are doing is not showing how the quality emerges from its parts. I already know that you believe that a system-level quality is fully reducible to its parts. You told me so a dozen times or more, and each time I recognize that you are a strict reductionist. Now, I want you to show me how you can be a strict reductionist with qualia since we both acknowledge that humans feel pain. I want to know how it can be a feeling of pain that humans feel instead of a blinking fingernail or internal register that has been flagged as an indicator that the system has been hurt in some way.
I hope that what I wrote above explains this now.
harvey1 wrote: I don't deny a missile can someday have the potential to possess knowledge of its self-awareness. I deny that there exists any circuitry or software in existence today that makes current missiles ever so slightly have knowledge that they are self-aware. You have to show that. This is a very strong claim, and very easy to diagnose. All you have to do is post the pseudo-code. You act as if I'm somehow accepting hubris because I'm asking for this very simple confirmation of your strongly held beliefs. I know that these beliefs are dear to you, but at the same time, please understand that I do not share those beliefs. I need evidence that your belief that toasters and thermostats feel pain is a legitimate claim. You can show it to me by simply posting the pseudo-code. If there are no feelings being displayed in the code, then it is not a valid example. Also, I think the whole AI community would like to see such code because as of today this code does not exist.
Clearly then, from my point of view, there is no pseudo-code that can show this as there is no algorithm at work. Either you can think of it as the life already being present in the components, or that we are all dead! Remember, this way of looking at things stems from the certain knowledge that evolution started out small with a few self-replicating molecules. All I have done is followed the logical path through the behaviour of single to multicellular organisms and asked at what point consciousness might kick in. This is one of those mutations that I can't see happening and so I start wondering if all these nano-engineered robots are somehow experiencing the exact same things that my robot vision system is experiencing. I then take a step back and ask if my own experience as a human might not simply be the way it feels to be an uber-sophisticated collection of such subsystems. Thus the realization that the classic distinction between alive and dead might be what's misleading us into looking for some elusive algorithm.
