Topic
Many threads regarding dualism, theism, and philosophy in general often run into this topic. Is it even hypothetically possible to build a computer that will be sapient -- i.e., an artificial being that will think, feel, and socialize as humans do ? Well, here's our chance: to resolve this debate once and for all ! Smoke 'em if you got 'em, people, this post is gonna be a long one.
I claim that creating Strong AI (which is another name for a sapient computer) is possible. We may not achieve this today, or tomorrow, but it's going to happen sooner rather than later.
First, let me go over some of the arguments in favor of my position.
Pro: The Turing Test
Alan Turing, the father of modern computing (well, one of them), proposed this test in ages long past, when computers as we know them today did not yet exist. So, let me re-cast his argument in modern terms.
Turing's argument is a thought experiment, involving a test. There are three participants in the test: subject A, subject B, and the examiner E. A and B are chatting on AIM, or posting on this forum, or text-messaging each other on the phone, or engaging in some other form of textual communication. E is watching their conversations, but he doesn't get to talk. E knows that one of the subjects -- either A or B -- is a bona-fide human being, and the other one is a computer, but he doesn't know which one is which. E's job is to determine which of the subjects is a computer, based on their chat logs. Of course, in a real scientific setting, we'd have a large population of test subjects and examiners, not just three beings, but you get the idea.
Turing's claim is that if E cannot reliably determine which being -- A or B -- is human, then they both are. Let me say this again: if E can't tell which of the subjects is a computer, then they're both human, with all rights and privileges and obligations that humanity entails.
This seems like a pretty wild claim at first, but consider: how do you know that I, Bugmaster, am human ? And how do I know that you're human, as well ? All I know about you is the content of your posts; you could be a robot, or a fish, it doesn't really matter. As long as you act human, people will treat you as such (unless, of course, they're jerks who treat everyone like garbage, but that's another story). You might say, "well, you know I'm human because today's computers aren't advanced enough to post intelligently on the forums", but that doesn't prove much, since our technology is advancing rapidly all the time (and we're talking about the future, anyway).
So, if you're going to deny one of Turing's subjects his humanity, then you should be prepared to deny this humanity to everyone, which would be absurd. Therefore, a computer that acts human should be treated as such.
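For the programmers in the audience, here's the test protocol as a toy Python sketch. The three participant functions are placeholders I made up -- nobody's claiming to have written a machine_reply that passes; the point is just how the game itself is scored.

```python
import random

def run_turing_test(human_reply, machine_reply, examiner, questions):
    """One session of Turing's imitation game.

    human_reply and machine_reply map a question string to an answer
    string; examiner reads both transcripts and guesses which seat
    ('A' or 'B') holds the machine. All three are placeholders for
    the real participants -- this sketch only encodes the protocol.
    """
    machine_seat = random.choice("AB")  # hide the machine's seat from E
    transcripts = {"A": [], "B": []}
    for question in questions:
        for seat in "AB":
            reply_fn = machine_reply if seat == machine_seat else human_reply
            transcripts[seat].append((question, reply_fn(question)))
    guess = examiner(transcripts)  # examiner returns 'A' or 'B'
    return guess == machine_seat   # True means the examiner caught the bot

# Run many sessions with many examiners; if the examiners' hit rate
# stays around 50% (i.e., chance), the machine passes: its textual
# behavior alone cannot be distinguished from the human's.
```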
Pro: The Reverse Turing Test
I don't actually know the proper name for this argument, but it's sort of the opposite of the first one, hence the name.
Let's say that tomorrow, as you're crossing the street to get your morning coffee, you get hit by a bus. Your wounds are not too severe, but your pinky is shattered. Not to worry, though -- an experimental procedure is available, and your pinky is replaced with a robotic equivalent. It looks, feels, and acts just like your pinky, but it's actually made of advanced polymers.
Are you any less human than you were before the treatment ?
Let's say that, after getting your pinky replaced, you get hit by a bus again, and lose your arm... which gets replaced by a robo-arm. Are you still human now ? What if you get hit by a bus again, and your left eye gets replaced by a robotic camera -- are you less human now ? What if you get a brain tumor, and part of your brain gets replaced ? And what if your tumor is inoperable, and the doctors (the doctors of the future, of course) are forced to replace your entire brain, as well as the rest of your organs ? Are you human ? If so, then how are you different from an artificial being that was built out of the same robotic components that your entire body now consists of ?
Note that this isn't just idle speculation. People today already have pacemakers, glasses, prosthetic limbs, and yes, even chips implanted in their brains to prevent epileptic seizures (and soon, hopefully, Alzheimer's). Should we treat these people as less human than their all-natural peers ? I personally don't think so.
Ok, I know that many of you are itching to point out the flaws in these arguments, so let me go over some common objections.
(to be continued below)
Is it possible to build a sapient machine ?
Post #101
I am aware that this thread resides within the philosophy section but am still surprised that my previous post (being plainly matter-of-fact) has provoked no response thus far. I would be interested to hear the views of anyone working in the field of AI (or even with an interest in the subject) concerning my previous "contribution". While there are obvious philosophical issues surrounding the creation of a sentient machine, I believe this thread originally addressed the issue of a sapient machine.
- harvey1
Post #102
QED wrote: Harvey, is your contention that pain is some sort of emergent phenomenon rather than a particular internal state of the brain? If so, are you proposing that evolution managed to produce it along with the appropriate receptors to "pick it up" -- when we have no instruments that can detect it from the outside (apart from MRI scans that identify regional variations in oxygen usage)?

We have no current instruments that can detect pain from the outside. This is chiefly because there is no theory of pain that allows us to predict which patterns in the brain translate to particular pain qualia.
QED wrote: This is why I prefer the option that pain is but one of many states which, through association with a set of values, stimulates the classic reactions -- which themselves are just states like any others, e.g. the perception of sound or vision. After all, that awful sound of chalk scraping on blackboards is nothing less than painful, as is the sight of certain French automobiles.

I don't disagree that pain is one of many states in the brain. What I disagree with is that we can necessarily program a computer that feels pain.
- harvey1
Post #103
Curious wrote: I am aware that this thread resides within the philosophy section but am still surprised that my previous post (being plainly matter-of-fact) has provoked no response thus far. I would be interested to hear the views of anyone working in the field of AI (or even with an interest in the subject) concerning my previous "contribution". While there are obvious philosophical issues surrounding the creation of a sentient machine, I believe this thread originally addressed the issue of a sapient machine.

I don't want to ignore you, Curious, but I ask that you wait until I finish my discussion with Bugmaster. Time is really tight for me right now.
- harvey1
Post #104
Continuing on...
Bugmaster wrote: When you find out that he's really an AI... are you going to reverse your stance? If you answer "no", then you're endowing Mr. X. with some very human attributes -- such as persuasiveness. If you answer "yes", then you're committing the "poisoning the well" fallacy (because Mr.X.'s logic is still valid and sound, regardless of his/its nature). And I don't see how you can remain agnostic on the topic, either.

Let's say that I played chess over an e-mail system with people interested in playing chess. Let's say that later I found out that I was never playing with a human being, but it was all computerized. Does the new chess knowledge become useless knowledge? Of course not. Now, let's suppose that Mr. X advocates that I reduce pain by thought alone, and this approach really does work. I find that I have eliminated the need for morphine shots. Then I find out that Mr. X is a computer. Do I think that Mr. X was ever in pain before trying his new technique? No. However Mr. X came upon this new technique can be explained without Mr. X ever having any knowledge of pain, just like a computer winning at chess can be explained without the computer ever having any knowledge of chess. If we don't have a reason why a computer should feel pain, then we should elect the more parsimonious solution that Mr. X is simply programmed to optimize certain solutions that humans find desirable (e.g., superior chess play).
harvey1 wrote: If aliens arrived, then the situation would be a little more complex. Unlike bots, we have no reason to deny qualia to them, but like bots we would want proof that they have real qualia.

Bugmaster wrote: The aliens breathe sulfur dioxide, so you never see them outside of their encounter suit. They may be robots inside, you don't know. Now what?

I don't attribute qualia to them since I have no way to determine if they are merely 1970 Commodore computers.
harvey1 wrote: Sure, it is possible to deny my argument and even be right (and still be a bot without qualia). However, if part of my argument rests on a property that I don't think the bot possesses, then I lack good reason to believe my time will be found to be well spent (i.e., later on as the discussion progresses).

Bugmaster wrote: Are you saying that your argument does not stand on its own merits?

No. Of course not. I'm saying that I need good reason to invest my time when the evidence suggests that I would be wasting my time warming up to a rock.
harvey1 wrote: RNA and DNA are not Turing machines. For one, most of the complexity generated by RNA and DNA is a result of the RNA and DNA folding structure and protein interactions. This is not an example of how a Turing machine processes its instructions.

Bugmaster wrote: Not really... If you look at how the DNA-copying enzymes work, the process is very similar to the read/write head of a Turing machine. Now, granted, DNA is also transcribed into proteins, which fold into all kinds of interesting shapes, but I'm talking about translation specifically, not transcription.

Translation is also greatly affected by noise (see Ahmad and Henikoff, 2001; Gardner et al., 2000; Grossman, 1995; Thattai, 2001). As I said, there are no real Turing machines in nature. I'll agree that mRNA translation is an approximation.
Bugmaster wrote: We can actually produce something similar to photosynthesis, using the photovoltaic effect (i.e., solar panels). And we can produce magnetism (though, not the ferro- kind, I suppose), by using solenoids.

This is not computationalism.

Bugmaster wrote: No, but you've claimed that none of these phenomena can be adequately reproduced by humans. I'm trying to show that this is not the case. And, if it's not the case for photosynthesis or magnetism, I don't see why consciousness should be categorically different.

That's not what I claimed. I claimed that simulations do not produce the phenomenon. Again, BM, please, please try to avoid assertions like this. It is tiresome for me to correct such comments.
Bugmaster wrote: No, but it does mean that, from inside your room, you're free to make whatever assumption you want about its velocity. You can assume that it's moving, you can assume that it's at rest, whatever -- you have no way of telling. Thus, you're free to make whatever assumption is the most parsimonious. That's all I'm doing in regards to consciousness.

Doesn't this apply to what I'm saying about a world where many of the individuals we interact with are strong AI devices? If we have no means to show how an algorithm of pain works, then how can they experience pain? We do not know if we are in a room that is in motion in space (talking to a human), or in a room sitting still in space (talking to a bot). Without being able to determine the actual case, we should take the worst case scenario: e.g., a bot really does not have the feeling of pain since no algorithm can explain pain as a theory.
harvey1 wrote: No, I see consciousness as an organized pattern of nature. If the organism doesn't have that particular organized pattern (i.e., according to a theory of consciousness which we don't have presently and may never have), then it isn't a conscious entity.

Bugmaster wrote: In this case, your concept of consciousness has no meaning, because you have no way of defining it or detecting it at all.

It has meaning in terms of what a good theory of consciousness (or qualia, intentionality, etc.) would look like. A good theory should be a theory of what these certain patterns do with regard to how the organism operates. Eventually, we might be able to improve upon these patterns in certain creatures, and notice if their behavior changes to something that we might consider more intelligent or introspective behavior (e.g., sadness at the death of their partner, etc.). If we can predict their behavior, along with predicting other changes in the brain, then we are narrowing in on a theory of consciousness (qualia, intentionality, etc.).
harvey1 wrote: Many physical phenomena are not detectable by instruments. For example, entropy is not a substance that is measurable by a spectrometer.

Bugmaster wrote: No, but you can calculate it using a calorimeter.

I think that consciousness/qualia will eventually be shown to be a calculated property. The calculation will no doubt involve future brain scanning equipment.
harvey1 wrote: Perhaps consciousness is the amount of order that a system possesses in terms of its awareness of its environment and its self with respect to the environment.

Bugmaster wrote: All you've done is replace "consciousness" with "awareness". Practically, how would I detect, or calculate, the amount of consciousness in a system?

Without a theory of consciousness (qualia, intentionality), there is no way to do so. (Just like before our understanding of thermodynamics we had no way to estimate entropy.) However, if we are successful at understanding this facet of human existence (and as I said before, I am optimistic that we will be successful), then a good theory will let us in on the secret of how to calculate such things. Perhaps it will be a particular pattern of neuron firing that occurs. Maybe it will be neurotransmitter chemistry detection. Perhaps it will be a quantum gravity detection like Roger Penrose has suggested (although I don't think so). As these theories are developed, the means of detecting these phenomena will sort themselves out. This is how science works, and I see no reason why it won't happen in the future. What we cannot do is use steam engine analogies to lead us to believe that a locomotive comparison will eventually produce all the necessary behaviors. I personally think that is absurd.
harvey1 wrote: This is not a radical departure from other physical phenomena. Theoretical objects are almost always theorized based on their causal influence on the world, and as the theory is enhanced, the full (formerly unsuspected) properties of the theoretical object come to light.

Bugmaster wrote: This is the point at which your theory becomes, well, a theory. Until then, it's just a hypothesis, or plain old conjecture. The existence of the photon was not accepted until there was some good theory, and evidence, behind it. Likewise, I'm afraid I'm not justified in accepting the existence of your dualistic consciousness, until there's some theory behind it. Yes, I could be wrong, but that's the risk everyone has to take in order to remain intellectually honest.

I'm not talking about dualistic consciousness or dualistic qualia. I'm talking about the feeling of pain that you agreed that you feel. This is a phenomenon that science wishes to understand. To understand it requires a theory. To develop a theory, more must be understood about the brain and the dynamical systems that reside in the brain that produce these phenomena. Once such a theory is produced, we will no doubt learn more about pain than what the subjective feelings of pain produce within us. For example, we might learn why certain people are able to suppress pain whereas others experience a more heightened experience of it. It seems you disagree that computer technology and not science will understand these phenomena. I fundamentally disagree with that notion (if indeed this is what you believe).
Bugmaster wrote: Hm, what role does classical chaos play in the human cardiovascular system? In other words, which aspect of the system cannot be understood, without appealing to chaos?

A rather grim effect of chaos seems to be heart attacks. Let's hope that chaos research helps to detect and cure such maladies.
Post #105
Curious wrote: While there are obvious philosophical issues surrounding the creation of a sentient machine, I believe this thread originally addressed the issue of a sapient machine.

I sort of use the terms interchangeably, even though I know that's wrong :-( What I meant by "sapient" was something like, "capable of rational thought, emotion, friendship, and whatever other qualities that make us humans, well, human". A sapient machine would count as a sophont species in David Brin's fictional universe (well, if they weren't outlawed in that universe, that is).
Post #106
harvey1 wrote: If we know the theoretical basis of a phenomenon, then we can know if a simulation of the phenomenon is identical with the phenomenon itself.

So, since you have (by your own admission) no theoretical basis for pain, does this mean that you cannot, in principle, distinguish pain from a simulation of pain ? Then how do you know whether other people (not you !) feel pain ?
harvey1 wrote: When we talk about qualia that you agreed that you experience...

If I did, then I was wrong. I do agree that I experience pain and joy and such, which, in your worldview, are caused by qualia -- but I didn't mean to say that I experience qualia directly (if they do indeed exist).
harvey1 wrote: There's no theory of Turing computation that is known to produce qualia. There's not a theory period, and, therefore, the theory of qualia is unknown.

Isn't that a bit like saying that qualia don't exist ? Or, at least, that we don't have any reason to believe that they exist ?
harvey1 wrote: Let's refer to the feeling of pain since you've already admitted that you have those real feelings.

Yes, but how do you know that I have these feelings ? Because I said so ? But, isn't the veracity of my words contingent on me being able to feel pain, and other qualia-based phenomena ? That's a catch-22.
harvey1 wrote: Like any phenomena for which we don't have good theories (and there are many such phenomena in science), there's mystery, but that is no excuse to use steam engines as our theory of explanation.

Can you name some of these phenomena ? Earlier, you said that the theory of a phenomenon is its definition, but now you're saying that we have phenomena, presumably well-defined ones, for which we have no theory. Which is it ?
Bugmaster wrote: So, if the Turing Test is not a good definition of consciousness... then what is ? You can reply, "well, we don't have a good Theory of Mind yet, when we do, it will all become clear" -- but if you do, then you're rendering your point moot. You can no longer argue whether AIs are conscious or not, because you don't know what consciousness even is.

harvey1 wrote: That's the point. We don't have a theory of pain, and if we did, then we would know what pain is. However, we cannot assume that K-bots have pain merely because we can control their faces to look as if they are in pain.

So, you don't know what pain is, you have no way of detecting it, you have no idea what causes it... and yet, you know that K-bots don't have it. That strikes me as irrational. How can you make a completely unknown quantity (since that is what your view of pain amounts to) a basis of your argument ? You might as well say, "machines are not qrtgushz, and therefore we shouldn't treat them as human".
I think that my view is more parsimonious. I know how I act when I'm experiencing pain, or joy, or boredom or whatever; thus, when other people act like they're in pain, I can infer that they feel pain. Thus, when some other entity acts just like humans act, I can infer that it's at least as human as the humans, intellectually speaking. What's so difficult about that ?
Also note that, in my example, no human is controlling the Strong AI; the AI is fully independent -- or, at least, as independent as any modern human is. So, your line about controlling the faces of K-bots is irrelevant.
Unless, of course, you mean to imply that every machine created by humans is implicitly controlled by humans 100% of the way ?
Post #107
Er oops, I meant to reply to the first part first.
What's "obvious" about it ? I aleady did state, repeatedly, that the K-Bot 1000s of The Future (tm) act exactly as humans do -- at least, intellectually; they obviously don't eat sandwiches or whatever. Why, then, is it "obvious" to you that they're not human ?harvey1 wrote:Doesn't that seem like a reckless assumption for K-bots? They obviously are not human and you just fell prone to someone having a joke at your expense.
harvey1 wrote: In 15 years they will behave as conscious people, that's why Hollywood is making these devices so that filming crews can film them like any other actor on the set.

Wow, you're being a lot more optimistic than me -- I think 50-100 years would be a more realistic time table for Strong AI :-) However, for Hollywood's purposes, plain old scripted bots would suffice; they don't need Strong AI. I mean, they have Keanu Reeves already, it can't be hard to automate that.
Note the difference here: Strong AI is as fully interactive as any other human being, whereas Hollywood actors just need to follow pre-scripted sequences.
harvey1 wrote: It doesn't strike me as consistent since even today one could be fooled in a darkened room if the K-bot and its synthesized voice were placed in a wheelchair and made to look like Stephen Hawking.

I'm sure someone would be fooled, but not me. I'll just ask the voice in the room some questions about physics, or life, or sandwiches. If it can't carry on a conversation, then I'll assume it's a bot, or a plain old tape recorder. I've been saying this in pretty much every one of my posts, too...
Again, remember that my original challenge takes place on an Internet discussion forum, or an IM chat channel. There are no visual or aural cues to "fool" you into anything.
harvey1 wrote: You would immediately try to determine if the K-bot is Stephen Hawking, and in 30 years you won't be able to do so unless you were able to analyze the K-bot's skin tissue or look inside its skin. Nonetheless, a K-bot of this type need not be any smarter than they are today.

Ok, show me a K-bot, existing today, that can carry on a conversation at least as well as you can. I think you're just being disingenuous now.
Bugmaster wrote: Didn't you say at some point that emergent properties cannot be simulated ? If not, I apologize.

harvey1 wrote: I don't ever recall making that assertion.

My bad.
harvey1 wrote: No, there's no way for us to tell; there's a way for the Secret Service to tell, but in this analogy, the Secret Service represents knowledge that we don't have access to.

Again, this is a false analogy. Without your Theory of Pain, you can't even define what pain is, and you have no idea whether your conjectures about qualia, dualistic mental objects, emergent properties or whatever, even make any sense at all. In the analogy, however, the existence of the Secret Service and their printing-plate-detectors is a certainty.
harvey1 wrote: To have a valid simulation we need to know the theoretical basis of pain, and based on this theoretical basis the simulation would be valid. Otherwise we must assume that it is a counterfeit experience.

It seems to me that you're arguing, once again, that pain has an existence independent of the person experiencing it; thus, pain can be real or counterfeit. But why ? Why are you inventing new entities without cause (as Occam might put it), when you have no evidence for them ? You have no way of detecting pain in any way (seeing as you've ruled out behavior as a valid method of detection), and yet you claim that pain exists, and that it can be "counterfeit" somehow. That just doesn't strike me as rational.
I'll respond to your third post, too, when I have the chance...
- harvey1
Post #108
harvey1 wrote: Let's refer to the feeling of pain since you've already admitted that you have those real feelings.

Bugmaster wrote: Yes, but how do you know that I have these feelings? Because I said so? But, isn't the veracity of my words contingent on me being able to feel pain, and other qualia-based phenomena? That's a catch-22.

Just a quick response to this very fundamental issue. When you ask, "how do you know...," I think it assumes some objective kind of knowledge which I say doesn't exist for human beings. In actuality, humans know nothing in an objective sense. We know in a pragmatic or subjective sense, where we agree that certain sensations and certain basic human interactions form the basis of all knowledge. Science is also based on this subjective sense of knowledge, where we first agree that light coming from twinkles in the sky represents light protruding through our atmosphere. Of course, we might be deceived by an evil demon about such things, but for the sake of the healthy pursuit of science we put away such babble and accept our pragmatic (subjective) interactions with nature at face value.
In the case of you feeling pain, I do not see why the veracity of your words is based on you having the feeling of pain. If you and numerous others came forward throughout history and said that they have no feeling of pain, then our knowledge of people having pain would not be universal enough to say that all healthy humans have pain sensations. Strong AI bots and invading aliens would be entirely new to human beings if their existence were firmly established, and therefore the questions about their pain sensations are not to be assumed on this same epistemological basis that justifies human knowledge. Those questions must be established through scientific inquiry. The scientific question is a valid one.
As such, science should investigate the reason why humans, you included, feel pain. I think what you are trying to do is muddy the water with epistemology (e.g., "how is knowledge possible?") when in fact you should be concerned with the scientific questions that are legitimate (e.g., "why do humans such as yourself experience the feeling of pain?"). From what I gather in your question here, we should say that all scientific questions concerning subjective experiences are scientifically unanswerable, is that right? Your argument seems to be that we have no reason to believe in the objective validity of subjective experiences since all that matters is establishing imperceptible differences. That is an invalid argument since our perceptions may not be able to detect differences (e.g., between a moving (non-accelerating) spacecraft and a stationary spacecraft if inside such a spacecraft without windows) even though those differences could be significant (e.g., we might really be moving).
If you disagree with what I've just written, then this is the subject that we ought to discuss. This issue is so fundamental to our differences in the philosophy of science that, if we have disagreement here, we'll be spinning our wheels on the other topics which we are discussing.
Post #109
And now, for the long-awaited part 3 ! Ok, ok, now for the plain old part 3.
Bugmaster wrote: When you find out that he's really an AI... are you going to reverse your stance? If you answer "no", then you're endowing Mr. X. with some very human attributes -- such as persuasiveness. If you answer "yes", then you're committing the "poisoning the well" fallacy (because Mr.X.'s logic is still valid and sound, regardless of his/its nature). And I don't see how you can remain agnostic on the topic, either.

harvey1 wrote: Let's say that I played chess over an e-mail system with people interested in playing chess. Let's say that later I found out that I was never playing with a human being, but it was all computerized. Does the new chess knowledge become useless knowledge? Of course not. Now, let's suppose that Mr. X advocates that I reduce pain by thought alone, and this approach really does work. I find that I have eliminated the need for morphine shots. Then I find out that Mr. X is a computer. Do I think that Mr. X was ever in pain before trying his new technique? No. However Mr. X came upon this new technique can be explained without Mr. X ever having any knowledge of pain, just like a computer winning at chess can be explained without the computer ever having any knowledge of chess.

Actually I disagree. I think that a computer that can beat you at chess has knowledge of chess; in fact, it has knowledge of chess that exceeds your own (because it beat you, after all). Sure, it doesn't know the history of chess, or the politics behind it, or whatever -- but as far as gameplay knowledge is concerned, it 0wns you. If you disagree, please explain to me how a computer can beat you at chess without having any knowledge of chess -- and please do it in a way that doesn't render the term "knowledge of chess" meaningless. I personally think that knowledge of any game is needed in order to excel at that game -- as long as there's skill involved, of course, and not mere random probability.
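To make the mechanism concrete, here's the classic move-picking algorithm, minimax, as a bare Python sketch. The evaluate, legal_moves, and apply_move callbacks are stand-ins for a real engine's rules and scoring function; whether searching a game tree like this counts as "knowledge of chess" is exactly the question at issue.

```python
def minimax(position, depth, maximizing, evaluate, legal_moves, apply_move):
    """Plain minimax game-tree search (no alpha-beta pruning, for clarity).

    The engine 'knows' nothing beyond what the three callbacks supply:
    which moves are legal, how a move changes the position, and a
    numeric score for a position. Everything else is brute search.
    """
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position), None
    best_move = None
    best_score = float("-inf") if maximizing else float("inf")
    for move in moves:
        score, _ = minimax(apply_move(position, move), depth - 1,
                           not maximizing, evaluate, legal_moves, apply_move)
        if (maximizing and score > best_score) or \
           (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move
```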
In any case, I quoted your responses above, just to underscore my point: it seems like you're implicitly endowing Mr.X. with human attributes -- such as the ability to effectively argue his case. Note that Mr.X. is not a mere SQL database. He/it doesn't just give you access to some "knowledge" -- he's actively persuading you to accept his point of view, by constructing valid and sound arguments. We have expert engines today that can build arguments based on available data (there's even a whole programming language, PROLOG, dedicated to the task), but none of them can go out and find justification for their claims -- and none of them can actually carry on a conversation with a human being in plain English (that's what PROLOG is for).
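For reference, this is roughly the sort of mechanical inference today's expert engines do -- naive forward chaining over hand-written rules. The facts and rules below are invented for the example; a real Prolog system uses backward chaining and unification, but the flavor is the same:

```python
def forward_chain(facts, rules):
    """Naive forward chaining: keep firing rules until nothing new is derived.

    facts is a set of strings; rules is a list of (premises, conclusion)
    pairs. The 'argument' for any derived conclusion is just the chain
    of rules that fired -- no understanding required.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Toy example (rules invented for illustration):
rules = [({"posts_on_forum"}, "acts_human"),
         ({"acts_human", "passes_turing_test"}, "treated_as_human")]
print(forward_chain({"posts_on_forum", "passes_turing_test"}, rules))
```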
You're dismissing Mr.X by asserting that he's merely a clever gadget. However, this would imply that the ability to carry on a stimulating conversation in a human language (such as English) is a purely computational activity, suitable for implementation in gadgets... which is exactly what I've been arguing for all along. You can argue that there's something else, in addition to the ability to feel and reason that makes us human (as opposed to bots), but then you have to show me what that mystery factor might be, and how I can detect it.
Bugmaster wrote: The aliens breathe sulfur dioxide, so you never see them outside of their encounter suit. They may be robots inside, you don't know. Now what?

harvey1 wrote: I don't attribute qualia to them since I have no way to determine if they are merely 1970 Commodore computers.

Again, your position is internally consistent, but I claim that it is absurd. You'd deny charity to pretty much anyone who looks, acts, and quacks... er... talks as though it's conscious, just on the off chance they might be a bot. This level of skepticism is extreme.
harvey1 wrote: No. Of course not. I'm saying that I need good reason to invest my time when the evidence suggests that I would be wasting my time warming up to a rock.

I believe you're begging the question here. If Strong AI bots are really soulless, then you shouldn't be able to "warm up" to them at all... right ? How can you possibly warm up to a rock, without being somewhat insane ?
Bugmaster wrote: Not really... If you look at how the DNA-copying enzymes work, the process is very similar to the read/write head of a Turing machine. Now, granted, DNA is also transcribed into proteins, which fold into all kinds of interesting shapes, but I'm talking about translation specifically, not transcription.

harvey1 wrote: Translation is also greatly affected by noise... As I said, there are no real Turing machines in nature. I'll agree that mRNA translation is an approximation.

I think you're just splitting hairs now. By your definition, computers aren't Turing machines either, because they're affected by noise (especially if you have some of that old, non-ECC RAM). Ok, I agree that computers and DNA/RNA aren't perfect Turing machines, but they come pretty close, don't you think ?
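Here's the analogy in code form -- a read head stepping along an mRNA "tape" three symbols at a time. The codon table is a toy four-entry fragment I made up (the real genetic code has 64 entries), and folding and noise are deliberately left out, which is of course exactly the idealization you're objecting to:

```python
# Toy fragment of the genetic code -- the real table has 64 codons.
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def translate(mrna):
    """Ribosome as an (idealized) Turing machine: a read head advances
    along the tape three symbols at a time, looks each codon up in a
    fixed table, and writes the corresponding output symbol."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3])
        if amino_acid is None or amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']
```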
harvey1 wrote: That's not what I claimed. I claimed that simulations do not produce the phenomenon. Again, BM, please, please try to avoid assertions like this. It is tiresome for me to correct such comments.

Now you know how I feel every time you say, "do you mean to say K-Bots are human ? do you ? huh ?" :-) Anyway, what is the primary difference between a solar panel and a simulation of photosynthesis ?
Bugmaster wrote: No, but it does mean that, from inside your room, you're free to make whatever assumption you want about its velocity. You can assume that it's moving, you can assume that it's at rest, whatever -- you have no way of telling. Thus, you're free to make whatever assumption is the most parsimonious. ...

harvey1 wrote: Doesn't this apply to what I'm saying about a world where many of the individuals we interact with are strong AI devices? If we have no means to show how an algorithm of pain works, then how can they experience pain? We do not know if we are in a room that is in motion in space (talking to a human), or in a room sitting still in space (talking to a bot). Without being able to determine the actual case, we should take the worst case scenario: e.g., a bot really does not have the feeling of pain since no algorithm can explain pain as a theory.

No, that's wrong. What we should do is take not the worst case scenario, but the most parsimonious scenario. We should accept the scenario that, all other things being equal, gives us the most explanatory power without "multiplying entities", as Occam would put it. So, you can indeed assume that bots who act perfectly human do lack something that humans have... but, without the ability to detect this "something", you're not justified in assuming that it exists.
Similarly, if you're sitting in the space-room, and you're not experiencing any acceleration, you're justified in assuming that the room is at rest, since you can't tell, in principle, whether it's moving or not -- and a room that is at rest will greatly simplify all your physics calculations.
For another example, the worst case scenario regarding our world is that we live in a perfectly constructed Matrix; however, that's not the most parsimonious assumption, and thus we're not justified in accepting it (i.e., Descartes is wrong).
harvey1 wrote: No, I see consciousness as an organized pattern of nature. If the organism doesn't have that particular organized pattern (i.e., according to a theory of consciousness which we don't have presently and may never have), then it isn't a conscious entity.

Bugmaster wrote: In this case, your concept of consciousness has no meaning, because you have no way of defining it or detecting it at all.

harvey1 wrote: It has meaning in terms of what a good theory of consciousness (or qualia, intentionality, etc.) would look like. A good theory should be a theory of what these certain patterns do with regard to how the organism operates.

This is a catch-22. You're saying that a good theory of consciousness will allow you to define what consciousness is, in order to construct a theory of it. The problem is that, until you can define what consciousness is, the word "consciousness" has no meaning. Your next statement is more reasonable:
harvey1 wrote: Eventually, we might be able to improve upon these patterns in certain creatures, and notice if their behavior changes to something that we might consider more intelligent or introspective behavior (e.g., sadness at the death of their partner, etc.). If we can predict their behavior, along with predicting other changes in the brain, then we are narrowing in on a theory of consciousness (qualia, intentionality, etc.).

However, this implies that we can use behavior to detect the presence of consciousness, which is something you've explicitly rejected in the past. I suppose it all depends on what you mean by "these patterns". If you mean some sort of dualistic qualia, souls, and such, then all you've done is replace one mystery term ("consciousness") with another ("qualia") -- which, again, you have absolutely no way of detecting. On the other hand, if by "these patterns" you mean "these patterns of behavior", then you're affirming my argument. On the third hand, if you mean "these physical structures in the brain", then you're once again claiming that the functionality of these structures cannot be duplicated using a Turing Machine -- and you'll have to explain why that is, without resorting to qualia or any other undetectable dualistic entities. Or you could mean something totally different, so I should probably stop guessing :-)
harvey1 wrote: Many physical phenomena are not detectable by instruments. For example, entropy is not a substance that is measurable by a spectrometer.

Bugmaster wrote: No, but you can calculate it using a calorimeter.

harvey1 wrote: I think that consciousness/qualia will eventually be shown to be a calculated property. The calculation will no doubt involve future brain scanning equipment.

Er, what is a "calculated property" ? You're coming up with new terms pretty quickly, it's hard for me to keep up :-/
Bugmaster wrote: All you've done is replace "consciousness" with "awareness". Practically, how would I detect, or calculate, the amount of consciousness in a system?

harvey1 wrote: Without a theory of consciousness (qualia, intentionality), there is no way to do so. (Just like before our understanding of thermodynamics we had no way to estimate entropy.)

No, it was worse than that. Before our understanding of thermodynamics, we had no concept of entropy at all, and no reason to believe that it existed (insofar as entropy can be said to exist). Thermodynamics is a theory that defines what entropy is. Without a similar theory of consciousness, or at least some experimental procedure that will allow us to reliably detect it, we are not justified in believing that it exists at all. That is indeed "how science works", as you'd put it.
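For what it's worth, here's what "calculating entropy" looks like once the theory exists -- the standard constant-pressure heating formula, with textbook numbers for water plugged in. Nothing is measured "directly"; the number falls out of measurable quantities plus the theory:

```python
import math

def entropy_change(mass_kg, specific_heat, t_initial_k, t_final_k):
    """Entropy change for heating a substance at constant pressure:
    dS = m * c * ln(T_final / T_initial). Entropy isn't read off any
    instrument; it's computed from mass, heat capacity, and two
    temperature readings -- quantities a calorimeter can supply."""
    return mass_kg * specific_heat * math.log(t_final_k / t_initial_k)

# 1 kg of water (c = 4186 J/(kg*K)) heated from 20 C to 80 C:
print(entropy_change(1.0, 4186.0, 293.15, 353.15))  # ~780 J/K
```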
There are plenty of crank pseudoscientists today who claim they've discovered antigravity, or psi-waves, or whatever. However, without evidence for their hypotheses, we are not justified in believing them. Consciousness -- at least, dualistic consciousness as you present it -- is in the same boat.
harvey1 wrote: I'm not talking about dualistic consciousness or dualistic qualia. I'm talking about the feeling of pain that you agreed that you feel. This is a phenomenon that science wishes to understand. To understand it requires a theory.

You're making a leap of faith here, by assuming that, just because I feel pain, this pain has an existence independent of myself. There's no reason to believe this. Furthermore, I think I've mentioned a couple of times that the interesting question is not whether I feel pain -- I know that I do -- but whether other people (or, other beings) feel pain, since I cannot detect their pain directly.
harvey1 wrote: It seems you disagree that computer technology and not science will understand these phenomena. I fundamentally disagree with that notion (if indeed this is what you believe).

Er, I'm not sure what this sentence means... What am I supposed to be disagreeing with ?
Bugmaster wrote: Hm, what role does classical chaos play in the human cardiovascular system? In other words, which aspect of the system cannot be understood, without appealing to chaos?

harvey1 wrote: A rather grim effect of chaos seems to be heart attacks. Let's hope that chaos research helps to detect and cure such maladies.

Argh, it's late, I'll read that article and respond to it some other time...
- harvey1
Post #110
BM, I don't mean to rush you, but are you going to reply to my last post, which I think raises a fundamental issue? The way I see it, believing that the predominant majority of humans experience pain is a decided issue and one we do not need to waste our time discussing. If you really think otherwise, then we need to address why you would hold this position.