Many threads regarding dualism, theism, and philosophy in general often run into this topic: is it even hypothetically possible to build a computer that will be sapient -- i.e., an artificial being that will think, feel, and socialize as humans do? Well, here's our chance to resolve this debate once and for all! Smoke 'em if you got 'em, people; this post is gonna be a long one.
I claim that creating Strong AI (another name for a sapient computer) is possible. We may not achieve this today or tomorrow, but it's going to happen sooner rather than later.
First, let me go over some of the arguments in favor of my position.
Pro: The Turing Test
Alan Turing, the father of modern computing (well, one of them), proposed this test in ages long past, when computers as we know them today did not yet exist. So, let me re-cast his argument in modern terms.
Turing's argument is a thought experiment, involving a test. There are three participants in the test: subject A, subject B, and the examiner E. A and B are chatting on AIM, or posting on this forum, or text-messaging each other on the phone, or engaging in some other form of textual communication. E is watching their conversations, but he doesn't get to talk. E knows that one of the subjects -- either A or B -- is a bona-fide human being, and the other one is a computer, but he doesn't know which one is which. E's job is to determine which of the subjects is a computer, based on their chat logs. Of course, in a real scientific setting, we'd have a large population of test subjects and examiners, not just three beings, but you get the idea.
Turing's claim is that if E cannot reliably determine which being -- A or B -- is human, then they both are. Let me say this again: if E can't tell which of the subjects is a computer, then they're both human, with all rights and privileges and obligations that humanity entails.
This seems like a pretty wild claim at first, but consider: how do you know that I, Bugmaster, am human? And how do I know that you're human, as well? All I know about you is the content of your posts; you could be a robot, or a fish, and it wouldn't really matter. As long as you act human, people will treat you as such (unless, of course, they're jerks who treat everyone like garbage, but that's another story). You might say, "well, you know I'm human because today's computers aren't advanced enough to post intelligently on the forums", but that doesn't prove much, since our technology is advancing rapidly all the time (and we're talking about the future, anyway).
So, if you're going to deny one of Turing's subjects his humanity, then you should be prepared to deny that humanity to everyone, which would be absurd. Therefore, a computer that acts human should be treated as such.
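At bottom, Turing's test is a statistical protocol, and it can be sketched in code. The harness below is purely illustrative -- the `examiner`, `human`, and `machine` callables are stand-ins I invented for this sketch, not anyone's actual experiment -- but it shows the pass criterion: a motivated examiner should do no better than chance.

```python
import random

def run_turing_trials(examiner, human, machine, n_trials=1000):
    """Repeated imitation-game trials; returns the examiner's accuracy.

    `human` and `machine` map a prompt to a reply; `examiner` maps the
    pair of transcripts (slots A and B) to a guess -- 0 or 1 -- of which
    slot holds the machine.
    """
    correct = 0
    for _ in range(n_trials):
        machine_slot = random.randint(0, 1)   # randomize seating each trial
        subjects = [machine, human] if machine_slot == 0 else [human, machine]
        transcripts = [s("Tell me about your morning.") for s in subjects]
        if examiner(transcripts) == machine_slot:
            correct += 1
    return correct / n_trials

# If the machine's replies are indistinguishable from the human's, even a
# determined examiner is reduced to guessing, and accuracy hovers near 0.5.
accuracy = run_turing_trials(
    examiner=lambda transcripts: random.randint(0, 1),
    human=lambda prompt: "Coffee first, then the bus.",
    machine=lambda prompt: "Coffee first, then the bus.",
)
```

On Turing's reading, the machine "passes" exactly when no examiner strategy pushes that accuracy reliably above chance.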
Pro: The Reverse Turing Test
I don't actually know the proper name for this argument, but it's sort of the opposite of the first one, hence the name.
Let's say that tomorrow, as you're crossing the street to get your morning coffee, you get hit by a bus. Your wounds are not too severe, but your pinky is shattered. Not to worry, though -- an experimental procedure is available, and your pinky is replaced with a robotic equivalent. It looks, feels, and acts just like your pinky, but it's actually made of advanced polymers.
Are you any less human than you were before the treatment?
Let's say that, after getting your pinky replaced, you get hit by a bus again, and lose your arm... which gets replaced by a robo-arm. Are you still human? What if you get hit by a bus yet again, and your left eye gets replaced by a robotic camera -- are you less human now? What if you get a brain tumor, and part of your brain gets replaced? And what if your tumor is inoperable, and the doctors (the doctors of the future, of course) are forced to replace your entire brain, as well as the rest of your organs? Are you human? If so, then how are you different from an artificial being built out of the same robotic components that your entire body now consists of?
Note that this isn't just idle speculation. People today already have pacemakers, glasses, prosthetic limbs, and yes, even chips implanted in their brains to prevent epileptic seizures (and soon, hopefully, Alzheimer's). Should we treat these people as less human than their all-natural peers? I personally don't think so.
Ok, I know that many of you are itching to point out the flaws in these arguments, so let me go over some common objections.
(to be continued below)
Is it possible to build a sapient machine?
Post #31
I think you still might be missing my point, Harvey. You're surprised that anyone might think it a matter of just writing a million lines of code on a quantum computer to achieve AI. This suggests to me that you think there's some essential trick we're all missing. I know that some AI foundations have this as their mission statement, but there is this other way of looking at it: we both agree that natural intelligence evolved incrementally. Now, I don't see much opportunity for natural selection to stumble on a single trick that suddenly transformed the brains of Archean creatures into a state of consciousness. It seems very probable that the onset of consciousness was likewise incremental.
Therefore I don't think we need to examine genetic code to uncover any trick. I think the answer lies in incrementally implementing the billions of individual process algorithms that evolution has networked together to deliver our own comprehensive qualia experiences. No trick, just hard slog -- which is why it hasn't been achieved so far, and which may mean that it is never practically achievable in the absence of an alternative design method. Such a method might be the recreation of a big chunk of the evolution of life in a genetic program running on a hypercomputer, but why bother? I think the important thing to realize is that the experience of, say, vision is already happening in an automated optical inspection system; it's just that if we could somehow measure the intensity of the qualia experienced, the system would score extremely low. I think the more interconnected and motivated we made it, the more that score would be pushed up towards our own.
harvey1 wrote: And, why would you have that kind of confidence? I suppose you think that way about the origin of the universe too? We know everything, is that it?

That's just the sort of statement I'd expect if I were "pushing someone's buttons". But that's not what I'm trying to do. I've been trying all along to show how I connect my reasoning to my faith. You never seem to acknowledge my suggestion that meaning, intelligence, qualia, etc. are all present to a minute degree in even the simplest of data processing systems. This explains to me why there is no watershed but a continuum. And yes, I agree that we don't know everything, but I'm quite sure that there are certain things that can't exist.
Post #32
I think you still might be missing my point, Harvey. You say that you're surprised anyone might think it a matter of just writing a million lines of code on a quantum computer to achieve AI. This suggests to me that you think there's some essential trick to it. I know that some AI foundations have this as their mission statement, but there is another way of looking at it: we both agree that natural intelligence evolved incrementally. So I ask: is it reasonable to imagine that natural selection stumbled on a single trick that suddenly transformed the brains of Archean creatures into a state of consciousness? It seems to me very much more likely that the onset of consciousness was also incremental -- starting out with the simplest data processing elements, like the acid-sensor-and-motor loop in the Paramecium.
Therefore I don't think we need to go looking into our genetic code for some sort of trick. I think the answer lies in the incremental implementation of billions of individual Paramecium-type loops, networked together to meet the billions of requirements behind our own comprehensive set of responses. No trick, just hard slog -- which is why it hasn't been achieved so far, and which may mean that it is never practically achievable in the absence of an alternative design method. Such a method might be the recreation of a big chunk of the evolution of life inside a genetic program running on a hypercomputer, but why bother? I think the important thing to realize is that the experience of, say, vision is already happening in an automated optical inspection system; it's just that if we could somehow measure the intensity of the qualia experienced, the system would only achieve a microscopic score. I think the more interconnected and motivated we made it, the more that score would be pushed up towards our own.
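As an illustration only -- the thresholds and the one-dimensional "pond" below are invented for this sketch, not taken from biology -- here is roughly what one such Paramecium-style sensor-motor loop amounts to in code:

```python
def paramecium_step(position, heading, acidity):
    """One tick of a minimal sense-act loop.

    The organism senses local acidity and adjusts its heading: back away
    when the water is too acidic, push forward when it is too dilute to
    find food, otherwise keep going. Then it moves one step.
    """
    level = acidity(position)
    if level > 0.6:        # avoidance reaction: too acidic, reverse
        heading = -1
    elif level < 0.4:      # appetitive drive: too dilute, advance
        heading = 1
    return position + heading, heading

# A one-dimensional pond whose acidity rises to the right.
acidity = lambda x: x / 10.0

position, heading = 0, 1
for _ in range(20):
    position, heading = paramecium_step(position, heading, acidity)
# The loop settles into shuttling back and forth around its comfortable band.
```

Nobody would call this loop conscious on its own; the suggestion above is only that networking billions of such loops together is the hard slog by which the full repertoire of responses might be assembled.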
harvey1 wrote: And, why would you have that kind of confidence? I suppose you think that way about the origin of the universe too? We know everything, is that it?
Good grief Harvey. That statement of yours seems to be more than a little tinged with bitterness. I've been trying all along to show where I get my confidence from. You never seem to acknowledge my suggestion that meaning, intelligence, qualia etc. are all present to a microscopic degree in even the simplest of data processing systems. This explains to me why there is no magic watershed but a continuum. And yes I agree that we don't know everything, but the sort of logic I've used here can help us eliminate an awful lot of unknowns.
- harvey1
- Prodigy
- Posts: 3452
- Joined: Fri Nov 26, 2004 2:09 pm
- Has thanked: 1 time
- Been thanked: 2 times
Post #33
QED wrote: You never seem to acknowledge my suggestion that meaning, intelligence, qualia etc. are all present to a microscopic degree in even the simplest of data processing systems. This explains to me why there is no magic watershed but a continuum.

Sure there is a continuum; I believe I've acknowledged that on a few occasions. However, I don't see any mention of complex systems and self-organization, which we barely understand at this point. Our understanding of computing is still largely ignorant of how to put these self-organizing principles to heavy work. Don't get me wrong, there's interesting cellular automata work going on, etc. However, we've just scratched the surface, and haven't come close to programming cognitive functions using an AI approach that takes full advantage of self-organization. Ben Goertzel, I think, is one of the very first researchers heading in the right direction with regard to strong AI.
QED wrote: This suggests to me that you think there's some essential trick to it.... I ask is it reasonable to imagine that natural selection stumbled on a single trick that suddenly transformed the brains of Archean creatures into a state of consciousness?

Not suddenly, no, but probably an evolving self along the lines of what Goertzel has in mind. Of course, nobody knows, but I think the traditional logic-AI approach is way off.
QED wrote: Therefore I don't think we need to go looking into our genetic code for some sort of trick. I think the answer lies in the incremental implementation of billions of individual Paramecium-type loops networked together to meet the billions of requirements behind our own comprehensive set of responses. No trick, just hard slog, which is why it hasn't been achieved so far and may mean that it is never practically achievable in the absence of an alternative design method. Such a method might be the recreation of a big chunk of the evolution of life inside a genetic program running on a hypercomputer, but why bother? I think the important thing to realize is that the experience of, say, vision is already happening in an automated optical inspection system; it's just that if we could somehow measure the intensity of the qualia experienced, the system would only achieve a microscopic score. I think the more interconnected and motivated we made it, the more that score would be pushed up towards our own.

Very naive, QED. Very naive. I suggest that you read Goertzel's paper that I referenced, and then maybe you'll see that there are "tricks" that might need to be exercised before we can even begin on the path of strong AI. What we don't know is whether 4 billion years of evolution can be condensed down to the timeframes that humans would need in order to bring about strong AI (assuming we have the resources and intelligence to do it).
QED wrote: Good grief Harvey. That statement of yours seems to be more than a little tinged with bitterness. I've been trying all along to show where I get my confidence from. You never seem to acknowledge my suggestion that meaning, intelligence, qualia etc. are all present to a microscopic degree in even the simplest of data processing systems. This explains to me why there is no magic watershed but a continuum. And yes I agree that we don't know everything, but the sort of logic I've used here can help us eliminate an awful lot of unknowns.

Well, your "confidence" looks naive, and even somewhat insulting to the complexity of the challenges that lie ahead. I'm also confident that cognition is a dynamical system, and therefore I'm not so confident that we know the large number of tricks that are out there. This is a very long process, and one in which we know very little, I'm afraid.
Post #34
harvey1 wrote: Ouch... You mean if my computer projects the voice of Claire Danes, projects the image of Claire Danes, then I should treat my computer as if she is Claire Danes? Hmm... Sounds like people might start doing weird stuff to their computers.

Ok, I don't know who Claire Danes is, but I can guess. Anyways, if Claire Danes were your email pal, or an AIM chat buddy, would you extend the principle of charity to her -- even though you'd never have a chance to see her face-to-face? If not, then why not? If yes, then you are implicitly affirming my statement above. All you know about your AIM chat buddy is what she says on AIM; you have no access to the qualia in her head. So, you assume she's human, due to the way she behaves. This makes all these inner qualia kind of irrelevant.
You say:

harvey1 wrote: Why? I extend charity because I have sufficient reason to think that other humans are like myself and have qualia experiences.

But it seems that your criterion for judging whether a given entity has qualia is the behavior of that entity. So, we can dispense with qualia and just use behavior to make judgements.
Think about it this way. We're having an intelligent (ok... semi-intelligent, heh) conversation on this thread. You are earnestly responding to my posts. What will you do if, tomorrow, you find out that I'm an AI? In practice, how will this change your perspective on any of our previous (and future!) conversations?
harvey1 wrote: Programmers aren't always considering the philosophical issues, if they do so at all. What they are doing is writing programs that have certain functions. If a programmer writes a program with the function to act like Claire Danes, then I can guarantee that the programmer might walk away satisfied, but they have no more re-created the person she is than a chemist who has made sodium chloride.

The chemist who made sodium chloride has replicated salt, not Claire Danes. But that's actually an interesting point.
Let's say that we take Claire Danes's brain and upload it into a computer. We now have a virtual Claire Danes, who would correspond with people in the same way that the biological Claire Danes would. When C.D. is talking to you on AIM, you'd have no way of knowing which version was actually chatting with you.
Are both C.D.s human? If not, why not, and -- most importantly -- how would you tell, assuming that AIM was your only way of communicating with them?
Now, let's go one step further, and say that someone constructs an artificial personality that behaves just like C.D... The same questions apply.
harvey1 wrote: Okay. Try this. Have someone you know drop a large rock on your bare feet from 3-4 feet high. After you see the rock hit your feet, ask yourself if you feel pain. If you do, then qualia are real.

Yeah, either that, or maybe the nerve endings in my feet are transmitting a chemical signal into my brain, which causes the neurons in my head to fire in a different pattern than they did before, and that pattern is what we call "pain". Sorry, you'll have to do better than that.
harvey1 wrote: For example, if I tried to befriend Deep Blue (e.g., write it letters of admiration for beating a chess master), I think I would be making a severe mistake in judgement in thinking that I'm actually conversing with a personable entity.

Yes, and you'd also make the same mistake if you tried befriending your salt shaker. Deep Blue does not pass the Turing Test, so your analogy is false.
harvey1 wrote: Natural selection is a mechanism by which evolutionary changes occur; however, there is a story for every adaptation in terms of how a certain trait actually evolved. What was the sequence of events that allowed a particular trait to evolve.... We have to analyze our brains for such information, but there is no indication that this approach will help us succeed in understanding how p-consciousness emerges from the brain.

This is, of course, assuming that "p-consciousness" actually exists -- which I doubt. I think QED has already answered your questions about evolution better than I could. I'll just point out that you're singling out consciousness for some reason. You don't ask for the detailed sequence of events that caused claws to evolve, or fur, or gills... just consciousness. I understand that, given your dualism, it makes sense to single out consciousness -- but you can't use that to justify dualism in the first place.
Bugmaster wrote: In any case, I should again point out that evolution is not necessarily the only way to develop intelligence.

harvey1 wrote: What do you mean by "intelligence"?

Intelligence, sapience, consciousness, sentience... I pretty much use them interchangeably on this thread, even though I know I shouldn't :-(
harvey1 wrote: All I'm saying is that we know we have good reasons to believe that p-consciousness evolved, but there are no good reasons to believe that we will trivially solve how p-consciousness works.

Well, not trivially, but then, nothing interesting is really trivial. Flight, for example, is non-trivial. However, I see no reason why we wouldn't eventually solve both problems: a) how our consciousness works, and b) how to construct a Strong AI (and not necessarily in that order). It seems like you agree with me, though.
harvey1 wrote: It just leads me to be skeptical that this is a trivial problem. It isn't just a matter of writing a million lines of code on a quantum computer. I'm surprised that you seem to think that it is.

Er, I actually don't think quantum computers are required; I think it can be done on a ye olde Slashdot-grade Beowulf cluster :-) But, yes, I do acknowledge that the problem is very difficult, and might take more than a million lines of code. However, since there are many smart people working on it in order to get rich, I do believe that the problem will be solved sooner (sometime during this millennium) rather than later (heat death of the Universe).
Post #35
Bugmaster wrote: Think about it this way. We're having an intelligent (ok... semi-intelligent, heh) conversation on this thread. You are earnestly responding to my posts. What will you do if, tomorrow, you find out that I'm an AI? In practice, how will this change your perspective on any of our previous (and future!) conversations?

You know very well what he would do. He would feel "tricked", in the sense that he has not spoken to a "real person". And now we get to the problem with it all: what is a "real" person?
Clearly AI is, or will be, good enough to trick us. This, of course, does not change the physical facts, but mentally, we will be able to produce machines superior to us that can do the same things we can.
We will be interchangeable....
Imagine a SIMPLE calculator. Can you calculate anything in a second? The calculator can; hence, it is superior to you in this sense. What is the problem with seeing a machine made to THINK better than you? That is not very hard, considering the average human is pretty dumb.
Post #36
Bugmaster wrote: Ok, I don't know who Claire Danes is, but I can guess. Anyways, if Claire Danes were your email pal, or an AIM chat buddy, would you extend the principle of charity to her -- even though you'd never have a chance to see her face-to-face? If not, then why not? If yes, then you are implicitly affirming my statement above. All you know about your AIM chat buddy is what she says on AIM; you have no access to the qualia in her head. So, you assume she's human, due to the way she behaves. This makes all these inner qualia kind of irrelevant.

Yikes, you don't know who Claire Danes is? You must be sitting in the middle of the Gobi desert...

Be that as it may, I think that I can safely extend charity to any language speaker. That's not because I cannot be tricked; it's just that I am skeptical that a computer could interact with me. So, that leaves only human interaction, and therefore my experience is such that only humans are like me, and therefore they must experience qualia as I do. However, a century from now (or 50 years from now), extending charity to someone might be very difficult indeed. Robots and computers might be able to respond to language, and they might even look and sound human. We'll probably need some kind of tag printed on the side of an android's neck to see whether it is human (i.e., we can extend charity) or not.
Now, if strong AI efforts succeed -- that is, if we feel justified that we understand p-consciousness in algorithmic terms -- then humans in the future may extend charity to artificial life, once sufficient numbers of AI creatures deserve our unquestioned extension of charity. Perhaps there would be an intermediate period where we'll still look at the tag on the side of their necks, but we'll use red tags for AI-PC (p-conscious) beings to allow us to quickly feel comfortable talking to a real personable being.
Bugmaster wrote: ...it seems that your criteria for judging whether the given entity has qualia is the behavior of that entity. So, we can dispense with qualia and just use behavior to make judgements.

Currently, yes. That's only because behavior is, right now, a 100% reliable identifier of qualia. However, in the future that may not be the case, and that's a distinction we should keep in mind, since qualia are not replaceable with behavior in principle.
Bugmaster wrote: You are earnestly responding to my posts. What will you do if, tomorrow, you find out that I'm an AI?

Steal you and sell you on eBay. (I imagine I'll have to give a little warning to any religious buyer of that fine AI hardware of yours....)
Bugmaster wrote: In practice, how will this change your perspective on any of our previous (and future!) conversations?

In this situation, not much. I try not to share feelings or look for empathy here, so I really risk nothing if you happened to be an AI machine. However, if I were confiding personal feelings, etc., then I certainly wouldn't want to feel that I'd been befriended by a machine, since, after all, I would have no more experienced real friendship than if I had befriended a power saw.
Bugmaster wrote: Let's say that we take Claire Danes's brain, and upload it into a computer. We now have a virtual Claire Danes, who would correspond with people in the same way that the biological Claire Danes would. When C.D. is talking to you on AIM, you'd have no way of knowing which version was actually chatting with you. Are both C.D.s human? If not, why not, and -- most importantly -- how would you tell, assuming that AIM was your only way of communicating with them?

If I knew that Claire were uploadable (which she is -- just in the current cyber sense), then I certainly would have to look to the government to make sure that I'm not chatting with counterfeit uploadings of individuals. I suppose the U.S. Secret Service, with their experience with monetary counterfeiting crimes, would handle such matters in the US. However, if the government just allowed AI imitations to pass themselves off as genuine people, then I would think that the desire to know whether we are right in extending charity to our friends and lovers would have us making use of some very exotic private detective services. I wouldn't marry a power saw, for example; would you? So, we'd certainly see a rise in such trustworthy services (along with government regulations and enforcement agencies) to make sure that people feel comfortable extending charity in areas where we feel it is within our rights to expect such comfort.
Bugmaster wrote: Now, let's go one step further, and say that someone constructs an artificial personality that behaves just like C.D... The same questions apply.

As long as I didn't think my personal investment in the AI version of C.D. was that important, I wouldn't feel improper in not extending charity to C.D. (or, for fun, I might extend charity and not care if I am wrong). After all, I use a power saw (used one last night, as a matter of fact), and I have no problem in not extending charity to it.
Bugmaster wrote: Yeah, either that, or maybe the nerve endings in my feet are transmitting a chemical signal into my brain, which causes the neurons in my head to fire in a different pattern than they did before, and that pattern is what we call "pain". Sorry, you'll have to do better than that.

That just brings me back to my point:

Show me how in principle qualia can emerge without waving magic wands of complex programs. If you cannot show it, then it is faith on your part to believe that it is a trivial problem.

You need to show how neural firing patterns bring about the actual feeling of horrendous pain, versus a red light indicator flashing on your fingernail that means you should acknowledge a big rock crushing your foot.
Bugmaster wrote: Deep Blue does not pass the Turing Test, so your analogy is false.

It doesn't matter whether Deep Blue passed the Turing Test or not. The point is that we have no reason to believe there's a function operating that wasn't actually programmed for. Passing a Turing Test requires different programming than having qualia. To have qualia, you need to show how that function is actually implemented. You can't just wave the wand and say, "Shazam!" That's not reasonable.
Bugmaster wrote: This is, of course, assuming that "p-consciousness" actually exists -- which I doubt.

Okay, doubt p-consciousness if you like, but regardless of whether you doubt it, you also happen to experience what it is like to be p-conscious. That experience is still a function. The function must be explained in algorithmic terms.
Bugmaster wrote: I think QED has already answered your questions about evolution better than I could. I'll just point out that you're singling out consciousness for some reason. You don't ask for the detailed sequence of events that caused claws to evolve, or fur, or gills... just consciousness. I understand that, given your dualism, it makes sense to single out consciousness -- but you can't use that to justify dualism in the first place.

In my opinion, you are overusing the term "dualism" unjustly here. I think that there are reasons for consciousness that can be understood in physical terms. However, I admit that I don't know what kind of science that will ultimately be, although I speculate that it will be complexity science. In your case, you don't act as if there is a problem at all. You seem to be suggesting that a complicated program that mimics a human is in fact a human. That seems absurd to me, but it seems to be your position.
The reason that claws can be understood in algorithmic terms and p-consciousness cannot is that we can explain all the functions that claws display with our current level of understanding. We cannot explain all the functions that are displayed in p-consciousness (e.g., qualia), and therefore p-consciousness is not understood. If we try to understand how it is that evolution accomplished p-consciousness, we have no explanation of how it occurred (naive explanations notwithstanding).
Bugmaster wrote: Well, not trivially, but then, nothing interesting is really trivial. Flight, for example, is non-trivial. However, I see no reason why we wouldn't eventually solve both problems: a) how our consciousness works, and b) how to construct a Strong AI (and not necessarily in that order). It seems like you agree with me, though.

I think we have good reason to believe that strong AI is possible in principle, but we cannot say right now whether it will ever be achievable. If there are "self" programs made that start to show the sophistication of a rabbit, then I would say we have good reason to be pretty optimistic. Right now I'd say that we have no basis for knowing either way. If I had to guess, I would think that within a few hundred years we might succeed. It's just a guess.
Bugmaster wrote: Er, I actually don't think quantum computers are required; I think it can be done on a ye olde Slashdot-grade Beowulf cluster :-) But, yes, I do acknowledge that the problem is very difficult, and might take more than a million lines of code. However, since there are many smart people working on it in order to get rich, I do believe that the problem will be solved sooner (sometime during this millennium) rather than later (heat death of the Universe).

Perhaps. However, I'm puzzled why you think the problem is complex, since you seem to say that mimicking is good enough to produce p-conscious AI machines. Are you saying that you think there is more to it than mimicking?
Post #37
I think Bugmaster is on the same wavelength as me here, so condensing some of Harvey's statements:
The one thing that nobody has tried yet is to simply go ahead and create something out of a massive number of subsystems. They haven't tried it because they know it would take an impossible amount of effort. That's why AI research was trying to find a shortcut. From my POV it's no wonder that they ran into a brick wall.
Let me know, Bugmaster, if you disagree with what I'm about to say, but I think we're both suggesting that it will just take a helluvalot of mimicry before you get the right system response on both the inside and the outside. That's not so much complex as plain difficult. Those terms often get used interchangeably. However, in this case, complexity is all about cleverness, cunning and economy -- and while evolution is able to deliver all these properties, it goes about it in a very different way to how we would approach things when we sit down at the drawing board. Evolution does it the hard way, step by step. Hence my confidence that we too can produce this thing you call p-consciousness if we go about it by assembling a sufficient number of p-conscious sub-systems.
Harvey in a nutshell wrote:Very naive, QED. Very naive. I suggest that you read Goertzel's paper that I referenced, and then maybe you'll see that there are "tricks" that might need to be exercised before we can even begin on the path of strong AI...
To have qualia you need to show how that function is actually implemented...
regardless of whether you doubt it, you also happen to experience what it is like to be p-conscious. That experience is still a function. The function must be explained in algorithmic terms...
You seem to be suggesting that a complicated program that mimics a human is in fact a human. That seems absurd to me, but it seems to be your position...
We cannot explain all the functions that are displayed in p-consciousness (e.g., qualia), and therefore p-consciousness is not understood...
I'm puzzled why you think the problem is complex, since you seem to say that mimicking is good enough to produce p-conscious AI machines. Are you saying that you think there is more to it than mimicking?
Ben Goertzel wrote:Very few AI researchers carry out research aimed explicitly at the goal of producing thinking computer programs. Instead the field of AI has been taken over by the specialized study of technical sub-problems. The original goal of the field of AI -- producing computer programs displaying general intelligence -- has been pushed off into the indefinite future.
"Technical sub-problems" here sounds an awful lot like the evolutionary approach that I'm talking about. Perhaps you're finding it impossible to imagine that the experience of looking out through your eyes and seeing your fingers hitting keys as you deliver a stream of your thoughts might also be experienced, by degree, in other things. Such incredulity could be the basis of believing in a duality. But here you could easily be assuming something that is non-existent all along. If somebody defines qualia as having some mystical component in this way, it is simply not appropriate for anyone to demand an explanation for it.
- harvey1
- Prodigy
- Posts: 3452
- Joined: Fri Nov 26, 2004 2:09 pm
- Has thanked: 1 time
- Been thanked: 2 times
Post #38
QED wrote:...it will just take a helluvalot of mimicry before you get the right system response on both the inside and outside. That's not so much complex as plain difficult.
But why should AI mimicry of Claire Danes bring about qualia? Again, both of you are conjuring up magical spells if you think mimicking actions is what brings about qualia. We can already mimic a human's ability to do many tasks; are you suggesting that the computer has the qualia experience of those particular tasks? Show me how this happens in the code of a computer. I want to see the code.
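To make harvey1's challenge concrete, here is a toy sketch (all names are invented for illustration, not drawn from any real AI system) of what mere behavioral mimicry looks like in code: every pain *response* is reproduced on the outside, while nothing in the program corresponds to anything being felt.

```python
# Hypothetical sketch: a program that mimics pain behavior via a
# lookup table. The outward function is reproduced; there is nothing
# in the code that feels anything -- which is harvey1's point.

PAIN_RESPONSES = {
    "pinprick": "Ouch!",
    "burn": "That really hurts!",
    "headache": "I need an aspirin.",
}

def mimic_pain(stimulus: str) -> str:
    """Return a human-sounding response to a painful stimulus."""
    return PAIN_RESPONSES.get(stimulus, "I feel fine.")

print(mimic_pain("burn"))      # prints: That really hurts!
print(mimic_pain("sunshine"))  # prints: I feel fine.
```

Whether piling up vastly many such sub-systems changes the picture is exactly what the two sides of this thread dispute.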
QED wrote:"Technical sub-problems" here sounds an awful lot like the evolutionary approach that I'm talking about.
I would agree with Goertzel that evolution did not develop p-consciousness using those approaches. That's not to say that evolution didn't take advantage of many different approaches; it is merely to say that this is not the key to how p-consciousness arose on this planet.
QED wrote:Perhaps you're finding it impossible to imagine that the experience of looking out through your eyes and seeing your fingers hitting keys as you deliver a stream of your thoughts might also be experienced, by degree, in other things.
Not really. If you show me the code (or pseudo-code is okay), then I can find it possible. Rather than post a few hundred lines of code though, just tell me in your own words how this qualia-like function occurs in software by mimicry.
QED wrote:Such incredulity could be the basis of believing in a duality.
I'm curious as to why you think dualism is at a disadvantage here. You recognize your individual self as someone who has thoughts, beliefs, etc., so it is your position that needs defending. The dualistic view is the de facto position. It is the position in which we experience every waking moment.
QED wrote:If somebody defines qualia as having some mystical component in this way it is simply not appropriate for anyone to demand an explanation for it.
No, it's not mystical. It is the experience of feeling p-consciousness and the experience of qualia. It is a function that any philosophy of mind should account for.
QED wrote:The one thing that nobody has tried yet is to simply go ahead and create something out of a massive number of subsystems. They haven't tried it because they know it would take an impossible amount of effort. That's why AI research was trying to find a shortcut. From my POV it's no wonder that they ran into a brick wall.
Why would a massive number of subsystems give you qualia? Please tell me without conjuring up magic...
Post #39
You're asking me why I think a massive number of subsystems gives rise to qualia. I've already answered this, but I think that because you're totally convinced that "The dualistic view is the de facto position. It is the position in which we experience every waking moment," you've lost the ability to join up all the dots. I'll try again, though.
Seeing as shortcuts to AI lead to roadblocks, and evolution arrived at us through a colossal aggregation of smaller subsystems, I'm saying that qualia is present to a degree in all things. This is only me talking in your language, though; it's no more mysterious than the emergence of compounds with properties different from those of their constituent elements. Thus I'm saying that although this thing we experience within our own minds might seem to us to be special, mystical (or whatever), this is what matter feels like when it is arranged in this sort of configuration. Perhaps you're too wowed by the experience. There are plenty of other big wows out there that have surprisingly mundane origins.
Of course if you hold qualia as something uber special you're not going to accept any of this.
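QED's emergence analogy can at least be illustrated in code. The sketch below (a hypothetical toy, not anything from the AI literature) is a one-dimensional majority-rule automaton: each cell follows a purely local rule, yet a global property -- unanimous consensus -- emerges that no individual cell computes. Whether the analogy carries over to qualia is precisely what is in dispute.

```python
# Toy emergence sketch: each cell copies the majority value of itself
# and its two ring neighbours. No cell "knows" the global state, yet
# the system as a whole settles into a global consensus.

def step(cells):
    """One synchronous update of the majority rule on a ring."""
    n = len(cells)
    return [
        1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
        for i in range(n)
    ]

cells = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]  # scattered dissenters
for _ in range(10):
    cells = step(cells)
print(cells)  # prints: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```

The emergent behavior here is mundane, which is QED's broader point about "big wows" with mundane origins; harvey1's objection, of course, is that nothing analogous has been shown for the feeling itself.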
harvey1 wrote:both of you are conjuring up magical spells if you think mimicking actions is what brings about qualia.
I'd say that you and others have started out by defining qualia as magic and are now in a position to accuse all attempts to explain it of appealing to magic.
Post #40
QED wrote:qualia is present to a degree in all things. This is only me talking in your language though, so it's no more mysterious than the emergence of compounds with different properties to their constituent elements. Thus I'm saying that although this thing we experience within our own minds might seem to us to be special, mystical (or whatever) this is what matter feels like when it is arranged in this sort of configuration.
Okay, you've compared this primitive level of p-consciousness to simple appliances and computers. Can you show me code that gives an organism some primitive version of qualia? Perhaps if I can see how qualia is programmed as a function, then I'll understand what it is that a massive number of subsystems magically enables.
QED wrote:Perhaps you're too wowed by the experience. There are plenty of other big wows out there that have surprisingly mundane origins.
What I don't understand is that if you are going to say that qualia mysteriously arise in AI machines by joining a massive number of subsystems, then why not say that ESP and astral projection also magically emerge by joining a massive number of subsystems together? It seems that your only reason for not doing so is that you don't believe in ESP and astral projection; but if you did, I don't see what could or would stop you from making that claim.
QED wrote:Of course if you hold qualia as something uber special you're not going to accept any of this.
If you said to me that I hold ESP and astral projection as special phenomena that need a specific mechanism in place to account for their hypothetical existence (and that therefore I wouldn't accept your explanation of them just happening from joining a massive number of AI sub-systems), then I wouldn't hesitate to agree. Of course, I think you would expect and applaud me for agreeing with that kind of statement, right? So why don't you applaud me for agreeing that you need an explanation for any emergent phenomenon? Just joining sub-systems that have nothing to do with qualia functions is conjuring up magic. How can I get you to see that?
QED wrote:I'd say that you and others have started out by defining qualia as magic and are now in a position to accuse all attempts to explain it of appealing to magic.
I don't define qualia as magic. I define pain as the awful feeling of being hurt (for example). I want to know how joining a massive number of sub-systems creates this awful feeling of being hurt. I think all you would end up with is a register (acting as a flag) that has been set to indicate being in pain. The flag doesn't tell me why we feel pain. It doesn't tell me why something really hurts. Perhaps Bayer and Tylenol would go out of business if that were the case. I would look at the flag ("you have a headache"), and I would say, "thank goodness it's a flag, I'll just reset that to zero."
I want to see reasons, not faith. I don't share your faith that qualia comes about by merely adding lines to algorithms. I want to know what the algorithms are, and how they work. If all you are going to do is appeal to more lines of code that are supposed to do the job, then I can't help but think that you are conjuring up magic as part of your explanation.
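harvey1's flag analogy can be made literal with a short sketch (all names hypothetical, invented for this illustration). Pain here is nothing but a register: set it and the agent reports a headache; clear it and the "pain" is gone. The sketch shows exactly what the flag does, and thereby what it leaves unexplained: why anything should actually hurt.

```python
# Hypothetical toy: harvey1's pain register, taken at face value.
# The flag captures the reporting function of pain and nothing else.

class ToyAgent:
    def __init__(self):
        self.pain_flag = 0  # the register harvey1 describes

    def injure(self):
        self.pain_flag = 1  # "being in pain" is just this assignment

    def report(self):
        return "you have a headache" if self.pain_flag else "feeling fine"

agent = ToyAgent()
agent.injure()
print(agent.report())  # prints: you have a headache
agent.pain_flag = 0    # "I'll just reset that to zero"
print(agent.report())  # prints: feeling fine
```

On harvey1's view, any subsystem-aggregation account must say what more than this flag is going on; on QED's view, enough interacting registers of this kind may be all there ever was.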