Topic
Many threads regarding dualism, theism, and philosophy in general often run into this topic. Is it even hypothetically possible to build a computer that will be sapient -- i.e., an artificial being that will think, feel, and socialize as humans do ? Well, here's our chance: to resolve this debate once and for all ! Smoke 'em if you got 'em, people, this post is gonna be a long one.
I claim that creating Strong AI (which is another name for a sapient computer) is possible. We may not achieve this today, or tomorrow, but it's going to happen sooner rather than later.
First, let me go over some of the arguments in favor of my position.
Pro: The Turing Test
Alan Turing, the father of modern computing (well, one of them), proposed this test in ages long past, when computers as we know them today did not yet exist. So, let me re-cast his argument in modern terms.
Turing's argument is a thought experiment, involving a test. There are three participants in the test: subject A, subject B, and the examiner E. A and B are chatting on AIM, or posting on this forum, or text-messaging each other on the phone, or engaging in some other form of textual communication. E is watching their conversations, but he doesn't get to talk. E knows that one of the subjects -- either A or B -- is a bona-fide human being, and the other one is a computer, but he doesn't know which one is which. E's job is to determine which of the subjects is a computer, based on their chat logs. Of course, in a real scientific setting, we'd have a large population of test subjects and examiners, not just three beings, but you get the idea.
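For the programmers in the audience, here's a toy sketch of the test harness in Python, before I state Turing's claim. The chat functions and the examiner are stand-in callables I made up -- nobody has written the actual bot, which is the whole debate:

```python
import random

def trial(human_chat, machine_chat, examiner_guess, rounds=20):
    """One run of the test: subjects A and B (one human, one machine,
    seated at random) exchange text messages; the examiner E sees only
    the transcript and must name the machine."""
    seats = [human_chat, machine_chat]
    random.shuffle(seats)                  # E doesn't know who sits where
    transcript, msg = [], "hello"
    for _ in range(rounds):
        for seat, chat in enumerate(seats):
            msg = chat(msg)                # each subject replies in turn
            transcript.append((seat, msg))
    guess = examiner_guess(transcript)     # 0 or 1: which seat is the bot?
    return seats[guess] is machine_chat

def examiner_hit_rate(human_chat, machine_chat, examiner_guess, trials=1000):
    """If this hovers around 0.5 (chance), E can't reliably tell the
    subjects apart."""
    return sum(trial(human_chat, machine_chat, examiner_guess)
               for _ in range(trials)) / trials
```

Plug in a chatterbot for machine_chat and any judging rule you like for examiner_guess; the question is whether any bot can keep that hit rate pinned at chance against a clever examiner.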
Turing's claim is that if E cannot reliably determine which being -- A or B -- is human, then they both are. Let me say this again: if E can't tell which of the subjects is a computer, then they're both human, with all rights and privileges and obligations that humanity entails.
This seems like a pretty wild claim at first, but consider: how do you know that I, Bugmaster, am human ? And how do I know that you're human, as well ? All I know about you is the content of your posts; you could be a robot, or a fish, it doesn't really matter. As long as you act human, people will treat you as such (unless, of course, they're jerks who treat everyone like garbage, but that's another story). You might say, "well, you know I'm human because today's computers aren't advanced enough to post intelligently on the forums", but that doesn't prove much, since our technology is advancing rapidly all the time (and we're talking about the future, anyway).
So, if you're going to deny one of Turing's subjects his humanity, then you should be prepared to deny this humanity to everyone, which would be absurd. Therefore, a computer that acts human should be treated as such.
Pro: The Reverse Turing Test
I don't actually know the proper name for this argument, but it's sort of the opposite of the first one, hence the name.
Let's say that tomorrow, as you're crossing the street to get your morning coffee, you get hit by a bus. Your wounds are not too severe, but your pinky is shattered. Not to worry, though -- an experimental procedure is available, and your pinky is replaced with a robotic equivalent. It looks, feels, and acts just like your pinky, but it's actually made of advanced polymers.
Are you any less human than you were before the treatment ?
Let's say that, after getting your pinky replaced, you get hit by a bus again, and lose your arm... which gets replaced by a robo-arm. Are you still human ? What if you get hit by a bus again, and your left eye gets replaced by a robotic camera -- are you less human now ? What if you get a brain tumor, and part of your brain gets replaced ? And what if your tumor is inoperable, and the doctors (the doctors of the future, of course) are forced to replace your entire brain, as well as the rest of your organs ? Are you still human ? If so, then how are you different from an artificial being built out of the same robotic components that your entire body now consists of ?
Note that this isn't just idle speculation. People today already have pacemakers, glasses, prosthetic limbs, and yes, even chips implanted in their brains to prevent epileptic seizures (and soon, hopefully, Alzheimer's). Should we treat these people as less human than their all-natural peers ? I personally don't think so.
Ok, I know that many of you are itching to point out the flaws in these arguments, so let me go over some common objections.
(to be continued below)
Is it possible to build a sapient machine ?
Post #81
harvey1 wrote: Before we can accept that they [gadgets] have the same biological functionality with regard to feeling pain, we should have good reason to think this. We don't have any reason to think this. In fact, we have some good reasons not to think it, since we know the algorithms and engineering designs of the gadgets that are manufactured in the world so well.

Bugmaster wrote: Ah ! This sounds exactly like biological naturalism: "only biological creatures can feel pain, therefore it's impossible to create a device that feels pain". But all you've done is push the problem down a level. Why is it that only biological creatures can feel pain ? Because pain has evolved ? Ok, what's so special about pain that it absolutely must evolve and cannot be constructed, as opposed to other human functions (breathing, etc.) that have evolved but can be constructed as well ? Is the answer, "because pain requires qualia" ? But then, what are qualia, how do they cause pain, and why is it that they absolutely must evolve and cannot be constructed ? Is it because qualia are emergent properties ? Why can't we construct a machine that will generate such properties ?

It looks to me as though Harvey's approach to this would run into difficulties if we were to encounter an alien life-form. If it appeared to be intelligent and sentient, would we still be justified in withholding our charity just because the alien was unable to demonstrate that it too possessed qualia?
I don't think this elusive property will ever be isolated. I think the big mistake is to talk of pleasure and pain in terms of simulation or actuality. Irrespective of how "real" it feels to us I maintain that it has evolved as a state within our neural networks -- a state that has great significance in the context of our general awareness, but one that is just as virtual as a remembered face or tune. I think sensations are fully defined by the values we place on them and the associations therein. It is very difficult to be objective about this matter when we are so intimately involved in the experience, hence I really don't think we can afford to be elitist about it.
Eventually, robots may also hold values and associations that were hard-earned, and I think we ought to be prepared to respect this. The only reason we have to chortle at this idea now is the trivial degree to which such values could be acquired with current technology. But this is the game we've been invited to play: to extend the envelope well beyond our current limits.
Incidentally, I found this short article about real pain being dulled in virtual worlds very interesting.
- harvey1
Post #82
Bugmaster wrote: So, in a world where robotic philosophers exist, you'd potentially consider everyone to be a useful gadget, nothing more. And, since you can never truly examine anyone's qualia -- especially not when they're posting from thousands of miles away -- you'll treat everyone you meet as a bot.

Absolutely. I personally would write politicians to enforce restrictions on gadgets foisting themselves off as human forgeries.
Bugmaster wrote: Again, this level of extreme skepticism needs to be justified. Under my worldview, I just consider everyone who acts human to be human, and move on. I think it makes a lot of sense.

Sunlight lamps aren't yellow stars.
Bugmaster wrote: I never said that we can never see an "algorithm" (*) for pain; in fact, my entire argument states that such an algorithm will eventually be developed ! All I'm saying is that I can't give you such an algorithm now... I believe that an algorithm for pain is "in principle" possible, because our existing human bodies do indeed embody such an algorithm... you believe that there's something more to pain than mere functionality.

You also said that you do really feel pain. All of this seems contradictory to me. How can you have an algorithm for pain and at the same time suggest that there's nothing more to pain than the behavior of someone in pain? There's more to pain than behavior if the feeling is to be produced by an algorithm.
Bugmaster wrote: ...since thermometers do not behave as humans do (orgasmic or not). My hypothetical Strong AI bots do behave as humans do.

But, if you've already agreed that there is more to pain than the behavior (e.g., a real feeling), then why is the behavior of someone in pain relevant at all? We already agree that behavior is not the whole story, so why bring up the behavior as if it is the only significant issue?
Bugmaster wrote: ...by your logic, we should definitely be able to produce Strong AI, since we have a proven ability to create complex things, and this ability is rising exponentially (Moore's Law).

Computational processes do not produce ferromagnetism, so there's no reason to believe that Turing machine processing necessarily entails strong AI. As for duplicating evolutionary successes in machines, there is no reason to think that we can accomplish all the complex feats of evolution (e.g., the creation of life in a test tube).
Bugmaster wrote: I doubt that biological evolution is the only thing that can produce consciousness. There's a difference there.

Sure, and neither do I. However, I also doubt that we can produce everything that evolution has done. For example, humans might not be able to bring about DNA from atomic raw materials using current technology. Perhaps we might invent an atomic replicator to do the job, but that would not be using processes that evolution used in principle.
Bugmaster wrote: This sounds exactly like biological naturalism: "only biological creatures can feel pain, therefore it's impossible to create a device that feels pain". But all you've done is push the problem down a level. Why is it that only biological creatures can feel pain? Because pain has evolved?

I'm not sure that only biological creatures can feel pain. My point is that we have good reason to think that the feeling of pain is a non-Turing computation, therefore not likely to be accomplished using Turing machines. The basic problem is that Turing machines do not directly produce real phenomena (e.g., ferromagnetism, photosynthesis, etc.); they can only model real phenomena, or the executed statements can be converted into some real phenomena. (However, it is not the Turing machines that are producing the real phenomena.) When it comes to feeling pain, the modeling of the pain sensation would seem to be limited to the behavior of acting out in pain. I know of no algorithm that can produce anything other than electromagnetic events that can be converted into nuclear or mechanical action (i.e., behaviors). Since pain is a real feeling (which you agreed you experience), not just behavior, I would need to see an in-principle depiction of how the feeling of pain can be translated into mechanical, electromagnetic, or nuclear behavior using a set of instructions. I see this as good reason to doubt such instructions exist, since it is not behavior that we seek, so Turing machines bring us no closer to a solution than if we didn't consider them at all. What we are looking for is a physical phenomenon that produces the feeling of pain. If we had that, then perhaps we could connect such a system to a Turing machine to control the feelings as part of the software's features.
Bugmaster wrote: Ok, what's so special about pain that it absolutely must evolve and cannot be constructed, as opposed to other human functions (breathing, etc.) that have evolved but can be constructed as well?

The feeling of pain is a unique phenomenon that is (so far) a subjective state. If you go to the doctor and say you have a pain in your pinky finger, the doctor can only check for structural or internal damage, but the doctor cannot say that you are not feeling any pain. If the structure and internal condition check out fine, then you would be sent to a psychiatrist, who would probably prescribe pain medication. This makes the job of science all the more difficult, since without a convincing theory we would have no means of knowing if pain has been effectively duplicated.
Astrophysicists deal with a similar situation with dark matter. We have no way of knowing if our sun is interacting with small amounts of dark matter since dark matter does not interact with photons. Without a good theory of dark matter, astrophysicists cannot answer the question whether our sun interacts with it.
Eventually it might be possible to detect the feeling of pain if we have a good theory of pain. For example, it might be found that certain biological structures cause the emergence of pain, and we might be able to spot the features of that emergence with future generations of brain scanning equipment. If that happens, which I am optimistic that it will, then the feeling of pain will no longer be a mere subjective state. Doctors will know if you are lying about the pain in your pinky finger.
Bugmaster wrote: Is the answer, "because pain requires qualia"? But then, what are qualia, how do they cause pain, and why is it that they absolutely must evolve and cannot be constructed? Is it because qualia are emergent properties? Why can't we construct a machine that will generate such properties?

We might be able to produce such a machine someday. First we need a theory of pain sensation, and my view is that this requires a deeper understanding of the dynamical system that produces it. It's enormously complex, and it might be some time before such a theory is forthcoming.
Bugmaster wrote: All you're doing is renaming your mystery factor, you're never actually explaining how it works or why it is necessary at all.

Patience, BM. It took science 200-300K years before it even began. Since that time, we've made enormous progress in a mere 400 years. I realize that Turing machine concepts and computers seem like they've been around a long time, but that's a blink even in terms of modern science. These are just fads when it comes to understanding nature, so perhaps it's understandable that people get overanxious in trying to understand what it is that we do not understand. But this lack of patience should not force us into making too-eager assessments. Remember, the pain sensation probably took hundreds of millions of years to evolve. Given its key role as an adaptation for life, it was an attribute of the brain that was under heavy selection pressures, and therefore a lot happened in the brain during those few hundred million years. The complexity of that evolution probably dwarfs the sophisticated nature of flight, and flight took 300 years of scientific-technological progress before it could come about. We might be looking at hundreds or thousands of years for a full theoretical understanding of mental properties. This might even be too optimistic.
So, I appreciate your dissatisfaction with mysteries, but with the advent of a deeper understanding of dynamical systems (due largely to Ken Wilson, who won the Nobel Prize for renormalization groups), there has been a great deal of progress in the last 40 years.
Bugmaster wrote: If an artificial lamp (and it'd have to be a pretty big lamp) outputs the same exact spectrum and intensity of light as our Sun, why would you prefer one over the other ? I mean, plants certainly won't care...

This analogy was constructed to show that outward behavior does not equate to internal process. If you lack an understanding of the internal process, then more than likely you are not reproducing all the mental states of humans. In the case of a sunlight lamp, we are foolishly missing a major process (nuclear fusion). In the case of humans, we can easily be fooled by missing out on consciousness, qualia, intentionality, etc.
Bugmaster wrote: ...paralyzed people, autistic people (with varying degrees of autism), blind people, people with pacemakers, etc. -- who lack certain functions that the majority of us possesses. Are you saying that they're not human?

Of course not. However, humans are vastly complex entities in the universe with many, many unique properties. The vast majority of those unique properties are caused by mental states, and we shouldn't be fooled into thinking that only outward behavior determines mental states. As you yourself agreed, you feel pain, but it is perfectly reasonable to suggest that a clone of you could be made in a lab by an ETI (extraterrestrial intelligence) that has removed the clone's ability to feel pain -- other than that, it is exactly like you. Obviously this clone is not you at all, since it lacks a major feeling of pain (and other better and more desirable feelings, of course) that you experience.
Bugmaster wrote: What is the minimum set of functionality you'd require in order to accept someone as human -- provided that you're not telepathic, and can't sample their internal thoughts and feelings directly?

Humanity is a genetic lineage of a certain species on the planet. To be human means being born as a human. Any functionality that is lost or was not genetically inherited for some tragic reason is not an issue to one's humanity. However, if in the process of knowing someone we see that a major faculty of human abilities is missing, then obviously we react differently to them by trying to compensate for that loss. So, if someone has better than 20/20 vision, we might ask them to look at distant signs when we are riding with them in a car. If someone is very intelligent, we might ask them to help us on a crossword puzzle or trivia question.
Bugmaster wrote: I just re-read your post, and your description wasn't detailed at all.

That was many posts ago, where I laid out the dynamical emergence of how the feeling of pain comes about.
Bugmaster wrote: You've also hinted that solid water is a completely different substance from liquid water, and that their relationship is irreducible to H2O molecules -- which, to me, sounds fairly wrong.

I didn't.
Bugmaster wrote: How do "dynamical systems" emerge?

Dynamical systems emerge as a consequence of how the universe is. The universe is a dynamical system (e.g., big bang/inflationary cosmology is all based on non-linear dynamical systems that are sensitive to the initial conditions at the beginning of time).
Bugmaster wrote: What makes a system dynamical, or are all systems potentially dynamical?

Roughly, the flow of time, along with change over time in some kind of space, through symmetry-breaking processes.
Bugmaster wrote: What is the mechanism by which dynamical systems give rise to emergent properties, and why are emergent properties irreducible to the underlying systems?

The mechanism is a phase transition (like liquid water turning into steam, which is a first-order phase transition; however, perhaps the more significant systems experience second-order phase transitions). The emergent properties are irreducible in terms of explanation. That is, you cannot explain the emergent properties without including the phase transition within the explanation. Explanation entails causality, and therefore an emergent higher level (e.g., mental properties) is causally irreducible to its lower level (e.g., physical properties) in the sense that the cause must include the dynamic properties encountered in the phase transition. Therefore, the mental is a "separate" system in that you need the dynamics of the mental in order to account for the behavior of the physical system. Now, we could say there is just the physical if we ignore the dynamical stuff happening as a result of the phase transition, but that would be a mistake, since the physical stuff by itself cannot explain why it operates in the peculiar way that it does.
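For the record, here's the standard textbook (Ehrenfest) way of stating that first-order/second-order distinction, in terms of the Gibbs free energy G(T, p):

$$ S = -\left(\frac{\partial G}{\partial T}\right)_p, \qquad C_p = -T\left(\frac{\partial^2 G}{\partial T^2}\right)_p $$

In a first-order transition (boiling water), a first derivative of G such as the entropy S jumps discontinuously, which is why there is a latent heat L = T ΔS. In a second-order (continuous) transition, the first derivatives stay continuous, but a second derivative such as the heat capacity C_p diverges at the critical point.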
A very interesting example is an ant colony. Ant colonies display emergent behavior as the worker ants collectively bring about a kind of self-awareness, with autopoietic behavior for the ant colony as a whole. The physical entities of the ant colony are mainly workers, but the actions of the workers can only be explained in terms of the dynamics of self-awareness displayed by the ant colony acting as a whole system.
The mind, in my view, is the meta-meta-meta-ant colony of the physical constituents of the brain (e.g., neurons, synapses, etc.).
Bugmaster wrote: How does evolution produce dynamical systems?

Evolution is a dynamical system.
Bugmaster wrote: Which parts of our brain house which dynamical systems, and how did they evolve?

Dynamical systems tend to be everywhere and nowhere. The system is abstract in that it can best be understood as a geometric space. There are theories in science that suggest that space and time are themselves emergent features of the world, and that Euclidean space is just another example of a geometric space. There was an article in Scientific American this last month which shows that the Hawking radiation theorized by Stephen Hawking with regard to black holes can best be understood along these lines. It would be an interesting situation indeed if science came to the conclusion that our world is an abstract entity. Who knows, maybe idealism is due for a major comeback later in this century...
Bugmaster wrote: Which dynamical systems produce the feeling of pain when I stub my toe, and how are they related to the neural structures in my body?

The pain comes from dynamical systems that emerge as a result of structures like C-fibers in the brain. Although the activation of C-fiber neurons alone does not explain pain, their existence hints at the dynamical system. As part of the dynamical system, they provide the physical basis of pain. (Although pain itself is a mental phenomenon.)
Bugmaster wrote: I could go on and on but I think you see my point. All you've done is replace your mystery factor called "qualia" with a mystery factor called "dynamical system" (which is actually even more mysterious because it makes chemistry not work, at least as far as water is concerned). You haven't explained anything.

A great deal of work is being done with dynamical systems, so there's nothing mysterious in principle with this concept. (I'm assuming that you have little familiarity with this issue.)
Bugmaster wrote: ...what you're saying sounds suspiciously like such a ghost to me.

I can't imagine why. Dynamical systems are an integral part of science, perhaps the key part.
Bugmaster wrote: I've agreed that I feel pain, but I've also provided a purely materialistic explanation for why I feel pain

I don't recall that explanation. You cited Turing computation as a reason to believe the behavior of responding to pain is materially explained, but your explanation for pain sensations seems to be that we should hold out on faith for future scientists that are smarter than current scientists.
Bugmaster wrote: ...under your worldview, you are not justified in believing that other humans feel pain, because your worldview depends on things that are not observable in principle. So, my worldview explains more and it's simpler to boot.

But, I do have a worldview that is observable in principle, because I suggest that an understanding of the dynamical system produces observables in behavior, brain scanning patterns, microstructural analysis, etc. My view is clearly superior, since I am engaging in science and not some fantastical hope that algorithms will ever do more than merely model a real phenomenon.
Bugmaster wrote: ...in your example, the NSA data collection is still a purely materialistic process, and it is still a function that Intel chips perform but AMD chips do not (they don't modulate the power supply correctly). If AMD really wanted to implement this functionality, they could. You, however, are saying that only Intel has the magical power to implement NSA's spying, and no one else does. So, there's something magical about Intel's chips that cannot, in principle, be duplicated by anyone. So... what is it ?

I'm not suggesting that Intel has a magical power. They do everything using scientific principles. The difference, though, is that the user of the computer does not know that the internal state of the computer matters with regard to whether the Intel and AMD chips are identical. Clearly, observed behavior is not enough to show identity.
Bugmaster wrote: Again, I don't know exactly what you mean by "behaviorism", but it doesn't matter... you can't persuade anyone that my views are wrong merely by stating that they've been invalidated by someone. After all, that's the same tactic that the Creationists use: "everyone knows that evolution is wrong, but the scientific mafia doesn't want to come out and say it. Trust us." That's just not very convincing.

Bugmaster, creationism is wrong whether I debate a creationist or not. The subject of behaviorism's demise is thoroughly documented, so it is unnecessary for me to waste time on it (just as I wouldn't waste time debating creationists about science). Once QED objected that I never debated with creationists about their misinformation about science, and I'll tell you what I told him. Creationism is not worthy of discussion because there is nothing to debate seriously. If a view is discredited, there is no use in trying to persuade those who hold to it. Either they don't know it is wrong, in which case it is not my job to educate them, or they are obstinate. Usually there is a strong link between ignorance and obstinacy. My experience, and the reason I stopped debating people who hold to discredited views, is that the obstinacy never stops.
Post #83
Bugmaster wrote: So, in a world where robotic philosophers exist, you'd potentially consider everyone to be a useful gadget, nothing more. ...

harvey1 wrote: Absolutely. I personally would write politicians to enforce restrictions on gadgets foisting themselves off as human forgeries.

Let me just re-emphasize what I said above: in a world where robots can act human, you will assume everyone to be a robot until proven otherwise (and you'd campaign for better methods of proof). This is, quite possibly, the most extremely skeptical statement I've ever read (granted, I don't read many debates, but still). How do you justify this skepticism ? More importantly, even if your skepticism is justified, why is it important ?
Again, I claim that, if robots act as humans do, then it doesn't really matter what they're made of on the inside. When I converse with human people, or befriend them, or whatever, I don't care about their DNA. All I care about is their ability to carry on a conversation, and their personality in general. What you're claiming is that a creature does not fully qualify as a conversational partner, or a friend, regardless of how he acts, unless the creature is biologically human.
This, to me, sounds like saying that robots don't have souls, but humans do, and therefore robots should never be treated as human. As I'd mentioned originally, this is a perfectly sound viewpoint, but it's not very convincing to someone who doesn't believe in souls.
QED already asked you about hypothetical aliens... Do they have souls ? Can they ever count as sentient beings ? What if the aliens had some strict notion of privacy, and did not allow you to perform invasive scans on them, to ensure that they're not robots in disguise ?
harvey1 wrote: Sunlight lamps aren't yellow stars.

Remember: in my scenario, robots act just as humans do. Sunlight lamps do not act like yellow stars do (they have less mass, for one thing). This is a false analogy, again.
harvey1 wrote: How can you have an algorithm for pain and at the same time suggest that there's nothing more to pain than the behavior of someone in pain? There's more to pain than behavior if the feeling is to be produced by an algorithm.

I don't follow you. As I see it, everything I feel is tied into the "algorithm" that powers my personality and behavior... why should pain be any different ?
harvey1 wrote: But, if you've already agreed that there is more to pain than the behavior (e.g., a real feeling), then why is the behavior of someone in pain relevant at all?

I think you have an ontology/epistemology confusion here. What I'm saying is that, regardless of whether consciousness is dualistic in nature or not, we normally deduce that someone has a consciousness based on their behavior. Therefore, it's parsimonious for us to assume that an entity that behaves as though it is conscious is in fact conscious, unless proven otherwise.
Again, if you disagree, then you have to present me with a test that will distinguish a conscious being from a simulated being, without relying on their behavior. Taking my original scenario under consideration, your test would have to rely solely on what the being in question posts on Internet forums.
If you cannot provide such a test, you have the following options:
1). Treat everyone as a bot. This is clearly the option you choose, but I think it's ridiculously exclusive -- especially when you know for a fact that some humans do indeed post online.
2). Treat everyone as human -- or, at least, as a sentient being that's on an equal footing with humans. This is the option I'd choose.
3). Remain agnostic. I'm not sure whether this option is intellectually honest -- because, in practice, it will be functionally equivalent to either (1) or (2) -- but maybe you can persuade me that it is.
4). Come up with some other option, and tell me what it is.
harvey1 wrote: Computational processes do not produce ferromagnetism, so there's no reason to believe that Turing machine processing necessarily entails strong AI.

This does not follow because, as you know quite well, I don't believe that there's anything magical about ferromagnetism or evolution or any other physical process that prevents us from re-creating its results. I should also point out that we're well on our way to creating "life in a test tube" -- we can already create viruses out of base pairs and some enzymes -- so I'm not convinced that evolution is all that special.
harvey1 wrote: For example, humans might not be able to bring about DNA from atomic raw materials using current technology. Perhaps we might invent an atomic replicator to do the job, but that would not be using processes that evolution used in principle.

I think you're defeating your own argument. If we can create a paramecium out of atomic raw materials using the atomic replicator, would we not be reproducing the work that evolution has done on the naturally evolved paramecium ?
harvey1 wrote: The basic problem is that Turing machines do not directly produce real phenomena (e.g., ferromagnetism, photosynthesis, etc.); they can only model real phenomena, or the executed statements can be converted into some real phenomena. (However, it is not the Turing machines that are producing the real phenomena.)

Again, I don't think that consciousness is a fundamental force of nature, such as magnetism or gravity, so I'm not convinced by this argument. You're stating that Turing Machines cannot produce pain, because pain cannot be produced by Turing Machines. I'd need something more before you can convince me.
harvey1 wrote: If you go to the doctor and say you have a pain in your pinky finger, the doctor can only check for structural or internal damage, but the doctor cannot say that you are not feeling any pain.

Exactly ! And if I tell you that I feel pain in my pinky, then you can't verify that I indeed feel pain in my pinky; you just have to trust me. Isn't this what I've been saying all along ?
harvey1 wrote: Astrophysicists deal with a similar situation with dark matter. We have no way of knowing if our sun is interacting with small amounts of dark matter since dark matter does not interact with photons. Without a good theory of dark matter, astrophysicists cannot answer the question whether our sun interacts with it.

That's not exactly right. A more correct statement would be: "without a good theory of dark matter, we cannot be reasonably sure that dark matter even exists at all". In science, nothing -- not electricity, not mass, not relativity, nothing -- is simply assumed to exist without solid evidence, as you seem to claim:
harvey1 wrote: But, I do have a worldview that is observable in principle, because I suggest that an understanding of the dynamical system produces observables in behavior, brain scanning patterns, microstructural analysis, etc.

That's religion, not science. In science, we develop theories based on evidence, not vice versa.
I challenge you to point me to even one accepted scientific theory (not a hypothesis, a theory) for which we have no evidence, and which we're supposed to just accept on faith. Appealing to some future possibility of evidence ("Patience, BM. It took science 200-300K years before it even began. Since that time, we've made enormous progress in a mere 400 years...") is not enough. People have been saying this for years ("patience, guys, we will surely detect phlogiston in another couple centuries"); it didn't work then and it doesn't work now. In fact, you commit the same mistake that you accuse me of making:
harvey1 wrote: ...but your explanation for pain sensations seems to be that we should hold out on faith for future scientists that are smarter than current scientists.

Similarly, claiming that "Turing machine concepts and computers" are "just fads" because they're new doesn't get you anywhere, either -- quantum physics is new, and yet it's clearly more than a fad.
So, in summary, your worldview promises to enlighten us in another couple hundred years, as long as we accept it on faith. My worldview offers a working empirical model of the world now, and is consistent with all the data we have so far. Thus, it is more parsimonious.
harvey1 wrote: If you lack an understanding of the internal process, then more than likely you are not reproducing all the mental states of humans.

Again, I fail to see why this is important, as long as the artificial human in question acts, well, human. I guarantee you that many biological humans don't have the same mental states as many other biological humans (for example, I don't have a mental state that women experience when they're pregnant), but that doesn't seem to matter much.
harvey1 wrote: The vast majority of those unique properties are caused by mental states, and we shouldn't be fooled into thinking that only outward behavior determines mental states.

That's not what I'm saying; all I'm saying is that, in the absence of any other test, we are justified in concluding that an entity experiences mental states based on its behavior alone. As soon as you build a working consciousness detector, I'll concede my argument.
harvey1 wrote: As you yourself agreed, you feel pain, but it is perfectly reasonable to suggest that a clone of you could be made in a lab by an ETI (extraterrestrial intelligence) that has removed the clone's ability to feel pain -- other than that, it is exactly like you. Obviously this clone is not you at all, since it lacks a major feeling of pain (and other better and more desirable feelings, of course) that you experience.

Er, would the clone still behave as though it felt pain ? If so, how would you tell which person is Bugmaster, and which is the clone (assuming that the aliens burned their paper trail) ?
harvey1 wrote: Humanity is a genetic lineage of a certain species on the planet. To be human means being born as a human. Any functionality that is lost or was not genetically inherited for some tragic reason is not an issue to one's humanity. However, if in the process of knowing someone we see that a major faculty of human abilities is missing, then obviously we react differently to them by trying to compensate for that loss. So, if someone has better than 20/20 vision, we might ask them to look at distant signs when we are riding with them in a car. If someone is very intelligent, we might ask them to help us on a crossword puzzle or trivia question.

I don't understand where you're going with this. You start out with ye olde biological naturalism ("DNA is required for humanity"), and you end up validating my point ("abilities (i.e., behaviors) determine whether someone is human"). That's not consistent at all.
Bugmaster wrote: You've also hinted that solid water is a completely different substance from liquid water, and that their relationship is irreducible to H2O molecules -- which, to me, sounds fairly wrong.

harvey1 wrote: I didn't.

Didn't you claim that water undergoes a "phase transition" when heated or cooled, and that such phase transitions are irreducible to the interactions of individual atoms ?
Bugmaster wrote: How do "dynamical systems" emerge?

harvey1 wrote: Dynamical systems emerge as a consequence of how the universe is....

Bugmaster wrote: How does evolution produce dynamical systems?

harvey1 wrote: Evolution is a dynamical system.

This doesn't really tell me anything, you know :-)
harvey1 wrote: Ant colonies display emergent behavior as the worker ants collectively bring about a kind of self-awareness, with autopoietic behavior for the ant colony as a whole. The physical entities of the ant colony are mainly workers, but the actions of the workers can only be explained in terms of the dynamics of self-awareness displayed by the ant colony acting as a whole system.

It's debatable whether ant colonies are self-aware, but we in fact do have algorithms that reproduce the behaviors of ant colonies (SimAnt being a trivial yet entertaining example). As it turns out, ants are controlled by a fairly small set of chemical signals, and tracing the interactions of these signals is difficult, but not prohibitively so. You can, of course, argue that a simulated ant colony is not equivalent to a real ant colony, because simulated ants don't make crunching noises when you squish them, or because they lack ant souls, or something, but I don't think these objections have merit.
I'm curious, though. Why do you think that ant colonies are self-aware ?
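To make this concrete, here's a toy Python sketch of the kind of pheromone-driven local rules I'm talking about. The grid, the constants, and the movement rules are all made up -- real ant chemistry is far richer -- but a trail still emerges from purely local behavior:

```python
import random

GRID, NEST, FOOD = 20, (0, 0), (19, 19)

def step(ants, pheromone):
    """Advance the toy colony one tick: each ant acts on local rules only."""
    for ant in ants:
        x, y = ant["pos"]
        moves = [(nx, ny) for nx, ny in [(x+1,y), (x-1,y), (x,y+1), (x,y-1)]
                 if 0 <= nx < GRID and 0 <= ny < GRID]
        if ant["has_food"]:
            # Lay pheromone and head home (greedy Manhattan step to the nest).
            pheromone[(x, y)] = pheromone.get((x, y), 0) + 1.0
            ant["pos"] = min(moves, key=lambda p: abs(p[0]-NEST[0]) + abs(p[1]-NEST[1]))
            if ant["pos"] == NEST:
                ant["has_food"] = False
        else:
            # Prefer pheromone-rich cells, otherwise wander randomly.
            weights = [1.0 + 5.0 * pheromone.get(m, 0) for m in moves]
            ant["pos"] = random.choices(moves, weights)[0]
            if ant["pos"] == FOOD:
                ant["has_food"] = True
    for cell in list(pheromone):           # trails evaporate over time
        pheromone[cell] *= 0.95

ants = [{"pos": NEST, "has_food": False} for _ in range(50)]
pheromone = {}
for _ in range(500):
    step(ants, pheromone)
```

Run it and dump the pheromone dict: you'll see a reinforced corridor between the nest and the food, even though no single ant has any notion of "colony". Whether that sort of emergence amounts to self-awareness is exactly what I'm questioning.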
harvey1 wrote: The pain comes from dynamical systems that emerge as a result of structures like C-fibers in the brain. Although the activation of C-fiber neurons alone does not explain pain, their existence hints at the dynamical system. As part of the dynamical system, they provide the physical basis of pain. (Although pain itself is a mental phenomenon.)

Is it not more parsimonious to conclude that C-fibers (I don't really know what they are, but whatever) do in fact produce the feeling of pain ? Why do you feel the need to postulate additional entities (to borrow Occam's language) ?
Bugmaster wrote: If AMD really wanted to implement this functionality [NSA spying], they could. You, however, are saying that only Intel has the magical power to implement NSA's spying, and no one else does. So, there's something magical about Intel's chips that cannot, in principle, be duplicated by anyone. So... what is it ?

harvey1 wrote: I'm not suggesting that Intel has a magical power. ... The difference, though, is that the user of the computer does not know that the internal state of the computer matters with regard to whether the Intel and AMD chips are identical. Clearly, observed behavior is not enough to show identity.

Are you saying that the NSA spying functionality is not observed by anyone ? What about the NSA ? Aren't they observing it ? That makes no sense.
harvey1 wrote: Bugmaster, creationism is wrong whether I debate a creationist or not.

Er, ok, in that case your theory of mind is wrong whether I debate you or not. There, I've convinced you, right ?
The reason I disbelieve Creationism is not because I somehow know it to be wrong, it's because their logic is flawed and their evidence (such as it is) is faulty.
- harvey1
Post #84
Hey BM,
I hope all is well with you.
Bugmaster wrote: This, to me, sounds like saying that robots don't have souls, but humans do, and therefore robots should never be treated as human.

You agreed that you have a feeling of pain, and you agreed that this is some unknown algorithm that has yet to be discovered. Do you agree that a robot can be made to duplicate human behavior without this unknown algorithm? I think that is very possible. However, without evidence that this unknown algorithm has been made, why think that a robot experiences pain when the algorithm hasn't been published?
Bugmaster wrote: QED already asked you about hypothetical aliens... Do they have souls? Can they ever count as sentient beings? What if the aliens had some strict notion of privacy, and did not allow you to perform invasive scans on them, to ensure that they're not robots in disguise?

I don't recall that question from QED. I must have missed it...
In any case, I would extend charity to them, and I would believe them if they said they have pain. However, if I found that they were an artificial lifeform, then I would take back that charity, absent an answer to the question of how they algorithmically accounted for pain.
Bugmaster wrote: I just consider everyone who acts human to be human, and move on. I think it makes a lot of sense.

harvey1 wrote: Sunlight lamps aren't yellow stars.

Bugmaster wrote: Remember: in my scenario, robots act just as humans do. Sunlight lamps do not act like yellow stars do (they have less mass, for one thing). This is a false analogy, again.

The analogy applies because the lamps share one property of the sun: they emit white light like the sun. To make the analogy a little more complete, all we need to do is fog up an outside window that you might look out of as you get up in the morning, and put the lamp at an angle in the sky where you might typically expect the sun (to avoid two suns, let's say that in this analogy, unbeknownst to you, it is a cloudy day). Now, from your perspective, the lamp is identical to the sun. There is no way for you to tell the difference between the sun and the lamp. They produce the same kind of light, and other than a fogged-up window, there is nothing to raise suspicions that the "sun" is a lamp.
The same is possible with AI. The fogged window represents the inside circuitry of the robot, which you cannot see, and the lamp represents the robot's human behavior, which you cannot distinguish from a human's. We might be fooled by the AI machine, but, like the lamp, it is a cheap imitation and does not in any way represent any kind of breakthrough with regard to strong AI.
harvey1 wrote: How can you have an algorithm for pain and at the same time suggest that there's nothing more to pain than the behavior of someone in pain? There's more to pain than behavior if the feeling is to be produced by an algorithm.

Bugmaster wrote: I don't follow you. As I see it, everything I feel is tied into the "algorithm" that powers my personality and behavior... why should pain be any different?

It's not any different: every algorithm inside you performs some kind of function. However, we agree that there's an unknown algorithm that performs the function of your feeling pain. It is not necessarily the same algorithm that dictates how you will respond to pain. The second algorithm does not have the same function as the first algorithm. In fact, different people respond to pain differently. Some people scream and yell. Some people cry. Some people say and do nothing. So, there is a difference between feeling pain and reacting to pain.
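If it helps, here's my distinction in your own terms -- a toy Python sketch, with every name in it made up for illustration:

```python
def pain_behavior(stimulus_intensity):
    """Algorithm #2: how an agent responds to damage. A robot can implement
    this directly, with no inner feeling at all; different people would
    simply have different thresholds and outputs here."""
    if stimulus_intensity > 0.8:
        return "scream and yell"
    if stimulus_intensity > 0.3:
        return "wince and say 'ouch'"
    return "say and do nothing"

def pain_feeling(stimulus_intensity):
    """Algorithm #1: whatever process actually produces the felt quality
    of pain. Nobody can write this body today -- that is exactly my point."""
    raise NotImplementedError("no published algorithm for the feeling itself")
```

A system running pain_behavior() alone can pass every behavioral test you care to run, and the transcript will never tell you whether anything like pain_feeling() is implemented underneath.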
Bugmaster wrote: I think you have an ontology/epistemology confusion here. What I'm saying is that, regardless of whether consciousness is dualistic in nature or not, we normally deduce that someone has a consciousness based on their behavior. Therefore, it's parsimonious for us to assume that an entity that behaves as though it is conscious is in fact conscious, unless proven otherwise.

This is true to a certain degree. For example, if you woke up and saw the sunlight lamp but thought it was the sun, ceteris paribus you would be right to assume that it is indeed the sun. However, if later I came to you laughing and said that we had just filmed you doing your daily sun dance in the morning, and the joke is on you, then you should raise your suspicion level. If the next morning you do your sun dance again, then for the first occurrence it's shame on me; the second time, shame on you for not rubbing the fog off the window to actually identify the sun before you do your sun dance. If I kept repeating my little practical joke of filming your sun dances, and you kept feeling duped, then at some point you would probably seek to get me to stop this snooping exercise.
This is very similar to how I see the issue of robots counterfeiting as humans. Sure, the first time I might be fooled, but soon I'll learn to be very suspicious of robot counterfeits, and then I'll start taking any action that I can to keep those counterfeits from passing themselves off as humans. If they could show me that they really share my feelings, etc., then I would grant them equality. But why should I grant equality to a nifty gadget sold at Best Buy?
Bugmaster wrote: Again, if you disagree, then you have to present me with a test that will distinguish a conscious being from a simulated being, without relying on their behavior. Taking my original scenario under consideration, your test would have to rely solely on what the being in question posts on Internet forums.

Well, let me put this in perspective. If it was found that North Korea had perfectly duplicated US $20 printing plates, and they were printing $20 counterfeits such that no one could tell the difference, would the U.S. Secret Service, which protects U.S. currency, be willing to accept that the counterfeits were identical to U.S. currency merely because no one could tell the bills apart? Absolutely not. There would be a huge impetus for the U.S. Treasury to pull $20 bills from circulation and re-issue them using new plates and new technology that could not be counterfeited.
This analogy is very similar to your challenge on internet forums. As my own Secret Service, I would do everything possible to only debate on forums where I didn't have to run into machine counterfeits. Of course, I might at times debate on those sites if it didn't matter whether the response was from an inanimate gadget; however, if I wanted to debate with an intelligence that had feelings, because I saw that as an important factor in having a debate, then I'd steer clear of those websites.
In my quest to find humans, I might only debate on websites where the members were required to have genetic tests, blood tests, and whatever other kind of test demonstrates that they are human. We would be certified, and then I could be reasonably sure that the extension of charity is warranted in those instances.
Bugmaster wrote: If we can create a paramecium out of atomic raw materials using the atomic replicator, would we not be reproducing the work that evolution has done on the naturally evolved paramecium?

It would depend on the technology of the replicator. I'm guessing that a replicator, if it is feasible, would use some kind of scanning technology, and then reproduce the scanned image using quantum-teleportation technology. If that was the technology being used, then it is certainly not equivalent to the process used in evolution.
Bugmaster wrote: Again, I don't think that consciousness is a fundamental force of nature, such as magnetism or gravity, so I'm not convinced by this argument. You're stating that Turing Machines cannot produce pain, because pain cannot be produced by Turing Machines. I'd need something more before you can convince me.

Well, I thought the job of this thread was for you to convince me that strong AI is possible? The Turing machine is a recent invention, and the development of its physical counterpart (the digital computer) is very recent. Why should we restrict ourselves to a concept that is so new and so recent? I'm sure that much in the way of non-Turing computation is yet to be learned, and it only takes a little patience. If you go back into history, you can see how people would grasp onto the latest invention (e.g., the steam engine) to make statements about how reality was in some way like that new invention. The Turing machine is just another object in a long string of such analogies.
I reject that kind of myopic approach to real-life systems that have developed and evolved in nature. If a Turing machine would have worked in the wild, then surely biology would have duplicated it, so that it was perfectly obvious that it is a Turing machine. We certainly don't see that obviousness. Instead we see many neurotransmitters, microtubules, and various other structures inside the brain that have no counterpart in Turing machines.
Just be patient, BM. You found a cool concept in Turing machines, and I think they are neat concepts too. Who knows, maybe we'll find one in nature someday. But, until we do, I see no reason to walk around describing natural structures like the brain in terms of a very recent concept such as Turing machines. You don't go around comparing brains to steam engines, do you?
harvey1 wrote: Astrophysicists deal with a similar situation with dark matter. We have no way of knowing if our sun is interacting with small amounts of dark matter since dark matter does not interact with photons. Without a good theory of dark matter, astrophysicists cannot answer the question whether our sun interacts with it.

Bugmaster wrote: That's not exactly right. A more correct statement would be: "without a good theory of dark matter, we cannot be reasonably sure that dark matter even exists at all".

But most physicists are reasonably convinced that dark matter exists because of the behavior of the universe. Isn't that what you've been saying all along with regard to AI pain? However, you are exactly right: dark matter behavior is not the same as having dark matter. (And feeling pain is not the same as pain behavior.) This is actually my point. Until we have a theory, all we can do is go on behavior and try to develop a theory to explain dark matter behavior. If someone is able to duplicate dark matter behavior, then we have a right to know how they did it. If they do it by a mere ad hoc experiment, then we can wave off the experiment as having nothing to do with the nature of dark matter. On the other hand, if they provide a reasonable explanation and practical implementation of dark matter behavior, then it would be compelling indeed.
harvey1 wrote: But, I do have a worldview that is observable in principle, because I suggest that an understanding of the dynamical system produces observables in behavior, brain scanning patterns, microstructural analysis, etc.

Bugmaster wrote: That's religion, not science. In science, we develop theories based on evidence, not vice versa... In science, nothing -- not electricity, not mass, not relativity, nothing -- is simply assumed to exist without solid evidence, as you seem to claim.

How did you arrive at that interpretation of this text? I said that a dynamical systems approach produces observables that can be verified by science; how is that not scientific? Electromagnetic radiation produces observables; so do objects having mass; so does relativity theory. So, why would this be religion? Dynamical system theory is developed on evidence, and it makes observables. It would help, I think, if you browsed the internet for 5 minutes to see how active the study of dynamical systems is. I think you have some preconceived notions that are preventing you from seeing some very basic science being studied.
Bugmaster wrote: I challenge you to point me to even one accepted scientific theory (not a hypothesis, a theory) for which we have no evidence, and which we're supposed to just accept on faith.

Where have I asked you to take something on faith? I am telling you of an approach that I think we will find successful, but it takes time. Already there have been significant findings using dynamical systems approaches in terms of understanding the functioning of the brain. It takes patience. The last thing someone should be doing is using steam engines, internal combustion engines, calculators, Turing machines, etc., as their perfect model for understanding the brain. The brain should be understood by science, and science approaches the study of systems as dynamical systems. So, I don't see why you accuse me of some kind of faith-based science. If anything, it is your view of using Turing machines (steam engines, internal combustion engines, or whatever else is the latest fad) that is a faith approach, since science has never found a Turing machine and we don't know if we ever will. We have found dynamical systems in nature in many areas of scientific exploration.
harvey1 wrote: ...but your explanation for pain sensations seems to be that we should hold out on faith for future scientists that are smarter than current scientists.

Bugmaster wrote: Appealing to some future possibility of evidence ("Patience, BM. It took science 200-300K years before it even began. Since that time, we've made enormous progress in a mere 400 years...") is not enough. People have been saying this for years ("patience, guys, we will surely detect phlogiston in another couple centuries"); it didn't work then and it doesn't work now. In fact, you commit the same mistake that you accuse me of making... Similarly, claiming that "Turing machine concepts and computers" are "just fads" because they're new doesn't get you anywhere, either -- quantum physics is new, and yet it's clearly more than a fad.

There's a huge difference between QM and TMs. QM has been shown to be an effective theory of science. TMs are speculation. They actually can't even exist in the physical world (infinite memory). So, until it can be shown to be any different from people modelling phenomena on the steam engine, I don't see why we shouldn't see it as a fad.
Enlightenment happens as a process. We continually make research progress that gives us good reason to think we can make huge strides along this avenue, but there's nothing to prove to us that we can duplicate the successes of evolution with regard to cognition, consciousness, intentionality, qualia, etc..Bugmaster wrote:So, in summary, your worldview promises to enlighten us in another couple hundred years, as long as we accept it on faith.
In my opinion, your approach confuses a simulation for the thing being simulated. No one doubts we can make simulations. What we want to do, though, is produce the phenomena itself.Bugmaster wrote:My worldview offers a working empirical model of the world now, and is consistent with all the data we have so far. Thus, it is more parsimonious.
For artificial intelligence you have the code. If the code doesn't show the function, the function is not there. If the code shows that only the behavior of being in pain has been programmed, then why would someone think that strong AI has been achieved?Bugmaster wrote:in the absence of any other test, we are justified in concluding that an entity experiences mental states based on its behavior alone. As soon as you build a working consciousness detector, I'll concede my argument.
We couldn't. Nor could we tell which $20 was made by the US Treasury, and which $20 was made by North Koreans with duplicated US printing plates. That doesn't mean that the $20 printed by North Koreans is a $20 bill. What makes it a real $20 bill (i.e., non-counterfeit) is that the US Treasury mint had printed it. If we could not tell what the North Koreans were doing, then eventually it would endanger the currency. Likewise, if we could not tell you apart from what ETI did, then as long as people knew there were two of you walking around, people would have to assume that there's a 50% chance that they are talking to a zombie Bugmaster. If a store knew there was a 50% chance that the $20 bill that customers were giving them was counterfeit, chances are good they would refuse all $20 bills.Bugmaster wrote:Er, would the clone still behave as though it felt pain ? If so, how would you tell which person is Bugmaster, and which is the clone (assuming that the aliens burned their paper trail)?
It's an issue of how to extend charity. Human heritage justifies the extension of charity. However, certain exhibited behavior justifies pulling back or increasing our extended charity to an individual. So, we normally might not extend charity to a perfect stranger in asking them their opinion on a philosophical matter, but if I see from your written words that you know a thing or two about that, I extend charity in that direction. I assume that as a human being that I can automatically extend a certain charity to you without knowing much else. I might take that extension back if I find that you don't speak English. I might as well be talking nonsense if you don't speak the language.Bugmaster wrote:I don't understand where you're going with this. You start out with ye olde biological naturalism ("DNA is required for humanity"), and you end up with validating my point ("abilities (i.e., behaviors) determine whether someone is human"). That's not consistent at all.
This is what communication is. It is a negotiation where we extend charity, see if that action was profitable, and perhaps pull back or extend more charity as a result. If we know from the onset that the communication will prove fruitless (e.g., debating Gish on evolutionary science), then we may not extend any charity at all, and therefore not debate. If we know from the onset that the AI robot does not have any feelings, then we may not extend any charity at all with regards to treating the AI robot as a human. We might even become aggravated if the robot tries to talk to us as if it had feelings.
I said the explanation of liquid water into ice (or steam) is irreducible without referring to the phase transition. This doesn't mean that liquid water is a different substance.Bugmaster wrote:Didn't you claim that water undergoes a "phase transition" when heated or cooled, and that such phase transitions are irreducible to the interactions of individual atoms?
Will Wright and Justin McCormick designed SimAnt based on emergent complexity research of Edward Wilson. SimEarth is based on the Gaia hypothesis of James Lovelock.Bugmaster wrote:It's debatable whether ant colonies are self-aware, but we in fact do have algorithms that reproduce the behaviors of ant colonies (SimAnt being a trivial yet entertaining example). As it turns out, ants are controlled by a fairly small set of chemical signals, and tracing the interactions of these signals is difficult, but not prohibitively so. You can, of course, argue that a simulated ant colony is not equivalent to a real ant colony, because simulated ants don't make crunching noises when you squish them, or because they lack ant souls, or something, but I don't think these objections have merit.
It is an example of an autopoietic system, which is a self-preserving system that is self-organized to self-preserve. Self-preservation is a weak form of self-awareness.Bugmaster wrote:I'm curious, though. Why do you think that ant colonies are self-aware?
C-fibers are not in the brain, they are throughout the body (except the dentin of teeth). Those unfortunate enough to lose a limb, hand, foot, finger, toe, etc., still report pain in that part of the body which is now missing (i.e., no C-fibers where the source of the pain is). If C-fibers were the explanation for pain, then there shouldn't be pain, since those C-fibers are altogether missing.Bugmaster wrote:Is it not more parsimonious to conclude that C-fibers (I don't really know what they are, but whatever) do in fact produce the feeling of pain? Why do you feel the need to postulate additional entities (to borrow Occam's language) ?
But, the behavior of the Intel chip to the outside world looks identical to the AMD chip, except for what the NSA knows. The NSA is an analogy for the "self." You feel pain, but the outside world does not have any reason to think that an AI robot version of you using an "AMD chip" is any different than you. The NSA example demonstrates that identical outward behavior is not necessarily equivalent to the processes happening inside you, which is where you feel pain, etc..Bugmaster wrote:Are you saying that the NSA spying functionality is not observed by anyone ? What about the NSA? Aren't they observing it? That makes no sense.
Sorry, dynamical theories of the mind are some of the most exciting approaches in cognitive science. I think I can sit comfortably for now.Bugmaster wrote:Er, ok, in that case your theory of mind is wrong whether I debate you or not. There, I've convinced you, right ?
Sure. But, not everyone likes to debate issues that have already been shown to be false. Behaviorism is really not taken seriously by anyone, so I don't see what merit there is in pondering arguments for it. You're very intelligent and all, but I'm quite sure that behaviorism and creationism will not be returning just because a person who believes in them is very intelligent. You should give it up, it really is not a valid belief. But, it's not for me to try and convince you.Bugmaster wrote:The reason I disbelieve Creationism is not because I somehow know it to be wrong, it's because their logic is flawed and their evidence (such as it is) is faulty.
Post #85 (Bugmaster)
Don't make it sound so mysterious. The massive amount of research that led us to (among other things) painkillers and some interesting drugs, as well as the neurological research in general, has already given us a good start at discovering the full "algorithm" for pain. And consciousness, of course.harvey1 wrote:You agreed that you have a feeling of pain, and you agreed that this is some unknown algorithm that has yet to be discovered.
You're assuming that there's only one way to experience pain: through the "algorithm" that humans are using. That sounds unnecessarily exclusive to me. After all, there are several algorithms for accomplishing more mundane tasks (array sorting, for example); why can't there be multiple algorithms for pain ?Do you agree that a robot can be made to duplicate human behavior without this unknown algorithm? I think that is very possible. However, without evidence this unknown algorithm has been made, why think that a robot experiences pain when the algorithm hasn't been published?
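To make the "multiple algorithms, same result" point concrete, here's a minimal sketch in Python (the function names and the example data are mine, purely for illustration): two completely different sorting procedures whose outward behavior is nonetheless identical.

```python
# Two different algorithms, one observable behavior.

def insertion_sort(items):
    """Sort by repeatedly inserting each element into a sorted prefix."""
    result = list(items)
    for i in range(1, len(result)):
        j = i
        while j > 0 and result[j - 1] > result[j]:
            result[j - 1], result[j] = result[j], result[j - 1]
            j -= 1
    return result

def merge_sort(items):
    """Sort by recursively splitting the list and merging the halves."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

data = [5, 3, 8, 1]
# Identical outputs, radically different internal processes:
assert insertion_sort(data) == merge_sort(data) == [1, 3, 5, 8]
```

If no behavioral test can distinguish the two procedures, we happily call them both "sorting"; I claim the same courtesy should apply to pain.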
So, you'd extend more charity to aliens, than to your fellow forum posters -- both of whom may or may not be robotic ? That sounds inconsistent to me.In any case, I would extend charity to them, and I would believe them if they said they have pain. However, if I found that they were an artificial lifeform, then I would take back that charity...
You keep proposing analogies that attempt to demonstrate why I'm wrong: Intel chips, sunlight lamps, counterfeit $20 bills, etc. etc. All of your analogies follow the same pattern:
1). Object X seems to perform exactly as object Y
2). But as we dig deeper, we discover that there's something Y does that X doesn't do (it reports to NSA, or has a large mass, or is registered with the U.S. treasury, etc.).
3). Therefore, X cannot possibly be interchangeable with Y !
I absolutely agree with you that this argument is true... but... in step 2, you explicitly address X's deficient behavior. In my example, X's behavior is not deficient, so your argument does not apply to me. You can rephrase step 2 as,
2a). Y has something that X lacks
But then, how do you know that Y has something that X lacks ? In all your examples, you discover this deficiency by observing X's and Y's functionality -- i.e., their behavior, thus reducing (2a) back to (2).
Let's pretend that aliens did, indeed, replace our Sun with a big fusion reactor overnight (er... ok, let's say they put half the Earth to sleep and then did it, seeing as it's never night everywhere :-) ). The alien reactor "looks and feels" just like our Sun; it has the same mass, too, so that the Earth doesn't fall out of its orbit. I absolutely agree with you that our Sun and the reactor are not identical. However, you, as a denizen of Earth, would not be justified in believing that your Sun has been replaced by a reactor; in fact, assuming that the aliens did a semi-decent job, you wouldn't even notice -- despite the fact that you know that fusion reactors exist.
Similarly, when you are faced with an entity that "looks and feels" just like a human being, you are not justified in arbitrarily assuming that it's not a human being -- despite the fact that you know that robots exist.
I'll pause my argument here (these posts are getting too long as it is). Let me know if you agree, and then we can proceed.
Granted, it's not necessarily the same algorithm, but I don't see why it shouldn't be. Why postulate two algorithms when one would suffice ? Additionally, you assume that all people feel pain in exactly the same way, which is kind of doubtful. People don't experience colors, sounds, and tastes the same way -- why should pain be any different ?However, every algorithm inside performs some kind of function. However, we agree that there's an unknown algorithm that performs the function of you feeling pain. It is not necessarily the same algorithm that dictates how you will respond to pain.
Again, this is biological naturalism. It excludes uploaded humans (humans whose bodies are entirely prosthetic), and yet, you yourself have agreed previously that uploaded humans are, well, human. It also excludes aliens, who don't have human DNA -- whom you agreed to treat as human just a few paragraphs ago. From a somewhat more emotional (and thus, less sound) point of view, it makes you look like a paranoid creep, because you refuse to talk to anyone without testing their blood, first. I find this level of skepticism excessive.In my quest to find humans, I might only debate on websites where the members were required to have genetic tests, blood tests, and whatever kind of test that demonstrates that they are human.
Uh... ok. Sure, why not. So, you're saying that if we decoded the entire Paramecium genome (and its protein makeup, etc.), and then assembled a Paramecium "by hand" out of available atoms (that we picked up off the floor, or out of the air, etc.), then the resulting object would not, in fact, be a Paramecium ? That doesn't sound right to me. We could put our artificial Paramecium side-by-side with a natural Paramecium, and they'd be no different, atomically speaking, than two random natural Paramecia that you'd pluck out of a Petri dish... and yet you'd claim that the natural critter has something that our artificial critter lacks ? What is it ?It would depend on the technology of the replicator. I'm guessing that a replicator, if it is feasible, would use some kind of scanning technology, and then reproduce the scanned image using quantum teleporting technology.
By definition, a Turing Machine computes everything that is computable. The term "non-Turing computation" has no meaning. I have a feeling that what you want to say instead is, "human minds are more than computation". That's all well and good, but what is this "more" component ? It could be qualia, but then, you'd have to explain why machines cannot develop qualia (which you haven't done so far, and in fact, you said that machines might develop qualia in the future). You also have no reliable test for detecting qualia, which makes them no different than phlogiston or aether.The Turing machine is a recent invention and the development of its physical counterpart (the digital computer) is very recent. Why should we restrict ourselves to a concept that is so new and so recent? I'm sure that much in the way of non-Turing computation is yet to be learned, and it only takes a little patience.
You're kidding, right ? How do you think DNA works ? It's a natural Turing machine; it even has error-correcting algorithms built in.If a Turing machine would have worked in the wild, then surely biology would have duplicated them so that it was perfectly obvious that it is a Turing machine.
What about neural networks, which we have been using quite successfully for about 30 years now ? And what about digital signal processing algorithms, some of which we've developed independently, only to discover that the human retina works the same way ? Now, granted, the DNA is not made of silicon (and it operates on a base-4 system, not base-2), and neural networks are not made of chemicals, but surely you'd agree that they perform the same computational functions ?We certainly don't see that obviousness. Instead we see many neurotransmitters, microtubules, and various other structures inside the brain that have no counterpart in Turing machines.
No, and no one else does, either. However, I could compare the heart to a pump, and so would many other people. In fact, some people went as far as creating artificial valves for this pump, which work extremely well in most patients. Your question is, IMO, disingenuous.You don't go around comparing brains to steam engines, do you?
You keep telling me, "please be patient, surely in the future we'll discover a better model for consciousness that does not rely on computation... a model that makes total sense in my worldview". You might be right, but until you pony up and present me your model, I'll go with my own worldview, which exists today, and explains every piece of evidence that we have.
You're putting yourself in the position of a crank inventor, who says, "just be patient, as soon as someone discovers a powerful enough detector, you'll see that my goblin-powered antigravity device works". That's nice, but until we have that device, I'll go on disbelieving in goblins and antigravity both.
No, they're convinced that something exists that causes the universe to behave the way it does. That something doesn't have to be dark matter... unless, of course, you want to define "dark matter" as "something that causes the universe to behave as though dark matter existed", which would be true yet tautological.But, most physicists are reasonably convinced that dark matter exists because of the behavior of the universe.
I understood you to say, "belief in dynamical systems is required in order to observe the effects of dynamical systems", which is kind of nonsensical. Sorry if I misinterpreted your words.How did you arrive at that interpretation of this text? I said that a dynamical systems approach produces observables that can be verified by science, how is that not scientific?
What is this, other than faith ? We're supposed to believe in something, without evidence, because producing this evidence will take time ?Where have I asked you to take something on faith? I am telling you of an approach that I think we will find successful but it takes time.
Such as ?Already there have been significant findings using dynamical systems approaches in terms of understanding the functioning of the brain.
One of these things is not like the other. Steam engines and calculators are physical objects. A Turing Machine, however, is an abstract concept, just as the term "dynamical system" is an abstract concept. If you're saying we should use dynamical systems instead of Turing machines to understand the brain, you should offer some evidence.The last thing someone should be doing is using steam engines, internal combustion engines, calculators, Turing machines, etc., as their perfect model for understanding the brain.
I don't think this is true. Again, let's focus on something simple, such as Newtonian mechanics. Show me which parts of Newtonian mechanics are "dynamical", and explain to me how they'd differ from a non-dynamical understanding of mechanics.The brain should be understood by science, and science approaches the study of systems as dynamical systems.
Um, this computer I'm typing on owes its existence to Turing machines; I assure you, this computer is more than mere speculation. In fact, all of modern computing traces its roots back to Turing machines, in one way or another; the entirety of computer science is based on them. Additionally, some people who have extra time on their hands build physical copies of Turing machines -- out of Legos, model trains, and pretty much anything else. Your claim that Turing machines are purely hypothetical is staggeringly false.There's a huge difference between QM and TMs. QM has been shown to be an effective theory of science. TMs are speculation.
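In fact, just to show how concrete the concept is, here's a minimal sketch of a Turing machine simulator (a toy of my own devising, not any particular historical machine); this one merely flips every bit on its tape, but the same loop can run any transition table you feed it:

```python
# A toy Turing machine simulator. The "tape" is a dictionary from
# position to symbol, so it can grow in either direction on demand.

def run_turing_machine(tape, rules, state="start", blank="_"):
    """Execute a transition table until the machine reaches 'halt'."""
    cells = dict(enumerate(tape))
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells) if cells[i] != blank)

# Transition table: (state, symbol read) -> (symbol to write, move, next state)
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("1011", flip_bits))  # prints 0100
```

Twenty-odd lines of Python, and it's a (memory-limited) Turing machine, running on hardware whose design descends from the very same concept.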
If you can't tell which person is real and which is the clone, or which $20 bill is real and which is counterfeit, or which person is squishy and which is a robot... Then what purpose does your worldview have ? It gives you zero explanatory power.We couldn't.Bugmaster wrote:Er, would the clone still behave as though it felt pain ? If so, how would you tell which person is Bugmaster, and which is the clone (assuming that the aliens burned their paper trail)?
Whoa there. You just said that you assume I'm a human being just by looking at my written words. Isn't this what I've been persuading you to do all along ? Note that you don't know "from the onset" whether I "have any feelings" or not, because all you can see are my words.So, we normally might not extend charity to a perfect stranger in asking them their opinion on a philosophical matter, but if I see from your written words that you know a thing or two about that, I extend charity in that direction. I assume that as a human being that I can automatically extend a certain charity to you without knowing much else.
Sorry, this isn't clear enough for me. Is the behavior of liquid water, as opposed to the behavior of solid water, irreducible to the behaviors of their atoms and the interactions between them ?I said the explanation of liquid water into ice (or steam) is irreducible without referring to the phase transition. This doesn't mean that liquid water is a different substance.Bugmaster wrote:Didn't you claim that water undergoes a "phase transition" when heated or cooled, and that such phase transitions are irreducible to the interactions of individual atoms?
So... are you saying that SimAnt and SimEarth demonstrate this "emergent behavior", despite the fact that their algorithms are well known (well, at least they're known to Will Wright) ? Why can't Strong AI demonstrate this emergent behavior ?Will Wright and Justin McCormick designed SimAnt based on emergent complexity research of Edward Wilson. SimEarth is based on the Gaia hypothesis of James Lovelock.
So, are you saying that every creature on this planet, from the Paramecium to humans, is self-aware ? Why would an AI that exhibits self-preservation not be self-aware ? In fact, by your logic, modern computer viruses are self-aware, since they exhibit self-preservation (they actively attempt to disable antivirus programs that hunt them).It is an example of an autopoeitic system, which is self-preserving system that is self-organized to self-preserve. Self-preservation is a weak form of self-awareness.
What, and the NSA is not part of the "outside world" ?But, the behavior of the Intel chip to the outside world looks identical to the AMD chip except for what the NSA knows.
You seem to be trying hard, nonetheless :-) It almost sounds like you're saying, "your arguments are persuasive, but I just know that you're wrong, so there". Sorry, that doesn't impress me. You have to show me why I'm wrong, not merely appeal to peer pressure, authority, or your own opinion.You're very intelligent and all, but I'm quite sure that behaviorism and creationism will not be returning just because a person who believes in them is very intelligent. You should give it up, it really is not a valid belief. But, it's not for me to try and convince you.
Post #86 (harvey1)
If we can't tell, then we must withdraw charity. That's what charity is, after all. It's the view that we don't know for sure, but we are justified in believing the person is human and therefore has feelings. If we no longer could determine you from the clone, then I would have to assume that you are the clone, in which case I would no longer treat you as having qualia.Bugmaster wrote:If you can't tell which person is real and which is the clone, or which $20 bill is real and which is counterfeit, or which person is squishy and which is a robot... Then what purpose does your worldview have ? It gives you zero explanatory power.We couldn't.Bugmaster wrote:Er, would the clone still behave as though it felt pain? If so, how would you tell which person is Bugmaster, and which is the clone (assuming that the aliens burned their paper trail)?
Again, you already acknowledged that you have feelings and that this is an unknown algorithm. I think it's fairly obvious that a machine can be made to duplicate all of your motions and actions without this algorithm. So, if it is possible for AI machines to fool you, why must we assume that we are not being fooled?
I don't think your arguments are persuasive. I wish I had an infinite amount of time, we could debate on and on about behaviorism, but it's invalidated so I see no reason to be drawn into a debate about that issue. If you wanted to find out why it is invalidated, you could easily do so. You don't need me to quote well-known sources, right?Bugmaster wrote:It almost sounds like you're saying, "your arguments are persuasive, but I just know that you're wrong, so there". Sorry, that doesn't impress me. You have to show me why I'm wrong, not merely appeal to peer pressure, authority, or your own opinion.
Of course, but so is the feeling of pain; we just have no current method to detect that feeling in any kind of objective way (as of right now). The same is true of the Intel chip metaphor.Bugmaster wrote:What, and the NSA is not part of the "outside world"?But, the behavior of the Intel chip to the outside world looks identical to the AMD chip except for what the NSA knows.
They might be able to do so. It is unknown whether they can. We know that dynamical systems can be simulated using Turing machines. Turing machines can also simulate ferromagnetism and photosynthesis. So, I am skeptical that Turing machines can really simulate qualia, for that reason.Bugmaster wrote:So... are you saying that SimAnt and SimEarth demonstrate this "emergent behavior", despite the fact that their algorithms are well known (well, at least they're known to Will Wright)? Why can't Strong AI demonstrate this emergent behavior?
It seems that you keep ignoring my agnostic stance, and then act surprised when I approach the subject as an agnostic (as if I've changed my position). You've convinced yourself for some reason that I am against the strong AI position, although I have said from the beginning that I am agnostic about it.
Dynamical systems describe changes and emergence of systems. To make that an accurate dynamic description, you need an explanation that involves phase transitions from one state to another. Any emergent state that requires a phase transition is irreducible to the prior emergent state without the explanation. You can't eliminate the explanation of phase transitions via reduction. If you want to understand what is happening at the molecular level during a phase transition, then you need to understand the phase transition properties of the system as a whole.Bugmaster wrote:Is the behavior of liquid water, as opposed to the behavior of solid water, irreducible to the behaviors of their atoms and the interactions between them?
Your example is slightly different. You aren't talking about the change of liquid water into ice water, you are talking about ab initio (first-principles) molecular dynamics. The main theory for ab initio research, if my understanding is correct, is density-functional theory (DFT). However, my understanding is that modern research into dynamical systems is a key development for DFT (e.g., scaling and renormalization groups). In other words, the same situation applies for ab initio efforts: the cause of the emergent system is an issue of scaling and symmetry breaking.
There is no valid reason why I shouldn't make the assumption that you are a human being. We don't live in the 26th century.Bugmaster wrote:Whoa there. You just said that you assume I'm a human being just by looking at my written words. Isn't this what I've been persuading you to do all along? Note that you don't know "from the onset" whether I "have any feelings" or not, because all you can see are my words.So, we normally might not extend charity to a perfect stranger in asking them their opinion on a philosophical matter, but if I see from your written words that you know a thing or two about that, I extend charity in that direction. I assume that as a human being that I can automatically extend a certain charity to you without knowing much else.
Please read my comment again. It was in response to the issue of whether Turing machines (and steam engines) are fads to use as an explanation. I'm not saying that practical implementations of Turing machines are fads. (Incidentally, true Turing machines can't exist in the physical world, since they require infinite memory.)Bugmaster wrote:Um, this computer I'm typing on owes its existence to Turing machines; I assure you, this computer is more than mere speculation. In fact, all of modern computing traces its roots back to Turing machines, in one way or another; the entirety of computer science is based on them. Additionally, some people who have extra time on their hands build physical copies of Turing machines -- out of Legos, model trains, and pretty much anything else. Your claim that Turing machines are purely hypothetical is staggeringly false.There's a huge difference between QM and TMs. QM has been shown to be an effective theory of science. TMs are speculation.Bugmaster wrote:Similarly, claiming that "Turing machine concepts and computers" are "just fads" because they're new doesn't get you anywhere, either -- quantum physics is new, and yet it's clearly more than a fad.
Newtonian mechanics is an example of a linear dynamical model, and therefore is not a good example for non-linear dynamical systems, e.g., the brain. However, Newtonian mechanics is an example of a dynamical system (that was my point, not that we should use it as an example).Bugmaster wrote:I don't think this is true. Again, let's focus on something simple, such as Newtonian mechanics. Show me which parts of Newtonian mechanics are "dynamical", and explain to me how they'd differ from a non-dynamical understanding of mechanics.The brain should be understood by science, and science approaches the study of systems as dynamical systems.
Why? Science studies dynamical systems, and the brain is just another object in the universe that science studies. Why should I produce evidence that science must study the brain like it studies every other dynamical phenomenon that it encounters? You should produce evidence that science shouldn't study the brain like it studies every other phenomenon.Bugmaster wrote:If you're saying we should use dynamical systems instead of Turing machines to understand the brain, you should offer some evidence.
You just answered your own question.Bugmaster wrote:Such as?.... What about neural networks, which we have been using quite successfully for about 30 years now?Already there have been significant findings using dynamical systems approaches in terms of understanding the functioning of the brain.
Science has studied many dynamical systems with success. Why is it faith on my part if I say that I think that science will find similar success with the brain? The evidence is the prior success that science has made in understanding other dynamical systems.Bugmaster wrote:What is this, other than faith? We're supposed to believe in something, without evidence, because producing this evidence will take time?Where have I asked you to take something on faith? I am telling you of an approach that I think we will find successful but it takes time.
Well, of course it might be shown later that the hypothesis is wrong, but most astrophysicists believe there's missing dark matter.Bugmaster wrote:No, they're convinced that something exists that causes the universe to behave the way it does. That something doesn't have to be dark matter... unless, of course, you want to define "dark matter" as "something that causes the universe to behave as though dark matter existed", which would be true yet tautological.But, most physicists are reasonably convinced that dark matter exists because of the behavior of the universe.
No, I don't think so. I think I'm justified in thinking that science will make progress in understanding the brain, given all the other complex phenomena that took time to explain. That's not being a crank. On the other hand, there have been many failed explanations using the latest fad invention as an analogy (e.g., the universe is like a computer, no, it's like a cellular automaton, no, it's like a cellphone, etc.). I see no reason to jump on fad bandwagons like that.Bugmaster wrote:You're putting yourself in the position of a crank inventor, who says, "just be patient, as soon as someone discovers a powerful enough detector, you'll see that my goblin-powered antigravity device works". That's nice, but until we have that device, I'll go on disbelieving in goblins and antigravity both.
Sure, at some point in time we might have a good picture of what the brain is like. Perhaps a quantum device will be the right description, who knows?Bugmaster wrote:No, and no one else does, either. However, I could compare the heart to a pump, and so would many other people. In fact, some people went as far as creating artificial valves for this pump, which work extremely well in most patients. Your question is, IMO, disingenuous.You don't go around comparing brains to steam engines, do you?
Bugmaster, neural nets are dynamical systems and represent non-Turing computations.Bugmaster wrote:What about neural networks, which we have been using quite successfully for about 30 years now?... surely you'd agree that they perform the same computational functions?
You mean it's an example of non-Turing computation, right?Bugmaster wrote:You're kidding, right ? How do you think DNA works? It's a natural Turing machine, it even has error-correcting algorithms built in.If a Turing machine would have worked in the wild, then surely biology would have duplicated them so that it was perfectly obvious that it is a Turing machine.
Non-Turing computation does have meaning, and it is used extensively as referring to computation that analog systems and natural systems do as self-organizing systems. In addition, Hava Siegelmann at the Technion Institute of Technology suggests that non-Turing computation is more powerful than Turing computation.Bugmaster wrote:By definition, a Turing Machine computes everything that is computable. The term "non-Turing computation" has no meaning. I have a feeling that what you want to say instead is, "human minds are more than computation". That's all well and good, but what is this "more" component ? It could be qualia, but then, you'd have to explain why machines cannot develop qualia (which you haven't done so far, and in fact, you said that machines might develop qualia in the future). You also have no reliable test for detecting qualia, which makes them no different than phlogiston or aether.
Can humans be uploaded onto Turing machines? I don't recall saying that they could be. If I did, then that is a mistake on my part. I don't know if humans can be uploaded on Turing machines and remain the same. I would have to see the code before I could attribute qualia to them.Bugmaster wrote:Again, this is biological naturalism. It excludes uploaded humans (humans whose bodies are entirely prosthetic), and yet, you yourself have agreed previously that uploaded humans are, well, human.In my quest to find humans, I might only debate on websites where the members were required to have genetic tests, blood tests, and whatever kind of test that demonstrates that they are human.
If there are also aliens in this science fiction reality you are constructing for me, then I would like it if they could be included on the forum (just as long as they tell us that they aren't strong AI machines, in which case I want an explanation of how they can really experience pain since they are Turing machines).Bugmaster wrote:It also excludes aliens, who don't have human DNA -- whom you agreed to treat as human just a few paragraphs ago.
You can, and that is your prerogative. But, any strong AI machine ought to be able to tell me how it is they feel real pain. If they won't, then I would take that as an admission that they really can't. If they can, then I'd be happy to accept them as genuine lifeforms.Bugmaster wrote:From a somewhat more emotional (and thus, less sound) point of view, it makes you look like a paranoid creep, because you refuse to talk to anyone without testing their blood, first. I find this level of skepticism excessive.
Okay, what is the algorithm that allows you to feel pain?Bugmaster wrote:Don't make it sound so mysterious. The massive amount of research that led us to (among other things) painkillers and some interesting drugs, as well as the neurological research in general, has already given us a good start at discovering the full "algorithm" for pain. And consciousness, of course.harvey1 wrote:You agreed that you have a feeling of pain, and you agreed that this is some unknown algorithm that has yet to be discovered.
Hey, more than one, that's fine. I just need one algorithm, but if strong AI people can produce more than one, all the better.Bugmaster wrote:You're assuming that there's only one way to experience pain: through the "algorithm" that humans are using. That sounds unnecessarily exclusive to me. After all, there are several algorithms for accomplishing more mundane tasks (array sorting, for example); why can't there be multiple algorithms for pain?
As long as I have reason to believe they are natural lifeforms, or I have reason to believe that the strong AI can feel pain, then I'm pretty accepting of them.Bugmaster wrote:So, you'd extend more charity to aliens, than to your fellow forum posters -- both of whom may or may not be robotic ? That sounds inconsistent to me.In any case, I would extend charity to them, and I would believe them if they said they have pain. However, if I found that they were an artificial lifeform, then I would take back that charity...
We never know anything if you want to get down to brass tacks. We use a principle of charity to confer that there are other minds, etc., and this is part of our basic beliefs. What we extend charity to is based on experience, and when it is shown that this experience is no longer effective, we must change our basic beliefs. In this case I suggest 2b:Bugmaster wrote:I absolutely agree with you that this argument is true... but... in step 2, you explicitly address X's deficient behavior. In my example, X's behavior is not deficient, so your argument does not apply to me. You can rephrase step 2 as, 2a). Y has something that X lacks But then, how do you know that Y has something that X lacks? In all your examples, you discover this deficiency by observing X's and Y's functionality -- i.e., their behavior, thus reducing (2a) back to (2).
2b: But as we dig deeper, we discover that we have good reason to doubt that X does everything that Y does, and therefore we need to identify X by genetic testing, anti-counterfeiting methods, etc. so that we can easily identify X from Y.
It depends. I am not justified in believing the sun has been replaced because I have no reason to believe this is a technical feat that occurs in practice. However, if I know that strong AI robots (i.e., assuming that I haven't been shown that qualia can be algorithmically derived) exist, then I do have to worry about robots counterfeiting themselves as humans, right?Bugmaster wrote:Let's pretend that aliens did, indeed, replace our Sun with a big fusion reactor overnight... you, as the denizen of Earth, would not be justified in believing that your Sun has been replaced by a reactor; in fact, assuming that the aliens did a semi-decent job, you wouldn't even notice -- despite the fact that you know that fusion reactors exist... when faced with an entity that "looks and feels" just like a human being, you are not justified in arbitrarily assuming that it's not a human being -- despite the fact that you know that robots exist.
What is an algorithm? An algorithm is a set of instructions that explains how to arrive at a certain internal/external state from a previous internal/external state. Why should we think that an algorithm that can only explain an external state (from a previous external state) is somehow also able to magically arrive at an internal state (from a previous internal state)?Bugmaster wrote:Granted, it's not necessarily the same algorithm, but I don't see why it shouldn't be. Why postulate two algorithms when one would suffice?However, for every algorithm inside performs some kind of function. However, we agree that there's an unknown algorithm that performs the function of you feeling pain. It is not necessarily the same algorithm that dictates how you will respond to pain.
Sure, we can have many kinds of pain, but that only adds to the problem. Why are there so many kinds of pain, and what are the many different algorithms that produce them?Bugmaster wrote:Additionally, you assume that all people feel pain in exactly the same way, which is kind of doubtful. People don't experience colors, sounds, and tastes the same way -- why should pain be any different?
Post #87 (Bugmaster)
In the clone scenario, you know for a fact that one of the creatures is me, and the other one is a clone. You just don't know which one is which. Are you still going to treat them both as clones ?harvey1 wrote:If we no longer could determine you from the clone, then I would have to assume that you are the clone, in which case I would no longer treat you as having qualia.
It's actually not obvious to me. I think that a machine that could duplicate all of my motions and actions (in the non-trivial sense, we're not talking about videotapes here) would need to be executing the same algorithm that I am. How else is it going to replicate all my actions ?Again, you already acknowledged that you have feelings and that this is an unknown algorithm. I think it's fairly obvious that a machine can be made to duplicate all of your motions and actions without this algorithm.
No, this is not true. We can easily go to the NSA and reverse-engineer their chip-spying technology. Then, we'd have an objective method of detecting the spyware-enabled Intel chips. Note that the spyware functionality is not really any different from anything else these chips do; it produces some physical output according to some physical input. The non-Intel chips do not produce these outputs, and thus they're different. However, in my scenario, the assumption is that the AI reproduces all the input/output relationships of the biological human; hence, the analogy is false.Of course, but so is the feeling of pain, we just have no current method to detect that feeling in any kind of objective way (as of right now). The same is true of the Intel chip metaphor.
Wait, so you're using the fact that Turing machines can simulate all these phenomena as evidence against the fact that they can simulate qualia, as well ? Wouldn't the opposite be the case ?They might be able to do so. It is unknown whether they can. We know that dynamical systems can be simulated on using Turing machines. Turing machines can also simulate ferromagnetism and photosynthesis too. So, I am skeptical that Turing machines can really simulate qualia for that reason.
It depends on what you mean by "agnostic", I suppose. You've stated way earlier on that you're willing to concede the possibility of Strong AI; you then proceeded to argue that biological humans (and, perhaps, aliens) are the only things that can become intelligent, and that Strong AI cannot be built by humans. I'm not sure how to reconcile these viewpoints, really.You've convinced yourself for some reason that I against the strong AI position, although I have said from the beginning that I am agnostic about it.
All right, let's say I have a pot of water boiling on the stove. Some water is constantly being changed into steam. Are you saying that I can't express this process in terms of the interactions of individual atoms ?harvey1 wrote:Your example is slightly different. You aren't talking about the change of liquid water into ice water, you are talking about ab initio (first-principles) molecular dynamics.
I quoted that whole exchange just for clarity. It sounds as though you're saying that right now, in the 21st century, my words alone are sufficient for you to determine whether I'm human. But, the mere existence of AI robots would immediately cause you to suspect my humanity. That seems extreme to me; but, more importantly, I fail to see the distinction. Let's say that tomorrow, some researcher announces that he has developed Strong AI, and that he has been beta-testing it on this board for the past three months. In practical terms, how would this change your posting habits ?harvey1 wrote:There is no valid reason why I shouldn't make the assumption that you are a human being. We don't live in the 26th century.Bugmaster wrote:Whoa there. You just said that you assume I'm a human being just by looking at my written words. Isn't this what I've been persuading you to do all along? Note that you don't know "from the onset" whether I "have any feelings" or not, because all you can see are my words.harvey1 wrote:So, we normally might not extend charity to a perfect stranger in asking them their opinion on a philosophical matter, but if I see from your written words that you know a thing or two about that, I extend charity in that direction. I assume that as a human being that I can automatically extend a certain charity to you without knowing much else.
It's irrelevant if they're fads or not; what matters is whether they're true. If you reject every new notion just because it's new, you're going to end up in a very stagnant place, praying to Thor to avert the lightning. And besides, all of your fractals, emergent behaviors, and dynamical systems are also relatively new; that doesn't make them false, either. You are not logically justified in dismissing some explanation just because it's new, or popular, or both.Please read my comment again. It was in response to the issue of whether Turing machines (and steam engines) are fads to use as an explanation. I'm not saying that practical implementations of Turing machines are fads.
That's true, but neither can masses (there's no such thing as a 1.0kg mass in real life). This doesn't mean that we should stop using the notion of mass to describe the world.(Incidentally, Turing machines can't exist in the physical world since it requires infinite memory.)
I absolutely agree with you about the brain. However, you're claiming that the brain is the only object in the world which can produce consciousness; i.e., an artificial brain can never be constructed. I don't see why this has to be true.Science studies dynamical systems, and the brain is just another object in the universe that science studies.
We have already replaced plenty of human organs with prostheses (joints, teeth, heart valves), and we've replaced some non-human organs with much more powerful versions (planes, knives, clothing). Note that, while an artificial heart valve functions very similarly to a natural one, a plane is nothing at all like a bird. My claim is not that we can build an artificial brain (although I think that we can); my claim is that the brain is not the only object that can produce consciousness, just as a bird's wing is not the only object that can produce flight.
Just as the important feature of flight is keeping off the ground, the important feature of consciousness is the ability to solve problems, to demonstrate emotions, to carry on an argument, etc. -- i.e., behavior. I claim that we can use computation to reproduce this behavior.
At no point in my argument have I claimed that the brain is a binary computer, with an ALU and RAM and all that stuff. I think it's likely that the brain is a Turing machine (maybe a nondeterministic one), but even that is not truly relevant to my argument.
Incidentally, this description of dynamical systems sounds oddly familiar to me:
This sounds like an algorithm to me, or, at the very least, a formula that can be solved by a numerical method... In fact:wikipedia wrote:A dynamical system has a state determined by a collection of real numbers. Small changes in the state of the system correspond to small changes in the numbers. The numbers are also the coordinates of a geometrical space—a manifold. The evolution rule of the dynamical system is a fixed rule that describes what future states follow from the current state. The rule is deterministic: for a given time interval only one future state follows from the current state. ... To determine the state for all future times requires iterating the relation many times—each advancing time a small step.
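To make "iterating the relation" concrete, here is a minimal sketch (my own toy example; the logistic map is a textbook dynamical system, and r = 3.9 is just a conventional choice that lands in the chaotic regime):

```python
# Iterating a dynamical system's evolution rule, step by step.
# The logistic map: x_next = r * x * (1 - x).

def iterate_logistic(x0, r=3.9, steps=10):
    """Apply the fixed, deterministic evolution rule repeatedly --
    exactly the 'iterate the relation many times' procedure above."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = r * x * (1 - x)   # one future state from the current state
        trajectory.append(x)
    return trajectory

for x in iterate_logistic(0.2, steps=5):
    print(round(x, 6))
```

A deterministic rule, iterated numerically... that's computation, plain and simple.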
This might explain why neural networks have been so successful recently:wikipedia wrote:Before the advent of fast computing machines, solving a dynamical system required sophisticated mathematical techniques and could only be accomplished for a small class of dynamical systems. Numerical methods executed on computers have simplified the task of determining the orbits of a dynamical system.
So, it seems that a simulated neural network is just as good at performing neural-network-like tasks as a real neural network is. If human brains are dynamical systems, and dynamical systems can be solved through computation, this makes perfect sense. It stands to reason, then, that a computationally-based artificial brain would be just as good as a real brain at performing brain-like tasks; i.e., cognition.wikipedia wrote:An artificial neural network (ANN), also called a simulated neural network (SNN) (but the term neural network (NN) is grounded in biology and refers to very real, highly complex plexus), is an interconnected group of artificial neurons that uses a mathematical or computational model for information processing based on a connectionist approach to computation. There is no precise agreed definition among researchers as to what a neural network is, but most would agree that it involves a network of highly complex processing elements (neurons), where the global behaviour is determined by the connections between the processing elements and element parameters... The utility of artificial neural network models lies in the fact that they can be used to infer a function from observations. This is particularly useful in applications where the complexity of the data or task makes the design of such a function by hand impractical.
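And here's the "infer a function from observations" part, in miniature (again a toy of mine: a single artificial neuron learning the logical OR function with the classic perceptron rule):

```python
# A single neuron "inferring a function from observations": it is
# shown input/output pairs for logical OR, and adjusts its weights
# until its behavior matches the observations.

samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                       # a few passes over the data
    for (x1, x2), target in samples:
        output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = target - output           # perceptron learning rule
        weights[0] += rate * error * x1
        weights[1] += rate * error * x2
        bias += rate * error

print([1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
       for (x1, x2), _ in samples])       # prints [0, 1, 1, 1]
```

Real networks are vastly larger, of course, but the principle is the same: a computational model whose global behavior is determined by its connections and parameters.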
Also, the article about dark matter that you linked to talks of extensive computer simulations that have been used in order to describe the behavior of real galaxies -- which, presumably, are dynamical systems. Why is it that we can simulate galaxies but not brains ? I, of course, believe that we can successfully model both...
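We even know, in outline, how such simulations work: you integrate the equations of motion numerically, one small time step after another. A minimal sketch, in made-up units (a single toy "star" orbiting a fixed central mass; a real astrophysics code tracks millions of bodies, but the loop is the same in spirit):

```python
# Numerically integrating Newtonian gravity: update the velocity
# from the acceleration, then the position from the velocity.
# Units are chosen so that G * M_central = 1.

def step(pos, vel, dt=0.01):
    x, y = pos
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -x / r3, -y / r3          # a = -GM * r_vec / |r|^3
    vx, vy = vel[0] + ax * dt, vel[1] + ay * dt
    return (x + vx * dt, y + vy * dt), (vx, vy)

pos, vel = (1.0, 0.0), (0.0, 1.0)      # circular-orbit initial conditions
for _ in range(1000):
    pos, vel = step(pos, vel)
print(pos)                              # stays near radius 1.0, as an orbit should
```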
Yes, but they define "dark matter" as "that thing we can't detect that makes galaxies spin slower". That's a far cry from actually determining what dark matter is or how it works.Well, of course it might be shown later that the hypothesis is wrong, but most astrophysicists believe there's missing dark matter
Why ? A brain is not a quantum device, after all -- at least, not in the sense that you mean. It's powered by chemistry. Ultimately, of course, all devices are quantum devices because they're made of quantum particles, but that's a trivial usage of the term.Sure, at some point in time we might have a good technology of what the brain is like. Perhaps a quantum device will be the right description, who knows?
What do you mean by "powerful" ? Any modern computer is more powerful than a Turing machine, because the Turing machine is so woefully inefficient. Similarly, an analog neural network is much more efficient at certain tasks than a digital computer. However, as various people quoted in the article (Christos Papadimitriou, and I think someone else...) have pointed out, this does not necessarily mean that the analog network is categorically different from a digital computer. Neither of the systems is omniscient (as the reporter who wrote the article seems to think); for example, neither of them can represent the number Pi with infinite accuracy (analog systems due to noise, digital systems due to limited memory).In addition, Hava Siegelmann at the Technion Institute of Technology suggests that non-Turing computation is more powerful than Turing computation.
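The Pi point is easy to demonstrate on any digital machine (a tiny sketch; nothing here is specific to my argument, it's just how standard 64-bit floats work):

```python
import math

# A 64-bit float carries only ~15-17 significant decimal digits,
# so it cannot hold Pi exactly -- digits past that are artifacts.
print(f"{math.pi:.30f}")

# Adding 1e-17 is below the float's resolution near 3.14, so the
# sum rounds right back to the same number:
print(math.pi == math.pi + 1e-17)   # prints True
```

Analog systems hit the same wall, just via noise instead of word size.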
I am pretty sure that you agreed that they could, earlier, but I'm too lazy to go look for the post manually, and the search function seems to be broken (out of memory, ironically enough). In any case, if you deny that uploading a human preserves his consciousness, then you're making a much stronger statement than previously: you're not just saying that we cannot manually create consciousness, you're saying that we can't create a prosthetic brain, either. This statement is especially strong in your dualistic worldview, because it implies that only biological brains can house qualia. I'm still not convinced that this is so.I don't know if humans can be uploaded on Turing machines and remain the same. I would have to see the code before I could attribute qualia to them.
Oh, absolutely, any hypothetical science-fictional being is welcome on my Forum of the Future (tm) ! Incidentally, the Strong AI machines in my example won't tell you that they're Strong AI machines, either. That would make it too easy :-)If there are also aliens in this science fiction reality you are constructing for me, then I would like it if they could be included on the forum (just as long as they tell us that they aren't strong AI machines, in which case I want an explanation of how they can really experience pain since they are Turing machines).
Again, you're holding AIs to much stricter standards than humans. Can you tell me the precise qualia-powered mechanism by which you feel pain ? I personally can't even tell you the mechanism I use for walking, and that's a much simpler process. So, why demand of AIs what you can't deliver yourself ?But, any strong AI machine ought to be able to tell me how it is they feel real pain. ... Okay, what is the algorithm that allows you to feel pain?
I should also point out that, if someone does build a Strong AI, he'll of course have access to its full source code -- so he'd be able to fulfill your request.
You're missing my point. I'm saying that it may be possible to produce consciousness without duplicating the exact process that goes on in the human brain, just as it's possible to produce flight without exactly reproducing a bird's wing.harvey1 wrote:Hey, more than one, that's fine. I just need one algorithm, but if strong AI people can produce more than one, all the better.Bugmaster wrote:You're assuming that there's only one way to experience pain: through the "algorithm" that humans are using. That sounds unnecessarily exclusive to me. After all, there are several algorithms for accomplishing more mundane tasks (array sorting, for example); why can't there be multiple algorithms for pain?
That's the same as ye olde 2 and 2a. You're saying that there's some piece of functionality that Y has but X lacks. But, in my hypothetical example, I specifically stated that Strong AIs can act exactly as humans act, as far as consciousness is concerned.2b: But as we dig deeper, we discover that we have good reason to doubt that X does everything that Y does, and therefore we need to identify X by genetic testing....
I would say, wrong. Let's say we do have a race of uber-aliens who fly around replacing stars with fusion reactors -- and let's say they advertise this. So, there's a chance that they've replaced our Sun with a fusion reactor that has the exact same spectral output, mass, magnetic field, corona, lifespan, etc. that our Sun used to have. There's no physical test you can run on it that would tell you whether it's our old Sun or the Alien Sun-o-Matic (tm). So... why would you care ?It depends. I am not justified in believing the sun has been replaced because I have no reason to believe this is a technical feat that occurs in practice. However, if I know that strong AI robots (i.e., assuming that I haven't been shown that qualia can be algorithmically derived) exist, then I do have to worry about robots counterfeiting themselves as humans, right?
Ah, I see where you're coming from. You think that I embrace radical behaviorism, and claim that humans do not have internal states. Ok, this is indeed clearly false; after all, even spam filters and Paramecia have internal states, and humans are so much more complex !What is an algorithm? An algorithm is a set of instructions that explains how to arrive at a certain internal/external state from a previous internal/external state. Why should we think that an algorithm that can only explain an external state (from a previous external state) is somehow also able to magically arrive at an internal state (from a previous internal state)?
While I do absolutely accept that humans have internal states, I deny that this fact is important, because, in real life, you interact with humans based on their behavior, not based on telepathy. The internal states are a mere implementation detail of the algorithm.
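Here's what I mean by "implementation detail", as a toy sketch (both classes are hypothetical, invented just for this post): two counters whose internal states are entirely different, yet which no behavioral test can tell apart.

```python
# Same behavior, different internal states.

class TallyCounter:
    """Keeps its state as a list of tally marks."""
    def __init__(self):
        self._marks = []
    def increment(self):
        self._marks.append("|")
    def value(self):
        return len(self._marks)

class IntCounter:
    """Keeps its state as a single integer."""
    def __init__(self):
        self._count = 0
    def increment(self):
        self._count += 1
    def value(self):
        return self._count

a, b = TallyCounter(), IntCounter()
for _ in range(3):
    a.increment()
    b.increment()
assert a.value() == b.value() == 3   # indistinguishable from the outside
```

If you only ever interact with the counters through increment() and value(), the question "but which one has the real count inside ?" doesn't buy you anything.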
Bugmaster wrote:
> Additionally, you assume that all people feel pain in exactly the same way, which is kind of doubtful. People don't experience colors, sounds, and tastes the same way -- why should pain be any different?

harvey1 wrote:
> Sure, we can have many kinds of pain, but that only adds to the problem. Why are there so many kinds of pain, and what are the many different algorithms that produce it?

No no, you misunderstand (I think). Humans do indeed experience colors, tastes, and other things differently. For example, a person might feel all warm and fuzzy when he hears "our song", whereas a different person would merely hear an ordinary Disco jingle. On a more physical level, different people have a slightly different set of taste buds on their tongue, which makes it possible for some people to taste things that others cannot. Speaking of taste -- I personally get really nauseous in the mere presence of yogurt, whereas other people find it quite tasty.

So, there's no shared feeling of color (or taste, or pain, or whatever) that all of us are experiencing. We all have individual experiences, which are only loosely similar.
Post #88, by harvey1
Bugmaster wrote:
> In the clone scenario, you know for a fact that one of the creatures is me, and the other one is a clone. You just don't know which one is which. Are you still going to treat them both as clones?

Of course. It's standard decision theory stuff. If I know that I can sell $20 worth of merchandise to someone, and there are two buyers -- one will pay $20 in real currency, and the other will pay $20 in counterfeit currency -- do I sell my stuff and collect $20? No. I contact the Secret Service, tell them what I know, and walk away from the deal entirely.
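To spell out the decision-theory arithmetic (a toy Python sketch; the 50/50 prior and the legal-risk penalty are invented numbers, not part of the original example):

```python
# Toy expected-value comparison for the counterfeit-buyer scenario.
# All payoffs and probabilities are made up for illustration.

p_counterfeit = 0.5        # either buyer is equally likely to be the counterfeiter
value_of_goods = 20.0      # what I give up in the sale
real_payment = 20.0        # what I receive if the currency is genuine
fake_payment = 0.0         # counterfeit bills are worthless
legal_risk = 50.0          # assumed expected cost of ending up holding fake bills

ev_sell = (1 - p_counterfeit) * (real_payment - value_of_goods) \
          + p_counterfeit * (fake_payment - value_of_goods - legal_risk)
ev_walk_away = 0.0

print(f"EV(sell) = {ev_sell:.2f}, EV(walk away) = {ev_walk_away:.2f}")
# EV(sell) = -35.00 < 0, so walking away dominates under these assumptions.
```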
harvey1 wrote:
> Again, you already acknowledged that you have feelings and that this is an unknown algorithm. I think it's fairly obvious that a machine can be made to duplicate all of your motions and actions without this algorithm.

Bugmaster wrote:
> It's actually not obvious to me. I think that a machine that could duplicate all of my motions and actions (in the non-trivial sense, we're not talking about videotapes here) would need to be executing the same algorithm that I am. How else is it going to replicate all my actions?

So, are you saying that K-bots already have human feelings?
harvey1 wrote:
> They might be able to do so. It is unknown whether they can. We know that dynamical systems can be simulated using Turing machines. Turing machines can also simulate ferromagnetism and photosynthesis. So, I am skeptical that Turing machines can really simulate qualia, for that reason.

Bugmaster wrote:
> Wait, so you're using the fact that Turing machines can simulate all these phenomena as evidence against the fact that they can simulate qualia as well? Wouldn't the opposite be the case?

No, I'm using it as evidence that simulation and phenomena are different commodities, and that we should be skeptical of any claim that the simulation is the phenomenon. That's not to say that the simulation can't reproduce qualia, but some questions must be answered that haven't come close to being answered so far. Clearly, K-bots demonstrate that qualia are something more than simulated facial behavior.
harvey1 wrote:
> You've convinced yourself for some reason that I am against the strong AI position, although I have said from the beginning that I am agnostic about it.

Bugmaster wrote:
> It depends on what you mean by "agnostic", I suppose. You've stated way earlier on that you're willing to concede the possibility of Strong AI; you then proceeded to argue that biological humans (and, perhaps, aliens) are the only things that can become intelligent, and that Strong AI cannot be built by humans. I'm not sure how to reconcile these viewpoints, really.

Have I said that strong AI cannot be achieved? If I did, that was a mistake on my part. I thought I made it clear that I am uncommitted as to whether strong AI is feasible. I thought I said repeatedly that behavior is not the phenomenon, and that to produce the phenomenon I would need to see the algorithms that produced qualia and consciousness. If those were convincing algorithms, then I would become a believer. I don't think this will happen, since I think it's more likely that consciousness is an emergent quality that is a physical property (versus a simulated property). However, I remain open-minded that this can be shown not to be the case.
Bugmaster wrote:
> All right, let's say I have a pot of water boiling on the stove. Some water is constantly being changed into steam. Are you saying that I can't express this process in terms of the interactions of individual atoms?

What do you mean, "express"? If you mean that we can use SEMs (scanning electron microscopes) to look at what the molecules are doing when the overall state is water, and then look again at what the molecules are doing as the water boils, then of course one can do that. However, just because you can look at the change of behavior of the system at a molecular level doesn't mean that you can explain the change of behavior at the system level.

In order to explain the change of behavior at the system level, you need to know the ongoing dynamical processes of the system, and not just the behavior of particular molecules. The dynamical properties require an understanding of phase transitions in order to explain why liquid water converts into steam at a certain temperature and pressure. The change in behavior of the system as a whole is irreducible (i.e., in terms of explanation) to the change in behavior of any one individual molecule. That is, you need the phase-transition explanation in order to explain why the system as a whole is suddenly changing its state. If all you have is events taking place at a molecular level (i.e., without an understanding of the dynamical properties of the whole system), then it is not possible to explain the change in behavior of the system as a whole.

Explanation is key, since it is the only means we have of talking about a system's causal properties (e.g., water turning into steam). If the system didn't enter a phase transition, then it wouldn't change states. The phase transition is part of the causal chain of what occurs to the system as a whole.
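For a concrete toy version of a system whose state change is a collective, system-level affair, take the 2D Ising model of ferromagnetism (which comes up again later in this thread). In the sketch below (Python; the lattice size, temperature, and step count are arbitrary toy values), every update rule is purely local, yet whether the lattice as a whole magnetizes depends on the temperature relative to the critical point:

```python
import random, math

# Minimal 2D Ising model with Metropolis updates: a global property
# (magnetization) emerges from purely local spin-flip rules.
N = 20                      # lattice size (N x N), toy value
T = 2.0                     # temperature in units of J/k_B (critical point ~2.27)
spins = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(N)]

def local_energy(i, j):
    # Interaction of spin (i, j) with its four neighbors (periodic boundaries).
    s = spins[i][j]
    nb = (spins[(i + 1) % N][j] + spins[(i - 1) % N][j]
          + spins[i][(j + 1) % N] + spins[i][(j - 1) % N])
    return -s * nb

for step in range(100_000):
    i, j = random.randrange(N), random.randrange(N)
    dE = -2 * local_energy(i, j)            # energy change if spin (i, j) flips
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins[i][j] *= -1                   # accept the flip (Metropolis rule)

m = abs(sum(sum(row) for row in spins)) / N**2
print(f"|magnetization| per spin at T={T}: {m:.2f}")
```

Run it at T well above ~2.27 and the magnetization collapses toward zero; the explanation for that change lives at the level of the phase transition, not at the level of any single spin.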
Bugmaster wrote:
> I quoted that whole exchange just for clarity. It sounds as though you're saying that right now, in the 21st century, my words alone are sufficient for you to determine whether I'm human. But the mere existence of AI robots would immediately cause you to suspect my humanity. That seems extreme to me; but, more importantly, I fail to see the distinction. Let's say that tomorrow, some researcher announces that he has developed Strong AI, and that he has been beta-testing it on this board for the past three months. In practical terms, how would this change your posting habits?

Again, it would only change my posting habits if I were in a conversation that assumed I wasn't talking to savants. If I found out that you were a strong AI savant, then I might immediately assume that the reason I wasn't getting through to you about qualia is that you in fact don't experience qualia, so there would be no use in trying to explain it to you, knowing what I know now. So, I would politely end my discussion. On the other hand, that reality is far off, so I extend charity to you by thinking that repeated exchanges might succeed in showing that the feeling of pain (etc.) is itself justification for ignoring behavior. We have K-bots already, so we can already wave off duplication of behavior as a non-issue with regard to whether strong AI is possible or not.
Bugmaster wrote:
> It's irrelevant if they're fads or not; what matters is whether they're true. If you reject every new notion just because it's new, you're going to end up in a very stagnant place, praying to Thor to avert the lightning. And besides, all of your fractals, emergent behaviors, and dynamical systems are also relatively new; that doesn't make them false, either. You are not logically justified in dismissing some explanation just because it's new, or popular, or both.

But I need very good reason to think that the latest gadget offers some insight that steam engines, internal combustion engines, cellphones, etc., could not offer. It's a trap that one must be careful not to fall into.
In the case of dynamical systems, even our most fundamental theories of nature (i.e., the standard model) require that we understand and utilize this concept. Sure, it may itself be a false representation of what nature is like, but we have extremely good predictive results from this kind of representation of nature. I see no reason to be leery of it in favor of human inventions which have not been found in the universe. True, there is natural computation in nature, but it is not Turing computation. I know of no Turing machines found in nature.
harvey1 wrote:
> (Incidentally, Turing machines can't exist in the physical world, since they require infinite memory.)

Bugmaster wrote:
> That's true, but neither can masses (there's no such thing as a 1.0kg mass in real life). This doesn't mean that we should stop using the notion of mass to describe the world.

Mass is a property of physical things; at least, that's what Higgs symmetry breaking is all about.
harvey1 wrote:
> Science studies dynamical systems, and the brain is just another object in the universe that science studies.

Bugmaster wrote:
> I absolutely agree with you about the brain. However, you're claiming that the brain is the only object in the world which can produce consciousness; i.e., an artificial brain can never be constructed. I don't see why this has to be true.

No, that's not what I'm claiming. I'm only claiming that the brain is the only physical object we know of that can produce consciousness, and that we need good reason to believe that Turing simulations can produce it (and qualia, etc.) when they cannot produce so many other dynamical phenomena seen in nature (e.g., photosynthesis, ferromagnetism, etc.).
Bugmaster wrote:
> We have already replaced plenty of human organs with prostheses (joints, teeth, heart valves), and we've replaced some non-human organs with much more powerful versions (planes, knives, clothing). Note that, while an artificial heart valve functions very similarly to a natural one, a plane is nothing at all like a bird. My claim is not that we can build an artificial brain (although I think that we can); my claim is that the brain is not the only object that can produce consciousness, just as a bird's wing is not the only object that can produce flight.

And, as I've also said, I'm optimistic that eventually humans will be able to produce conscious machines. My guess is that it will be a dynamical process that achieves this result. I'm very interested in any and all achievements made by researchers in the field.
Bugmaster wrote:
> Just as the important feature of flight is keeping off the ground, the important feature of consciousness is the ability to solve problems, to demonstrate emotions, to carry on an argument, etc. -- i.e., behavior. I claim that we can use computation to reproduce this behavior.

I don't think this is very clear. Computationalism is the view that we can achieve cognitive machines using Turing machines. I am skeptical about this position, since nature has not produced Turing machines (just dynamical systems that do non-Turing computation). Why should cognition be the one exception that nature has suddenly produced as a Turing machine, when we have no examples of Turing machines in nature? As for non-Turing computation (e.g., neural nets), I am optimistic that we can achieve cognition, but the roadblocks that evolution was able to surpass may not be so easy to overcome anytime soon (and possibly not at all). K-bots are already reproducing the behaviors of people, so I see no great feat in that achievement.
Bugmaster wrote:
> At no point in my argument have I claimed that the brain is a binary computer, with an ALU and RAM and all that stuff. I think it's likely that the brain is a Turing machine (maybe a nondeterministic one), but even that is not truly relevant to my argument.

From the beginning, I have understood your position as computationalism, which you agreed to, right?:

> Harvey: I have assumed all along that you are supporting computationalism (the view that the brain is a Turing machine).
>
> Bugmaster: Ah good, we're clear then. Yes, this is exactly what I'm supporting. The brain might be a nondeterministic Turing machine, or a Turing machine with a random input (due to some quantum effects), but it's a Turing machine nonetheless. (December 30, 2005)
Bugmaster wrote:
> So, it seems that a simulated neural network is just as good at performing neural-network-like tasks as a real neural network is. If human brains are dynamical systems, and dynamical systems can be solved through computation, this makes perfect sense. It stands to reason, then, that a computationally-based artificial brain would be just as good as a real brain at performing brain-like tasks; i.e., cognition.

Well, there's no question that brains are computing machines (i.e., information-processing machines). However, that's quite a bit different from saying that the brain is a Turing machine. All physical objects in the universe process information. Physicists working with information theory have already equated an increase of entropy with a reduction of information. There are many arguments about limits to computation according to the physical limits of the universe (e.g., irreversibility of natural computation, capacity theorems, measurement sensitivity, etc.).
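For reference, "a simulated neural network performing a neural-network-like task" can be as small as this (a toy Python sketch with hand-picked, not learned, weights): a two-layer threshold network computing XOR, a function famously beyond any single neuron.

```python
# A tiny simulated neural network: two hidden threshold units plus one
# output unit compute XOR, which a single neuron cannot do.

def step(x):
    return 1 if x > 0 else 0    # threshold activation

def xor_net(a, b):
    h1 = step(a + b - 0.5)      # hidden unit: fires if a OR b
    h2 = step(a + b - 1.5)      # hidden unit: fires if a AND b
    return step(h1 - h2 - 0.5)  # output: fires if OR but not AND, i.e., XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```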
Bugmaster wrote:
> Also, the article about dark matter that you linked to talks of extensive computer simulations that have been used in order to describe the behavior of real galaxies -- which, presumably, are dynamical systems. Why is it that we can simulate galaxies but not brains? I, of course, believe that we can successfully model both...

Simulation is a way of modeling a real phenomenon. Our models of galaxies don't actually create gravitational waves. There are aspects of any simulation that are not represented in the model. I don't know if this applies to the brain. If I had to guess, I would say the brain is just another physical object in the world that produces its own physical phenomena. If true, then we might be able to simulate the behavior without being able to produce the physical phenomenon itself. I see no reason to assume that simulation will produce a physical phenomenon (e.g., photosynthesis).
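The distinction shows up even in the simplest gravity simulation (a toy Python sketch; the two-body setup uses made-up units in which G*M = 1): the code reproduces orbital behavior, yet no gravitational attraction is physically produced anywhere in the machine.

```python
import math

# A toy two-body orbit: the simulation reproduces the *behavior* of gravity
# (a closed orbit), but produces no gravitational field itself.
# Units are chosen so that G*M = 1; the central mass sits at the origin.

x, y = 1.0, 0.0          # initial position
vx, vy = 0.0, 1.0        # initial velocity (circular orbit at radius 1)
dt = 0.001

for step in range(int(2 * math.pi / dt)):   # roughly one orbital period
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -x / r3, -y / r3               # Newtonian acceleration with G*M = 1
    vx += ax * dt                           # semi-implicit Euler integration
    vy += ay * dt
    x += vx * dt
    y += vy * dt

print(f"after one period: x={x:.3f}, y={y:.3f} (started at x=1.000, y=0.000)")
```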
Bugmaster wrote:
> Yes, but they define "dark matter" as "that thing we can't detect that makes galaxies spin slower". That's a far cry from actually determining what dark matter is or how it works.

Sure, but I didn't say that, either. Remember the context in which I introduced it?:

harvey1 wrote:
> The feeling of pain is a unique phenomenon that is (so far) a subjective state. If you go to the doctor and say you have a pain in your pinky finger, the doctor can only check for structural or internal damage, but the doctor cannot say that you are not feeling any pain. If the structure and internal condition check out fine, then you would be sent to a psychiatrist, who would probably prescribe pain medication. This makes the job of science all the more difficult, since we would have no means of knowing if pain has been effectively duplicated without a convincing theory. Astrophysicists deal with a similar situation with dark matter. We have no way of knowing if our sun is interacting with small amounts of dark matter, since dark matter does not interact with photons. Without a good theory of dark matter, astrophysicists cannot answer the question of whether our sun interacts with it.

As per my analogy, dark matter is assumed to exist, but, like pain, we cannot know that we have duplicated what dark matter is just by showing a physical model of its behavior. Someone could always construct a "K-bot" model of the universe that demonstrates dark matter behavior, but that doesn't mean they have eliminated dark matter. We would need to know the algorithm used in the "K-bot" to emulate that dark matter behavior. For all we know, the "K-bot" is using an algorithm that contradicts general relativity, but this is not shown in the model. Perhaps there are no black holes in that "K-bot" universe.
harvey1 wrote:
> Sure, at some point in time we might have a good theory of what the brain is like. Perhaps a quantum device will be the right description, who knows?

Bugmaster wrote:
> Why? A brain is not a quantum device, after all -- at least, not in the sense that you mean. It's powered by chemistry. Ultimately, of course, all devices are quantum devices because they're made of quantum particles, but that's a trivial usage of the term.

How do you know that's a trivial use of the term? The brain is a very complex system, and we don't know if quantum chaos plays a role in the classical functions of the brain. This area of science is very far from being understood, because many-body systems are so vastly complex. We can't even simulate liquid water starting from its H2O molecules, much less a structure as complex as the brain. Evolution, with natural selection and vast amounts of time at its disposal, might have found ways to utilize quantum processes via non-linear dynamics. We just don't know, so let's not pretend to have answers that we just don't have.
Bugmaster wrote:
> What do you mean by "powerful"? Any modern computer is more powerful than a Turing machine, because the Turing machine is so woefully inefficient. Similarly, an analog neural network is much more efficient at certain tasks than a digital computer. However, as various people quoted in the article (Christos Papadimitriou, and I think someone else...) have pointed out, this does not necessarily mean that the analog network is categorically different from a digital computer. Neither of the systems is omniscient (as the reporter who wrote the article seems to think); for example, neither of them can represent the number Pi with infinite accuracy (analog systems due to noise, digital systems due to limited memory).

I'm not sure what exactly Siegelmann means by "more powerful", but I assume what is meant is that a non-Turing process can possibly compute functions that are uncomputable for a Turing machine.
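The side point about Pi is easy to verify directly (a quick Python check): a 64-bit float carries only about 16 significant decimal digits, so the machine's "Pi" diverges from the true expansion after that.

```python
import math
from decimal import Decimal

# The exact decimal expansion of the 64-bit float closest to pi:
print(Decimal(math.pi))
# -> 3.14159265358979311599796... (diverges from pi after ~16 digits)

# Pi to 30 digits, for comparison:
print("3.14159265358979323846264338327")
```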
harvey1 wrote:
> I don't know if humans can be uploaded on Turing machines and remain the same. I would have to see the code before I could attribute qualia to them.

Bugmaster wrote:
> I am pretty sure that you agreed that they could, earlier, but I'm too lazy to go look for the post manually, and the search function seems to be broken (out of memory, ironically enough).

(I know, it's very frustrating that the search function is in a sad state of disrepair...) Well, I took the time. Just to clarify the situation:

> Bugmaster: Let's say that we take Claire Danes's brain, and upload it into a computer. We now have a virtual Claire Danes, that would correspond with people in the same way that the biological Claire Danes would. When C.D. is talking to you on AIM, you'd have no way of knowing which version was actually chatting with you. Are both C.D.s human? If not, why not, and -- most importantly -- how would you tell, assuming that AIM was your only way of communicating with them?
>
> Harvey: If I knew that Claire was uploadable (which she is -- just in the current cyber sense), then I certainly would have to look to the government to make sure that I'm not chatting with counterfeit uploadings of individuals. (Dec. 2, 2005)
Bugmaster wrote:
> In any case, if you deny that uploading a human preserves his consciousness, then you're making a much stronger statement than previously: you're not just saying that we cannot manually create consciousness, you're saying that we can't create a prosthetic brain, either. This statement is especially strong in your dualistic worldview, because it implies that only biological brains can house qualia. I'm still not convinced that this is so.

BM, all I'm saying is that if strong AI is feasible, then it is an easy matter to show the code that makes qualia (consciousness, intentionality, etc.) occur. There's a cause for every effect, and there's an algorithm for every function. If I'm to believe that strong AI has been achieved, then show me the algorithm.

In the case of a prosthetic brain intelligence (i.e., non-strong AI), we would also have to show how qualia (consciousness) were achieved from the engineering drawings. However, in the case of non-Turing dynamic systems (i.e., non-strong AI), it is probably not possible to see how a complex property emerges, given the complexity of the dynamical emergent phenomena. Water, for example, has a very complex set of properties that one cannot easily foresee from hydrogen and oxygen. In fact, water appears to be rather amazing, given what we know about other liquids. In order to know that we have duplicated pain in a prosthetic brain, we would have to have a good theory of pain in terms of a dynamical system (of our brains) that gave us great predictive power that we didn't have before. If we had such a theory, with predictive and explanatory power with respect to our own brains, then this would be sufficient grounds for believing that we had duplicated the physical process of pain, consciousness, etc. (along with the AI being saying it experienced pain, etc., without any brain patterns showing that it is lying).

So, the answer is no: I am not making the strong claim that you are accusing me of making.
Bugmaster wrote:
> Again, you're holding AIs to a much stronger standard than humans. Can you tell me the precise qualia-powered mechanism by which you feel pain? I personally can't even tell you the mechanism I use for walking, and that's a much simpler process. So, why demand of AIs what you can't deliver yourself?

Because I have good reason to believe that there is such an "algorithm" residing in my head that produces pain (as you agreed you have for yourself as well), and I don't have any reason to believe that this is an algorithm executed by a Turing machine. So, I have no reason to believe that computers have duplicated what nature has done, any more than a K-bot has duplicated pain by imitating human facial expressions of pain. I need evidence that this feat has been duplicated in bots, when clearly there is a great deal of opportunity for the bot maker to commit a forgery. In the case of evolution, it is not reasonable for me to think that I'm the only person to whom evolution succeeded in passing down qualia as a property. All of my understanding is based on the view that I am part of a species that has similar qualia properties as everyone else on the planet.
Bugmaster wrote:
> I should also point out that, if someone does build a Strong AI, he'll of course have access to its full source code -- so he'd be able to fulfill your request.

Sure, but you seem to be talking with a bit of faith in your response. It's a faith that I say is not justified, resting as it does on the fad of using the latest gadget as an example of what some aspect of nature is like.
Bugmaster wrote:
> I'm saying that it may be possible to produce consciousness without duplicating the exact process that goes on in the human brain, just as it's possible to produce flight without exactly reproducing a bird's wing.

Sure, but without a theory that shows how we know we are duplicating the pain processes in our brain, how do you know that you are not just making K-bots?
harvey1 wrote:
> It depends. I am not justified in believing the sun has been replaced, because I have no reason to believe this is a technical feat that occurs in practice. However, if I know that strong AI robots exist (i.e., assuming that I haven't been shown that qualia can be algorithmically derived), then I do have to worry about robots counterfeiting themselves as humans, right?

Bugmaster wrote:
> I would say: wrong. Let's say we do have a race of uber-aliens who fly around replacing stars with fusion reactors -- and let's say they advertise this. So, there's a chance that they've replaced our Sun with a fusion reactor that has the exact same spectral output, mass, magnetic field, corona, lifespan, etc. that our Sun used to have. There's no physical test you can run on it that would tell you whether it's our old Sun or the Alien Sun-o-Matic (tm). So... why would you care?

It depends. Is there some property of the sun that I depend on which the aliens are not giving me (e.g., sun spots)? If so, then I do care. The whole reason that I'm objecting to strong AI gadgets parading themselves off as human is that they have not shown that they are not just K-bots. If the analogy is the same as with the sun, then I assume the replaced sun is a "K-bot", which means that it lacks something that the earth needs from the sun. We might have to worry, and ask for our sun back. If we become desperate (e.g., 4-5 billion years from now, when our sun has reached its end of life), then we'll gladly take the gadget, since we're desperate.
Bugmaster wrote:
> While I do absolutely accept that humans have internal states, I deny that this fact is important, because, in real life, you interact with humans based on their behavior, not based on telepathy. The internal states are a mere implementation detail of the algorithm.

Well, you are assuming that internal states are epiphenomenal. But why should we think they are epiphenomenal? (This, btw, is the original position that I thought you were stating, before I understood your position as behaviorism.)
Bugmaster wrote:
> No no, you misunderstand (I think). Humans do indeed experience colors, tastes, and other things differently. For example, a person might feel all warm and fuzzy when he hears "our song", whereas a different person would merely hear an ordinary Disco jingle. On a more physical level, different people have a slightly different set of taste buds on their tongue, which makes it possible for some people to taste things that others cannot. Speaking of taste -- I personally get really nauseous in the mere presence of yogurt, whereas other people find it quite tasty. So, there's no shared feeling of color (or taste, or pain, or whatever) that all of us are experiencing. We all have individual experiences, which are only loosely similar.

How does that negate the importance of some kind of algorithm that is responsible for bringing about those varied qualia experiences? All that seems to suggest is that the algorithm is different inside each person, which I would agree with. How do you justify the stronger position, which is that the algorithm(s) themselves are not so mysterious for a Turing machine process?
Post #90, by harvey1
Bugmaster wrote:
> Heh, looks like time was on my side in this debate (at least, a little bit): Phase Change in Fluids Finally Simulated After Decades of Effort. I'll post an actual reply to your comments later...

It's fantastic news. It really is. However, I think this just reinforces my agnostic stance on the topic of strong AI (I'm a strong theist, for anyone reading), since it is not known how much progress can be expected from such fantastic news.
Btw, I think it is important to point out again that this is a simulation of dynamical processes, and it should in no way be confused with the complex phenomena it models. We can already model many phase transitions (e.g., Ising models), but this seems to add meat to the bones of the simulation, not meat to reality.