Is it possible to build a sapient machine ?


Is it possible to build a sapient machine ?

Post #1

Post by Bugmaster »

Topic

Many threads regarding dualism, theism, and philosophy in general often run into this topic. Is it even hypothetically possible to build a computer that will be sapient -- i.e., an artificial being that will think, feel, and socialize as humans do ? Well, here's our chance to resolve this debate once and for all ! Smoke 'em if you got 'em, people, this post is gonna be a long one.

I claim that creating Strong AI (which is another name for a sapient computer) is possible. We may not achieve this today, or tomorrow, but it's going to happen sooner rather than later.

First, let me go over some of the arguments in favor of my position.

Pro: The Turing Test

Alan Turing, the father of modern computing (well, one of them), proposed this test in ages long past, when computers as we know them today did not yet exist. So, let me re-cast his argument in modern terms.

Turing's argument is a thought experiment, involving a test. There are three participants in the test: subject A, subject B, and the examiner E. A and B are chatting on AIM, or posting on this forum, or text-messaging each other on the phone, or engaging in some other form of textual communication. E is watching their conversations, but he doesn't get to talk. E knows that one of the subjects -- either A or B -- is a bona-fide human being, and the other one is a computer, but he doesn't know which one is which. E's job is to determine which of the subjects is a computer, based on their chat logs. Of course, in a real scientific setting, we'd have a large population of test subjects and examiners, not just three beings, but you get the idea.
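
To make the setup concrete, here's a toy harness in Python -- my own hypothetical sketch, not anything Turing actually specified -- showing how a single trial of the test could be run and scored:

```python
import random

def turing_test_trial(human_reply, machine_reply, examiner_guess, questions):
    """One trial of the imitation game (a toy sketch, not Turing's spec)."""
    # Randomly assign the human and the machine to the labels A and B,
    # so the examiner can't rely on ordering.
    pair = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(pair)
    assignment = dict(zip("AB", pair))  # e.g. {"A": ("machine", fn), ...}

    # E only ever sees the transcripts, never the subjects themselves.
    transcripts = {label: [(q, fn(q)) for q in questions]
                   for label, (_, fn) in assignment.items()}

    guess = examiner_guess(transcripts)  # E's verdict: "A" or "B"
    actual = next(label for label, (kind, _) in assignment.items()
                  if kind == "machine")
    return guess == actual  # did E catch the machine this time?

# Run many trials with many examiners; if E's hit rate is no better than
# chance (about 50%), the machine has passed the test.
```

The human_reply, machine_reply and examiner_guess functions are stand-ins, of course -- the interesting part is that E's verdict can only depend on the transcripts.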

Turing's claim is that if E cannot reliably determine which being -- A or B -- is human, then they both are. Let me say this again: if E can't tell which of the subjects is a computer, then they're both human, with all rights and privileges and obligations that humanity entails.

This seems like a pretty wild claim at first, but consider: how do you know that I, Bugmaster, am human ? And how do I know that you're human, as well ? All I know about you is the content of your posts; you could be a robot, or a fish, it doesn't really matter. As long as you act human, people will treat you as such (unless, of course, they're jerks who treat everyone like garbage, but that's another story). You might say, "well, you know I'm human because today's computers aren't advanced enough to post intelligently on the forums", but that doesn't prove much, since our technology is advancing rapidly all the time (and we're talking about the future, anyway).

So, if you're going to deny one of Turing's subjects his humanity, then you should be prepared to deny this humanity to everyone, which would be absurd. Therefore, a computer that acts human should be treated as such.

Pro: The Reverse Turing Test

I don't actually know the proper name for this argument, but it's sort of the opposite of the first one, hence the name.

Let's say that tomorrow, as you're crossing the street to get your morning coffee, you get hit by a bus. Your wounds are not too severe, but your pinky is shattered. Not to worry, though -- an experimental procedure is available, and your pinky is replaced with a robotic equivalent. It looks, feels, and acts just like your pinky, but it's actually made of advanced polymers.

Are you any less human than you were before the treatment ?

Let's say that, after getting your pinky replaced, you get hit by a bus again, and lose your arm... which gets replaced by a robo-arm. Are you still human ? What if you get hit by a bus again, and your left eye gets replaced by a robotic camera -- are you less human now ? What if you get a brain tumor, and part of your brain gets replaced ? And what if your tumor is inoperable, and the doctors (the doctors of the future, of course) are forced to replace your entire brain, as well as the rest of your organs ? Are you human ? If so, then how are you different from an artificial being that was built out of the same robotic components that your entire body now consists of ?

Note that this isn't just idle speculation. People today already have pacemakers, glasses, prosthetic limbs, and yes, even chips implanted in their brains to prevent epileptic seizures (and soon, hopefully, Alzheimer's). Should we treat these people as less human than their all-natural peers ? I personally don't think so.

Ok, I know that many of you are itching to point out the flaws in these arguments, so let me go over some common objections.

(to be continued below)


Post #141

Post by harvey1 »

Bugmaster wrote:If that's your only point, then why all the complexity? The SWAT guy could just post "ow ow it hurts me" on some Internet forum, as per my opening statement. We don't even need the robo-head.
Your example of a post on the internet is faulty, since from that example we cannot establish that something can show pain yet never be able to experience it -- even in principle. My example was needed to show this, and that is how we can see that a Turing test fails to link an observed behavior of pain with the actual feeling of pain.
Bugmaster wrote:
But, wait, you said before that you should treat the feeling of pain as real for bots as well. Do you agree now that the feeling of pain must be substantiated by other means than just behavior for animatronic heads? ...
Bugmaster wrote:Are you saying that a human being cannot fool you into thinking he or she is in pain? That can't be right.
No.
Bingo. When you see a human being acting as though he was in pain, you assume that he is, most likely, in pain. But in fact, that human could be a trained actor having a grand joke at your expense -- you have no way of knowing.
You're just adding to the reasons why we should reject the Turing test as a way to establish whether a creature is in pain. We have reason to be skeptical of a human being in pain because they could be lying. That's always been true, and it's the reason why many people cannot get pain medication from doctors. The doctors think they are lying or exaggerating their actual pain. So, strike one against the Turing Test. Well, now comes new information: the animatronic head can never feel pain even though it behaves as if in pain. That's strike two against the Turing Test. Not only must we be concerned that they are lying, now we must be certain that they are human. If there are a lot of bots around, then we had better look for a more intrusive test. If no such intrusive test is legal, then we had better talk to our lawmakers to make and enforce laws that prevent bots from passing themselves off as humans. (Of course, this is what I've said all along.) If the feeling of pain is eventually understood in better theoretical terms, then that's strike three. Not only must we be leery of behavior, we must discount it altogether and seek an fMRI (etc.) test that verifies that the structures and internal processing needed to produce pain exist. Bots that don't pass that test are not considered real creatures.
Bugmaster wrote:All I'm saying that, should you see an AI acting exactly as humans act when they're in pain, you still have no way of knowing what the AI feels. All you have to go on is its behavior.
The three strikes above show how behavior takes less and less of a role in determining whether a creature is in pain. If we ever get to a point where animatronic bots that are identical to humans are common, then we will seek other means to verify whether a creature is human. We will pay less and less attention to behavior (which is too bad, because behavior is right now the best means to determine if a creature is in pain).
Bugmaster wrote:In both cases -- the human actor fooling you, and the AI fooling you -- you'd be logically justified in concluding that both entities do, in fact, feel pain. I realize that you really, really don't want to acknowledge this, which is why I've offered several ways out of this situation:
BM, you just admitted that animatronic heads controlled by SWAT guys in a van don't feel pain. You are not just being fooled, you are being conned. And as a con, it is to be considered illegal and unethical. A human might lie about their pain, and that might be their business and not yours to intrude upon, but when someone is conning you there are more severe issues to address. Therefore, the Turing test is not an appropriate means to stop the con, and thus other means to avoid the con must be taken (e.g., imprisonment of makers of such bots and of people who use bots to carry off such charades, and even intrusive measures, e.g., fMRIs, applied to the person in pain to make sure that a con is not happening).
Bugmaster wrote:* Devise a "consciousness detector" that can reliably tell you which entity is conscious and which isn't (thus, essentially, offering supporting evidence for dualism). You admitted you cannot do this.
Of course, fMRIs would currently do the trick to see if the creature is capable of experiencing pain, but whatever test verifies that you aren't dealing with an animatronic device would suffice.
Bugmaster wrote:Prove that it is logically impossible for anything but a human to feel pain. You have denied this explicitly (and we agree on this point, at least).
Right, but it is important to seek the cause of the feeling of pain, and that means as more is understood the less importance that behavior will account for our recognition of pain.
Bugmaster wrote:Prove that it is logically impossible for anything but the process of evolution to produce machines (biological or otherwise) that can feel pain. You haven't offered any evidence for this proposition, which sounds preposterous unless you accept dualism, which brings me to the next prospect:
Well, if we don't know what physically causes pain, it seems pretty bizarre that we would assume that it can be emulated in software so that it can actually reproduce the physical feeling. First we have to know the theoretical causes before we could ever assume that software running on a machine can reproduce it.
Bugmaster wrote:Prove that it is logically impossible for materialism to be true. So far, I'm not convinced.
I don't think that's foreseeable.
Bugmaster wrote:State that your belief in what amounts to an immaterial soul is based on faith (as per my opening argument). There's nothing wrong with this position, logically speaking, but it precludes any possibility of rational debate with the unbelievers.
As you can see from my answers, belief in an immaterial soul has nothing to do with these scientific issues. I'm perfectly willing to wait for science to answer these questions without in the meantime giving animatronic heads full citizenship and voting rights. I also see no reason to increase the pay of computer scientists when they emulate cognitive behavior. Until they can produce an algorithm that shows why the feeling of pain emerges, they're not getting a dime more.
Bugmaster wrote:I honestly can't think of anything else you can do to get out of the absurdities that I've highlighted in your argument. I'm open to suggestions, of course.
As you can plainly see from my answers, the absurdities are with your own responses. You backed off your assertion that bots that display painful responses must feel pain, and that is an absurdity that I wish hadn't taken so many posts to prove to you. It was a matter of patience on my part to take you down that road. Just wish I could make you see things a little faster.
Bugmaster wrote:Sorry, this does not follow. Human actors can act as though they are in pain without being in pain -- and yet, I think you'd agree that human actors are conscious. In fact, as I've said earlier, an actor would have to be conscious in order to properly simulate the feeling of pain.
Well, let's finish this discussion on pain before we jump to the consciousness argument. Agreement on the pain discussion is important for showing why your argument for consciousness is also invalid.


Post #142

Post by Bugmaster »

harvey1 wrote:Your example of a post on the internet is faulty since we cannot establish from that example that an animatronic head can show pain but never be able to experience pain--even in principle.
How is posting "ow ow I'm in pain" on a forum different, in principle, from contracting certain muscles, or moving some servo-motors ?
harvey1 wrote: We have reason to be skeptical of a human being in pain because they could be lying.
That's true. I've never claimed that the Turing Test is 100% accurate. The best result you can get from the Turing Test is, "well, as far as I can tell, this entity is very likely to be conscious", and the best you can tell from observing a human being's (or any other entity's) behavior is, "well, as far as I can tell, this being is very likely to be in pain". I have repeatedly challenged you to provide me with some alternative means of detecting pain, and you have admitted that you can't do it.
harvey1 wrote: The doctors think they are lying or exaggerating their actual pain. So, strike one against the Turing Test.
Remember that, in the original test, the AI reproduces human behavior perfectly. A doctor would not be able to catch a faking patient if the patient was able to perfectly reproduce the condition of being in pain; therefore, this is not a "strike" against the Turing Test.
harvey1 wrote: Well, now comes new information: the animatronic head can never feel pain even though it behaves as if in pain.
Again, you are confusing the ability to feel pain, with the actual condition of being in pain. Human actors who fake pain do nevertheless have the ability to feel it.
harvey1 wrote: Not only must we be concerned that they are lying, now we must be certain that they are human.
Is it possible for non-human entities to lie ? When my speedometer is showing a wrong speed, it's not lying, it's just faulty.
harvey1 wrote: If the feeling of pain is eventually understood in better theoretical terms, then that's strike three.
I absolutely agree with you that, when the Ultimate Theory of Pain (tm) comes along, and if it shows that machines cannot feel pain, I'll retract my argument. However, so far you haven't shown me this theory, and you haven't even convinced me that it's possible to develop such a theory. Foul ball on your part (or whatever the appropriate baseball metaphor for mis-throwing the ball is).
harvey1 wrote: Not only must we be leery of behavior, we must discount it altogether and seek an fMRI (etc.) test that verifies that the structures and internal processing needed to produce pain exist.
According to your own arguments, no such fMRI test can actually test for the presence of "qualia". It can only show what chemical changes occur in the patient's brain, not what the patient's "Self" is feeling.

By the way, I find it curious that you are so very, very afraid of inadvertently forming friendships with AIs -- assuming that AIs can be crafty enough to actually fool you into befriending them. Hence, your repeated calls for invasive probes and legislation. You are willing to sacrifice a major portion of your personal liberty (i.e., you're willing to submit to repeated invasive probes) just to ascertain which of your friends are biologically human. Why all the fear ? Are you implying that there's more to friendship (or to debate, or chit-chat, or any other human communication) than mere exchange of words and ideas ? If so, what is it ?
harvey1 wrote: BM, you just admitted that animatronic heads controlled by SWAT guys in a van don't feel pain. You are not just being fooled, you are being conned.
Again, you're missing my point -- two of my points, actually. Firstly, in order to con me properly, whatever powers the head (in this case, the SWAT agent) would have to be able to act in a human fashion at least as well as an average human can (in this case, it's easy, because our SWAT agent is human). Secondly, I never claimed that the Turing Test is perfect. With the Turing Test, we can't be 100% sure that the target entity is human; we can only be as sure of it as we can be sure that any random person living on the Earth today is human. That's pretty damn sure, but it's still not 100%.
harvey1 wrote: A human might lie about their pain, and that might be their business and not yours to intrude upon, but when someone is conning you there are more severe issues to address.
That's just odd. When a human lies about their pain, it's ok, but when a robot lies about their pain, it's an evil con ? Why ? The robot isn't charging you for anything.
harvey1 wrote: Of course, fMRIs would currently do the trick to see if the creature is capable of experiencing pain, but whatever test verifies that you aren't dealing with an animatronic device would suffice.
Fine, let's say that we're able to run these AIs on biological components instead of electronic ones. Now you can't use fMRI anymore... And you still haven't explained to me what fMRI has to do with detecting the qualia that your "Self" is feeling.
harvey1 wrote: Well, if we don't know what physically causes pain, it seems pretty bizarre that we would assume that it can be emulated in software...
Oh, but we only don't know what causes pain in your dualistic worldview. In my materialistic worldview, we have a pretty decent idea (neurobiology, behavioral studies, etc.)
harvey1 wrote: As you can see from my answers, belief in an immaterial soul has nothing to do with these scientific issues.
As I've pointed out before, your notion of consciousness (or pain, whatever) -- as this completely undetectable, unfalsifiable entity -- is completely unscientific. As I pointed out in my "Caveat Emptor" thread, you don't even have a way of knowing whether Strong AI exists or not. You cannot detect the presence or absence of pain (that is, of your notion of pain) in any individual. You appeal to the magic Theory of Pain that will solve these problems, but, since your concept of pain is unfalsifiable, this theory cannot possibly be scientific.
harvey1 wrote: You backed off your assertion that bots that display painful responses must feel pain...
That was never my assertion. I merely pointed out that we're logically justified in believing that bots that display a full range of human responses (including those we associate with pain) are human (and that they're experiencing pain). Note that, in your SWAT example, our conclusion is correct (the SWAT guy is human), despite the fact that the "bot" is not, in fact, a bot. Our conclusion may still be wrong in some cases, of course, but, without being omniscient, we have no intellectually honest choice but to accept it.

Also, for some reason, you also insist on treating pain and consciousness as though they were separate issues. I don't see why this is necessarily so.


Post #143

Post by harvey1 »

Bugmaster wrote:
harvey1 wrote: You backed off your assertion that bots that display painful responses must feel pain...
That was never my assertion. I merely pointed out that we're logically justified in believing that bots that display a full range of human responses (including those we associate with pain) are human (and that they're experiencing pain). Note that, in your SWAT example, our conclusion is correct (the SWAT guy is human), despite the fact that the "bot" is not, in fact, a bot. Our conclusion may still be wrong in some cases, of course, but, without being omniscient, we have no intellectually honest choice but to accept it.
Bugmaster wrote:I think that a machine that could duplicate all of my motions and actions (in the non-trivial sense, we're not talking about videotapes here) would need to be executing the same algorithm that I am. How else is it going to replicate all my actions?
Okay, I'm confused by the apparent contradiction here. How can you say that it was never your assertion that bots that display painful responses must feel pain, and yet you asserted that, "I think a machine that could duplicate all of my motions and actions would need to be executing the same [pain] algorithm"? How is that not a contradiction between these two statements? Do you still think that a machine must execute the same internal algorithms as you in order to duplicate your motions and actions?


Post #144

Post by Bugmaster »

harvey1 wrote:Do you still think that a machine must execute the same internal algorithms as you in order to duplicate your motions and actions?
I claim that a machine that duplicates all of my motions and actions perfectly, would need to be executing the same algorithm as I am.

However, no two human beings express or experience pain the same way; therefore, if you merely want a machine that experiences pain (or mimics the experience of pain, if you prefer) in some reasonably human fashion, then any old algorithm would do, as long as it produces the desired result.

I don't see a contradiction here.


Post #145

Post by harvey1 »

Bugmaster wrote:
harvey1 wrote:Do you still think that a machine must execute the same internal algorithms as you in order to duplicate your motions and actions?
I claim that a machine that duplicates all of my motions and actions perfectly, would need to be executing the same algorithm as I am.
This is why I want to focus on this issue, because I don't agree. If the animatronic head is controlled by the guy in the SWAT van, and the guy in the SWAT van pushes the "4878" button, then the animatronic head will exactly duplicate your outward indications of pain. However, at no point is the animatronic head executing an algorithm which gives the animatronic head the feeling of pain. So, how can it be executing the same algorithm as you if you feel pain and the animatronic head doesn't?


Post #146

Post by QED »

Duplication of the "outward signs" is obviously not what this is all about. What we are looking for is something that can be inwardly duplicated. This duplication normally takes place through reproduction. If reproduction has passed the ability to perceive pain down through all the generations of life, then that ability must once have been something very humble. Can we not then describe how it might have started out as a neural state created by damaged nerves?

Despite Harvey's objection, the state might be just like a flag after all -- one that the organism learns to avoid. So what is the motive for avoidance? Well, sensations are constructed as neural states and can be ascribed internal values. These values act as attractors or repellers, and just as we can "drum up" a tune in our heads on demand, so, I suggest, we can drum up a discord -- a state of pain. It is an essential state, and it must modify our behaviour. But I suggest that's all it takes for the whole thing to work. As such it could readily be implemented in robotics.
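
To show how little machinery this needs, here is a toy sketch in Python -- purely my own hypothetical illustration, not a model of any real nervous system -- of a pain flag acting as a repeller on behaviour:

```python
import random

class TinyAgent:
    """Pain as an internal flag whose negative value repels the behaviour
    that produced it (a hypothetical toy, nothing more)."""

    def __init__(self, actions):
        # Every action starts out neutral; learned values act as attractors
        # (positive) or repellers (negative) when the agent chooses what to do.
        self.values = {a: 0.0 for a in actions}
        self.pain = 0.0  # the internal "discord" state

    def sense_damage(self, amount):
        # Damage signals from the "nerves" set the pain flag.
        self.pain = amount

    def act(self):
        # Pick among the highest-valued actions, breaking ties at random.
        best = max(self.values.values())
        return random.choice([a for a, v in self.values.items() if v == best])

    def learn(self, action):
        # The pain flag repels: the action that preceded it loses value and
        # tends to be avoided from then on; the flag then decays to zero.
        self.values[action] -= self.pain
        self.pain = 0.0

# After a few trials the agent stops touching the hot plate of its own accord.
agent = TinyAgent(["touch_hot_plate", "stay_still"])
for _ in range(10):
    action = agent.act()
    if action == "touch_hot_plate":
        agent.sense_damage(1.0)
    agent.learn(action)
print(agent.values)  # "touch_hot_plate" almost certainly carries a negative value by now
```

Harvey will rightly say this is nothing like human pain -- my only point is that a state which must be avoided, and which modifies behaviour, takes very little to implement.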

But then Harvey would still be shaking his head and saying the robot isn't feeling anything. Well, that's a semantic quibble as far as I can see. Apart from the infinite subtleties of human pain, I think it's just as valid to say that we don't feel anything either. Alternatively, and with just as much truth, we might also say that we do feel things, and that to an infinitely lesser degree so do robots.

I can still sense some incredulity about this, as I have described it all before with no success. First, I would repeat that pain has evolved, and that it (or something along the same effective lines) would have been essential from pretty much day one of multicellular life. So this tells me that it needn't be algorithmically sophisticated, and that we can expect it to be implementable in very simple neural setups. Secondly, to try and get over the incredulity that something so apparently tangible and significant as pain might simply be reduced to flags and values, I mentioned our ability to create other mental states on demand. That sound of scraping chalk on the blackboard, which somehow seems to come in through our teeth, makes me think that pain bears a significant relation to auditory stimulus, and may point to the internal source of the feeling Harvey is so earnestly seeking.


Post #147

Post by Bugmaster »

harvey1 wrote:This is why I want to focus on this issue, because I don't agree. If the animatronic head is controlled by the guy in the SWAT van, and the guy in the SWAT van pushes the "4878" button, then the animatronic head will exactly duplicate your outward indications of pain.
Right, and so would a videotape of me. However, you are once again missing the two important points:

* Interactivity
* Identical behavior

If you put me side by side with the Chinese Room (for that is what your SWAT head really is), and subjected us both to the same series of arbitrary stimuli, and we reacted in identical ways -- including pain response -- then I'd argue that the Chinese Room executes the exact same algorithm as me.

At this point, you can look inside the Chinese Room, point at the various components, and exclaim, "Look ! This gerbil on the treadmill doesn't experience pain !", or "Look ! This series of gears and pulleys doesn't understand Chinese !", but you'd be missing the point: it's the entire room that's conscious (or, in this case, the entire room is Bugmaster 2.0), not any of its components, regardless of whether they're made of gerbils or humans or electronics.
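
Here's a deliberately tiny analogy, in Python -- my own hypothetical toy, and only an analogy, so it proves nothing about consciousness by itself -- of how a "room" can do something that none of its parts does:

```python
# The rule book: purely mechanical symbol rewrites, no "understanding" anywhere.
RULES = {
    ("0", "0"): "0", ("0", "1"): "1",
    ("1", "0"): "1", ("1", "1"): "0",
}

def clerk(a, b):
    # The man in the room: blindly looks up one rewrite rule at a time.
    return RULES[(a, b)]

def room(x_bits, y_bits):
    # The room as a whole computes XOR over two bit strings, even though
    # neither the clerk nor the rule book "knows" what XOR is.
    return "".join(clerk(a, b) for a, b in zip(x_bits, y_bits))

print(room("1010", "0110"))  # -> 1100
```

Point at the clerk or at the rule book and you'll find no XOR anywhere; it's a property of the assembled system. That's the sense in which I say it's the room, not the gerbil, that matters.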

Again: I claim that the Chinese Room could not "exactly duplicate my outward indications of pain" unless it was running a full copy of me, because my outward indications of pain vary greatly, and are ultimately tied in to the rest of my behavior. Sometimes, I talk about my pain (when the doctor asks me about it, for example), sometimes I just wince, sometimes I scream out profanities, sometimes I mutter profanities under my breath, sometimes I get violent on the human who's causing me pain, etc. etc. That's consciousness for you.


Post #148

Post by harvey1 »

Bugmaster wrote:
harvey1 wrote:This is why I want to focus on this issue, because I don't agree. If the animatronic head is controlled by the guy in the SWAT van, and the guy in the SWAT van pushes the "4878" button, then the animatronic head will exactly duplicate your outward indications of pain.
the entire room that's conscious (or, in this case, the entire room is Bugmaster 2.0) not any of its components, regardless of whether they're made of gerbils or humans or electronics... I claim that the Chinese Room could not "exactly duplicate my outward indications of pain" unless it was running a full copy of me, because my outward indications of pain vary greatly, and are ultimately tied in to the rest of my behavior. Sometimes, I talk about my pain (when the doctor asks me about it, for example), sometimes I just wince, sometimes I scream out profanities, sometimes I mutter profanities under my breath, sometimes I get violent on the human who's causing me pain, etc. etc. That's consciousness for you.
What is to prevent the SWAT guy from having 5,000 pain buttons, with each button controlling a known reaction that you have given in the past to pain? The SWAT guy even becomes very familiar with how you react in any given situation, so he knows which buttons to press so that the animatronic head appears as if it is really you in pain. It perfectly duplicates your reactions to pain. Are you saying that the animatronic head feels pain, or that the whole system feels pain? That seems absurd. Nothing feels pain, since the animatronic head is controlled by miniature gears and such.


Post #149

Post by Bugmaster »

harvey1 wrote:What is to prevent the SWAT guy from having 5,000 pain buttons, with each button controlling a known reaction that you have given in the past to pain?
Ah, this sounds like Searle's initial version of his Chinese Room argument, where he had a fixed table of responses to pick from.

The problem is, this isn't how humans work, and it's not even how simple computer programs work.

Humans, and most computer programs, are stateful. Their responses change over time. This means that a list of 5000 pain buttons will not suffice.
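
To make the difference concrete, here's a toy sketch in Python (the stimuli and the canned responses are made up, of course) contrasting a fixed button box with a stateful responder:

```python
# The SWAT-van approach: a fixed table, so the same stimulus always gets
# the same canned response, no matter what happened before.
FIXED_TABLE = {
    "pin_prick": "Ow!",
    "stubbed_toe": "%$#@!",
}

def button_box(stimulus):
    return FIXED_TABLE.get(stimulus, "...")

class StatefulResponder:
    """A responder whose reply depends on its history, not just the stimulus."""

    def __init__(self):
        self.history = []  # everything that has happened so far

    def respond(self, stimulus):
        # Prick me twice and the reaction escalates -- state matters.
        self.history.append(stimulus)
        times = self.history.count(stimulus)
        if stimulus == "pin_prick":
            return "Ow!" if times == 1 else f"Stop it! That's {times} times now."
        return "..."

print(button_box("pin_prick"), button_box("pin_prick"))
# -> Ow! Ow!
me = StatefulResponder()
print(me.respond("pin_prick"), me.respond("pin_prick"))
# -> Ow! Stop it! That's 2 times now.
```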

Let's go even further, though. Let's say that the SWAT guy has an infinitely long table of all possible responses to stimuli. In this case, he would still need an algorithm to determine which response to pick when, wouldn't he ?
harvey1 wrote: Are you saying that the animatronic head feels pain, or that the whole system feels pain? That seems absurd. Nothing feels pain, since the animatronic head is controlled by miniature gears and such.
Now it sounds like you're saying, "we know exactly how the Chinese Room works, therefore it can't be conscious, because consciousness is mysterious". I've already covered this in my opening statement.


Post #150

Post by harvey1 »

Bugmaster wrote:Their responses change over time. This means that a list of 5000 pain buttons will not suffice.
Why, do you think anyone you know would look at you oddly if you only had 5,000 expressions of pain? I think that if you only had 100 expressions of pain, no one would think anything odd.
Bugmaster wrote:In this case, he would still need an algorithm to determine which response to pick when, wouldn't he?
The SWAT guy knows you; that's why he picks certain responses for the animatronic head when the situation is apt. For example, if someone sticks the head with a pin, the SWAT guy presses button "243", because he knows that this matches the expression you gave when you were once stuck with a pin.
Bugmaster wrote:Now it sounds like you're saying, "we know exactly how the Chinese Room works, therefore it can't be conscious, because consciousness is mysterious". I've already covered this in my opening statement.
No. I'm not talking about a Chinese room, you are. I'm saying the guy in the SWAT van is pressing buttons to show different expressions of pain on the animatronic head. No one can tell the difference from a live human. You are saying the animatronic head is in pain, and that seems to contradict what you said earlier when you said it was not in pain. Which is it: is the animatronic head in pain or not? If the "system" is in pain, then what is causing the feeling of pain in the system?
