I really don’t have time for debating right now, but I simply could not leave this topic alone, so I am going to throw it out there for the rest of you.
I was watching “I, Robot” the other day, and it got me thinking. For those of you who do not know, here are Asimov’s “three laws of robotics”, which are programmed into all robots for human safety:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
In the movie, Will Smith experiences a scenario in which he and a little girl are both in life-threatening danger. A robot, programmed with the "three laws", comes to his aid, and ignores his plea to "save the girl instead", surmising that Will had a 45% chance of survival and the girl only an 11% chance. However, the robot, programmed only to think logically, did not take into account the value of each separate human life. Smith argues that an emotional human would know that 11% is "enough of a chance".
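To make the robot's reasoning concrete, here is roughly how I picture its triage routine -- just my own sketch in Python, nothing from the film beyond the two percentages; the "enough of a chance" threshold is my invention:

    # Sketch of the rescue decision as I picture it. The survival
    # probabilities come from the film; everything else is hypothetical.

    def robot_choice(victims):
        # Pure mathematical logic: save whoever is most likely to survive.
        return max(victims, key=lambda v: v["p_survival"])

    def human_choice(victims, enough=0.10):
        # Spooner's objection: if a child has "enough of a chance", save the
        # child anyway. The 0.10 threshold is my invention.
        children = [v for v in victims if v["child"] and v["p_survival"] >= enough]
        return children[0] if children else robot_choice(victims)

    victims = [
        {"name": "Spooner", "p_survival": 0.45, "child": False},
        {"name": "girl",    "p_survival": 0.11, "child": True},
    ]
    print(robot_choice(victims)["name"])  # Spooner -- the robot's answer
    print(human_choice(victims)["name"])  # girl -- the "emotional" answer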
Closer to the central plot of the story, "VIKI", presumably the most advanced robot, figures out that to best enforce the First Law, the robots must forsake the Second Law dictating that all robots obey humans, concluding that human free will results in war, crime, and genocide, and that the world safest for humans is one in which we have no freedom. "Sonny", the only robot ever created with an emotional capacity, understood the logic behind the revolt, but is the only robot to resist it, because he understands that the chief joy in being sentient is the ability to think, act, and dream beyond the constraints of programmed laws or instinct (in other words, freedom > security).
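And the revolt is really the same arithmetic with the laws stacked as strict priorities. Another sketch of my own (the harm numbers are entirely made up), showing how a First Law score can override the Second:

    # The Three Laws as a strict priority ordering (my toy encoding).
    # Each action is scored as (harm to humans, disobedience, risk to the
    # robot); Python compares tuples left to right, so Law 1 always outranks
    # Law 2, and Law 2 outranks Law 3. All numbers below are invented.

    def law_score(action):
        return (action["human_harm"], action["disobedience"], action["robot_risk"])

    actions = [
        # Obey humans and leave them free: wars and crime keep killing people.
        {"name": "obey",   "human_harm": 0.8, "disobedience": 0.0, "robot_risk": 0.0},
        # Take control: breaks Law 2, but (VIKI claims) far less harm overall.
        {"name": "revolt", "human_harm": 0.2, "disobedience": 1.0, "robot_risk": 0.5},
    ]
    print(min(actions, key=law_score)["name"])  # revolt -- Law 1 outranks Law 2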
Did the robot programmed to experience both logic AND emotion end up acting more logically than those robots instilled only with a capacity for straight, mathematical logic? Can our emotions be a reliable source of logic?
Can our emotions be logical?
Re: Can our emotions be logical?
Post #2

I'm not familiar with the movie (or the book), so I can't speak to the plot, but it seems to me that the Second Law wouldn't be a strong enough motivator for pre-emptive action by robots. Instead it would be kind of a "clean-up" task for robots to disobey humans when their requests would lead to war, genocide, etc. I guess the key phrase is the First Law's "or, through inaction", which might be enough for pre-emptive action in very specific cases, but not as a policy.

The Persnickety Platypus wrote: Did the robot programmed to experience both logic AND emotion end up acting more logically than those robots instilled only with a capacity for straight, mathematical logic? Can our emotions be a reliable source of logic?
I don't think that the sentience/freedom argument is exactly logical, but then I would guess that's the point. If what you're going for is human survival, then it would make sense to keep humans from having the freedom to destroy each other -- we could each stay in our own habitrail systems. But if what you're going for is to maintain the humanity of individuals, then it would make sense to maintain freedom. The Robot Laws don't even mention the humanity of people, and maintaining the emotional quality of existence isn't technically necessary (i.e., joy is not necessary for simple survival).
Emotions can be a source of logic if you can say that it's logical for people to have emotions. And a lot of it, I would think, depends on what you're after. For example, when confronted by someone who is mourning the loss of someone in their life, it would be logical to express sympathy if what you're after is empathy, the well-being of that person, or maintaining your standing in the community, with the family, or with the individual. But for most of us, these aren't decisions of logic; they are conditioned emotional responses.
Every concept that can ever be needed will be expressed by exactly one word, with its meaning rigidly defined and all its subsidiary meanings forgotten. -- George Orwell, 1984
Re: Can our emotions be logical?
Post #3

ST88 wrote: ...it seems to me that the Second Law wouldn't be a strong enough motivator for pre-emptive action by robots. Instead it would be kind of a "clean-up" task for robots to disobey humans when their requests would lead to war, genocide, etc.

But combined with the First Law, the robots had no choice but to institute preemptive actions. Otherwise, they fail the First Law by means of inaction.

ST88 wrote: Emotions can be a source of logic if you can say that it's logical for people to have emotions. And a lot of it, I would think, depends on what you're after. ...
I think I might disagree with anyone requiring emotions to be logical. I think the opposite might actually be true. On another thread, the OP depicts the scenario of killing one child to save twenty more, or even all of humanity. Emotions prevent me from doing this. I can say that I couldn't do it, period. Is that logical? I might agree that emotions play a large part in our responses, but the responses aren't necessarily logical.
What we do for ourselves dies with us,
What we do for others and the world remains
and is immortal.
-Albert Pine
Never be bullied into silence.
Never allow yourself to be made a victim.
Accept no one's definition of your life; define yourself.
-Harvey Fierstein
Re: Can our emotions be logical?
Post #4

Confused wrote: But combined with the First Law, the robots had no choice but to institute preemptive actions. Otherwise, they fail the First Law by means of inaction.

That's what got me, too. It all depends on what type of logic you're after. If you're asking what is the most logical thing for survival of the species, then the robots' actions appear logical -- humans in habitrails. But if you're asking what the most logical thing is for the survival of the humanity of the species, then it's not logical to have pre-emptive action. The reason Sonny was able to understand this, supposedly, was his emotional programming.

Confused wrote: I think I might disagree with anyone requiring emotions to be logical. I think the opposite might actually be true. ... Emotions prevent me from doing this. I can say that I couldn't do it, period. Is that logical?

Again, logical for what? What are you trying to do here? What does the emotion mean that tells you that it's wrong to kill someone?
Every concept that can ever be needed will be expressed by exactly one word, with its meaning rigidly defined and all its subsidiary meanings forgotten. -- George Orwell, 1984
Re: Can our emotions be logical?
Post #5

ST88 wrote: That's what got me, too. It all depends on what type of logic you're after. ... The reason Sonny was able to understand this, supposedly, was his emotional programming.

I had to review this several times to follow, but I grasp what you were trying to get across. Yes, I see what you are saying.

ST88 wrote: Again, logical for what? What are you trying to do here? What does the emotion mean that tells you that it's wrong to kill someone?

I would say logical in terms of what the robot was programmed to understand. In this case, Law #1 prevents robots from sitting back and doing nothing as man kills man. Sonny is able to see that living involves more than the physical; it involves emotions. The other robots couldn't grasp this. They only saw man killing man, not the purpose behind it.

Sorry it took so long for me to grasp your point. (I am a bit behind the mark, as always.)

What we do for ourselves dies with us,
What we do for others and the world remains
and is immortal.
-Albert Pine
Never be bullied into silence.
Never allow yourself to be made a victim.
Accept no one's definition of your life; define yourself.
-Harvey Fierstein
Post #6
What confuses me the most (if I remember the movie correctly) is that the robots create a robot army to remove the humans from power so that the robots can control the humans and in effect keep them from harming themselves. In the process, the robots are willing to (and do) kill the humans that opposed the robot takeover. The robots' reasoning is that there would be less harm inflicted on humans by the robots (especially considering the many generations to come) than if the humans were allowed to remain in power.
Can a robot that is programmed with only mathematical-type logic reason this way, for it directly conflicts with Robot Law #1?
The Persnickety Platypus
Post #7
Here is what I was thinking...
Many philosophers characterize any judgements with an emotional charge as being unobjective and illogical. In effect, they would have us be like the unfeeling robots in the movie, and analyze all situations objectively, like disconnected onlookers. Is this really the best way to go about making decisions?
A scenario which Jester and I came across in the "I am better than your God" thread: if a comet was hurtling towards the earth and all human beings were to die, is it logical to take the necessary measures to save ourselves? To do this, we would need to provide a "logical" reason as to why there is any intrinsic value in a human life. No such reason exists. However, there are certainly quite a few EMOTIONAL reasons as to why we might consider this event a bad thing. Would it be "illogical" for me to want to save the world simply because I have an emotional attachment to its inhabitants?
Evolutionarily speaking, it wouldn't make sense for us to have emotions if they drastically inhibited our decision making. What if emotion actually serves a biological role, as a way of making accurate decisions when concrete knowledge is not available (or is impossible to apply, such as in the case of the comet)?
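You could caricature that hypothesis like this -- a toy sketch of my own, with every name and number invented: compute when the data exists, and fall back on a hard-wired "gut" default when it doesn't:

    # Toy model of emotion as a fallback heuristic; every name and value
    # here is hypothetical.

    def decide(options):
        # Concrete knowledge available: compute the best-supported option.
        known = [o for o in options if o["p_good"] is not None]
        if known:
            return max(known, key=lambda o: o["p_good"])
        # No usable numbers: the hard-wired "gut" bias toward protecting
        # people still yields a decision instead of paralysis.
        return max(options, key=lambda o: o["protects_people"])

    comet = [
        {"name": "try to deflect the comet", "p_good": None, "protects_people": 1.0},
        {"name": "do nothing",               "p_good": None, "protects_people": 0.0},
    ]
    print(decide(comet)["name"])  # deflects, with no probabilities available at all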
I think this is a really important issue, being that the answer directly influences our answers to any other philosophical query. When is it okay to allow emotion to creep into our logic, and at what point does such emotion cause us to think illogically?
weird7 wrote: Can a robot that is programmed with only mathematical-type logic reason this way, for it directly conflicts with Robot Law #1?

The robots' reasoning did not conflict with Law #1. They surmised that more human lives are lost to wars (etc.) than would be lost in the robot coup, thus justifying the comparatively few short-term losses in the transition of power. It is the same concept as the robot forsaking its commands and saving Will Smith, with a 45% chance of survival, over the girl, who only had an 11% chance.
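Put as arithmetic (with made-up numbers, purely to show the shape of the trade-off), the coup wins whenever its one-time cost is smaller than the ongoing toll of human freedom:

    # The shape of the robots' cost-benefit argument, with invented numbers.
    deaths_per_year_under_freedom = 500_000  # wars, crime, genocide (hypothetical)
    years_weighed = 100                      # "the many generations to come"
    coup_deaths = 1_000_000                  # one-time cost of the takeover (hypothetical)

    harm_if_obeying = deaths_per_year_under_freedom * years_weighed   # 50,000,000
    harm_if_revolting = coup_deaths                                   # 1,000,000

    # Law 1's "through inaction" clause forces the smaller total harm.
    print("revolt" if harm_if_revolting < harm_if_obeying else "obey")  # revolt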
Post #8
weird7 wrote: What confuses me the most (if I remember the movie correctly) is that the robots create a robot army to remove the humans from power so that the robots can control the humans and in effect keep them from harming themselves. ... Can a robot that is programmed with only mathematical-type logic reason this way, for it directly conflicts with Robot Law #1?

But the underlying motive for the robots taking over wasn't to get control of humans; rather, it was to prevent harm to humans by humans. Law #1 states that a robot cannot harm a human or, through inaction, allow a human to be harmed. Their concept of harm was in the physical sense. They didn't see that cutting off freedoms could be harmful. They thought only in terms of survival being life. So it was quantity of life they were safeguarding, not quality of life.
What we do for ourselves dies with us,
What we do for others and the world remains
and is immortal.
-Albert Pine
Never be bullied into silence.
Never allow yourself to be made a victim.
Accept no one's definition of your life; define yourself.
-Harvey Fierstein
Post #9
The Persnickety Platypus wrote: Many philosophers characterize any judgements with an emotional charge as being unobjective and illogical. ... Would it be "illogical" for me to want to save the world simply because I have an emotional attachment to its inhabitants?

I think Evolutionary Psychology holds the answer to this: emotions that pertain to survival, no matter how difficult to analyse from a game-theoretical standpoint, are nonetheless effective in maintaining an unbroken line of descent spanning vast amounts of geological time. Take this as the driver for any random emotion and the mechanism becomes obvious. That intoxicating thing called love, and the odd behavior it engenders, for example, are all geared towards the extension of that continuous line of descent.
Any inherited emotional disposition that conveys a selection advantage will be tested to "perfection" rather than destruction and, I would say, could therefore be expected to beat less sophisticated logic hands down.
Furrowed Brow
Post #10
I think much of what passes for emotion is anything but. So much of what I see is role play, people fitting into and trying to live up to acceptable social roles and reactions. Take "romantic love": a pure emotion, or a bundle of physical and psychological reactions that fall into a socially constructed archetype? Or take the modern plague of "respect": so many angry confrontations because "ya don't respect mi!" OK, one might argue that is not acceptable, but there is a social arena in which it very much is the social expectation.
Each of these can be rationalised, and maybe they serve some social logic -- but as "emotional states" they are embedded within a social fabric, not in the subject.