Here's a paradox: with today's brain-scanning technologies, one can envision how it implies free will as well as dualism.
Imagine that you own a fantastic brain-scanning machine, recently invented and now harmlessly connected to your brain. The system can analyze the electro-chemical state of your brain and, based on that state, predict exactly what you will and must do next. Now suppose that, while you are sitting at the controls, you press the green button, the machine scans your brain, and it comes back with: "You will press the purple button next." Upon hearing that you will press the purple button, you decide to be a wise guy and press the yellow button instead. The machine is wrong. But how could it be wrong? It must know what your brain circuits would do upon hearing that you will press the purple button, so it should be able to account even for that special case of you knowing its prediction. Having predicted that you would push the purple button, the machine must know that you would then press the yellow button. However, if the machine told you that you would press the yellow button, then you would surely not have pressed the yellow button. The machine must lie to you in order to predict your behavior. But if it must lie to you, then it cannot predict your behavior by truthfully reporting its prediction. This suggests that there is no algorithm or scanning technology the machine can use to predict your behavior when it has the task of reporting to you what your behavior will be. Therefore, the only way this could be true is if human behavior is indeterministic.
If human behavior is indeterministic, then wouldn't this mean that some form of dualism is true? That is, if no bridge laws exist that allow the machine to absolutely determine a human decision in all situations (as shown above), then the mental is not fully reducible to the physical. Dualism is the view that both the mental and the physical exist, and a thing's independent existence is supported if it cannot be explained in terms of other phenomena. Since the hypothetical machine cannot reduce every decision to a scannable brain process, wouldn't this suggest that there exists some non-physical component of the brain called the mind (i.e., dualism)?
Is dualism true?
Moderator: Moderators
- harvey1
- Prodigy
- Posts: 3452
- Joined: Fri Nov 26, 2004 2:09 pm
- Has thanked: 1 time
- Been thanked: 2 times
Post #1

People say of the last day, that God shall give judgment. This is true. But it is not true as people imagine. Every man pronounces his own sentence; as he shows himself here in his essence, so will he remain everlastingly -- Meister Eckhart
Post #61
I think this is precisely the main source of confusion regarding your argument:

harvey1 wrote:
> No. My assumption is that humans have the illusion of free will.
Let me copy/paste the argument, but replace the human with the device:

1) Premise: An omniscient Machine could predict the actions of a human even if it fully disclosed its predictions to the human and the human had time to change the result before publishing it.
2) Premise: The Machine originally scans the human's brain and by an infinite calculation determines every guess the human will make -- this information is contained in a counterfactual table.
3) If the human selected a new number based on the number reported to her by the Machine, the Machine will have that number in the counterfactual table. (from (1) and (2))
4) If the human chose not to select a number, then no counterfactual information would be available. (from (2))
5) If the human understood that the Machine would be forced to show its hand before the human chose a number, then the human could cause the Machine to generate a zero-sized counterfactual table. (from (1) and (4))
6) If (5), then the Machine would fail to predict the action of the human according to (1)
7) Therefore, not-(1)
You claim that the device could not reproduce (4) and (5), above. However, it is not clear to me that the human could reproduce (4) and (5), either.

1) Premise: An omniscient Machine could predict the actions of a device even if it fully disclosed its predictions to the device and the device had time to change the result before publishing it.
2) Premise: The Machine originally scans the device's mechanism and by an infinite calculation determines every guess the device will make -- this information is contained in a counterfactual table.
3) If the device selected a new number based on the number reported to it by the Machine, the Machine will have that number in the counterfactual table. (from (1) and (2))
4) If the device chose not to select a number, then no counterfactual information would be available. (from (2))
5) If the device understood that the Machine would be forced to show its hand before the device chose a number, then the device could cause the Machine to generate a zero-sized counterfactual table. (from (1) and (4))
6) If (5), then the Machine would fail to predict the action of the device according to (1)
7) Therefore, not-(1)
You start out by assuming that the human has the illusion of free will, but the Device does not. But... what is this illusion of free will? Isn't the whole point of this experiment to distinguish the illusion of free will from the real thing? I would claim that a device that always outwits the Machine has the illusion of free will as much as the human does. You have claimed that the human can refuse to choose a number but the Device could not, and hence the human has free will (or the illusion thereof); however, it would be pretty easy to augment the Device with the ability to, occasionally, sit there without pressing any buttons.
Basically, this boils down to my original point. What is the illusion of free will? How is it different from mere randomness? Without answering this question, you cannot use your argument.
Additionally, your argument has logical errors.
You use language such as "the human will choose", or "the human understands"; however, these statements already presuppose that the human has free will. Things without free will cannot make choices, and they probably can't understand stuff, either (though that's debatable).
Your point (3) is false when applied to a Device (or an especially ornery human) that always chooses the button based on the Machine's prediction. In order to create a complete counterfactual table of the Device's actions, the Machine would first have to produce a complete table of its own predictions. In effect, the Machine would need to fully simulate itself -- a proposition which introduces all kinds of interesting problems, which I can discuss further if you'd like.
Post #62
Bugmaster wrote:
> You use language such as "the human will choose", or "the human understands"; however, these statements already presuppose that the human has free will. Things without free will cannot make choices, and they probably can't understand stuff, either (though that's debatable).

Not so. I give the Device the ability to choose also. As for understanding, you aren't going to allow my premise that humans understand things? Don't you understand this sentence? I think you are being entirely uncharitable, since this premise is undeniable. However, just because we understand doesn't presuppose free will, as most of those who reject the concept attest: they believe we understand stuff and don't have free will. It's not even necessary that we agree on whether Devices understand, since we both agree that whatever they do, the Machine can know exactly what they will do. The issue is whether this applies to a human, which is what this thought experiment is about.
Bugmaster wrote:
> You have claimed that the human can refuse to choose a number, but the Device could not, and hence the human has free will (or the illusion thereof), but it would be pretty easy to augment the Device with the ability to, occasionally, sit there without pressing any buttons.

However, the Machine knows what the Device will do. If it isn't programmed to give a number, then the Device can't do something the human can do in this thought experiment, namely, pick a number. The human picks a number, but the Device does not. In fact, it doesn't have the ability of choice at all, since its programming doesn't allow it to choose.
If, on the other hand, the Device were programmed to pick a number, the Machine would instantly know what number that would be, since its infinite processing abilities allow it to create just such a counterfactual table.
Bugmaster wrote:
> Basically, this boils down to my original point. What is the illusion of free will? How is it different from mere randomness? Without answering this question, you cannot use your argument.

The illusion of free will is just the belief that one can choose an option without it being dictated, or even knowingly predicted, by the Machine.
Post #63
harvey1 wrote:
> Not so. I give the Device the ability to choose also. As for understanding, you aren't going to allow my premise that humans understand things? Don't you understand this sentence?

I don't know, do I? That all depends on what you mean by "understanding". I agree that humans understand things; however, I disagree that this is what separates humans from Devices a priori (as you well know).
harvey1 wrote:
> However, just because we understand doesn't assume free will as most of those who reject the concept attest. They believe we understand stuff and don't have free will.

Agreed; this is quite possible.
harvey1 wrote:
> However, the Machine knows what the Device will do. If it isn't programmed to give a number, then the Device can't do something the human can do in this thought experiment, which is namely, pick a number. The human picks a number, but the Device does not. In fact, it doesn't have the ability of choice at all since its programming doesn't allow it.

Again, by hooking up the Device to a source of randomness (radioactive decay, or random.org), we can make the Device act just as unpredictably as the human can. If you think that's unfair, show me how you'd distinguish free will from plain old random behavior.
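As an aside not in the original post, such a randomized Device is trivial to sketch; this is a minimal illustration (mine), with Python's `random` module standing in for radioactive decay or random.org:

```python
import random

# Minimal sketch (not from the thread): a Device whose button press
# comes from an external randomness source, ignoring any prediction
# the Machine voices.
def random_device_press(buttons=(0, 1, 2)):
    """Press one of the available buttons uniformly at random."""
    return random.choice(buttons)

# The press is always a legal button, but which one is unpredictable:
assert random_device_press() in (0, 1, 2)
```

Whether this unpredictability counts as free will is, of course, exactly the point under dispute.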
Remember that, in this thought experiment, we are trying to determine which entity has free will. We cannot, therefore, start out by saying, "the human has free will, and the Device doesn't, so the human can pick any number he wants... let's go on to the experiment". What we can do is look at the behavior of both the human and the Device, and see if the Machine can correctly predict one but not the other. As it turns out, our simple Device will outwit the Machine at every turn, so the experiment doesn't tell us much.
harvey1 wrote:
> If, on the other hand, the Device were programmed to pick a number, the Machine would instantly know what number that would be as its infinite processing abilities to create such a counterfactual table would indicate.

I think a little role-playing exercise is in order. Let's pretend that I am the Device. I am a very simple Device, specified by the table (0, 1), (1, 2), (2, 1). I do not download counterfactual tables; in fact, I don't even know what they are. I challenge you to emulate the all-knowing Machine. Please make a prediction (0, 1, or 2), and I will press a button in response, in accordance with my programming. If you correctly predict which button I'll press, you win. To make things easier, let's assume that my lifespan is only going to last 3±1 turns. That is, I can press the button exactly 3±1 times before my battery gives out. You don't know exactly how many turns I'll operate, because my battery is a bit random (on the quantum level).
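For concreteness, the simple Device's entire "programming" fits in a few lines; this is a sketch of mine (not from the post), with the lookup table taken directly from the exercise:

```python
# Sketch (mine) of the simple Device specified by the table
# (0, 1), (1, 2), (2, 1): the Machine's voiced prediction maps
# directly to the button the Device presses.
DEVICE_TABLE = {0: 1, 1: 2, 2: 1}

def device_press(prediction):
    """Return the button pressed in response to a voiced prediction."""
    return DEVICE_TABLE[prediction]

# By construction, every voiced prediction is immediately falsified:
for p in (0, 1, 2):
    assert device_press(p) != p
```

The table is deliberately chosen so that no entry maps a prediction to itself, which is what lets the Device "win" every round.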
harvey1 wrote:
> The illusion of free will is just the belief that one feels like they can choose an option without it being dictated or even knowingly predicted by the Machine.

So, we humans have free will because we feel we do? That's not much of an argument. It's quite possible that some immortal Machine has mapped all of our actions, and knows exactly what we will do and when, and our illusion of free will is just that -- an illusion.
In fact, the human in your experiment could be stuck in just such a situation. He feels that he has free will, but the Machine has mapped his entire brain, and will correctly predict every button he will press. In this case, the human has even less free will than the mechanical Device!
Post #64
Bugmaster wrote:
> Again, by hooking up the Device to a source of randomness (radioactive decay, or random.org), we can make the Device act just as unpredictably as the human can. If you think that's unfair, show me how you'd distinguish free will from plain old random behavior.

This brings up another problem. The Machine knows what the human will pick if it doesn't disclose its answers to the human. The Machine couldn't know what the Device will pick even if it never disclosed its answers to the Device. If the Device is using quantum-mechanical events to make its choices, then even in principle the Machine can never predict what the Device will do. That's a significant difference between human and Device. The human can choose a well-reasoned approach (e.g., "I choose number 7 because it is always lucky for me") -- which the Machine can predict -- versus the Device, which the Machine can never predict. This is a key difference between free will and random decisions. Free-will decisions are non-random, owned decisions by the individual, whereas quantum-mechanical events amplified into decisions are always random, and without a particular reason.
Bugmaster wrote:
> As it turns out, our simple Device will outwit the Machine at every turn, so the experiment doesn't tell us much.

It does so by being totally unpredictable even to itself. It would be like picking ice cream without ever having any favorite flavor. We would just randomly pick ice cream flavors, but I don't know of too many folks who actually do that. Humans generally pick based on their free-willed preferences.
Bugmaster wrote:
> So, we humans have free will because we feel we do? That's not much of an argument. It's quite possible that some immortal Machine has mapped all of our actions, and knows exactly what we will do and when, and our illusion of free will is just that -- an illusion.

No, we can know that we have free will by our ability to choose what we want despite some Machine saying what we will pick, and by being able to select options that are our preferred choices.
Bugmaster wrote:
> In fact, the human in your experiment could be stuck in just such a situation. He feels that he has free will, but the Machine has mapped his entire brain, and will correctly predict every button he will press. In this case, the human has even less free will than the mechanical Device!

As my multi-line argument showed (which you haven't shown to be incorrect), the human can outwit the Machine. (If possible, let's stick to more formal arguments, since chatting gets us nowhere.)
Post #65
harvey1 wrote:
> This brings up another problem. The Machine knows what the human will pick if it doesn't disclose its answers to the human. The Machine couldn't know what the Device predicted even if it never disclosed its answers to the Device.

Easy enough to fix -- we'll just augment the Device to pick "0" if it doesn't receive a prediction. "0" is the Device's favorite button!
harvey1 wrote:
> If the Device is using quantum-mechanical events to make its predictions, then even in principle the Machine can never predict what the Device will do. That's a significant difference between human and device.

Wait a minute, does this mean that the Machine can in principle predict what the human will do? I thought that the central premise of your argument was the opposite.
harvey1 wrote:
> Free will decisions are non-random owned decisions by the individual, whereas quantum-mechanical events which are amplified into decisions are always random, and without a particular reason.

Ok, how would you distinguish the two, assuming that, as your experiment attempts to show, the outcome of free will can be entirely unpredictable?
harvey1 wrote:
> We would just randomly pick ice cream flavors, but I don't know of too many folks who actually do that. Humans generally pick based on their free willed preferences.

Well, a Device that always picks "0" in the absence of a prediction would solve that. However, note that even humans do not pick ice cream flavors based solely on their preferences. A friend of mine is allergic to chocolate, so he never picks chocolate ice cream. His body prevents him from forming certain preferences. Does he still have free will?
harvey1 wrote:
> No, we can know that we have free will by our ability to choose what we want despite some Machine saying what we will predict and being able to select options that are our preferred choices.

You are starting to reverse your argument. Now you're saying that the human would pick his favorite button regardless of what the Machine predicts. In this case, all the Machine has to do is find out which button is the human's favorite, and predict that he'll press that. The Machine doesn't even need to be omniscient for that; it can just gather statistics.
harvey1 wrote:
> As my multiple line argument showed (which you haven't shown to be incorrect), the human can outwit the Machine.

I am specifically refuting your points (4) and (5), because they are begging the question (though, if we agree that Devices have the capacity to understand things, we can dispense with (5)). You yourself are leaning toward reversing your point (3). Furthermore, on (3), if the human or the Device bases its responses on the Machine's predictions, the Machine would go into an infinite loop trying to construct its counterfactual table.
I have also argued in the past that (1) is misleading: the human or the Device doesn't need to "change the result", it can simply wait until the prediction comes in, and then press a button based on the prediction. You can't change what's not there.
Essentially, your argument makes a sort of sense when applied to a human and a Device both. It does not really differentiate between the two, hence it reaches no useful conclusion. When you try to differentiate between humans and Devices, you run into trouble, which I've been pointing out all along.
I also note that you chose to ignore my little role-playing exercise. Why is that?
Post #66
Bugmaster wrote:
> harvey1 wrote:
> > This brings up another problem. The Machine knows what the human will pick if it doesn't disclose its answers to the human. The Machine couldn't know what the Device predicted even if it never disclosed its answers to the Device.
>
> Easy enough to fix -- we'll just augment the Device to pick "0" if it doesn't receive a prediction. "0" is the Device's favorite button!

That won't work. "0" is not the same number that the Device would have picked if the Machine had disclosed the prediction. So, for example, imagine that there are two Machines. One Machine (Machine A) passes the prediction to the Device, and the other Machine (Machine B) doesn't. Machine A is tricked by the Device because the Device is using Penrose's notion that free will is created by quantum processes (which I suppose means you agree with Penrose). However, Machine B is also tricked, even though it never disclosed its prediction to the Device.
Notice how different this is from a human. If the human tells Machine A that she knows she can never be forced to follow Machine A's prediction, then Machine A comes back saying that her decision is indeterminate. However, this is not true of Machine B. Once she gets Machine A to surrender, she picks a number. Machine B, however, knows what number that is going to be, because it can determine what her response to Machine A was going to be, and can also determine what number she will pick after getting Machine A's surrender.
Now, in this scenario the human has free will because she can determine her own will if presented with a choice by Machine A (i.e., her choice is indeterministic to Machine A), but her decisions are never random since Machine B can predict her decisions by remaining quiet.
Bugmaster wrote:
> Wait a minute, does this mean that the Machine can in principle predict what the human will do? I thought that the central premise of your argument was the opposite.

Yes! Free will does not entail unpredictability. It only entails the ability to act indeterministically in cases where an outsider tells the human what choices they will make when the choice is immediately in front of them.
Bugmaster wrote:
> harvey1 wrote:
> > No, we can know that we have free will by our ability to choose what we want despite some Machine saying what we will predict and being able to select options that are our preferred choices.
>
> You are starting to reverse your argument. Now you're saying that the human would pick his favorite button regardless of what the Machine predicts. In this case, all the Machine has to do is find out which button is the human's favorite, and predict that he'll press that. The Machine doesn't even need to be omniscient for that, it can just gather statistics.

That's not what I'm saying. When confronted by a Machine at the ice cream shop, the human can potentially choose flavors that they do not like in order to force the Machine into a surrender. That doesn't mean that the human doesn't have a flavor they like and would normally choose. It also doesn't mean the Machine can't know what that flavor is if it kept its prediction to itself (e.g., Machine B).
Bugmaster wrote:
> harvey1 wrote:
> > 3) If the human selected a new number based on the number reported to her by the Machine, the Machine will have that number in the counterfactual table.
>
> You yourself are leaning toward reversing your point (3). Furthermore, on (3), if the human or the Device bases its responses on the Machine's predictions, the Machine would go into an infinite loop trying to construct its counterfactual table.

How am I reversing the point in (3)? The Machine does have this information on hand if the human tries to outwit the Machine. In my enhanced example above, Machine B knows the number that the human will produce after Machine A loses. (3) holds. And how would the Machine end up in an infinite loop while constructing the counterfactual table? I'm not following you on that point.
Bugmaster wrote:
> I have also argued in the past that (1) is misleading: the human or the Device doesn't need to "change the result", it can simply wait until the prediction comes in, and then press a button based on the prediction. You can't change what's not there.

Then you're not following the argument. The argument specifically states that if one played along, the Machine in (3) would already have that new guess in the counterfactual table for the human or Device to look at.
Bugmaster wrote:
> Essentially, your argument makes a sort of sense when applied to a human and a Device both. It does not really differentiate between the two, hence it reaches no useful conclusion. When you try to differentiate between humans and Devices, you run into trouble, which I've been pointing out all along.

That's what you've been saying, but you haven't refuted my argument. You have to refute the argument before you can claim any kind of victory.
Bugmaster wrote:
> I also note that you chose to ignore my little role-playing exercise. Why is that?

This is trivial. If your non-quantum Device doesn't download the counterfactual table, then the Machine can compute each calculation. When the Device answers, the Machine will just display that number just prior to the Device's response. (Please review the argument again so that I don't have to repeat myself.)
Post #67
harvey1 wrote:
> Machine A is tricked by the Device because the Device is using Penrose's notion that free will is created by quantum processes (which I suppose means you agree with Penrose).

No, I do not. Not only do I think that "free will" is an empty concept, but I also don't think that the brain is a quantum computer that uses quantum superposition for computation. I am merely using quantum randomness to refute your point; see more below...
harvey1 wrote:
> Now, in this scenario the human has free will because she can determine her own will if presented with a choice by Machine A (i.e., her choice is indeterministic to Machine A), but her decisions are never random since Machine B can predict her decisions by remaining quiet.

This is also true of my improved Device (as seen in the previous post), specified by the table (0, 1), (1, 2), (2, 1), (null, 0). The Machine knows that, if it does not voice any predictions, the Device will predictably pick 0.
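The improved Device differs from the simple one by a single extra table entry. A sketch of mine (not from the post), with Python's `None` standing in for null:

```python
# Sketch (mine) of the improved Device: the table (0, 1), (1, 2),
# (2, 1), (null, 0), with None standing in for null (a silent Machine).
IMPROVED_TABLE = {0: 1, 1: 2, 2: 1, None: 0}

def improved_press(prediction=None):
    """Press a button; a silent Machine (None) yields the default button 0."""
    return IMPROVED_TABLE[prediction]

# A silent Machine B predicts the Device perfectly...
assert improved_press() == 0
# ...but any voiced prediction is still countered:
for p in (0, 1, 2):
    assert improved_press(p) != p
```

This mirrors the claimed human behavior: predictable to the silent Machine B, yet never matching what a disclosing Machine A announces.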
harvey1 wrote:
> Yes! Free will does not entail unpredictability. It only entails the ability to act indeterministically in cases where an outsider tells the human what choices they will make if the choice is immediately in front of them.

So, it's acting indeterministically some of the time, but not all of the time? How is indeterminism practically different from unpredictability? This, IMO, is the central weakness of your argument:
harvey1 wrote:
> That's not what I'm saying. When confronted by a Machine at the ice cream shop, the human can potentially choose flavors that they do not like in order to force the Machine into a surrender. That doesn't mean that the human doesn't have a flavor they like and would normally choose.

Essentially, you're saying, "the human can choose to exercise his free will or not, but the Device can't, therefore the Device does not have free will." Firstly, this is begging the question. Secondly, a human that has free will but chooses not to exercise it is, as seen from the outside, indistinguishable from a human that has no free will at all. This means that you cannot, in principle, stage an experiment -- even a gedanken experiment! -- to prove your point.
In other words, you can't stage an experiment that will show what ice cream flavor a human could potentially pick; you can only stage an experiment that shows what ice cream flavor the human actually picks. Thus, your entire thought experiment, as described in the OP, is useless.
harvey1 wrote:
> How am I reversing the point in (3)? The Machine does have this information on hand if the human tries to outwit the Machine. In my enhanced example above, Machine B knows the number that the human will produce after Machine A loses. (3) holds. How is the Machine in an infinite loop in constructing the counterfactual table? I'm not following you on that point.

Let's say that we use a simple Device {(0, 1), (1, 2), (2, 1)} whose output is entirely dependent on input from the Machine. In order for the Device to press a button, the Machine would have to voice a prediction. Naturally, the Machine wants to outwit the Device. Let's say that the Machine picks an arbitrary initial prediction, P0. The Machine immediately knows that the Device will respond by pressing a different button, B0, B0 != P0 (where "!=" is the symbol for "not equals"). Now, the Machine needs to pick a new prediction, P1, such that P1 == B0. But the Machine realizes that, by voicing P1 (remember that the Machine hasn't actually voiced any predictions yet), it will make the Device press B1, B1 != P1, and so on.
Keep in mind that all of these predictions are going into a counterfactual table. The Machine's algorithm is, basically, this:
1: Let Pn = the next prediction
2: Enter Pn into the counterfactual table
3: Let Bn = what the Device would press
4: If Bn != Pn, Goto 1 // Condition is always true
5: Voice Pn
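To make the non-termination concrete, here is a runnable Python sketch of that algorithm (mine, not Bugmaster's), with an iteration cutoff added so the demonstration halts. Against the contrarian table, the loop never finds a prediction the Machine could truthfully voice:

```python
# Sketch of the Machine's algorithm above, with a cutoff added so the
# demonstration halts. A prediction is "voiceable" only if the Device
# would press the predicted button (Bn == Pn); against the contrarian
# table (0, 1), (1, 2), (2, 1) no such fixed point exists.
DEVICE_TABLE = {0: 1, 1: 2, 2: 1}

def find_voiceable_prediction(max_iterations=1000):
    p = 0                                # arbitrary initial prediction P0
    counterfactual_table = []
    for _ in range(max_iterations):
        counterfactual_table.append(p)   # step 2: enter Pn into the table
        b = DEVICE_TABLE[p]              # step 3: what the Device WOULD press
        if b == p:                       # step 4: never true for this table
            return p                     # step 5: voice Pn
        p = b                            # otherwise, try predicting the response
    return None                          # gave up: no stable prediction exists

assert find_voiceable_prediction() is None
```

The cutoff is only there to make the sketch finish; remove it and the loop runs forever, which is exactly the infinite-loop claim being made.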
In other words, you are saying:

harvey1 wrote:
> The argument specifically states that if one played along, the Machine in (3) would already have that new guess in the counterfactual table for the human or Device to look at.

But I argue that this is impossible.
harvey1 wrote:
> If your non-quantum Device doesn't download the counterfactual table, then the Machine can compute each calculation. When the Device answers, the Machine will just display that number just prior to the Device's response.

Again, let me repeat myself: this is cheating. Once the truthful Machine voices a prediction, it is committed to the prediction; it can't change it quickly behind the scenes. And the Machine has to voice a prediction, firstly because the simple Device won't function without it (though the extended Device would), and secondly because that's what your argument states in point (1).
Post #68
Bugmaster wrote:
> No, I do not. Not only do I think that "free will" is an empty concept, but I also don't think that the brain is a quantum computer that uses quantum superposition for computation. I am merely using quantum randomness to refute your point; see more below...

But, wait a second. If you aren't holding that position, then please answer these questions from the position that you actually hold. I'm not interested in what Penrose would say; I'm only interested in how you resolve these issues without advocating someone else's epistemology (e.g., that the brain is a quantum device).
Bugmaster wrote:
> harvey1 wrote:
> > Now, in this scenario the human has free will because she can determine her own will if presented with a choice by Machine A (i.e., her choice is indeterministic to Machine A), but her decisions are never random since Machine B can predict her decisions by remaining quiet.
>
> This is also true of my improved Device (as seen in the previous post), specified by the table (0, 1), (1, 2), (2, 1), (null, 0). The Machine knows that, if it does not voice any predictions, the Device will predictably pick 0.

This answer applies only to a Penrosean argument, but I would argue to Penrose that he has misconstrued the thought experiment. If the Device picks "0" even though Machine A said it would pick "0," then in fact the Device always loses to the Machine. On the other hand, Machine A could not predict what the human would choose, as my argument illustrates.
Bugmaster wrote:
> harvey1 wrote:
> > [Free will] only entails the ability to act indeterministically in cases where an outsider tells the human what choices they will make if the choice is immediately in front of them.
>
> So, it's acting indeterministically some of the time, but not all of the time? How is indeterminism practically different from unpredictability?

Free will implies that our choices are indeterministic with respect to the Machine that tries to tell us what we will choose. This does not mean that our choices are at all unpredictable, as Machine B illustrates.
Bugmaster wrote:
> harvey1 wrote:
> > That's not what I'm saying. When confronted by a Machine at the ice cream shop, the human can potentially choose flavors that they do not like in order to force the Machine into a surrender. That doesn't mean that the human doesn't have a flavor they like and would normally choose.
>
> This, IMO, is the central weakness of your argument: Essentially, you're saying, "the human can choose to exercise his free will or not, but the Device can't, therefore the Device does not have free will." Firstly, this is begging the question.

How is my statement begging the question? Show me which line in my argument begs the question.
But, as my argument shows, humans can exercise free will by not allowing themselves to act deterministically in cases where an outsider tells the human what choices they will make when the choice is immediately in front of them. This demonstration proves that we are responsible for our own acts, since we, in principle, have the ability to change what could be the case. The exercise of free will occurs as often as we wish to plot our own course in life. If the choice is between believing and disbelieving in God, free will states that we can purposely choose to believe in God if we want to do so. For example, if the Machine said to us, "you will choose to be an atheist," then we actually have a choice not to be an atheist based on the Machine's prediction. People who choose to go along with the Machine's predictions are doing so as a matter of their own choice (even if they know deep down it is the wrong choice).

Bugmaster wrote:
"Secondly, a human that has free will but chooses not to exercise it is, as seen from the outside, indistinguishable from a human that has no free will at all. This means that you cannot, in principle, stage an experiment -- even a gedanken experiment! -- to prove your point."
This misses the point. Humans can potentially pick any flavor, meaning that they are not deterministically required to pick a flavor if told by the Machine that they will choose vanilla when the choice is immediately in front of them.

Bugmaster wrote:
"In other words, you can't stage an experiment that will show what ice cream flavor a human could potentially pick; you can only stage an experiment that shows what flavor the human actually picks. Thus, your entire thought experiment, as described in the OP, is useless."
No, they aren't "going into" a counterfactual table; they are already in the counterfactual table once the Machine scans the Device. The Machine then knows what the Device will calculate given any input from the Machine. Therefore, with an infinitely fast processor, the Machine calculates all the moves that a finite Device will make by estimating the life of the Device. The Device generates no novel predictions that aren't already in the counterfactual table that the Machine created upon scanning the Device. Once the Device fails, the Machine displays the prediction based on the last counterfactual record downloaded to the Device.

Bugmaster wrote:
"Keep in mind that all of these predictions are going into a counterfactual table."
This is incorrect: step (2) is wrong. All the Pn's are entered in the counterfactual table immediately upon scanning. Remember, the Device is a deterministic device. After scanning the Device, the Machine knows everything the Device has done, is doing, and can forever do. Nothing the Device can output will be a surprise to the Machine.

Bugmaster wrote:
"The Machine's algorithm is, basically, this:

1: Let Pn = the next prediction
2: Enter Pn into the counterfactual table
3: Let Bn = what the Device would press
4: If Bn != Pn, Goto 1    // Condition is always true
5: Voice Pn

This is an infinite loop."
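Bugmaster's five-step loop can be sketched in Python against a hypothetical table-driven Device. Two assumptions are mine, not the thread's: the "next prediction" rule is simply Pn = Bn (the Machine revises its guess to whatever the Device would press), and the sketch stops when a prediction repeats rather than literally looping forever, which makes the non-termination visible.

```python
# Sketch of the quoted Machine loop. DEVICE_TABLE is an assumed Device rule
# mapping the current prediction Pn to the button Bn the Device would press.
DEVICE_TABLE = {0: 1, 1: 2, 2: 1}

def run_machine(first_prediction):
    pn = first_prediction
    seen = []                        # predictions already tried
    while pn not in seen:
        seen.append(pn)              # step 2: enter Pn into the table
        bn = DEVICE_TABLE[pn]        # step 3: what the Device would press
        if bn != pn:                 # step 4: always true for this Device
            pn = bn                  # step 1 again: next prediction
        else:
            return pn                # step 5 would voice Pn -- never reached
    return None                      # predictions cycled: nothing voiceable

print(run_machine(0))    # None: the guesses cycle 0 -> 1 -> 2 -> 1 -> ...
```

Whether this supports "infinite loop" or "the Machine knew the whole cycle in advance" is exactly the disagreement in the surrounding posts; the sketch only shows that the step-4 condition never lets the loop exit for a Device with no fixed point.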
(I highly recommend reading through this thread again to understand this point. I think I talked in more detail with McCulloch about this issue, so you might wish to read my posts to him.)
The Machine doesn't have to simulate itself; it must simulate the Device. The Machine knows the algorithm of the Device, so it knows what the Device will do with any possible Pn provided to it by the Machine.

Bugmaster wrote:
"Things get even worse when you start looking into the details. In line 1, how does the Machine obtain the next prediction? Realistically, the prediction Pn is probably some function of the previous prediction, Pn = f(Pn-1). This means that the Machine would effectively have to simulate itself in order to obtain Pn -- and that's an impossible task for an infinite Machine."
Why can't the Machine predict what the Device will output if the Machine has full knowledge of the Device and its algorithm? Imagine that the Device is very simple: it returns Pn + 1 for any Pn given to it. The Machine knows that the Device will pick 1 as its first guess. (I.e., the first entry in the Machine's counterfactual table is what the Device would have done had the Machine not scanned it.) So the counterfactual table has these record values: 1, 2, 3, 4, 5, ..., guaranteed Device failure: N. The Device cannot do anything novel; it must follow its algorithm, which the Machine has already used to build the counterfactual table with its infinitely fast processor.

Bugmaster wrote:
"In other words, you are saying: 'The argument specifically states that if one played along, the Machine in (3) would already have that new guess in the counterfactual table for the human or Device to look at.' But I argue that this is impossible."
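The "adds one" Device and its precomputed counterfactual table can be sketched as follows. The names and the finite bound `LIFETIME` are mine; the bound stands in for the post's "guaranteed Device failure: N," since any real Device only runs finitely many steps.

```python
# Sketch of the simple Device that answers Pn + 1 to any prediction Pn,
# and of the Machine building the whole counterfactual table up front.
LIFETIME = 10    # assumed finite life of the Device ("failure at N")

def simple_device(pn):
    return pn + 1

def build_counterfactual_table():
    """First entry: what the Device does unprompted (1, per the post above);
    each later entry: the Device's answer to the previous entry."""
    table = [simple_device(0)]
    for _ in range(LIFETIME - 1):
        table.append(simple_device(table[-1]))
    return table

table = build_counterfactual_table()
print(table)    # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] -- nothing novel appears
```

This is the sense in which "the Device cannot do anything novel": every answer it could ever give is already a record in the table the Machine computed at scan time.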
You're trying to construct a thought experiment, but this is my thought experiment. In this thought experiment I wish to show a quality a human has that this Device cannot possibly possess. I do this by showing that the Device has an opportunity to give an answer different from the Pn it knows from the last counterfactual table downloaded from the Machine. This is fair, since both the human and the Device have an opportunity to guess again before the answers and prediction are displayed for the world to see. Read this post, where I show the code for how the Machine gives the Device an opportunity to respond to the last entry it downloaded from the counterfactual table.

Bugmaster wrote:
"Again, let me repeat myself: this is cheating. Once the truthful Machine voices a prediction, it is committed to the prediction; it can't change it quickly behind the scenes."

If your non-quantum Device doesn't download the counterfactual table, then the Machine can compute each calculation. When the Device answers, the Machine will just display that number just prior to the Device's response.
The Machine does provide the first entry in the counterfactual table, based on what the Device would have done if it had not been scanned by the Machine.

Bugmaster wrote:
"And the Machine has to voice a prediction, firstly because the simple Device won't function without it (though the extended Device would), and secondly because that's what your argument states in point (1)."
People say of the last day, that God shall give judgment. This is true. But it is not true as people imagine. Every man pronounces his own sentence; as he shows himself here in his essence, so will he remain everlastingly -- Meister Eckhart
Re: Is dualism true?
Post #69

harvey1 wrote:
"Here's a paradox that seems that with today's brain scanning technologies one can envision how this paradox implies free will as well as dualism. [...]"

This seems to be an attempt to prove indeterminism by assuming determinism (which is a little confusing). To determine a behaviour using u, v, w and x as factors and then expecting the behaviour to be the same regardless of factor y is more than a little foolish. Factor y (in this case the "stated" prediction) would have some effect on the subsequent behaviour. This "stated" prediction may not be the real prediction, though, as the real prediction could be that the organism would choose an outcome contrary to the "stated" prediction. So the prediction may depend upon the stated prediction more than any other factor if the prediction is known. One algorithm that might be used would possibly include the statement "The knowledge of a prediction will likely affect the outcome."
- harvey1
Re: Is dualism true?
Post #70

Curious wrote:
"This seems to be an attempt to prove indeterminism by assuming determinism (which is a little confusing)."

That's right; this is a reductio argument.
Curious wrote:
"To determine a behaviour using u, v, w and x as factors and then expecting the behaviour to be the same regardless of factor y is more than a little foolish. Factor y (in this case the 'stated' prediction) would have some effect on the subsequent behaviour."

What would you consider "factor y" to be, as per my argument?
(Sorry I have been slow to respond to your other post. I'm sort of tapped out on time, so I'm now forced to choose which "long" posts I respond to.)