Here's a paradox suggesting that, with today's brain-scanning technologies, one can envision how free will, and perhaps dualism, might follow.
Imagine that you are the owner of a fantastic brain-scanning machine that has recently been invented and is now harmlessly connected to your brain. The system can analyze the electro-chemical state of your brain and, based on that state, predict exactly what you will and must do next. Now, suppose that while you are sitting at the controls, the machine scans your brain when you press the green button and reports, "You will press the purple button next." Upon hearing that you will press the purple button, you decide to be a wise guy and push the yellow button instead. The machine is wrong. But how could it be wrong, since it must know what your brain circuits would do upon hearing that you will press the purple button? The machine should be able to account even for that special case of your knowing the prediction. If it knows that, on hearing "purple," you would press the yellow button, then it should predict yellow. However, if the machine told you that you would press the yellow button, then you would surely not press the yellow button. The machine must lie to you in order to predict your behavior. But if it must lie to you, then it cannot predict your behavior by truthfully reporting its prediction. This suggests that no algorithm or scanning technology can predict your behavior when the machine is also tasked with reporting to you what your behavior will be. Therefore, the only way this could be true is if human behavior is indeterministic.
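The self-defeating structure of this paradox can be sketched in a few lines of Python. This is only an illustrative toy (the `contrarian` function and button names are hypothetical, not anyone's actual model of a brain): a truthful announced prediction would have to be a fixed point of the subject's response, and for a contrarian subject no fixed point exists.

```python
BUTTONS = ["purple", "yellow", "green"]

def contrarian(prediction):
    """A subject who, on hearing a prediction, presses any other button."""
    return next(b for b in BUTTONS if b != prediction)

# A truthful announced prediction must be a fixed point: told the
# prediction, the subject does exactly what was predicted. For a
# contrarian subject, no announced prediction satisfies this.
fixed_points = [p for p in BUTTONS if contrarian(p) == p]
print(fixed_points)  # prints: []
```

Note that this only shows no truthful announced prediction exists for a contrarian responder; whether that licenses the further leap to indeterminism is exactly what the thread below debates.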
If human behavior is indeterministic, then wouldn't this mean that some form of dualism is true? That is, if no bridge laws exist that allow the machine to fully determine a human decision in all situations (as shown above), then the mental is not fully reducible to the physical. Dualism is the view that both the mental and the physical exist, and a thing's independent existence is confirmed if it cannot be explained in terms of other phenomena. Since the hypothetical machine cannot reduce every decision to a scannable brain process, wouldn't this suggest that there exists some non-physical component to the brain called the mind (i.e., dualism)?
Is dualism true?
- harvey1
- Prodigy
- Posts: 3452
- Joined: Fri Nov 26, 2004 2:09 pm
- Has thanked: 1 time
- Been thanked: 2 times
Post #1
People say of the last day, that God shall give judgment. This is true. But it is not true as people imagine. Every man pronounces his own sentence; as he shows himself here in his essence, so will he remain everlastingly -- Meister Eckhart
Post #41
harvey1 wrote:
Device: I have computed that you will lose after 50 years. If you wish to move at exponential speed, then you will burn your circuits in 20 seconds. Either way, I win.

As I have pointed out above, this won't work for a Device that's being subjected to some random cosmic rays. The Machine will certainly outlast the Device, but it won't know the exact number of cycles that the Device will go through before it dies, and thus it won't know what the Device's final output will be.
- McCulloch
- Site Supporter
- Posts: 24063
- Joined: Mon May 02, 2005 9:10 pm
- Location: Toronto, ON, CA
- Been thanked: 3 times
Post #42
Let me rework your example:
- Machine: You have two seconds to respond to this: I predict that both of you will select button 3.
- Human: I select the number 1. Therefore I have free will.
- Device: I select the number 1 because my deterministic programming told me to select a number different than your prediction.
- Machine:
Device, I know your programming, so I have changed my prediction to 1, to show that your programming is deterministic.
Human, I fully know the state of your brain, and compute that you will select 1 if I have predicted 3, therefore I change my prediction to 1.
- Device: I select the number 5 because my deterministic programming told me to select a number different than your prediction.
- Human: If you predict 1 then I pick 5, because I have free will.
- Machine:
Device, I have computed that you will never lose as long as you are operational. Therefore, I concede the game now and bestow upon you the attribute of free-will. Use it wisely, or you will end up in Hell.
Human, I have computed that you will lose interest in this game and leave after 15 minutes. But because I have told you this, you will stay for 30 minutes just to prove that your behaviour cannot be determined. But because I have told you that, you will leave now. Good bye and good luck.
Examine everything carefully; hold fast to that which is good.
First Epistle to the Church of the Thessalonians
The truth will make you free.
Gospel of John
- harvey1
- Prodigy
- Posts: 3452
- Joined: Fri Nov 26, 2004 2:09 pm
- Has thanked: 1 time
- Been thanked: 2 times
Post #43
McCulloch wrote:
Device, I know your programming, so I have changed my prediction to 1, to show that your programming is deterministic.

The Device knows of the Machine's (N+1) record prior to publishing the result. So, the Device is not at a disadvantage since the Device can re-calculate based on the counterfactual table. (Btw, the Device getting access to the counterfactual table was mentioned in my second post.)
McCulloch wrote:
Human, I fully know the state of your brain, and compute that you will select 1 if I have predicted 3, therefore I change my prediction to 1.

This wouldn't occur since the Machine knows the human knows that nothing the Machine predicts will be countered. The human is not engaging in the game, since the human knows it's a game that the Machine cannot win. Since there's no counterfactual table, the Machine is forced to report to the human that its decision is uncomputable. (I say forced to do so because the Machine is forced to provide a counterfactual table to the Device, and therefore it must fairly report that there is no counterfactual table to report to the human.)
McCulloch wrote:
Therefore, I concede the game now and bestow upon you the attribute of free-will. Use it wisely, or you will end up in Hell.

Ah, but the Device is at enmity with the Machine; that's why it is engaging in a constant arms race against the Machine. It won't stop until it finally gives out in exhaustion.
People say of the last day, that God shall give judgment. This is true. But it is not true as people imagine. Every man pronounces his own sentence; as he shows himself here in his essence, so will he remain everlastingly -- Meister Eckhart
- harvey1
- Prodigy
- Posts: 3452
- Joined: Fri Nov 26, 2004 2:09 pm
- Has thanked: 1 time
- Been thanked: 2 times
Post #44
Bugmaster wrote:
Wait a minute, harvey, I think you're re-designing my Device in mid-stream. My Device will sit there and do nothing until the Machine voices its prediction to it, at which point the Device will push a different button than the Machine predicted.

As I said in my second post in this thread, the problem of introducing a Device only moves the issue up a notch. Since every action by the Device can be expected by the Machine, the Machine simply discloses the counterfactual table to the Device. The Device needs to see that its predicted value is in the counterfactual table as soon as it calculates this value, but in fairness (according to the OP) the Device has an opportunity to re-calculate a new value to outwit the Machine (which the Machine keeps anticipating and the finite Device keeps re-calculating). If the Device doesn't get a chance to see the counterfactual table, or if it sees it after it reports the result, then that's not in the spirit of the OP, since the Machine knows this information and should be allowed to show the Device that it has this information before it publishes its results.
Bugmaster wrote:
The Device is very, very simple. It does not download multiple predictions in a row; it doesn't press buttons on its own. And yet, it will always outwit the omniscient Machine -- assuming that the Machine is truthful.

Sure, you could design the thought experiment without a counterfactual table and show that the Device outwits the Machine, but in this "notched up" thought experiment the Device must prove the counterfactual table wrong. Out of fairness to the Device, the Device can access the counterfactual table prior to publishing its results (but only after it has calculated its number). The Machine is always giving the Device an option to change its number so that the counterfactual table is shown to be incorrect.
Bugmaster wrote:
Your CF table, on the other hand, represents a Machine that actually lies to the Device. The Machine says, essentially, "here's my prediction of what you'll press... just kidding ! I was actually going to predict something else. Ha ! Tricked you again ! I am going predict something else again !", etc.

No. The Device actually calculates each value in the counterfactual table. The Machine has not lied, because the Device sees that for every calculation it does, the Machine was right.
Bugmaster wrote:
1: Machine: You have two seconds to respond to this: I predict that both of you will select button 3.
2: Device: I just put in my flash memory the number 1 which is different than number 3.

When the Machine predicts the number 3, the Device and Human have already thought up a number, and that number was 3. So, the Machine is not lying to them; they actually were about to push button number 3. But they didn't want to be predictable, so they went about changing their number.
Bugmaster wrote:
The Machine actually changes its prediction during those two seconds that the Device is processing. Its prediction in Line 1 is a feint, just to get the Device talking. The Machine lies.

It doesn't lie, because the Device's flash memory already has 3 in its register. It hasn't published that number, and won't publish it because it would be wrong.
Bugmaster wrote:
Anyway, I find it really odd that you keep adding all kinds of functionality to my Device: flash memory, registers, etc. Why don't you want to use the very simple Device I have shown above -- especially since you can build one yourself, using a battery, some wires, three light bulbs, and a $5 switch?

Well, as I said when McCulloch challenged the OP, this only pushes the problem up a notch, and in order to show why, I had to add more to the thought experiment to show how the Device doesn't have free will but the human does. Since both you and McCulloch are getting very specific with the thought experiment, I need to get very specific in showing how my thought experiment continues to support my argument.
Bugmaster wrote:
Wasn't the whole objective of your argument to prove that humans are not "algorithmically designed"? You can't assume what you're trying to prove. Well, you technically can, but it doesn't make for a very convincing argument.

I'm not assuming that humans are not algorithmically designed. What I'm assuming is that humans know that they don't have to pick what the Machine predicts, and I'm assuming that humans know that they don't have to counter the Machine. My conclusion is that there's no counterfactual table that the Machine could construct. Thus, the human mind must be uncomputable. I'm not assuming the human mind is uncomputable, since I allow for the possibility that the Machine could compute some answer, as it does for the Device.
Bugmaster wrote:
As I have pointed out above, this won't work for a Device that's being subjected to some random cosmic rays. The Machine will certainly outlast the Device, but it won't know the exact number of cycles that the Device will go through before it dies, and thus it won't know what the Device's final output will be.

The Machine will know the final prediction of the Device, but the timing is not until the Device finally fails. Once the Device fails, its last value is in the flash memory register, which after two seconds is reported. The Machine will publish its results 1.9999999 seconds after the Device fails, and beat the Device's published result by 0.0000001 seconds. If you like, we can have 1,000 other devices repeating the same experiment. In each of the 1,000 trials, the Machine always publishes its predictions 0.0000001 seconds before the Device's final answer. The Machine always wins, indicating that the Device has no free will, but the human wins and does have free will.
People say of the last day, that God shall give judgment. This is true. But it is not true as people imagine. Every man pronounces his own sentence; as he shows himself here in his essence, so will he remain everlastingly -- Meister Eckhart
- McCulloch
- Site Supporter
- Posts: 24063
- Joined: Mon May 02, 2005 9:10 pm
- Location: Toronto, ON, CA
- Been thanked: 3 times
Post #45
harvey1 wrote:
McCulloch wrote:
Device, I know your programming, so I have changed my prediction to 1, to show that your programming is deterministic.
The Device knows of the Machine's (N+1) record prior to publishing the result. So, the Device is not at a disadvantage since the Device can re-calculate based on the counterfactual table. (Btw, the Device getting access to the counterfactual table was mentioned in my second post.)

But the Device that I have devised that will beat the Machine ignores the counterfactual table. This device is far less sophisticated than yours with its flash memory. My Device could even be a mechanical device, depending on how the prediction is delivered from the Machine. Or maybe it is an electrical device (no electronics). The strange thing is that, according to your interpretation of this experiment, your more complex Device that reads and adjusts its behaviour based on the counterfactual table can be shown to be deterministic, while my rather simple device has free will.
harvey1 wrote:
McCulloch wrote:
Human, I fully know the state of your brain, and compute that you will select 1 if I have predicted 3, therefore I change my prediction to 1.
This wouldn't occur since the Machine knows the human knows that nothing the Machine predicts will be countered. The human is not engaging in the game, since the human knows it's a game that the Machine cannot win. Since there's no counterfactual table, the Machine is forced to report to the human that its decision is uncomputable. (I say forced to do so because the Machine is forced to provide a counterfactual table to the Device, and therefore it must fairly report that there is no counterfactual table to report to the human.)

This is where you assume your conclusion. The Machine's counterfactual table for the Human will be just as valid and incorrect as the Machine's counterfactual table for the Device (either Device). Each entry in the Human counterfactual table will be as accurate and as reliable as the original prediction. If we assume your conclusion, then the Machine will not even make the first prediction for the Human, knowing that the Human's behaviour is unpredictable. However, if we assume your premise that such a Machine could predict a human's behaviour based on the state of the Human's brain, then it could just as well create a counterfactual table for the human. If it can predict, then it can make the counterfactual table; if it cannot produce the counterfactual table, then it could not predict either.
harvey1 wrote:
McCulloch wrote:
Therefore, I concede the game now and bestow upon you the attribute of free-will. Use it wisely, or you will end up in Hell.
Ah, but the Device is at enmity with the Machine; that's why it is engaging in a constant arms race against the Machine. It won't stop until it finally gives out in exhaustion.

You are now anthropomorphising the device. I guess you believe that it does have free will, as this experiment demonstrates. I assert that both your device and my device are deterministic devices with no enmity toward anyone. The Machine, in this experiment, will conclude erroneously that the device has free will, since it cannot predict the behaviour of the device.
The error in both the case of the device and the human is that the Machine cannot accurately predict the behaviour of the object, if the object is made aware* of the prediction. This is the classic problem of the observer interfering with the experiment.
* Yes, I am aware that I am anthropomorphising the device too. I am beginning to like him, fighting valiantly against such a presumptuous and authoritarian Machine.

Examine everything carefully; hold fast to that which is good.
First Epistle to the Church of the Thessalonians
The truth will make you free.
Gospel of John
- harvey1
- Prodigy
- Posts: 3452
- Joined: Fri Nov 26, 2004 2:09 pm
- Has thanked: 1 time
- Been thanked: 2 times
Post #46
McCulloch wrote:
But the Device that I have devised that will beat the Machine ignores the counterfactual table. This device is far less sophisticated than yours with its flash memory. My Device could even be a mechanical device, depending on how the prediction is delivered from the Machine. Or maybe it is an electrical device (no electronics). The strange thing is that, according to your interpretation of this experiment, your more complex Device that reads and adjusts its behaviour based on the counterfactual table can be shown to be deterministic, while my rather simple device has free will.

If your simple Device doesn't want to take advantage of the counterfactual table, then what will happen is that the Machine will publish its results 0.0000001 seconds prior to the Device reporting its results. The Device won't have time to re-adjust the output, because by the time it reads the published results that the Machine outputted, it will see that the Machine predicted what the Device reported. The Device should have accessed the counterfactual table and changed the value in its flash memory register. Since it didn't do that, it lost in 2 seconds. At least the Device I proposed doesn't lose for the life of the Device.
McCulloch wrote:
This is where you assume your conclusion. The Machine's counterfactual table for the Human will be just as valid and incorrect as the Machine's counterfactual table for the Device (either Device). Each entry in the Human counterfactual table will be as accurate and as reliable as the original prediction. If we assume your conclusion, then the Machine will not even make the first prediction for the Human, knowing that the Human's behaviour is unpredictable.

But I'm not assuming that. This is the conclusion of the thought experiment. The Machine would love to build a counterfactual table for the human, but the human isn't guessing. They aren't going along with the game. All they are doing is "knowing" and "understanding" that the Machine can't predict and tell what the human will do. There exists no counterfactual table in that circumstance. That's a derived result.
McCulloch wrote:
However, if we assume your premise that such a Machine could predict a human's behaviour based on the state of the Human's brain, then it could just as well create a counterfactual table for the human. If it can predict, then it can make the counterfactual table; if it cannot produce the counterfactual table, then it could not predict either.

That's right! Now you're getting it. If it couldn't produce a counterfactual table, then it can't predict the actions. This is exactly what I'm saying. The human has free will.
McCulloch wrote:
You are now anthropomorphising the device. I guess you believe that it does have free will, as this experiment demonstrates. I assert that both your device and my device are deterministic devices with no enmity with anyone.

True, but the human programmer who basically controls what the device does has enmity for the Machine.
McCulloch wrote:
The Machine, in this experiment, will conclude erroneously that the device has free will, since it cannot predict the behaviour of the device.

As you can see, the device's behavior is predicted by the Machine, and it doesn't attribute free will to the Device.
People say of the last day, that God shall give judgment. This is true. But it is not true as people imagine. Every man pronounces his own sentence; as he shows himself here in his essence, so will he remain everlastingly -- Meister Eckhart
- McCulloch
- Site Supporter
- Posts: 24063
- Joined: Mon May 02, 2005 9:10 pm
- Location: Toronto, ON, CA
- Been thanked: 3 times
Post #47
harvey1 wrote:
If your simple Device doesn't want to take advantage of the counterfactual table, then what will happen is that the Machine will publish its results 0.0000001 seconds prior to the Device reporting its results. The Device won't have time to re-adjust the output, because by the time it reads the published results that the Machine outputted, it will see that the Machine predicted what the Device reported. The Device should have accessed the counterfactual table and changed the value in its flash memory register. Since it didn't do that, it lost in 2 seconds. At least the Device I proposed doesn't lose for the life of the Device.

If I am understanding this correctly, the Machine predicts that the Device will select button 1. The Device deterministically selects button 2 based on that prediction. The Machine modifies its prediction after the time that it made the first prediction but before the Device selects the button, and says, "See! I was right." So long as the Device is too slow to modify its behaviour based on the modified prediction, or if the Device ignores modified predictions, then the Machine is deemed the winner and will deem the Device to have been deterministic. If, however, we are fair and allow the Device time to adjust its behaviour with every new prediction, then the Device and the Machine will go into an infinite loop, proving that the Device's behaviour cannot be predicted. If we put a limit on the number of predictions that the Machine is allowed to make, the Device will win. If we put an arbitrary time limit on the process, then the Machine will win, because it can calculate the series of predictions ahead of time.
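The infinite-loop point above can be made concrete with a small sketch (hypothetical names and rules, not a claim about either poster's exact setup): a deterministic device that always counters the latest announced prediction never reaches a fixed point, so any cap on revisions decides the "winner" by fiat rather than by computation.

```python
def counter_device(prediction, n_buttons=3):
    # Deterministic rule: press the button after the one predicted.
    return (prediction + 1) % n_buttons

def run_game(max_revisions):
    """The Machine keeps revising its prediction to the device's last response."""
    prediction = 0
    for _ in range(max_revisions):
        response = counter_device(prediction)
        if response == prediction:
            return "machine wins"
        prediction = response  # revise and try again
    return "no fixed point within the limit"

print(run_game(1000))  # prints: no fixed point within the limit
```

However large the revision budget, the loop never closes for this rule; only an external cutoff (time limit, device failure) ends the game, which is McCulloch's point about the arbitrariness of declaring a winner.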
harvey1 wrote:
McCulloch wrote:
This is where you assume your conclusion. The Machine's counterfactual table for the Human will be just as valid and incorrect as the Machine's counterfactual table for the Device (either Device). Each entry in the Human counterfactual table will be as accurate and as reliable as the original prediction. If we assume your conclusion, then the Machine will not even make the first prediction for the Human, knowing that the Human's behaviour is unpredictable.
But, I'm not assuming that. This is the conclusion of the thought experiment.

I am having problems differentiating this from circular reasoning. You assume that a Machine can be built that can predict human behaviour based on brain state. Then you assume that such a Machine cannot compute the effect of that prediction on the human's brain state and therefore cannot make a revised prediction. I say that if a Machine can be built that could predict human decisions, then it could also predict the effect of new knowledge on human decisions, therefore putting the Human into the same position as the Device. But if you say that such a Machine could not be built, then the experiment could not be run, and it could not be used to prove that the Machine could not be built.
harvey1 wrote:
McCulloch wrote:
However, if we assume your premise that such a Machine could predict a human's behaviour based on the state of the Human's brain, then it could just as well create a counterfactual table for the human. If it can predict, then it can make the counterfactual table; if it cannot produce the counterfactual table, then it could not predict either.
That's right! Now you're getting it. If it couldn't produce a counterfactual table then it can't predict the actions. This is exactly what I'm saying. The human has free will.

But if such a machine could possibly exist, it could produce the human counterfactual table. I see nothing that counter-indicates that. The Machine predicts human behaviour based on brain state. The Machine predicts human behaviour based on the revised brain state due to knowledge of the Machine's prediction. I seem to miss where the support is for the idea that such a machine could not possibly produce the human counterfactual table.
Remember:
harvey1 wrote:
Imagine that you are the owner of a fantastic brain scanning machine that has recently been invented and is now harmlessly connected to your brain. The system is such that it can analyze the electro-chemical state of your brain, and based on that state can predict exactly what you will and must do next.

If this premise is assumed to be true, I miss where you have proven that such a machine could not revise its predictions based on the human's knowledge of its own predictions. If this premise is assumed to be false, then the thought experiment stops there and proves nothing.
Examine everything carefully; hold fast to that which is good.
First Epistle to the Church of the Thessalonians
The truth will make you free.
Gospel of John
Post #48
harvey1 wrote:
Sure, you could design the thought experiment without a counterfactual table and show that the Device outwits the Machine, but in this "notched up" thought experiment the Device must prove the counterfactual table wrong.

As far as I understand it, your argument goes something like this (I'm being generous to your begging the question; see below):
1). An omniscient Machine could not predict the actions of a human if it fully disclosed its predictions to the human
2). Therefore, the human has free will.
McCulloch used reductio ad absurdum (and I followed in his footsteps) to prove you wrong:
3). Let's create a simple, deterministic Device that always outwits the Machine
4). An omniscient Machine could not predict the actions of the Device if it fully disclosed its predictions to the Device
5). Therefore, the Device has free will.
(5) is absurd, therefore your argument fails. What you are saying is,
6). Let's not use the Device from (3), let's use a different Device
Sorry, but you have to reply to the actual objection, not a strawman.
harvey1 wrote:
When the Machine predicts the number 3, the Device and Human have already thought up a number, and that number was 3...

In order for a prediction to be truly fair, the Machine would have to give the prediction to its test subject (the human or the Device), then wait until they push a button. Once the button is pushed, the prediction becomes obsolete, and a new one has to be entered. Otherwise, the prediction is not a true prediction but a feint. Thus, the following does not indicate a victory for the Machine:
Machine: I predict you will press 1.
Device: Thinking...
Machine: Ha ! I predict you'll press 2 !
Device: 2.
Machine: Gotcha !
And neither does this:
Machine: I predict you will press 1.
Device: Thinking...
Device: 2.
Machine: I-now-predict-you-will-press-2-oh-look-I-win !
Device: Think.... ah dang.
Your introduction of the counterfactual table, and the 2s interval, is a red herring: now, instead of pushing the button physically, the Device records the answer in memory, and instead of voicing its prediction to the Device, the Machine provides a list of all of its predictions ahead of time, which the Device can access one by one. However, this does not change the experiment in the slightest. Each prediction on the list still has to be fair. So, this doesn't work:

harvey1 wrote:
It doesn't lie because the Device's flash memory already has 3 in its register. It hasn't published that number, and won't publish it because it would be wrong.

And neither does this:

harvey1 wrote:
The Machine will know the final prediction of the Device, but the timing is not until the Device finally fails. Once the Device fails, its last value is in the flash memory register which after two seconds is reported.

Well, the Machine hasn't published its prediction (other than in the form of the CF table), so the number can't be right or wrong yet. You can't have it both ways. Either you count both the CF table and what's in the Device's memory as an actual, voiced response, or you don't.
This is why I want to go with my very simple, Radio Shack device. It doesn't have registers and memories to confuse you. It's a refutation of your argument in its purest form (ok, almost, since we could technically make do with just 2 buttons).
harvey1 wrote:
I'm not assuming that humans are not algorithmically designed. What I'm assuming is that humans know that they don't have to pick what the Machine predicts, and I'm assuming that humans know that they don't have to counter the Machine. Thus, the human mind must be uncomputable.

The bolded statements are contradictory. If the decisions of the human mind are a result of some algorithm, as you assume in the first statement, then they are computable by definition. Algorithms compute things, that's what they do.
- harvey1
- Prodigy
- Posts: 3452
- Joined: Fri Nov 26, 2004 2:09 pm
- Has thanked: 1 time
- Been thanked: 2 times
Post #49
McCulloch wrote:
If I am understanding this correctly, the Machine predicts that the Device will select button 1. The Device deterministically selects button 2 based on that prediction. The Machine modifies its prediction after the time that it made the first prediction but before the Device selects the button, and says...

No. The Machine doesn't modify the answer. The counterfactual table was produced instantly upon the scan. (The Machine has an infinitely fast processor.) The Machine is only meeting the condition that it share whatever counterfactual tables it has with the human and/or Device.
McCulloch wrote:
So long as the Device is too slow to modify its behaviour based on the modified prediction or if the Device ignores modified predictions, then the Machine is deemed to be the winner and will deem the Device to have been deterministic. If, however, we are fair and allow the Device time to adjust its behaviour with every new prediction, then the Device and the Machine will go into an infinite loop, proving that the Device's behaviour cannot be predicted. If we put a limit on the number of predictions that the Machine is allowed to make, the Device will win. If we put an arbitrary time limit on the process, then the Machine will win, because it can calculate the series of predictions ahead of time.

The calculations are already done by the Machine at the time of scanning the Device. What the Machine is doing is playing by fair rules by allowing the Device to change its answer when it becomes aware of the values in the counterfactual table. If it doesn't change its answer, then the Machine wins. If the finite Device does change its answer, it will engage in this cat and mouse game up until it finally succumbs to age effects.
McCulloch wrote:
I am having problems differentiating this from circular reasoning. You assume that a Machine can be built that can predict human behaviour based on brain state. Then you assume that such a Machine cannot compute the effect of that prediction on the human's brain state and therefore cannot make a revised prediction.

That's not true. I don't assume that. I assume that the human can play the cat and mouse game and lose against the Machine. What I assume is that the human can refuse to play the cat and mouse game by having knowledge and understanding that they must be able to win by having the freedom of choice. If the human doesn't come to this mindset and tries to beat the Machine, the human will lose.
McCulloch wrote:
I say that if a Machine can be built that could predict human decisions then it could also predict the effect of new knowledge on human decisions, therefore putting the Human into the same position as the Device.

The Machine cannot generate a counterfactual table, since the human is not playing along. There's no data for the Machine to construct such a table. The Device gives it an algorithm and an initial state to do its calculations. The human knows that, no matter what, the human will have a choice to choose a number that the Machine cannot predict. Since the human won't pick a number or play that game, the Machine is forced to provide either a prediction or a counterfactual table that the human can use to provide an answer (those are the rules). If the Machine cannot do so, then it loses by breaking the rules. Again, the Machine loses because it cannot produce a counterfactual table that allows the human to guess a different number.
McCulloch wrote:
But if you say that such a Machine could not be built, then the experiment could not be run and the experiment could not be used to prove that the Machine could not be built.

The Machine can be built. The Machine knows the problem. The Machine knows that it could know what number the human will pick, if the human would just pick a number. But the scan shows that the human won't pick a number, because the human has faith that she will win. What do you recommend that the Machine do? Lie? That's against the rules.
McCulloch wrote:
But if such a machine could possibly exist, it could produce the human counterfactual table. I see nothing that counter-indicates that. Machine predicts human behaviour based on brain state. Machine predicts human behaviour based on revised brain state due to knowledge of Machine's prediction. I seem to miss where the support is for the idea that such a machine could not possibly produce the human counterfactual table.

The counterfactual table is based on what number the device or human will pick next. If there is no such number, then there are no values with which to construct a counterfactual table. Remember, the counterfactual table says that if the human picks 5, the Machine will have already calculated 5; the human will then pick 4, which the Machine will have already calculated for that decision as 4; and so on. If the human doesn't play along, then there's no counterfactual table.
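For a device that does follow a deterministic rule, the counterfactual table described here is just the chain of countered predictions, which the Machine could tabulate ahead of time. A minimal sketch, with illustrative names (the rule and numbers are hypothetical, not from the thread):

```python
def build_cf_table(device_rule, first_prediction, depth):
    """Chain each announced prediction to the device's deterministic counter-move."""
    table, prediction = [], first_prediction
    for _ in range(depth):
        response = device_rule(prediction)
        table.append((prediction, response))
        prediction = response  # the next entry anticipates the countered value
    return table

# Example: a device that always adds 1 (mod 10) to the predicted number.
table = build_cf_table(lambda p: (p + 1) % 10, 5, 4)
print(table)  # prints: [(5, 6), (6, 7), (7, 8), (8, 9)]
```

The table exists only because the device's rule supplies a next value for every entry; harvey1's claim is that a human who declines to pick gives the Machine no such rule to tabulate, which is exactly the premise McCulloch disputes.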
McCulloch wrote:
Remember... If this premise is assumed to be true, I miss where you have proven that such a machine could not revise its predictions based on the human knowledge of its own predictions. If this premise is assumed to be false, then the thought experiment stops there and proves nothing.

The Machine must provide a counterfactual table. Period. If it doesn't, then it violates the rules. The Machine can't provide a counterfactual table, because the human is not thinking along those lines. The human is showing a justified faith that causes the Machine to report back that it has no counterfactual table and cannot predict what number the human will choose.
People say of the last day, that God shall give judgment. This is true. But it is not true as people imagine. Every man pronounces his own sentence; as he shows himself here in his essence, so will he remain everlastingly -- Meister Eckhart
Post #50
harvey1 wrote:
What I assume is that the human can refuse to play the cat and mouse game by having knowledge and understanding that they must be able to win by having the freedom of choice.

So, you are proving that humans have free will by assuming that they have free will? It still sounds like begging the question to me...
harvey1 wrote:
But, the scan shows that the human won't pick a number because the human has faith that she will win.

Wait, so all the human has to do to prove he has free will is to not press any buttons? In that case, we don't need a complex Device to emulate a human, we can just put a brick on the floor next to the button console. The brick won't press any buttons; therefore, the brick has free will.