Here's a paradox which, with today's brain-scanning technologies in mind, seems to imply free will as well as dualism.
Imagine that you are the owner of a fantastic brain-scanning machine that has recently been invented and is now harmlessly connected to your brain. The system can analyze the electro-chemical state of your brain and, based on that state, predict exactly what you will and must do next. Now, let's say that while you are sitting at the controls, you press the green button, the machine scans your brain, and it comes back with: "You will press the purple button next." Upon hearing that you will press the purple button, you decide to be a wise guy and push the yellow button instead. The machine is wrong. But how could it be wrong? It must know what your brain circuits would do upon hearing that you will press the purple button, so it should be able to take even that special case into account. Knowing that hearing "purple" would lead you to press the yellow button, the machine should have predicted the yellow button. However, if the machine had told you that you would press the yellow button, then you would surely not have pressed the yellow button. The machine must lie to you in order to predict your behavior; but if it must lie to you, then it cannot predict your behavior by truthfully reporting your behavior. This suggests that there is no algorithm or scanning technology the machine can use to predict behavior when it also has the task of reporting to you what your behavior will be. Therefore, the only way this could be true is if human behavior is indeterministic.
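The self-defeating step in the story can be made concrete with a small sketch (the names and buttons are mine, and the only assumption is that the subject is a determined contrarian):

```python
# Model the "wise guy" as a rule that, on hearing the machine's report,
# presses some button other than the reported one.
BUTTONS = ["purple", "yellow", "green"]

def wise_guy(reported_button):
    """Press the first button that differs from what was reported."""
    return next(b for b in BUTTONS if b != reported_button)

# No report the machine can announce to this subject ever comes true:
truthful_reports = [b for b in BUTTONS if wise_guy(b) == b]
assert truthful_reports == []
```

Of course, this only shows that a *reported* prediction has no fixed point for a contrarian subject; whether that licenses the further leap to indeterminism is exactly what the thread goes on to dispute.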
If human behavior is indeterministic, then wouldn't this mean that some form of dualism is true? That is, if no bridge laws exist that allow the machine to absolutely determine a human decision in all situations (as shown above), then the mental is not fully reducible to the physical. Dualism is the view that both the mental and the physical exist, and a thing's existence as a distinct category is confirmed if it cannot be explained in terms of other phenomena. Since the hypothetical machine cannot reduce every decision to a scannable brain process, wouldn't this suggest that there exists some non-physical component of the brain, called the mind (i.e., dualism)?
Is dualism true?
- harvey1
- Prodigy
- Posts: 3452
- Joined: Fri Nov 26, 2004 2:09 pm
- Has thanked: 1 time
- Been thanked: 2 times
Is dualism true?
Post #1
People say of the last day, that God shall give judgment. This is true. But it is not true as people imagine. Every man pronounces his own sentence; as he shows himself here in his essence, so will he remain everlastingly -- Meister Eckhart
Re: Is dualism true?
Post #71
harvey1 wrote:
Curious wrote: To determine a behaviour using u, v, w and x as factors and then expecting the behaviour to be the same regardless of factor y is more than a little foolish. Factor y (in this case the "stated" prediction) would have some effect on the subsequent behaviour.
What would you consider "factor y" as per my argument?

Don't worry about any delay; I have found it difficult to respond recently due to similar constraints. Factor y is the prediction (or rather the foreknowledge of it). The prediction is made prior to the predicted outcome being made known. This is a real problem, as a prediction is made before all factors are "factorised" (Jeez, this is so cheesy). If the prediction is made by a brain scanner, then such a factor is required to make a prediction. An initial prediction could be made and made known, but after the prediction is factorised the outcome might be different from the initial prediction. The scanner can only predict according to the brain activity. The brain activity continues after the prediction is made, and so the scanner could only predict what the outcome would be prior to the final prediction being made. I hope this makes sense.
(Sorry I have been slow to respond to your other post. I'm sort of tapped out on time, so I'm now forced to choose which "long" posts I respond to.)
Post #72
harvey1 wrote: If you aren't holding that position, then please answer these questions with the position that you actually hold. I'm not interested in what Penrose would say...

Neither am I, for now. I am not saying that our brains are quantum computers; I am saying that you cannot distinguish between free will and quantum randomness. These are two different statements.
Bugmaster wrote: This is also true of my improved Device (as seen in the previous post), specified by the table (0, 1), (1, 2), (2, 1), (null, 0). The Machine knows that, if it does not voice any predictions, the Device will predictably pick 0.
harvey1 wrote: This answer applies only to a Penrosean argument, but I would argue to Penrose that he has misconstrued the thought experiment. If the Device predicts "0" even though Machine A said it would predict "0," then in fact the Device always loses to the Machine.

Firstly, the Device above is entirely deterministic, so Penrose has nothing to say. Secondly, I'm not sure you understood my notation, so let me write it out for you:
If the Machine predicts... The Device will press...
nothing | 0
0 | 1
1 | 2
2 | 0
The Machine, being omniscient, knows that the Device's favorite button is 0. But that knowledge does not help the Machine, because it still cannot voice an accurate prediction "out loud".
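The lookup table above can be sketched directly in code (a minimal illustration; the function and variable names are mine, not the thread's):

```python
# The Device's rule, following the table above. None stands in for the
# Machine staying silent ("nothing").
DEVICE_TABLE = {None: 0, 0: 1, 1: 2, 2: 0}

def device_press(prediction):
    """Return the button this deterministic Device presses, given the
    Machine's voiced prediction (or None for silence)."""
    return DEVICE_TABLE[prediction]

# The Machine's dilemma: silence yields the favorite button 0, but no
# button it can announce out loud matches what the Device then presses.
assert device_press(None) == 0
assert all(device_press(p) != p for p in (0, 1, 2))
```

Under this sketch the Machine's omniscience is beside the point: the table has no fixed point among the voiced predictions, so a correct out-loud announcement is impossible by construction.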
harvey1 wrote: Free will implies that our choices are indeterministic with respect to the Machine that tries to tell us what we will choose. This does not mean that our choices are at all unpredictable, as Machine B illustrates.

I don't understand how Machine B shows that. Wouldn't Machine B predict the same button as Machine A ?
Bugmaster wrote: Essentially, you're saying, "the human can choose to exercise his free will or not, but the Device can't, therefore the Device does not have free will." Firstly, this is begging the question.
harvey1 wrote: How is my statement begging the question? Show me which line in my argument begs the question.

By assuming that the human can make choices but the Device cannot, you are assuming that the human has free will but the Device does not, which is what you were trying to prove in the first place. Earlier, I recall you saying something like, "the human knows that it can pick whatever ice cream flavor he wants, regardless of what the Machine predicts". If you had phrased it as, "the human thinks that it can pick whatever ice cream flavor he wants, regardless of what the Machine predicts", you would not be begging the question; but by implying that the human has actual correct knowledge, not a mere belief, you have implicitly assumed that the human has free will (and that he is correct about having it).
harvey1 wrote: This misses the point. Humans can potentially pick any flavor, meaning that they are not deterministically required to pick a flavor if told by the Machine that they will choose vanilla when the choice is immediately in front of them.

Is this an assumption, or a conclusion of your argument, or both ?
harvey1 wrote: If the choice is to disbelieve in God or to believe in God, free will states that we can purposely choose to believe in God if we want to do so.

This is somewhat off-topic, but, as you know, I claim that we do not have a choice (at least, not a conscious one) when it comes to rational beliefs. I cannot choose to believe that the sky is green with polka dots (I just tried, and failed again !), and neither can I choose to believe in God. Both situations (green sky, God exists) are logically possible, but that's not enough.
harvey1 wrote: No, they aren't going into a counterfactual table. They are in the counterfactual table after the Machine scans the Device. The Machine then knows what the Device will calculate based on certain input from the Machine. Therefore, with an infinitely fast processor the Machine calculates all the moves that a finite Device will make by estimating the life of the Device. ... The Machine after scanning the Device knows everything the Device has done, is doing, and can forever do. Nothing the Device can output will be a surprise to the Machine.

The Device I've "built" at the top of this post depends on the Machine for its output. Therefore, in order to build a table of all the Device's actions, from start to finish, the Machine would need to know all of its own actions (i.e., simulate itself). Furthermore, if the Device's lifespan is random, the Machine cannot know how many rows the table will have.
harvey1 wrote: Why can't the Machine predict what the Device will output if the Machine has full knowledge of the Device and its algorithm?

That's what I've been explaining all along !
I really don't know how to explain my position further, without participating in a little role-playing exercise. So, again: I will play the Device at the top of this post (or a simpler one, without a "favorite" button, take your pick). My Device has a lifetime of 3+-1 turns; i.e., it can press the button 3+-1 times before it goes "poof". You will play the part of an omniscient Machine. Your goal is to announce a correct prediction of which button I'll press.
Now, I realize that you are not omniscient, but I think that even your limited human capabilities will suffice for such a simple Device.
In fact, here, let me take the first turn:
Turn 0: Currently, the Machine is not predicting anything. I press 0.
There are 2+-1 turns remaining.
Re: Is dualism true?
Post #73
Curious wrote: The scanner can only predict according to the brain activity. The brain activity continues after the prediction is made and so the scanner could only predict what would be the outcome prior to the final prediction being made. I hope this makes sense.

Okay, I see where you're going. In this thought experiment I'm not suggesting that a Machine is able to actually exist as a physical device (which indeed it cannot, since it has an infinitely fast processor).
Re: Is dualism true?
Post #74
harvey1 wrote:
Curious wrote: The scanner can only predict according to the brain activity. The brain activity continues after the prediction is made and so the scanner could only predict what would be the outcome prior to the final prediction being made. I hope this makes sense.
Okay, I see where you're going. In this thought experiment I'm not suggesting that a Machine is able to actually exist as a physical device (which indeed it cannot, since it has an infinitely fast processor).

It is the freedom and not the speed of thought that is the relevant factor here. The prediction is made prior to the decision because the prediction is made before all factors are taken into consideration. It is like trying to predict the answer to an equation when only half the equation is known. Without the whole equation you might as well give any random answer. Cause-and-effect arguments cannot be predicted without factorising cause. The prediction is a cause in respect to the final outcome. Unless you are suggesting the machine is prophetic, leaving the prediction out makes the prediction a best guess.
Re: Is dualism true?
Post #75
Curious wrote: To determine a behaviour using u, v, w and x as factors and then expecting the behaviour to be the same regardless of factor y is more than a little foolish. Factor y (in this case the "stated" prediction) would have some effect on the subsequent behaviour. This "stated" prediction may not be the real prediction though, as the prediction could be that the organism would choose an outcome contrary to the "stated" prediction. So the prediction may be dependent upon the stated prediction more than any other factor if the prediction is known. One algorithm that might be used would possibly include the statement "The knowledge of a prediction will likely affect the outcome".... Factor y is the prediction (or rather the foreknowledge of it). The prediction is made prior to the predicted outcome being made known. This is a real problem as a prediction is made before all factors are "factorised" (Jeez, this is so cheesy). If the prediction is made by a brain scanner then such a factor is required to make a prediction. An initial prediction could be made and this could be made known, but after the prediction is factorised the outcome might be different from the initial prediction.

Why can't the initial prediction include all factorization of factor y prior to being made? Remember, the scan has access to all components that make up the Device, including factor y.
Curious wrote: The scanner can only predict according to the brain activity. The brain activity continues after the prediction is made and so the scanner could only predict what would be the outcome prior to the final prediction being made.... It is the freedom and not the speed of thought that is the relevant factor here. The prediction is made prior to the decision because the prediction is made before all factors are taken into consideration.

If the scan and factorization are infinitely fast, then any brain activity that continues after the scan will also be predicted as a result of the infinitely fast scan/factorization.
Curious wrote: It is like trying to predict the answer to an equation when only half the equation is known. Without the whole equation you might as well give any random answer. Cause and effect arguments cannot be predicted without factorising cause. The prediction is cause in respect to final outcome. Unless you are suggesting the machine is prophetic, then to leave the prediction out makes the prediction a best guess.

No, the Machine is not prophetic, but it is omniscient. It knows the initial condition of the brain (or Device) along with full knowledge of its algorithmic state/processing. This knowledge allows the Machine to fully predict future states at any time t.
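The claim here is just determinism of state evolution: initial condition plus transition rule fixes the state at every future time t. A toy sketch (the transition rule is invented purely for illustration):

```python
def step(state):
    """One tick of a toy deterministic system (an arbitrary fixed rule)."""
    return (3 * state + 1) % 17

def state_at(initial_state, t):
    """Fast-forward t ticks from the initial state."""
    s = initial_state
    for _ in range(t):
        s = step(s)
    return s

# The "scan" fixes the future: a prediction made at t=0 for tick 100...
prediction = state_at(5, 100)

# ...must match what the system actually does when run tick by tick.
s = 5
for _ in range(100):
    s = step(s)
assert s == prediction
```

The whole dispute in the thread is whether *voicing* such a prediction to the system being predicted breaks this picture, since the announcement itself becomes part of the state.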
Post #76
harvey1 wrote: If you aren't holding that position, then please answer these questions with the position that you actually hold. I'm not interested in what Penrose would say...
Bugmaster wrote: Neither am I, for now. I am not saying that our brains are quantum computers; I am saying that you cannot distinguish between free will and quantum randomness. These are two different statements.

Nonetheless, if you can't use randomness as part of your solution to the thought experiment because your epistemology forbids it, then we aren't actually hearing how you avoid the conclusions of the thought experiment. After all, what if Penrose were right that free will is a result of quantum randomness? Then we'd both be wrong about the nature of free will. (As it stands, of course, I feel that I have a suitable answer to a Penrosean by citing two Machines, A and B.)
Bugmaster wrote: Firstly, the Device above is entirely deterministic, so Penrose has nothing to say.

Your response is confusing to me. You introduced quantum randomness, and I responded by saying:

harvey1 wrote: This answer applies only to a Penrosean argument, but I would argue to Penrose that he has misconstrued the thought experiment. If the Device predicts "0" even though Machine A said it would predict "0," then in fact the Device always loses to the Machine.
Bugmaster wrote: This is also true of my improved Device (as seen in the previous post), specified by the table (0, 1), (1, 2), (2, 1), (null, 0). The Machine knows that, if it does not voice any predictions, the Device will predictably pick 0.
harvey1 wrote: This brings up another problem. The Machine knows what the human will pick if it doesn't disclose its answers to the human. The Machine couldn't know what the Device predicted even if it never disclosed its answers to the Device.
Bugmaster wrote: Easy enough to fix -- we'll just augment the Device to pick "0" if it doesn't receive a prediction. "0" is the Device's favorite button ! [sic]

If the Device is deterministic, then why did you say it was indeterministic?
Bugmaster wrote: Secondly, I'm not sure you understood my notation, so let me write it out for you:

If the Machine predicts... The Device will press...
nothing | 0
0 | 1
1 | 2
2 | 0

As you can see, this Device is completely deterministic. It has a "favorite" button, 0, that it will press if the Machine stays silent. If the Machine voices any prediction at all, the Device will press a different button than what the Machine predicted.

The Machine is never silent to the Device, and by the rules cannot be silent. It must give the Device enough time to download the next record of the counterfactual table and calculate its choice. The Machine's first record in the counterfactual table is what the Device would have selected had the Machine not scanned the Device. Any finite Device will eventually fail to outwit the Machine.
Bugmaster wrote: The Machine, being omniscient, knows that the Device's favorite button is 0. But that knowledge does not help the Machine, because it still cannot voice an accurate prediction "out loud".

But it "voices" its prediction in the counterfactual table, which the Device discovers is accurate after it downloads each record.
harvey1 wrote: Free will implies that our choices are indeterministic with respect to the Machine that tries to tell us what we will choose. This does not mean that our choices are at all unpredictable as Machine B illustrates.
Bugmaster wrote: I don't understand how Machine B shows that. Wouldn't Machine B predict the same button as Machine A ? [sic]

Not in cases where Machine A is trying to tell a human what they will immediately pick. The human knows that Machine A cannot tell them what they will pick, and therefore the Machine must return an "uncomputable" decision to the human. Since Machine B is not sharing its predictions with the human, the human must make a decision by selecting a number. Machine B knows that number because it can predict what the human will do after Machine A surrenders.
harvey1 wrote: How is my statement begging the question? Show me which line in my argument begs the question.
Bugmaster wrote: By assuming that the human can make choices but the Device cannot, you are assuming that the human has free will but the Device does not, which is what you were trying to prove in the first place.

How have I assumed the Device cannot make a choice? I've insisted from the very beginning that the Device is free to use its algorithms to make a choice. However, those choices are in the counterfactual table immediately upon the scan.
Bugmaster wrote: Earlier, I recall you saying something like, "the human knows that it can pick whatever ice cream flavor he wants, regardless of what the Machine predicts". If you phrased it as, "the human thinks that it can pick whatever ice cream flavor he wants, regardless of what the Machine predicts", you would not be begging the question; but by implying that the human has actual correct knowledge, not a belief, you have implicitly assumed that the human has free will (and he is correct about having it).

This is not begging the question, since this is a premise that may or may not be true. It is not necessarily the case that the Machine cannot tell what the human will do. What if there's a glitch in our brain such that, when the Machine tells it that it will pick "apricot," it will and must pick apricot? If you can show that this is necessarily false, then be my guest. I'll need scientific evidence to support it if you think that this is not possible in principle. If not, then I think it is a good assumption that humans can believe that the Machine cannot predict their actions (i.e., without unwittingly assuming free will).
harvey1 wrote: This misses the point. Humans can potentially pick any flavor, meaning that they are not deterministically required to pick a flavor if told by the Machine that they will choose vanilla when the choice is immediately in front of them.
Bugmaster wrote: Is this an assumption, or a conclusion of your argument, or both ? [sic]

It's an assumption that I think is a good assumption. However, it is not necessarily the case. There could exist a glitch in the brain that invalidates this assumption, but we don't have any good reason to suppose that this is the case.
Bugmaster wrote: This is somewhat off-topic, but, as you know, I claim that we do not have a choice (at least, not a conscious one) when it comes to rational beliefs. I cannot choose to believe that the sky is green with polka dots (I just tried, and failed again ! [sic])

You always have a choice, BM. If you wish, you can pay someone to put you in extreme brainwashing sessions for a number of years (i.e., in some country where the right people are willing to do that), and the possibility is decent that you will believe the sky is green with polka dots. You could even pass a lie detector test when you come out of the sessions. You can choose! (or, if you prefer, "you can choose !").
Bugmaster wrote: The Device I've "built" at the top of this post depends on the Machine for its output. Therefore, in order to build a table of all the Device's actions, from start to finish, the Machine would need to know all of its own actions (i.e., simulate itself). Furthermore, if the Device's lifespan is random, the Machine cannot know how many rows the table will have.

Again, this is my thought experiment. After you acknowledge that humans can do something in my thought experiment that a Device cannot do, then we can consider your thought experiment where the Device can do everything a human can do. (If you like, we can just use a calculator to show such an example.)
harvey1 wrote: Why can't the Machine predict what the Device will output if the Machine has full knowledge of the Device and its algorithm?
Bugmaster wrote: That's what I've been explaining all along ! I really don't know how to explain my position further, without participating in a little role-playing exercise. So, again: I will play the Device at the top of this post (or a simpler one, without a "favorite" button, take your pick). My Device has a lifetime of 3+-1 turns; i.e., it can press the button 3+-1 times before it goes "poof". You will play the part of an omniscient Machine. Your goal is to announce a correct prediction of which button I'll press.

But you've simply changed the thought experiment, which you cannot do. The thought experiment we are discussing in this thread, the thread I created, is based on the parameters that I laid out.
Bugmaster wrote: Turn 0: Currently, the Machine is not predicting anything. I press 0. There are 2+-1 turns remaining.

The Machine would have predicted 0 in the first row of its counterfactual table because this is what you would have pressed had the Machine not scanned you.
Post #77
(Sorry, I don't have much time, so I'll be brief; I'll respond to our other thread sometime later...)
I think we are getting confused regarding all the different Devices and Machines, etc., so let me recap.
In your thought experiment, your initial claim is that, if a nearly-omniscient Machine could not predict the actions of a human, even after mapping his brain, then it proves that the human has free will. McCulloch introduced a simple, deterministic Device that will outwit the Machine every time. I have introduced an indeterministic, completely random Device, whose actions are unpredictable in principle. Both Devices outwit the Machine, and yet neither of them has free will, so the original argument fails. Note that these are two separate Devices, not the same Device.
You then clarified your argument, by saying that it doesn't matter what we observe from the outside; what matters is that the human knows that he has the freedom of choice, regardless of what the Machine says. I countered by saying that you are begging the question by assuming that the human has freedom of choice to begin with.
I would further argue that your counterfactual table is irrelevant. More specifically, the following two situations are identical:
A1). Machine predicts what the subject (human or Device) will press, and voices its prediction
A2). The subject looks at the prediction and presses a button.
A3). Machine makes a new prediction, etc., repeating A1 and A2.
vs.
B1). Machine creates a counterfactual table ahead of time, containing all of its predictions.
B2). The subject downloads this table.
B3). The subject looks up the next prediction in the table.
B4). The subject presses a button.
B5). Repeat B3, B4.
The steps B3, B4 are identical to A1, A2, as long as we assume that the Machine is honest; i.e., as long as we assume that the counterfactual table is a "legally binding" document, as it were. If the Machine changes its mind while the subject is making its next guess, then the Machine lies, and all bets are off.
Thus, we can dispense with the CF table, and discuss the much simpler situation where the subject and the Machine are taking turns.
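The claimed equivalence of the two protocols can be sketched for a deterministic subject (the Machine strategy and the Device table here are illustrative assumptions, not anything specified in the thread):

```python
# A deterministic contrarian subject, as in the earlier table.
DEVICE_TABLE = {None: 0, 0: 1, 1: 2, 2: 0}

def machine_predict(turn):
    """An illustrative, fixed Machine strategy (assumed for the sketch)."""
    return turn % 3

def protocol_a(turns):
    """A1-A3: the Machine voices each prediction live; the subject presses."""
    transcript = []
    for t in range(turns):
        prediction = machine_predict(t)              # A1: voice prediction
        transcript.append(DEVICE_TABLE[prediction])  # A2: subject presses
    return transcript

def protocol_b(turns):
    """B1-B5: the Machine commits a table up front; the subject reads it."""
    table = [machine_predict(t) for t in range(turns)]  # B1: binding CF table
    transcript = []
    for prediction in table:                            # B2-B3: download, look up
        transcript.append(DEVICE_TABLE[prediction])     # B4: subject presses
    return transcript

# As long as the table is honest ("legally binding"), transcripts match.
assert protocol_a(5) == protocol_b(5)
```

The equivalence holds precisely because the table is fixed before play; if the Machine could revise a row after seeing a press, the two protocols would come apart, which is Bugmaster's "all bets are off" condition.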
If you insist on using the CF table as part of the exercise, then please provide me (acting as the Device) with this table, and I will respond accordingly.
harvey1 wrote: Any finite Device will eventually fail to outwit the Machine.
harvey1 wrote: If not, then I think it is a good assumption that humans can believe that the Machine cannot predict their actions (i.e., without unwittingly assuming free will).

Sure, they can believe it, but are they right in believing it ? That's what your argument is trying to prove, and that's why you can't assume this.
harvey1 wrote: You always have a choice, BM. If you wish, you can pay someone to put you in extreme brainwashing sessions for a number of years (i.e., in some country where the right people are willing to do that), and the possibility is decent that you will believe the sky is green with polka dots.

In that case, I would simply trade one immutable belief (blue sky) for another (funky sky). I still do not have control over my beliefs; the brainwasher does.
harvey1 wrote: Again, this is my thought experiment. After you acknowledge that humans can do something in my thought experiment that a Device cannot do...

I can't acknowledge this, since I believe it to be false. Duh.
Bugmaster wrote: Turn 0: Currently, the Machine is not predicting anything. I press 0. There are 2+-1 turns remaining.
harvey1 wrote: The Machine would have predicted 0 in the first row of its counterfactual table because this is what you would have pressed had the Machine not scanned you.

So, is the CF table "legally binding" or not ? If it is, and row 0 of the table contains the binding prediction "0", and the Machine disclosed this prediction to me, the Device, then I would've pressed 1, not 0.
- OccamsRazor
- Scholar
- Posts: 438
- Joined: Wed Mar 29, 2006 7:08 am
- Location: London, UK
Post #78
I have spent some time reading through this thread and have come to the following conclusion:
I am not going to join in and add my own device to the discussion.
What I would say is simply that if we take any sufficiently complex system we may rule out determinism.
For example the exact time that a logic gate in a microprocessor within a computer overheats and loses a bit value is a question of quantum mechanics. In Bugmaster's device such an unforeseen event may occur and it may pick something that the prediction machine did not foresee. This does not prove that the device has a mind independent of its material components.
The human brain is a little more complex than BM's device, and potential thermoelectric anomalies are likely to be far more prevalent. The hypothetical brain scanner would simply never be able to make predictions too far into the future. The more loops of "If I tell the human this it will pick that" that it makes, the greater the uncertainty it faces in its prediction. As with the device, this does not prove dualism.
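The compounding-uncertainty point above can be put in rough numbers. A sketch under an assumed per-turn anomaly rate eps, treated as independent across turns (both the rate and the independence are my assumptions, not the thread's):

```python
# If each predicted turn independently has a small probability eps of an
# unforeseen event (e.g. a thermal/quantum bit flip) that invalidates the
# prediction, the chance an n-turn prediction chain survives intact is
# (1 - eps) ** n, which shrinks toward zero no matter how small eps is.

eps = 0.01  # hypothetical per-turn anomaly rate

for n in (1, 10, 100, 1000):
    survival = (1 - eps) ** n
    print(f"{n:5d} turns: P(prediction chain intact) = {survival:.6f}")
```

This is only an illustration of why long prediction chains degrade; it takes no side on whether such degradation bears on dualism.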
One should not increase, beyond what is necessary, the number of entities required to explain anything.
Post #79
OccamsRazor wrote:if we take any sufficiently complex system we may rule out determinism.
The problem with that, though, is two-fold. First, what does complexity have to do with the actual state of the object? We often hear that something cannot be reduced to its constituents because it is just too darn complex for us. However, if it is not reducible even in principle, then the situation isn't just that the object is too complex. It must also mean that there exists some kind of holistic issue where the whole cannot be reduced to its parts, because the parts aren't all that the whole is made up of.
The second problem is that this thought experiment looks at the ideal situation of an omniscient Machine in order to demonstrate that something is rotten in Denmark with the current materialist view of things.
O.Razor wrote:For example the exact time that a logic gate in a microprocessor within a computer overheats and loses a bit value is a question of quantum mechanics. In Bugmaster's device such an unforeseen event may occur and it may pick something that the prediction machine did not foresee. This does not prove that the device has a mind independent of its material components.
If such an event happened, then we'd simply fall back on the argument against the Penrosean case for free will (namely, that it's just a result of quantum indeterminacy). If that's so, then neither Machine A nor Machine B can know what the Device will do. However, that's not true for the human, since Machine B never disclosed its answer to the human, and so Machine B is able to know what the human will do in any circumstance.
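The asymmetry between the two machines can be sketched in code. This is my own toy model, not from the thread: "Machine A" discloses its prediction to a contrarian human, while "Machine B" simulates the whole exchange (announcement included) but stays silent. The human's default choice and the button names are assumptions.

```python
# Sketch of the Machine A / Machine B asymmetry (illustrative assumptions).

BUTTONS = ["purple", "yellow", "green"]

def human(heard_prediction):
    """Contrarian human: defies any prediction it actually hears."""
    if heard_prediction is None:      # no announcement was made
        return "purple"               # assumed default choice
    return next(b for b in BUTTONS if b != heard_prediction)

machine_a_announcement = "purple"     # Machine A discloses its prediction
actual_choice = human(machine_a_announcement)

# Machine B simulates the human *including* Machine A's announcement,
# but never discloses its own prediction, so disclosure cannot defeat it.
machine_b_prediction = human(machine_a_announcement)

assert machine_a_announcement != actual_choice  # A, having spoken, is wrong
assert machine_b_prediction == actual_choice    # silent B is right
```

The point of the sketch: the paradox constrains what a predictor can *announce*, not what a predictor can *know*.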
O.Razor wrote:The human brain is a little more complex than BM's device and potential thermoelectric anomalies are likely to be far more prevalent. The hypothetical brain scanner would simply never be able to make predictions too far into the future. The more loops of "If I tell the human this it will pick that" that it makes the greater the uncertainty it faces in its prediction. Likewise with the device this does not prove dualism.
But, O.Razor, this is really some Machine. Scientists from an ETI civilization that has been developing technology for the past billion years have built it. It uses physics that would appear to us as magic. You can't even imagine how capable it is at accurately predicting and managing potential thermoelectric anomalies well past the life of an average human or Device.
People say of the last day, that God shall give judgment. This is true. But it is not true as people imagine. Every man pronounces his own sentence; as he shows himself here in his essence, so will he remain everlastingly -- Meister Eckhart
- OccamsRazor
- Scholar
- Posts: 438
- Joined: Wed Mar 29, 2006 7:08 am
- Location: London, UK
Post #80
harvey1 wrote:What does complexity have to do with the actual state of the object? For example, often we hear how something cannot be reduced to its constituents because it is just darn to complex for us. However, if it is not reducible even in principle, then the situation isn't just that the object is too complex. It must also mean that there exists some kind of holistic issue...
No, this was not my point. I am saying that in your thought experiment your machine is assumed to be infallible in its ability to reduce such complexity.
My point here is that if one may say there exists an inherent indeterminacy in the prediction of one component of a system (for example, predicting the exact time at which an electron will jump to a higher energy level), then the more complex the system, the greater the inaccuracy in determining its macroscopic-level behaviour.
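The scaling claim here can be given rough numbers. The values below are illustrative assumptions of mine (not measurements from anywhere in the thread): if each of n components independently has a tiny per-window anomaly probability p, the chance that at least one anomaly occurs somewhere in the system grows rapidly with n.

```python
# Illustrative numbers only: p and n are assumptions, not measurements.
# P(at least one anomaly among n independent components) = 1 - (1 - p)^n.
p = 1e-9  # assumed per-component anomaly probability per prediction window

for n in (1e3, 1e6, 1e9, 1e12):
    prob_any = 1 - (1 - p) ** n
    print(f"n = {n:.0e}: P(at least one anomaly) = {prob_any:.3g}")
```

With these numbers the probability climbs from negligible (about 10^-6 at a thousand components) to near-certainty at a trillion components, which is the shape of OccamsRazor's point: macroscopic unpredictability can emerge from microscopic indeterminacy purely by aggregation.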
harvey1 wrote:If such an event happened, then we'd simply fall back to the argument against the Penrosean argument for free will (which is that its a result of quantum indeterminacy).
But the Hameroff-Penrose hypothesis does allow for such indeterminacy via quantum processes. I agree that this does also give rise to:
harvey1 wrote:It must also mean that there exists some kind of holistic issue where the sum cannot be reduced to its parts because parts aren't all that the whole is made up of.
harvey1 wrote:If that's so, then neither Machine A or Machine B can know what the Device will do. However, that's not true for the human since Machine B would know
So in your thought experiment you are saying that such a machine would be able to bypass any possible quantum mechanical anomalies, which may in fact exist in reality?
harvey1 wrote:But, O.Razor, this is really some Machine. Scientists from a ETI civilization that has been developing technology for the past billion years has developed it.
I could not accept an argument for dualism which would require my acceptance of a device which cannot, even in principle, actually exist in reality.
I'm not saying that your machine cannot exist; I am rather saying that this may be the case, and if so then the argument fails.

One should not increase, beyond what is necessary, the number of entities required to explain anything.