AquinasD wrote:However, I think what is important to note is that randomness is scientifically unfalsifiable (...) Randomness is not observably distinguishable from events that occur in which we are simply unable to observe the antecedent causes of the consequent.
Science does not deal with individual elementary events but with general patterns found across large numbers of events; physical experiments involve huge numbers of similar events, making it possible to verify probability laws. So indeed it would not make scientific sense to ask whether an individual elementary event with a couple of possible results is random or not, if this event were the only event that ever happened in the universe.
But in the case we are facing, where a very large number of observations can be compared with the values of their respective probabilities calculated from a well-defined theory, the claim that those events obey those probability laws has all the qualities of a scientifically falsifiable claim.
Randomness can be directly falsified by exhibiting a rule that generates the given data. More generally and precisely, the claim that measured events obey a given probability law is the claim that "the optimal compression format for the file of observed data is the format defined by this probability law".
If ever this is false, its falsity can readily be shown by finding a set of observed data and another compression format (expressing another probability law) that makes this file significantly shorter (say, more than 100 bytes shorter, which should not be hard for a gigabytes-long file if there really were hidden causes) than the format of the first probability law does.
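To make the compression criterion concrete, here is a minimal Python sketch (my own toy illustration, using the general-purpose zlib compressor as a stand-in for a format built from a probability law): data secretly generated by a simple rule compresses far below its raw size, while data with no known generating rule does not.

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Size of the data after general-purpose compression (zlib, level 9)."""
    return len(zlib.compress(data, 9))

# Data secretly generated by a simple deterministic rule (a hidden cause):
ruled = bytes((i * i) % 256 for i in range(100_000))

# Data with no known generating rule:
patternless = os.urandom(100_000)

# The ruled data compresses far below its raw size, falsifying any claim
# that it is uniform random noise; the patternless data barely shrinks.
print(compressed_size(ruled))        # much smaller than 100_000
print(compressed_size(patternless))  # close to 100_000
```

The compressor here only detects simple repetition; a format tailored to a specific probability law would play the same role for subtler statistical structure.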
Until now, the quantum probability law has resisted quite well, at least in non-biological systems, and the possibly observed deviations from this law (in parapsychology) are very small.
This is telling: randomness is pervasive in physics.
On the other hand, there is a mathematical theorem essentially stating that, as long as no explicit rule has been found, the belief that some large random-looking piece of data in fact follows some unknown hidden law is unfalsifiable. This theorem was discovered by G. Chaitin. I summed it up (with the proof) in my metaphysics page (just search for "Chaitin" there).
Randomness is not observably distinguishable from events that occur in which we are simply unable to observe the antecedent causes of the consequent.
So what? One of two things must hold:
Either the hidden causes produce a pattern of deviations of possibly observable results (i.e. results of specially designed experiments able to display such deviations) away from the quantum probability law, in which case this pattern can be observed and tentatively expressed as a new, modified probability law (which may not be the ultimate one either, but would already be a proof that the first law was not a complete account of reality);
Or they produce no such pattern in any possible observation, in which case any speculation about them remains futile and sterile for any purpose.
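As a toy illustration of the first branch (with made-up numbers, not real physics data), here is how a pattern of deviations from a predicted probability law would show up in a standard Pearson chi-squared statistic:

```python
import random

def chi_squared(counts, probs, n):
    """Pearson chi-squared statistic: observed counts vs predicted probabilities."""
    return sum((c - n * p) ** 2 / (n * p) for c, p in zip(counts, probs))

random.seed(0)
n = 100_000
predicted = [0.25, 0.75]   # probabilities claimed by the theory (made-up numbers)
hidden    = [0.30, 0.70]   # an imagined hidden deviation from that law

counts = [0, 0]
for _ in range(n):
    counts[0 if random.random() < hidden[0] else 1] += 1

# With 1 degree of freedom, any value far above ~3.84 (the 5% threshold)
# flags a significant deviation from the predicted law:
print(chi_squared(counts, predicted, n))
```

If instead the data really follow the predicted law, the statistic stays near 1, and no amount of data reveals any hidden pattern this way.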
ThatGirlAgain wrote:As you say, the Copenhagen Interpretation cannot be used to claim that randomness is fundamental to reality. There might be hidden variables. (However the experiments based on Bell’s Theorem make it clear that hidden variables of a
classical physics form are impossible.
Non-classical variables are still possible. The world is definitely weird but not necessarily insane.)
What these experiments say exactly cannot be summed up in so few words. Merely stating that "classical" variables are impossible while "non-classical" variables are possible does not mean anything, as it begs the question of what is meant by "classical". To be more precise, it is a matter of whether the variable is local or non-local.
What they say (or at least what they would say if verified with conscious observers many thousands of kilometers apart, each able to observe his local subsystem of a correlated system before any light-speed communication between them could take place) is that the measurement result of a physical system at a given place cannot be locally determined by a hidden property (a hidden variable) of the measured system (or even of the local [system + observer]) that is a local property, meaning a property with no ability to be affected by a distant event (the measurement of another system) faster than the speed of light. Nor can the result even be an independent (local) random effect with fixed probabilities determined by such hidden local properties.
(Well, I admit that the logical deduction of this result from possible observational verifications of quantum theory is not completely rigorous, as it assumes the possibility of a sudden free choice of which aspect of his system each observer will decide to measure; but looking for a way around the conclusion based on an assumed physical determinism of what the observers would choose to measure would be rather far-fetched.)
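For concreteness, here is a small numerical sketch of the standard CHSH form of this argument (a textbook formulation, not the exact wording above), using the quantum-theoretic correlation -cos(a - b) for spin measurements on a singlet pair:

```python
from math import cos, pi, sqrt

def E(a: float, b: float) -> float:
    """Quantum-theoretic correlation of measurement results along
    directions a and b on a singlet pair."""
    return -cos(a - b)

# Measurement angles chosen to maximize the CHSH combination:
a, a2 = 0.0, pi / 2
b, b2 = pi / 4, 3 * pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Any local hidden-variable model satisfies |S| <= 2 (the CHSH inequality),
# while quantum theory predicts |S| = 2*sqrt(2) ~ 2.83 for these angles:
print(abs(S), 2 * sqrt(2))
```

The excess of |S| above 2 is exactly what rules out results being predetermined by local hidden properties, even probabilistically.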
From what little I understand about the modern idea of "decoherence," it seems to be a pathetic attempt to save the idea of a collapsing wave function while avoiding the need for a conscious observer.
Decoherence is NOT an interpretation; it is an effective physical property that can be deduced from quantum theory regardless of the choice of interpretation.
Its precise definition is:
A system S is said to have decohered with respect to a possible measurement M if there is no longer any difference in the probabilities of any possible future measurement of S, whether or not the wavefunction of S is assumed to have already collapsed with respect to M.
In other words, decoherence is NOT a spontaneous collapse; it is a description of the circumstances under which the question of whether a collapse happened or not becomes unverifiable, so that the "already collapsed" hypothesis becomes compatible with the predictions of quantum theory about future measurement results (whereas a collapse before decoherence would violate those predictions).
However, decoherence is an emergent property that only makes sense as a limit property of large systems rather than elementary ones, because it depends on which future measurements remain possible in practice, and this is a fuzzy condition. It is not exactly an internal change, but an external, irreversible loss of future opportunities to make measurements capable of detecting those past characteristics of the system, expressed by components of the wavefunction, that a hypothetical present collapse would destroy.
In practice, decoherence happens as soon as (but not only when) a measurement has been "physically processed", in the sense that there is a macroscopic delivery of the measured result; that is, when the information of the result is "out of the box", with many copies of this information escaping into the environment, so that it can no longer be securely hidden by any further operation.
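As a toy illustration of the definition above (real amplitudes only, with decoherence idealized as the complete suppression of the off-diagonal terms of a 2x2 density matrix):

```python
def prob_plus(rho):
    """Probability of outcome |+> = (|0>+|1>)/sqrt(2) for a 2x2 density
    matrix rho: <+|rho|+> = (rho00 + rho01 + rho10 + rho11) / 2."""
    return (rho[0][0] + rho[0][1] + rho[1][0] + rho[1][1]) / 2

a, b = 0.6, 0.8  # amplitudes of |0> and |1> (a^2 + b^2 = 1)

# Pure superposition a|0> + b|1>:
rho_pure = [[a * a, a * b], [a * b, b * b]]
# State after an assumed collapse (a classical mixture of |0> and |1>):
rho_collapsed = [[a * a, 0.0], [0.0, b * b]]
# Fully decohered state: the environment has erased the off-diagonal terms:
rho_decohered = [[a * a, 0.0], [0.0, b * b]]

# Before decoherence, a |+>/|-> measurement can still tell whether a
# collapse happened (0.98 vs 0.5); after decoherence, it cannot:
print(prob_plus(rho_pure), prob_plus(rho_collapsed))
print(prob_plus(rho_decohered) == prob_plus(rho_collapsed))
```

Once the decohered and collapsed density matrices coincide, no future measurement of the system can distinguish the two hypotheses, which is exactly the point of the definition.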
Examples of physical circumstances that produce decoherence were already given here (even if we do not assume that cats have souls):
ThatGirlAgain wrote:Interactions with other entities that cause irreversible changes is what is the alleged cause of the decoherence. (...) The cat does not die on the spot but is already dead and partly decomposed. Or the cat has been alive and drunk all the milk. A different world branch is selected
retroactively depending on the kind of observation made?
In any case there is the issue that the status of the cat cannot be completely isolated from the outside world. The movement or lack thereof of the cat, even to the extent of breathing, will have immediate (subject to c) gravitational consequences for the entire universe.
Crazee wrote:I think it is relevant to take into consideration the idea that at any given moment, there are infinite factors determining how following events will be played out. Currently, we do not have a method for quantitatively measuring infinity. If this is true, then everything is to some degree random because we could never account for all the factors leading into how future events will occur.
Quantum theory describes physical systems as locally (in every place of finite size, at every given time) having only a finite (though large) "number of possible states" (where "number of possible states" is a specifically quantum-theoretical concept, but still). Thus any idea of a presence of an infinity of factors must refer to non-physical factors, if by "physical" we mean the kind of states of physical systems that quantum theory describes. Of course you may imagine that there are more physical aspects of systems than those described by quantum theory, but such other physical causes have so far remained undetected.
Crazee wrote:Thus we could redefine randomness as: When something occurs for no reason that we can determine.
Indeed, and as JohnPaul already mentioned, this is an effective, non-metaphysical conception of randomness that is very important in the events of daily life, for example when you stumble on some preacher of a sect who tries to convince you that God has plans for your life and that your stumbling on him could not be a mere accident.
JohnPaul wrote:I prefer to believe that the cat is neither alive nor dead, but that all possible versions of it continue to "really" exist, some of them alive and some dead, distributed in accordance with the probabilities derived from quantum theory, each in its own separate 3-D universe but all still part of a larger four-dimensional reality.
Imagine an experiment producing a linearly polarized photon, whose polarization is then measured by some detector along another direction, forming an arbitrarily chosen angle with the polarization direction of the arriving photon.
In other words, it is an experiment with exactly 2 possible results of the "same quality" (one bit of stored information in the detector), but whose theoretical probabilities (cos²θ and sin²θ for an angle θ) take arbitrary values other than 1/2 each.
Now can you make sense of the claim:
- «Both possible versions of the detector (or the larger system) after measurement, continue to "really" exist, one of them with "vertical" measured result and the other with "horizontal" measured result, distributed in accordance with the probabilities derived from quantum theory»
I think such a claim is logically inconsistent. In other words, the idea of the "real existence" of all possible results is logically incompatible with the conformity of the effective (observed) probabilities to those predicted by quantum theory.
Thus the experimental verification of this conformity refutes the idea of the "real existence" of all possibilities. Unless, of course, you find a way to make sense of the claim that one given precise scenario has x times more reality than another when x is an irrational number; but I fail to see how.
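As a toy illustration (a Monte Carlo sampler that presupposes the quantum probability cos²θ rather than deriving it), the observed frequency converges to an irrational value, which no ratio of counts of "equally real branches" could reproduce:

```python
import random
from math import cos, pi

theta = pi / 8  # angle between photon polarization and analyzer axis
# Quantum probability of the "vertical" result:
# cos^2(pi/8) = (2 + sqrt(2)) / 4 ~ 0.8536, an irrational number.
p_vertical = cos(theta) ** 2

random.seed(1)
n = 1_000_000
hits = sum(random.random() < p_vertical for _ in range(n))

# The empirical frequency converges to cos^2(theta), not to any
# rational ratio of numbers of discrete branches:
print(hits / n)  # close to 0.8536
```

Any account of probability as a ratio of counts of equally real outcomes would have to produce a rational number, and so could never match this prediction exactly.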
Let us push the examination of this thought experiment further:
Note that, in any case, any possible "difference of quality" between the two possible final states of the detector remains independent of the angle between the directions of the arriving and the measured polarization. And even if you consider the whole system "emitter + detector", I fail to see how to define any "difference of quality" between its 2 possible final states (making one "more frequent" than the other) in such a way that this difference depends on the configuration of whatever optical device may have been placed on the photon's path, a device that could have modified the direction of polarization without itself keeping any trace of its interaction with the photon.
If a possibility is said to "more probably exist" than another, as defined by the ratio of the numbers of possible "final states", then everything depends on the time at which you choose to stop the experiment and count those "final states". You might as well cheat by waiting longer (making more experiments...) in one case than in the other before doing the counting, so as to change the ratio of these numbers. Finally, I think such a metaphysical definition of "probability" turns out to be empty and incompatible with the effective (experimentally verifiable) meaning of "probability".
Instead, it is his own conscious awareness that randomly synchronizes or locks on to a particular nearby universe which contains a particular result of his observation, one of the many possible results predicted by the probabilities of Quantum theory. All other possible results continue to "really" exist as "real" in their own universes.
I think these two sentences contradict each other, expressing two very different and incompatible interpretations of quantum theory. The first sentence precisely expresses my own view, the "consciousness causes collapse" interpretation, where consciousness is beyond physics.
The second sentence, claiming an independent reality for all alternative possible results, takes consciousness out of the picture; or, in other words, it provides all possible results with their own respective conscious observers so as to make their respective "realities" worthy of that name. But if consciousness is divided (or multiplied) across all possibilities, then how can there still be anything "random", what is a probability, and how can it make any sense to claim that the calculated probabilities are "correct" and can be or have been experimentally verified?
My own view, the "consciousness causes collapse" interpretation which I expressed in my metaphysics page, is that physical objects are not essential beings; the only real things are:
- mathematical objects
- conscious events (including conscious perceptions of the physical world)
So, the alternative possibilities of a measurement only have a mathematical existence, as mathematical objects bearing a mathematical number: the value of the theoretical probability they were given at some time before the conscious measurement happened.