**Physicist**: With very few exceptions, yes. What we normally call “random” is not truly random, but only appears so. The randomness is a reflection of our ignorance about the thing being observed, rather than something inherent to it.

For example: If you know everything about a craps table, and everything about the dice being thrown, and everything about the air around the table, then you will be able to predict the outcome.

If, on the other hand, you try to predict something like the moment that a radioactive atom will radioact, then you’ll find yourself at the corner of Poo Creek and No.

Einstein and many others believed that the randomness of things like radioactive decay, photons going through polarizers, and other bizarre quantum effects could be explained and predicted if only we knew the “hidden variables” involved. Not surprisingly, this became known as “hidden variable theory”, and it turns out to be wrong.

If outcomes can be determined (by hidden variables or whatever), then any experiment will have a result. More importantly, any experiment will have a result whether or not you choose to do that experiment, because the result is written into the hidden variables before the experiment is even done. Like the dice, if you know all the variables in advance, then you don’t need to do the experiment (roll the dice, turn on the accelerator, etc.). The idea that every experiment has an outcome, regardless of whether or not you choose to do that experiment, is called “the reality assumption”, and it should make a lot of sense. If you flip a coin but don’t look at it, then it’ll land either heads or tails (an unobserved result), and it makes no difference whether or not you ever look at it. In this case the hidden variable is “heads” or “tails”, and it’s only hidden because you haven’t looked at it.

It took a while, but hidden variable theory was eventually disproved by John Bell, who showed that there are lots of experiments that cannot have unmeasured results. Thus the results cannot be determined ahead of time, so there are no hidden variables, and the results are truly random. That is, if it is physically and mathematically impossible to predict the results, then the results are truly, fundamentally random.

What follows is **answer gravy**: a description of one of the experiments that demonstrates Bell’s inequality and shows that the reality assumption is false. If you’re already satisfied that true randomness exists, then there’s no reason to read on. Here’s the experiment:

1) Generate a pair of entangled photons (you can do this with a down converter, which splits one photon into an entangled pair of photons).

2) Fire them at two polarizers.

3) Randomly change the angle of the polarizers after the photons are emitted. This prevents information about one measurement from affecting the other, since that would require the information to travel faster than light.

4) Measure both photons (do they go through the polarizers (1) or not (0)?) and record the results.

The amazing thing about entangled photons is that they always give the same result when you measure them at the same angle. Entangled particles are in fact in a single state shared between the two particles. So by making a measurement with the polarizers at different angles we can measure what one photon would do at two different angles.

It has been experimentally verified that if the polarizers are set at angles θ₁ and θ₂, then the chance that the measurements are the same is C(θ₁, θ₂) = cos^{2}(θ₁ − θ₂). This is only true for entangled photons. If they are not entangled, then C(θ₁, θ₂) = 0.5, since the results are random. Now, notice that if C(a, b) = P and C(b, c) = Q, then C(a, c) ≥ P + Q − 1. This is because:

P(a agrees with c) ≥ P(a agrees with b, and b agrees with c) ≥ P(a agrees with b) + P(b agrees with c) − 1 = P + Q − 1

We can do experiments at 0°, 22.5°, 45°, 67.5°, and 90°. The reality assumption says that the results of all of these experiments exist, but unfortunately we can only do two at a time. Each adjacent pair of angles differs by 22.5°, so C(0°, 22.5°) = C(22.5°, 45°) = C(45°, 67.5°) = C(67.5°, 90°) = cos^{2}(22.5°) ≈ 0.85. Now, based only on this and the reality assumption, we know that if we were to do all of these experiments (instead of only two) then:

C(0°, 22.5°) = 0.85

C(0°, 45°) ≥ C(0°, 22.5°) + C(22.5°, 45°) − 1 = 0.70

C(0°, 67.5°) ≥ C(0°, 45°) + C(45°, 67.5°) − 1 = 0.55

C(0°, 90°) ≥ C(0°, 67.5°) + C(67.5°, 90°) − 1 = 0.40

That is, if we could hypothetically do all of the experiments at the same time we would find that the measurement at 0° and the measurement at 90° are the same at least 40% of the time. However, we find that C(0°, 90°) = cos^{2}(90°) = 0 (they never give the same result).
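The chain of inequalities above can be checked numerically. Here is a minimal Python sketch (the function name `C` is just shorthand for the agreement probability, matching the notation in the text):

```python
import math

def C(a, b):
    """Quantum probability that entangled photons give the same result
    at polarizer angles a and b (in degrees): cos^2(a - b)."""
    return math.cos(math.radians(a - b)) ** 2

# Each adjacent pair of angles (0, 22.5, 45, 67.5, 90) differs by 22.5 deg:
step = C(0, 22.5)          # cos^2(22.5 deg), about 0.85

# Chaining C(a,c) >= C(a,b) + C(b,c) - 1 across the four steps gives the
# reality-assumption bound C(0, 90) >= 4*step - 3 (about 0.41):
bound = 4 * step - 3

# ...but the quantum (and experimentally measured) value is zero:
actual = C(0, 90)

print(f"bound = {bound:.2f}, actual = {actual:.2f}")
```

The bound comes out slightly above 0.40 before rounding, while the actual agreement probability at 0° and 90° is exactly zero, so the reality assumption cannot be saved.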

Therefore, the result of an experiment only exists if the experiment is actually done.

Therefore, you can’t predict the result of the experiment before it’s done.

Therefore, true randomness exists.

As an aside, it turns out that the absolute randomness comes from the fact that every result of every interaction is expressed in parallel universes (you can’t predict two or more mutually exclusive, yet simultaneous results). “Parallel universes” are not nearly as exciting as they sound. Things are defined to be in different universes if they can’t coexist or interact. For example: in the double slit experiment a single photon goes through two slits. These two versions of the same photon exist in different universes from their own points of view (since they are mutually exclusive), but they are in the same universe from our perspective (since we can’t tell which slit they went through, and probably don’t care). Don’t worry about it too much all at once. You gotta pace your swearing.

As another aside, Bell’s Inequality only proves that the reality assumption and locality (nothing can travel faster than light) can’t both be true. However, locality (and relativity) works perfectly, and there are almost no physicists who are willing to give it up.

“Randomness is a condition when you don’t know the outcome of an experiment.”

I disagree with this statement: not knowing is not the same thing as it being impossible to know. True randomness is when effects occur that could never be traced to any cause, even if one were omniscient and could trace all causal paths. In other words, when an outcome cannot, in principle, be predicted.

You’re right that there is no way to know what science will discover in the future. We could learn that all things are deterministic, all things are causal, and everything there is is one big clockwork. However, that is not my speculation.

My speculation is that there are multiple avenues of science, mathematics, and philosophy that already point towards the phenomenon of emergent behavior as being a real, legitimate aspect of our cosmos. In other words, that there truly are effects that cannot be traced back to any cause, in principle. In other words, true randomness.

I don’t see why that is so hard to accept. Frankly, it seems to me to be a simpler solution to certain problems than the tortured idea that every single effect has a cause, tracing all the way back to the big bang (and then what caused that?). Occam’s Razor.

“…In other words, that there truly are effects that cannot be traced back to any cause, in principle. In other words, true randomness…”

When you prove that something is “impossible to know/predict”, you base yourself on some assumptions. It may be discovered in the future (as has already happened) that your assumptions are limited to some specific conditions and do not hold outside them. For example, Newtonian mechanics is limited to speeds much below the speed of light.

An effect, which cannot be traced now, perhaps will be traced in the future?

Can you prove that scientists will never be able to trace what happened before the big bang? Based on modern physics, nothing can escape black holes, so we will never trace what happens there. But maybe future physics will give us some clue? Maybe some currently unknown particles/fields/dimensions will allow us to get information about what is inside a black hole?

Yes, some people do accept that there are things that cannot be traced; other people try to trace those things… It depends on the person’s character.

Sorry, I am writing a lot in this thread, but I am really interested in the topic.

I have a question to those who wrote here:

Why should we distinguish between ‘true’ and ‘not true’ randomness and can they always be distinguished?

(1). We cannot predict when a radioactive atom will decay (true randomness).

(2). But we also cannot predict tomorrow’s British pound rate against the USD (though physicists would not call this true randomness).

Can you prove that (2) is really not true randomness? What if someone on Earth is now observing some atom, and once that atom decays, she runs to exchange her 10,000,000 USD for GBP?

So it means that the USD/GBP rate can possibly depend on an atom’s decay, and so (2) is also “truly random”.

In this way virtually any event can be true random.

I think what it means is that her actions were triggered by a random event. After the random event has happened we can presumably predict that she will run to the exchange. The randomness is still within the quantum realm.

The atom randomly decays, then she predictably runs.

“I think what it means is that her actions were triggered by a random event…”

Which means the dollar-pound rate is truly random.

…and cannot be predicted in principle

“Yes, some people do accept that there are things that cannot be traced; other people try to trace those things… It depends on the person’s character.”

And it should be no other way. The scientific method depends on people challenging theory and measuring empirically.

Likewise, there are mathematicians and physicists and philosophers who are looking for rigorous ways to demonstrate that randomness is a real phenomenon, both in nature and on paper. John Bell and Kurt Gödel are good examples of people who investigated things in this direction.

One thing we can agree on: it is a fascinating topic, and I think one that is pretty fundamental to our world.

“If this is the case, then how can “randomness” be proven?”

It’s difficult, that’s for certain. It’s almost akin to proving a negative. One direction science has taken is to put bounds on how deterministic a system can be. John Bell did this with QM with his inequalities, which showed that QM really must either be indeterministic, or locality cannot hold – in other words, there must be a universal frame of reference of some kind that guides all QM outcomes, an idea that many physicists dislike deeply.

And the Bell Inequalities have survived many, many experimental verifications, which show that as best we can measure, they are indeed correct.

So it’s not impossible to conduct experiments or perform rigorous math that support the idea of randomness. Just very tricky. It is perhaps unlikely we will ever be able to exclude all alternatives, of course. 🙂


Everything that happens, or that can happen, has an equation.

Every equation has an automatic solution.

Whether we take the time to look at an equation and try to solve it or not, the equation is solved regardless.

Whether we experiment on a theory and see results or not, the experiment always has results.

Everything that can happen, has already happened.

The first clause of step 3 of the experimental proof seems to require the conclusion that “random” is already known to be true: “Randomly change the angle of the polarizers after the photons are emitted.”

@Doug Carter

That has been a significant sticking point for a long time and ensuring that it is random has been a whole thing. The results of this experiment are exactly the same regardless of how the polarizers are randomized. In fact, if you just do each of the four experiments one at a time, thousands of times in a row, the results are still the same.

Fundamentally, the important result is that the correlation probability is cos^{2}(θ₁ − θ₂). The rest is just hidden-variable paranoia.

Yes, agreed, but as has already been mentioned, when we do something “randomly”, it’s only “random” because we can’t see a sequence or association?

@jon

Sometimes. When you flip a coin and cover it, the result is still “random” only because you don’t know what it is. Quantum randomness is entirely different: the result literally cannot be known or predicted, because it doesn’t exist in a definite state. That’s what this post (and Bell’s theorem) attempts to demonstrate: if something is merely unknown, then it can be described with probabilities. But in this experiment we find that the results cannot be described with probabilities.

So what happens from here on in?

How do you know you’ve accounted for all the variables?

@Annese Wehbe

The idea of hidden variables is “the system is in a definite state, we just can’t predict it because there are things going on that we don’t know about (hidden variables)”. The approach here is to say, if the system is in a definite state, it can be described by probabilities and must obey all of the mathematical laws of probability. Bell’s theorem shows that certain experiments do not obey the laws of probability, and therefore the assumption that the system is in a definite state is false.

There could definitely be hidden variables we’re not taking into account, but that doesn’t change the nature of the randomness (which is due to the system not being in a definite state).
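That argument can be sketched in code: simulate a hidden-variable world in which every photon pair carries a predetermined outcome for each angle, and check that no such world can reproduce the quantum cos² correlation. (This is an illustrative toy model; the angle list and function names are not from the post.)

```python
import itertools
import math
import random

ANGLES = [0, 22.5, 45, 67.5, 90]

def hidden_variable_pair(rng):
    # A "definite state": the pair carries a predetermined answer for
    # every possible polarizer angle, whether it gets measured or not.
    return {a: rng.choice([0, 1]) for a in ANGLES}

def agree(trials, a, b):
    # Fraction of pairs whose predetermined outcomes match at angles a, b.
    return sum(t[a] == t[b] for t in trials) / len(trials)

rng = random.Random(42)
trials = [hidden_variable_pair(rng) for _ in range(50_000)]

# For each individual pair, agreeing at (a,b) and at (b,c) forces
# agreement at (a,c), so every hidden-variable model must satisfy
# agree(a,c) >= agree(a,b) + agree(b,c) - 1:
for a, b, c in itertools.combinations(ANGLES, 3):
    assert agree(trials, a, c) >= agree(trials, a, b) + agree(trials, b, c) - 1

# The measured quantum correlation cos^2(a - b) breaks that bound:
q = lambda a, b: math.cos(math.radians(a - b)) ** 2
print(q(0, 45), "<", q(0, 22.5) + q(22.5, 45) - 1)  # roughly 0.5 < 0.71
```

The assertion holds no matter how the hidden variables are distributed, which is exactly why the quantum prediction (about 0.5 where the bound demands about 0.71) rules out the definite-state picture rather than just one particular model of it.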

Allow me to intervene, but Bell’s Theorem, in my opinion, does not prove or disprove anything. It only shows another paradox in which the rules of logic get broken if information travels faster than light, and that’s it.

The fact that a quantum entangled particle can instantly communicate with its mirrored partner corrupts the statistical odds, because it messes with the favorable cases and therefore the probability changes.

This dubious premise, that quantum entanglement is true, is behind all the confusion. If quantum entanglement is wrong, then Bell’s experiment is just another paradox like many others. If it is true, then Bell’s experiment could be real, and the “theoretical” C(a,c) = 0.5 that you have mentioned before is wrong, because if particles can in fact communicate faster than light, the theoretical probability would be the one calculated by Bell and not the common statistical 0.5 you have pointed out as the real theoretical value (because in this context the so-called “reality” would be the quantum entanglement, and not conventional statistics).

This way, hidden variable theory was NOT disproved, because we can deterministically show how the whole system would behave if we started from the hypothetically correct premise (by which I mean the premise that quantum entanglement is real) and used logical argumentation. If the system behaved “truly randomly” as you claimed, you wouldn’t have been able to explain it using logical rules, such as coherent language, because the results would have none! True randomness means “lack of cause-effect or pattern”.

If we adulterate the rules of physics like this and use them as a fallacy (as this experiment did), then when information travels faster than light, the rules of logic get broken, and math, being the language that describes logic, starts to not make sense. If math doesn’t work, time-ordered events do not make sense either, since math and time order are intrinsically related. Therefore, random events may seem to take place.

Whether you believe it or not, true randomness was never proven to be right or wrong, and the evidence for both sides is very interesting to study.

P.S. Sorry for any English mistakes or typos.

Why do so many people describe superposition as TWO particles?

To me they do not possess the property of “two-ness” but only “one-ness” UNTIL the observation, at which time the ONE breaks into two. Seems to me that makes a lot of the mysteries disappear and the reasoning better.

It’s ALWAYS about asking the right question while suppressing “common sense”, which Einstein described as “prejudices” and with which I agree.