**Physicist**: With very few exceptions, yes. What we normally call “random” is not truly random, but only appears so. The randomness is a reflection of our ignorance about the thing being observed, rather than something inherent to it.

For example: If you know everything about a craps table, and everything about the dice being thrown, and everything about the air around the table, then you will be able to predict the outcome.

If, on the other hand, you try to predict something like the moment that a radioactive atom will radioact, then you’ll find yourself at the corner of Poo Creek and No Paddle. Einstein and many others believed that the randomness of things like radioactive decay, photons going through polarizers, and other bizarre quantum effects could be explained and predicted if only we knew the “hidden variables” involved. Not surprisingly, this became known as “hidden variable theory”, and it turns out to be wrong.

If outcomes can be determined (by hidden variables or whatever), then any experiment will have a result. More importantly, any experiment will have a result whether or not you choose to do that experiment, because the result is written into the hidden variables before the experiment is even done. Like the dice, if you know all the variables in advance, then you don’t need to do the experiment (roll the dice, turn on the accelerator, etc.). The idea that every experiment has an outcome, regardless of whether or not you choose to do that experiment, is called “the reality assumption”, and it should make a lot of sense. If you flip a coin, but don’t look at it, then it’ll land either heads or tails (this is an unobserved result), and it doesn’t make any difference whether you look at it or not. In this case the hidden variable is “heads” or “tails”, and it’s only hidden because you haven’t looked at it.

It took a while, but hidden variable theory was eventually disproved by John Bell, who showed that there are lots of experiments that cannot have unmeasured results. Thus the results cannot be determined ahead of time, so there are no hidden variables, and the results are truly random. That is, if it is physically and mathematically impossible to predict the results, then the results are truly, fundamentally random.

What follows is **answer gravy**: a description of one of the experiments that demonstrates Bell’s inequality and shows that the reality assumption is false. If you’re already satisfied that true randomness exists, then there’s no reason to read on. Here’s the experiment:

1) Generate a pair of entangled photons (you can do this with a down converter, which splits one photon into an entangled pair of photons).

2) Fire them at two polarizers.

3) Randomly change the angle of the polarizers after the photons are emitted. This prevents information about one measurement from affecting the other, since that would require the information to travel faster than light.

4) Measure both photons (do they go through the polarizers (1) or not (0)?) and record the results.

The amazing thing about entangled photons is that they always give the same result when you measure them at the same angle. Entangled particles are in fact in a single state shared between the two particles. So by making measurements with the polarizers at different angles, we can measure what one photon would do at two different angles.
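Here is a minimal sketch of the measurement statistics, in Python. Note what it is and isn’t: it encodes the quantum agreement law C(θ₁, θ₂) = cos²(θ₁ − θ₂) directly as a probability, rather than deriving it from a local mechanism — Bell’s whole point is that no local hidden-variable recipe can reproduce this law at every pair of angles. The function name and sample count are illustrative choices, not anything from the original experiment.

```python
import math
import random

def estimate_C(theta1, theta2, n=200_000, seed=0):
    """Estimate C(theta1, theta2): the fraction of entangled pairs whose
    two polarizer measurements agree (both pass, or both blocked).
    The quantum prediction cos^2(theta1 - theta2) is put in by hand."""
    rng = random.Random(seed)
    p_same = math.cos(math.radians(theta1 - theta2)) ** 2
    same = 0
    for _ in range(n):
        first = rng.randint(0, 1)                # each single result is a fair coin
        second = first if rng.random() < p_same else 1 - first
        same += first == second
    return same / n

print(estimate_C(0, 0))       # 1.0: same angle, always the same result
print(estimate_C(0, 22.5))    # about 0.85 = cos^2(22.5 deg)
print(estimate_C(0, 90))      # 0.0: perpendicular polarizers never agree
```

Each individual result is still a fair coin flip; only the *agreement* between the two photons carries the cos² structure.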

It has been experimentally verified that if the polarizers are set at angles θ₁ and θ₂, then the chance that the measurements are the same is C(θ₁, θ₂) = cos^{2}(θ₁ − θ₂). This is only true for entangled photons. If they are not entangled, then C = 1/2, since the results are random. Now, notice that for any three angles α, β, and γ, C(α, γ) ≥ C(α, β) + C(β, γ) − 1. This is because: whenever the α and β results agree and the β and γ results agree, the α and γ results must agree as well, so P(α, γ disagree) ≤ P(α, β disagree) + P(β, γ disagree), which is just the inequality above rearranged.

We can set the polarizers at 0°, 22.5°, 45°, 67.5°, and 90°. The reality assumption says that the results of all of these experiments exist, but unfortunately we can only do two at a time. Since each adjacent pair of angles is 22.5° apart, C(0°, 22.5°) = C(22.5°, 45°) = C(45°, 67.5°) = C(67.5°, 90°) = cos^{2}(22.5°) ≈ 0.85. Now, based only on this and the reality assumption, we know that if we were to do all of these experiments (instead of only two) then:

C(0°, 22.5°) = 0.85

C(0°, 45°) ≥ C(0°, 22.5°) + C(22.5°, 45°) -1 = 0.70

C(0°, 67.5°) ≥ C(0°, 45°) + C(45°, 67.5°) -1 = 0.55

C(0°, 90°) ≥ C(0°, 67.5°) + C(67.5°, 90°) - 1 = 0.40

That is, if we could hypothetically do all of the experiments at the same time we would find that the measurement at 0° and the measurement at 90° are the same at least 40% of the time. However, we find that C(0°, 90°) = cos^{2}(90°) = 0 (they never give the same result).
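The chained arithmetic above can be written in a few lines. One caveat on the numbers: the text rounds each step to 0.85, giving a final bound of 0.40; carrying cos²(22.5°) ≈ 0.8536 exactly gives about 0.41. Either way the classical bound is far above the measured value of 0.

```python
import math

def C_quantum(gap_degrees):
    """Quantum prediction for the agreement probability at a given angle gap."""
    return math.cos(math.radians(gap_degrees)) ** 2

step = C_quantum(22.5)            # about 0.8536 agreement per 22.5-degree step

# Chain C(a, c) >= C(a, b) + C(b, c) - 1 three times:
# 0 -> 22.5 -> 45 -> 67.5 -> 90 degrees.
bound = step
for _ in range(3):
    bound = bound + step - 1

print(round(bound, 3))            # 0.414: reality-assumption lower bound on C(0, 90)
print(round(C_quantum(90), 3))    # 0.0: what is actually measured
```

The contradiction between the bound and the measurement is what rules out the reality assumption (given locality).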

Therefore, the result of an experiment only exists if the experiment is actually done.

Therefore, you can’t predict the result of the experiment before it’s done.

Therefore, true randomness exists.

As an aside, it turns out that the absolute randomness comes from the fact that every result of every interaction is expressed in parallel universes (you can’t predict two or more mutually exclusive, yet simultaneous results). “Parallel universes” are not nearly as exciting as they sound. Things are defined to be in different universes if they can’t coexist or interact. For example: in the double slit experiment a single photon goes through two slits. These two versions of the same photon exist in different universes from their own points of view (since they are mutually exclusive), but they are in the same universe from our perspective (since we can’t tell which slit they went through, and probably don’t care). Don’t worry about it too much all at once. You gotta pace your swearing.

As another aside, Bell’s Inequality only proves that the reality assumption and locality (nothing can travel faster than light) can’t both be true. However, locality (and relativity) work perfectly, and there are almost no physicists who are willing to give it up.

“Randomness is a condition when you don’t know the outcome of an experiment.”

I disagree with this statement: not knowing is not the same thing as it being impossible to know. True randomness is when effects occur that could never be traced to any cause, even if one were omniscient and could trace all causal paths. In other words, when an outcome cannot in principle be predicted.

You’re right that there is no way to know what science will discover in the future. We could learn that all things are deterministic, all things are causal, and everything there is is one big clockwork. However, that is not my speculation.

My speculation is that there are multiple avenues of science, mathematics, and philosophy that already point towards the phenomenon of emergent behavior as being a real, legitimate aspect of our cosmos. In other words, that there truly are effects that cannot be traced back to any cause, in principle. In other words, true randomness.

I don’t see why that is so hard to accept. Frankly, it seems to me to be a simpler solution to certain problems than the tortured idea that every single effect has a cause, tracing all the way back to the big bang (and then what caused that?). Occam’s Razor.

“…In other words, that there truly are effects that cannot be traced back to any cause, in principle. In other words, true randomness…”

When you prove that something is “impossible to know/predict”, you rely on some assumptions. It may be discovered in the future (as has already happened) that your assumptions are limited to some specific conditions and do not hold outside them. For example, Newtonian mechanics is limited to speeds well below the speed of light.

An effect, which cannot be traced now, perhaps will be traced in the future?

Can you prove that scientists will never be able to trace what happened before the big bang? Based on modern physics, nothing can escape black holes, so we will never trace what happens there. But maybe future physics will give us some clue? Maybe some currently unknown particles/fields/dimensions will allow us to get information about what is inside a black hole?

Yes, some people do accept that there are things that cannot be traced; other people try to trace those things… It depends on the person’s character.

Sorry, I am writing a lot in this thread, but I am really interested in the topic.

I have a question to those who wrote here:

Why should we distinguish between ‘true’ and ‘not true’ randomness and can they always be distinguished?

(1). We cannot predict when a radioactive atom will decay (true randomness).

(2). But we also cannot predict tomorrow’s British pound rate against the USD (though physicists would not call this true randomness).

Can you prove that (2) is really not true randomness? What if someone on Earth is now observing some atom, and once that atom decays, she runs to exchange her 10,000,000 USD for GBP?

So it means that the USD/GBP rate can possibly depend on an atom’s decay, and so (2) is also “truly random”.

In this way virtually any event can be truly random.

I think what it means is that her actions were triggered by a random event. After the random event has happened we can presumably predict that she will run to the exchange. The randomness is still within the quantum realm.

The atom randomly decays, then she predictably runs.

“I think what it means is that her actions were triggered by a random event…”

Which means the dollar-pound rate is truly random.

…and cannot be predicted in principle.

“Yes, some people do accept that there are things that cannot be traced; other people try to trace those things… It depends on the person’s character.”

And it should be no other way. The scientific method depends on people challenging theory and measuring empirically.

Likewise, there are mathematicians and physicists and philosophers who are looking for rigorous ways to demonstrate that randomness is a real phenomenon, both in nature and on paper. John Bell and Kurt Gödel are good examples of people who investigated things in this direction.

One thing we can agree on: it is a fascinating topic, and I think one that is pretty fundamental to our world.

“If this is the case, then how can “randomness” be proven?”

It’s difficult, that’s for certain. It’s almost akin to proving a negative. One direction science has taken is to put bounds on how deterministic a system can be. John Bell did this for QM with his inequalities, which showed that either QM must be indeterministic, or locality cannot hold; in other words, there would have to be a universal frame of reference of some kind that guides all QM outcomes, an idea that many physicists dislike deeply.

And the Bell Inequalities have survived many, many experimental verifications, which show that as best we can measure, they are indeed correct.

So it’s not impossible to conduct experiments or perform rigorous math that support the idea of randomness. Just very tricky. It is perhaps unlikely we will ever be able to exclude all alternatives, of course. 🙂
