Q: Why is a negative times a negative positive?

Physicist: The multiplication rules for signs are

\left\{\begin{array}{l}+\cdot+=+\\+\cdot-=-\\-\cdot-=+\end{array}\right.

You’d be hard pressed to find someone who disagrees with the first rule, and if you press hard you’ll find that almost everyone has been confused (at least once) by the third.  In a nutshell, if you try to define the multiplication rules any other way, arithmetic stops working in a big hurry.  Or at least, you have to scrap a lot of other math that’s incredibly useful.

When you multiply a number by a positive integer Y, you’re adding up Y copies of that number.  So it makes sense that a positive times a positive is positive.  For example, 3\times 2 = 2+2+2 = 6: three copies of 2.  By looking at 3 (or any other number you like) multiplied by smaller and smaller integers you see a pattern:

\begin{array}{rcl} &\vdots\\ 3\cdot2&=&6 \\ 3\cdot1&=&3\\ 3\cdot0&=&0\\ 3\cdot(-1)&=&?\\ 3\cdot(-2)&=&?\\ &\vdots \end{array}

Every time you multiply by a number one smaller, you take away another 3.  Following the pattern, we take 3 away from zero and make a decent guess at how that pattern should continue:

\begin{array}{rcl} &\vdots\\ 3\cdot2&=&6 \\ 3\cdot1&=&3\\ 3\cdot0&=&0\\ 3\cdot(-1)&=&-3\\ 3\cdot(-2)&=&-6\\ &\vdots \end{array}
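
Here’s that pattern as a two-line sketch (Python, just to make the bookkeeping explicit): every step down in the multiplier takes away another 3, on both sides of zero.

```python
# Each time the multiplier drops by one, the product drops by another 3,
# whether or not the pattern has crossed zero yet.
for n in range(3, -4, -1):
    print(f"3 * {n:2d} = {3 * n:3d}")
```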

Do this for a couple of different numbers and you can construct a multiplication table.  Like this one!

The times table for numbers between -3 and 3. Notice that the “2 row” always increases by 2 every step to the right (…, -2, 0, 2, 4, …).  Notice that the “-3 row” always increases by -3 (which is to say, decreases by 3) every step to the right.  While you’re noticing things, notice that the pattern is always the same for each row and column, even when the sign changes.

If the “negative times a negative” quadrant on the lower left were all negative instead of positive (e.g., “(-3)\cdot(-3)=-9”), then the rows and columns that pass through it would suddenly have to switch patterns (e.g., from “increasing by 3’s” to “decreasing by 3’s”) when they pass zero.  In some sense, the rules for signs are set up so that multiplication tables like this follow a nice, simple pattern.

So, “+\cdot+=+, +\cdot-=-, and -\cdot-=+” is a clean, reasonable way to define multiplication.  But does it work with the rules of arithmetic?

In particular, the distributive property, which says that A\cdot(B+C) = A\cdot B+A\cdot C, is one of the backbone rules upon which all of arithmetic is built.  In fact, this property is literally the thing that defines the relationship between addition and multiplication!  For example,

\begin{array}{rl}&2+2+2\\=&2\cdot1+2\cdot1+2\cdot1\\=&2\cdot(1+1+1)\\=&2\cdot3\end{array}

because “3” is defined as “3=1+1+1”.  Losing the distributive property basically means you need to go home and start designing a new (and worse) kind of math from scratch.

For positive numbers there’s no issue, because (practically) everyone is fine with the “+\cdot+=+ rule”.  For example, 5\cdot(2+3)=5\cdot5=25 and 5\cdot(2+3)=5\cdot2+5\cdot3=10+15=25.

But if you insist on using the rule “+\cdot-=+“, then you’ll find the distributive property doesn’t work.  For example, 5\cdot(2-3)=5\cdot(-1)=5 and 5\cdot(2-3)=5\cdot2+5\cdot(-3)=10+15=25.  The discerning eye will note that 5≠25, so 5\cdot(2-3)\ne5\cdot2+5\cdot(-3).  In other words, we need to use “+\cdot-=-” in order for arithmetic to work.

And if you begrudgingly allow the “+\cdot-=-” rule, but refuse to accept the “-\cdot-=+” rule, then consider this: (-5)\cdot(2-3)=(-5)\cdot(-1)=-5 and (-5)\cdot(2-3)=(-5)\cdot2+(-5)\cdot(-3)=-10-15=-25.  Since -5\ne-25, the distributive property breaks yet again; the only repair is “-\cdot-=+”.
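
If you’d rather not check triples by hand, here’s a minimal brute-force sketch (the signed_mul helper and its rule knobs are made up purely for illustration) that tries every small integer triple against the distributive property under whichever sign rules you pick:

```python
def signed_mul(a, b, pos_neg=-1, neg_neg=+1):
    """Multiply magnitudes, then apply whatever sign rules you've chosen."""
    if a < 0 and b < 0:
        sign = neg_neg              # the "-*-" rule under dispute
    elif (a < 0) != (b < 0):
        sign = pos_neg              # the mixed-sign "+*-" rule
    else:
        sign = +1                   # everyone agrees on "+*+"
    return sign * abs(a) * abs(b)

def distributive_holds(pos_neg, neg_neg):
    """Check A*(B+C) == A*B + A*C for every integer triple in a small range."""
    nums = range(-5, 6)
    return all(signed_mul(a, b + c, pos_neg, neg_neg)
               == signed_mul(a, b, pos_neg, neg_neg) + signed_mul(a, c, pos_neg, neg_neg)
               for a in nums for b in nums for c in nums)

print(distributive_holds(pos_neg=-1, neg_neg=+1))   # True:  the standard rules
print(distributive_holds(pos_neg=+1, neg_neg=+1))   # False: insisting "+*-=+"
print(distributive_holds(pos_neg=-1, neg_neg=-1))   # False: refusing "-*-=+"
```

Only the standard choice of signs leaves the distributive property intact.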

On a case-by-case basis, it’s not obvious that a negative times a negative should be positive.  But when you look at lots of examples and the number system overall, you find that the “-\cdot-=+” rule is kinda hard to avoid.  Using a different rule means asking a lot of hard questions, like: What is negativeness?  Which rules of arithmetic are worth keeping?  What is the sound of negative two hands unclapping?

Using the wrong rules is a good, practical, and genuinely useful training in what not to do.  It’s well worth your time to shake off the shackles of mundanity and conformity, so that you can forge into a world of new discoveries.  Specifically, you’ll discover why arithmetic’s shackles are usually left unshaken.


Q: In relativity, length contracts at high speeds. But what’s contracting? Is it distance or space or is there even a difference?

The original question was: I can’t find a consistent answer to this question; please help.  A spaceship leaves Earth and heads to a star 4 light years away at 80% of light speed.  An observer on Earth knows that the spaceship’s clock will run slower than his clock by 40% for the entirety of the journey (according to the Lorentz formula).

According to the Earth-based observer, the spaceship will arrive at the star in 5 years.  However, because of time dilation, the spaceship’s clock will only read 3 years of elapsed time on arrival.  To an astronaut on the spaceship, the distance to the star appears to be just 2.4 light years because it took him just 3 years to get there while traveling at 80% light speed.

This situation is sometimes explained as a consequence of length contraction.  But what is it that’s contracting?  Some authors put it down to space itself contracting, or just distance contracting (which it seems to me amounts to the same thing) and others say that’s nonsense because you could posit two spaceships heading in the same direction momentarily side by side and traveling at different speeds, so how can there be two different distances?

So what is the correct way to understand the situation from the astronaut’s perspective?


Physicist: Space and time don’t react to how you move around.  They don’t contract or slow down just because you move fast relative to someone somewhere.  What changes is how you perceive space and time.

There’s no true “forward” direction and, terrifyingly, there’s no true “future” direction or even “space but not time” direction.  All of these directions and the lengths of things in those directions are subjective and even, dare I say, relative.

When you measure the length of something in space (in other words, “normally”), the total length isn’t just the length in the x or y directions, it’s a particular combination of both that works out exactly the way you’d think it should.  When you measure the length of something in spacetime, the total length isn’t just the length in the space or time directions, it’s a particular combination of both that works out in more or less the opposite of how you’d think it should.

We don’t talk about the three dimensions of space individually, because they’re not really distinct.  The forward, right, and up directions are a good way to describe the three different dimensions of space, but of course they vary from perspective to perspective.  Just call someone from the opposite side of the planet and ask them “What’s up?” and you’ll find yourself instantly embroiled in irreconcilable conflict.  Everyone can agree that it’s easy to pick three mutually perpendicular directions in our three-dimensional universe (try it), but there’s no sense in trying to specify which specific three are the “true” directions.

If you insist on measuring things in only one direction, then different perspectives will result in different lengths.  To find the total length, d, requires doing a couple measurements, x and y (and z too, in 3 dimensions), and applying some Pythagoras, d^2=x^2+y^2.

A meter stick is a meter long (hence the name), so if you place it flat on a table and measure its horizontal length (with a… tape measure or something), you’ll find that its horizontal length is 100cm and its vertical length is zero.  Given that, you could reasonably divine that it must be 100cm long.  But if you tilt it up (or equivalently, tilt your head a bit), then the horizontal and vertical lengths change.  There’s nothing profound happening.  To handle a universe cruel enough to allow such differing perspectives we use the “Euclidean metric”, d^2=x^2+y^2+z^2, to find the total length of things given their lengths in each of the various directions.  The length in any given direction (x, y, z) can change, but the total length (d) stays the same.
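
You can watch the Euclidean metric do its job with a few lines of arithmetic: tilt the 100 cm stick and the horizontal and vertical pieces trade off, but x^2+y^2 never budges.

```python
import math

# A 100 cm stick tilted to various angles: the horizontal and vertical
# "lengths" depend on your perspective, but the Euclidean total never does.
length = 100.0  # cm
for angle_deg in [0, 15, 30, 60, 90]:
    theta = math.radians(angle_deg)
    x = length * math.cos(theta)      # horizontal extent
    y = length * math.sin(theta)      # vertical extent
    d = math.sqrt(x**2 + y**2)        # d^2 = x^2 + y^2
    print(f"{angle_deg:2d} deg:  x = {x:6.2f} cm,  y = {y:6.2f} cm,  total = {d:.2f} cm")
```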

Einstein’s big contribution (or one of them at least) was “combining” time and space under the umbrella of “spacetime”, so named because Germans love sticking words together in a traditional process called (roughly translated) stickingwordstogethertomakeonereallylongdifficulttoreadandoftunpronounceableword.

The different spatial dimensions are equivalent.  To see for yourself, walk north and south, then walk east and west.  Unless you’re carrying a compass, you shouldn’t notice any difference.  But clearly time is different.  To see for yourself, first walk north and south, then walk to tomorrow and back to yesterday.  So when someone cleverly volunteers “we live in a 4 dimensional universe!”, they’re being a little imprecise.  Physicists, who love precision slightly more than being understood, prefer to say “we live in a 3+1 dimensional universe!” to make clear that there are three space dimensions and one time dimension.

But while time and space are different, they’re not completely separate.  In very much the same way that the forward direction varies between perspectives, the “future direction” also varies.  And in the same way that rotating perspectives exchanges directions, moving at different velocities exchanges the time direction and direction of movement.  The total “distance” between points in spacetime is called the “interval”, L.  For folk familiar with the Euclidean metric, the “Minkowski metric” should look eerily familiar: L^2=x^2+y^2+z^2-(ct)^2=d^2-(ct)^2.  Some folk will flip the sign on this, L^2=-x^2-y^2-z^2+(ct)^2=-d^2+(ct)^2, because it makes it a little easier to talk about the time experienced on a particular path (in fact, I’m gonna do that in a minute), but the important thing is not the sign of this equation, it’s that it’s constant between different perspectives.  It should bother you that L^2 can be negative, but… don’t worry about it.  It’s fine.
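
Here’s a quick numerical sanity check (a sketch in units where c = 1) using the standard Lorentz boost formulas, x^\prime=\gamma(x-vt) and t^\prime=\gamma(t-vx): different speeds relabel an event’s position and time, but x^2-t^2 comes out the same every time.

```python
import math

def boost(x, t, v):
    """Relabel an event (x, t) for an observer moving at speed v (units with c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v**2)
    return gamma * (x - v * t), gamma * (t - v * x)

x, t = 1.0, 2.0   # an arbitrary event: 1 light-year away, 2 years from now
for v in [0.0, 0.5, 0.8]:
    xb, tb = boost(x, t, v)
    # The coordinates shuffle around, but the interval squared stays -3.
    print(f"v = {v}:  x = {xb:+.3f},  t = {tb:+.3f},  x^2 - t^2 = {xb**2 - tb**2:+.3f}")
```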

If you’re wondering, the spacetime interval is a direct consequence of rule #1 in relativity: the speed of light is the same to everyone.  The short way to see this is to notice that if you find the interval between the start and end points of a light beam’s journey, the interval is always zero because \frac{d}{t}=c\Rightarrow d=ct\Rightarrow d^2=(ct)^2\Rightarrow d^2-(ct)^2=0.  The long way to see why the interval is what it is, is a little long.

There are two things to notice about the spacetime interval.  First, that “c” is the speed of light and it basically provides a unit conversion between meters and seconds (or furlongs and fortnights, or whatever units you prefer for distance and time).  So 1 second has an interval of about 300,000 km (one “light-second“), which is most of the distance between here and the Moon.  It turns out that the speed at which light travels comes from the “c” in this equation.  So the speed of light is dictated by the nature of space and time (as described by the Minkowski metric), not the other way around.  Which is good to know.

Second and more important is that negative.  That really screws things up.  It is arguably responsible for damn-near all of the weird, unintuitive stuff that falls out of special relativity: time dilations, length contractions, twin paradoxes, Einstein’s haircut and marriages, everything.  In particular (and this is why the exchange between distance and duration is so unintuitive), if d^2-(ct)^2 is constant, then when d increases, so does t.

This is in stark contrast to regular distance, where if x^2+y^2 is constant, an increase in x means a decrease in y.  Picture that in your head and it makes sense.  Picture relativity in your head and it doesn’t.

Left: The points a distance of 1 away from the origin form a circle.  The two blue lines are the same length.  Right: The points a spacetime interval of 1 away from the origin form a hyperbola.  The two red lines are the same “length”.  Here time is the vertical axis and one of the space directions is the horizontal axis.  So if you sit still you’d trace out a path like the first red line and if you were moving to the right you’d trace out a path like the second red line.

Now brace yourself, because here comes the point.  The original question was about a journey that, from the perspective of Earth, was d = 4 light-years long, at a speed of v = 0.8c, and taking t = 5 years.  The beauty of using “light units” (light-years, light-seconds, etc.) is that the spacetime interval is really easy to work with.  The interval between the launch and landing of the spaceship is:

L^2=-d^2+(ct)^2=-(4)^2+(5)^2=-16+25=9

So the interval is L = 3 light-years.

Left: Earth and an alien world sit still (travel through time but not space) 4 light-years apart while a spaceship traveling to the right at v=0.8c takes 5 years to travel between them.  Right: A spaceship sits still (travels through time but not space) for 3 years while Earth and an alien world travel to the left at v=0.8c.

Like regular distance, the power of the spacetime interval is that it is the same from all perspectives.  From the perspective of the spaceship the launch and landing happen in the same place.  It’s like a narcissist on a train: they get on and get off in the same place, while the world moves around them.  So d = 0 and it’s just a question of how much time passes:

3^2=-0^2+(ct)^2\Rightarrow 3^2=(ct)^2\Rightarrow 3=ct

So, t = 3 years because 3 light-years divided by the speed of light is 3 years.

So just like changing your perspective by tilting your head changes the horizontal and vertical lengths of stuff (while leaving the total length the same), changing your perspective by moving at a different speed changes length-in-the-direction-of-motion and duration (while keeping the spacetime interval the same).

That’s time dilation (5 years for Earth, but 3 years for the spaceship).  Length contraction is a little more subtle.  Normally when you measure something you get out your meter stick (or yard stick, depending on where you live), put it next to the thing in question and boom: measured.  But how do you measure the length of stuff when you’re moving past it?  With a stopwatch.

How can you tell that mile markers are a mile apart?  Because when you’re driving at 60 mph you see one a minute.

So, like the original question pointed out, if it takes you 3 years to get to your destination, which is approaching you at 0.8c, then it must be 3×0.8 = 2.4 light-years away.  Notice that in the diagram with the planets above, on the left they’re 4 light-years apart and on the right they’re 2.4ish light-years apart (measured horizontally in the space direction).
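
All of the numbers in this example hang off the one invariant interval.  Here’s the whole trip in a few lines (units where c = 1, so distances in light-years and times in years):

```python
d_earth, t_earth, v = 4.0, 5.0, 0.8     # the trip, from Earth's perspective

L_squared = -d_earth**2 + t_earth**2    # interval^2 between launch and landing
print(L_squared)                         # 9, so the interval L is 3 light-years

# From the ship, launch and landing happen in the same place (d = 0),
# so the whole interval is travel time: the ship's clock reads 3 years.
t_ship = L_squared**0.5
print(t_ship)                            # 3.0

# And the "stopwatch" distance the ship measures: time multiplied by speed.
print(t_ship * v)                        # 2.4 light-years
```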

It feels like length contraction should be more complicated than this, but it’s really not.  You can get yourself tied in knots thinking about this too hard.  After all, when you talk about “the distance to that whatever-it-is” you’re talking about a straight line in spacetime between “here right now” and “over there right now”, but “now” is a little slippery when the “future direction” is relative.  Luckily, “time multiplied by speed is distance” works fine.

There are a few ways to look at the situation, but they all boil down to the same big idea: perspectives moving relative to each other see all kinds of things differently.  Reality itself (space and time and the stuff in it) doesn’t change, but how we view it and interact with it doesn’t quite follow the rules we imagine.


Q: Can you beat the uncertainty principle using entanglement, by measuring position on one particle and momentum on the other?

Physicist: The Uncertainty Principle is a little subtle.  Most folk are introduced to it as “the more precisely you measure the position of a particle, the less precisely you can measure its momentum, and vice versa”.  That makes it sound like an engineering problem; something we can get around by, for example, trying harder.

Normally, if there’s a limit to how well you can measure something, you just need better equipment.  But “there’s a limit to how well we can measure things” is not what the Uncertainty Principle is about.

Instead, the Uncertainty Principle is a statement about what things are actually like in reality, and the weird limitations on how well we can do measurements are just a symptom.  In a very fundamental, profound, and physical sense, everything is a little uncertain.

To be clear, you can measure the position and momentum of a single particle (or many particles or whatever quantum system you prefer) very precisely, and there’s nothing stopping you from getting very precise results from both measurements.  The problem is that those precise measurements don’t really tell you much about the actual quantum state you’re looking at.  You can prepare a series of identical quantum states and measure each of them in exactly the same way, but because quantum states are generally a combination of many different states together (what folk in the quantum biz call a “superposition of states”) you generally don’t get the same result over and over.

This is a map of where an electron in a particular quantum state (in the 4,2,0 orbital) is most likely to be found when measured.  But the electron’s state is this whole thing.  If you do a measurement and find the electron at the “x”, you’re still kinda missing the big picture.

An electron in an atomic orbital is in a state with one amount of energy, but many positions.  The electron is kinda “smeared out” around the atom.  So if you measure the energy you get a definite result, but if you measure the position you could get any result within a range of positions (a small range, what with atoms being small).  You can picture this as being like a musical chord and either asking “what chord was played?” or “what note was played?”.  For example, the C chord is composed of the C, E, and G notes.  If you do a chord measurement (this is not an actual thing, but bear with me), you get a definite answer: C.  If you do a note measurement (again, more quantum metaphor than mechanics), then you get one of three results: C, E, or G.
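
To push the metaphor one step further, here’s a toy simulation (the equal amplitudes are an assumption made purely for illustration; this is metaphor, not real quantum mechanics): “chord measurements” on identically prepared states always agree, while “note measurements” scatter across the notes in the chord.

```python
import numpy as np

notes = np.array(["C", "E", "G"])
amplitudes = np.ones(3) / np.sqrt(3)     # an equal superposition (normalized)
probabilities = np.abs(amplitudes)**2    # Born rule: amplitude squared

# A "chord measurement" on identically prepared states is boringly repeatable:
print(["C chord"] * 5)

# A "note measurement" on identically prepared states comes back scattered:
rng = np.random.default_rng(seed=1)
print(list(rng.choice(notes, size=10, p=probabilities)))
```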

Each electron orbital is a very specific energy state (hence the discrete spectral lines produced when electrons jump between them), but at the same time each is a superposition of many positions, so the electrons don’t exist in any particular place.

The randomness of the Uncertainty Principle has the same root cause: a single quantum state being composed of many different states at the same time.  Like the chord example, a position state is made up of a range of many momentum states.  Unlike the chord example, the reverse is also true; a momentum state is composed of many position states.  Unintuitively, the fewer position states something is in (the more specific the position) the more momentum states it is also in.  Unfortunately, because of this fact (that position is describable in terms of momentum and vice versa) you can directly derive the uncertainty principle mathematically.  In other words, assuming that every experiment ever designed to refute the basic physics didn’t all fail accidentally, the Uncertainty Principle is built into the universe and no cleverness or engineering will overcome it.
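
That “fewer position states means more momentum states” trade-off is easy to see numerically.  Below is a sketch (numpy, natural units with \hbar=1): build a Gaussian wave packet, Fourier transform it to read off the momentum states it’s made of, and watch \Delta x\Delta p refuse to drop below \hbar/2 (exactly the bound discussed below) no matter how you squeeze.

```python
import numpy as np

hbar = 1.0                              # natural units; only the product matters
N, box = 2**12, 200.0
x = np.linspace(-box / 2, box / 2, N, endpoint=False)
dx = x[1] - x[0]

def spreads(sigma):
    """Position and momentum spreads for a Gaussian packet of width sigma."""
    psi = np.exp(-x**2 / (4 * sigma**2))
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)             # normalize

    # The same state, written as a combination of momentum states:
    p = hbar * 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
    dp = p[1] - p[0]
    phi = np.fft.fftshift(np.fft.fft(psi))
    phi /= np.sqrt(np.sum(np.abs(phi)**2) * dp)             # normalize

    dx_spread = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)
    dp_spread = np.sqrt(np.sum(p**2 * np.abs(phi)**2) * dp)
    return dx_spread, dp_spread

for sigma in [0.5, 1.0, 2.0]:
    sx, sp = spreads(sigma)
    print(f"sigma = {sigma}:  dx = {sx:.3f},  dp = {sp:.3f},  dx*dp = {sx * sp:.4f}")
# Every line reports dx*dp = 0.5000: squeezing one spread inflates the other.
```

Gaussian packets happen to sit exactly at the lower bound; other states do worse, never better.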

When you prepare many particles (or any other quantum system) in identical quantum states, measure them one after another, and write down the results, you’ll find that there’s some spread in where they show up as well as their momenta.  You can prepare states with very little spread in their position or with very little spread in their momenta (so-called “squeezed states”), so the Uncertainty Principle isn’t as simple as “everything is random and unpredictable”; it’s about pairs of measurements applied to “conjugate variables” (position and momentum are the classic example).

The Uncertainty Principle says that if you look at the spread (standard deviation) of those two measurements for many copies of any given state and multiply those spreads together, their product is always greater than some minimum amount.  Explicitly, if \Delta x and \Delta p are how spread out the position and momentum are, then \Delta x \Delta p\ge \frac{h}{4\pi}=5.273\times 10^{-35}Js.  This, it’s worth noting, is a really tiny lower bound.  If you’re certain that a brick is in a box, then (were you so compelled) you could nail down its velocity to within around 0.000000000000000000000000000000002 meters per second, which is arguably fairly certain.
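
For a feel of just how tiny that bound is, here’s the brick arithmetic spelled out (the 30 cm box and 2 kg brick are assumptions, so the exact answer shifts with your choices; it’s absurdly small regardless):

```python
import math

h = 6.626e-34                       # Planck's constant, in J*s
dx = 0.3                            # brick pinned down to within ~30 cm (assumed)
m = 2.0                             # a roughly 2 kg brick (assumed)

dp_min = h / (4 * math.pi * dx)     # smallest momentum spread the principle allows
dv_min = dp_min / m                 # ...expressed as a velocity spread
print(f"{dv_min:.1e} m/s")          # ~ 8.8e-35 m/s: "arguably fairly certain"
```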

Entanglement doesn’t do much to change the picture.  With entangled pairs of particles, a measurement on one mirrors a measurement on the other.  You can entangle any property (energy, polarization, delectability, hell even existence), including position and momentum.  In a dangerously succinct nutshell, entanglement basically/sorta gives you two chances to make a measurement on a quantum state.  Assume particles A and B are position-entangled.  If you measure the position of A you’ll be able to say “ah, it’s over here” and if you measure the position of B you’ll be able to say “it sure is”.  The two measurements, although otherwise random, agree.

But what you’d really like to see is a precise position measurement on A and a precise momentum measurement on B.  It turns out: that’s fine.  Once again, that spread of results shows up.  If A shows up in some region, there is a corresponding set of momentum states that are compatible with that (if a certain “chord is played” there is a particular “set of notes” involved) and when you measure B, you’ll see one of those.

So entanglement does give you another chance to measure a quantum state with as much precision as you might desire, but… it doesn’t really change anything.  The Uncertainty Principle doesn’t say “you can’t simultaneously measure position and momentum with nigh perfect precision!”, it says “it doesn’t matter if you do!”.



Q: Why do clouds hold their form?

Physicist: The short answer is: they don’t!

Clouds are just a bunch of moisture in the air (contain your shock).  What’s a little surprising is that the transparent, non-cloudy air around them typically has almost the same amount of moisture.  So when you see clouds in the sky, you’re not seeing a few wet blobs surrounded by dry air, you’re seeing a lot of humid air, some fraction of which has tipped past the dew point and begun to condense into tiny, visible droplets.

A patch of air can only hold so much water vapor.  Hotter or denser air can hold more, and colder or thinner air can hold less.  The “dew point” is the temperature at which the air can no longer hold all of its water vapor, which instead begins to condense out and become visible.
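
If you want to put a number on it, a common back-of-the-envelope estimate is the Magnus approximation (the particular constants below are one standard choice, roughly good to a fraction of a degree over ordinary weather temperatures):

```python
import math

def dew_point_celsius(T, RH):
    """Approximate dew point (Celsius) from temperature T (Celsius)
    and relative humidity RH (percent), via the Magnus formula."""
    a, b = 17.27, 237.7                       # one common set of Magnus constants
    gamma = (a * T) / (b + T) + math.log(RH / 100.0)
    return (b * gamma) / (a - gamma)

# Humid summer air at 30 C and 70% humidity starts condensing near 24 C:
print(round(dew_point_celsius(30.0, 70.0), 1))
```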

Evaporated water condenses when the humidity of the air it’s in increases too much, or when the temperature drops too much.  That’s why you can see your breath when it’s cold outside.  The air in your lungs is warm (because you’re a mammal) and it has plenty of opportunity to pick up moisture (because lungs are soggy and gross).  Mixing with cold air means that the temperature of your exhalation crashes and drops through the dew point, making it visible.  Mixing with lots of nearby drier air then dilutes your breath, dropping the humidity in any particular parcel of air and raising the breath/outside-air mixture back above the dew point.

Your breath is warm, so in your (humid) lungs it’s above the dew point. When it cools it drops below the dew point and becomes visible.  Then as it mixes with drier air, it pops back above the dew point and disappears.

Clouds are governed by the same physics.  If you spend some time staring at clouds (and why wouldn’t you?), you’ll find that they continuously grow and shrink, appearing and disappearing.  A given cloud maintains its rough shape and position because it takes time for conditions in air to change; if the air in some region is on one side of the dew point, then it’ll probably stay there for the next few minutes.  But if you speed up time, you find that clouds don’t hold their shape any better than steam coming out of a kettle.

The water vapor in air condenses and evaporates all the time, depending on the temperature, pressure, and humidity at every given location.  While those parameters change smoothly, the dew point is fairly sharp, so the air flips between clear and cloudy pretty abruptly (which is why clouds don’t fade out at the edges).

The big difference between clouds-in-the-sky and breath-on-a-cold-day is what causes the air to pass through the dew point.  The dew point depends on both the humidity and temperature.  For breath, the local humidity fluctuates a lot (as anyone with close-talking friends can tell you).  For clouds, the humidity stays relatively constant, but as the air changes temperature (mostly through the expansion or contraction from changing altitude) different regions pass through the dew point and become clouds.  If you’re in the middle of the ocean or Kansas (or any other flat, featureless landscape), there’s no particular reason for any given location in the sky to have a cloud.  It’s just the luck of the draw (humidly speaking).

But when the conditions change predictably, the clouds appear and disappear predictably and you get “lenticular clouds”.  Here are some beautiful examples.  Lenticular clouds are a great way to see that clouds aren’t “objects” that move through the sky, they’re regions that, for whatever reason, are below the dew point.

Normally air moves across the land or sea in a “laminar”, smooth way, with nary a tumble nor turbulence.  Across grasslands clouds tend to vary slowly and randomly, but when there’s an obstruction, like mountains, suddenly the air is forced to flow in a particular pattern.  If that pattern involves abrupt changes of altitude, then the air experiences abrupt changes in pressure and temperature, which leads to abrupt changes in cloudiness.  The same amount of moisture is present in the air at the foot of the mountain as at the top (give or take), but the top of the mountain is where you’ll see clouds.  In fact, if the conditions are just right, this is a clever/cheap way to get some insight into what the (normally) invisible air currents are doing.

The same air that flows along the foothills flows over the peaks, and yet it only forms clouds over the peaks.

So clouds roughly hold their shape (for a little while) because it takes time for the humidity or temperature to change, or for the cloud to be twisted up by local air currents. But since they’re not “blobs-of-water” so much as “patches-of-air-with-slightly-different-conditions” they change continuously and are free to pop in and out as they cross the dew point.

As for how they manage to be pretty and invariably end up looking like something familiar (cotton, marshmallow men, other clouds, negative Rorschach blots, etc.), that’s more psychological than climatological.




An origin story

Physicist: Back in May I advertised a Story Collider thing in LA where I talked about doing the Ask a Mathematician / Ask a Physicist booth at Burning Man and some of the folk we met there.  Story Collider collects these recordings and when they’ve got a few that fit together into a theme, they produce a podcast episode.

And that’s exactly what’s happened!  You can hear the episode here.

Before you ask, the shirt says “∃x : I♥x” which is mathspeak for “there exists x such that I love x”; a statement which is demonstrably true.


Q: Can free will exist in our deterministic universe?

Physicist: It depends on what you mean by “free will” and where you draw the line on “determinism”.  Both of these are a matter of opinion, so philosophers (and the rest of us) will have plenty to argue about for the foreseeable future.

If our choices are expressions of the activity of nerve cells, which are soggy bags of molecules clacking together (to paraphrase Gray’s Anatomy), which are governed entirely by fundamental universal laws, then everything we do is dictated by physical mechanics.  Even the feeling that we have free will would just be a bunch of atoms all following the same set of simple rules every time they meet another atom, repeated super ad nauseam.

A tiny bird that sings, and greets the dawn, and knows not why.

So if physical laws are all deterministic, then everything anyone does is just as “fated” as a rock rolling down a hill or a clock chiming.  Now, you may be lacking free will in some idealized sense, but if there’s no way to tell, then what are you really missing?  Whether your actions are predictable in theory is not quite as important as whether your actions are predictable in practice.  The tiny, individual interactions that make up everything we do are easy (well… kinda) to predict, but big systems are a lot more complicated than the sum of their parts.

Conway’s “Game of Life” (different from the one where the goal is to die rich) provides a beautiful example of this.  The Game of Life, which really should have been called “Staring at Pixels”, is a very short list of rules applied to pixels on a grid that describes which will be “alive” in the next “generation”.  Systems of tiny, simple things (like pixels with basic interaction rules) are called “cellular automata”.

Each pixel borders eight others. If a “dead” pixel borders three “living” pixels, it comes to life in the next generation.  If a living pixel borders two or three other living pixels, it stays alive.  Otherwise a pixel dies.

If you’re not familiar with the Game of Life, it’s well worth taking a moment to play around with it.  Despite baby-simple rules that only govern individual pixels and their immediate neighbors, the ultimate behavior of the Game can not, in general, be predicted from the initial conditions without actually running through the generations.  In fact, the Game of Life is capable of the same level of complexity as the computer that runs it (if allowed to run on a large enough grid); so a computer can simulate the Game of Life, but at the same time the Game of Life can simulate a computer.
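
The rules in the caption above fit comfortably in a few lines.  Here’s a minimal sketch (numpy, on a small wrap-around grid) running the classic “glider” pattern, which crawls diagonally across the grid one cell every four generations:

```python
import numpy as np

def life_step(grid):
    """One generation of the Game of Life on a wrap-around (toroidal) grid."""
    # Count each cell's eight neighbors by summing shifted copies of the grid.
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # Dead cells with exactly 3 neighbors come to life; living cells with
    # 2 or 3 neighbors survive (the "== 3" case covers both); all else dies.
    return (neighbors == 3) | (grid & (neighbors == 2))

grid = np.zeros((8, 8), dtype=bool)
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:   # a glider
    grid[r, c] = True

for _ in range(4):
    grid = life_step(grid)
print(grid.astype(int))   # the glider, shifted one cell down and to the right
```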

A computer, simulated in the Game of Life, simulated in a computer, replicated on a screen, and perceived by someone (perhaps) with free will.

The Game of Life is to free will, as a misplaced red wig full of bearer bonds is to clowns.  If you spend enough time with the former, you can’t help but ask probing questions about the latter.  Whether you’re talking about neurons or molecules or fundamental particles, people are also made up of lots of tiny “cellular automata”.  The behavior of every little piece follows a set of known rules, so in theory you should be able to determine the outcome of even large systems (like people or worlds or whateveryougot) so long as you know everything about the initial conditions.  But in practice: nope.

First, you’re not going to figure out the exact position of every particle in any system anywhere near as large as a person, and even if you could, the universe is right there to pile on more stuff to keep track of.  Look around.  See anything at all?  Then it’s happening already.

Second, like the Game of Life, it’s unlikely that there’s a “computational short cut” for physical systems.  That is to say, if you wanted to precisely predict what a person will do and think and whatever else, you’d have to simulate all of their bits and pieces, run the simulation forward, and see what happens.  But that’s exactly how you handle a being with free will: you just… see what they do.  Maybe this wild train we call life is on tracks, but if there’s no way to know where that train is going without actually riding it to the destination, what’s the difference?  That’s at least free-will-adjacent.

That all said, the universe is not entirely deterministic.  A little over a decade ago Conway and his buddy Kochen defined something to be “free” when its actions are not explicitly determined by the past, and then proved the “Free Will Theorem”, which is both profoundly bonkers and deeply frustrating:

If people have free will, then so do individual particles.

Kochen and Conway proved the free will theorem because, in a way, they had to.

The difference between scientists and philosophers is that scientists force each other to nail down exactly what they’re talking about, while philosophers tend to be a little more loosey goosey (also, scientists are big fans of empirical evidence).  So while you may disagree with how they defined the “free” in “free will”, that just means we need more words to work with.

In their paper, they spend buckets of time establishing a known physical principle, “fundamental randomness“, and just a little establishing what in the hell they’re talking about.  To wit:

“Why do we call this result the Free Will theorem?  It is usually tacitly assumed that experimenters have sufficient free will to choose the settings of their apparatus in a way that is not determined by past history.  We make this assumption explicit precisely because our theorem deduces from it the more surprising fact that the particles’ responses are also not determined by past history.” -Koch ‘n Con

Here’s the idea:

For most of history, scientists labored under a rather modest assumption; that all possible experiments have a result, regardless of whether you bother to do those experiments.  For example, if your experiment is “is this card the queen of spades?”, then there is an answer whether or not you actually look.  By looking you gain a little information, but the result is there whether you bother to do the experiment or not.  The card is whatever it is.

Even better, if you have a bunch of cards, it doesn’t matter which you choose to look at; all of them are what they are.  If you assume that things are in definite states just waiting around to be uncovered, then there are a variety of mathematical statements you can make that are always true.  Statements like “if there are three cards total, then there are a different number of red and black cards”.  No matter how clever you are with actual playing cards, this statement (and a hell of a lot besides) must be true.
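
That particular statement is small enough to check exhaustively.  A two-line brute force, just to make the “predetermined values obey constraints” logic concrete:

```python
from itertools import product

# Every way three cards could "already be" red (1) or black (0):
# the red and black counts are never equal (you can't split 3 evenly).
print(all(sum(cards) != 3 - sum(cards) for cards in product((0, 1), repeat=3)))  # True
```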

There are often multiple experiments/measurements you can do with a given system.  If the result of each measurement exists before you look, then there are some mathematical statements that can be made about the collective results.

But it turns out that a lot of quantum phenomena, entanglement in particular, are incompatible with some of the equations that come from the assumption that “each card (or particle in this case) already is what it is”.  In their paper, Koch ‘n Con provide an explicit example of an incompatibility based on particle spin and this old post goes into a lot more detail on another example, the math behind it, and what it means.  The point is: seriously, as weird as it sounds, for many quantum systems it is literally impossible for the results of all possible experiments to exist (and therefore to be predetermined) because those results would be logically inconsistent with each other.

Now, if you assume that everything that happens is completely predetermined, then this isn’t actually a problem.  The weirdness of quantum mechanics, the experimenters, and the experiments they choose to do, are all “on tracks”.  The person doing the experiment had no choice about which experiment to do, which means that the result of only that one experiment needs to exist.  The others won’t be done, so they can be safely swept under the existential carpet.

A pair of quantum physicists that ponder, and experiment, and know not why.

On the other hand, if the person doing the experiment has free will (in the sense that their actions are not determined by the past), then suddenly there’s an issue.  If they’re free to choose any experiment and the result of that experiment is predetermined, then all of the results have to be predetermined.  But that’s impossible.

Of course when we do these fancy quantum experiments we always get a result.  No big deal.  But if we have free will in the sense that our behavior is not entirely determined by the past, then the quantum systems we’re playing around with have free will in exactly the same sense.

The universe would need to be dead-set on conspiring against quantum theorists on a massive scale, and in a very specific way, in order to create the experimental results we’ve seen so far.  In order to back up the assumption that the particles involved, the experimenters, and the machines used to do the measurements aren’t all subject to some all-encompassing, atom-by-atom, perfectly executed, and needlessly nefarious systematic bias, the methods used to make the “choices” in the experiments have become a little over-the-top.  For example, by measuring entangled particles fast enough and far enough apart that light can’t travel between them, and by using cosmic microwave background radiation from opposite sides of the universe to randomly orient the measurement devices after the entangled particles are created but before they’re measured.  That way, either you really are randomly choosing which experiment you want to do, or the entire universe has been conspiring against you and this particular experiment since the beginning of time.  It’s important to be sure about things, but this level of caution can blur the line between due diligence and paranoia.

It’s an open question whether the quantum randomness of particles scales up to produce free will, or at least “not-predetermined-ness”, in living critters.  Forced to guess, I’d say… maybe/probably?  Brains can turn on a dime and unless they’re actively suppressing it, eventually that quantum randomness should “butterfly effect” its way into our actions.  At least sometimes.  But if you really want to ensure that your will is as free as an atom’s, you can always carry around a Geiger counter and base all of your decisions on what it reads.  You’d be the free-will-est person on your block!
