Quantum immortality is a philosophical thought experiment about what happens when you combine quantum many-state-ness with the anthropic principle and survivorship bias. It’s worth underscoring that this is a *thought experiment*, not advice. It’s an interesting idea, but I wouldn’t bet my life on it.

The anthropic principle says that whatever needed to happen in order for an observer to be observing, happened. For example, because you’re reading this, then (among other things) you must have access to the internet, speak English, live in an environment capable of supporting life, and were born rather than not born. For anyone/anything not reading this, none of those things may necessarily be true.

On its own, the anthropic principle is already pretty powerful. It is the governing principle behind why “you are here” signs are always accurate. It says that nobody ever regrets playing Russian Roulette, they only regret inviting their friends. And it explains why the Earth is capable of supporting introspective critters such as ourselves, despite all of the incredibly unlikely things that had to go right for that to happen. That last shouldn’t seem too surprising; if only one in a million planets can support life, where would you expect to find living things?

It’s the “quantum” that makes the quantum suicide thought experiment interesting. You’ll probably find a planet capable of supporting life in the universe, because there are a *lot* of opportunities (based on the other solar systems we can see) and there’s evidently a non-zero chance. What quantum theory does is change that “probably” to “definitely” if there’s ever a non-zero chance.

In classical physics (which is to say: when you just walk around and use your eyeballs), everything seems to be in a single state. Your coat is on one hook, not many. Your front door is open or closed, but not both. If you lose your keys, they’re *someplace* specific, even if you don’t happen to know where.

In quantum physics on the other hand, when we assume that things are in only one state our predictions fail. The most famous example of this is the double slit experiment, where coherent light is shined on a pair of slits (regular readers are no doubt sick of regularly reading about the vaunted double slit experiment). Instead of seeing two bars of light on the far side of the slits, we instead see “beats”; many bars of light corresponding to interference between every possible path through the two slits. The terrifying thing about the double slit experiment is that it continues to work even when the light intensity is turned down to just one photon at a time. If we assume that each photon goes through only one slit, then we expect to see a build up of photons in just two bars. The fact that we see many bars indicates that the photons go through both slits.
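This “add up the amplitudes for every path” bookkeeping is easy to sketch numerically. Below is a toy two-path model; the wavelength, slit separation, and screen distance are made-up illustrative values, and each slit is treated as a single point:

```python
import numpy as np

# Toy two-slit model: one complex amplitude per path, added together,
# then squared to get the detection probability at each screen position.
wavelength = 500e-9                 # 500 nm (green light)
d = 50e-6                           # slit separation
L = 1.0                             # distance to the screen
x = np.linspace(-0.05, 0.05, 1001)  # positions along the screen

# Path length from each slit to each point on the screen
r1 = np.sqrt(L**2 + (x - d/2)**2)
r2 = np.sqrt(L**2 + (x + d/2)**2)

# Quantum rule: the photon takes both paths, so add amplitudes first
both_slits = np.abs(np.exp(2j*np.pi*r1/wavelength)
                    + np.exp(2j*np.pi*r2/wavelength))**2

# "One slit only" assumption: add probabilities instead, no interference
one_slit = np.abs(np.exp(2j*np.pi*r1/wavelength))**2 \
         + np.abs(np.exp(2j*np.pi*r2/wavelength))**2

print(both_slits.min(), both_slits.max())  # swings between ~0 and ~4: many bars
print(one_slit.min(), one_slit.max())      # flat 2 everywhere: no bars
```

Adding amplitudes before squaring produces the many bars; adding probabilities (the “one slit only” assumption) washes them out completely.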

Other than being clearly weird, and simple enough for first-year physicists to do the math themselves, there is nothing special about the double slit experiment. The same “things can be in many states” idea applies across the board in quantum theory. It is the backbone of chemistry, particle interactions of all kinds, freaking *everything*.

And photons aren’t special either. You can do the double slit experiment with anything (as far as we know), it’s just that bigger things are harder to work with. The largest things to successfully demonstrate going through both slits are molecules of C_{284}H_{190}F_{320}N_{4}S_{12}. That’s a modest 810 interconnected atoms! Our inability to demonstrate the quantumness of macroscopic things seems to be an engineering barrier rather than some undiscovered physical law. Every indication so far is that there’s no division between the “quantum world” and the “classical world”. Instead (like every other physical law), quantum laws seem to apply universally.

Notice that the refrain is “anything that can happen does” and not “everything happens”. Assuming the laws do apply on all scales, one of their more frustrating predictions is that the probability of observing yourself somewhere else is zero and the probability of observing two or more of yourselves is likewise zero (literally: wherever you go, there you are). These situations are impossible. Despite being in many states, none of your states directly interact with the others. Other quantum versions of yourself are like the bottom half of a Muppet; something you feel like you should be able to see, but there’s a good reason you never do.

This is not without precedent in science. For example, Newton’s laws simultaneously predict that 1) the Earth must be spinning and hurtling through space (based on how the other planets and stars move in the sky) and 2) that you’d never notice (because you’re hurtling along with the Earth). Arguably, this “it’s very weird, but it’s also really hard to notice” aspect of quantum mechanics is why it was discovered *after* bronze and the wheel.

The double slit experiment is so clean and easy to work with because you only have to worry about two states: the path that light can take through either slit. In reality the slits have some non-zero physical size, so there are many different paths photons can take through each. Those paths are all so similar that assuming they’re the same is good enough for an undergrad lab class. But if you want to nail down the *exact* pattern you see projected on the screen, you have to account for every possible path precisely. This is true whenever quantum theory applies (e.g., chemistry); counting the most likely quantum states buys you a decently accurate prediction, but the more states you take into account, the more accurate your prediction.

Low-probability states don’t add much, but they demonstrably add more than zero, so they must be physically real. A mountain and a pebble affect the world differently, but they’re “equally real”. So, assuming QM laws apply universally, every possible outcome of an event is physically realized and the fact that you can only experience situations where you’re alive means that you’ll be funneled into those realities where you continue to live.

There’s nothing ultimately special about life or death (unless you’re alive, and then it’s suddenly super important), they’re just more interesting to consider than, say, a quantum number generator that accidentally gives you sequential numbers forever. Quantum suicide makes some tricks a lot easier since the same quantumy arguments apply very broadly. For example, if a situation can end in either stubbing your toe or not, both results will occur. In some parallel histories, when you kick a brick barefoot you’ll miss. Still. What do you honestly think will happen if you try?

The central tenet of quantum immortality (that anything that can happen does) applies to everything and every combination of things, it’s just that we’re good at worrying about and keeping track of ourselves. There’s a vanishingly small probability that a Beanie Baby will “survive” intact for trillions of years (when it will have nearly doubled in value), so in some tiny set of the many possible futures it will. The fact that it doesn’t have a point of view means that it won’t be bothered one way or the other.

If there’s X chance that you’ll be walking around in a thousand years, then there’s a chance of about X^{2} that you and some other particular person will both be walking around. In other words, in some tiny fraction of possible futures you get to persist, and in a substantially more tiny (but still not quite zero) fraction you and Duncan MacLeod are both alive. Until the gathering at least, after which there can be only one.

So this gives us a way of testing out the completely insane idea of quantum immortality. If a bunch of us are accidentally alive in a thousand years, let’s all meet up and compare notes. We don’t need to bother agreeing on a meeting place or time, since quantum immortals should be used to relying on happenstance.

Quantum immortality is, almost by definition, a subjective experience. Clearly it’s possible to observe other people passing away (condolences everyone), but if quantum immortality is real, you’ll find out on your own. This has given rise to some clever science fiction, but not a lot of useful science fact. Point is: planning to live forever has not traditionally been an effective way to spend not-forever. I find myself alive and the improbable result of an endless string of nearly impossible coincidences, while on a world that may be unique in the universe. On the other hand, that’s everybody’s story and, not for nothing, don’t risk your life over some silly idea.


There are good reasons why casinos make money and “how to gamble” books are longer than two sentences. According to the law of large numbers, if you’re likely to lose a little bit of money in a game, then playing a lot of times effectively guarantees that you’ll lose a lot of money. It doesn’t matter if you think you have “a system”, gambling is a big business and the business is: you lose. But if you do want to gamble there are three simple rules to keep in mind:

1) Don’t.

2) The second you’re ahead, walk away. You’ve already won and it’s all downhill from there.

3) The second you’ve lost money, walk away. It’s gone forever and if you stay it’s just going to get worse.

This doubling system (gamblers know it as the “martingale”) sounds like a clever way to beat any game: just place larger and larger bets, to cover all of your previous losses and a little more, because *eventually* you’ll win. Then rinse and repeat until Musk and Bezos come begging for a handout.

At first blush this seems like a reasonable trick. And technically it does work. The problem with it is nestled firmly inside that “eventually”. Like so many things in life, this plan works great if you already have an infinite amount of money. With infinite funds, you will win eventually. But without them, you’re screwed as much as always.

Take a simple “double or nothing” scenario like the pass line in craps, where you have a 49.3% probability of winning. Assuming you bet 1 “chip” (or whatever currency makes sense), then the first round you can either lose or gain 1 new chip. The amount of winnings you can expect (a mathematician would call this “the expected winnings”) is the probability of winning times how much you win, minus the probability you lose times how much you lose: $E[W] = (0.493)(1) - (0.507)(1) = -0.014$ chips. This is very intentionally negative, because on average you’re supposed to lose.

But if you do lose, then on the second round you can bet 2 chips. That way if you win, you’ll recoup your 1 chip loss and come out 1 chip ahead. If you lose again, you’re down 3, so by doubling your bet again (4 chips), you might recoup your net losses and come out 1 chip ahead. Keep doubling until you win and then you’re back where you started, plus one shiny new chip.

In this scenario you’re guaranteed to gain 1 chip if you keep playing. The question is: how long can you keep playing? What’s missing in the “doubling system” so far is the possibility of losing everything (which is a big red flag for any gambling system). You’re likely to gain a tiny amount (by winning any time before you run out of chips) and unlikely to lose everything (by losing until you run out of chips), but *on average* you can expect to walk out of the casino with less than you walked in with.
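If you’d rather not take the algebra on faith, the doubling system is easy to simulate. This is a sketch with an arbitrary 1023-chip bankroll and the pass line’s 49.3% win probability; the exact average depends on the random seed, but it reliably comes out negative:

```python
import random

def martingale(bankroll, p_win=0.493):
    """Double the bet after every loss until you win once or can't
    cover the next bet. Returns the change in bankroll."""
    chips, bet = bankroll, 1
    while bet <= chips:
        if random.random() < p_win:
            return 1                    # a win always nets exactly 1 chip
        chips -= bet                    # a loss: double up and try again
        bet *= 2
    return chips - bankroll             # busted before winning

random.seed(0)
outcomes = [martingale(1023) for _ in range(1_000_000)]
print(sum(outcomes) / len(outcomes))    # a small negative number, on average
```

The nearly-guaranteed single chip is slightly outweighed by the rare total wipeout.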

After n rounds you’ve bet $1 + 2 + 4 + \cdots + 2^{n-1} = 2^n - 1$ chips (you can add this up easily because it’s a geometric series). For example, after three rounds you’ve bet a total of $1 + 2 + 4 = 7$ chips. If you’ve got B chips in your pocket, then after $\log_2(B+1)$ rounds you’ll be broke.

For example, 1023 chips buys you $\log_2(1024) = 10$ rounds. For the pass line, the probability of losing ten games in a row and going broke is $(0.507)^{10} \approx 0.0011$ and the probability of winning before then and gaining one chip is $1 - (0.507)^{10} \approx 0.9989$. So your expected winnings is $(0.9989)(1) - (0.0011)(1023) \approx -0.15$ chips. Despite its cleverness, this system is actually worse than not being clever: if you just bet 1 chip ten times in a row, you can expect to lose about 0.14 chips (and only ten at most).
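The arithmetic for the 1023-chip example is quick to verify:

```python
p, q = 0.493, 0.507                # pass line win/lose probabilities
B = 10                             # 1 + 2 + ... + 512 = 1023 chips = 10 bets
p_broke = q**B                     # lose all ten doubled bets in a row
p_one_chip = 1 - p_broke           # win once somewhere along the way
doubling = p_one_chip * 1 - p_broke * 1023
flat = B * (p - q)                 # ten boring 1-chip bets instead
print(round(p_broke, 4))           # 0.0011
print(round(doubling, 2))          # -0.15
print(round(flat, 2))              # -0.14
```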

Long story short, you can change the probability of winning/losing and the amount that you might win/lose by using different strategies or playing different games, but on *average* you’ll always lose money. By increasing your bet every time you lose, you really can cover your losses and more. Usually. But if you’re one of those unlucky people without infinite funds, then you have to take into account the possibility of going broke. When you do account for every eventuality, you’ll find the casino’s edge is intact.

All that doubling your bets really does is make the chances of winning better while making the consequences of losing worse.

There are a few rare situations where the casino’s edge disappears. A group of MIT kids famously found one in blackjack and exploited it for years. But it wasn’t a big edge, it was very difficult to even know that it was there, and they had to play consistently and with a large bankroll for a long time to overcome the Gambler’s Ruin (which, very succinctly, is: you’ll run out of money before the casino does). But of course gambling isn’t a game, it’s a business. When the casinos figured out what those MIT kids were doing, they just kicked them out.

Casinos are perfectly happy to let you test your theories out. So if you think you’ve got a winning system, it *might* work. But don’t bet on it.

**Gravy**: When you want to prove something to death, it helps to do it in extreme generality with everything left as non-specified variables. That way you cover the “yeah, but what if…” questions all at once, which is good, but you often end up in an algebra blizzard, which is work.

Assume that the amount of money you make after placing a bet is described by a gain factor, f. So if you bet x chips, you either lose x chips or gain fx chips. For example, f=1 is “double or nothing”; if you play x chips and win you get 2x chips back, so you *gained* 1x chips.

You can sum up the casino’s edge pretty succinctly: fp-q<0. This means that the average amount you lose from betting a chip, q, is more than the average amount you gain, pf. For example, the pass line in craps is f=1, p=0.493, q=0.507, and fp-q=(1)(0.493)-(0.507)=-0.014. Or, if you bet on any particular number in roulette, f=35, p=1/37, q=36/37, and fp-q=(35)(1/37)-(36/37)=-1/37. fp-q<0 is practically a natural law on the casino floor.

Mathematically speaking, the system here is really simple: take whatever your last bet was and multiply it by m (m=2 for doubling, m=3 for tripling, etc.). On the first round you bet one chip, on the nth round you bet $m^{n-1}$ chips, and if you win on the nth round, you’ll earn f times your last bet, $fm^{n-1}$.

However, if you lose n rounds in a row, you’ll have bet and lost $1 + m + m^2 + \cdots + m^{n-1} = \frac{m^n - 1}{m-1}$ chips. This means that if you lose the first n-1 rounds and win on the nth, then your take-home winnings are $W(n) = fm^{n-1} - \frac{m^{n-1} - 1}{m-1}$.

If p is the probability of winning and q is the probability of losing, then the probability of n-1 losses followed by a win is $P(n) = q^{n-1}p$. So P(n) is the probability your ship will come in on the nth round and W(n) is how many chips you gain when it does. With those two equations in hand, you can find your expected winnings, $E[W] = \sum_{n} P(n)W(n)$.

That sum blows up when $mq > 1$ (and the multiplier is large enough that each win more than covers the accumulated losses, $f > \frac{1}{m-1}$), so if you can keep playing forever, then just by picking a large enough multiplier, m, the expected return is infinite. This is because the number of chips you get back after n rounds increases exponentially with the number of rounds you play and there’s no chance of ever losing. It’s good to be richer than god.

But in practice, that’s *pretty* unreasonable. When you walk into the casino, you have a finite number of chips and can play at most B rounds (where B is some distinctly not-infinite number) at which point you run out. In this far more realistic situation, you might win in any of the first B rounds or you might lose every one of them. It’s this “lose every time” scenario that’s ruled out by being infinitely rich. In B rounds you bet a total of $1 + m + m^2 + \cdots + m^{B-1} = \frac{m^B - 1}{m-1}$ chips, which is all of them (because that’s how B is defined). The chance of losing all B rounds is $q^B$.

So your expected winnings is:

$E[W] = \sum_{n=1}^{B} q^{n-1}p\,W(n) - q^B\frac{m^B-1}{m-1} = \left(fp - q\right)\frac{(mq)^B - 1}{mq - 1}$

In short, E[W]<0. There are two subtle truths that got woven in there. First, p+q=1 (the chance of either winning or losing a given round is 1). And second is the casino’s edge, pf<q. This is ultimately where that inequality comes from.
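The whole derivation can be double-checked numerically. The closed form below, $E[W] = (fp-q)\frac{(mq)^B-1}{mq-1}$, is how the algebra tidies up; treat the code as a sanity check that it matches a brute-force sum over every way the game can end:

```python
f, p, q, m, B = 1, 0.493, 0.507, 2, 10   # pass line, doubling, 1023 chips

def bet(n):
    return m**(n - 1)                    # the bet placed on round n

def winnings(n):
    # Win on round n: collect f times the last bet, minus everything lost so far
    return f * bet(n) - sum(bet(k) for k in range(1, n))

# Brute force: sum over "first win on round n", plus the total-loss case
brute = sum(q**(n - 1) * p * winnings(n) for n in range(1, B + 1))
brute -= q**B * sum(bet(n) for n in range(1, B + 1))

closed = (f*p - q) * ((m*q)**B - 1) / (m*q - 1)
print(brute, closed)                     # the two agree, and both are negative
```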

This sort of terrifying algebra storm is why math is so damnably useful: even if you can’t keep an idea in your head all at once, you can still explore it and learn new things from it.


There’s a vast conspiracy of optometrists and ophthalmologists who will try to convince you that our three types of cone cells somehow limit our vision, in a way that creatures with a wider variety of cones cells are less limited. But of course: color is color and you see it or you don’t. There are plenty of colors beyond the hegemonic “standard rainbow” and we don’t see them for exactly the same reason we don’t see unicorns or bigfeet: they’re very rare and never where you’d expect.

This story is nothing new. For example, regular old blue is a difficult color to isolate. If it weren’t for the sky, we’d barely ever see it in nature. And because “rare” = “expensive” = “classy”, we have “royal blue” (since royals love to be classy). Once upon a time, seeing something vibrantly blue would have really caught your attention: instantly recognizable, yet wholly alien. Seeing one of the many colors beyond the standard rainbow is a similar experience.

Sometimes a new color can be found just by looking somewhere no one else has bothered to look. For example, in 1768 Jeanne Baret, during her circumnavigation of the Earth in disguise, discovered three new colors, “och”, “ocher”, and “ochest”, only one of which is still in common use today. The first and last of these are totally distinct from any other color or combination of colors, but such were the chromatic strictures at the time, that they were shoehorned into the now defunct “och scale” (hence the names).

New colors are often found by painters who, obsessed with extending their craft, may stumble upon new materials, colors, and palettes previously unseen on Earth. Desperate to keep the momentum after gaining fame for his signature style, “plusieurs visages sur la même tête“, Picasso invented a never before seen shade of cherry-blue. With his usual flair for nomenclature, Picasso named his discovery “Color on a Painter’s Palette”, though it rapidly became known as “naughtblue” in the artist community (to avoid any confusion with his earlier work).

His stunning naughtblue work, “Woman Standing Around”, is now in a private collection and so lost to the world forever.

Rothko was a painter more interested in color than you are in *anything*. After a rough night in 1942, book-ended by a fifth of absinthe and Nietzsche’s “Ecce Homo” (the ordering of which is now hotly debated), Rothko “regained cognizance” to find that he had created violet’s precise and indisputable chromatic opposite, “outdigo”. Tragically, outdigo paint famously changes color when it dries, so he was unable to create with it. Indeed, he kept his only supply sealed in an unmarked can until it was accidentally sold at a lawn sale to pay for cooking lessons.

Little is known of outdigo, beyond Rothko’s few scribbled notes:

“*It has come to me, outdigo. It shall be an heirloom of my studio. All those who follow in my bloodline shall be bound to its fate, for I shall risk no hurt to the color. It is precious to me, though I buy it with a great pain. The markings upon the canvas begin to fade. The color, which at first was as clear as red flame, has all but disappeared – a secret now that only turpentine can tell.*”

So there are plenty of new colors, it’s just a matter of persistence, luck, and looking at stuff that no one else has ever bothered to look at.

Almost without exception, when you hear about quantum entanglement it’s described as some kind of communication or connection. Generally along the lines of “when you affect one entangled particle it instantly affects the other” or slightly more sophisticated “when you measure the state of one entangled particle, the other collapses to the same state”. But these statements cause more difficulties than a dark room full of loose rope. They’ve given rise to an endless parade of universally unsuccessful theories about using entanglement to communicate instantaneously between distant locations. But instantaneous communication has issues. In particular “instantaneous” is faster than c (the speed of light), which is the fastest speed. Ultimately, while entangled quantum systems do have plenty of seriously weird properties, being connected to each other is *not* one of them.

In very short: if you do the same measurement on each of a pair of entangled particles, the two measurements will produce the same, random result. But that doesn’t involve any kind of signal bouncing around the universe. The distant entangled particle is like a lost wallet: there’s no way to tell if someone’s seen it until they call you.

*Quick aside*: Entanglement takes many forms, not just “always the same” (e.g., “always opposite” is an option). If you can think of a property of a particle or other quantum system, then you’re thinking of something that can be entangled: particle spin (up vs. down), photon polarization (vertical vs. horizontal), particle position (left path vs. right path), energy levels (excited vs. ground), even existence (is vs. isn’t). Entanglement is fun stuff.

Quantum systems can be in multiple states at the same time. For example, an electron can be both spin up and spin down. The question “which is it: up or down?” genuinely doesn’t make sense, because the electron’s state can be a combination of both. We call such combinations “superpositions”. Quantum superpositions are famously delicate. You can demonstrate that an electron is in multiple states, but you have to be clever about it, because if you ever directly measure whether the electron is spin up or down, you always find that it’s one or the other, but never both. Which single state you find is fundamentally random. This “effect” of measurement, where a superposition of several states is suddenly replaced by only one of those states, is called “decoherence” or “wave function collapse” or (when you’re in a hurry) “collapse”.

If a pair of electrons share a common state (for example by both being up and both being down, but never one up / one down), then those electrons are “entangled”. A pair of entangled particles are in a superposition of states *together*.

Entanglement isn’t something that happens accidentally. Or rather, it happens all the time (almost every interaction on every level generates entanglements), but *useful* entanglement is really hard to set up. It’s like the fact that practically everything makes sound, but the dulcet tones of a barbershop quartet take work (and three friends). In order for two things to be entangled they need to have interacted with each other, so you can’t just cause two distant particles to become entangled. Generally speaking, you entangle two particles when they’re together, then move them apart.

When a single particle in a superposition of states is measured, it collapses to one of the states in its superposition. Entangled particles have the same rule; when you measure either of them you find that they’re in only one state. But since they’re in a shared state, a measurement on the other will yield the same result. A direct measurement is made, so the superposition (both up and both down) collapses and only a single state is left (either both up or both down). The fact that there are two particles involved instead of one doesn’t change things.
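A tiny simulation makes the “shared randomness, no signal” point concrete. This sketch collapses the joint state in a single step, which gives the right statistics for measurements in the shared basis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Amplitudes for the pair, indexed |up,up>, |up,down>, |down,up>, |down,down>.
# The entangled state is (|up,up> + |down,down>)/sqrt(2).
entangled = np.array([1, 0, 0, 1]) / np.sqrt(2)

def measure_pair(state):
    """Collapse: pick one of the four definite joint outcomes with
    probability |amplitude|^2, then report each particle's result."""
    outcome = rng.choice(4, p=np.abs(state)**2)
    return outcome >> 1, outcome & 1     # (first particle, second particle)

results = [measure_pair(entangled) for _ in range(1000)]
print(all(a == b for a, b in results))             # True: they always agree
print(sum(a for a, _ in results) / len(results))   # ~0.5: each result is random
```

Either particle on its own is a fair coin; the weirdness only shows up when you compare the two lists of results.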

Einstein infamously quipped that entanglement is “spooky action at a distance” because, despite the fact that the result of each collapse is absolutely random, when it happens to a pair of entangled particles, they still manage to agree. That would *seem* to imply that they’re somehow telling each other how to behave (even if they would need to do so across any barrier or even faster than light).

Tricks that use entanglement to instantly send information generally involve collapsing the state of the distant particle (by measuring your own), and assuming that the person in charge of that particle will somehow notice. Unfortunately (and this is the answer), there’s no “new-particle-smell” for particles that are still in superpositions and there’s no big flash when that superposition collapses. If you measure an electron’s spin the result is simply “spin up” or “spin down”. That’s all you get. But that doesn’t tell you what the state was before, or if the original state was a superposition of several states, or even if the particle was entangled with something else.

If you only have access to one particle, then the two situations, 1) the other particle is inaccessible or 2) it has been quietly measured, are indistinguishable in *every* physical sense. Regardless of whatever else is involved, you can never notice the weirdness of entangled pairs until you directly compare them with each other.

The fact that you can’t tell the difference between a random state and an entangled state is super not obvious. In fact, the Fundamental Fysiks Group (who aren’t physicists so much as parapsychologists) once came up with a clever method for using entanglement for faster-than-light communication that wasn’t clearly nonsense. Normally the way we determine that something is in multiple states is to prepare many identical versions and test them out, so the Fundamental Fysikists proposed copying one of the entangled particles and doing tests on the ensemble. In theory, Alice could communicate one bit of information by either looking (collapse) or not looking (no collapse) at her particle, and if Bob copied his particle many times and compared their behavior, he would be able to tell the difference. Technically, Bob might be able to tell the difference *before* Alice bothered to look, which is… suspicious (such is the nature of instantaneous communication in a universe where special relativity holds sway).

Faced with such daunting bizarritudes and paradoxes, professional physicists decided to take notice and “fix” the issue by carefully examining all of the assumptions that went into this cute trick (evidently, doing physics *carefully* is what separates physicists from fysiksists). It turns out that copying a quantum state is impossible and if it weren’t for the Fundamental Fysiks Group, we wouldn’t have figured it out quite as soon as we did. The result of that figuring is the “no-cloning” or “better luck next time Fysikists” theorem.

**Answer Gravy**: A cute way to prove to yourself that superposition can’t be measured is to notice that “superposition-ness” is subjective. That is to say, a given quantum state can be a single state from one perspective and a superposition of several states from another. This is easiest to picture with polarized light.

A “measurement basis” is the set of states that your measurement apparatus can distinguish. For example, if you’re using a vertical polarizing filter, then you can distinguish between light that is vertically polarized ($|V\rangle$ passes through) and horizontally polarized ($|H\rangle$ is stopped). So for a vertical (or horizontal) filter the measurement basis is $\{|V\rangle, |H\rangle\}$.

But light can be polarized at any angle (there’s nothing special about verticalness). So what happens when diagonally polarized light hits a vertical filter? It turns out that diagonally polarized light, $|D\rangle = \frac{1}{\sqrt{2}}\left(|V\rangle + |H\rangle\right)$, is a superposition of both $|V\rangle$ and $|H\rangle$. When you measure diagonal light in the $\{|V\rangle, |H\rangle\}$ basis, those are the only two possible results. So when you actually do this, you find that the diagonal photon is suddenly either vertical or horizontal. The superposition of vertical and horizontal states in the diagonal photon collapses to only one of those states. Which state will remain is fundamentally random.

However! A diagonal filter measures photon polarization in the $\{|D\rangle, |A\rangle\}$ basis, where $|A\rangle = \frac{1}{\sqrt{2}}\left(|V\rangle - |H\rangle\right)$ is the anti-diagonal state. If the incoming state is one of those diagonal states, then the diagonal filter will determine which. When you measure diagonal photons with a similarly diagonal filter, they always pass through and there’s no collapse to either of the vertical/horizontal states.

Brace yourself for the point. The *same* *state*, $|D\rangle = \frac{1}{\sqrt{2}}\left(|V\rangle + |H\rangle\right)$, is a single state when measured in the $\{|D\rangle, |A\rangle\}$ basis and a superposition of states when measured in the $\{|V\rangle, |H\rangle\}$ basis. There’s a lot to unpack there, but the take-home is this: a state is a state is a state. There is no way to tell whether a thing is in a superposition or has collapsed, because there is fundamentally nothing different about the two situations. No matter what the state of a thing is, it is always possible to find a measurement basis where the state is a superposition and another basis where it’s a single state.
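For the numerically inclined, the basis-dependence of “superposition-ness” is a two-line calculation with the Born rule (the probability of each outcome is the squared overlap between the state and that basis state):

```python
import numpy as np

V = np.array([1.0, 0.0])        # vertical polarization
H = np.array([0.0, 1.0])        # horizontal polarization
D = (V + H) / np.sqrt(2)        # diagonal
A = (V - H) / np.sqrt(2)        # anti-diagonal

def probs(state, basis):
    """Born rule: probability of each outcome is |<basis state|state>|^2."""
    return [float(abs(np.dot(b, state)) ** 2) for b in basis]

print(probs(D, [V, H]))  # ≈ [0.5, 0.5]: random through a vertical filter
print(probs(D, [D, A]))  # ≈ [1.0, 0.0]: definite through a diagonal filter
```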

**Slightly Deeper Gravy**: Entangled particles are nothing special. An entangled state is a regular, dull as dishwater, quantum state, it’s just that it’s shared between multiple particles. There actually is a “correct” basis in which you always get definite results from entangled particles, but you need all of the involved particles in one place to do such a measurement. Without all the pieces, you *always* get random results.

A diagonal state owned by Alice looks like this: $|D\rangle_a = \frac{1}{\sqrt{2}}\left(|V\rangle_a + |H\rangle_a\right)$. The subscript “a” means “owned by Alice”. This notation says that if Alice measures her photon in the $\{|V\rangle_a, |H\rangle_a\}$ basis, there is a $\left|\frac{1}{\sqrt{2}}\right|^2 = \frac{1}{2}$ chance of either result according to the Born rule. The other diagonal state is $|A\rangle_a = \frac{1}{\sqrt{2}}\left(|V\rangle_a - |H\rangle_a\right)$. Both of these diagonal states produce random results in the $\{|V\rangle_a, |H\rangle_a\}$ basis, but produce definite results in the $\{|D\rangle_a, |A\rangle_a\}$ basis.

Notice that the converse is also true. $|V\rangle_a = \frac{1}{\sqrt{2}}\left(|D\rangle_a + |A\rangle_a\right)$ and $|H\rangle_a = \frac{1}{\sqrt{2}}\left(|D\rangle_a - |A\rangle_a\right)$, so the vertical and horizontal states are superpositions of the diagonal states and therefore produce random results in the diagonal basis. For each of these states (or any other for that matter), there is a measurement basis where they give definite results.

But for entangled states, if you only have access to one of the two particles, there is no “correct basis” where you always get a definite result.

A nice, tidy entangled state distributed between Alice and Bob is $\frac{1}{\sqrt{2}}\left(|V\rangle_a|V\rangle_b + |H\rangle_a|H\rangle_b\right)$. This notation says that if either Alice or Bob do a measurement in the $\{|V\rangle, |H\rangle\}$ basis they’ll get a random result. So what about the diagonal basis? Just rewrite the state:

$\frac{1}{\sqrt{2}}\left(|V\rangle_a|V\rangle_b + |H\rangle_a|H\rangle_b\right) = \frac{1}{\sqrt{2}}\left(|D\rangle_a|D\rangle_b + |A\rangle_a|A\rangle_b\right)$

Holy crap! It’s essentially the same state in the diagonal basis! Once again, it’s perfectly random (50% chance of either result). More generally, as long as you only have one particle, the results are just as random in any basis. It turns out this is one of those universal truths of entangled states: it doesn’t matter how you measure them, if you’re only looking at one particle, you’re using the “wrong” basis.

On the other hand, if you have both particles, you can tell the difference between the four “maximally entangled states“. Each particle’s state is described by a qubit (a “quantum bit”). First, perform a controlled not gate (CNot) on the two entangled qubits, then apply a Hadamard gate (H) on the control qubit, then measure both.

Writing $|V\rangle = |0\rangle$ and $|H\rangle = |1\rangle$, Controlled Not does this: $|00\rangle \to |00\rangle$, $|01\rangle \to |01\rangle$, $|10\rangle \to |11\rangle$, $|11\rangle \to |10\rangle$ (it flips the second qubit whenever the first, “control” qubit is $|1\rangle$).

Hadamard does this: $|0\rangle \to \frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right)$ and $|1\rangle \to \frac{1}{\sqrt{2}}\left(|0\rangle - |1\rangle\right)$.

In the case of photon polarization, the Hadamard gate is a rotation that switches between the vertical/horizontal basis and the diagonal basis.

Here’s what happens to the entangled state this post keeps talking about:

$\frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right) \xrightarrow{\text{CNot}} \frac{1}{\sqrt{2}}\left(|00\rangle + |10\rangle\right) = \frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right)|0\rangle \xrightarrow{\text{H}} |0\rangle|0\rangle = |00\rangle$

There are four maximally entangled states for pairs of two-state systems. Doing the same operations to each yields:

$\frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right) \to |00\rangle \qquad \frac{1}{\sqrt{2}}\left(|01\rangle + |10\rangle\right) \to |01\rangle \qquad \frac{1}{\sqrt{2}}\left(|00\rangle - |11\rangle\right) \to |10\rangle \qquad \frac{1}{\sqrt{2}}\left(|01\rangle - |10\rangle\right) \to |11\rangle$

These are different, definite states! So this is how to look at entangled two-state systems in the “correct” basis. But it wouldn’t be possible to do this without the CNot operation and that requires *both* particles to work.
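Writing $|V\rangle = |0\rangle$ and $|H\rangle = |1\rangle$, the CNot-then-Hadamard recipe can be checked with a few small matrices. This sketch runs all four maximally entangled states through the circuit:

```python
import numpy as np

# Two-qubit basis order: |00>, |01>, |10>, |11> (first qubit is the control)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
Hgate = np.array([[1,  1],
                  [1, -1]]) / np.sqrt(2)
H_on_control = np.kron(Hgate, np.eye(2))   # Hadamard on qubit 1, identity on qubit 2

s = 1 / np.sqrt(2)
bell_states = {
    "(|00>+|11>)/sqrt2": s * np.array([1, 0, 0,  1]),
    "(|00>-|11>)/sqrt2": s * np.array([1, 0, 0, -1]),
    "(|01>+|10>)/sqrt2": s * np.array([0, 1,  1, 0]),
    "(|01>-|10>)/sqrt2": s * np.array([0, 1, -1, 0]),
}

for name, state in bell_states.items():
    out = H_on_control @ CNOT @ state
    print(name, "->", np.round(out, 12))
# Each comes out as a different definite state: |00>, |10>, |01>, |11>
```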


Ever since the Moon entered the scene 4.5 billion years ago, it’s been slowly drifting away. Initially it was around 15 times closer, and correspondingly bigger across in the sky. The effects that push it away decrease rapidly with distance, so the Moon climbed most of the distance to its current orbit early, but even today it gains about 4 cm per year.

The tides raised by the Moon are made up of a lot of water and that water, like all matter, creates gravity. The Earth turns in the same direction that the Moon orbits and, since water doesn’t instantly slosh back to where it’s supposed to be, the tidal bulge is dragged a little ahead of the Moon. The nearly insignificant gravity generated by that extra water pulls the Moon forward and speeds it up. The effect is especially small, since a nearly identical bulge appears on the opposite side of the Earth, pulling in the wrong direction. The difference is that the forward water bulge is slightly closer.

If you’re in orbit and you start moving faster, your orbit gets higher (ironically, in that higher orbit you end up moving slower overall). At the same time that the tidal bulge pulls on the Moon, the Moon pulls on that bulge, slowing the Earth’s rotation.

So while the Moon drifts away, days on Earth get longer. According to coral records, as recently as half a billion years ago the day was a mere 22 hours. Corals aren’t inherently clerical, but they do have both daily and yearly growth cycles with corresponding growth rings. And 4.5 billion years ago, before the Moon had siphoned off so much of our rotational inertia, a day on Earth was a mere 6 hours. Give or take (there wasn’t any coral at the time to obsessively write it down). So the Moon is drifting away and in exchange we Earthlings get both lunch *and* brunch.

A retro-reflector is a clever set of three mutually perpendicular mirrors that reflect any light back toward where it came from, regardless of where that is. Basically, if someone shoots you with a laser, wearing a suit covered in retro-reflectors is the fastest way to get revenge. By bouncing lasers off of any of five sets of retro-reflectors left on the Moon between 1969 and 1973 by the USA and USSR, we can determine the distance to the Moon to within millimeters and have found that, at present, the Moon is drifting away at a sedate 4 cm per year.

Given enough time, the Moon would eventually escape entirely. Luckily, in about five billion years the Sun will swell up and destroy both the Earth and Moon before that happens. So: nothing to worry about.

However! If we’d like to continue being the only planet to have both annular and total solar eclipses (the only objective measure of planetary exceptionalism), we need to act fast in the next several million years to keep the Moon from getting any farther away.

The Moon is “rolling down a hill”, energetically speaking, and energy needs to be expended every year (effectively) forever to keep the Moon from “rolling” any farther away. Unlike practically every other question about planetary-scale engineering, “Project Sisyphus” is not impossible! Merely so expensive and pointless that it may as well be.

Anything that moves in a circle has angular momentum. A Moon with mass m orbiting at a radius R around a planet with mass M, in a universe with gravitational constant G, has an angular momentum of:

L = m√(GMR)

As R gets bigger (as the Moon moves away), L increases. To hold the Moon in place we’d need to counteract that increase and keep the angular momentum of the Moon constant. To figure out how much L changes over small distances (and 4 cm counts), we just find the differential. This is what calculus is good at: finding tiny changes in one thing, dL, given tiny changes in something else, dR:

dL = (m/2)√(GM/R) dR

Ion drives have exhaust velocities much higher than the escape velocity of the Moon, which is good; otherwise they wouldn’t be rockets moving the Moon, so much as fountains decorating the Moon. They’re also the most efficient type of space propulsion, maxing out around 80% efficiency. The most powerful use about 100 kW of power and are capable of generating about 5N of force (about 1 pound).

One of these drives, pushing for a year, could add or subtract

ΔL = (5 N)(3.84×10⁸ m)(3.15×10⁷ s) ≈ 6×10¹⁶ kg·m²/s

from the Moon’s orbital angular momentum.
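Putting the last two steps together is a one-screen calculation. Here's a sketch (mine, not the original post's; the physical constants are standard published values) of the Moon's annual angular momentum gain and the number of 5 N drives needed to cancel it:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # kg
m_moon = 7.346e22    # kg
R = 3.844e8          # mean Earth-Moon distance, m
dR = 0.04            # annual lunar recession, m
year = 3.156e7       # seconds in a year

# Annual gain in orbital angular momentum: dL = (m/2) sqrt(GM/R) dR
dL = 0.5 * m_moon * math.sqrt(G * M_earth / R) * dR

# One 5 N drive pushing tangentially for a year changes L by torque * time
thrust = 5.0                      # N
dL_per_drive = thrust * R * year  # kg m^2 / s

n_drives = dL / dL_per_drive
print(f"dL ~ {dL:.2e} kg m^2/s per year, needing ~{n_drives/1e6:.1f} million drives")
```

This lands at roughly 1.5×10²⁴ kg·m²/s per year and on the order of 25 million drives, in line with the 24.5 million quoted below (the small difference comes from rounding the constants).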

So we could counter the Moon’s annual angular momentum gain with a mere 24.5 million of our most powerful ion drives, each coupled with (at least) a 50 m by 50 m array of solar cells and batteries to supply the 100 kW they need (including inefficiencies and the lack of Sun at night). Powering those drives to keep the Moon in place only requires a modest one sixth of the total energy generated by people every year (as of now). So if we really, *really* felt like doing it, this is the sort of “problem” that could be “solved” with little more than the greatest collective effort in human history.

If you lived on the Moon, you could build a statue pointing at the Earth and you’d never have to change it, because the Moon’s orientation is fixed with respect to the Earth. If that statue is pointing straight up, it’s in the middle of the justifiably-named “near side”. For exactly the same reason, you could build a statue pointing in the direction the Moon is moving as it orbits. If the statue were pointing straight up, it would be in the middle of the west side, near Mare Orientale.

If we put all of those ion drives, their batteries, and solar panels in Mare Orientale, they would cover about 75% of it. Combined with all of the access roads and support structures, there’d be scarcely a single stone left unturned.

Each of those ion drives would need on the order of a ton of gas every year (argon and xenon are popular choices) to use as propellant. This is the stuff that would literally be thrown in the direction the Moon is moving in order to slow it down. Summed over the millions of drives, the gas continuously fired from the Moon by Project Sisyphus would be on par with the peak loss of gas in comets as they pass through the inner solar system. That is: on a clear night we would be able to see the gas as a ghostly comet tail extending, in a very straight band, from the left side of the Moon (from the right side for our southern hemisphere readers. G’day and bom dia!).
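The tons-per-drive figure falls out of the rocket basics: mass flow is thrust divided by exhaust velocity. Here's a quick sketch (mine, with an *assumed* 50 km/s exhaust velocity, at the high end for ion drives):

```python
# Propellant one ion drive throws per year: mass flow = thrust / exhaust velocity
thrust = 5.0        # N (the most powerful ion drives, per the text)
v_exhaust = 5.0e4   # m/s -- assumed high-end ion drive exhaust velocity
year = 3.156e7      # seconds in a year

kg_per_year = thrust / v_exhaust * year
print(f"~{kg_per_year:.0f} kg of propellant per drive per year")
```

That works out to a few metric tons per drive per year, i.e. "on the order of a ton"; a lower assumed exhaust velocity would push the number higher.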

We can harvest enough argon from our atmosphere (it’s about 1% of the air you’re breathing now) to provide propellant for the next 2 billion years, so we’ll have time to figure out another option before we exhaust the supply. The Falcon Heavy could carry enough argon for about a dozen or two of the 24.5 million drives, so we’d need to launch several thousand of them to the Moon every day until we find a better system.

Compared to that, actually building and maintaining all of the infrastructure on the Moon would be difficult. It would be a lot easier to harvest everything (steel for structures, silicon for solar cells, etc.) we need from the Moon directly. So the damage wouldn’t be restricted to just Mare Orientale; there would be deep scars all over the Moon from where we tore up the place for mining.

So the Moon is drifting away, meaning that in a few hundred million years we won’t have total eclipses and the tide will be slightly more mild. With an achievable, but mind-boggling, effort we could stop the Moon from drifting away and ensure that our descendants can, in several hundred million years, continue to appreciate a slightly wider variety of astronomical phenomena (assuming eyes are still in style). In ten million years our barely-human descendants will curse us for handing this chore down to them, and in a hundred million years their definitely-not-human descendants will shake their cyber-tentacles at us in fury.

But you guys. I think we should try it.

In an effort to plug the new book, I’ll be a guest on Story Collider, which will be recording at the Tipsy Crow in San Diego on Thursday at 7:00. It’s free and should be fun, so if you’d like to show up, you can learn more about the whole thing and register here.

And for those of you attending the Joint Mathematics Meeting (Comic Con for nerds) in San Diego this year, I’ll be at Springer’s booth on Friday.

This is Springer’s first foray into “popular science”. It’s divided into four chapters: “big things”, “small things”, “in between things”, and “not things” (math).

I aimed it at my younger self, who was unimpressed by the vagueness of pop sci and frustrated by the technicalness of actual sci. The articles in “Do Colors Exist?” cover the important ideas intuitively and without dumbing down, but also assume that you don’t know a bunch of fancy terminology. Even if physics isn’t your thing, this is exactly the sort of gift you could give a nerd/science friend without embarrassment. It provides satisfying answers for the man-on-the-street, while including details for the “advanced” reader.

The blurb from the back of the book (which I didn’t write) reads:

*Why do polished stones look wet? How does the Twin Paradox work? How can we be sure that pi never repeats? How does a quantum computer break encryption? Discover the answers to these, and other profound physics questions!*

*This fascinating book presents a collection of articles based on conversations and correspondences between the author and complete strangers about physics and math. The author, a researcher in mathematical physics, responds to dozens of questions posed by inquiring minds from all over the world, ranging from the everyday to the profound.*

*Rather than unnecessarily complex explanations mired in mysterious terminology and symbols, the reader is presented with the reasoning, experiments, and mathematics in a casual, conversational, and often comical style. Neither over-simplified nor over-technical, the lucid and entertaining writing will guide the reader from each innocent question to a better understanding of the weird and beautiful universe around us.*

*Advance praise for* Do Colors Exist?: *“Every high school science teacher should have a copy of this book. The individual articles offer enrichment to those students who wish to go beyond a typical ‘dry curriculum’. The articles are very fun. I probably laughed out loud every 2-3 minutes. This is not easy to do. In fact, my children are interested in the book because they heard me laughing so much.”* -Ken Ono, Emory University

Keeping this website ad-free and cost-free is important, so this will be the last time you’ll have to hear about this.
