It’ll be at the

You can read about the show and get tickets here.

But have no fear, the book is definitely not canceled. Last year, Springer gave Amazon a rather optimistic publication date (March 14th), but didn’t bother to update it for several months. When that date came and went (plus eight weeks), Amazon figured that the book didn’t exist and *temporarily* pulled the plug.

Here’s the good news: it’s fixed. You can now pre-order (or even re-pre-order) through Amazon, and the books will be ready to ship by May 28th. I’m speaking to Springer, calmly and with a measured tone, about providing some kind of special deal through this webpage to apologize for this mass cancellation. Until that happens: I’m really, *really* sorry to everyone this has affected. Please consider ordering again.

And if it hasn’t affected you: listen, it’s a really good book. You’d like it.

There are plenty of equations that are infinitely long, but often they’re simple enough that we can write them compactly. For example, $\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \cdots$. This equation goes on forever, but it’s fairly straightforward: for every term you flip the sign and increase the denominator from one odd number to the next. You can write it in mathspeak as $\frac{\pi}{4} = \sum_{n=0}^{\infty}\frac{(-1)^{n}}{2n+1}$. Like π itself, this sum goes on forever, but it isn’t complicated. You can describe it simply and in such a way that anyone (with sufficient time and chalk) can find as many digits of π as they like. This is the basic idea behind “Kolmogorov complexity”: the length of the shortest possible set of written instructions that can produce a given result (never mind how long it takes to actually compute it).
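To make the “short instructions” idea concrete, here’s a minimal sketch in Python (my addition, not part of the original post): a few lines of code that grind out as many terms of the sum above as you have patience for.

```python
# Approximate pi using the alternating sum pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
# The program is short (low Kolmogorov complexity), even though its target,
# pi, has infinitely many digits.
def leibniz_pi(terms):
    total = 0.0
    sign = 1.0
    for n in range(terms):
        total += sign / (2 * n + 1)  # next odd denominator
        sign = -sign                 # flip the sign
    return 4 * total

print(leibniz_pi(1_000_000))  # slowly creeps toward 3.14159...
```

The convergence is famously terrible (you need about a million terms for five decimal places), which is the “never mind how long it takes to actually compute it” part.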

If you’re looking for an equation that needs to be complicated, a good place to look is physics (I mean, what else do you really *need* math for?). If you want to describe the behavior of a ball flying through the air it’s not enough to say “it goes up then down”; there’s a minimum amount of math that goes into accurately calculating the path of falling objects, and it’s more complicated than that.

Arguably, the universe is pretty complicated. But like π, it is deceptively so (we hope). If you want to do something like, say, describe the gravitational interactions of every star in the galaxy, you’d do it by numbering the stars (take your time: star 1, star 2, …, star n), determining the position $\vec{r}_{j}$ and mass $m_{j}$ of each, and then finding the force on each star produced by all the others. In practice this is absurd (there are a few hundred billion stars in the Milky Way, but we can’t see most of them because there’s a galaxy in the way), but the equation you would use is pretty straightforward. The force on star k is: $\vec{F}_{k} = \sum_{j \ne k} \frac{G m_{j} m_{k}}{\left|\vec{r}_{j} - \vec{r}_{k}\right|^{2}} \hat{r}_{jk}$, where $\hat{r}_{jk}$ points from star k toward star j. This is just Newton’s law of gravitation, $F = \frac{G m_{1} m_{2}}{r^{2}}$, repeated for every possible pair of stars and added up. So while the situation itself is complicated, the equation describing it isn’t. Evidently, if you want an equation that genuinely needs to be complicated, you don’t need a complicated situation, you need complicated dynamics.
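The “number the stars and add up every pair” recipe is short enough to write out directly. Here’s a sketch in Python (mine, with two made-up stars rather than a few hundred billion real ones):

```python
import math

G = 6.674e-11  # gravitational constant, N*m^2/kg^2

def net_forces(masses, positions):
    """Newton's law F = G*m1*m2/r^2, summed over every pair:
    the force on star k is the sum of the pulls from all the other stars."""
    n = len(masses)
    forces = [[0.0, 0.0, 0.0] for _ in range(n)]
    for k in range(n):
        for j in range(n):
            if j == k:
                continue
            dx = [positions[j][i] - positions[k][i] for i in range(3)]
            r = math.sqrt(sum(d * d for d in dx))
            f = G * masses[k] * masses[j] / r**2   # magnitude of the pull
            for i in range(3):
                forces[k][i] += f * dx[i] / r      # direction: toward star j
    return forces

# Two equal (hypothetical) stars pull on each other with equal, opposite force.
f = net_forces([2e30, 2e30], [[0.0, 0.0, 0.0], [1e12, 0.0, 0.0]])
```

The double loop is the whole “complicated situation, simple equation” point: the code is a dozen lines no matter how many stars you feed it.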

The whole point of physics, aside from understanding things, is to describe the rules of the universe as simply as possible. To that end, physicists love to talk about “Lagrangians”. Once you’ve got the Lagrangian of a system, you can describe the behavior of that system by applying the “principle of least action”, which says that the “chosen” path of a system (the orbit a planet takes, the path of light through materials, etc.) will always be the one with the minimum total Lagrangian along it. It’s a cute recipe for succinctly and simultaneously describing a lot of dynamics, in a way that would make Kolmogorov proud.

For example, you can sum up Newton’s physics almost instantly. Rather than talking about kinetic energy and momentum and falling, you can just say “Dudes and dudettes, if I may, the Lagrangian for an object flying through the air near the surface of the Earth is $L = \frac{1}{2}mv^{2} - mgz$, where m is mass, v is velocity, and z is height”. From this *single* formula, you get the conservation of energy, conservation of momentum (when moving sideways), as well as the acceleration due to gravity. There are also Lagrangians for everything from orbiting planets to electromagnetic fields.

Generally, when you look at the same dynamics applied over and over, the equations involved don’t get much more complicated (although their solutions definitely do). And if you want to describe the dynamics of a system, Lagrangians are an extremely compact way to do it. So what’s the most (but not needlessly) complicated equation in the universe? Arguably, it’s the Standard Model Lagrangian, which covers the dynamics of every kind of particle and all of their interactions. Notably, it doesn’t cover gravity, but be cool. It’s a work in progress.

In some sense this equation is compressed data. All the relevant dynamics are there, but there’s a lot of unpacking to do before that becomes remotely obvious. All equations must have some context before they do anything or even mean anything. That’s why math books are mostly words. “2+2=4” means nothing to an alien until after you tell them what each of those symbols means and how they’re being used. In the case of the Standard Model Lagrangian, each of these symbols means a lot, the equation itself uses cute shorthand tricks, and it doesn’t even describe dynamics on its own without tying in the principle of least action. But given all that, it’s describing the most complicated thing we can describe, which is nearly everything, without being needlessly verbose (“mathbose”?) about it.

**Answer Gravy**: This isn’t part of the question, but if you’ve taken intro physics, you’ve probably seen the equations for kinetic energy, momentum, and acceleration in a uniform gravitational field (like the one you’re experiencing right now). But unless you’re actually a physicist, you’ve probably never been freaked out by seeing a Lagrangian work. This gravy is full of calculus and intro physics.

The “action”, $S$, is a function of the path a system takes, $x(t)$. More specifically, it’s the integral of the Lagrangian between any two given times:

$$S[x] = \int_{t_{1}}^{t_{2}} L\left(x(t), \dot{x}(t)\right)\,dt$$

where t_{1} and t_{2} are the start and stop times, $x(t)$ is a path, $\dot{x}(t)$ is the time derivative (velocity) of that path, and $L(x,\dot{x})$ is some given function of $x$ and $\dot{x}$. If you want to choose a path that extremizes (either minimizes or maximizes) S, then you can do it by solving the Euler-Lagrange equations:

$$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{x}_{i}}\right) - \frac{\partial L}{\partial x_{i}} = 0$$

These are called the Euler-Lagrange equations (plural) because they’re actually several equations. Each different variable (x_{1}=x, x_{2}=y, x_{3}=z) tells you something different. In regular ol’ calculus, if you want to find the value of x that extremizes a function f(x), you solve f’(x)=0 for the value x. Using the Euler-Lagrange equations is philosophically similar: to find the path that extremizes S, you solve them for the path $x(t)$.

The Lagrangian from earlier, for a free-falling object near the surface of the Earth, is:

$$L = \frac{1}{2}m\left(\dot{x}^{2} + \dot{y}^{2} + \dot{z}^{2}\right) - mgz$$

For z:

$$\frac{\partial L}{\partial z} = -mg \quad\textrm{and}\quad \frac{\partial L}{\partial \dot{z}} = m\dot{z}$$

So the E-L equation says:

$$\frac{d}{dt}\left(m\dot{z}\right) - \left(-mg\right) = 0$$

or

$$\ddot{z} = -g$$

In other words, “everything accelerates downward at the same rate”. Doing the same thing for x or y, you get $\ddot{x} = \ddot{y} = 0$, which says “things don’t accelerate sideways”. Both good things to know.

If you wanna be even slicker, note that this Lagrangian is independent of time. That means that $\frac{\partial L}{\partial t} = 0$. Therefore, applying the chain rule:

$$\frac{dL}{dt} = \sum_{i}\left(\frac{\partial L}{\partial x_{i}}\dot{x}_{i} + \frac{\partial L}{\partial \dot{x}_{i}}\ddot{x}_{i}\right)$$

But we have the E-L equations! Plugging those in:

$$\frac{dL}{dt} = \sum_{i}\left(\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{x}_{i}}\right)\dot{x}_{i} + \frac{\partial L}{\partial \dot{x}_{i}}\ddot{x}_{i}\right) = \frac{d}{dt}\sum_{i}\frac{\partial L}{\partial \dot{x}_{i}}\dot{x}_{i}$$

And therefore:

$$\frac{d}{dt}\left(\sum_{i}\frac{\partial L}{\partial \dot{x}_{i}}\dot{x}_{i} - L\right) = 0$$

This thing in the parentheses is constant (since it never changes in time). In the case of $L = \frac{1}{2}m\left(\dot{x}^{2} + \dot{y}^{2} + \dot{z}^{2}\right) - mgz$ we find that this constant thing is:

$$\sum_{i}\frac{\partial L}{\partial \dot{x}_{i}}\dot{x}_{i} - L = m\left(\dot{x}^{2} + \dot{y}^{2} + \dot{z}^{2}\right) - \left(\frac{1}{2}mv^{2} - mgz\right) = \frac{1}{2}mv^{2} + mgz$$

Astute students of physics 1 will recognize this as the sum of kinetic energy plus gravitational potential energy. In other words: this is a derivation of the conservation of energy for free-falling objects. A more general treatment can be done using Noether’s Theorem, which says that every symmetry produces a conserved quantity. For example, a time symmetry ($L$ doesn’t change in time) leads to conservation of energy and a space symmetry ($L$ doesn’t change in some direction) leads to conservation of momentum in that direction.
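If you’d like to watch the conserved quantity do its thing, here’s a quick numeric check in Python (my addition, not part of the original derivation): integrate $\ddot{z} = -g$ forward in time and confirm that $\frac{1}{2}mv^{2} + mgz$ holds still.

```python
# Numeric check that E = (1/2)mv^2 + mgz really is constant for a
# free-falling object obeying z'' = -g (and x'' = 0 sideways).
m, g, dt = 2.0, 9.8, 1e-5
z, vz = 100.0, 5.0           # initial height and vertical speed
vx = 3.0                     # sideways speed: constant, since x'' = 0
E0 = 0.5 * m * (vx**2 + vz**2) + m * g * z

for _ in range(100_000):     # integrate one second of motion
    vz -= g * dt             # z'' = -g
    z += vz * dt

E1 = 0.5 * m * (vx**2 + vz**2) + m * g * z
print(E0, E1)  # equal, up to tiny integration error
```

The small drift between E0 and E1 is an artifact of the finite time step, not of the physics; shrink `dt` and it shrinks with it.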

Quantum immortality is a philosophical thought experiment about what happens when you combine quantum many-state-ness with the anthropic principle and survivorship bias. It’s worth underscoring that this is a *thought experiment*, not advice. It’s an interesting idea, but I wouldn’t bet my life on it.

The anthropic principle says that whatever needed to happen in order for an observer to be observing, happened. For example, because you’re reading this, then (among other things) you must have access to the internet, speak English, live in an environment capable of supporting life, and were born rather than not born. For anyone/anything not reading this, none of those things may necessarily be true.

On its own, the anthropic principle is already pretty powerful. It is the governing principle behind why “you are here” signs are always accurate. It says that nobody ever regrets playing Russian Roulette, they only regret inviting their friends. And it explains why the Earth is capable of supporting introspective critters such as ourselves, despite all of the incredibly unlikely things that had to go right for that to happen. That last shouldn’t seem too surprising; if only one in a million planets can support life, where would you expect to find living things?

It’s the “quantum” that makes the quantum suicide thought experiment interesting. You’ll probably find a planet capable of supporting life in the universe, because there are a *lot* of opportunities (based on the other solar systems we can see) and there’s evidently a non-zero chance. What quantum theory does is change that “probably” to “definitely” if there’s ever a non-zero chance.

In classical physics (which is to say: when you just walk around and use your eyeballs), everything seems to be in a single state. Your coat is on one hook, not many. Your front door is open or closed, but not both. If you lose your keys, they’re *someplace* specific, even if you don’t happen to know where.

In quantum physics on the other hand, when we assume that things are in only one state our predictions fail. The most famous example of this is the double slit experiment, where coherent light is shined on a pair of slits (regular readers are no doubt sick of regularly reading about the vaunted double slit experiment). Instead of seeing two bars of light on the far side of the slits, we instead see “beats”; many bars of light corresponding to interference between every possible path through the two slits. The terrifying thing about the double slit experiment is that it continues to work even when the light intensity is turned down to just one photon at a time. If we assume that each photon goes through only one slit, then we expect to see a build up of photons in just two bars. The fact that we see many bars indicates that the photons go through both slits.

Other than being clearly weird, and simple enough for first-year physicists to do the math themselves, there is nothing special about the double slit experiment. The same “things can be in many states” idea applies across the board in quantum theory. It is the backbone of chemistry, particle interactions of all kinds, freaking *everything*.

And photons aren’t special either. You can do the double slit experiment with anything (as far as we know), it’s just that bigger things are harder to work with. The largest things to successfully demonstrate going through both slits are molecules of C_{284}H_{190}F_{320}N_{4}S_{12}. That’s a modest 810 interconnected atoms! Our inability to demonstrate the quantumness of macroscopic things seems to be an engineering barrier rather than some undiscovered physical law. Every indication so far is that there’s no division between the “quantum world” and the “classical world”. Instead (like every other physical law), quantum laws seem to apply universally.

Notice that the refrain is “anything that can happen does” and not “everything happens”. Assuming the laws do apply on all scales, one of their more frustrating predictions is that the probability of observing yourself somewhere else is zero and the probability of observing two or more of yourselves is likewise zero (literally: wherever you go, there you are). These situations are impossible. Despite being in many states, none of your states directly interact with the others. Other quantum versions of yourself are like the bottom half of a Muppet; something you feel like you should be able to see, but there’s a good reason you never do.

This is not without precedent in science. For example, Newton’s laws simultaneously predict that 1) the Earth must be spinning and hurtling through space (based on how the other planets and stars move in the sky) and 2) that you’d never notice (because you’re hurtling along with the Earth). Arguably, this “it’s very weird, but it’s also really hard to notice” aspect of quantum mechanics is why it was discovered *after* bronze and the wheel.

The double slit experiment is so clean and easy to work with because you only have to worry about two states: the path light takes through one slit or the other. In reality the slits have some non-zero physical size, so there are many different paths photons can take through each. Those paths are all so similar that assuming they’re the same is good enough for an undergrad lab class. But if you want to nail down the *exact* pattern you see projected on the screen, you have to account for every possible path precisely. This is true whenever quantum theory applies (e.g., chemistry); counting the most likely quantum states buys you a decently accurate prediction, but the more states you take into account, the more accurate your prediction.

Low-probability states don’t add much, but they demonstrably add more than zero, so they must be physically real. A mountain and a pebble affect the world differently, but they’re “equally real”. So, assuming QM laws apply universally, every possible outcome of an event is physically realized and the fact that you can only experience situations where you’re alive means that you’ll be funneled into those realities where you continue to live.

There’s nothing ultimately special about life or death (unless you’re alive, and then it’s suddenly super important), they’re just more interesting to consider than, say, a quantum number generator that accidentally gives you sequential numbers forever. Quantum suicide makes some tricks a lot easier since the same quantumy arguments apply very broadly. For example, if a situation can end in either stubbing your toe or not, both results will occur. In some parallel histories, when you kick a brick barefoot you’ll miss. Still. What do you honestly think will happen if you try?

The central tenet of quantum immortality (that anything that can happen does) applies to everything and every combination of things, it’s just that we’re good at worrying about and keeping track of ourselves. There’s a vanishingly small probability that a Beanie Baby will “survive” intact for trillions of years (when it will have nearly doubled in value), so in some tiny set of the many possible futures it will. The fact that it doesn’t have a point of view means that it won’t be bothered one way or the other.

If there’s X chance that you’ll be walking around in a thousand years, then there’s a chance of about X^{2} that you and some other particular person will both be walking around. In other words, in some tiny fraction of possible futures you get to persist, and in a substantially more tiny (but still not quite zero) fraction you and Connor MacLeod are both alive. Until the gathering at least, after which there can be only one.

So this gives us a way of testing out the completely insane idea of quantum immortality. If a bunch of us are accidentally alive in a thousand years, let’s all meet up and compare notes. We don’t need to bother agreeing on a meeting place or time, since quantum immortals should be used to relying on happenstance.

Quantum immortality is, almost by definition, a subjective experience. Clearly it’s possible to observe other people passing away (condolences everyone), but if quantum immortality is real, you’ll find out on your own. This has given rise to some clever science fiction, but not a lot of useful science fact. Point is: planning to live forever has not traditionally been an effective way to spend not-forever. I find myself alive, the improbable result of an endless string of nearly impossible coincidences, on a world that may be unique in the universe. On the other hand, that’s everybody’s story and, not for nothing, don’t risk your life over some silly idea.


There are good reasons why casinos make money and “how to gamble” books are longer than two sentences. According to the law of large numbers, if you’re likely to lose a little bit of money in a game, then playing a lot of times effectively guarantees that you’ll lose a lot of money. It doesn’t matter if you think you have “a system”, gambling is a big business and the business is: you lose. But if you do want to gamble there are three simple rules to keep in mind:

1) Don’t.

2) The second you’re ahead, walk away. You’ve already won and it’s all downhill from there.

3) The second you’ve lost money, walk away. It’s gone forever and if you stay it’s just going to get worse.

This system sounds like a clever way to beat any game: just place larger and larger bets, to cover all of your previous losses and a little more, because *eventually* you’ll win. Then rinse and repeat until Musk and Bezos come begging for a handout.

At first blush this seems like a reasonable trick. And technically it does work. The problem with it is nestled firmly inside that “eventually”. Like so many things in life, this plan works great if you already have an infinite amount of money. With infinite funds, you will win eventually. But without them, you’re screwed as much as always.

Take a simple “double or nothing” scenario like the pass line in craps, where you have a $p = 0.493$ (49.3%) probability of winning. Assuming you bet 1 “chip” (or whatever currency makes sense), then in the first round you can either lose or gain 1 new chip. The amount of winnings you can expect (a mathematician would call this “the expected winnings”) is the probability of winning times how much you win, minus the probability of losing times how much you lose: $(0.493)(1) - (0.507)(1) = -0.014$ chips. This is very intentionally negative, because on average you’re supposed to lose.

But if you do lose, then on the second round you can bet 2 chips. That way if you win, you’ll recoup your 1 chip loss and come out 1 chip ahead. If you lose again, you’re down 3, so by doubling your bet again (4 chips), you might recoup your net losses and come out 1 chip ahead. Keep doubling until you win and then you’re back where you started, plus one shiny new chip.

In this scenario you’re guaranteed to gain 1 chip if you keep playing. The question is: how long can you keep playing? What’s missing in the “doubling system” so far is the possibility of losing everything (which is a big red flag for any gambling system). You’re likely to gain a tiny amount (by winning any time before you run out of chips) and unlikely to lose everything (by losing until you run out of chips), but *on average* you can expect to walk out of the casino with less than you walked in with.

After n rounds you’ve bet $1 + 2 + 4 + \cdots + 2^{n-1} = 2^{n} - 1$ chips (you can add this up easily because it’s a geometric series). For example, after three rounds you’ve bet a total of $1 + 2 + 4 = 7 = 2^{3} - 1$ chips. If you’ve got B chips in your pocket, then after $\log_{2}(B+1)$ rounds you’ll be broke.

For example, 1023 chips buys you $\log_{2}(1024) = 10$ rounds. For the pass line, the probability of losing ten games in a row and going broke is $(0.507)^{10} \approx 0.0011$ and the probability of winning before then and gaining one chip is $1 - (0.507)^{10} \approx 0.9989$. So your expected winnings are $(0.9989)(1) - (0.0011)(1023) \approx -0.15$ chips. Despite its cleverness, this system is actually worse than not being clever: if you just bet 1 chip ten times in a row, you can expect to lose about 0.14 chips (and only ten at most).
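The bookkeeping above is easy to mechanize. A sketch in Python (my addition), grinding through the same numbers for a 1023-chip bankroll on the pass line:

```python
# The doubling system on the pass line: p = 0.493 per round, and 1023 chips
# buys exactly ten doubled bets (1 + 2 + 4 + ... + 512 = 1023).
p, q = 0.493, 0.507
rounds, bankroll = 10, 1023

p_broke = q ** rounds          # lose all ten rounds: the bankroll is gone
p_ahead = 1 - p_broke          # win some round first: net gain of 1 chip
expected = p_ahead * 1 - p_broke * bankroll

# Compare with the "no system" system: ten flat 1-chip bets.
flat = rounds * (p * 1 - q * 1)
print(expected, flat)  # both negative, and the clever system is a bit worse
```

That tiny chance of going broke carries a huge price tag, which is exactly what drags the average below the boring flat-betting strategy.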

Long story short, you can change the probability of winning/losing and the amount that you might win/lose by using different strategies or playing different games, but on *average* you’ll always lose money. By increasing your bet every time you lose, you really can cover your losses and more. Usually. But if you’re one of those unlucky people without infinite funds, then you have to take into account the possibility of going broke. When you do account for every eventuality, you’ll find the casino’s edge is intact.

All that doubling your bets really does is make the chances of winning better while making the consequences of losing worse.

There are a few rare situations where the casino’s edge disappears. A group of MIT kids famously found one in blackjack and exploited it for years. But it wasn’t a big edge, it was very difficult to even know that it was there, and they had to play consistently and with a large bankroll for a long time to overcome the Gambler’s Ruin (which, very succinctly, is: you’ll run out of money before the casino does). But of course gambling isn’t a game, it’s a business. When the casinos figured out what those MIT kids were doing, they just kicked them out.

Casinos are perfectly happy to let you test your theories out. So if you think you’ve got a winning system, it *might* work. But don’t bet on it.

**Gravy**: When you want to prove something to death, it helps to do it in extreme generality with everything left as non-specified variables. That way you cover the “yeah, but what if…” questions all at once, which is good, but you often end up in an algebra blizzard, which is work.

Assume that the amount of money you make after placing a bet is described by a gain factor, f. So if you bet x chips, you either lose x chips or gain fx chips. For example, f=1 is “double or nothing”; if you play x chips and win you get 2x chips back, so you *gained* 1x chips.

You can sum up the casino’s edge pretty succinctly: fp-q<0. This means that the average amount you lose from betting a chip, q, is more than the average amount you gain, pf. For example, the pass line in craps is f=1, p=0.493, q=0.507, and fp-q=(1)(0.493)-(0.507)=-0.014. Or, if you bet on any particular number in roulette, f=35, p=1/37, q=36/37, and fp-q=(35)(1/37)-(36/37)=-1/37. fp-q<0 is practically a natural law on the casino floor.

Mathematically speaking, the system here is really simple: take whatever your last bet was and multiply it by m (m=2 for doubling, m=3 for tripling, etc.). On the first round you bet one chip, on the nth round you bet $m^{n-1}$ chips, and if you win on the nth round, you’ll earn f times your last bet, $fm^{n-1}$.

However, if you lose n rounds in a row, you’ll have bet and lost $1 + m + m^{2} + \cdots + m^{n-1} = \frac{m^{n} - 1}{m - 1}$ chips. This means that if you lose the first n-1 rounds and win on the nth, then your take-home winnings are $W(n) = fm^{n-1} - \frac{m^{n-1} - 1}{m - 1}$.

If p is the probability of winning and q is the probability of losing, then the probability of n-1 losses followed by a win is $P(n) = q^{n-1}p$. So P(n) is the probability your ship will come in on the nth round and W(n) is how many chips you gain when it does. With those two equations in hand, you can find your expected winnings, $E[W] = \sum_{n=1}^{\infty} P(n)W(n)$.

That sum blows up when $qm > 1$ (with m big enough that each win is a net gain), so if you can keep playing forever, then just by picking a large enough multiplier, m, the expected return is infinite. This is because the number of chips you get back after n rounds increases very exponentially with the number of rounds you play and there’s no chance of ever losing. It’s good to be richer than god.

But in practice, that’s *pretty* unreasonable. When you walk into the casino, you have a finite number of chips and can play at most B rounds (where B is some distinctly not-infinite number) at which point you run out. In this far more realistic situation, you might win in any of the first B rounds or you might lose every one of them. It’s this “lose every time” scenario that’s ruled out by being infinitely rich. In B rounds you bet a total of $\frac{m^{B} - 1}{m - 1}$ chips, which is all of them (because that’s how B is defined). The chance of losing all B rounds is $q^{B}$.

So your expected winnings are:

$$E[W] = \sum_{n=1}^{B} q^{n-1}p\left(fm^{n-1} - \frac{m^{n-1} - 1}{m - 1}\right) - q^{B}\frac{m^{B} - 1}{m - 1} = \left(pf - q\right)\frac{(qm)^{B} - 1}{qm - 1}$$

In short, E[W]<0. There are two subtle truths that got woven in there. First, p+q=1 (the chance of either winning or losing a given round is 1). And second is the casino’s edge, pf<q. This is ultimately where that inequality comes from.
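You can let a computer confirm the algebra. This sketch (my addition) enumerates every possible outcome of the multiply-after-a-loss system and checks that the expectation comes out negative whenever the house has its edge, pf < q:

```python
def expected_winnings(p, f, m, B):
    """Expected take-home for the multiply-your-bet system: play at most B
    rounds, betting m**(n-1) chips on round n, stopping at the first win."""
    q = 1 - p
    total = 0.0
    for n in range(1, B + 1):                 # first win comes on round n
        lost = (m ** (n - 1) - 1) / (m - 1)   # chips lost in rounds 1..n-1
        total += q ** (n - 1) * p * (f * m ** (n - 1) - lost)
    total -= q ** B * (m ** B - 1) / (m - 1)  # or: lose every single round
    return total

# Craps pass line, single-number roulette, and a made-up lopsided game:
# in every case pf - q < 0, and in every case E[W] < 0.
cases = [(0.493, 1, 2, 10), (1 / 37, 35, 2, 8), (0.4, 1, 3, 6)]
results = [expected_winnings(p, f, m, B) for p, f, m, B in cases]
print(results)
```

For the pass-line numbers used earlier (m=2, B=10, 1023 chips) this lands on the same ≈ −0.15 chips, and by linearity of expectation the whole sum collapses to (pf − q) times a positive factor, so the sign is the casino’s edge and nothing else.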

This sort of terrifying algebra storm is why math is so damnably useful: even if you can’t keep an idea in your head all at once, you can still explore it and learn new things from it.


There’s a vast conspiracy of optometrists and ophthalmologists who will try to convince you that our three types of cone cells somehow limit our vision, in a way that creatures with a wider variety of cone cells are less limited. But of course: color is color and you see it or you don’t. There are plenty of colors beyond the hegemonic “standard rainbow” and we don’t see them for exactly the same reason we don’t see unicorns or bigfeet: they’re very rare and never where you’d expect.

This story is nothing new. For example, regular old blue is a difficult color to isolate. If it weren’t for the sky, we’d barely ever see it in nature. And because “rare” = “expensive” = “classy”, we have “royal blue” (since royals love to be classy). Once upon a time, seeing something vibrantly blue would have really caught your attention: instantly recognizable, yet wholly alien. Seeing one of the many colors beyond the standard rainbow is a similar experience.

Sometimes a new color can be found just by looking somewhere no one else has bothered to look. For example, in 1768 Jeanne Baret, during her circumnavigation of the Earth in disguise, discovered three new colors, “och”, “ocher”, and “ochest”, only one of which is still in common use today. The first and last of these are totally distinct from any other color or combination of colors, but such were the chromatic strictures at the time, that they were shoehorned into the now defunct “och scale” (hence the names).

New colors are often found by painters who, obsessed with extending their craft, may stumble upon new materials, colors, and palettes previously unseen on Earth. Desperate to keep the momentum after gaining fame for his signature style, “plusieurs visages sur la même tête” (“several faces on the same head”), Picasso invented a never before seen shade of cherry-blue. With his usual flair for nomenclature, Picasso named his discovery “Color on a Painter’s Palette”, though it rapidly became known as “naughtblue” in the artist community (to avoid any confusion with his earlier work).

His stunning naughtblue work, “Woman Standing Around”, is now in a private collection and so lost to the world forever.

Rothko was a painter more interested in color than you are in *anything*. After a rough night in 1942, book-ended by a fifth of absinthe and Nietzsche’s “Ecce Homo” (the ordering of which is now hotly debated), Rothko “regained cognizance” to find that he had created violet’s precise and indisputable chromatic opposite, “outdigo”. Tragically, outdigo paint famously changes color when it dries, so he was unable to create with it. Indeed, he kept his only supply sealed in an unmarked can until it was accidentally sold at a lawn sale to pay for cooking lessons.

Little is known of outdigo, beyond Rothko’s few scribbled notes:

“*It has come to me, outdigo. It shall be an heirloom of my studio. All those who follow in my bloodline shall be bound to its fate, for I shall risk no hurt to the color. It is precious to me, though I buy it with a great pain. The markings upon the canvas begin to fade. The color, which at first was as clear as red flame, has all but disappeared – a secret now that only turpentine can tell.*”

So there are plenty of new colors, it’s just a matter of persistence, luck, and looking at stuff that no one else has ever bothered to look at.
