Where M_{P} and A_{P} are the mass and acceleration of a planet, M_{S} is the mass of the Sun, R is the distance between them, and G is a universal constant. What this rather bold statement says is “if you exist near the Sun, then you are accelerating toward it”. Each of the planets, moons, grains of dust, etc. says the same thing; it’s just that, with 99.86% of the mass in the solar system, the Sun says it loudest.
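The planet’s own mass cancels out of that relationship, so the acceleration is just $A_P = \frac{GM_S}{R^2}$: everything at a given distance from the Sun falls toward it equally hard. Just to put a number on it, here’s a quick back-of-the-envelope check (a sketch using rounded values for the constants):

```python
# Rough values: gravitational constant, mass of the Sun, Earth-Sun distance.
G   = 6.674e-11   # m^3 kg^-1 s^-2
M_S = 1.989e30    # kg
R   = 1.496e11    # m

# Acceleration of anything at Earth's distance toward the Sun: A = G*M_S / R^2
A = G * M_S / R**2
print(f"{A:.4f} m/s^2")   # ~0.0059 m/s^2, tiny next to the 9.8 m/s^2 at Earth's surface
```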

A force, like gravity, *accelerates* the object it acts on. So to understand what a force does it’s important to understand acceleration. Velocity describes how fast your position is changing, while acceleration describes how fast your velocity is *changing*.

“Velocity” is different from “speed” because velocity is a description of how fast you’re going *and* in which direction; “10 mph north” is a velocity, while “10 mph” is a speed. So you can have an acceleration that changes your velocity by changing your speed and/or by changing your direction.

Imagine you’re in a car (your velocity points forward):

If you accelerate forward, you speed up.

If you accelerate backward, you slow down (“decelerate”).

If you accelerate to the right or left, you turn in that direction but maintain the same speed.

Notice that when you talk about acceleration this way, the push you feel into your seat when you step on the gas, the push you feel into your seat belt when you brake, and the centrifugal force pushing you to the left when you turn right are suddenly all the same thing.

With planets the same rules apply. A planet moving around the Sun in a circular orbit always has the Sun about 90° to the side of the direction it’s moving. This means that the planet is always turning, but always moving at about the same speed. The planets are moving so fast that by the time they’ve turned a little, they’ve moved far enough that the Sun is in a new position, still 90° to the side.

So that’s how a planet can accelerate toward the Sun forever without getting any closer. The sideways motion of planets is due to the fact that if a planet were not moving sideways, it would find itself in the Sun in short order. In fact, the Sun is nothing more than a massive collection of all the matter from the formation of the solar system that wasn’t moving sideways fast enough (which is nearly all of it).
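Here’s a minimal numerical sketch of that balancing act (a toy simulation, not how orbital mechanics is actually done): start a test planet at Earth’s distance with a purely sideways speed of $\sqrt{GM_S/R}$, let it fall toward the Sun every step for a simulated year, and check that it’s still the same distance out.

```python
import math

# Toy two-body simulation: the planet accelerates toward the Sun the entire
# time, yet its distance from the Sun barely changes.
G, M_S, R = 6.674e-11, 1.989e30, 1.496e11
x, y = R, 0.0
vx, vy = 0.0, math.sqrt(G * M_S / R)   # purely "sideways" motion

dt = 3600.0                            # one-hour time steps

def accel(x, y):
    r = math.hypot(x, y)
    return -G * M_S * x / r**3, -G * M_S * y / r**3

ax, ay = accel(x, y)
for step in range(24 * 365):           # about one year
    # velocity Verlet: stable enough for a quick demonstration
    x += vx * dt + 0.5 * ax * dt**2
    y += vy * dt + 0.5 * ay * dt**2
    ax_new, ay_new = accel(x, y)
    vx += 0.5 * (ax + ax_new) * dt
    vy += 0.5 * (ay + ay_new) * dt
    ax, ay = ax_new, ay_new

print(math.hypot(x, y) / R)            # ~1.0: still the same distance out
```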

*Why* things end up in circular orbits is a more subtle question. The quickest explanation is that things in not-circular orbits run into trouble until either their orbit is sufficiently round or they’re destroyed. It’s not that circular orbits are somehow better, it’s just that other orbits carry more risk of serious impacts or gravitational interactions (e.g., with Jupiter) that may lead to short, unfortunate orbits.

Assuming that an orbit is stable, it will be an ellipse (there’s a post here on *exactly* why, but it’s a whole thing). A circle is the simplest kind of ellipse, but ellipses can be extremely stretched out. For example, comets have very elliptical orbits (like Sedna in the picture below). In these orbits the comet is mostly moving toward and away from the Sun, so for them the Sun’s pull *mostly* changes their speed and changes their direction less.

There’s nothing special about the orbits the planets are in. The eight (or nine or more) planets we have in the solar system aren’t the only planets that formed, they’re the only planets left. When things are in highly elliptical orbits they tend to “drive all over the road” and smack into things. When things smack into each other, one of a few things happens; generally they either break or they don’t. When we look at our planetary neighbors we see craters indicating impacts right up to the limit of what that planet or moon could handle without shattering. Presumably there *should* be impacts bigger than a planet can stand, but (not surprisingly) those impacts don’t leave craters for us to find.

So objects with extremely elliptical orbits are more likely to get blown up. But even when two objects hit each other and merge, the resulting trajectory is an average of both objects’ original trajectories, and that tends to be more circular. This is a part of accretion, and Saturn’s rings provide a beautiful example of the nearly perfect circular orbits that result from it.
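A cartoon version of that averaging, just to make the momentum bookkeeping concrete (a sketch with made-up numbers, treating the collision as a perfect merger):

```python
# In a perfectly inelastic merger momentum is conserved, so the merged body's
# velocity is the mass-weighted average of the originals. Two bodies with
# opposite in/out (radial) motion but similar sideways motion merge into
# something moving almost purely sideways, i.e. more circular.
def merge(m1, v1, m2, v2):
    m = m1 + m2
    return m, tuple((m1 * a + m2 * b) / m for a, b in zip(v1, v2))

# velocities given as (radial, sideways) components, in arbitrary units
m, v = merge(1.0, (+5.0, 30.0), 1.0, (-4.0, 28.0))
print(m, v)   # (2.0, (0.5, 29.0)): the in/out motion mostly cancels
```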

Given a tremendous amount of time, a big blob of material in space tends to condense into a ball (with most of the matter) and a thin disk of leftover material traveling in circular orbits around it.

Oranges:

Imagine taking an orange wedge and opening it so that the triangles all point “up” instead of towards the same point. If you interlaced two of these then you’d have a small brick that’s roughly rectangular.

As more triangles are used, the curved end produces less pronounced bumpiness and the straight sides come closer and closer to being straight up and down, making the brick rectangular. The height becomes equal to the radius, while the length is half of the circumference, πR, since the full circumference (C = 2πR) now finds itself split between the top and bottom. As the number of triangles “approaches infinity” the circle can be taken apart and rearranged to fit almost perfectly into an “R by πR” box with an area of πR^{2}.
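If you’d like to watch that limit happen numerically, here’s a quick sketch: treat each wedge as a genuine triangle with its point at the center (apex angle 2π/N, sides of length R), add up their areas, and crank up N.

```python
import math

# Cut a circle of radius R into N thin triangular wedges. Their total area is
# N * (1/2) * R^2 * sin(2*pi/N), which heads to pi*R^2 as N grows.
R = 1.0
for N in (6, 60, 600, 6000):
    area = N * 0.5 * R**2 * math.sin(2 * math.pi / N)
    print(N, area)      # approaches math.pi * R**2 ≈ 3.14159...
```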

This is why calculus is so damn useful. We often think of infinity as being mysterious or difficult to work with, but here the infinite slicing just makes the conclusion infinitely clean and exact: A = πR^{2}.

Calculus:

On the mathier side of things, the circumference is the differential of the area. That is, if you increase the radius by “dr”, which is a tiny, tiny bit, then the area increases by Cdr, where C is the circumference. We can use that fact to describe a disk as the sum of a lot of very tiny rings. “The sum of a lot of tiny _____” makes mathematicians reflexively say “use an integral”.

Every ring has an area of Cdr = (2πr)dr. Adding them up from the center, r=0, to the outer edge, r=R, is written: $\int_0^R 2\pi r\,dr = \left[\pi r^2\right]_0^R = \pi R^2$.
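The same sum can be done numerically instead of symbolically (a quick sketch: approximate each thin ring’s area as circumference times thickness and add them up):

```python
import math

# Summing thin rings: each ring at radius r with thickness dr contributes
# roughly (2*pi*r)*dr. As dr shrinks, the total approaches pi*R^2.
R = 1.0
for n_rings in (10, 100, 1000, 10000):
    dr = R / n_rings
    area = sum(2 * math.pi * (k * dr) * dr for k in range(n_rings))
    print(n_rings, area)    # approaches math.pi * R**2 ≈ 3.14159...
```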

This is a beautiful example of understanding trumping memory. A mathematician will forget the equation for the area of a circle (A=πR^{2}), but remember that the circumference is its differential. That’s not to excuse their forgetfulness, just explain it.

**Physicist**: In the language of mathematics there are “dialects” (sets of axioms), and in the most standard, commonly-used dialect you can prove that 0.999… = 1. That system is the one generally taught now because it’s useful (in a lot of profound ways). If you want to do math where 1/infinity is a definable and non-zero value, you can, but it makes math unnecessarily complicated (for most tasks).

The way the number system is generally taught (at the math-major level, where the differences become important) is that the real numbers are defined such that (very long story short) 1/infinity = 0 and there isn’t a “next number” for any number. That is, if you think you’ve found a number, x, that’s closer to 1 than any other number, then I can find a number halfway between it and 1, (1+x)/2, that’s even closer. That’s not a trivial statement. In the system of integer numbers there *is* a next number; for 3 it’s 4, for 26 it’s 27, etc. In the system of real numbers *every* number can be added, subtracted, multiplied, and divided (except by zero) without “leaving” the real numbers. That leads to the fact that we can squeeze a new number between any two different numbers. In particular, there’s no greatest number less than one. If there were, then you couldn’t fit another number between it and one, and that would make it a big weird exception. Point is: it’s tempting to say that 0.999… is the “first number below 1”, but that’s not a thing.

The term “real numbers” is just a name for a “sand box” of mathematical tools that have become standard because they’re useful. However! There are other systems where “very very very slightly less than 1”, or more precisely “less than one, but greater than every number that’s less than one”, makes mathematical sense. These systems aren’t invalid or wrong, they’re just… not as pretty and fluid as the simple (as simple as it reasonably can be), solid, dull-as-dishwater real number system.

In the set of “real numbers” (as used today) a number can be *defined* as the limit of the decimal expansion taken one digit at a time. For example, the number “2” is {2, 2.0, 2.00, 2.000, …}. The “square root of 2” is {1, 1.4, 1.41, 1.414, 1.4142, …}. The number, and everything you might ever want to do with it (as a real number), can be done with this sequence of ever-longer decimals (although, in practice, there are usually more sophisticated methods).

These sequences are “equivalent” and describe the same number if they get (arbitrarily) closer and closer to that same number forever. Two sequences don’t need to be identical to be equivalent. The sequences {1, 1.0, 1.00, 1.000, …} and {0, 0.9, 0.99, 0.999, …} both get closer and closer to each other and to the value “1” forever, so they’re equivalent. In absolutely every way that counts (in terms of the real numbers), the number “0.99999…” and the number “1” or “1.0000…” are exactly the same.

It does seem very bizarre that two numbers that look different can be the same, but there it is. This is *basically* the only exception; you can write things like “0.5 = 0.49999…”, but the same thing is going on.
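For anyone who wants the bookkeeping spelled out, the usual route is the geometric series (this is the standard textbook calculation, nothing exotic):

$0.999\ldots = \sum_{k=1}^{\infty}\frac{9}{10^k} = 9\cdot\frac{1/10}{1-1/10} = 1 \qquad\textrm{and}\qquad 0.4999\ldots = \frac{4}{10}+\sum_{k=2}^{\infty}\frac{9}{10^k} = \frac{4}{10}+\frac{1}{10} = \frac{1}{2}$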

**Physicist**: He’s very intentionally saying “and” and not saying “or”.

When something is in more than one place, or state, or position, we say it’s in a “superposition of states”. The classic example of this is the “double slit experiment”, where we see evidence of a *single* photon interfering with itself through *both* slits.

Schrödinger’s Equation describes particles (and by extension the world) in terms of “quantum wave functions”, and not in terms of “billiard balls”. His simple model described the results of a variety of experiments very accurately, but required particles to behave like waves (like the interference pattern in the double slit) and be in multiple states. In those experiments, when we actually make a measurement (“where does the photon hit the photographic plate?”) the results are best and most simply described by that wave. But while a wave describes how the particles behave and where they’ll be, when we actually measure the particle we always find it to be in one state.

“Schrödinger’s Cat” was a thought experiment that he (Erwin S.) came up with to underscore how weird his own explanation was. The thought experiment is, in a nutshell (or cat box): there’s a cat in a measurement-proof box with a vial of poison, a radioactive atom (another known example of quantum weirdness), and a bizarre caticidal Geiger counter. If the counter detects that the radioactive atom has decayed, then it’ll break the vial and kill the cat. To figure out the probability of the cat being alive or dead you use Schrödinger’s wave functions to describe the radioactive atom. Unfortunately, these describe the atom, and hence the cat, as being in a superposition of states between the times when the box is set up and when it’s opened (in between subsequent measurements). Atoms can be in a combination of decayed and not decayed, just like the photons in the double slit can go through both slits, and that means that the cat must also be in a superposition of states. This isn’t an experiment that has been done or could reasonably be attempted. At least, not with a cat.

Schrödinger’s Cat wasn’t intended to be an educational tool, so much as a joke with the punchline “so… it works, but that’s *way* too insane to be right”. At the time it was widely assumed that in the near future an experiment would come along that would overturn this clearly wonky interpretation of the world and set physics back on track.

But as each new experiment (with stuff smaller than cats, but still pretty big) verified and reinforced the wave interpretation and found more and more examples of quantum superposition, Schrödinger’s Cat stopped being something to be dismissed as laughable, and turned instead into something to be understood and taken seriously (and sometimes dropped nonchalantly into hipster conversations). Rather than ending with “but the cat obviously must be alive OR dead, so this interpretation is messed up somewhere” it more commonly ends with “but experiments support the crazy notion that the cat is both alive AND dead, so… something to think about”.

If it bothers you that the Cat doesn’t observe itself (why is opening the box so important?), then consider Schrödinger’s Graduate Student: unable to bring himself to open one more box full of bad news, Schrödinger leaves his graduate student to do the work for him and to report the results. Up until the moment that the graduate student opens the door to Schrödinger’s Office, Schrödinger would best describe the student as being in a superposition of states. This story was originally an addendum to Schrödinger’s ludicrous cat thing, but is now also told with a little more sobriety.


In empirical science (science involving tests and whatnot) things are never “proven”. Instead of asking “is this true?” or “can I prove this?” a scientist will often ask the substantially more awkward question “what is the chance that this could happen accidentally?”. Where you draw the line between a positive result (“that’s not an accident”) and a negative result (“that could totally happen by chance”) is completely arbitrary. There are standards for certainty, but they’re arbitrary (although generally reasonable) standards. The most common way to talk about a test’s certainty is “sigma” (pedantically known as a “standard deviation”), as in “this test shows the result to 3 sigmas”. You have to do the same test over and over to be able to talk about “sigmas” and “certainties” and whatnot. The ability to use statistics is a big part of why *repeatable* experiments are important.

“1 sigma” refers to about 68% certainty, or that there’s about a 32% chance of the given result (or something more unlikely) happening by chance. 2 sigma certainty is ~95% certainty (meaning ~5% chance of the result being accidental) and 3 sigmas, the most standard standard, means ~99.7% certainty (~0.3% probability of the result being random chance). When you’re using, say, a 2 sigma standard it means that there’s a 1 in 20 chance that the results you’re seeing are a false positive. That doesn’t *sound* terrible, but if you’re doing a lot of experiments it becomes a serious issue.
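Those percentages come from the normal (“bell curve”) distribution. Assuming that’s what describes your noise, here’s a quick sketch of where they come from, using the error function:

```python
import math

# The chance of landing within n sigmas of the mean of a normal distribution
# is erf(n / sqrt(2)); everything outside that is the "could just be an
# accident" tail.
for n in (1, 2, 3, 7):
    fluke = math.erfc(n / math.sqrt(2))   # two-sided tail probability
    print(f"{n} sigma: {1 - fluke:.12f} certainty, {fluke:.2e} chance of a fluke")
```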

The more data you have, the more precise the experiment will be. Random noise can look like a signal, but *eventually* it’ll be revealed to be random. In medicine (for example) your data points are typically “noisy” or want to be paid or want to be given useful treatments or don’t want to be guinea pigs or whatever, so it’s often difficult to get better than a couple sigma certainty. In physics we have more data than we know what to do with. Experiments at CERN have shown that the Higgs boson exists (or more precisely, a particle has been found with the properties previously predicted for the Higgs) with 7 sigma certainty (~99.999999999%). That’s excessive. A medical study involving *every* human on Earth cannot have results that clean.

So, here’s an actual answer. Ignoring the details about dice and replacing them with a “you win / I win” game makes this question much easier (and also speaks to the fairness of the game at the same time). If you play a single game with another person, then no matter who wins, there’s no way to tell if it was fair. If you play N games, then (for a fair game) a sigma corresponds to $\frac{\sqrt{N}}{2}$ excess wins or losses away from the average. For example, if you play 100 games, then

1 sigma: ~68% chance of winning between 45 and 55 games (that’s 50±5)

2 sigma: ~95% chance of winning between 40 and 60 games (that’s 50±10)

If you play 100 games with someone, and they win 70 of them, then you can feel fairly certain (4 sigmas) that something untoward is going down because there’s only a 0.0078% chance of being that far from the mean (half that if you’re only concerned with losing). The more games you play (the more data you gather), the less likely it is that you’ll drift away from the mean. After 10,000 games, 1 sigma is 50 games; so there’s a 95% chance of winning between 4,900 and 5,100 games (which is a pretty small window).
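If you want to check that 0.0078% figure yourself, it’s the exact two-sided binomial tail for a fair 50/50 game drifting at least 20 wins from the mean over 100 games (a quick sketch):

```python
from math import comb

# Chance of a fair game drifting at least 20 wins from the 50-50 mean
# over 100 games, computed from the exact binomial distribution.
N, mean, drift = 100, 50, 20
one_tail = sum(comb(N, k) for k in range(mean + drift, N + 1)) / 2**N
print(f"P(win >= 70):             {one_tail:.6%}")      # ~0.0039%
print(f"P(at least that far off): {2 * one_tail:.6%}")  # ~0.0078%
```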

Keep in mind, before you start cracking kneecaps, that 1 in 20 people will see a 2 sigma result (that is, 1 in every N folk will see something with a probability of about 1 in N). Sure it’s unlikely, but that’s probably why you’d notice it. So when doing a test make sure you establish when the test starts and stops *ahead* of time.

Finding the slope of a curve at a particular point requires limits, which always feel a little incomplete. When taking the limit of a function you’re not talking about a single point (which can’t have a slope), you’re not even talking about the function *at* that point, you’re talking about the function near that point as you get closer and closer. At every step there’s always a little farther to go, but “in the limit” there isn’t. Here comes an example.

Say you want to find the slope of f(x) = x^{2} at x=1. “Slope” is (defined as) rise over run, so the slope between the points $(1, f(1))$ and $(1+h, f(1+h))$ is $\frac{f(1+h)-f(1)}{h}$ and it just so happens that:

$\frac{f(1+h)-f(1)}{h} = \frac{(1+h)^2 - 1^2}{h} = \frac{2h + h^2}{h} = 2 + h$

Finding the limit as $h \to 0$ is the easiest thing in the world: it’s 2. *Exactly* 2. Despite the fact that h=0 couldn’t be plugged in directly, there’s no problem at all. For every h≠0 you can draw a line between $(1, f(1))$ and $(1+h, f(1+h))$ and find the slope (it’s 2+h). We can then let those points get closer together and see what happens to the slope ($\lim_{h\to 0}(2+h) = 2$). Turns out we get a single, exact, consistent answer. Math folk say “the limit exists” and the function is “differentiable”. Most of the functions you can think of (most of the functions worth thinking of) are differentiable, and when they’re not it’s usually pretty obvious why.
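You can watch that limit settle down numerically (a quick sketch; the secant slope is exactly 2 + h, so there’s no suspense):

```python
# The secant slope of f(x) = x^2 at x = 1 marches straight toward 2 as h
# shrinks; we just can't plug in h = 0 directly.
f = lambda x: x**2
for h in (1.0, 0.1, 0.01, 0.001, 0.0001):
    print(h, (f(1 + h) - f(1)) / h)
```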

Same sort of thing happens for integrals (the other important tool in calculus). The situation there is a little more subtle, but the result is just as clean. Integrals can be used to find the area “under” a function by adding up a larger and larger number of thinner and thinner rectangles. So, say you want to find the area under f(x)=x between x=0 and x=3.

As a first try, we’ll use 6 rectangles.

Each rectangle is 0.5 wide, and 0.5, 1, 1.5, etc. tall. Their combined area is $(0.5)(0.5) + (0.5)(1) + (0.5)(1.5) + \ldots + (0.5)(3)$ or, in mathspeak, $\sum_{k=1}^{6} \left(\frac{k}{2}\right)\left(\frac{1}{2}\right)$. If you add this up you get 5.25, which is more than 4.5 (the correct answer) because of those “sawteeth”. By using more rectangles these teeth can be made smaller, and the inaccuracy they create can be brought to naught. Here’s how!

If there are N rectangles they’ll each be $\frac{3}{N}$ wide and the k-th will be $\frac{3k}{N}$ tall (just so you can double-check, in the picture N=6). In mathspeak, the total area of these rectangles is

$\sum_{k=1}^{N} \left(\frac{3k}{N}\right)\left(\frac{3}{N}\right) = \frac{9}{N^2}\sum_{k=1}^{N} k = \frac{9}{N^2}\cdot\frac{N(N+1)}{2} = \frac{9}{2} + \frac{9}{2N}$

The fact that $\sum_{k=1}^{N} k = \frac{N(N+1)}{2}$ is just one of those math things. For every finite value of N there’s an error of $\frac{9}{2N}$, but this can be made “arbitrarily small”. No matter how small you want the error to be, you can pick a value of N that makes it even smaller. Now, letting the number of rectangles “go to infinity”, $\frac{9}{2N} \to 0$ and the correct answer is recovered: 9/2.
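Here’s the same rectangle sum evaluated for a few values of N, just to watch the error term $\frac{9}{2N}$ melt away (a quick sketch):

```python
# Total area of N right-endpoint rectangles under f(x) = x between 0 and 3.
# The sum works out to 9/2 + 9/(2N), so the overshoot shrinks as N grows.
for N in (6, 60, 600, 6000):
    width = 3 / N
    area = sum((3 * k / N) * width for k in range(1, N + 1))
    print(N, area)    # approaches 4.5
```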

In a calculus class a little notation is used to clean this up:

$\int_0^3 x\,dx = \lim_{N\to\infty} \sum_{k=1}^{N} \left(\frac{3k}{N}\right)\left(\frac{3}{N}\right) = \lim_{N\to\infty}\left(\frac{9}{2} + \frac{9}{2N}\right) = \frac{9}{2}$

Every *finite* value of N gives an approximation, but that’s the whole point of using limits; taking the limit allows us to answer the question “what’s left when the error drops to zero and the approximation becomes perfect?”. It may seem difficult to “go to infinity” but keep in mind that math is ultimately just a bunch of (extremely useful) symbols on a page, so what’s stopping you?

Mathematicians, being consummate pessimists, have thought up an amazing variety of worst-case scenarios to create “non-integrable” functions where it doesn’t really make sense to create those approximating rectangles. Then, being contrite, they figured out some slick ways to (often) get around those problems. Mathematicians will never run out of stuff to do.

Fortunately, for everybody else (especially physicists) the universe doesn’t seem to use those terrible, terrible… terrible worst-case functions in any of its fundamental laws. Mathematically speaking, all of existence is a surprisingly nice place to live.
