Q: How does Earth’s magnetic field protect us?

Physicist: High energy charged particles rain in on the Earth from all directions, most of them produced by the Sun.  If it weren’t for the Earth’s magnetic field we would be subject to bursts of radiation on the ground that would be, at the very least, unhealthy.  The more serious, long term impact would be the erosion of the atmosphere.  Charged particles carry far more kinetic energy than massless particles (light), so when they strike air molecules they can kick them hard enough to eject them into space.  This may have already happened on Mars, which shows evidence of having once had a magnetic field and a complex atmosphere, and now has neither (Mars’ atmosphere is ~1% as dense as ours).

Rule #1 for magnetic fields is the “right hand rule”: point your fingers in the direction a charged particle is moving, curl your fingers in the direction of the magnetic field, and your thumb will point in the direction the particle will turn.  The component of the velocity that points along the field is ignored (you don’t have to curl your fingers in the direction they’re already pointing), and the force is proportional to the speed of the particle and the strength of the magnetic field.

For notational reasons either lost to history or not worth looking up, the current (the direction the charge is moving) is I and the magnetic field is B. More reasonably, the Force the particle feels is F.  In this case, the particle is moving to the right, but the magnetic field is going to make it curve upwards.

This works for positively charged particles (e.g., protons).  If you’re wondering about negatively charged particles (electrons), then just reverse the direction you got.  Or use your left hand.  If the magnetic field stays the same, then eventually the ion will be pulled in a complete circle.
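
For reference, the rule in symbols is the magnetic part of the Lorentz force, and the radius of that complete circle comes from setting the magnetic force equal to the centripetal force (standard textbook results, assuming the velocity is perpendicular to the field):

\vec{F} = q\vec{v}\times\vec{B} \quad\Rightarrow\quad qvB = \frac{mv^2}{r} \quad\Rightarrow\quad r = \frac{mv}{qB}

A stronger field means a smaller, tighter circle, which is about to matter.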

As it happens, the Earth has a magnetic field and the Sun fires charged particles at us (as well as every other direction) in the form of “solar wind”, so the right hand rule can explain most of what we see.  The Earth’s magnetic field points from south to north through the Earth’s core, then curves around and points from north to south on Earth’s surface and out into space.  So the positive particles flying at us from the Sun are pushed west and the negative particles are pushed east (right hand rule).

Since the Earth’s field is stronger closer to the Earth, the closer a particle is, the faster it will turn.  So an incoming particle’s path bends near the Earth, and straightens out far away.  That’s a surprisingly good way to get a particle’s trajectory to turn just enough to take it back into the weaker regions of the field, where the trajectory straightens out and takes it back into space.  The Earth’s field is stronger or weaker in different areas, and the incoming charged particles have a wide range of energies, so a small fraction do make it to the atmosphere where they collide with air.  Only astronauts need to worry about getting hit directly by particles in the solar wind; the rest of us get shrapnel from those high energy interactions in the upper atmosphere.

If a charge moves in the direction of a magnetic field, not across it, then it’s not pushed around at all.  Around the magnetic north and south poles the magnetic field points directly into the ground, so in those areas particles from space are free to rain in.  In fact, they have trouble not coming straight down.  The result is described by most modern scientists as “pretty”.

Charged particles from space following the magnetic field lines into the upper atmosphere where they bombard the local matter.  Green indicates oxygen in the local matter.

The Earth’s magnetic field does more than just deflect ions or direct them to the poles.  When a charge accelerates it radiates light, and turning a corner is just acceleration in a new direction.  This “braking radiation” slows the charge that creates it (that’s a big part of why the aurora is inspiring as opposed to sterilizing).  If an ion slows down enough it won’t escape back into space and it won’t hit the Earth.  Instead it gets stuck moving in big loops, following the right hand rule all the way, thousands of miles above us (with the exception of our Antarctic readers).  This phenomenon is a “magnetic bottle”, which traps the moving charged particles inside of it.  The doughnut-shaped bottles around Earth are the Van Allen radiation belts.  Ions build up there over time (they fall out eventually) and still move very fast, making it a dangerous place for delicate electronics and doubly delicate astronauts.

Magnetic bottles, by the way, are the only known way to contain anti-matter.  If you just keep anti-matter in a mason jar, you run the risk that it will touch the mason jar’s regular matter and annihilate.  But ions contained in a magnetic bottle never touch anything.  If that ion happens to be anti-matter: no problem.  It turns out that the Van Allen radiation belts are lousy with anti-matter, most of it produced in those high-energy collisions in the upper atmosphere (it’s basically a particle accelerator up there).  That anti-matter isn’t dangerous or anything.  When an individual, ultra-fast particle of radiation hits you it doesn’t make much of a difference if it’s made of anti-matter or not.

And there isn’t much of it; about 160 nanograms, which (combined with 160 nanograms of ordinary matter) yields about the same amount of energy as 7kg of TNT.  You wouldn’t want to run into it all in one place, but still: not a big worry.
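
For anyone checking that figure at home (standard numbers: E = mc² and TNT releasing roughly 4.2 x 10⁶ joules per kilogram):

E = mc^2 = \left(3.2\times 10^{-10}\,kg\right)\left(3\times 10^{8}\,m/s\right)^2 \approx 2.9\times 10^{7}\,J \approx 6.9\,kg\textrm{ of TNT}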

A Van Allen radiation belt simulated in the lab.  Why is there a map on it?

In a totally unrelated opinion, this picture beautifully sums up the scientific process: build a thing, see what it does, tell folk about it.  Maybe give it some style (time permitting).


The right hand picture is from here.


Q: If a long hot streak is less likely than a short hot streak, then doesn’t that mean that the chance of success drops the more successes there are?

One of the original questions was:  I understand “gambler’s fallacy” where it is mistaken to assume that if something happens more frequently during a period then it will happen less frequently in the future.  Example:  If I flip a coin 9 times and each time I get HEADS, then to assume that it is more “probable” that the 10th flip will be tails is an incorrect assumption.

I also understand that before I begin flipping that coin in the first place, the odds against getting 10 consecutive HEADS are a very big number and not a mere 50/50.

My question is:  Is it more likely?, more probable?, more expectant?, or is there a higher chance of a coin turning up TAILS after 9 HEADS?


Physicist: Questions of this ilk come up a lot.  Probability and combinatorics, as a field of study, are just mistake factories.  In large part because a single word can massively change a calculation, not just the result but how you get there.  In this case the problem word is “given”.

Probabilities can change completely when the context, the “conditionals”, change.  For example, the probability that someone is eating a sandwich is normally pretty low, but the probability that a person is eating a sandwich given that there’s half a sandwich in front of them is pretty high.

To understand the coin example, it helps to re-phrase in terms of conditional probabilities.  The probability of flipping ten heads in a row, P(10H), is P(10H) = \left(\frac{1}{2}\right)^{10}\approx 0.1%.  Not too likely.

The probability of flipping tails given that the 9 previous flips were heads is a conditional probability: P(T | 9H) = P(T) = 1/2.

In the first situation, we’re trying to figure out the probability that a coin will fall a particular way 10 times.  In the second situation, we’re trying to figure out the probability that a coin will fall a particular way only once.  Random things like coins and dice are “memoryless”, which means that previous results have no appreciable impact on future results.  Mathematically, when A and B are unrelated events, we say P(A|B) = P(A).  For example, “the probability that it’s Tuesday given that today is rainy, is equal to the probability that it’s Tuesday” because weather and days of the week are independent.  Similarly, each coin flip is independent, so P(T | 9H) = P(T).

The probability of the “given” may be large or small, but that isn’t important for determining what happens next.  So, after the 9th coin in a row comes up heads everyone will be waiting with bated breath (9 in a row is unusual after all) for number ten, and will be disappointed exactly half the time (number 10 isn’t affected by the previous 9).
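
If the algebra doesn’t convince you, simulation is cheap.  Here’s a minimal Python sketch (the trial count and seed are arbitrary choices) that estimates P(T | 9H) by generating runs of ten flips and scoring only the runs that open with nine heads:

import random

random.seed(0)  # arbitrary, just for reproducibility

trials = 1_000_000
nine_heads = 0  # runs whose first nine flips were all heads
then_tails = 0  # of those, runs whose tenth flip was tails

for _ in range(trials):
    flips = [random.random() < 0.5 for _ in range(10)]  # True = heads
    if all(flips[:9]):
        nine_heads += 1
        then_tails += not flips[9]

print(f"P(9H)   ~ {nine_heads / trials:.5f}  (theory: {0.5**9:.5f})")
print(f"P(T|9H) ~ {then_tails / nine_heads:.3f}   (theory: 0.5)")

Nine heads in a row shows up in only about 0.2% of runs, but within that rarefied club the tenth flip still comes up tails half the time.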

You might expect this not to be the case when it comes to human-controlled events.  Nobody is “good at playing craps” or “good at roulette”, but from time to time someone can be good at sport.  But even in sports, where human beings are controlling things, we find that there still aren’t genuine hot or cold streaks (sans injuries).  That’s not to say that a person can’t tally several goalings in a row, but that these are no more or less common than you’d expect if you modeled the rate of scoring as random.

For example, say Tony Hawk has already gotten three home runs by dribbling a puck into the end zone thrice.  The probability that he’ll get another point isn’t substantially different from the probability that he’d get that first point.  Checkmate.

Notice the ass-covering use of “not substantially different”.  When you’re gathering statistics on the weight of rocks or the speed of light you can be inhumanly accurate, but when you’re gathering statistics on people you can be at best humanly accurate.  There’s enough noise in sports (even bowling) that the best we can say with certainty is that hot and cold streaks are not statistically significant enough to be easily detectable, which they really need to be if you plan to bet on them.


Q: Where do the rules for “significant figures” come from?

Physicist: When you’re doing math with numbers that aren’t known exactly, it’s necessary to keep track of both the number itself and the amount of error that number carries.  Sometimes this is made very explicit.  You may for example see something like “3.2 ± 0.08”.  This means: “the value is around 3.2, but don’t be surprised if it’s as high as 3.28 or as low as 3.12… but farther out than that would be mildly surprising.”

120 ± 0.5 cm, due to hair-based error.

However!  Dealing with two numbers is immoral and inescapably tedious.  So, humanity’s mightiest accountants came up with a shorthand: stop writing the number when it becomes pointless.  It’s a decent system.  Significant digits are why driving directions don’t say things like “drive 8.13395942652 miles, then make a slight right”.  Rather than writing a number with its error, just stop writing the number at the digit where noises and errors and lack of damn-giving are big enough to change the next digit.  The value of the number and the error in one number.  Short.

The important thing to remember about sig figs is that they are an imprecise but typically “good enough” way to deal with errors in basic arithmetic.  They’re not an exact science, and are more at home in the “rules of punctuation” schema than they are in the toolbox of a rigorous scientist.  When a number suddenly stops without warning, the assumption that the reader is supposed to make is “somebody rounded this off”.  When a number is rounded off, the error is at least half of the last digit.  For example, 40.950 and 41.04998 both end up being rounded to the same number, and both are reasonable possible values of “41.0” or “41 ± 0.05”.

For example, using significant figures, 2.0 + 0.001 = 2.0.  What the equation is really saying is that the error on that 2.0 is around ±0.05 (the “true” number will probably round out to 2.0).  That error alone is bigger than the entire second number, never mind what its error is (it’s around ±0.0005).  So the sum 2.0 + 0.001 = 2.0, because both sides of the equation are equal to 2 + “an error of around 0.05 or less, give or take”.

2.0 + 0.001 = 2.0.  The significant digits are conveying a notion of “error”, and the second number is being “drowned out” by the error in the first.
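
To make the bookkeeping concrete, here’s a minimal Python sketch of that “drowning out”; round_sig is a made-up helper (there’s no standard sig-fig function), and real sig-fig rules have more corner cases than this:

import math

def round_sig(x, sig):
    """Round x to the given number of significant digits."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - math.floor(math.log10(abs(x))))

# "2.0" carries two significant digits, so the sum is reported to two as well:
print(round_sig(2.0 + 0.001, 2))  # 2.0, because 0.001 drowns in the ~0.05 error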

“Rounding off” does a terrible violence to math.  Now the error, rather than being a respectable standard deviation that was painstakingly and precisely derived from multiple trials and tabulations, is instead an order-of-magnitude stab in the dark.

The rules regarding error (answer gravy below) show that if your error is only known to within an order of magnitude (that is, you only know which power of ten describes its size), then after adding or multiplying two numbers the result carries an error of the same relative magnitude, in the sense that you retain the same number of significant digits.

For example,

\begin{array}{ll}    1234\times 0.32 \\    = (1234 \pm 0.5)(0.32\pm 0.005) \\    = \left([1.234\pm 0.0005] 10^3\right)\left([3.2\pm 0.05] 10^{-1}\right) & \leftarrow\textrm{Scientific Notation} \\    = \left(1.234\pm 0.0005\right)\left(3.2\pm 0.05\right)10^2 \\    = \left(1.234\times3.2 \pm 1.234\times0.05 \pm 3.2\times0.0005 \pm 0.05\times0.0005\right)10^2 \\    = \left(3.9488 \pm 0.0617 \pm 0.0016 \pm 0.000025\right)10^2 \\    \approx \left(3.9488 \pm 0.0617\right)10^2 \\    \approx \left(3.9488 \pm 0.05\right)10^2 \\    = 3.9 \times 10^2    \end{array}

This last expression could be anywhere from 3.8988 x 10² to 3.9988 x 10².  The only digits that aren’t being completely swamped by the error are “3.9”.  So the final “correct” answer is “3.9 x 10²”.  Not coincidentally, this has two significant digits, just like “0.32”, which had the fewest significant digits at the start of the calculation.  The bulk of the error in the end came from “±1.234×0.05”, the size of which was dictated by that “0.05”, which was the error from “0.32”.

Notice that in the second to last step it was callously declared that “0.0617 ≈ 0.05”.  Normally this would be a travesty, but significant figures are the mathematical equivalent of “you know, give or take or whatever”.  Rounding off means that we’re ignoring the true error and replacing it with the closest power of ten.  That is, there’s a lot of error in how big the error is.  When you’re already introducing errors by replacing numbers like 68, 337, and 145 with “100” (the nearest power of ten), “0.0617 ≈ 0.05” doesn’t seem so bad.  The initial error was on the order of 1 part in 10, and the final error was likewise on the order of 1 part in 10.  Give or take.  This is the secret beauty of sig figs and scientific notation; they quietly retain the “part in ten to the ___” error.

That said, sig figs are kind of a train wreck.  They are not a good way to accurately keep track of errors.  What they do is save people a little effort, manage errors and fudges in a could-be-worse kind of way, and instill a deep sense of fatalism.  Significant figures underscore at every turn the limits either of human expertise or concern.

By far the most common use of sig figs is in grading.  When a student returns an exam with something like “I have calculated the mass of the Earth to be 5.97366729297353452283 x 10²⁴ kg”, the grader knows immediately that the student doesn’t grok significant figures (the correct answer is “the Earth’s mass is 6 x 10²⁴ kg, why all the worry?”).  With that in mind, the grader is now a step closer to making up a grade.  The student, for their part, could have saved some paper.


Answer Gravy: You can think of a number with an error as being a “random variable”.  Like rolling dice (a decidedly random event that generates a definitively random variable), things like measuring, estimating, or rounding create random numbers within a certain range.  The better the measurement (or whatever it is that generates the number), the smaller this range.  There are any number of reasons for results to be inexact, but we can sweep all of them under the same carpet by labeling them all “error”, keeping track only of their total size using (usually) the standard deviation or variance.  When you see the expression “3 ± 0.1”, this represents a random variable with a mean of 3 and a standard deviation of 0.1 (unless someone screwed up or is just making up numbers, which happens a lot).

When adding two random variables, (A±a) + (B±b), the means are easy, A+B, but the errors are a little more complex: (A±a) + (B±b) = (A+B) ± ?.  The standard deviation is the square root of the variance, so a² is the variance of the first random variable.  It turns out that the variance of a sum is just the sum of the variances, which is handy.  So, the variance of the sum is a² + b² and (A±a) + (B±b) = A+B ± √(a² + b²).
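
As a quick Python sketch of that rule (add_with_error is a hypothetical helper, and it assumes the two errors are independent):

import math

def add_with_error(A, a, B, b):
    # (A +/- a) + (B +/- b): the means add, the errors add in quadrature
    return A + B, math.sqrt(a**2 + b**2)

mean, err = add_with_error(3.0, 0.1, 5.0, 0.1)
print(f"{mean} +/- {err:.3f}")  # 8.0 +/- 0.141; equal errors grow by sqrt(2)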

When adding numbers using significant digits, you’re declaring that a = 0.5 x 10^(-D₁) and b = 0.5 x 10^(-D₂), where D₁ and D₂ are the number of significant digits each number has.  Notice that if these are different, then the bigger error takes over.  For example, \sqrt{\left(0.5\cdot10^{-1}\right)^2 + \left(0.5\cdot10^{-2}\right)^2} = 0.5\cdot 10^{-1}\sqrt{1 + 10^{-2}} \approx 0.5\cdot 10^{-1}.  When the digits are the same, the error is multiplied by √2 (same math as the last equation).  But again, sig figs aren’t a filbert brush, they’re a paint roller.  √2?  That’s just another way of writing “1”.

The cornerstone of “sig fig” philosophy; not all over the place, but not super concerned with details.

Multiplying numbers is one notch trickier, and it demonstrates why sig figs can be considered more clever than being lazy normally warrants.  When a number is written in scientific notation, the information about the size of the error is exactly where it is most useful.  The example above of “1234 x 0.32” gives some idea of how the 10’s and errors move around.  What that example blurred over was how the errors (the standard deviations) should have been handled.

First, the standard deviation of a product is a little messed up: (A\pm a)(B\pm b) = AB \pm\sqrt{A^2b^2 + B^2a^2 + a^2b^2}.  Even so!  When using sig figs the larger error is by far the more important, and the product once again has the same number of sig figs.  In the example, 1234 x 0.32 = (1.234 ± 0.0005) (3.2 ± 0.05) x 10².  So, a = 0.0005 and b = 0.05.  Therefore, the standard deviation of the product must be:

\begin{array}{ll}    \sqrt{A^2b^2 + B^2a^2 + a^2b^2} \\[2mm]    = Ab\sqrt{1 + \frac{B^2a^2}{A^2b^2} + \frac{a^2}{A^2}} \\[2mm]    = (1.234) (0.05) \sqrt{1.0000069} \\[2mm]    \approx(1.234)(0.05)\\[2mm]    \approx 0.05    \end{array}
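
The same check in Python, for anyone who’d rather not chase square roots by hand (mul_with_error is a hypothetical helper assuming independent errors; it reproduces the ±0.0617 above):

import math

def mul_with_error(A, a, B, b):
    # (A +/- a)(B +/- b) for independent errors
    return A * B, math.sqrt(A**2 * b**2 + B**2 * a**2 + a**2 * b**2)

mean, err = mul_with_error(1.234, 0.0005, 3.2, 0.05)
print(f"({mean:.4f} +/- {err:.4f}) x 10^2")  # (3.9488 +/- 0.0617) x 10^2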

Notice that when you multiply numbers, their error increases substantially each time (by a factor of about 1.234 this time).  According to Benford’s law, the average first digit of a number is 3.440*.  As a result, if you’re pulling numbers “out of a hat”, then on average every two multiplies should knock off a significant digit, because 3.440² ≈ 1 x 10¹.

Personally, I like to prepare everything algebraically, keep track of sig figs and scientific notation from beginning to end, then drop the last 2 significant digits from the final result.  Partly to be extra safe, but mostly to do it wrong.

*they’re a little annoying, right?


Q: If time slows down when you travel at high speeds, then couldn’t you travel across the galaxy within your lifetime by just accelerating continuously?

Physicist: Yup!  But sadly, this will never happen.

This is a good news / really bad news situation.  On the one hand, it is true (for all intents and purposes) that if you travel fast enough, time will slow down and you’ll get to your destination in surprisingly little time.  The far side of the galaxy is about 100,000 lightyears away, so it will always take at least 100,000 years to get there.  However, the on-board clocks run slower (from the perspective of anyone “sitting still” in the galaxy) so the ship and everything on it may experience far less than 100,000 years.

First, when you read about traveling to far-off stars you’ll often hear about “constant acceleration drives”, which are rockets capable of accelerating at a comfortable 1g for years at a time (“1g” means that people on the rocket would feel an acceleration equivalent to the force of Earth’s gravity).  However!  Leaving a rocket on until it’s moving near the speed of light is totally infeasible.  A rocket capable of 1g of acceleration for years is a rocket that can hover just above the ground for years.  While this is definitely possible for a few seconds or minutes (“retro rockets“), you’ll never see people building bridges on rockets, or hanging out and having a picnic for an afternoon or three on a hovering rocket.  Spacecraft in general coast ballistically except for the very beginning and very end of their trip (excluding small corrections).  For example, the shuttle (before the program was shut down) could spend weeks coasting along in orbit, but the main rockets only fire for the first 8 minutes or so.  And those 8 minutes are why the shuttle weighs more than 20 times as much on the launch pad as it does when it lands.

The big exception is ion drives, but a fart produces more thrust than an ion drive (seriously) so… meh.

Rockets: in a hurry for a little while and then not for a long while.

In order to move faster, a rocket needs to carry more fuel, so it’s heavier, so it needs more fuel, etc.  The math isn’t difficult, but it is disheartening.  Even with antimatter fuel (the best possible source by weight) and a photon drive (exhaust velocity doesn’t get better than light speed), your ship would need to be 13 parts fuel to one part everything else, in order to get to 99% of light speed.

That said, if somehow you could accelerate at a comfortable 1g forever, you could cross our galaxy (accelerating halfway, then decelerating halfway) in a mere 20-25 years of on-board time.  According to everyone else in the galaxy, you’d have been cruising at nearly light speed for the full 100,000 years.  By the way, this trip (across the Milky Way, accelerate halfway, decelerate halfway, anti-matter fuel, photon drives) would require a fuel-to-ship ratio of about 10,500,000,000 : 1.  Won’t happen.

The speed of light is still a fundamental limit, so if you were on the ship you’ll still never see stars whipping by faster than the speed of light (which you might expect would be necessary to cross 100,000 light years in only 25 years).  But relativity is a slick science; length contraction and time dilation are two sides of the same coin.  While everyone else in the galaxy explains the remarkably short travel time in terms of the people on the ship moving slower through time, the people on the ship attribute it to the distance being shorter.  The stars pass by slower than light speed, but they’re closer together (in the direction of travel).  “Which explanation is right?” isn’t a useful question; if every party does their math right, they’ll come to the same conclusions.


Answer Gravy: Figuring out how severe relativistic effects are often comes down to calculating \gamma = \frac{1}{\sqrt{1-\left(\frac{v}{c}\right)^2}}, which is the factor which describes how many times slower time passes and how many times shorter distances contract (for outside observers only, since you will always see yourself as stationary).  Photon ships make the calculation surprisingly simple.  Here’s a back-of-the-envelope trick:

If your fuel is antimatter and matter, then the energy released is E = Mc² (it’s actually useful sometimes!).  If the exhaust is light, then the momentum it carries is P = E/c.  Finally, the energy of a moving object is γMc² and the momentum is γMv.  It’s not obvious, but for values of v much smaller than c, these are very nearly the same as Newton’s equations.

For a fuel mass of f, a rocket mass of m, and a beam of exhaust light with energy E, lining up the energy and momentum before and after yields:

\begin{array}{ll}\left\{\begin{array}{ll}(m+f)c^2 = \gamma mc^2+E\\0=\gamma mv - \frac{E}{c}\end{array}\right.\\\Rightarrow (m+f)c^2=\gamma mc^2+\gamma mcv=\gamma mc(v+c)\\\Rightarrow \gamma = \frac{c}{v+c}\left(1+\frac{f}{m}\right)\end{array}

So, when v ≈ c (when the ship is traveling near light speed), \gamma \approx \frac{1}{2}\left(1+\frac{f}{m}\right) \approx \frac{f}{2m}.  That means that if, for example, you want to travel so fast that your trip is ten times slower than it “should” be, then you need to have around 20 times more fuel than ship.  Even worse, if you want to stop when you get where you’re going, you’ll need to square that ratio (the fuel needed to stop is included as part of the ship’s mass when speeding up).
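
Plugging in the “99% of light speed” example from the top of the post (nothing new here, just the last equation above solved for f/m): β = v/c = 0.99 gives γ = 1/√(1-0.99²) ≈ 7.09, so

\frac{f}{m} = \gamma\left(1+\frac{v}{c}\right) - 1 \approx 7.09\times 1.99 - 1 \approx 13

which is where the “13 parts fuel to one part everything else” figure comes from.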

More tricky to derive and/or use is the math behind constant acceleration.  If a ship is accelerating at a rate “a”, the on-board clock reads “τ”, and the position and time of the ship according to everyone who’s “stationary” are “x” and “t”, then

x(\tau) = \frac{c^2}{a}\cosh\left(\frac{a}{c}\tau\right)-\frac{c^2}{a} \approx \frac{c^2}{2a}e^{\frac{a}{c}\tau}

t(\tau) = \frac{c}{a}\sinh\left(\frac{a}{c}\tau\right)

This is lined up so that x(0) = t(0) = 0 (which means that everyone’s clocks are synced when the engines are first turned on).
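
As a sanity check on the numbers quoted earlier, here’s a short Python sketch that turns these formulas around for the accelerate-halfway, decelerate-halfway galaxy crossing (the constants are rounded, so expect the answers to wiggle a little):

import math

c  = 2.998e8    # speed of light (m/s)
g  = 9.807      # 1g of acceleration (m/s^2)
ly = 9.461e15   # one light year (m)
yr = 3.156e7    # one year (s)

d = 50_000 * ly  # each half of the 100,000 light year crossing

# invert x(tau) to get the on-board time for one half of the trip, then double it
tau_half = (c / g) * math.acosh(1 + g * d / c**2)
print(f"on-board time: {2 * tau_half / yr:.0f} years")  # ~22 years

# gamma peaks at the midpoint; each burn needs ~2*gamma fuel, and stopping squares it
gamma = 1 + g * d / c**2
print(f"fuel-to-ship ratio: {(2 * gamma - 1)**2:.1e} : 1")  # ~1.1e10 : 1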


Q: When something falls on your foot, how much force is involved?

Physicist: There’s a cute trick you can use here.  If a falling object starts at rest and ends at rest, then it gains all of its energy from gravity, and all of that energy is deposited in your unfortunate foot.

Kinetic energy is (average) force times distance; whether you’re winding a spring, starting a fire (with friction), firing projectiles, or crushing your foot.  The energy the object gains when falling is equal to its weight (the force of gravity) times the distance it falls.  The energy the object uses to bust metatarsals is equal to the distance it takes for it to come to a stop times the force that does that stopping.  So, D_{fall}F_{fall} = E = D_{stop}F_{stop}.

The distance times the force that gets an object moving is equal to the distance times the force that brings that object to a halt.

Of course, the distance over which the object slows down is much smaller than the distance over which it sped up.  As a result, the stopping force required is proportionately larger.  This is one of the reasons why physicists flinch so much during unrealistic action movies (that, and loud noises make us skittish).  Something falling on your foot stops in about half a cm or a quarter inch, what with skin and bones that flex a little.  Give or take.

Bowling balls: keeping podiatrists gainfully employed for 700 years.

So, if you drop a 10 pound ball 4 feet (48 inches), and it stops in a quarter inch, then the force at the bottom of the fall is F = \frac{48}{0.25}10lbs \approx 2,000lbs.  This is why padding is so important; if that distance was only an eighth of an inch (seems reasonable) then the force jumps to 4,000lbs, and if that distance is increased to half an inch then the force drops to 1,000 lbs.
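
If you want to play with other drops, the whole trick fits in one hypothetical Python function (pounds and inches throughout, and “force” here means the average force over the stopping distance):

def impact_force(weight_lbs, fall_in, stop_in):
    # D_fall * F_fall = D_stop * F_stop, solved for the stopping force
    return weight_lbs * fall_in / stop_in

print(impact_force(10, 48, 0.25))   # 1920.0 lbs: the bowling ball example
print(impact_force(10, 48, 0.125))  # 3840.0 lbs: skip the padding
print(impact_force(10, 48, 0.5))    # 960.0 lbs: wear thick boots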

The bowling ball picture is from here.


Q: If nothing can escape a black hole’s gravity, then how does the gravity itself escape?

Physicist: A black hole is usually described as a singularity, where all the mass is (or isn’t?), which is surrounded by an “event horizon”.  The event horizon is the “altitude” at which the escape velocity is the speed of light, so nothing can escape.  But if gravity is “emitted” by black holes, then how does that “gravity signal” get out?  The short answer is that gravity isn’t “emitted” by matter.  Instead, it’s a property of the spacetime near matter and energy.

It’s worth stepping back and considering where our understanding of black holes, and all of our predictions about their bizarre behavior, comes from.  Ultimately, both stem from the math we use to describe them.  The extremely short answer to this question is: the math says nothing can escape, and that the gravity doesn’t “escape” so much as it “persists”.  No problem.

Einstein’s whole thing was considering the results of experiments at face value.  When test after test always showed the speed of light was exactly the same, regardless of how the experiment was moving, Einstein said “hey, what if the speed of light is always the same regardless of how you’re moving?”.  Genius.  There’s special relativity.

It also turns out that no experiment can tell the difference between floating motionless in deep space and accelerating under the pull of gravity (when you fall you’re weightless).  Einstein’s stunning insight (paraphrased) was “dudes!  What if there’s no difference between falling and floating?”.  Amazing stuff.

Sarcasm aside, what was genuinely impressive was the effort it took to turn those singsong statements into useful math.  After a decade of work, and buckets of differential geometry (needed to deal with messed up coordinate systems like the surface of Earth, or worse, curved spacetime) the “Einstein Field Equations” were eventually derived, and presumably named after Einstein’s inspiration: the infamous Professor Field.

This is technically 16 equations (μ and ν are indices that take on 4 values each), however there are tricks to get that down to a more sedate 6 equations.
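
For the curious, the compact form being described here is the one you’ll find in any general relativity text:

G_{\mu\nu} = \frac{8\pi G}{c^4}T_{\mu\nu}

The left side (the “Einstein tensor”) encodes the curvature of spacetime and the right side (the “stress-energy tensor”) encodes the matter and energy present; μ and ν each run over the four dimensions of spacetime, which is where the 16 equations come from.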

The left side of this horrible mess describes the shape of spacetime and relates it to the right side, which describes the amount of matter and energy (doesn’t particularly matter which) present.  This equation is based on two principles: “matter and energy make gravity… somehow” and “when you don’t feel a push or pull in any direction, then you’re moving in a straight line”.  That push or pull is defined as what an accelerometer would measure.  So satellites are not accelerating because they’re always in free-fall, whereas you are accelerating right now because if you hold an accelerometer it will read 9.8 m/s² (1 standard Earth gravity).  Isn’t that weird?  The path of a freely falling object (even an orbiting object) is a straight line through a non-flat spacetime.

Moving past the mind-bending weirdness; this equation, and all of the mathematical mechanisms of relativity, work perfectly for every prediction that we’ve been able to test.  So experimental investigation has given General Relativity a ringing endorsement.  It’s not used/taught/believed merely because it’s pretty, but because it works.

Importantly, the curvature described isn’t merely dependent on the presence of “stuff”, but on the curvature of the spacetime nearby.  Instead of being emitted from some distant source, gravity is a property of the space you inhabit right now, right where you are.  This is the important point that the “bowling ball on a sheet” demonstration is trying to get across.

The Einstein Field Equations describe the stretching of spacetime as being caused both by the presence of matter and also by the curvature of nearby spacetime.  Gravity doesn’t “reach out” any more than the metal ball in the middle does.

So here’s the point.  Gravity is just a question of the “shape” of spacetime.  That’s affected by matter and energy, but it’s also affected by the shape of spacetime nearby.  If you’re far away from a star (or anything else really) the gravity you experience doesn’t come directly from that star, but from the patch of space you’re sitting in.  It turns out that if that star gets smaller and keeps the same mass, the shape of the space you’re in stays about the same (as long as you stay the same distance away, the density of an object isn’t relevant to its gravity).  Even if that object collapses into a black hole, the gravity field around it stays about the same; the shape of the spacetime is stable and perfectly happy to stay the way it is, even when the matter that originally gave rise to it is doing goofy stuff like being a black hole.

This stuff is really difficult / nigh impossible to grok directly.  All we’ve really got are the experiments and observations, which led to a couple simple statements, which led to some nasty math, which led to some surprising predictions (including those concerning black holes), which so far have held up to all of the observations of known black holes that we can do (which is difficult because they’re dark, tiny, and the closest is around 8,000 light years away, which is not walking-distance).  That said: the math comes before understanding, and the math doesn’t come easy.

It’s funny because it’s true.

Here’s the bad news.  In physics we’ve got lots of math, which is nice, but no math should really be trusted to predict reality without lots of tests and verification and experiment (ultimately that’s where physics comes from in the first place).  Unfortunately no information ever escapes from beyond the event horizon.  So while we’ve got lots of tests that can check the nature of gravity outside of the horizon (the gravity here on Earth behaves in the same way that gravity well above the horizon behaves), we have no way even in theory to investigate the interior of the event horizon.  The existence of singularities, and what’s going on in those extreme scenarios in general, may be a mystery forever.  Maybe.

This probably doesn’t need to be mentioned, but the comic is from xkcd.
