A tube from the ground to space would fill with air of about the same density and pressure as the air around the straw, decreasing as you go up until eventually you have a straw full of nothing surrounded by also nothing (in space).

What holds the atmosphere to the planet is gravity, so if a patch of air tries to drift off into space it literally falls back. A straw alone wouldn’t change that. On the other hand, if you attached some kind of pump to the bottom of the straw to give it a higher pressure than sea level, then you could pump air up the straw and have some kind of massive space-fountain of air (the air coming out would fall back to Earth just like water in an ordinary fountain). In fact! There is a situation very close to that happening on Saturn’s moon, Enceladus.

Whenever air or water or whatever travels up a straw it’s being pushed by pressure from the bottom (there’s no such thing as sucking), and one atmosphere of pressure can only push so far. For something like liquid mercury that’s about 76 cm, which is why “1 atmosphere” of pressure is often expressed as “760 mm of mercury”. If a closed tube is taller than that, then the pressure (here on Earth) isn’t great enough to push the mercury all the way up, which leaves nothing at the top.
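Here’s a quick sanity check of that 76 cm figure (a minimal Python sketch; the pressure, density, and gravity values are standard textbook numbers, not from this post):

```python
# Height of a liquid column supported by one atmosphere: P = rho * g * h,
# so h = P / (rho * g).
P_ATM = 101325.0       # one standard atmosphere, Pa
RHO_MERCURY = 13595.0  # density of mercury, kg/m^3
G = 9.80665            # standard gravity, m/s^2

h = P_ATM / (RHO_MERCURY * G)
print(f"{h * 1000:.0f} mm of mercury")  # about 760 mm
```

Swap in water’s density (about 1000 kg/m^3) and the same pressure pushes roughly ten meters, which is why mercury makes for a much more practical barometer.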

Same idea with air. If you have a long tube full of air with the top open to space and the bottom pressurized to one atmosphere (or 760mm Hg), then the column of air in the tube will be as tall as the atmosphere.

A straw doesn’t provide an “escape route”; our air is free to try to leave whenever. The atmosphere stays where it is because it’s made of mass and the Earth has gravity. It’s a little sobering to realize that there’s nothing between you and a profound nothing (space) but a thin layer of air held down by its own unimpressive weight.


When someone says “we live in the third dimension” what they should really say (to be overly-precise) is “the universe we inhabit has three spatial dimensions”. There are a few ways that you can tell that you live in a three-dimensional world. The easiest is to try to come up with as many mutually-perpendicular directions as you can; you’ll find three without too much trouble, but you’ll never find a fourth.

If you’re feeling terribly clever, you’ll find lots of other examples that demonstrate the three (and not two or four) dimensionality of our universe. For example, if you can tie a simple knot then you definitely live in three or more dimensions (no knots in 2-D) and if you can make a Klein bottle then you definitely live in four or more dimensions.

A dimension is a direction. Living in more dimensions means having more directions you can move in. There are many weird physical consequences to living in more dimensions, but the one you’d notice first (if you were somehow to suddenly appear in a 4-D universe) is immediate death.

If a paper doll (two-dimensional being) were suddenly brought into three dimensional space all of its innards would become outtards. Similarly, there is nothing whatsoever supporting your body in a fourth direction, so if you were to find yourself with a few extra dimensions your insides would follow the path of least (zero) resistance and fall out. It would be super gross, but would make no more of a mess than an infinitely thin oil slick. Any local 4-D critters probably wouldn’t even notice.

When you first start doing trigonometry the choice between radians, degrees, turns, or hexacontades is a matter of personal preference. Most people use degrees because most other people use degrees (and other people seem pretty on the ball). But when you get to calculus using radians is the most natural choice; anything else is just a headache waiting to happen.

To see why you have to get to know the unit circle.

Start with a unit circle with a horizontal line through it and a radius (“a radius” means a line from the center to the edge somewhere). The *definitions* of the sine and cosine of the angle between the radius and the horizontal line are in the picture above. SOH CAH TOA is easy in this case because the hypotenuse is 1.

When you use radians you’re describing the angle by using the length of the arc it traces out on the edge of the unit circle. The circumference of a circle of radius R is 2πR, so (since R=1 on the unit circle) the full circle is 2π radians around. That is: 2π radians = 360 degrees.

You’ll notice that when the angle is very small (and measured in radians) the value of sin(θ) and the value of θ itself become very nearly equal. Not too surprisingly, this is called the “small angle approximation” and it’s remarkably useful.
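You can watch the approximation kick in numerically (a quick Python check; the sample angles are arbitrary):

```python
import math

# Small angle approximation: for theta in radians, sin(theta) ≈ theta,
# so the ratio sin(theta)/theta creeps toward 1 as theta shrinks.
for theta in [0.5, 0.1, 0.01]:
    print(f"theta = {theta}: sin(theta)/theta = {math.sin(theta) / theta:.6f}")
```

Already at a hundredth of a radian (about half a degree) the ratio agrees with 1 to better than one part in ten thousand.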

So for small values sin(θ) ≈ θ or, equivalently, sin(θ)/θ ≈ 1.

In fact, in the limit as the angle approaches zero they *are* equal, or in mathspeak: lim_{θ→0} sin(θ)/θ = 1. When someone says “in the limit as ___ approaches ___” it means they’re about to talk about calculus (and true to form…). All of the calculus around trig functions can be based on the fact that lim_{θ→0} sin(θ)/θ = 1. For example, one of the more important things in the world (that’s not *quite* sarcasm) is the fact that d/dx sin(x) = cos(x).

The derivative of a function f(x) is f′(x) = lim_{h→0} [f(x+h) − f(x)]/h, so:

d/dx sin(x) = lim_{h→0} [sin(x+h) − sin(x)]/h
= lim_{h→0} [sin(x)cos(h) + cos(x)sin(h) − sin(x)]/h*
= sin(x)·lim_{h→0} [cos(h) − 1]/h + cos(x)·lim_{h→0} sin(h)/h
= sin(x)·0** + cos(x)·1 = cos(x)

That doesn’t look like a big deal, but keep in mind that all of trigonometry is just a rehashing of sine. For example, cos(θ) = sin(θ + π/2) and tan(θ) = sin(θ)/cos(θ).

If it weren’t for the fact that lim_{θ→0} sin(θ)/θ = 1 (when using radians) we wouldn’t have d/dθ sin(θ) = cos(θ).

It’s not the end of the world if you try to do calculus with degrees (it’s close), it’s just that the result is multiplied by an inconvenient constant. For example, if you’re using degrees: d/dθ sin(θ) = (π/180)·cos(θ). Same thing happens when you differentiate cosine or tangent or whatever. It’s a lot easier to understand why if you look at a graph.

Clearly when using degrees the slope (derivative) of sine at zero is not 1, it’s much smaller (it’s 2π/360 in fact). If you don’t want any weird extra constants, then you need to use radians. But if you don’t mind them, then you be you. You can certainly use degrees or whatever, but you need to be careful with all those extra 2π/360’s.
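A numerical derivative makes the difference plain (a small Python sketch; `sin_deg` is just a helper defined here for illustration):

```python
import math

def sin_deg(x):
    """Sine of an angle given in degrees."""
    return math.sin(math.radians(x))

# Slope of sine at zero, estimated numerically in each unit system.
h = 1e-6
slope_radians = (math.sin(0.0 + h) - math.sin(0.0)) / h  # ≈ 1
slope_degrees = (sin_deg(0.0 + h) - sin_deg(0.0)) / h    # ≈ 2π/360 ≈ 0.01745
print(slope_radians, slope_degrees)
```

The degree version’s slope is exactly the 2π/360 factor mentioned above, and it tags along through every subsequent derivative.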

* This is a trigonometric identity.

** That isn’t obvious: lim_{h→0} [cos(h) − 1]/h = lim_{h→0} [cos^{2}(h) − 1]/[h(cos(h) + 1)] = −lim_{h→0} [sin(h)/h]·[sin(h)/(cos(h) + 1)] = −(1)·(0/2) = 0

M_{P}·A_{P} = G·M_{S}·M_{P}/R^{2}

Where M_{P} and A_{P} are the mass and acceleration of a planet, M_{S} is the mass of the Sun, R is the distance between them, and G is a universal constant. What this rather bold statement says is “if you exist near the Sun, then you are accelerating toward it”. Each of the planets, moons, grains of dust, etc. says the same thing, it’s just that with 99.86% of the mass in the solar system, the Sun says it loudest.
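Plugging in standard reference values (assumed here, not from this post) for the Sun’s mass and the Earth-Sun distance gives a feel for the size of that acceleration:

```python
# Acceleration of a planet toward the Sun: A_P = G * M_S / R^2.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # mass of the Sun, kg
R = 1.496e11       # Earth-Sun distance (1 AU), m

a = G * M_SUN / R**2
print(f"{a:.4f} m/s^2")  # about 0.006 m/s^2
```

That’s tiny compared to the 9.8 m/s^2 holding you to the ground, but it acts continuously, all year, every year.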

A force, like gravity, *accelerates* the object it acts on. So to understand what a force does it’s important to understand acceleration. Velocity describes how fast your position is changing, while acceleration describes how fast your velocity is *changing*.

“Velocity” is different from “speed” because velocity is a description of how fast you’re going *and* in which direction; “10 mph north” is a velocity, while “10 mph” is a speed. So you can have an acceleration that changes your velocity by changing your speed and/or by changing your direction.

Imagine you’re in a car (your velocity points forward):

If you accelerate forward, you speed up.

If you accelerate backward, you slow down (“decelerate”).

If you accelerate to the right or left, you turn in that direction but maintain the same speed.

Notice that when you talk about acceleration this way, the push you feel into your seat when you step on the gas, the push into your seat belt when you brake, and the centrifugal push to the left when you turn right are suddenly all the same thing.

With planets the same rules apply. A planet moving around the Sun in a circular orbit always has the Sun about 90° to the side of the direction it’s moving. This means that the planet is always turning, but always moving at about the same speed. The planets are moving so fast that by the time they’ve turned a little, they’ve moved far enough that the Sun is in a new position, still 90° to the side.

So that’s how a planet can accelerate toward the Sun forever without getting any closer. The sideways motion of planets is due to the fact that if a planet were not moving sideways, it would find itself in the Sun in short order. In fact, the Sun is nothing more than a massive collection of all the matter from the formation of the solar system that wasn’t moving sideways fast enough (which is nearly all of it).
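For a circular orbit the pull toward the Sun exactly supplies the turning, so G·M_{S}/R^{2} = v^{2}/R, which pins down how fast that sideways motion has to be: v = √(G·M_{S}/R). A quick check with Earth’s numbers (standard reference values, assumed here):

```python
import math

# Sideways speed for a circular orbit: gravity supplies the turning
# acceleration, so G*M_S/R^2 = v^2/R, giving v = sqrt(G*M_S/R).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # mass of the Sun, kg
R = 1.496e11       # Earth-Sun distance (1 AU), m

v = math.sqrt(G * M_SUN / R)
print(f"{v / 1000:.1f} km/s")  # roughly 30 km/s
```

Anything at Earth’s distance moving sideways much slower than that ends up falling Sunward; the stuff that did is part of the Sun now.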

*Why* things end up in circular orbits is a more subtle question. The quickest explanation is that things in not-circular orbits run into trouble until either their orbit is sufficiently round or they’re destroyed. It’s not that circular orbits are somehow better, it’s just that other orbits carry more risk of serious impacts or gravitational interactions (e.g., with Jupiter) that may lead to short, unfortunate orbits.

Assuming that an orbit is stable, then it will be an ellipse (there’s a post here on *exactly* why, but it’s a whole thing). A circle is the simplest kind of ellipse, but ellipses can be extremely stretched out. For example, comets have very elliptical orbits (like Sedna in the picture below). In these orbits the comet is mostly moving toward and away from the Sun, so for them the Sun’s pull *mostly* changes their speed and changes their direction less.

There’s nothing special about the orbits the planets are in. The eight (or nine or more) planets we have in the solar system aren’t the only planets that formed, they’re the only planets left. When things are in highly elliptical orbits they tend to “drive all over the road” and smack into things. When things smack into each other one of a few things happens; generally they break or they don’t. When we look at our planetary neighbors we see craters indicating impacts right up to the limit of what that planet or moon could handle without shattering. Presumably there *should* be impacts bigger than a planet can stand, but (not surprisingly) those impacts don’t leave craters for us to find.

So objects with extremely elliptical orbits are more likely to get blown up. But even when two objects hit each other and merge, the resulting trajectory is an average of both objects’ original trajectories, and that tends to be more circular. This is a part of accretion, and Saturn’s rings provide a beautiful example of the nearly perfect circular orbits that result from it.

Given a tremendous amount of time, a big blob of material in space tends to condense into a ball (with most of the matter) and a thin disk of leftover material traveling in circular orbits around it.

Oranges:

Imagine taking an orange wedge and opening it so that the triangles all point “up” instead of towards the same point. If you interlaced two of these then you’d have a small brick that’s roughly rectangular.

As more triangles are used, the curved end produces less pronounced bumpiness and the straight sides come closer and closer to being straight up and down, making the brick rectangular. The height becomes equal to the radius, while the length is half of the circumference (C = 2πR) which now finds itself running along the top and bottom. As the number of triangles “approaches infinity” the circle can be taken apart and rearranged to fit almost perfectly into an “R by πR” box with an area of πR^{2}.
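You can watch that construction converge (a small Python sketch): slicing a unit circle into n triangles from its center gives a total area of (n/2)·sin(2π/n), which closes in on π as n grows.

```python
import math

# n isosceles triangles slicing a unit circle from its center: each has
# area (1/2)*sin(2*pi/n), so the total is (n/2)*sin(2*pi/n) -> pi.
for n in [6, 60, 600, 6000]:
    print(n, (n / 2) * math.sin(2 * math.pi / n))  # approaches pi
```

Multiplying by ten the number of triangles buys roughly two more correct digits of π.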

This is why calculus is so damn useful. We often think of infinity as being mysterious or difficult to work with, but here the infinite slicing just makes the conclusion infinitely clean and exact: A = πR^{2}.

Calculus:

On the mathier side of things, the circumference is the differential of the area. That is; if you increase the radius by “dr”, which is a tiny, tiny bit, then the area increases by Cdr where C is the circumference. We can use that fact to describe a disk as the sum of a lot of very tiny rings. “The sum of a lot of tiny _____” makes mathematicians reflexively say “use an integral”.

Every ring has an area of Cdr = (2πr)dr. Adding them up from the center, r=0, to the outer edge, r=R, is written: A = ∫_{0}^{R} 2πr dr = πR^{2}.
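The same sum can be done numerically: chop the disk into thin rings and add up Cdr for each (a Python sketch with an arbitrary radius and ring count):

```python
import math

R = 3.0       # arbitrary example radius
N = 100_000   # number of thin rings
dr = R / N

# Each ring at radius r contributes area (2*pi*r)*dr; sum from r=0 to r=R,
# evaluating each ring at its midpoint radius.
area = sum(2 * math.pi * ((i + 0.5) * dr) * dr for i in range(N))
print(area, math.pi * R**2)  # the two agree
```

The ring sum lands right on πR^{2}, which is exactly what the integral promises.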

This is a beautiful example of understanding trumping memory. A mathematician will forget the equation for the area of a circle (A=πR^{2}), but remember that the circumference is its differential. That’s not to excuse their forgetfulness, just explain it.

**Physicist**: In the language of mathematics there are “dialects” (sets of axioms), and in the most standard, commonly-used dialect you can prove that 0.999… = 1. That system is the one generally taught now because it’s useful (in a lot of profound ways). If you want to do math where 1/infinity is a definable and non-zero value, you can, but it makes math unnecessarily complicated (for most tasks).

The way the number system is generally taught (at the math-major level, where the differences become important) is that the real numbers are defined such that (very long story short) 1/infinity = 0 and there isn’t a “next number” for any number. That is, if you think you’ve found a number, x, that’s closer to 1 than any other number, then I can find a number halfway between it and 1, (1+x)/2, that’s even closer. That’s not a trivial statement. In the system of integer numbers there *is* a next number; for 3 it’s 4, for 26 it’s 27, etc. In the system of real numbers *every* number can be added, subtracted, multiplied, and divided without “leaving” the real numbers. That leads to the fact that we can squeeze a new number between any two different numbers. In particular, there’s no greatest number less than one. If there were, then you couldn’t fit another number between it and one, and that would make it a big weird exception. Point is: it’s tempting to say that 0.999… is the “first number below 1”, but that’s not a thing.

The term “real numbers” is just a name for a “sand box” of mathematical tools that have become standard because they’re useful. However! There are other systems where “very very very slightly less than 1”, or more precisely “less than one, but greater than every number that’s less than one”, makes mathematical sense. These systems aren’t invalid or wrong, they’re just… not as pretty and fluid as the simple (as simple as it reasonably can be), solid, dull-as-dishwater real number system.

In the set of “real numbers” (as used today) a number can be *defined* as the limit of the decimal expansion taken one digit at a time. For example, the number “2” is {2, 2.0, 2.00, 2.000, …}. The “square root of 2” is {1, 1.4, 1.41, 1.414, 1.4142, …}. The number, and everything you might ever want to do with it (as a real number), can be done with this sequence of ever-longer decimals (although, in practice, there are usually more sophisticated methods).

These sequences are “equivalent” and describe the same number if they get (arbitrarily) closer and closer to that same number forever. Two sequences don’t need to be identical to be equivalent. The sequences {1, 1.0, 1.00, 1.000, …} and {0, 0.9, 0.99, 0.999, …} both get closer and closer to each other and to the value “1” forever, so they’re equivalent. In absolutely every way that counts (in terms of the real numbers), the number “0.99999…” and the number “1” or “1.0000…” are exactly the same.
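Exact arithmetic makes the “closer and closer forever” claim concrete (a short Python sketch using fractions to avoid floating-point noise):

```python
from fractions import Fraction

# Build 0.9, 0.99, 0.999, ... exactly as 9/10 + 9/100 + 9/1000 + ...;
# the gap to 1 shrinks by a factor of ten with every extra digit.
partial = Fraction(0)
for n in range(1, 8):
    partial += Fraction(9, 10**n)
    print(n, float(partial), "gap to 1:", float(1 - partial))
```

No finite number of nines ever closes the gap, but the gap outlasts no positive number, which is exactly what the equivalence of the two sequences means.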

It does seem very bizarre that two numbers that look different can be the same, but there it is. This is *basically* the only exception; you can write things like “0.5 = 0.49999…”, but the same thing is going on.