The universe does a lot of stuff (for example, whatever you did today), but literally everything that ever happens increases entropy. In some sense, the increase of entropy is equivalent to the statement “whatever the most *overwhelmingly* likely thing is, that’s the thing that will happen”. For example, if you pop a balloon there’s a *chance* that all of the air inside of it will stay where it is, but it is overwhelmingly more likely that it will spread out and mix with the other air in the room. Similarly (but a little harder to picture), energy also spreads out. In particular, heat energy always flows from the hotter to the cooler until everything is at the same temperature (hence the name: “thermodynamics”).

If you get in front of that flow you can get some work done.

“Usable energy” is energy that hasn’t spread out yet. For example, the Sun has lots of heat energy in one (relatively small) place. Ironically, if you were in the middle of the Sun, that energy wouldn’t be accessible because there’s nowhere colder (nearby) for it to flow.

The spreading out of energy can be described using entropy. When energy is completely and evenly spread out and the temperatures are the same everywhere, then the system is in a “maximal entropy state” and there is no remaining usable energy. This situation is a little like building a water wheel in the middle of the ocean: there’s plenty of water (energy), but it’s not “falling” from a higher level to a lower level so we can’t use it.

The increase of entropy is a “statistical law” rather than a physical law. You’ll never see an electron suddenly vanish and you’ll never see something moving faster than light because those events would violate a physical law. On the other hand, you’ll never see a broken glass suddenly reassemble, not because it’s impossible, but because it’s *super* unlikely. A spontaneously unbreaking glass isn’t *physically* impossible, it’s *statistically* impossible.

However, when you look at really, really small systems you find that entropy will sometimes *decrease*. This is made more explicit in the “fluctuation theorem”, which says that the probability of a system suddenly experiencing a drop in entropy decreases exponentially with the size of the drop.

For example, if you take a fistful of coins that were in a random arrangement of heads and tails and toss them on a table, there’s a chance that they’ll all land on heads. That’s a decrease in the entropy of their faces, and there is absolutely no reason for that not to happen, other than being unlikely. But if you do the same thing with two fistfuls of coins it’s not twice as unlikely, it’s “squared as unlikely” (that should be a phrase). 10 coins all landing on heads has a probability of about 1/1,000, and the probability of 20 coins all landing on heads is about 1/1,000,000 = (1/1,000)^{2}. The fluctuation theorem is a lot more subtle, but that’s the basic sorta-idea.
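Just to put numbers on the coin example, here’s a quick sketch (the coin counts and the “1/1,000” figures are the ones from the paragraph above; 2^{10} is really 1,024):

```python
from fractions import Fraction

def all_heads_probability(n):
    """Probability that n fair coins all land heads: (1/2)^n."""
    return Fraction(1, 2) ** n

p10 = all_heads_probability(10)
p20 = all_heads_probability(20)
print(p10)              # 1/1024, i.e. about 1/1,000
print(p20)              # 1/1048576, i.e. about 1/1,000,000
print(p10 ** 2 == p20)  # True: doubling the coins squares the improbability
```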

The “heat death of the universe” is what you get when you start talking about the repercussions of ever-increasing entropy and never stop asking “and then what?”. Eventually every form of *usable* energy gets exhausted; every kind of energy ends up more-or-less evenly distributed, and without an imbalance there’s no reason for it to flow anywhere or do work. “Heat death” doesn’t necessarily mean that there’s no heat, just no concentrations of heat.

But even in this nightmare of homogeneity we can expect occasional, local decreases in entropy. Just as there’s a *chance* that a broken glass will unbreak, there’s a *chance* that a pile of ash will unburn, and there’s a *chance* that a young (fully-fueled) star will accidentally form from a fantastically unlikely collection of scraps. There’s even a chance of a fully functioning brain spontaneously forming. But just to be clear, these are all really unlikely. Really, *really* unlikely. As in “in an infinite universe over an infinite amount of time… maybe”. We do see entropy reverse, but only in tiny quantities (like fistfuls of coins or the arrangements of a few individual molecules). Something like the air on one side of a room (that’s in thermal equilibrium) suddenly getting 1° warmer while the other side gets 1° colder would literally be the least likely thing that’s ever happened. The universe suddenly “rebooting” after the heat death is… less likely than that. Multivac interventions notwithstanding.

Events that look like large-scale decreases in entropy have always turned out to be either a matter of not taking everything into account, or just a mistake.

Long story short: yes, after the heat death there should still be occasional spontaneous reversals of entropy, but they’ll happen exactly as often as you might expect. If you break a glass, don’t hold your breath. Get a new glass.

Worse, you would think that momentum would go up hand in hand with kinetic energy, when the formulas above instead show the latter going up much faster due to the exponent. This also doesn’t make sense.

I’m sure you can do some math to show why it has to be this way, but can you explain in non-math terms why kinetic energy and momentum behave this way?

**Physicist**: This is pretty unintuitive. In fact, historically this was a whole thing. Buckets of profoundly smart folk argued and debated about whether velocity (momentum) or velocity squared (energy) was the conserved quantity. Turns out it’s both. The difficulty is first that energy can change forms and second that up until the 20th century lab equipment was *terrible* (and often home-made).

Force is mass times acceleration: F=ma. If you apply a force over a *time* you get momentum and if you apply force over a *distance* you get energy. Acceleration times time is velocity, so it should more-or-less make sense that force times time is momentum: Ft = (ma)t = m(at) = mv = p. What’s a lot less obvious is energy.

A decent way to think about force and kinetic energy is to consider a falling weight. Gravity applies a constant force and thus a constant acceleration. If you tie a string to that weight you could power, say, a clock. Every meter it’s lowered it provides the same amount of energy, so lowering it 2 meters provides twice the energy as lowering it 1 meter.

Now imagine the weight free-falling that distance (instead of being slowly lowered). After the first meter it’ll already be moving, so it’ll fall through the second meter faster and in less time. The velocity gained is acceleration times time, so since it spends less time falling through that second meter, the falling weight spends less time accelerating and gains less speed.

But it still has to gain the same amount of energy every meter it falls. Otherwise weight-powered clocks would act *really* weird (a chain twice as long would yield only √2 as much energy). That means that at higher speeds you gain the same amount of energy from a smaller increase in speed. Or (equivalently) once you’re moving faster, the same increase in speed produces a greater increase in energy. This sometimes seems to produce paradoxes, but doesn’t.
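You can watch this trade-off happen numerically. Here’s a sketch (assuming a 1 kg weight and g = 9.8 m/s^{2}, values chosen purely for illustration): each meter of free fall adds the same m·g worth of energy while adding less and less speed.

```python
import math

g, m = 9.8, 1.0   # gravity (m/s^2) and mass (kg); illustrative values

# Speed after falling h meters from rest: v = sqrt(2*g*h)
speeds = [math.sqrt(2 * g * h) for h in range(5)]
speed_gains  = [b - a for a, b in zip(speeds, speeds[1:])]             # shrinks each meter
energy_gains = [0.5 * m * (b**2 - a**2) for a, b in zip(speeds, speeds[1:])]  # stays m*g

print([round(dv, 2) for dv in speed_gains])    # [4.43, 1.83, 1.41, 1.19]
print([round(dE, 2) for dE in energy_gains])   # [9.8, 9.8, 9.8, 9.8]
```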

With a little work and some calculus (see the answer gravy below) you can make this a lot more rigorous and you’ll find that the relationship between energy and velocity is exactly E = (1/2)mv^{2}. In fact, figuring out this sort of thing is a big part of what calculus is for.

If it bothers you that energy doesn’t scale proportional to velocity, keep in mind that we’ve got that covered: momentum. Ultimately, both momentum and energy are just names for numbers that can be calculated and for which the total never changes. That which we call momentum by any other name would be as conserved.

**Answer Gravy**: Energy or work is force times distance: E=FD. When all the variables are constant finding the work done is just a multiplication away. However, when the variables aren’t constant finding the work done requires integration. The question of this post is one of the big reasons behind why calculus was originally invented. If you want to learn intro physics then please, for your own sake, learn intro calculus *first*. It is so much easier to talk about position, velocity, and acceleration (intro physics) when you can say “acceleration is the derivative of velocity and velocity is the derivative of position”. If you start physics with just a *little* calculus background, then you and your physics professor will high-five at least twice daily. Guaranteed.

Instead of a single distance with a constant force, we chop up the distance into lots of tiny pieces dx long and add them up. So a better, more universally applicable way of writing “E=FD” is E = ∫_{0}^{D}F(x)dx, where the force is written “F(x)” to underscore that it may be different at different locations, x.

What we’ll calculate is the energy gained by an object that starts at rest, is pushed by a force F(x) over a distance D, and moves from position x=0 at time t=0 to position x=D at time t=T.

When we say the object started “at rest” we mean “v(0)=0”. Whatever v(T) is, it’s the velocity of the object when we’re done. So, the energy gained by an object that starts at rest and is pushed up to some speed v is E = ∫_{0}^{D}F(x)dx = ∫_{0}^{T}m(dv/dt)v dt = ∫_{0}^{v}mv′dv′ = (1/2)mv^{2}.
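You can check that integral numerically without any calculus symbols. This crude simulation (force, mass, and duration are arbitrary made-up values) accumulates the work F·v·dt step by tiny step and lands right on (1/2)mv^{2}:

```python
F, m, T, dt = 2.0, 1.0, 3.0, 1e-5   # arbitrary force (N), mass (kg), duration (s), step (s)

v = 0.0      # starts at rest: v(0) = 0
work = 0.0   # running total of F*dx = F*v*dt
t = 0.0
while t < T:
    work += F * v * dt   # energy gained during this tiny step
    v += (F / m) * dt    # acceleration a = F/m bumps the speed
    t += dt

print(work)              # total work done by the force
print(0.5 * m * v**2)    # kinetic energy (1/2)mv^2 -- matches, up to step-size error
```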

Huzzah for calculus!

So, you call that new number “j” (not to be confused with “j” from engineering, which is actually just “i” and presumably stands for “jamaginary number”). On the face of it, there’s nothing wrong with that; if we can make up i and work with it (to great effect), then making up j shouldn’t be terribly different. In the same way that we can write complex numbers as A+Bi, we should be able to write these new numbers as A+Bi+Cj; “trinions” as it were. However, it turns out that introducing a “j” requires us to also introduce a “k” (that also does the same thing as i and j).

Here’s why. You start by saying “i^{2} = j^{2} = -1″ and then asking “ij = ?”. You begin to get a sinking feeling when you square it: (ij)^{2 }= i^{2}j^{2} = (-1)(-1) = 1. This implies that ij = 1 or -1. But ij = 1 means that j = -i and ij = -1 means that j = i. There are more rigorous (confusing/complicated) ways to do this, but they ultimately boil down to “dude, we need another number”. That number is k (for “kamaginary” maybe).

So we’ve got i^{2} = j^{2} = k^{2} = -1 and ij = k. Fine. But there’s a big problem: quaternions can’t be commutative (mathematicians would call this big problem an “interesting property”, because they’re so chipper). “Commutative” means that order doesn’t matter, but for quaternions it must. Here comes a contradiction:

Firstly: (ij)^{2} = k^{2} = -1. This is basically a definition. It’s “True”.

Secondly (with commutativity): (ij)^{2} = (ij)(ij) = ijij = i^{2}j^{2} = (-1)(-1) = 1. Savvy readers will note that 1 ≠ -1. This can be fixed by declaring that ij = -ji.

Thirdly (declaring that ij = -ji): (ij)^{2} = (ij)(ij) = i(ji)j = i(-ij)j = -i^{2}j^{2} = -(-1)(-1) = -1. Fixed!

So far, this whole thing has been about why quaternions have the weird properties they do: there needs to be an i, j, *and* k, and you have to give up commutativity. Complex numbers are written “A+Bi” where i^{2} = -1. Quaternions are written “A+Bi+Cj+Dk” where i^{2} = j^{2} = k^{2} = -1, ij = k, jk = i, ki = j, and reversing any of these last three flips the sign.
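Those rules are enough to do arithmetic. Here’s a small sketch: quaternions as 4-tuples (a, b, c, d) meaning a + bi + cj + dk, with the whole multiplication table above baked into one formula.

```python
# Quaternions as tuples (a, b, c, d) representing a + bi + cj + dk.
def qmul(p, q):
    """Quaternion product, using i^2 = j^2 = k^2 = -1, ij = k, jk = i, ki = j."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(qmul(i, j))                     # (0, 0, 0, 1)  = k
print(qmul(j, i))                     # (0, 0, 0, -1) = -k: order matters!
print(qmul(qmul(i, j), qmul(i, j)))   # (-1, 0, 0, 0): (ij)^2 = -1, as required
```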

One of the most profoundly cool things about quaternions is that they have their own form of Euler’s equation. When u = bi + cj + dk with b^{2} + c^{2} + d^{2} = 1, e^{uθ} = cos(θ) + u·sin(θ). This can be derived the same way the regular Euler equation is derived, but using the fact that u^{2} = -1.

At this point it’s entirely natural (for a mathematical masochist) to ask “alright, but what if there were *yet another* square root of -1?”. Well it turns out that the next jump is harder and requires *seven* things that square to -1. Concerned at the prospect of running out of letters, clever mathematicians usually label these e_{1}, e_{2}, e_{3}, e_{4}, e_{5}, e_{6}, e_{7}, where (e_{1})^{2} = (e_{2})^{2} = … = (e_{7})^{2} = -1. An octonion number is written “A + Be_{1} + Ce_{2} + De_{3} + Ee_{4} + Fe_{5} + Ge_{6} + He_{7}“, where each of these (capital) letters is a real number. When you make the jump to octonions you not only lose commutativity, you lose associativity, which makes everything terrible. With octonions you can’t say that (ab)c = a(bc), which is a big loss.

Some terribly insightful old soul might now be driven to inquire “alright, but what if there were *still more* square roots of -1?”. Sure. Enter the Cayley-Dickson construction to create a “ladder” of as many of these number systems as your heart may ever desire, doubling in complexity every time.

Here’s the idea: you start with a number system, then you take pairs of those numbers and slap a couple of rules on them. Complex numbers are just a pair of real numbers with some algebra glued on. For example, {A,B} + {C,D} = {A+C, B+D} and {A,B}·{C,D} = {AC-BD, AD+BC}. You may as well write this (A+Bi) + (C+Di) = (A+C) + (B+D)i and (A+Bi)(C+Di) = (AC-BD) + (AD+BC)i. In addition to addition and multiplication, complex numbers also have an operation called “complex conjugation” (denoted with a bar or asterisk) which flips the sign of the imaginary part of a complex number. For example, (A+Bi)^{*} = A-Bi or equivalently {A,B}^{*} = {A,-B}. The same operation exists for quaternions. For example, (A+Bi+Cj+Dk)^{*} = A-Bi-Cj-Dk.

The Cayley-Dickson construction defines numbers “higher up the ladder” as pairs of numbers from “lower down the ladder”. So a complex number, Z, is a pair of real numbers, A and B, which we can write Z=A+Bi={A,B}. A quaternion number, Z, is a pair of complex numbers, A+Bi and C+Di, which we can write Z=A+Bi+Cj+Dk=A+Bi+Cj+Dij=(A+Bi)+(C+Di)j={A+Bi,C+Di}. You’ll never guess how you can write an octonion.

Addition is handled like this: {a,b} + {c,d} = {a+c, b+d}. Multiplication is handled like this: {a,b}·{c,d} = {ac - d^{*}b, da + bc^{*}} (one common sign convention among several). And conjugation is handled like this: {a,b}^{*} = {a^{*}, -b}. For the jump from real to complex numbers those conjugates don’t do anything, but they’re important for each of the higher number systems. With this weird looking formalism in hand you can go from real numbers to complex numbers to quaternions to octonions to sedenions and so on and on and on (if you *really* want to).
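The whole ladder fits in a few recursive functions. In this sketch, numbers are nested pairs (plain numbers at the bottom), and the multiplication rule is the sign convention written above — other conventions shuffle the conjugates and minus signs around but build the same structures:

```python
# Cayley-Dickson ladder: each rung's numbers are PAIRS of numbers from the rung
# below. Reals are plain ints; complex = (a, b); quaternions = pairs of complex;
# octonions = pairs of quaternions.
# Multiplication convention used here: {a,b}{c,d} = {ac - d*b, da + bc*}.

def conj(x):
    return (conj(x[0]), neg(x[1])) if isinstance(x, tuple) else x

def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def add(x, y):
    return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

def mul(x, y):
    if not isinstance(x, tuple):
        return x * y
    a, b = x
    c, d = y
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

def zero(depth):
    return (zero(depth - 1), zero(depth - 1)) if depth > 0 else 0

def unit(depth, index):
    """Basis element `index` at rung `depth` (1 = complex, 2 = quaternions,
    3 = octonions); index 0 is the real unit."""
    if depth == 0:
        return 1
    half = 2 ** (depth - 1)
    if index < half:
        return (unit(depth - 1, index), zero(depth - 1))
    return (zero(depth - 1), unit(depth - 1, index - half))

# Quaternions recover ij = k and ji = -k:
i, j, k = (unit(2, n) for n in (1, 2, 3))
print(mul(i, j) == k, mul(j, i) == neg(k))   # True True

# Octonions: every imaginary unit still squares to -1, but associativity is gone.
e = [unit(3, n) for n in range(8)]
print(all(mul(e[n], e[n]) == neg(e[0]) for n in range(1, 8)))   # True
nonassoc = [(a, b, c) for a in range(1, 8) for b in range(1, 8) for c in range(1, 8)
            if mul(mul(e[a], e[b]), e[c]) != mul(e[a], mul(e[b], e[c]))]
print(len(nonassoc) > 0)   # True: (ab)c != a(bc) for some triples
```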

It turns out that these higher number systems are useful. Complex numbers are ridiculously useful. Quaternions have a lot of interesting and *fairly* intuitive uses, like modeling rotations in 3 dimensions (which coincidentally is where we live), in part because they don’t have “special angles” that mess them up (e.g., the north pole is difficult to work with because it doesn’t have a definable longitude, but quaternions don’t have “north pole type problems”). While octonions are useful, they’re not useful in any easy-to-describe way (when was the last time you *really* needed 8 dimensions for a problem?). Turns out they’re useful in string theory, and presumably the higher number systems are useful as well. The harder mathematicians try to make mathematics that’s “pure” and free of the burden of being useful, the better they end up making our physics and computers.

The movement of Earth, as well as the Earth’s gravity, change how much time we experience compared to other objects in the universe. If we were to occasionally compare our clocks to clocks in tight orbits around black holes or neutron stars, we’d find that those clocks run slower than ours, and if we compare with clocks floating deep in the middle of nowhere, we’d find that those clocks run a little faster than ours.

However, there’s no “true” time to experience; you can never experience time wrong. Time is relative which means that we can compare how time is passing for any two things, but there’s no ultimate “clock of the universe” to compare with. Your watch, no matter where you are or how you’re moving, will always read 1 second per second. That is, you’ll never see *yourself* in fast-forward or slow-motion. In that sense we can’t help but experience time correctly. Each of us may as well declare that our clock is the One True Clock, and everyone else’s is wrong.

Simply moving fast in a straight line isn’t enough to make your clock objectively run slowly compared to other clocks. If two folk run past each other they *both* see the other experiencing less time and, weirdly enough, this isn’t a paradox and they’re both “right”. There are two effects that do objectively (in a way that everyone in the universe can agree) cause clocks to run slow: the very poorly named “twin paradox” and gravity.

In spacetime the “length” (spacetime interval) of a trip is measured by a clock that makes that trip. It turns out (this is not obvious, but it can be understood) that the shortest trips are the ones that are the most circuitous. If you watch the ball drop on New Years and stay put for a year until the next ball drop, then you’ve made a pretty straight trip (in spacetime) between those two events. This path is straight, so it’s long, and your clock will read more. Instead, if you spend that year zipping around the solar system as fast as you can before coming back for the next New Years, then your path was decidedly not straight (in spacetime). This all-over-the-place path is short and your clock will read less. “The longest spacetime distance between two points is a straight line” may sound utterly insane, but it works. Long story short: if your trip involves a loop, then your clock is falling behind.

As it happens, the Earth spins on its axis and orbits the Sun and, along with the rest of the solar system and all the stars that we can see, orbits the galaxy as well. Each of these is a loop, not a straight line, and each time the Earth makes one of these circuits it falls a little behind any clock that didn’t. This is a *little* hypothetical: in order to get a clock to sit in the same place while the Earth does an orbit to meet up with it every year, it would need a big rocket (since it’s not orbiting the Sun, it would otherwise fall into it).

As fortune would have it, you can just use the gamma factor (the “Lorentz factor”), γ = 1/√(1 - v^{2}/c^{2}), to find the time dilation caused by running in a loop (for more complex paths, like those with different speeds, you still use the gamma factor but you need calculus too). The velocity of the spinning Earth at the equator is about 0.5 km/s, we orbit the Sun at about 30 km/s, and the whole kit and caboodle orbits the galaxy at about 200 km/s. The difference in time experienced between people living in Longyearbyen (near the pole) and people living in Ecuador (near the equator) is about one part in a trillion, which gives those proud Norwegians an extra second every 25 thousand years. Don’t spend that second all in one place, Norwegians.

The time dilation from the biggest of these speeds, our movement around the galaxy, amounts to one part in 4.5 million. That amounts to an extra second every couple months, or an extra 50 or so solar years for every galactic year (roughly 230 million years).
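Plugging those three speeds into the gamma factor is a one-liner; the speeds below are the round figures quoted above, and the “1 part in 4.5 million” pops right out.

```python
import math

c = 299_792_458.0   # speed of light, m/s

def gamma(v):
    """Lorentz factor: a clock moving at speed v runs slow by a factor of gamma."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

for name, v in [("equatorial spin", 0.5e3), ("solar orbit", 30e3), ("galactic orbit", 200e3)]:
    # gamma - 1 is the fractional slowdown; its reciprocal is "1 part in ..."
    print(f"{name}: slow by about 1 part in {1.0 / (gamma(v) - 1.0):,.0f}")
```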

The second effect to consider is the curvature of spacetime caused by (or which is) gravity. Things that are lower experience less time than things that are higher. This can be explained (and even verified) by measuring how the frequency of light changes when it travels vertically in a gravity field. The details are terrible, but for most practical purposes (“most practical purposes” = “not black holes”) you can find the time dilation between two altitudes by figuring out how fast something would be moving if it fell from the higher to the lower and plugging that v into γ = 1/√(1 - v^{2}/c^{2}).

It’s reasonable to say that if you’re infinitely far away from something then you’re outside of its gravitational influence and your clock should be running “right”. If you fell from “infinitely far away” to the surface of something big, you’d be moving at the excellently named “escape velocity” of that big something. If you try to leave a planet moving slower than the escape velocity, then eventually you’ll fall back. Excellent name.

To escape the Earth from the ground you need 11 km/s. More difficult is escaping the Sun (from Earth’s orbit) which requires 42 km/s. To leave our galaxy (from here) you need somewhere between 500 and 600 km/s. This time dilation from the Milky Way’s gravity has the biggest effect of those mentioned here.

The spinning of the Earth and the orbiting of the Sun do affect the amount of time you experience, but not by a lot. Despite the Earth being much closer to us than the bulk of the galaxy, it’s our orbit around, and position in, the galaxy that affects our experience of time the most.

By virtue of being a member of the Milky Way, we experience about 1 second per week less than someone hanging out deep in the intergalactic void. Most of that comes from the effects of our galaxy’s gravity directly, not from the motion of our planet.
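As a sanity check on that second-per-week figure, here’s the escape-velocity trick in code. The 550 km/s is an assumed middle of the 500-600 km/s range quoted above, and this is the weak-field approximation (so, emphatically not for black holes):

```python
import math

c = 299_792_458.0          # speed of light, m/s
week = 7 * 24 * 3600.0     # seconds in a week

def deep_well_slowdown(v_escape):
    """Fractional time dilation for a clock sitting where the escape velocity is
    v_escape, relative to a clock infinitely far away (escape-velocity trick)."""
    return 1.0 / math.sqrt(1.0 - (v_escape / c) ** 2) - 1.0

slow = deep_well_slowdown(550e3)   # assumed galactic escape velocity here, m/s
print(f"{slow * week:.2f} seconds per week")   # about 1 second per week
```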

I got quite the challenge from my father-in-law. The problem is well defined, but I’m having difficulty finding a meaningful answer. The reason he asked me is that I’m an engineering student and he is in the windmill industry.

Before they attach the actual mill to the concrete foundation, it has to be absolutely level. If not, a tall mill would be quite offset, even with a very small angle at the base. To tackle this, they use two angle gauges and measure in two directions. The angle gauges are connected, so you know the angle between them: their mutual angle. I’m supposed to find a way to convert these 3 inputs (angle 1, angle 2, and the mutual angle) into 2 outputs (the steepest angle and its direction relative to the gauges).

**Physicist**: This is a gorgeous question that leads through some pretty math and ends with an elegant answer. If you’ve taken a class or two that used lots of vectors, then this is a cute exploration of what you can do with surprisingly little. If you’ve never taken a class or two that used lots of vectors, then please do: it’s fun stuff. You get to draw pictures and everything.

So you’ve got a flat slab that isn’t quite level. Two angle gauges (with plumb lines or bubbles or whatever) are placed on the slab in two directions. Define **a** and **b** as the directions of the two gauges on the ground and **z** as up. These may as well be unit vectors, so: they are.

Define the angle between **a** and **b** as φ (the “mutual angle”) and the angles between each of the gauges and **z** as α and β.

Measuring the angles between these vectors means that we know the sine and cosine of these angles, and knowing that means that we know the dot product and the magnitude of the cross product, since **u**·**v** = |**u**||**v**|cos(θ) and |**u** × **v**| = |**u**||**v**|sin(θ) for any two vectors **u** and **v** with angle θ between them.

Finally, since the windmill will be built perpendicular to the slab, it will be built perpendicular to both **a** and **b**. When a physicist (hell, even a mathematician) hears “I need a vector perpendicular to two other vectors” they convulsively respond “cross product those mothers”. If **w** is the windmill’s “up”, then **w** = **a** × **b**. If you were standing on the slab where the tails of **a** and **b** meet, then **a** would be on the right and **b** would be on the left (that’s the right hand rule).

If we project **z** onto the **a**, **b** plane, the result will be pointing in the direction opposite the direction of the windmill’s lean. Define this projection as **p**. The direction of **p** is the direction that the windmill needs to be “leaned” so that it will stand straight.

The questions (way back at the top of this page) now boil down to:

1) What is the angle between **w** and **z**?

2) What are the angles between **p** and **a**, and between **p** and **b**?

For #1, it turns out that the cross product is easier to work with. Define Θ as the angle between **w** and **z**, so that |**w** × **z**| = |**w**||**z**|sin(Θ) = sin(φ)sin(Θ), since |**w**| = |**a** × **b**| = sin(φ).

We also know that: **w** × **z** = (**a** × **b**) × **z** = **b**(**a**·**z**) - **a**(**b**·**z**) = cos(α)**b** - cos(β)**a**, so |**w** × **z**|^{2} = cos^{2}(α) + cos^{2}(β) - 2cos(α)cos(β)cos(φ).

And therefore: sin(Θ) = |**w** × **z**|/sin(φ) = √(cos^{2}(α) + cos^{2}(β) - 2cos(α)cos(β)cos(φ))/sin(φ).

In the event that φ = 90° (and honestly, why wouldn’t you want your gauges perpendicular?), then this simplifies a lot: sin(Θ) = √(cos^{2}(α) + cos^{2}(β)).

For #2 we find the projection, **p**, and dot it with **a** and **b**. The projection onto the slab is **p** = **z** - (**z**·**w**/|**w**|^{2})**w**. That is, it’s the up direction minus whatever component points in the direction of the windmill. Dotting with **a**: **p**·**a** = **z**·**a** - (**z**·**w**/|**w**|^{2})(**w**·**a**) = **z**·**a** = cos(α).

In that last step you know that **w**·**a** = 0, since the tower, **w**, and any direction on the slab it’s on, like **a**, are perpendicular.

Defining θ_{a} as the angle between the projection and **a**: cos(θ_{a}) = (**p**·**a**)/|**p**| = cos(α)/sin(Θ), since |**p**| = sin(Θ).

Again, in the event that φ = 90°, this simplifies: cos(θ_{a}) = cos(α)/√(cos^{2}(α) + cos^{2}(β)).

Similarly, cos(θ_{b}) = cos(β)/√(cos^{2}(α) + cos^{2}(β)).

So, if you’ve got the inclinometer readings, α and β, then you can find the lean of the tower, Θ, and the direction you should push it so that it doesn’t lean, θ_{a} and θ_{b}, from **a** and **b** respectively. This is a beautiful example of math leading to a cute, relatively simple solution that you probably couldn’t guess.
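Since nothing beats a numeric check: the sketch below builds a slab with a known 2° tilt (all of the particular values are made up for the test), reads off the two gauge angles α and β, and confirms that the φ = 90° formula recovers the tilt.

```python
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def normalize(u):
    n = math.sqrt(dot(u, u))
    return tuple(ui / n for ui in u)

# Build a slab tilted by 2 degrees in a known direction (made-up test values).
tilt = math.radians(2.0)
w = (math.sin(tilt), 0.0, math.cos(tilt))   # slab's normal = the windmill's "up"
a = normalize(cross((0.0, 1.0, 0.0), w))    # one gauge direction, lying in the slab
b = cross(w, a)                              # the other gauge, perpendicular to the first

z = (0.0, 0.0, 1.0)
alpha = math.acos(dot(a, z))   # what gauge 1 measures (angle from vertical)
beta  = math.acos(dot(b, z))   # what gauge 2 measures

# The mutual angle is 90 degrees here, so: sin(lean) = sqrt(cos^2(a) + cos^2(b))
lean = math.asin(math.sqrt(math.cos(alpha)**2 + math.cos(beta)**2))
print(math.degrees(lean))   # recovers the 2-degree tilt
```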


**Physicist**: There are absolutely different degrees of entanglement!

The kind you usually hear about are “maximally entangled states”, but basically everything is a little entangled. Not because of the big bang, but because every-day interactions generate and break a little entanglement all the time. Entanglement has a lot in common with correlation: if you know something about one thing, you’ll know something about the things it’s correlated with.

Correlations crop up all the time when things interact. For example, if you leave your car in a parking lot and come back to find a dent with a little red paint in it, then you know that somewhere nearby is a red car with another dent. The random things about your dent (the height above the ground, the severity, etc.) will be similar to those properties of the corresponding dent on the other car. You and a damnable ne’er-do-well have correlated cars because looking at the dent on one tells you something about the dent on the other; not because they have a spooky cosmic connection, but because they physically ran into each other. Entanglement is a little more subtle (what with all the quantum mechanics), but not a hell of a lot more subtle. Nothing fancy.

Just to be over-precise, when we say that things are entangled what we really mean is that some of their *properties* are entangled. For example, the polarization of two photons might be entangled while their positions are not, or vice versa.

The homogeneity of the universe (the “more-or-less-the-same-everywhere-ness” of the universe) is often cited as evidence that all the matter in the very early universe briefly had a chance to mix around, but that doesn’t have too much of an impact on entanglement. There’s something called “monogamy of entanglement” that says that maximally entangled qubits only appear in pairs, and maximally entangled states are the ones that really do interesting things. This can be generalized a bit to say “the more entangled two things are, the less they’re entangled with anything else”. Unfortunately, in order for such a pair to persist until today it would need to be left almost entirely unharassed by everything else for billions of years. However, if the universe is anything, it’s old and messy. The entanglement we (people) create on purpose requires careful isolation and control of the stuff in question.

Even *worse*, if you have access to only one entangled particle, there’s no way to tell that it’s entangled. All of the fancy entanglement effects you hear about *always* require both, or at least most, of the entangled particles.

So you (every bit of you) can be entangled with other stuff in the universe (you kinda have to be). Entanglement is generated and broken by interactions, so you’re more entangled with stuff that’s nearby (in an astronomical sense). But most importantly, it doesn’t matter; random atomic-scale correlations are a lot like random atomic-scale noise.

Even less exciting, if you (personally) are the thing that’s entangled, your experience is entirely ordinary; the thing you’re entangled with will always be in a single state (from your point of view). All of the fancy experiments we do with entangled particles always involve particles being entangled with each other, because when they become entangled with the person doing the experiment it looks like “wave function collapse” (suddenly it appears to be in only one state) and that’s boring. Similarly, if you and a distant alien are entangled it does not mean you have a spooky connection (groovy, spiritual, or otherwise); it means that they will already be in a single state (from your mutual points of view) before you ever meet each other.

Which is exactly the sort of thing you’d never notice.
