## Q: What role does Dark Matter play in the behavior of things inside the solar system?

Physicist: To a stunningly good approximation: zero.

The big difference between dark matter and ordinary matter is that dark matter is “aloof” and doesn’t interact with other stuff.  Instead, it cruises by like “ghost particles”.  Matter on the other hand smacks into itself and clumps together.  The big commonality is that both of them create and are affected by gravity.

If you have a big ball of matter (doesn’t matter what kind), then both ordinary and dark matter will be pulled by its gravity.  However, there’s no reason for the dark matter to ever fall out of orbit since there’s nothing around to stop its motion.  Normal matter tends to “get in its own way”.

In fact, if it weren’t for the gravitational influence of dark matter, we would have no reason to suspect its existence at all.  Because dark matter doesn’t clump, it stays really spread out and forms one big, roughly spherical cloud around the galaxy.  Matter has more of a “big-clump-or-nothing” deal going on.  If you start with a big cloud of ordinary matter, then eventually (it can take a while) you’ll have one or two huge chunks (stars, binary stars, that sort of thing), and the few crumbs that escape tend to end up clumping together themselves (planets, moons, comets, your mom, etc.).  If you feel like impressing people at your next science party, this is called “accretion”.

Any attempt to picture the Sun and nearby stars to scale looks like nothing.  This is an attempt in which every square is 10 times the size of the previous square (and therefore 1,000 times the volume).  Point is, when ordinary matter concentrates, it really, really, really concentrates.

In the above picture the dark matter is spread out uniformly.  Overall there’s a lot more of it (about 10 times as much, give or take), but here in the solar system the balance is tipped overwhelmingly in favor of ordinary matter.  But more than that, since dark matter is spread evenly (and thinly) all around us, it doesn’t pull in any particular direction.  There’s about the same amount in every direction you point, so there’s very little net pull in any direction.  Until you start considering galactic scales at least.

Ordinary matter clusters in big blobs, so when it pulls it tends to pull in one direction (right).  Dark matter does pull, but it pulls on every particle evenly in every direction, which is a lot like not pulling at all (left).
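That cancellation is easy to check numerically.  Here’s a minimal Monte Carlo sketch (the cloud’s shell radii and the number of sample points are arbitrary choices for illustration, not physics from this post): add up the inverse-square pulls on a central point from randomly placed bits of a uniform surrounding cloud, and the net force all but vanishes compared to the total amount of tugging.

```python
# Monte Carlo sketch: a uniform cloud of "dark matter" surrounding a
# central point pulls it almost equally in every direction, so the
# net force nearly cancels.
import math
import random

random.seed(1)
net = [0.0, 0.0, 0.0]
total = 0.0
for _ in range(200_000):
    # rejection-sample a point in a thick spherical shell (radius 1 to 2)
    while True:
        p = [random.uniform(-2.0, 2.0) for _ in range(3)]
        r = math.sqrt(p[0]**2 + p[1]**2 + p[2]**2)
        if 1.0 <= r <= 2.0:
            break
    f = 1.0 / r**2                 # inverse-square pull from this bit of cloud
    for i in range(3):
        net[i] += f * p[i] / r     # pull points from the center toward the bit
    total += f

net_mag = math.sqrt(sum(x * x for x in net))
print(net_mag / total)             # a tiny fraction: the pulls nearly cancel
```

The ratio shrinks toward zero as you add more sample points, which is the “pulling evenly in every direction is a lot like not pulling at all” effect in numbers.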

If you do consider things on a galactic scale (~100,000 lightyears), then there’s more dark matter in the direction of Sagittarius (in December this is overhead around midnight).  Technically, since we’re most of the way to the edge of the galactic disk, and the center of the galaxy is behind the stars in Sagittarius, most of the stuff in the Milky Way is more or less in that direction.  That imbalance makes the Sun and all the other nearby stars (“nearby” = “visible to the eye”) orbit the galaxy, but it also helps Earth and everything else around us do the same.  Astronauts in orbit appear weightless because their ship and their bodies are both orbiting the Earth.  They are both in “free-fall”.  Similarly, the Earth, the Sun, and even everything in our stellar neighborhood are all in free-fall around the galaxy.  So while the preponderance of dark matter in the galaxy does cause the solar system to slowly sweep out a seriously huge circle (the “galactic year” is about 250 million Earth years), it does not cause things in the solar system to move with respect to each other.
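To put a number on how huge that circle is, here’s some back-of-the-envelope Python.  The 26,000 light-year orbital radius of the Sun is an assumed round figure (it isn’t stated above); only the 250-million-year galactic year comes from the text.

```python
# Rough arithmetic for the Sun's lap around the galaxy.
import math

LY_IN_KM = 9.461e12        # kilometers per light-year
radius_ly = 26_000         # assumed orbital radius of the Sun, light-years
period_years = 250e6       # the "galactic year" from above

circumference_km = 2 * math.pi * radius_ly * LY_IN_KM
period_seconds = period_years * 365.25 * 24 * 3600
speed_km_s = circumference_km / period_seconds

print(f"orbit: ~{2 * math.pi * radius_ly:,.0f} light-years around")
print(f"speed: ~{speed_km_s:.0f} km/s")
```

Under those assumptions the whole solar system is free-falling around the galaxy at a couple hundred kilometers per second, without anything inside it noticing.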

Hopefully, dark matter has more tricks than just gravity.  If it has no other way of interacting with stuff, then that makes it really difficult to study.  We can study things like stars, rocks, and puppies because they’re all “strongly interacting”.  Shine light on them?  Sure.  Poke them?  Why not.  But dark matter (whatever it is) is light-proof and poke-proof, and that’s deeply frustrating.

Posted in -- By the Physicist, Astronomy, Physics | 16 Comments

## Q: Are some number patterns more or less likely? Are some betting schemes better than others?

Physicist: First, don’t gamble unless you can be sure you won’t get caught cheating or you enjoy losing money.

Games of chance come in two flavors: “completely random” and “not quite completely random”.  It’s not always obvious which is which, and it often barely matters.  A good way to tell the difference is to imagine showing the game as it presently is to Leonard Shelby (that guy who can’t form new memories from Memento).  If after extensive investigation he always has the same advice (“I don’t know, bet on red?”), then the game is memoryless.  “Memoryless” is a genuine fancy math term, and refers to systems where the future results are unaffected by the past results.

Leonard Shelby from Memento.  If a game resets and doesn’t “remember” anything, then there’s no overall pattern, and no way to “outsmart” it.  For these games Leonard is on an equal footing with everyone else.

Say there are some folk playing a really simple game called “guess the number”.  You guess a number, roll a die, and if you guessed right you win.  For all its pomp and glitter, this is essentially what gambling is.  Don’t gamble.

Now say that a few rounds have already been played, and on the fourth round a 3 is rolled.  Lenny would experience that fourth round differently than most other people.

The same series of rounds as seen by someone without memory (top) and as seen by someone with memory (bottom).

Lenny sees a 3 and moves on with his life.  He knows that a 3 is as likely as any other number, so he isn’t surprised.  It’s only those of us burdened with memory who see “patterns” in these random numbers (fun fact: this is called “apophenia”).  Someone who had seen the first rounds churn out a string of 3s might think that the fourth round will be less likely or more likely to be a 3.  However, assuming that the dice are fair, it turns out that Lenny’s intuition is better than ours; each roll of the dice is completely independent of all the other rolls.

The chance of getting these four 3s in a row is $\left(\frac{1}{6}\right)^4 = \frac{1}{1296}$.  That’s clearly pretty unlikely, but it’s exactly as unlikely as every other possible combination.  “1, 2, 3, 4” or “2, 6, 5, 5” or whatever else all show up with the same probability.  There are some subtleties in combinatorics, but as long as you keep track of the order it’s fairly straightforward.  “3, 3, 3, 3” is definitely unlikely, but so is every other possibility.  If the lottery drew the same number repeatedly, or a string of consecutive numbers, or some other obvious pattern, it would be surprising, but it would be no more or less likely than any other sequence of numbers.  That said, if it keeps happening, then you may want to explore why.  For example, there may be trickery involved.
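You can check this with a quick simulation: count how often “all threes” and the obviously-patterned “1, 2, 3, 4” show up in a big pile of four-roll rounds.  Both hover around the same 1-in-1296 rate.

```python
# Monte Carlo check that "3,3,3,3" is exactly as likely as any other
# specific ordered sequence of four fair dice rolls (~1/1296 each).
import random

random.seed(0)
trials = 1_296_000
all_threes = 0
straight = 0
for _ in range(trials):
    rolls = tuple(random.randint(1, 6) for _ in range(4))
    if rolls == (3, 3, 3, 3):
        all_threes += 1
    if rolls == (1, 2, 3, 4):
        straight += 1

expected = trials / 6**4   # = trials / 1296 = 1000
print(f"expected ~{expected:.0f} each; got {all_threes} and {straight}")
```

Neither pattern is favored; memory just makes one of them feel special.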

What we expect to see is what fancy math folk call a “typical sequence”: big jumbles of numbers with no discernible rhyme or reason.  Every string of (fair) rolled dice is equally likely, and while randomly emerging “patterns” will occasionally show up, they don’t change the math and can’t be predicted.  Of course, they do make for better stories.

This is from xkcd.  Clearly.

Games like craps or roulette are memoryless, which means that notions like “hot tables” and “runs” are completely baseless.  On the other hand, games like blackjack are not quite memoryless.  Since the cards are pulled from the same shoe, if you sit and watch the cards for long enough, you can predict which cards will be drawn next slightly better than someone who hasn’t.
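Here’s a minimal illustration of that memory, using a single 52-card deck rather than a real multi-deck shoe: once you’ve watched a few low cards leave the deck, the exact probability that the next card is a ten-value nudges upward.

```python
# Why blackjack isn't memoryless: the exact probability that the next
# card is a ten-value (10, J, Q, K), before and after seeing low cards.
from fractions import Fraction

deck = {rank: 4 for rank in
        ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]}
tens = {"10", "J", "Q", "K"}

def p_ten(deck):
    """Exact probability that the next card drawn is a ten-value."""
    return Fraction(sum(deck[r] for r in tens), sum(deck.values()))

fresh = p_ten(deck)                     # 16/52 = 4/13
for seen in ["2", "3", "4", "5", "6"]:  # watch five low cards get dealt
    deck[seen] -= 1
after = p_ten(deck)                     # 16/47: slightly better odds

print(fresh, "->", after)
```

A fair die “resets” every roll; a shoe doesn’t, and that sliver of difference is the entire basis of card counting.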

Lotteries are also memoryless.  So, assuming the lottery is fair, the only way you can increase your probability of winning is to buy more tickets (but please don’t).  Number order and choice make no difference whatsoever.  Unfortunately, assuming that the lottery is fair is a big assumption that isn’t necessarily true.  Keep in mind that lotteries, like all organized gambling institutions, are not created so that someone will win; they’re created so that everyone will lose.

If you want to win a lottery, far and away the best way to do it is to set one up yourself (which is illegal almost everywhere there are laws).  Not to put too fine a point on it, but people who run big lotteries and casinos are massive ——-s.  Gambling is seriously bad news, pretty much across the board (the owners do well).

Statistically speaking, this is a better use for your money than any form of gambling.

There are much better ways to throw away money than playing the lottery.  Before you think about giving away no-strings-attached money to people who don’t need it, consider trying: “cashfetti” cannons, recreating that scene from Indecent Proposal, money origami, lighting cigars, lining animal pens, breaking chopsticks, and eating it to gain its power.

A lot of folk have written in asking for mathematically-based gambling advice and, details aside, here it is: Don’t.  The only way to win is not to play.

## Q: Why does iron kill stars?

Physicist: Every now and again a physicist finds themselves in front of a camera and, either through over-enthusiasm or poor editing, is heard to say something that is “less nuanced” than they may have intended.  “Iron kills stars” is one of the classics.

Just to be clear, if you chuck a bunch of iron into a star, you’ll end up with a lot of vaporized iron that you’ll never get back.  The star itself will do just fine.  The Earth is about 1/3 iron (effectively all of that is in the core), but even if you tossed the entire Earth into the Sun, the most you’d do is upset Al Gore.  Probably a lot.

Stars are always in a balance between their own massive weight, which tries to crush their cores, and the heat generated by fusion reactions in the core, which pushes all that weight back out.  The more the core is crushed, the hotter and denser it gets, which increases the rate of fusion reactions (increases the core’s rate of “explodingness”), which pushes the bulk of the star away from the core again.  As long as there’s “fuel” in the core, any attempt to crush it will result in the core pushing back.

Young stars burn hydrogen, because hydrogen is the easiest element to fuse and also produces the biggest bang.  But hydrogen is the lightest element, which means that older stars end up with a bunch of heavier stuff, like carbon and oxygen and whatnot, cluttering up their cores.  But even that isn’t terribly bad news for the star.  Those new elements can also fuse and produce enough new energy to keep the core from being crushed.  The problem is, when heavier elements fuse they produce less energy than hydrogen did.  So more fuel is needed.  Generally speaking, the heavier the element, the less bang-for-the-buck.

The “nuclear binding energy” of a selection of elements by atomic weight.  The height difference gives a rough idea of how much energy is released by fusion.  Notice that there’s a huge jump between, say, hydrogen (H1) and helium (He4), but a much smaller jump between aluminum (Al27) and iron (Fe56).
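The shrinking jumps in that curve can be sketched with a handful of standard binding-energy-per-nucleon values.  These are approximate textbook figures assumed for illustration, not numbers read off the figure above:

```python
# Diminishing returns of fusion: approximate binding energy per nucleon
# (MeV), from standard textbook tables. Each fusion step releases
# roughly the *difference* in binding energy per nucleon.
binding = {
    "H-1":  0.00,
    "He-4": 7.07,
    "C-12": 7.68,
    "O-16": 7.98,
    "Si-28": 8.45,
    "Fe-56": 8.79,   # near the peak of the curve
}

chain = ["H-1", "He-4", "C-12", "O-16", "Si-28", "Fe-56"]
gains = []
for light, heavy in zip(chain, chain[1:]):
    gain = binding[heavy] - binding[light]
    gains.append(gain)
    print(f"{light:>5} -> {heavy:<5} releases ~{gain:.2f} MeV per nucleon")
```

The first step (hydrogen to helium) dwarfs everything after it, and by iron there’s nothing left to squeeze out; that’s the “ash in the fire” situation in numbers.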

Iron is where that slows to a stop.  Iron collecting in the core is like ash collecting in a fire.  It’s not that it somehow actively stops the process, but at the same time: it doesn’t help.  Throw wood on a fire, you get more fire.  Throw ash on a fire, you get hot ash.

So, iron doesn’t kill stars so much as it is a symptom of a star that’s about to be done.  Without fuel, the rest of the star is free to collapse the core without opposition, and generally it does.  When there’s a lot of iron being produced in the core, a star probably has only hours, or even seconds, left to live.

Of course there are elements heavier than iron, and they can undergo fusion as well.  However, rather than producing energy, these elements require additional energy to be created (throwing liquid nitrogen on a fire, maybe?).  That extra energy (which is a lot) isn’t generally available until the outer layers of the star come crushing down on the core.  The energy of all that falling material drives the fusion rate of the remaining lighter elements way, way, way up (supernovas are super for a reason), and also helps power the creation of the elements that make our lives that much more interesting: gold, silver, uranium, lead, mercury, whatever.

There are more than a hundred known elements, and iron is only #26.  Basically, if it’s heavy, it’s from a supernova.  Long story short: iron doesn’t kill stars, but right before a (large) star dies, it is full of buckets of iron.

Posted in -- By the Physicist, Physics | 3 Comments

## Q: According to relativity, things get more massive the faster they move. If something were moving fast enough, would it become a black hole?

Physicist: Nopers!  Although that would be an amazingly cool super-weapon.

Physics can be pretty complicated, but what makes physics different from lesser sciences, like Calvinball, is that physics has rules that are absolute.  While the consequences can sometimes be difficult to predict (technically, platypi are a direct result of fundamental physical laws), the rules themselves tend to be pretty straightforward.  In the case of relativity there are two big starting rules:

#1)  All of physics works exactly the same whether you’re moving or sitting still.  So, in absolutely every way that counts, there’s no difference.

#2) The speed of a passing light beam is always the same.

There are a lot of bizarre things that fall out of that second rule (generally in not completely obvious ways).  Among them is the fact that the equations Newton figured out for momentum and energy, $P=mv$ and $E = \frac{1}{2}mv^2$, are actually only approximations.  In particular, the equation for momentum is actually $P=\gamma mv$, where $\gamma = \frac{1}{\sqrt{1-\frac{v^2}{c^2}}}$.  That $\gamma$ describes a lot of relativistic phenomena.  It’s very close to one for low speeds, which makes $P=\gamma mv \approx mv$ (which is why Newton never noticed it).  The greater the speed, the bigger $\gamma$ becomes, and the more “$\gamma m$” looks like a bigger mass.  But keep in mind: that speed is relative.  You can only see that “increased mass” in something else, because you can never move relative to yourself.
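Here’s that behavior in a few lines of Python.  The first speed, roughly $3.7\times10^{-5}$ of light speed, is an assumed approximate figure for Apollo 10 (about 11 km/s); everything else is just the formula.

```python
# The relativistic factor gamma = 1 / sqrt(1 - (v/c)^2).
# At everyday speeds it's indistinguishable from 1, which is why
# Newton's p = mv worked so well for centuries.
import math

def gamma(v_over_c):
    """Relativistic factor for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

for v in [3.7e-5, 0.1, 0.5, 0.9, 0.99, 0.9999]:
    print(f"v = {v:>8} c  ->  gamma = {gamma(v):.7f}")
```

Notice that even at half of light speed gamma is only about 1.15; it only blows up as you push arbitrarily close to $c$.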

Values of gamma vs. fraction of light speed. Being close to 1 at low speeds means that the “error” from relativistic effects is very small.  Apollo 10 is the fastest (Earth relative) any human has ever moved.

So finally, here’s the point:

So long as the particle or object in question isn’t currently slamming into anything that’s moving differently (or “relatively”), then you can just apply rule #1.  No matter how fast or slow something is moving, it will always behave exactly the same way it would if it were sitting still.  So, if you accelerate a rock to 99.99999999% of light speed (or thereabouts), then it will do exactly what a rock at 0% of light speed does: be a rock.  A fast rock, sure, but it wouldn’t suddenly do anything a regular rock wouldn’t.  I’m not knocking rocks, they’re fine and all, it’s just that they’re not black holes, which are terribly exciting.

It turns out that gravity is way more complicated than Newton first proposed.  The same set of theories (special and general relativity) that accurately predicted that fast objects behave as though they were more massive from our “stationary” perspective also predicted a whole mess of weird things about gravity, including the fact that gravity itself always obeys rules #1 and #2.  So if a thing isn’t a black hole when it’s sitting still, then it isn’t a black hole when it’s moving.

Posted in -- By the Physicist, Physics, Relativity | 8 Comments

## Q: How do we know that atomic clocks are accurate?

Physicist: It turns out that there is no way, whatsoever, to look at a single clock and tell whether or not it’s accurate.  A good clock isn’t so much accurate as it is consistent.  It takes two clocks to demonstrate consistency, and three or more to find a bad clock.

Left: Might have the right time? Right: Does have the right time.

Just to define the term: a clock is a device that does some predictable physical process, and counts how many times that process is repeated.  For a grandfather clock the process is the swinging of a pendulum, and the counting is done by the escapement.  For an hour glass the process is running out of sand and being flipped over, with the counting done by hand.

A good hour glass can keep time accurate to within a few minutes a day, so sunrise and sunset won’t sneak up on you.  This is more or less the accuracy of the “human clock”.  Balance wheel clocks are capable of accuracies to within minutes per year, which doesn’t sound exciting, but really is.

Minutes per year is accurate enough to do a bunch of historically important stuff.  For example, you can detect that the speed of light is finite.  It takes about 16 minutes for light to cross Earth’s orbit, and we can predict the eclipsing of Jupiter’s moons to better than 16-minute accuracy.  In fact, telescopes, Jupiter’s moons, and big books full of look-up tables were once used to tell time (mostly on ships at sea).

Minutes per year is enough to determine longitude, which is a big deal.  You can use things like the angle between the North Star and the horizon to figure out your latitude (north/south measure), but since the Earth spins there’s no way to know your longitude (east/west measure) without first knowing what time it is.  Alternatively, if you know your longitude and can see the Sun in the sky, then you can determine the time.  Which way you use it depends on which you are trying to establish.
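The longitude arithmetic is simple enough to sketch directly.  The “noon at Greenwich” reference and the example times below are illustrative choices, not from the text:

```python
# The Earth turns 360 degrees in 24 hours, so each hour of difference
# between your local solar noon and noon at your reference meridian
# (say, Greenwich) is 15 degrees of longitude.
def longitude_degrees(greenwich_time_at_local_noon):
    """Degrees east of Greenwich (negative means west)."""
    return (12.0 - greenwich_time_at_local_noon) * 15.0

print(longitude_degrees(17.0))   # -75.0: roughly New York's longitude
print(longitude_degrees(12.0))   # 0.0: you're on the prime meridian
```

This is why a clock that only drifts minutes per year was worth a fortune to navigators: a four-minute error in the clock is a full degree of error in your position.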

Trouble crops up when someone clever starts asking things like “what is a second?” or “how do we know when a clock is measuring a second?”.  Turns out: if your clock is consistent enough, then you define what a second is in terms of the clock, and then suddenly your clock is both consistent and “correct”.

An atomic clock uses a quantum process.  The word “quantum” (plural “quanta”) basically means the smallest possible unit of something.  The advantage of an atomic clock is that it makes use of the consistency of the universe, and these quanta, to keep consistent time.  Every proton has exactly the same mass, size, charge, etc. as every other proton, but no two pendulums are ever quite the same.  Build an atomic clock anywhere in the universe, and it will always “tick” at the same rate as all of the other atomic clocks.

So, how do we know atomic clocks are consistent?  Get a bunch of different people to build the same kind of clock several times, and then see if they agree with each other.  If they all agree very closely for a very long time, then they’re really consistent.  For example, if you started up a bunch of modern cesium atomic clocks just after the Big Bang, they’d all agree to within about a minute and a half today.  And that’s… well that’s really consistent.
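That “minute and a half” figure is easy to sanity-check.  Assuming a fractional frequency uncertainty of about $2\times10^{-16}$ for a modern cesium fountain clock (a typical published figure, not a number from this post):

```python
# Rough check of the "minute and a half since the Big Bang" claim.
age_of_universe_years = 13.8e9
seconds_elapsed = age_of_universe_years * 365.25 * 24 * 3600
fractional_error = 2e-16   # assumed clock instability, dimensionless

drift_seconds = seconds_elapsed * fractional_error
print(f"drift since the Big Bang: ~{drift_seconds:.0f} seconds")
```

Multiplying the age of the universe by a part-in-$10^{16}$ error does indeed land in the neighborhood of 90 seconds.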

In fact, that’s a lot more consistent than the clock that is the Earth.  The process the Earth repeats is spinning around, and it’s counted by anyone who bothers to wake up in the morning.  It turns out that the length of the day is substantially less consistent than the groups of atomic clocks we have.  Over the lifetime of the Earth, a scant few billion years, the length of the day has slowed down from around 10 hours to 24.  That’s not just inconsistent, that’s basically broken (as far as being a clock is concerned).

Atomic clocks are a far more precise way of keeping track of time than the length of the day (the turning of the Earth).

So today, one second is no longer defined as “there are 86,400 seconds in one Earth-rotation”.  One second is now defined as “the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom”, which is what most atomic clocks measure.

The best clocks today are arguably a little too good.  They’re accurate enough to detect the relativistic effects of walking speed.  I mean, what good is having a super-watch if carrying it around ruins its super-accuracy?  Scientists use them for lots of stuff, like figuring out how fast their neutrino beams are going from place to place.  But the rest of us mostly use atomic clocks for GPS, and even GPS doesn’t require that many.  Unless 64 is “many”.

That all said: If you have one clock, there’s no way to tell if it’s accurate.

If you have two, either they’re both good (which is good), or one or both of them aren’t.  However, there’s no way to know which is which.

With 3 or more clocks, as long as at least a few of them agree very closely, you can finally know which of your clocks are “right”, or at least working properly, and which are broken.
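The three-or-more-clocks idea can be sketched as a tiny voting scheme.  The clock names, readings, and tolerance below are all made up for illustration:

```python
# With three or more clocks, the one that disagrees with the cluster
# is the broken one; compare each reading against the median.
def find_outliers(readings, tolerance):
    """Flag clocks whose reading differs from the median by more than tolerance."""
    ordered = sorted(readings.values())
    median = ordered[len(ordered) // 2]
    return [name for name, t in readings.items()
            if abs(t - median) > tolerance]

clocks = {"A": 1000.001, "B": 1000.002, "C": 997.400}
print(find_outliers(clocks, tolerance=0.5))   # only C is flagged
```

With only two clocks the same scheme can tell you *that* they disagree, but not *which* one is wrong; the majority is what breaks the tie.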

That philosophy is at the heart of science in general.  And why “repeatable science” is so important and “anecdotal evidence” is haughtily disregarded in fancy science circles.  If it can’t be shown repeatedly, it can’t be shown to be a real, consistent, working thing.

## Q: “i” had to be made up to solve the square root of negative one. But doesn’t something new need to be made up for the square root of i?

Physicist: The beauty of complex numbers (numbers that involve $i$) is that the answer to this question is a surprisingly resounding: nopers.

The one thing that needs to be known about $i$ is that, by definition, $i^2=-1$.  Other than that it behaves like any other number or variable.  It turns out that the square root is $\sqrt{i} = \frac{1}{\sqrt{2}}+\frac{i}{\sqrt{2}}$.  You can check this the same way that you can check that 2 is the square root of 4: you square it.

$\begin{array}{ll}\left(\frac{1}{\sqrt{2}}+\frac{i}{\sqrt{2}}\right)^2\\[2mm]=\left(\frac{1}{\sqrt{2}}+\frac{i}{\sqrt{2}}\right)\left(\frac{1}{\sqrt{2}}+\frac{i}{\sqrt{2}}\right)\\[2mm]=\frac{1}{\sqrt{2}}\left(1+i\right)\frac{1}{\sqrt{2}}\left(1+i\right)\\[2mm]=\frac{1}{2}\left(1+i\right)\left(1+i\right)\\[2mm]=\frac{1}{2}\left(1+i+i+i^2\right)\\[2mm]=\frac{1}{2}\left(1+i+i-1\right)\\[2mm]=\frac{1}{2}\left(2i\right)\\=i\end{array}$

And like any other square root, the negative, $-\frac{1}{\sqrt{2}}-\frac{i}{\sqrt{2}}$, is also a solution.  So, $i$ does have a square root, and it’s not even that hard to find it.  No new “super-imaginary” numbers need to be invented.
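Python’s built-in complex numbers make this easy to verify directly:

```python
# Checking the square root of i numerically.
import cmath
import math

root = complex(1 / math.sqrt(2), 1 / math.sqrt(2))

print(root ** 2)           # i, up to floating-point rounding
print((-root) ** 2)        # the negative root squares to i as well
print(cmath.sqrt(1j))      # the library's answer matches the formula
```

Both roots square back to $i$ to within floating-point rounding, and `cmath.sqrt` returns the same value as the closed-form $\frac{1}{\sqrt{2}}+\frac{i}{\sqrt{2}}$.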

This isn’t a coincidence.  The complex numbers are “algebraically closed”, which means that no matter how weird a polynomial is, its roots are always complex numbers.  The square roots of $i$, for example, are the solutions of the polynomial $0 = x^2-i$.  So, any cube root, any Nth root, any power, any combination, any whatever of any complex number: still a complex number.

That hasn’t stopped mathematicians from inventing new and terrible number systems.  They just didn’t need to in this case.

Posted in -- By the Physicist, Math | 11 Comments