Q: What is radioactivity and why is it sometimes dangerous?

Physicist: Here’s every particle you’ve ever interacted with: protons, neutrons, electrons, and photons*.  Dangerous radiation is nothing more mysterious than one of those particles moving crazy fast.

The nuclei of some kinds of atoms are unstable and will, given enough time, re-shuffle and settle into a lower-energy, more stable form.  The instability comes from an imbalance in the numbers of neutrons and protons.

The most common forms of “radioactive decay” are beta+ and beta-, and these happen because a nucleus has either too many protons or too many neutrons.  Beta- is a neutron turning into a proton, an electron, and some extra energy.  Beta+ is a proton turning into a neutron, an anti-electron, and some extra energy.  The protons and neutrons stay in the nucleus, and the new electron or anti-electron takes most of that new energy and flies away.  That fast electron or anti-electron is the radiation.


Tritium is a type of radioactive hydrogen that has one proton and two neutrons.  Occasionally, one of those neutrons will “pop” and turn into another proton and an electron.  The result is a helium-3 atom, and a really fast electron.

Sometimes a neutron is ejected (neutron radiation), usually when the entire nucleus breaks apart (which is nuclear fission).  Neutron radiation is exciting for physicists because neutrons are nice, no-fuss particles.  Without a charge, neutrons are about as close to atomic bullets as you can get.


The most common source of neutron radiation is fission, which generally spits out a few extra neutrons.  This picture is of a controlled reaction (activated by a nearby neutron source).

Finally, an “alpha particle” is sometimes ejected.  Alpha particles are a pair of protons and a pair of neutrons stuck together.  This is the same as a helium nucleus, so it’s basically “high-speed helium”.  “Alpha decay” is why there’s helium on Earth.  The helium that was around during the Earth’s formation found its way to the top of the atmosphere, and from there was knocked into space by solar radiation and wind.  Unlike hydrogen, which can bond to things chemically (the “H” in “H2O”), helium is a noble gas and doesn’t stick to anything.  All the helium that slowly bubbles out of the ground is from the radioactive decay of heavier elements inside of the Earth.  So, when you fill a balloon with helium, you’re literally filling it with what used to be radiation.  Fun fact: that slow dribble of helium doesn’t amount to much, and we’re about to run out.

The most common and dangerous kind of radiation is high-frequency light.  High-energy light is called “x-rays” and above that “gamma-rays”, and it tends to punch through shielding a lot better than the other kinds of radiation (that’s why x-rays can be used to look through things).  Alpha, beta, and neutron radiation are made of matter and they tend to bump into things and slow down.  A couple pieces of paper do a pretty decent job stopping alpha particles, and a few inches of water stop neutron and beta radiation remarkably well.  Gamma rays, on the other hand, are what lead shielding is for.

Radiation is dangerous because it can ionize, which breaks apart chemical bonds.  If that happens enough inside a living cell, the cell dies: with enough broken chemical “parts”, it just stops working.  “Radiation poisoning” is what happens when you’ve suddenly got way too many dead cells in your body, and too many of the cells that remain are too damaged to reproduce.

When you get a sunburn you’re suffering from a little radiation damage.  UV light has enough of a punch to kill cells, which is a big part of why our outer layers of skin are just a bunch of dead skin cells: irradiating dead cells doesn’t do much, so the body keeps them around to protect the living layers underneath.  This is also why really tiny bugs don’t like direct sunlight (they’re smaller than the protective layer they would need).

For those of you worried about radiation: wear sunscreen.  You’re far more likely to be harmed by chemical pollutants.  Other forms of light, like radio waves, microwaves, or even visible light, don’t have enough power in each photon to ionize.  As a result, all they do is heat things up (not blast them apart).  In order for your cell phone to do any damage to you it would have to literally cook your head, as in “increase the temperature until such time as you are dead”.  In that respect, a warm room is far more “dangerous”.


Radiators: far more dangerous than cell phones or radio towers.

Every living thing on the planet has developed at least some ability to deal with low-level radiation, which is unavoidable (some are ridiculously good at dealing with it).  Each cell in your body has error-correcting mechanisms that deal with genetic damage (DNA blown apart by ionizing radiation), and even when a small fraction of your cells die: no problem.  They’re just put in the bloodstream, filtered out, and poo’d.  As it happens, dead red blood cells are a major contributor to the color of poo!  Chances are you’ll remember that particular and unappetizing fact forever, so… sorry.

You’re struck by about 1 particle of ionizing radiation per square cm per second.  More at high altitudes, and more during the day.  By far, the most dangerous source of radiation that you’re likely to come across (outside of a hospital), is the Sun.  Luckily, the Sun is easy to spot, and easy to avoid.  Shade and sunscreen.  Easy.



*There are other particles beyond those four, such as gluons or W bosons or even Higgs bosons, that show up all the time.  But they’re kinda “behind-the-scenes” particles that only show up for insignificant fractions of a second and are virtual.  If you find yourself in a situation where you’re interacting with these rarer particles, then you probably work at CERN and should know better.


Q: How do we know that π never repeats? If we find enough digits, isn’t it possible that it will eventually start repeating?

Physicist: In the physical sciences we catalog information gained through observation (“what’s that?”), then a model is created (“I bet it works like this!”), and then we try to disprove that model by using experiments (“if we’re right, then we should see this weird thing happening”).  In the physical sciences the aim is to disprove, because proofs are always out of reach.  There’s always the possibility that you’re missing something (it’s a big, complicated universe after all).

Mathematics is completely different.  In math (and really, only in math) we have the power to prove things.  The fact that π never repeats isn’t something that we’ve observed, and it’s not something that’s merely “likely” given that no repeating pattern has turned up in the first several trillion digits computed so far.

The digits of pi never repeat because it can be proven that π is an irrational number.

If you write out the decimal expansion of any irrational number (not just π) you’ll find that it never repeats.  There’s nothing particularly special about π in that respect.  So, proving that π never repeats is just a matter of proving that it can’t be a rational number.  Rather than talking vaguely about math, the rest of this post will be a little more direct than the casual reader might normally appreciate.  For those of you who just scrolled down the page and threw up a little, here’s a very short argument (not a proof):

It turns out that \pi = 4\left(1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\frac{1}{9}-\cdots\right).  But this string of numbers includes all of the prime numbers (other than 2) in the denominators, and since there are an infinite number of primes, there should be no common denominator.  That means that π is irrational, and that means that π never repeats.  The difference between an “argument” and a “proof” is that a proof ends debates, whereas an argument just puts folk more at ease (mathematically speaking).  The math-blizzard below is a genuine proof.  First,

Numbers with repeating decimal expansions are always rational.

If a number can be written as the D digit number “N” repeating forever, then it can be expressed as N\times 10^{-D} + N\times 10^{-2D} + N\times 10^{-3D}+\cdots.  For example, when N=123 and D=3:

\begin{array}{ll}0.123123123123123\cdots\\=0.123+0.000123+0.000000123+\cdots\\=123\times 10^{-3} + 123\times 10^{-6} + 123\times 10^{-9}+\cdots\end{array}

Luckily, this can always be figured out exactly using some very old math tricks.  This is just a geometric series, and N\times 10^{-D} + N\times 10^{-2D} + N\times 10^{-3D}+\cdots = N\frac{10^{-D}}{1-10^{-D}} = N\frac{1}{10^{D}-1}.  So for example, 0.123123123123123\cdots = 123\frac{1}{10^3-1} = \frac{123}{999}=\frac{41}{333}.

Even if the decimal starts out a little funny, and then settles down into a pattern, it doesn’t make any difference.  The “funny part” can be treated as a separate rational number.  For example, 5.412123123123123\cdots = 5.289 + 0.123123\cdots = \frac{5289}{1000} + \frac{41}{333}.  And the sum of any rational numbers is always a rational number, so for example, \frac{5289}{1000} + \frac{41}{333} = \frac{5289\cdot333 + 41\cdot1000}{1000\cdot333} = \frac{1802237}{333000}.
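To see the trick in action, here’s a minimal Python sketch (the function name and argument layout are mine, purely for illustration) that turns a “funny part” plus a forever-repeating block into an exact fraction:

```python
# Sketch: a decimal with a non-repeating prefix followed by a D-digit
# block B repeating forever equals prefix + B/(10^D - 1), with the
# repeating part shifted past the prefix's decimal places.
from fractions import Fraction

def repeating_to_fraction(prefix, block):
    # prefix: the non-repeating start, e.g. "5.412"
    # block: the digits that repeat forever after it, e.g. "123"
    decimals = len(prefix.split(".")[1]) if "." in prefix else 0
    funny = Fraction(prefix)                         # the "funny part"
    tail = Fraction(int(block), 10**len(block) - 1)  # 0.BBB... = B/(10^D - 1)
    return funny + tail / 10**decimals               # shift past the prefix

print(repeating_to_fraction("0", "123"))      # 41/333
print(repeating_to_fraction("5.412", "123"))  # 1802237/333000
```

Both outputs match the worked examples above.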

So, if something has a forever-repeating decimal expansion, then it is a rational number.  Equivalently, if something is an irrational number, then it does not have a repeating decimal.  For example,

√2 is an irrational number

So, in order to prove that a number doesn’t repeat forever, you need to prove that it is irrational.  A number is irrational if it cannot be expressed in the form \frac{A}{B}, where A and B are integers.  √2 was the first number shown conclusively to be irrational (about 2500 years ago).  The proof of the irrationality of π is a little tricky, so this part is just to convey the flavor of one of these proofs-of-irrationality.

Assume that \sqrt{2} = \frac{A}{B}, and that A and B have no common factors.  Then it follows that 2B^2 = A^2.  Therefore, A is an even number, since A^2 has a factor of 2 (if A were odd, then A^2 would be odd too).  Since A is even, \frac{A}{2} is an integer, and we can write: 2B^2 = 4\left(\frac{A}{2}\right)^2 and therefore B^2 = 2\left(\frac{A}{2}\right)^2.  But that means that B is an even number, for the same reason.

This is a contradiction, since we assumed that A and B have no common factors.  By the way, if they did have common factors, then we could cancel them out.  No biggie.
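For flavor, here’s a tiny brute-force taste of the same fact in Python (obviously not a substitute for the proof): no fraction A/B with a small denominator squares to exactly 2.

```python
# For each denominator B, the only integer whose square could equal 2*B^2
# is isqrt(2*B^2); check that it never actually works.
from math import isqrt

for B in range(1, 1001):
    A = isqrt(2 * B * B)
    assert A * A != 2 * B * B
print("no A/B with B <= 1000 satisfies A^2 = 2 B^2")
```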

So, √2 is irrational, and therefore its decimal expansion (√2=1.4142135623730950488016887242096980785696…) never repeats.  This isn’t just some experimental observation, it’s an absolute fact.  That’s why it’s useful to prove, rather than just observe, that

π is an irrational number

The earliest known proof of this was written in 1761.  However, what follows is a much simpler proof written in 1946.  Unfortunately, there don’t seem to be any simple, no-calculus proofs floating around, so if you don’t dig calculus and some of the notation from calculus, then you won’t dig this.  Here goes:

Assume that \pi = \frac{a}{b}.  Now define a function f(x) = \frac{x^n(a-bx)^n}{n!}, where n is some positive integer, and that excited n, n!, is “n factorial“.  No problems so far.

All of the derivatives of f(x) taken at x=0 are integers.  This is because f(x) = \frac{x^n(a-bx)^n}{n!} = \sum_{j=0}^n \frac{n!}{j!(n-j)!}\frac{a^{n-j}(-b)^j}{n!} x^{n+j} (by the binomial theorem), which means that the kth derivative is f^{(k)}(x) = \sum_{j=0}^n \frac{n!}{j!(n-j)!}\frac{a^{n-j}(-b)^j}{n!}(n+j)(n+j-1)\cdots(n+j-k+1) x^{n+j-k} = \sum_{j=0}^n \frac{n!}{j!(n-j)!}\frac{a^{n-j}(-b)^j}{n!}\frac{(n+j)!}{(n+j-k)!} x^{n+j-k} = \sum_{j=0}^n a^{n-j}(-b)^j\frac{(n+j)!}{j!(n-j)!(n+j-k)!} x^{n+j-k}

If k<n, then there is no constant term (an x^0 term), so f^{(k)}(0) =0.  If n≤k≤2n, then there is a constant term, but f^{(k)}(0) is still an integer.  The j=k-n term is the constant term, so:

\begin{array}{ll}f^{(k)}(0)=\sum_{j=0}^n a^{n-j}(-b)^j\frac{(n+j)!}{j!(n-j)!(n+j-k)!} 0^{n+j-k}\\[2mm]=a^{2n-k}(-b)^{k-n}\frac{k!}{(k-n)!(2n-k)!0!}\\[2mm]=a^{2n-k}(-b)^{k-n}\frac{k!}{(k-n)!(2n-k)!}\end{array}

a and b are integers already, so their powers are still integers.  \frac{k!}{(k-n)!(2n-k)!} is also an integer since \frac{k!}{(k-n)!(2n-k)!}=\frac{k!}{(k-n)!n!}\frac{n!}{(2n-k)!} = {k \choose n}\frac{n!}{(2n-k)!}.  “k choose n” is always an integer, and \frac{n!}{(2n-k)!} = n(n-1)(n-2)\cdots(2n-k+1), which is just a string of integers multiplied together.
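For the skeptical, here’s a sympy spot-check of that claim (a sanity check, not the proof itself).  The values n = 3 and the candidate fraction a/b = 22/7 are stand-ins chosen purely for illustration; the proof needs the argument to work for every candidate fraction.

```python
# Check that every derivative of f(x) = x^n (a - bx)^n / n! is an integer
# at x = 0 and at x = a/b, for illustrative values n = 3, a = 22, b = 7.
import sympy as sp

x = sp.symbols('x')
n, a, b = 3, 22, 7
f = x**n * (a - b*x)**n / sp.factorial(n)

for k in range(2*n + 3):                  # past k = 2n everything is 0
    dk = sp.diff(f, x, k)
    v0 = dk.subs(x, 0)
    v_ab = dk.subs(x, sp.Rational(a, b))
    assert v0.is_integer and v_ab.is_integer
    print(k, v0, v_ab)
```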

So, the derivatives at zero, f^{(k)}(0), are all integers.  More than that, by symmetry, the derivatives at π, f^{(k)}(\pi), are all integers too.  This is because

\begin{array}{ll}f(\pi-x)=f\left(\frac{a}{b}-x\right)\\[2mm]=\frac{\left(\frac{a}{b}-x\right)^n(a-b\left(\frac{a}{b}-x\right))^n}{n!}\\[2mm]=\frac{\left(\frac{a}{b}-x\right)^n(a-\left(a-bx\right))^n}{n!}\\[2mm]=\frac{\left(\frac{a}{b}-x\right)^n(bx)^n}{n!}\\[2mm]=\frac{\left(\frac{1}{b}\right)^n\left(a-bx\right)^n(bx)^n}{n!}\\[2mm]=\frac{\left(a-bx\right)^n x^n}{n!}\end{array}

This is the same function, so the arguments about the derivatives at x=0 being integers also apply to x=π.  Keep in mind that it is still being assumed that \pi=\frac{a}{b}.

Finally, for k>2n, f^{(k)}(x)=0, because f(x) is a 2n-degree polynomial (so 2n or more derivatives leaves 0).

After all that, now construct a new function, g(x) = f(x)-f^{(2)}(x)+f^{(4)}(x)-\cdots+(-1)^nf^{(2n)}(x).  Notice that g(0) and g(π) are sums of integers, so they are also integers.  Using the usual product rule, and the derivative of sines and cosines, it follows that

\begin{array}{ll}\frac{d}{dx}\left[g^\prime(x)sin(x) - g(x)cos(x)\right]\\[2mm]  = g^{(2)}(x)sin(x)+g^\prime(x)cos(x) - g^{\prime}(x)cos(x)+g(x)sin(x)\\[2mm]  =sin(x)\left[g(x)+g^{(2)}(x)\right]\\[2mm]  =sin(x)\left[\left(f(x)-f^{(2)}(x)+f^{(4)}(x)-\cdots+(-1)^nf^{(2n)}(x)\right)+\left(f^{(2)}(x)-f^{(4)}(x)+f^{(6)}(x)-\cdots+(-1)^nf^{(2n+2)}(x)\right)\right]\\[2mm]  =sin(x)\left[f(x)+\left(f^{(2)}(x)-f^{(2)}(x)\right)+\left(f^{(4)}(x)-f^{(4)}(x)\right)+\cdots+(-1)^n\left(f^{(2n)}(x)-f^{(2n)}(x)\right)+(-1)^nf^{(2n+2)}(x)\right]\\[2mm]  =sin(x)\left[f(x)+(-1)^nf^{(2n+2)}(x)\right]\\[2mm]  =sin(x)f(x)  \end{array}
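Here’s the same kind of sympy sanity check for this telescoping step, with the same illustrative n, a, and b as before:

```python
# Check that d/dx[g'(x)sin(x) - g(x)cos(x)] collapses to f(x)sin(x),
# where g = f - f'' + f'''' - ... as defined above.
import sympy as sp

x = sp.symbols('x')
n, a, b = 3, 22, 7
f = x**n * (a - b*x)**n / sp.factorial(n)
g = sum((-1)**j * sp.diff(f, x, 2*j) for j in range(n + 1))

lhs = sp.diff(sp.diff(g, x) * sp.sin(x) - g * sp.cos(x), x)
print(sp.simplify(lhs - f * sp.sin(x)))   # prints 0
```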

f(x) is positive between 0 and π, since (a-bx)^n>0 when 0<x<\frac{a}{b}=\pi.  Since sin(x)>0 when 0<x<π as well, it follows that 0<\int_0^\pi f(x)sin(x)\,dx.  Finally, using the fundamental theorem of calculus,

\begin{array}{ll}\int_0^\pi f(x)sin(x)\,dx\\[2mm]= \int_0^\pi \frac{d}{dx}\left[g^\prime(x)sin(x) - g(x)cos(x)\right]\,dx\\[2mm] = \left(g^\prime(\pi)sin(\pi) - g(\pi)cos(\pi)\right) - \left(g^\prime(0)sin(0) - g(0)cos(0)\right)\\[2mm]= \left(g^\prime(\pi)(0) - g(\pi)(-1)\right) - \left(g^\prime(0)(0) - g(0)(1)\right)\\[2mm] = g(\pi)+g(0)\end{array}

But this is an integer, and since f(x)sin(x)>0 between 0 and π, this integer is at least 1.  Therefore, \int_0^\pi f(x)sin(x)\,dx\ge1.

But check this out: if 0<x<\pi=\frac{a}{b}, then sin(x)f(x)=sin(x)\frac{x^n(a-bx)^n}{n!}\le\frac{x^n(a-bx)^n}{n!}\le\frac{\pi^n(a-bx)^n}{n!}\le\frac{\pi^na^n}{n!}.

Therefore, \int_0^\pi f(x)sin(x)\,dx<\int_0^\pi \frac{\pi^na^n}{n!}\,dx=\frac{\pi^{n+1}a^n}{n!}.  But here’s the thing: we can choose n to be any positive integer we’d like.  Each one creates a slightly different version of f(x), but everything up to this point works the same for each of them.  While the numerator, \pi(\pi a)^n, grows exponentially fast, the denominator, n!, grows much, much faster for large values of n.  This is because each time n increases by one, the numerator is multiplied by πa (which is always the same), but the denominator is multiplied by n (which keeps getting bigger).  Therefore, for a large enough value of n, we can force this integral to be as small as we like.  In particular, for n large enough, \int_0^\pi f(x)sin(x)\,dx<\frac{\pi^{n+1}a^n}{n!}<1.  Keep in mind that a is assumed to be some definite number, so it can’t “race against n”, which means that this fraction really does shrink toward zero.
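You can watch the race play out numerically.  In this Python sketch a = 22 is a stand-in (as if someone had proposed a fraction with numerator 22): the bound balloons at first, but n! eventually crushes it below 1.

```python
# Track pi^(n+1) * a^n / n! by multiplying in one factor of (pi*a)/n at a
# time: the numerator gains pi*a each step, the denominator gains n.
import math

a = 22                            # illustrative stand-in
bound = math.pi                   # n = 0: pi^1 * a^0 / 0!
for n in range(1, 201):
    bound *= math.pi * a / n
    if n in (1, 10, 50, 100, 150, 200):
        print(n, bound)           # grows for a while, then collapses below 1
```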

Last step!  We can now say that if π can be written as \pi =\frac{a}{b}, then a function, f(x), can be constructed such that \int_0^\pi f(x)sin(x)\,dx\ge 1 and \int_0^\pi f(x)sin(x)\,dx<1.  But that’s a contradiction.  Therefore, π cannot be written as the ratio of two integers, so it must be irrational, and irrational numbers have non-repeating decimal expansions.

Boom.  That’s a proof.

For those of you still reading, it may occur to you to ask “wait… where did the properties of π get into that at all?”.  The proof required that sine and cosine be derivatives of each other, and that’s only true when using radians.  For example, \frac{d}{dx}sin(x) = \frac{\pi}{180}cos(x) when x is in degrees.  So, the proof requires that \frac{d}{dx}sin(x) = cos(x), and that requires that the angle is given in radians.  Radians are defined geometrically so that the angle is described by the length of the arc it traces out, divided by the radius.  Incidentally, this definition is equivalent to declaring the small angle approximation: sin(x)\approx x.


The radian.

This defines the angle of a full circle as 2π radians, and as a result of geometry and the definitions of sine and cosine, sin(π radians) = 0, cos(π radians) = -1, and \frac{d}{dx}sin(x) = cos(x).  That’s enough for the proof!

Subtle, but behind all of that algebra, is a bedrock of geometry.


Q: Why does carbon dating detect when things were alive? How are the atoms in living things any different from the atoms in dead things?

Physicist: As far as carbon dating is concerned, the difference between living things and dead things is that living things eat and breathe and dead things are busy with other stuff, like sitting perfectly still.  Eating and breathing is how fresh 14C (carbon-14) gets into the body.


If you eat recently-living things, then you’re eating fresh carbon-14.

The vast majority of carbon is 12C (carbon-12) which has 6 protons and 6 neutrons (12=6+6).  14C on the other hand has 6 protons and 8 neutrons (14=6+8).  Chemically speaking, those 6 protons are far more important since they are what makes carbon act like carbon (and not oxygen or some other element).  The extra pair of neutrons do two things: they make 14C heavier (by about 17%), and they make it mildly radioactive.  If you have a 14C atom it has a 50% chance of decaying in the next 5730 years (regardless of how old it presently is).  That 5730 year half-life is what allows science folk to figure out how old things are, but it’s also relatively short.

This raises the question: why is there any 14C left?  There have been about 1,000,000 half-lives since the Earth first formed, which means that there should only be about \frac{1}{2^{1000000}} of the original supply left, which even Google considers too small to be worth mentioning.  The answer is that 14C is being continuously produced in the upper atmosphere.

Our atmosphere is brimming over with 14N.  Nitrogen-14 has 7 protons and 7 neutrons, and makes up about four fifths of the air you’re breathing right now.  In addition to all the other reasons for not hanging out at the edge of space, there’s a bunch of high-energy radiation (mostly from the Sun) flying around up there.  Some of this radiation takes the form of free neutrons bouncing around, and when nitrogen-14 absorbs a neutron it sometimes turns into carbon-14 and a spare proton (“spare proton” = “hydrogen”).

This new 14C gets thoroughly mixed into the rest of the atmosphere pretty quickly, and carbon in the atmosphere overwhelmingly appears in the form of carbon dioxide.  It’s here that the brand-new 14C enters the carbon cycle.  Living things use carbon a lot (biochemistry is sometimes called “fun with carbon”) and this carbon enters the food chain through plants, which pull it from the air.  Any living plant you’re likely to come across is mostly made of carbon (and water) that it’s absorbed from the air in the last few years, and any living animal you come across is mostly made of plants (and other animals) that it’s eaten in the last few years.

With the notable exception of the undead, when things die they stop eating or otherwise absorbing carbon.  As a result, the body of something that’s been dead for around 5700 years (the 14C half-life) will have about half as much 14C as the body of something that’s alive.  Nothing to do with being alive per se, but a lot to do with eating stuff.
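The dating arithmetic itself fits in one line.  Here’s a minimal Python sketch (the function and variable names are mine): if a sample holds a fraction p of the carbon-14 that a living thing would have, it stopped trading carbon about 5730·log2(1/p) years ago.

```python
# Years since something left the carbon cycle, from its remaining
# fraction of carbon-14 relative to a living sample.
import math

HALF_LIFE = 5730  # years, carbon-14

def years_since_death(p):
    return HALF_LIFE * math.log2(1 / p)

print(years_since_death(0.5))   # 5730.0  -- one half-life
print(years_since_death(0.25))  # 11460.0 -- two half-lives
```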


Dracula is dead, but still part of the carbon cycle since he eats (or at least drinks).  Therefore, we can expect that carbon dating would read him as “still alive”, since he should have about the same amount of carbon-14 as the people he imbibes.

There are some difficulties with carbon dating.  For example, nuclear tests or unusual solar weather can change the rate of production.  Also, any attempt to measure things that have been dead for more than several half-lives (tens of thousands of years) is subject to a lot of statistical noise.  So you can carbon date woolly mammoths, but definitely not dinosaurs.  Aside from that, carbon dating is a decently accurate way of figuring out how long ago a thing recused itself from the carbon cycle.


Answer Gravy: This was subtle, and would have derailed the flow of the post, but extremely A-type readers may have noticed that adding a neutron to 14N (7 protons, 7 neutrons) leaves 15N (7 protons, 8 neutrons).  But 15N is stable, and will not decay into 14C, or anything else.  So why does the reaction “n+14N → p+14C” happen?  It turns out that nuclear physics is more complicated than you might expect.

The introduced neutron can carry a fair amount of kinetic energy, and this extra energy can sometimes make the nucleus “splash”.  It’s a little like pouring water into a glass.  If you pour the water in slowly, then nothing spills out and the water-in-glass system is stable.  But if you pour the same amount of water into the glass quickly, then some of it is liable to splash out.  Similarly (maybe not that similarly), introducing a fast neutron to a nucleus can have a different result than introducing a slow neutron.

Dealing with complications like this is why physicists love themselves some big computers.


Q: What role does Dark Matter play in the behavior of things inside the solar system?

Physicist: To a stunningly good approximation: zero.

The big difference between dark matter and ordinary matter is that dark matter is “aloof” and doesn’t interact with other stuff.  Instead, it cruises by like “ghost particles”.  Matter on the other hand smacks into itself and clumps together.  The big commonality is that both of them create and are affected by gravity.


If you have a big ball of matter (doesn’t matter what kind), then both ordinary and dark matter will be pulled by its gravity.  However, there’s no reason for the dark matter to ever fall out of orbit since there’s nothing around to stop its motion.  Normal matter tends to “get in its own way”.

In fact, if it weren’t for the gravitational influence of dark matter, we would have no reason to suspect its existence at all.  Because dark matter doesn’t clump it stays really spread out and forms one big, roughly spherical, cloud around the galaxy.  Matter has more of a “big-clump-or-nothing” deal going on.  If you start with a big cloud of ordinary matter, then eventually (it can take a while) you’ll have one or two huge chunks (stars, binary stars, that sort of thing) and the few crumbs that escape tend to end up clumping together themselves (planets, moons, comets, your mom, etc.).  If you feel like impressing people at your next science party, this is called “accretion”.


Any attempt to picture the Sun and nearby stars to scale looks like nothing at all.  This is an attempt where every square is 10 times the size of the previous square (1000 times the volume).  Point is, when ordinary matter concentrates, it really, really, really concentrates.

In the above picture the dark matter is spread out uniformly.  Overall there’s a lot more of it (about 10 times as much, give or take), but here in the solar system the balance is tipped overwhelmingly in favor of ordinary matter.  But more than that, since dark matter is spread evenly (and thinly) all around us, it doesn’t pull in any particular direction.  There’s about the same amount in every direction you point, so there’s very little net pull in any direction.  Until you start considering galactic scales at least.


Ordinary matter clusters in big blobs, so when it pulls it tends to pull in one direction (right).  Dark matter does pull, but it pulls on every particle evenly in every direction, which is a lot like not pulling at all (left).

If you do consider things on a galactic scale (~100,000 lightyears), then there’s more dark matter in the direction of Sagittarius (in June this is overhead around midnight).  Technically, since we’re most of the way to the edge of the galactic disk, and the center of the galaxy is behind the stars in Sagittarius, most of the stuff in the Milky Way is more or less in that direction.  That imbalance makes the Sun and all the other nearby stars (“nearby” = “visible to the eye”) orbit the galaxy, but it also helps Earth and everything else around us do the same.  Astronauts in orbit appear weightless because their ship and their bodies are both orbiting the Earth.  They are both in “free-fall”.  Similarly, the Earth, the Sun, and even everything in our stellar neighborhood are all in free-fall around the galaxy.  So while the preponderance of dark matter in the galaxy does cause the solar system to slowly sweep out a seriously huge circle (the “galactic year” is about 250 million Earth years), it does not cause things in the solar system to move with respect to each other.

Hopefully, dark matter has more tricks than just gravity.  If it has no other way of interacting with stuff, then that makes it really difficult to study.  We can study things like stars, rocks, and puppies because they’re all “strongly interacting”.  Shine light on them?  Sure.  Poke them?  Why not.  But dark matter (whatever it is) is light-proof and poke-proof, and that’s deeply frustrating.


Q: Are some number patterns more or less likely? Are some betting schemes better than others?

Physicist: First, don’t gamble unless you can be sure you won’t get caught cheating or you enjoy losing money.

Games of chance come in two flavors: “completely random” and “not quite completely random”.  It’s not always obvious which is which, and it often barely matters.  A good way to tell the difference is to imagine showing the game as it presently is to Leonard Shelby (that guy who can’t form new memories from Memento).  If after extensive investigation he always has the same advice (“I don’t know, bet on red?”), then the game is memoryless.  “Memoryless” is a genuine fancy math term, and refers to systems where the future results are unaffected by the past results.


Leonard Shelby from Memento.  If a game resets and doesn’t “remember” anything, then there’s no overall pattern, and no way to “outsmart” it.  For these games Leonard is on an equal footing with everyone else.

Say there are some folk playing a really simple game called “guess the number”.  You guess a number, roll a die, and if you guessed right you win.  For all its pomp and glitter, this is essentially what gambling is.  Don’t gamble.

Now say that a few rounds have already been played, and on the fourth round a 3 is rolled.  Lenny would experience that fourth round differently than most other people.


The same series of rounds as seen by someone without memory (top) and as seen by someone with memory (bottom).

Lenny sees a 3 and moves on with his life.  He knows that a 3 is as likely as any other number, so he isn’t surprised.  It’s only those of us burdened with memory who see “patterns” in these random numbers (fun fact: this is called “apophenia”).  Someone who had seen the first rounds churn out a string of 3s might think that the fourth round will be less likely or more likely to be a 3.  However, assuming that the dice are fair, it turns out that Lenny’s intuition is better than ours; the roll of each of the dice is completely independent of all the other rolls.

The chance of getting these four 3s in a row is \left(\frac{1}{6}\right)^4 = \frac{1}{1296}.  That’s clearly pretty unlikely, but it’s exactly as unlikely as every other possible combination.  “1, 2, 3, 4” or “2, 6, 5, 5” or whatever else all show up with the same probability.  There are some subtleties in combinatorics, but as long as you keep track of the order it’s fairly straightforward.  “3, 3, 3, 3” is definitely unlikely, but so is every other possibility.  If the lottery pulled the same number, or a string of consecutive numbers, or some other obvious pattern, it would be surprising, but it would be no more or less likely than any other sequence of numbers.  That said, if it keeps happening, then you may want to explore why.  For example, there may be trickery involved.
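If you’d rather count than trust the combinatorics, here’s a quick brute-force check in Python over all 6^4 = 1296 possible four-roll sequences:

```python
# Every specific sequence of four die rolls appears exactly once among the
# 1296 equally likely outcomes, so "3,3,3,3" is no rarer than "1,2,3,4".
from itertools import product

rolls = list(product(range(1, 7), repeat=4))
print(len(rolls))                 # 1296
print(rolls.count((3, 3, 3, 3)))  # 1
print(rolls.count((1, 2, 3, 4)))  # 1
```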

What we expect to see is what fancy math folk call a “typical sequence”; big jumbles of numbers with no discernible rhyme or reason.  Every string of (fair) rolled dice is equally likely, and while randomly emerging “patterns” will occasionally show up, they don’t change the math and can’t be predicted.  Of course, they do make for better stories.

This is from xkcd.  Clearly.

Games like craps or roulette are memoryless, which means that notions like “hot tables” and “runs” are completely baseless.  On the other hand, games like blackjack are not quite memoryless.  Since the cards are pulled from the same shoe, if you sit and watch the cards for long enough, you can predict which cards will be drawn next slightly better than someone who hasn’t been watching.

Lotteries are also memoryless.  So, assuming the lottery is fair, the only way you can increase your probability of winning is to buy more tickets (but please don’t).  Number order and choice make no difference whatsoever.  Unfortunately, “the lottery is fair” is a big assumption, and it isn’t necessarily true.  Keep in mind that lotteries, like all organized gambling institutions, are not created so that someone will win; they’re created so that everyone will lose.

If you want to win a lottery, far and away the best way to do it is to set one up yourself (which is illegal almost everywhere there are laws).  Not to put too fine a point on it, but people who run big lotteries and casinos are massive ——-s.  Gambling is seriously bad news, pretty much across the board (the owners do well).


Statistically speaking, this is a better use for your money than any form of gambling.

There are much better ways to throw away money than playing the lottery.  Before you think about giving away no-strings-attached money to people who don’t need it, consider trying: “cashfetti” cannons, recreating that scene from Indecent Proposal, money origami, lighting cigars, lining animal pens, breaking chopsticks, and eating it to gain its power.

A lot of folk have written in asking for mathematically-based gambling advice and, details aside, here it is: Don’t.  The only way to win is not to play.


Q: Why does iron kill stars?

Physicist: Every now and again a physicist finds themselves in front of a camera and, either through over-enthusiasm or poor editing, is heard to say something that is “less nuanced” than they may have intended.  “Iron kills stars” is one of the classics.

Just to be clear, if you chuck a bunch of iron into a star, you’ll end up with a lot of vaporized iron that you’ll never get back.  The star itself will do just fine.  The Earth is about 1/3 iron (effectively all of that is in the core), but even if you tossed the entire Earth into the Sun, the most you’d do is upset Al Gore.  Probably a lot.

Stars are always in a balance between their own massive weight, which tries to crush their cores, and the heat generated by fusion reactions in the core, which pushes all that weight back out.  The more the core is crushed, the hotter and denser it gets, which increases the rate of fusion reactions (increases the core’s rate of “explodingness”), which pushes the bulk of the star away from the core again.  As long as there’s “fuel” in the core, any attempt to crush it will result in the core pushing back.

Young stars burn hydrogen, because hydrogen is the easiest element to fuse and also produces the biggest bang.  But hydrogen is the lightest element, which means that older stars end up with a bunch of heavier stuff, like carbon and oxygen and whatnot, cluttering up their cores.  But even that isn’t terribly bad news for the star.  Those new elements can also fuse and produce enough new energy to keep the core from being crushed.  The problem is, when heavier elements fuse they produce less energy than hydrogen did.  So more fuel is needed.  Generally speaking, the heavier the element, the less bang-for-the-buck.

The "nuclear binding energy"

The “nuclear binding energy” of a selection of elements by atomic weight.  The height difference gives a rough idea of how much energy is released by fusion.  Notice that there’s a huge jump between, say, hydrogen (H1) and helium (He4), but a much smaller jump between aluminum (Al27) and iron (Fe56).

Iron is where that slows to a stop.  Iron collecting in the core is like ash collecting in a fire.  It’s not that it somehow actively stops the process, but at the same time: it doesn’t help.  Throw wood on a fire, you get more fire.  Throw ash on a fire, you get hot ash.

So, iron doesn’t kill stars so much as it is a symptom of a star that’s about to be done.  Without fuel, the rest of the star is free to collapse the core without opposition, and generally it does.  When there’s a lot of iron being produced in the core, a star probably only has a few hours (or even just seconds) left to live.

Of course there are elements heavier than iron, and they can undergo fusion as well.  However, rather than producing energy, these elements require additional energy to be created (throwing liquid nitrogen on a fire, maybe?).  That extra energy (which is a lot) isn’t generally available until the outer layers of the star come crushing down on the core.  The energy of all that falling material drives the fusion rate of the remaining lighter elements way, way, way up (supernovas are super for a reason), and also helps power the creation of the elements that make our lives that much more interesting: gold, silver, uranium, lead, mercury, whatever.

There are more than a hundred known elements, and iron is only #26.  Basically, if it’s heavier than iron, it’s from a supernova.  Long story short: iron doesn’t kill stars, but right before a (large) star dies, it is full of buckets of iron.
