Gravity Waves!

Physicist: A few days ago we managed to detect gravity waves for the first time.  Gravity waves were predicted a century ago by Einstein as a consequence of his general theory of relativity.  This success isn’t too surprising from a theoretical standpoint; if your theories are already batting a thousand, then no one is shocked when they bowl yet another field goal for a checkmate.

What is amazing is not that gravity waves exist, but that we’ve managed to detect
them.  The effect is so unimaginably small that it can be overwhelmed by someone
stubbing their toe a mile away or filing their taxes wrong.  Gravity is literally
the geometry of spacetime: very particular, tiny increases and decreases in
distances and durations.  There’s a fairly standard technique for measuring tiny changes in distance: light
is bounced back and forth along two separate paths between mirrors (in this case the
length of those two paths are each 4km) dozens of times.  The light from these two
paths is then brought together and allowed to interfere.  If the difference in the
length of the two paths changes by half a wave length, then instead of destructive
interference we see constructive interference.  The actual path difference is
substantially less than half a wave length, but it’s still detectable.
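If you like, you can see the interference trick in action with a couple lines of Python.  This is an idealized two-beam sketch (real LIGO stacks many bounces and operates near a dark fringe), using the 1064 nm laser wavelength that LIGO actually uses:

```python
import math

def output_intensity(path_difference_m, wavelength_m=1064e-9):
    """Idealized two-beam interference: relative brightness at the output
    port as a function of the difference in the two path lengths.
    1064 nm is the Nd:YAG laser wavelength LIGO uses."""
    phase = 2 * math.pi * path_difference_m / wavelength_m
    return math.cos(phase / 2) ** 2  # 1 = fully constructive, 0 = fully destructive

# Equal path lengths: fully constructive.
print(output_intensity(0.0))           # 1.0
# Paths differ by half a wavelength: flips to fully destructive.
print(output_intensity(1064e-9 / 2))   # ~0.0
```

Shifting the path difference by half a wavelength swings the output all the way from bright to dark, which is why even a change much smaller than that is measurable as a change in brightness.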


A gravity wave detector is a device that very, very carefully measures the difference between the lengths of two long paths.  (Left) The tiny difference is detected by looking at the interference between lasers that travel along each path.  (Right) What the detector in Livingston, LA looks like from above.

When a gravity wave ripples through the Earth, the lengths of the two paths change by about one part in 10^21, which is a tiny fraction of the width of a proton over 4 km.  Keep in mind that the light that’s doing the measuring is bouncing off of mirrors that are made of atoms (each of which is much bigger than a proton) and that those atoms are constantly jiggling, because that’s what any level of heat does to matter.  This level of precision is the most impressive part of this whole accomplishment.  Your heartbeat is currently throwing around the building you’re in by a lot more than a proton’s width.  And yet, despite the fact that literally everything in the world is a source of experiment-ruining noise, LIGO is able to filter all of it out and then go on to detect the ridiculously faint signal of a couple of black holes a fair fraction of the universe away and even sort out details of the event.
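The arithmetic behind that claim is quick to check (using a strain of 10^-21 and a proton width of roughly 1.7×10^-15 m):

```python
strain = 1e-21            # fractional length change caused by the wave
arm_length = 4000         # meters: the length of each LIGO arm
proton_width = 1.7e-15    # meters: roughly the diameter of a proton

displacement = strain * arm_length
print(displacement)                  # 4e-18 m
print(displacement / proton_width)   # ~0.002: a tiny fraction of a proton width
```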

The signal that we’re hearing about now was actually detected in September.  The cause appears to be the merging of two black holes about 1.3 billion lightyears away (which puts the source well outside of our backyard).  These black holes started with masses of around 36 and 29 times the mass of the Sun and after combining left a black hole with a combined mass of about 62 Sun-masses.  Astute second graders will observe that 36+29>62.  This is because gravity waves carry energy.  In this case the final event turned about 3 Suns’ worth of energy into ripples in spacetime that are “loud” enough to literally (albeit very, very slightly) rattle everything in the universe.  So, if we ever contact aliens from the other side of the universe and they also have nerds, then we’ll have something to talk about.  By the way, this signal (unlike so many in physics) has a frequency well within the range of human hearing.  Properly cleaned up, it sounds like this.
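For the curious, that “3 Suns’ worth of energy” converts to joules via E = mc².  A back-of-envelope version in Python:

```python
c = 2.998e8       # speed of light, m/s
m_sun = 1.989e30  # mass of the Sun, kg

# Three solar masses of mass-energy, radiated away as gravity waves.
energy = 3 * m_sun * c**2
print(f"{energy:.2e} J")   # ~5e47 joules
```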

(Top) The signal as detected at the two observatories. The noise is bad enough that without at least two observatories it would be much more difficult to see it. (Middle) The signal as predicted by our understanding of general relativity. (Bottom) The remaining noise after the signal has been subtracted. Notice that it is now fairly constant. (Picture on the bottom) This is a plot of the strength of the signal using color vs. frequency on the vertical axis and time on the horizontal axis.


This is the first direct measurement of gravity waves, but it isn’t the first evidence we’ve seen.  If you have two really heavy masses in orbit around each other, you’ll find that they’ll slowly spiral together.  This is strange because it implies that the masses are losing energy.  But to what?  We first measured this effect with pulsars, which are a kind of neutron star (the next densest things after black holes).  Pulsars are so named because they produce radio pulses that are extremely regular.  You can think of them as giant space clocks.  They’re precise enough that they allow us to figure out exactly how they’re moving using Doppler shifts, and they’ve shown that closely orbiting pairs lose energy in exactly the way we’d expect based on our theoretical understanding of gravity waves.

So what can we use this for?  So far we’ve been able to “hear” black holes merging (several more times since September).  We’re not only detecting the spiraling in, but also the process of the black holes coalescing.  Once they come in contact they briefly form an unshelled-peanut-shaped black hole before assuming a spherical shape.  This process is called the “ring down” and it also creates audible gravity waves that give us information about the behavior of black holes.  But beyond heavy things in tight orbits and ringing black holes, what will we hear?  Short answer: who knows.  If you go out in the woods you’ll hear trees falling over when no one is around and lots of bears shitting, but there’s no telling what else you’ll hear.  The only way to find out is to go out and listen.  As our gravity wave detectors get better and more plentiful we’ll be able to hear fainter and fainter signals.  We can expect to hear lots of black holes merging; not because it’s common, but because it’s loud and the universe is big.  Soon we’ll start hearing things we don’t expect and that’s when the science happens.  It’s nice to have our theories regarding gravity waves proven right, but being right isn’t the point of science.  As long as you’re right, you’re not learning.  It’s all the things we don’t expect that will be the most exciting.

Gravity wave astronomy is only the third way we have of observing the distant universe: light, neutrinos, and now gravity waves.  We didn’t know what we’d find with the first two and it’s fair to say we don’t know what we’ll learn now.  Exciting times.

You can read the paper that announced the achievement here.  And check out the author list: there was more collaboration on this than a Wu Tang album.

Update (6/20/2016): And again!

Posted in -- By the Physicist, Experiments, Physics | 10 Comments

Q: Is it possible to parachute to Earth from orbit?

Physicist: Yes and no, but mostly no.

It’s certainly possible to parachute safely to Earth from the top (or nearly the top) of the atmosphere, but this question isn’t about parachuting from space; it’s about parachuting from orbit.  An orbit isn’t just a matter of being very high, it’s mostly a matter of being very, very fast.

Newton tried to explain orbits in terms of a progressively more and more powerful cannon.


When you throw something it follows a curved path that eventually intersects the surface of the Earth (technically this is already an orbit, it just gets interrupted by stuff in the way).  If you use a cannon, then the curve straightens out a bit but it still intersects the surface of the Earth, just farther away.  With a really, really powerful cannon (or more likely: a rocket) you can get something moving so fast that the curve of its fall matches the curve of the Earth.  When this happens the object is in orbit; a closed loop around the Earth that repeats forever.

You may have noticed that the Earth isn’t terribly curved, so it may seem that you’d need to be moving impossibly fast to follow it.  That’s exactly the case: above the air but near the surface of the Earth you’d need to be moving sideways at about 8km/s.  This is more than 23 times faster than the speed of sound.  Not slow.
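That 8 km/s figure comes straight from setting gravity equal to the centripetal acceleration needed for a circular path.  A quick sanity check in Python (standard values for G and the Earth’s mass and radius):

```python
import math

G = 6.674e-11   # gravitational constant, N m^2 / kg^2
M = 5.972e24    # mass of the Earth, kg
R = 6.371e6     # radius of the Earth, m

def orbital_speed(altitude_m):
    """Circular-orbit speed: gravity supplies the centripetal
    acceleration, so v = sqrt(GM / r)."""
    return math.sqrt(G * M / (R + altitude_m))

v = orbital_speed(0)   # just above the air, near the surface
print(v)               # ~7900 m/s: about 8 km/s
print(v / 343)         # ~23 times the speed of sound at sea level
```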

A) An astronaut in low Earth orbit, who will stay there.
B) A stationary astronaut at the same height, who will hit the ground (hard) in half an hour or so.

This 8 km/s speed corresponds to the slowest, lowest orbit.  Any other orbit either won’t bring you close to the atmosphere or will do so faster (at up to about 11 km/s).  Being the slowest and lowest, these roughly circular “low Earth orbits” are very popular (that is to say: cheap).  Low Earth orbit is probably what you’re imagining when you think of parachuting to the Earth.

Orbits at different heights. In low Earth orbit are the International Space Station, the Hubble space telescope, and most communication satellites.


So here comes the point.  You can go as fast as you want if you’re doing it in space, but when you’re measuring your speed in km per second, air starts to feel like concrete (hot concrete).


The effect of air on a “heat shield” designed to handle it (the bottom of the Apollo 11 crew capsule).  A bag of meat (like a person in a spacesuit) would fare worse.

When an object plows through air at very high speeds it tends to burn, shatter, and shred.  Parachutes are used for most entries and reentries, but not initially; most of the deceleration from orbit is handled by heat shields, which are a cross between parachutes and bricks (or a brick and another kind of brick).  Once enough of a falling object’s speed has been shed by a heat shield (typically slower than sound, but up to a few times faster), it is then safe to deploy an actual parachute.

If you were to jump (fast) out of the International Space Station with the aim of entering the atmosphere and deploying your chute, you’d find it filled in short order then torn to ribbons shortly after.  Like any falling star, you’d find yourself hot, dead, and profoundly luminous.  Like icy meteors, you’d probably flash into steam and air burst before reaching the ground.

The reason you can’t parachute from orbit is simply a matter of engineering.  We haven’t yet created parachutes that can survive being deployed, and then work properly, at speeds above around Mach 2.  At reentry speeds, which are in excess of Mach 23, parachutes just can’t hold up.  However, someday it may be possible.  We know that the accelerations involved are survivable, and there don’t seem to be any fundamental limitations; we just need better materials and techniques.  Also, for at least a little while, a spacesuit capable of handling reentry on its own (before the parachute has had a chance to slow it down) would be nice.

Merely falling from space is probably pretty easy.  The highest jump so far was from 24 miles up.  A jump from space is a mere four times higher.  You’d need a rocket instead of a balloon, but aside from being a silly thing to do, there’s nothing stopping someone from doing it.

Posted in -- By the Physicist, Engineering, Physics | 8 Comments

Q: Why can’t we see the lunar landers from the Apollo missions with the Hubble (or any other) telescope?

Physicist: For about the reason you’d expect: they’re just too damn small and too damn far away.  Nothing fancy.  That’s not to say that we can never get images, just that you need to be a lot closer.  The lunar landers are each about 4 meters across and about 384,400,000 meters away, which makes them about as hard to see as a single coin from a thousand miles away.  You gotta squint.


A picture of the Apollo 17 landing site taken by the Lunar Reconnaissance Orbiter which, as the name implies, was in orbit around the Moon when it took this presumably reconnaissance-related picture.  Those meandering lines are tracks left by a lunar rover.  Click to enlarge.

In fact, a big part of why we (humans) bother to go to the Moon, other planets, and space in general is that photographs from Earth leave a lot to be desired.  In addition to being far from everything else, here on the surface of Earth we’re stuck at the bottom of an ever-moving sea of air.  In exactly the same way that the surface of water scatters light, air makes it difficult for astronomers to practice their dread craft.

Also, not for nothing, telescopes are terrible at retrieving material samples.

The Apollo 17 landing site from even closer.


You and every telescope on Earth (and the Hubble Telescope in low Earth orbit) are all about a quarter million miles from the Moon and the landing sites thereon.  If we ever get around to building something bigger on the Moon, like mines or cities or presidents’ heads, then we shouldn’t have nearly as much trouble seeing it from Earth.


Answer Gravy: It turns out that the best/biggest telescopes we use on Earth today can’t detect things the size and distance of the lunar landers using visible light.  This isn’t due to poor design; the devices we’re using now are, in a word, perfect.  They literally cannot be made appreciably better (at detecting visible light).  The roadblock is more fundamental.

The “resolving power” of a telescope is described in terms of whether or not you can tell the difference between a pair of adjacent points.  If the two points are too close together, then you’ll see them blurred together as one point and they are “not resolved”.  If they’re far enough apart, then you see both points independently.

Whether it uses mirrors or lenses, the resolving power of every telescope is limited by some fundamental constraints determined by the wavelength of the light that’s being observed and by the size of the aperture.


Every point in every image is surrounded by a rapidly diminishing “Airy disk”, a symptom of light being wave-like.  This is only a problem really close to the diffraction limit.  You don’t see these rings when you take a picture with a regular camera because they’re smaller than the individual pixels in the camera’s CCD (by design).

Because light is a wave it experiences “diffraction” which makes it “ooze around corners” and generally end up going in the wrong directions.  But the larger a telescope’s opening, the more the light waves have a chance to interfere in such a way that they propagate in straight lines, which makes for cleaner images where the light ends up more-or-less where it’s supposed to be when it gets to the film or CCD or your retina or whatever.

It turns out that the relationship between the smallest resolvable angle, θ, the wavelength, λ, and the diameter, D, of the aperture is remarkably simple:

\theta \approx 1.22\frac{\lambda}{D}

Visible light has a wavelength of around 0.5 micrometers (about 2,000,000 wavelengths per meter) and the largest visible-spectrum telescopes on Earth are about 10 meters across (Hubble is a more humble 2.4m across).  That means that the absolute best resolution that any of our telescopes can hope to achieve, under absolutely ideal circumstances, is about \theta \approx 1.22\frac{0.5\times10^{-6}}{10} \approx 0.00000006 rad.  Or, for the angle buffs out there, about 0.01 arcseconds.  This doesn’t take into account the scattering due to the atmosphere; we can do a little to combat that from the ground, but our techniques aren’t perfect.
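If you want to play with the numbers yourself, here’s the diffraction limit as a tiny Python function (the function name is just for illustration):

```python
import math

def diffraction_limit_arcsec(wavelength_m, aperture_m):
    """Smallest resolvable angle for a circular aperture
    (the Rayleigh criterion: theta = 1.22 * lambda / D)."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600  # radians -> arcseconds

# A 10 m telescope observing 0.5 micrometer (green) light:
print(diffraction_limit_arcsec(0.5e-6, 10))   # ~0.01 arcseconds
```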

By carefully looking at how the atmosphere distorts a beam shot upwards from the telescope, we can take into account how the atmosphere affects light coming into the telescope from above.

By carefully looking at how the atmosphere distorts a laser beam shot upwards from a telescope on the ground, we can take into account how the atmosphere affects light coming into the telescope from space.

The lunar landers are a little over 4 meters across (seen from above) and are about 384,403,000 meters away.  That means that the landers subtend an angle of about 0.002 arcseconds.  In order to see this from Earth, we’d need a telescope that is, at absolute minimum, about 200 meters across.  If we wanted the image to be more than a single pixel, then we’d need a mirror that’s a few miles across.
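The lander’s angular size is just its width divided by its distance (the small-angle approximation).  Checking in Python:

```python
import math

lander_width = 4.0           # meters
moon_distance = 384_403_000  # meters

theta_rad = lander_width / moon_distance   # small-angle approximation
theta_arcsec = math.degrees(theta_rad) * 3600
print(theta_arcsec)   # ~0.002 arcseconds
```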

So, don’t expect that anytime soon.

Posted in -- By the Physicist, Physics | 9 Comments

Q: How bad would it be if we accidentally made a black hole?

Physicist: Not too bad!  Any black hole that humanity might ever create is very unlikely to harm anyone who doesn’t try to eat it.

Black holes do two things that make them (potentially) dangerous: they eat and they pop.  For the black holes we might reasonably create on Earth, neither of these is a problem.

Home-grown black holes: not a serious concern.


The recipe for black holes is literally the simplest recipe possible; it’s “get a bunch of stuff and put it somewhere”.  In practice, you need at least 3.8 Suns’ worth of stuff and the somewhere is anywhere smaller than a few dozen km across.  That last bit is important: the defining characteristic of black holes isn’t their mass, it’s their density.
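The “smaller than a few dozen km” comes from the Schwarzschild radius, r = 2GM/c².  A quick check in Python (standard constants, and the mass of the Sun):

```python
G = 6.674e-11    # gravitational constant, N m^2 / kg^2
c = 2.998e8      # speed of light, m/s
m_sun = 1.989e30 # mass of the Sun, kg

def schwarzschild_radius(mass_kg):
    """Radius below which a given mass collapses into a black hole:
    r = 2GM/c^2."""
    return 2 * G * mass_kg / c**2

# Squeeze 3.8 Suns' worth of stuff inside this radius and you get a black hole:
print(schwarzschild_radius(3.8 * m_sun))   # ~11,000 m: a couple dozen km across
```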


For a given amount of mass the same amount of gravity “flows” through every containing surface.  In this picture, the same total gravity points through both outer surfaces if they contain the same total mass.  But if all of the mass is concentrated in a tiny place (as it is on the right), then the gravity through the smaller surface must be stronger in order to equal the weaker gravity through the larger surface.  Fun fact: this can be used to derive the inverse square law of gravitation and/or is a consequence of it.

If you’re any given distance away from a conglomeration of matter, it doesn’t make much difference how that matter is arranged.  For example, if the Sun were to collapse into a black hole (it won’t), all of the planets would continue to orbit around it in exactly the same way (just colder).  The gravitational pull doesn’t start getting “black-hole-ish” until you’re well beyond where the surface of the Sun used to be.  Conversely, if the Sun were to swell up and become huge (it probably will), then all of the planets will continue to orbit it in exactly the same way (just hotter).

To create a new black hole here on Earth, we’d probably use a particle accelerator to slam particles together and (fingers crossed) get the density of energy and matter in one extremely small region high enough to collapse.  This is wildly unreasonable.  But even if we managed to pull it off, the resulting black hole wouldn’t suddenly start pulling things in any more than the original matter and energy did.

For comparison, if you were to collapse Mt. Everest into a black hole it would be no more than a few atoms across.  Its gravity would be as strong as the gravity on Earth’s surface within around 10 meters.  If you stood right next to it you’d be in trouble, but you wouldn’t fall in if you gave it a wide berth.  In fact, that’s why mountain climbers aren’t particularly bothered by Everest’s mass; even if you’re literally standing on it, you can’t get within more than a few km of most of its mass (fundamentally, Mt. Everest is a big, spread out pile of stuff).

But the amount of material used in particle accelerators (or any laboratory for that matter) is substantially less than the mass of Everest.  They’re “particle accelerators” after all, not “really-big-piles-of-stuff accelerators”.  The proton beams at the LHC have a mass of about 0.5 nanograms and when moving at full speed have a “relativistic mass” of about 4 micrograms (because they carry about 7500 times as much kinetic energy as mass).  4 micrograms doesn’t have a scary amount of gravity, and if you turn that into a black hole, it still doesn’t.  A black hole that small probably wouldn’t even be able to eat individual atoms.  “Probably” because we’ve never seen a black hole anywhere near that small.
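Those beam numbers hang together: the 7500 factor is just the beam energy per proton divided by the proton’s rest energy.  A quick check in Python:

```python
proton_rest_energy_eV = 938.272e6  # rest mass energy of a proton, ~938 MeV
beam_energy_eV = 7e12              # design energy per proton at the LHC, 7 TeV

# Lorentz factor: total energy divided by rest energy.
gamma = beam_energy_eV / proton_rest_energy_eV
print(gamma)   # ~7500

rest_mass_g = 0.5e-9   # ~0.5 nanograms of protons per beam
print(gamma * rest_mass_g)   # ~4e-6 g: the "relativistic mass" of the beam
```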

The other thing that black holes do is “pop”.  Black holes emit Hawking radiation.  We haven’t measured it directly, but there are some good theoretical reasons to think that it’s a thing.  Paradoxically, the smaller a black hole is, the more it radiates.  “Natural” black holes in space (that are as massive as stars) radiate so little that they’re completely undetectable (hence the name: black hole).  The itty-bitty black holes we might create would radiate so fast that they’d be exploding (explosion = energy released fast).  The absolute worst case scenario at CERN (where all of the 115 billion protons in each of the at-most 2,808 bunches moving at full speed are all piled up in the same tiny black hole) would be a “pop” with the energy of a few hundred sticks of dynamite.
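The “few hundred sticks of dynamite” is just the total kinetic energy of one beam.  Back-of-envelope, in Python (taking a stick of dynamite to be roughly a megajoule):

```python
eV = 1.602e-19              # joules per electron-volt
protons_per_bunch = 1.15e11
bunches = 2808
energy_per_proton_eV = 7e12  # 7 TeV per proton

total_J = protons_per_bunch * bunches * energy_per_proton_eV * eV
print(total_J)   # ~3.6e8 J in one beam

dynamite_stick_J = 1e6   # roughly a megajoule per stick
print(total_J / dynamite_stick_J)   # a few hundred sticks
```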

That’s a good sized boom, but not world ending.  More to the point: this is exactly the same amount of energy that was put into the beams in the first place.  This boom isn’t the worst case scenario for black holes, it’s the worst case scenario for the LHC in general (cave-ins and eldritch horrors notwithstanding).  It is this “pop” that would make a tiny black hole a hazard.  The gravitational pull of a few micrograms of matter, regardless of how it is arranged, is never dangerous; you wouldn’t get pulled inside out if you ate it.  However, you wouldn’t get the chance, since any black hole that we could reasonably create would already be mid-explosion.

A black hole with a mass of a few million tons would blaze with Hawking radiation so brightly that you wouldn’t want it on the ground or even in low orbit.  It would be “stable” in that it wouldn’t just explode and disappear.  This is one method that science fiction authors use for powering their amazing fictional scientific devices.

The kind of black holes that we might imagine, that are cold (colder than the Sun at least), stable, and happily absorbing material, have a mass comparable to a continent at minimum.  Even then, it would be no more than a couple millimeters across.  These wouldn’t be popping or burning things with Hawking radiation.  The real danger of a black hole of this size isn’t the black hole itself, so much as the process of creating them (listen, I’m making a black hole, so I need to crush all of Australia into a singularity real quick).

We have no way, even in theory, to compress a mountain of material into a volume the size of a virus.  Nature compresses matter into black holes by parking a star on it.  That seems to be far and away the best option, so if we want to create black holes the “easiest” way may be to collect some stars and throw them in a pile.  But by the time you’re running around grabbing stars, you may as well just find an unclaimed black hole in space and take credit for it.

Posted in -- By the Physicist, Physics | 21 Comments

Q: What if gravity acted like magnetism?

Physicist: The problem with magnetism and the electric force is that they tend to cancel themselves out.  For example, if you have a positive charge the first thing it does is repel all the other positive charges around it and attract all the negative charges.  In short order you end up with a positive and negative charge right next to each other, pulling and pushing on every other charge with the same force (however much the positive charge pulls, the negative charge next to it pushes and vice versa).


(Top) Like charges repel and unlike charges attract. (Middle) A pair of opposite charges will tend to grab onto each other, but this pair pulls as much as it pushes on other nearby charges. (Bottom) The result is that the effect of the charges cancels out and we’re left with “electrically neutral” matter.

Same thing with magnets: if you have two bar magnets floating around, they’ll try to line up with their north side next to the other magnet’s south side.

As a result these “positive/negative” forces tend to balance out really fast.  There are “dipole forces” (one charge might be a little closer, so it pulls just a skosh harder), but dipole forces are tiny and decrease much faster with distance (technically, all magnets are dipoles).  In your body right now you have somewhere in the neighborhood of 10^28 or 10^29 (between ten and a hundred thousand trillion trillion) charged particles in the form of protons and electrons.  The number of extra, unbalanced charges on a good Van de Graaff generator that’s dangerous to approach is less than a billionth of a billionth of that.
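That 10^28-ish count is easy to reproduce: divide a body’s mass by the mass of a nucleon.  In Python:

```python
body_mass_kg = 70           # roughly a person
nucleon_mass_kg = 1.67e-27  # mass of a proton (or neutron)

nucleons = body_mass_kg / nucleon_mass_kg
# About half of those nucleons are protons, each paired with an electron,
# so the count of charged particles is the same order of magnitude.
print(f"{nucleons:.1e}")   # ~4e28
```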


A slight imbalance of charge.  The ratio of positive to negative charges here is on the order of 1 to 1.0000000000000000001.

Point is: with magnets and charges you always have a problem with things canceling themselves out almost perfectly.  The strength of the electric force between (for example) two protons is just a hell of a lot stronger than the gravitational force (about 1,000,000,000,000,000,000,000,000,000,000,000,000 times bigger), but you’d never know it since those huge forces are all balanced and cancelled out by all of the negative charges around.
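That absurd number is the ratio of the Coulomb force to the gravitational force between two protons; since both forces fall off as 1/r², the distance cancels out entirely.  In Python:

```python
k = 8.988e9      # Coulomb constant, N m^2 / C^2
G = 6.674e-11    # gravitational constant, N m^2 / kg^2
e = 1.602e-19    # charge of a proton, C
m_p = 1.673e-27  # mass of a proton, kg

# Both forces go as 1/r^2, so the r's cancel and the ratio is
# the same at every distance.
ratio = (k * e**2) / (G * m_p**2)
print(f"{ratio:.1e}")   # ~1e36
```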

Gravity, on the other hand, has only one kind of “charge”: matter.  All matter attracts all matter, so despite being far and away the weakest force, gravity is basically the last man standing on large scales.  You might imagine that if gravity acted like magnetism there would be planets and stars pushing and pulling each other every which way, but in all likelihood we just wouldn’t have large structures in the universe like planets in the first place.

Posted in -- By the Physicist, Physics | 13 Comments

Q: When you write a fraction with a prime denominator in decimal form it repeats every p-1 digits. Why?

The original question was: How come the length of the repetend for some fractions (e.g. having a prime number p as a denominator) is equal to p-1?


Physicist: The question is about the fact that if you type a fraction into a calculator, the decimal that comes out repeats.  But it repeats in a very particular way.  For example,

\frac{1}{7} = 0.\underbrace{142857}_{repetend}142857142857\ldots

7 is a prime number and (you can check this) all fractions with a denominator of 7 repeat every 7-1=6 digits (even if it does so trivially with “000000”).  The trick to understanding why this happens in general is to look really hard at how division works.  That is to say: just do long division and see what happens.

When we say that \frac{1}{7} = 0.142857\ldots, what we mean is \frac{1}{7} = 0 + \frac{1}{10} + \frac{4}{10^2} + \frac{2}{10^3}+ \frac{8}{10^4}+ \frac{5}{10^5}+ \frac{7}{10^6}\ldots.  With that in mind, here’s why \frac{1}{7} = 0.142857\ldots.

\begin{array}{ll}  \frac{1}{7} \\[2mm]  = \frac{1}{10}\frac{10}{7} \\[2mm]  = \frac{1}{10} + \frac{1}{10}\frac{3}{7} \\[2mm]  = \frac{1}{10} + \frac{1}{10^2}\frac{30}{7} \\[2mm]  = \frac{1}{10} + \frac{4}{10^2} + \frac{1}{10^2}\frac{2}{7} \\[2mm]  = \frac{1}{10} + \frac{4}{10^2} + \frac{1}{10^3}\frac{20}{7} \\[2mm]  = \frac{1}{10} + \frac{4}{10^2} + \frac{2}{10^3} + \frac{1}{10^3}\frac{6}{7} \\[2mm]  \end{array}

and so on forever.  You’ll notice that the same thing is done to the numerator over and over: multiply by 10, divide by 7, the quotient is the digit in the decimal and the remainder gets carried to the next step, multiply by 10, ….  The remainder that gets carried from one step to the next is just \left[10^k\right]_7.
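The whole procedure is easy to automate.  Here’s the long division above as a short Python function (the function name is just for illustration); notice that the remainders it carries along are exactly the [10^k]_7 sequence:

```python
def decimal_expansion(numerator, denominator, digits=12):
    """Long division exactly as described above: at each step the remainder
    is multiplied by 10, the quotient becomes the next decimal digit, and
    the new remainder gets carried to the next step."""
    remainder = numerator % denominator
    out_digits, remainders = [], []
    for _ in range(digits):
        remainders.append(remainder)
        remainder *= 10
        out_digits.append(remainder // denominator)
        remainder %= denominator
    return out_digits, remainders

digits, remainders = decimal_expansion(1, 7)
print(digits)      # [1, 4, 2, 8, 5, 7, 1, 4, 2, 8, 5, 7]
print(remainders)  # [1, 3, 2, 6, 4, 5, 1, 3, 2, 6, 4, 5]  <- the [10^k]_7 sequence
```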

Quick aside: If you’re not familiar with modular arithmetic, there’s an old post here that has lots of examples (and a shallower learning curve).  The bracket notation I’m using here isn’t standard, just better.  “[4]_3” should be read “4 mod 3”.  And because the remainder of 4 divided by 3 and the remainder of 1 divided by 3 are both 1, we can say “[4]_3=[1]_3“.

\begin{array}{l|l}\frac{1}{7}&[1]_7\\[2mm]=\frac{1}{10}\frac{10}{7}&[10]_7\\[2mm]=\frac{1}{10}+\frac{1}{10}\frac{3}{7}&[10]_7=[3]_7\\[2mm]=\frac{1}{10}+\frac{1}{10^2}\frac{30}{7}&[10^2]_7=[30]_7\\[2mm]=\frac{1}{10}+\frac{4}{10^2}+\frac{1}{10^2}\frac{2}{7}&[10^2]_7=[2]_7\\[2mm]=\frac{1}{10}+\frac{4}{10^2}+\frac{1}{10^3}\frac{20}{7}&[10^3]_7=[20]_7\\[2mm]=\frac{1}{10}+\frac{4}{10^2}+\frac{2}{10^3}+\frac{1}{10^3}\frac{6}{7}&[10^3]_7=[6]_7\\[2mm]  \end{array}

These aren’t the numbers that end up in the decimal expansion, they’re the remainder left over when you stop calculating the decimal expansion at any point.  What’s important about these numbers is that they each determine the next number in the decimal expansion, and they repeat every 6.

\begin{array}{ll}  [1]_7=1\\[2mm]  [10]_7=3\\[2mm]  [10^2]_7=2\\[2mm]  [10^3]_7=6\\[2mm]  [10^4]_7=4\\[2mm]  [10^5]_7=5\\[2mm]  [10^6]_7=1\end{array}

After this it repeats because, for example, [10^9]_7 = [10^3\cdot10^6]_7 = [10^3\cdot1]_7 = [10^3]_7.  If you want to change the numerator to, say, 4, then very little changes:

\begin{array}{l|l}\frac{4}{7}&[4]_7\\[2mm]=\frac{5}{10}+\frac{1}{10}\frac{5}{7}&[4\cdot10]_7=[5]_7\\[2mm]=\frac{5}{10}+\frac{7}{10^2}+\frac{1}{10^2}\frac{1}{7}&[4\cdot10^2]_7=[1]_7\\[2mm]=\frac{5}{10}+\frac{7}{10^2}+\frac{1}{10^3}+\frac{1}{10^3}\frac{3}{7}&[4\cdot10^3]_7=[3]_7\\[2mm]=\frac{5}{10}+\frac{7}{10^2}+\frac{1}{10^3}+\frac{4}{10^4}+\frac{1}{10^4}\frac{2}{7}&[4\cdot10^4]_7=[2]_7\\[2mm]=\frac{5}{10}+\frac{7}{10^2}+\frac{1}{10^3}+\frac{4}{10^4}+\frac{2}{10^5}+\frac{1}{10^5}\frac{6}{7}&[4\cdot10^5]_7=[6]_7\\[2mm]=\frac{5}{10}+\frac{7}{10^2}+\frac{1}{10^3}+\frac{4}{10^4}+\frac{2}{10^5}+\frac{8}{10^6}+\frac{1}{10^6}\frac{4}{7}&[4\cdot10^6]_7=[4]_7\\[2mm]\end{array}

So the important bit to look at is the remainder after each step.  More generally, the question of why a decimal expansion repeats can now be seen as the question of why [10^k]_P repeats every P-1, when P is prime.  For example, for \frac{2}{3} we’d be looking at [2\cdot10^k]_3 and for \frac{30}{11} we’d be looking at [30\cdot10^k]_{11}.  The “10” comes from the fact that we use a base 10 number system, but that’s not written in stone either (much love to my base 20 Mayan brothers and sisters.  Biix a beele’ex, y’all?).

It turns out that when the number in the denominator, M, is coprime to 10 (has no factors of 2 or 5), then the numbers generated by successive powers of ten (mod M) are always also coprime to M.  In the examples above M=7 and the powers of 10 generated {1,2,3,4,5,6} (in a scrambled order).  The number of numbers less than M that are coprime to M (have no factors in common with M) is denoted by ϕ(M), the “Euler phi of M”. For example, ϕ(9)=6, since {1,2,4,5,7,8} are all coprime to 9.  For a prime number, P, every number less than that number is coprime to it, so ϕ(P)=P-1.

When you find the decimal expansion of a fraction, you’re calculating successive powers of ten and taking the mod.  As long as 10 is coprime to the denominator, this generates numbers that are also coprime to the denominator.  If the denominator is prime, there are P-1 of these.  More generally, if the denominator is M, there are ϕ(M) of them.  For example, \frac{5}{21}=0.\underbrace{238095238095}238095238095\ldots, which repeats every 12 because ϕ(21)=12.  It also repeats every 6, but that doesn’t change the “every 12” thing.
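You can check the ϕ(M) claim directly.  The sketch below computes ϕ by brute force and finds the true period as the smallest power of 10 that equals 1 mod M (the function names are just for illustration):

```python
from math import gcd

def phi(M):
    """Euler's phi: how many numbers in 1..M-1 are coprime to M."""
    return sum(1 for k in range(1, M) if gcd(k, M) == 1)

def repetend_length(M):
    """Smallest L with 10^L = 1 (mod M): the length of the repeating
    block of 1/M.  Assumes M is coprime to 10."""
    power, L = 10 % M, 1
    while power != 1:
        power = (power * 10) % M
        L += 1
    return L

print(phi(7), repetend_length(7))    # 6 6
print(phi(21), repetend_length(21))  # 12 6  <- the true period divides phi(M)
```

For a prime denominator the period is either ϕ(P)=P-1 or a divisor of it, which is exactly the "repeats every p-1 digits" behavior (possibly repeating even sooner within that block).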

The answer gravy below covers why the powers of ten must either hit every one of the ϕ(M) coprime numbers or exactly some whole fraction of them (\frac{\phi(M)}{2}, or \frac{\phi(M)}{3}, or …), which forces the decimal to repeat every ϕ(M) digits.


Answer Gravy: Here’s where the number theory steps in.  The best way to describe, in extreme generalization, what’s going on is to use “groups”.  A group is a set of things and an operation, with four properties: closure, inverses, identity, and associativity.

In this case the set of numbers we’re looking at are the numbers coprime to M, mod M.  If M=7, then our group is {1,2,3,4,5,6} with multiplication as the operator.  This group is denoted “\mathbb{Z}_7^\times”.

The numbers coprime to M are “closed” under multiplication, which means that if a\in\mathbb{Z}_7^\times and b\in\mathbb{Z}_7^\times, then a\cdot b\in\mathbb{Z}_7^\times.  This is because if you multiply two numbers with no factors in common with M, then you’ll get a new number with no factors in common with M.  For example, [3\cdot4]_7=[12]_7=[5]_7.  No 7’s in sight (other than the mod, which is 7).

The numbers coprime to M have inverses.  This is a consequence of Bézout’s lemma (proof in the link), which says that if a and M are coprime, then there are integers x and y such that xa+yM=1, with x coprime to M and y coprime to a.  Writing that using modular math, if a and M are coprime, then there exists an x such that [xa]_M=[1]_M.  For example, [1\cdot1]_7=[1]_7, [2\cdot4]_7=[1]_7, [3\cdot5]_7=[1]_7, and [6\cdot6]_7=[1]_7.  Here we’d write [3^{-1}]_7=[5]_7, which means “the inverse of 3 is 5”.
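Bézout’s lemma is exactly what the extended Euclidean algorithm computes, so these inverses can be found mechanically. A sketch (the helper names `bezout` and `mod_inverse` are made up for illustration):

```python
def bezout(a, b):
    """Extended Euclid: returns (g, x, y) with x*a + y*b == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = bezout(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, M):
    """Inverse of a mod M when gcd(a, M) == 1, via Bezout: x*a + y*M = 1."""
    g, x, _ = bezout(a, M)
    assert g == 1, "a and M must be coprime"
    return x % M

print(mod_inverse(3, 7))  # 5, since [3*5]_7 = [15]_7 = [1]_7
print(mod_inverse(6, 7))  # 6, since [6*6]_7 = [36]_7 = [1]_7
```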

The numbers coprime to M have an identity element.  The identity element is the thing that doesn’t change any of the other elements.  In this case the identity is 1, because 1\cdot x=x in general.  1 is coprime to everything (it has no factors), so 1 is always in \mathbb{Z}_M^\times regardless of what M is.

Finally, the numbers coprime to M are associative, which means that (ab)c=a(bc).  This is because multiplication is associative.  No biggy.

 

So \mathbb{Z}_M^\times, the set of numbers (mod M) coprime to M, form a group under multiplication.  Exciting stuff.

But what we’re really interested in are “cyclic subgroups”.  “Cyclic groups” are generated by the same number raised to higher and higher powers.  For example in mod 7, {3^1,3^2,3^3,3^4,3^5,3^6}={3,2,6,4,5,1} is a cyclic group.  In fact, this is \mathbb{Z}_7^\times.  On the other hand, {2^1,2^2,2^3}={2,4,1} is a cyclic subgroup of \mathbb{Z}_7^\times.  A subgroup has all of the properties of a group itself (closure, inverses, identity, and associativity), but it’s a subset of a larger group.

In general, {a^1,a^2,…,a^r} is always a group and, when a is coprime to M, a subgroup of \mathbb{Z}_M^\times.  The “r” there is called the “order of the group”, and it is the smallest number such that [a^r]_M=[1]_M.
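Generating a cyclic subgroup is just repeated multiplication mod M until the powers cycle back around. A quick sketch (assuming a is coprime to M, so that 1 eventually shows up; `cyclic_subgroup` is an illustrative name):

```python
def cyclic_subgroup(a, M):
    """The elements a^1, a^2, ... (mod M), in order, until they repeat.
    Assumes gcd(a, M) == 1, so the cycle closes at 1."""
    elements, power = [], a % M
    while power not in elements:
        elements.append(power)
        power = (power * a) % M
    return elements

print(cyclic_subgroup(3, 7))  # [3, 2, 6, 4, 5, 1] -- all of Z_7^x
print(cyclic_subgroup(2, 7))  # [2, 4, 1] -- a subgroup of order 3
```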

Cyclic groups are closed because [a^x\cdot a^y]_M=[a^{x+y}]_M.

Cyclic groups contain the identity.  There are only a finite number of elements in the full group, \mathbb{Z}_M^\times, so eventually different powers of a will be the same.  Therefore,

\begin{array}{ll}    [a^x]_M=[a^y]_M \\[2mm]    \Rightarrow[a^x]_M=[a^xa^{y-x}]_M \\[2mm]    \Rightarrow[(a^x)^{-1}a^x]_M=[(a^x)^{-1}a^xa^{y-x}]_M \\[2mm]    \Rightarrow[1]_M=[a^{y-x}]_M    \end{array}

That is to say, if you get the same value for different powers, then the difference between those powers is the identity.  For example, [3^2]_7=[2]_7=[3^8]_7 and it’s no coincidence that [3^{8-2}]_7=[3^6]_7=[1]_7.

Cyclic groups contain inverses.  There is an r such that [a^r]_M=[1]_M.  It follows that [ba^x]_M=[1]_M\Rightarrow[ba^x]_M=[a^r]_M\Rightarrow[b]_M=[a^{r-x}]_M.  So, [\left(a^x\right)^{-1}]_M=[a^{r-x}]_M.

And cyclic subgroups have associativity.  Yet again: no biggy, that’s just how multiplication works.

 

It turns out that the number of elements in a subgroup always divides the number of elements in the group as a whole.  For example, \mathbb{Z}_7^\times={1,2,3,4,5,6} is a group with 6 elements, and the cyclic subgroup generated by 2, {1,2,4}, has 3 elements.  But check it: 3 divides 6.  This is Lagrange’s Theorem.  It comes about because cosets (which you get by multiplying every element in a subgroup by the same number) are always the same size and are always distinct.  For example (again in mod 7),

\begin{array}{rl}    1\cdot\{1,2,4\} & = \{1,2,4\} \\    2\cdot\{1,2,4\} & = \{2,4,1\} \\    3\cdot\{1,2,4\} & = \{3,6,5\} \\    4\cdot\{1,2,4\} & = \{4,1,2\} \\    5\cdot\{1,2,4\} & = \{5,3,6\} \\    6\cdot\{1,2,4\} & = \{6,5,3\} \\    \end{array}

The cosets here are {1,2,4} and {3,5,6}.  They’re the same size, they’re distinct, and together they hit every element in \mathbb{Z}_7^\times.  The cosets of any given subgroup are always the same size as the subgroup, always distinct (no shared elements), and always hit every element of the larger group.  This means that if the subgroup has S elements, there are C cosets, and the group as a whole has G elements, then SC=G.  Therefore, in general, the number of elements in a subgroup divides the number of elements in the whole group.
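This coset bookkeeping is easy to replicate by brute force. A sketch for the subgroup {1,2,4} inside \mathbb{Z}_7^\times:

```python
M = 7
subgroup = [1, 2, 4]          # cyclic subgroup generated by 2 in Z_7^x

# Collect every coset g*{1,2,4} for g in Z_7^x; duplicates collapse
# because a set of frozensets keeps only distinct cosets.
cosets = set()
for g in range(1, M):
    coset = frozenset((g * s) % M for s in subgroup)
    cosets.add(coset)

print(sorted(sorted(c) for c in cosets))  # [[1, 2, 4], [3, 5, 6]]
```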

 

To sum up:

In order to calculate a decimal expansion (in base 10) you raise 10 to higher and higher powers and divide by the denominator, M.  The quotient gives the next digit in the decimal and the remainder is carried on to the next step.  The remainder is what the “mod” operation yields.  This leads us to consider the group \mathbb{Z}_M^\times, the numbers coprime to M under multiplication mod M (the not-coprime case will be considered in a damn minute).  \mathbb{Z}_M^\times has exactly ϕ(M) elements.  The powers of 10 form a “cyclic subgroup”, and the number of elements in that cyclic subgroup must divide ϕ(M), by Lagrange’s theorem.

If P is prime, then ϕ(P)=P-1, and therefore if the denominator is prime the length of the cycle of digits in the decimal expansion (which is dictated by the cyclic subgroup generated by 10) must divide P-1.  That is, the decimal repeats every P-1, but it might also repeat every \frac{P-1}{2} or \frac{P-1}{3} or whatever.  You can also calculate ϕ(M) for M not prime, and the same idea holds.
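This divisibility is easy to spot-check numerically. A sketch that computes the decimal period (the multiplicative order of 10) for a few prime denominators:

```python
def period(M):
    """Multiplicative order of 10 mod M, for M > 1 coprime to 10."""
    r, p = 1, 10 % M
    while p != 1:
        p = (p * 10) % M
        r += 1
    return r

for P in [7, 11, 13, 17, 19, 23]:
    r = period(P)
    print(P, r, (P - 1) % r == 0)   # the period always divides P - 1
```

For example, 1/11 = 0.090909… has period 2, and 2 divides 10; 1/13 = 0.076923… has period 6, and 6 divides 12.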


Deep Gravy:

Finally, if the denominator is not coprime to 10 (e.g., 3/5, 1/2, 1/14, 71/15, etc.), then things get a little screwed up.  If the denominator is nothing but factors of 10, then the decimal is always finite.  For example, \frac{1}{8}=0.125\underbrace{0}_{repetend}000000.

\begin{array}{l|l}    \frac{1}{8}&[1]_8\\[2mm]    =\frac{1}{10}+\frac{1}{10}\frac{2}{8}&[10]_8=[2]_8\\[2mm]    =\frac{1}{10}+\frac{2}{10^2}+\frac{1}{10^2}\frac{4}{8}&[10^2]_8=[4]_8\\[2mm]    =\frac{1}{10}+\frac{2}{10^2}+\frac{5}{10^3}&[10^3]_8=[0]_8\\[2mm]    \end{array}

In general, if the denominator has powers of 2 or 5, then the resulting decimal will be a little messy for the first few digits (as many digits as the larger of the two powers; for example, 8=2^3 gives three), and after that it will follow the rules for the part of the denominator coprime to 10.  For example, 28=2^2\cdot7.  So, we can expect that after two digits the decimal expansion will settle into a nice six-digit repetend (because ϕ(7)=6).

Fortunately, the system works: \frac{1}{28}=0.03\underbrace{571428}571428\ldots
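Ordinary long division reproduces this. A sketch that churns out decimal digits one remainder at a time (`decimal_digits` is an illustrative name):

```python
def decimal_digits(num, den, n):
    """First n decimal digits of num/den, by long division:
    multiply the remainder by 10, peel off a digit, repeat."""
    digits, r = [], num % den
    for _ in range(n):
        r *= 10
        digits.append(r // den)
        r %= den
    return digits

print(decimal_digits(1, 28, 14))
# [0, 3, 5, 7, 1, 4, 2, 8, 5, 7, 1, 4, 2, 8]:
# two messy digits, then the six-digit repetend 571428
```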

This can be understood by looking at the powers of ten for each of the factors of the denominator independently.  If A and B are coprime, then \mathbb{Z}_{AB}^\times \cong \mathbb{Z}_{A}^\times\otimes \mathbb{Z}_{B}^\times.  This is an isomorphism that works because of the Chinese Remainder Theorem.  So, a question about the powers of 10 mod 28 can be explored in terms of the powers of 10 mod 4 and mod 7.

\begin{array}{l|l}    [10]_{28}=[10]_{28} & \left([10]_{4},[10]_{7}\right) = \left([2]_{4},[3]_{7}\right) \\[2mm]    [10^2]_{28}=[16]_{28} & \left([10^2]_{4},[10^2]_{7}\right) = \left([0]_{4},[2]_{7}\right) \\[2mm]    [10^3]_{28}=[20]_{28} & \left([10^3]_{4},[10^3]_{7}\right) = \left([0]_{4},[6]_{7}\right) \\[2mm]    [10^4]_{28}=[4]_{28} & \left([10^4]_{4},[10^4]_{7}\right) = \left([0]_{4},[4]_{7}\right) \\[2mm]    \end{array}

Once the powers of 10 are divisible by all of the 2’s and 5’s in the denominator, those factors basically disappear and only the coprime component is important.
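This can be spot-checked directly: the powers of 10 mod 28 track the pairs (mod 4, mod 7), and the mod-4 component gets stuck at 0 after two steps, leaving only the mod-7 part to cycle.

```python
# Powers of 10 mod 28 alongside the (mod 4, mod 7) pairs promised by
# the Chinese Remainder Theorem for the coprime factors 4 and 7.
for k in range(1, 5):
    print(f"[10^{k}]_28 = {pow(10, k, 28):2d}  <->  "
          f"({pow(10, k, 4)}, {pow(10, k, 7)})")
# [10^1]_28 = 10  <->  (2, 3)
# [10^2]_28 = 16  <->  (0, 2)
# [10^3]_28 = 20  <->  (0, 6)
# [10^4]_28 =  4  <->  (0, 4)
```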

Numbers are a whole thing.  If you can believe it, this was supposed to be a short post.
