Same thing with magnets: if you have two bar magnets floating around, they’ll try to line up with one magnet’s north pole next to the other magnet’s south pole.

As a result these “positive/negative” forces tend to balance out *really* fast. There are “dipole forces” (one charge might be a *little* closer, so it pulls just a *skosh* harder), but dipole forces are tiny and decrease much faster with distance (technically, all magnets are dipoles). In your body right now you have somewhere in the neighborhood of 10^{28} or 10^{29} (ten to a hundred thousand trillion trillion) charged particles in the form of protons and electrons. The number of extra, unbalanced charges on a good Van de Graaff generator that’s dangerous to approach is less than a billionth of a billionth of that.

Point is: with magnets and charges you always have a problem with things canceling themselves out almost perfectly. The strength of the electric force between (for example) two protons is just a hell of a lot stronger than the gravitational force (about 1,000,000,000,000,000,000,000,000,000,000,000,000 times bigger), but you’d never know it since those huge forces are all balanced and cancelled out by all of the negative charges around.
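To see just how lopsided that ratio is, here’s a quick back-of-the-envelope sketch in Python (the constants are rounded textbook values; conveniently, the distance between the protons cancels out, since both forces fall off as 1/r²):

```python
# Electric vs. gravitational force between two protons.
# Both forces scale as 1/r^2, so the separation cancels in the ratio.
k = 8.988e9      # Coulomb constant, N*m^2/C^2
e = 1.602e-19    # proton charge, C
G = 6.674e-11    # gravitational constant, N*m^2/kg^2
m_p = 1.673e-27  # proton mass, kg

ratio = (k * e**2) / (G * m_p**2)
print(f"{ratio:.2e}")  # about 1.2e36, i.e., ~10^36
```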

Gravity, on the other hand, has only one kind of “charge”: matter. All matter attracts all matter, so despite being far and away the weakest force, gravity is basically the last man standing on large scales. You might imagine that if gravity acted like magnetism there would be planets and stars pushing and pulling each other every which way, but in all likelihood we just wouldn’t have large structures in the universe like planets in the first place.

**Physicist**: The question is about the fact that if you type a fraction into a calculator, the decimal that comes out repeats. But it repeats in a very particular way. For example,

1/7 = 0.142857142857142857…

7 is a prime number and (you can check this) all fractions with a denominator of 7 repeat every 7-1=6 digits (even if some do so trivially, with “000000”). The trick to understanding why this happens in general is to look really hard at how division works. That is to say: just do long division and see what happens.

When we say that 1/7 = 0.142857…, what we mean is 1/7 = 0.142857142857142857…, with those same six digits cycling forever. With that in mind, here’s why 1/7 = 0.142857142857….

10 = **1**·7 + 3
30 = **4**·7 + 2
20 = **2**·7 + 6
60 = **8**·7 + 4
40 = **5**·7 + 5
50 = **7**·7 + 1
10 = **1**·7 + 3

and so on forever. You’ll notice that the same thing is done to the numerator over and over: multiply by 10, divide by 7, the quotient is the digit in the decimal and the remainder gets carried to the next step, multiply by 10, …. The remainder that gets carried from one step to the next is just [10^{k}]_{7}.
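The long-division loop described here is easy to mimic in a few lines of Python (a sketch; the function name is mine):

```python
def long_division_digits(numerator, denominator, steps):
    """Grade-school long division: at each step multiply the remainder by 10,
    divide by the denominator, and record the (digit, new remainder) pair."""
    pairs = []
    r = numerator % denominator
    for _ in range(steps):
        digit, r = divmod(10 * r, denominator)
        pairs.append((digit, r))
    return pairs

# 1/7: the digits cycle 1,4,2,8,5,7 while the remainders cycle 3,2,6,4,5,1.
print(long_division_digits(1, 7, 12))
```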

*Quick aside*: If you’re not familiar with modular arithmetic, there’s an old post here that has lots of examples (and a shallower learning curve). The bracket notation I’m using here isn’t standard, just better. “[4]_{3}” should be read “4 mod 3”. And because the remainder of 4 divided by 3 and the remainder of 1 divided by 3 are both 1, we can say “[4]_{3}=[1]_{3}“.

[10^{1}]_{7}=3, [10^{2}]_{7}=2, [10^{3}]_{7}=6, [10^{4}]_{7}=4, [10^{5}]_{7}=5, [10^{6}]_{7}=1, [10^{7}]_{7}=3, …

These aren’t the numbers that end up in the decimal expansion; they’re the remainders left over when you stop calculating the decimal expansion at any point. What’s important about these numbers is that they each determine the next number in the decimal expansion, and they repeat every 6.

After this it repeats because, for example, [10^{7}]_{7}=[10^{1}]_{7}. If you want to change the numerator to, say, 4, then very little changes:

40 = **5**·7 + 5
50 = **7**·7 + 1
10 = **1**·7 + 3
30 = **4**·7 + 2

So the important bit to look at is the remainder after each step. More generally, the question of why a decimal expansion repeats can now be seen as the question of why [10^{k}]_{P} repeats every P-1, when P is prime. For example, for 1/11 we’d be looking at [10^{k}]_{11} and for 1/13 we’d be looking at [10^{k}]_{13}. The “10” comes from the fact that we use a base 10 number system, but that’s not written in stone either (much love to my base 20 Mayan brothers and sisters. Biix a beele’ex, y’all?).

It turns out that when the number in the denominator, M, is coprime to 10 (has no factors of 2 or 5), then the numbers generated by successive powers of ten (mod M) are always also coprime to M. In the examples above M=7 and the powers of 10 generated {1,2,3,4,5,6} (in a scrambled order). The number of numbers less than M that are coprime to M (have no factors in common with M) is denoted by ϕ(M), the “Euler phi of M”. For example, ϕ(9)=6, since {1,2,4,5,7,8} are all coprime to 9. For a prime number, P, every number less than that number is coprime to it, so ϕ(P)=P-1.
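Both claims are easy to check by brute force. Here’s a Python sketch (function names are mine; `powers_of_ten_mod` assumes gcd(10, M) = 1 so the powers eventually cycle back to 1):

```python
from math import gcd

def phi(M):
    """Euler's phi: how many of 1, ..., M-1 are coprime to M."""
    return sum(1 for k in range(1, M) if gcd(k, M) == 1)

def powers_of_ten_mod(M):
    """Successive powers of 10 (mod M), up to the point where they cycle.
    Assumes gcd(10, M) == 1, so the powers eventually return to 1."""
    out, p = [], 10 % M
    while p != 1:
        out.append(p)
        p = (p * 10) % M
    return out + [1]

print(phi(9))                        # 6, since {1,2,4,5,7,8} are coprime to 9
print(sorted(powers_of_ten_mod(7)))  # [1, 2, 3, 4, 5, 6], all coprime to 7
```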

When you find the decimal expansion of a fraction, you’re calculating successive powers of ten and taking the mod. As long as 10 is coprime to the denominator, this generates numbers that are also coprime to the denominator. If the denominator is prime, there are P-1 of these. More generally, if the denominator is M, there are ϕ(M) of them. For example, 1/21 = 0.047619047619047619…, which repeats every 12 because ϕ(21)=12. It *also* repeats every 6, but that doesn’t change the “every 12” thing.

Why the powers of ten must either hit every one of the ϕ(M) coprime numbers, or some fraction of ϕ(M) (ϕ(M)/2, or ϕ(M)/3, or …), thus forcing the decimal to repeat every ϕ(M), will be covered in the answer gravy below.

**Answer Gravy**: Here’s where the number theory steps in. The best way to describe, in extreme generalization, what’s going on is to use “groups”. A group is a set of things and an operation, with four properties: closure, inverses, identity, and associativity.

In this case the set of numbers we’re looking at are the numbers coprime to M, mod M. If M=7, then our group is {1,2,3,4,5,6} with multiplication as the operator. This group is denoted “ℤ_{M}^{*}”.

The numbers coprime to M are “closed” under multiplication, because if you multiply two numbers with no factors in common with M, you’ll get a new number with no factors in common with M. For example, [3·5]_{7}=[15]_{7}=[1]_{7}. No 7’s in sight (other than the mod, which is 7).

The numbers coprime to M have inverses. This means that if a is coprime to M, then there is some b, also coprime to M, such that [ab]_{M}=[1]_{M}. This is a consequence of Bézout’s lemma (proof in the link), which says that if a and M are coprime, then there are integers x and y such that xa+yM=1, with x coprime to M and y coprime to a. Writing that using modular math, if a and M are coprime, then there exists an x such that [ax]_{M}=[1]_{M}. For example, [1·1]_{7}=[1]_{7}, [2·4]_{7}=[8]_{7}=[1]_{7}, [3·5]_{7}=[15]_{7}=[1]_{7}, and [6·6]_{7}=[36]_{7}=[1]_{7}. Here we’d write [3]_{7}^{-1}=[5]_{7}, which means “the inverse of 3 is 5”.
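Bézout’s lemma is also how you actually *compute* inverses. Here’s a Python sketch of the extended Euclidean algorithm (function names are mine):

```python
def bezout(a, b):
    """Extended Euclid: return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = bezout(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, M):
    """Inverse of a mod M, which exists exactly when gcd(a, M) == 1."""
    g, x, _ = bezout(a, M)
    if g != 1:
        raise ValueError("a and M are not coprime")
    return x % M

print(mod_inverse(3, 7))  # 5, since [3*5]_7 = [15]_7 = [1]_7
```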

The numbers coprime to M have an identity element. The identity element is the thing that doesn’t change any of the other elements. In this case the identity is 1, because [1·a]_{M}=[a]_{M} in general. 1 is coprime to everything (it has no prime factors), so 1 is always in ℤ_{M}^{*} regardless of what M is.

Finally, the numbers coprime to M are associative, which means that (ab)c=a(bc). This is because multiplication is associative. No biggy.

So ℤ_{M}^{*}, the set of numbers (mod M) coprime to M, forms a group under multiplication. Exciting stuff.

But what we’re really interested in are “cyclic subgroups”. “Cyclic groups” are generated by the same number raised to higher and higher powers. For example in mod 7, {3^{1},3^{2},3^{3},3^{4},3^{5},3^{6}}={3,2,6,4,5,1} is a cyclic group. In fact, this is all of ℤ_{7}^{*}. On the other hand, {2^{1},2^{2},2^{3}}={2,4,1} is a cyclic *subgroup* of ℤ_{7}^{*}. A subgroup has all of the properties of a group itself (closure, inverses, identity, and associativity), but it’s a subset of a larger group.
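Generating a cyclic subgroup is just repeated multiplication, which makes these examples easy to verify (a Python sketch; the function name is mine):

```python
def cyclic_subgroup(a, M):
    """The elements {a^1, a^2, ...} mod M, stopping once the powers cycle."""
    elems, p = [], a % M
    while p not in elems:
        elems.append(p)
        p = (p * a) % M
    return elems

print(cyclic_subgroup(3, 7))  # [3, 2, 6, 4, 5, 1] -- all of Z_7^*
print(cyclic_subgroup(2, 7))  # [2, 4, 1]          -- a proper subgroup
```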

In general, {a^{1},a^{2},…,a^{r}} is always a group, and often a *proper* subgroup. The “r” there is called the “order of the group”, and it is the smallest number such that [a^{r}]_{M}=[1]_{M}.

Cyclic groups are closed because [a^{j}]_{M}[a^{k}]_{M}=[a^{j+k}]_{M}, and since [a^{r}]_{M}=[1]_{M} the exponent can always be brought back into the range 1,…,r.

Cyclic groups contain the identity. There are only a finite number of elements in the full group, ℤ_{M}^{*}, so eventually different powers of a will be the same. Therefore,

[a^{j}]_{M}=[a^{k}]_{M}  ⇒  [a^{j-k}]_{M}=[1]_{M}

That is to say, if you get the same value for different powers, then the difference between those powers is the identity. For example, [2^{4}]_{7}=[2^{1}]_{7} and it’s no coincidence that [2^{4-1}]_{7}=[2^{3}]_{7}=[1]_{7}.

Cyclic groups contain inverses. There is an r such that [a^{r}]_{M}=[1]_{M}. It follows that [a^{k}]_{M}[a^{r-k}]_{M}=[a^{r}]_{M}=[1]_{M}. So, [a^{k}]_{M}^{-1}=[a^{r-k}]_{M}.

And cyclic subgroups have associativity. Yet again: no biggy, that’s just how multiplication works.

It turns out that the number of elements in a subgroup always divides the number of elements in the group as a whole. For example, ℤ_{7}^{*}={1,2,3,4,5,6} is a group with 6 elements, and the cyclic subgroup generated by 2, {1,2,4}, has 3 elements. But check it: 3 divides 6. This is Lagrange’s Theorem. It comes about because cosets (which you get by multiplying every element in a subgroup by the same number) are always the same size and are always distinct. For example (again in mod 7),

1·{1,2,4} = {1,2,4}
3·{1,2,4} = {[3]_{7},[6]_{7},[12]_{7}} = {3,6,5}

The cosets here are {1,2,4} and {3,5,6}. They’re the same size, they’re distinct, and together they hit every element in ℤ_{7}^{*}. The cosets of *any* given subgroup are always the same size as the subgroup, always distinct (no shared elements), and always hit every element of the larger group. This means that if the subgroup has S elements, there are C cosets, and the group as a whole has G elements, then SC=G. Therefore, in general, the number of elements in a subgroup divides the number of elements in a whole group.
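The coset bookkeeping can be checked directly (a Python sketch; names are mine):

```python
from math import gcd

def cosets(subgroup, M):
    """Partition the numbers coprime to M (mod M) into cosets g*H of H."""
    group = [g for g in range(1, M) if gcd(g, M) == 1]
    seen, out = set(), []
    for g in group:
        coset = frozenset((g * h) % M for h in subgroup)
        if coset not in seen:
            seen.add(coset)
            out.append(sorted(coset))
    return out

H = [1, 2, 4]        # the cyclic subgroup generated by 2, mod 7
print(cosets(H, 7))  # [[1, 2, 4], [3, 5, 6]]
```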

To sum up:

In order to calculate a decimal expansion (in base 10) you need to raise 10 to higher and higher powers and divide by the denominator, M. The quotient is the next digit in the decimal and the remainder is what’s carried on to the next step. The remainder is what the “mod” operation yields. This leads us to consider the group ℤ_{M}^{*}, which is the multiplication-mod-M group of numbers coprime to M (the not-coprime case will be considered in a damn minute). ℤ_{M}^{*} has exactly ϕ(M) elements. The powers of 10 form a “cyclic subgroup”. The number of elements in this cyclic subgroup must divide ϕ(M), by Lagrange’s theorem.

If P is prime, then ϕ(P)=P-1, and therefore if the denominator is prime the length of the cycle of digits in the decimal expansion (which is dictated by the cyclic subgroup generated by 10) must divide P-1. That is, the decimal repeats every P-1, but it might *also* repeat every (P-1)/2 or (P-1)/3 or whatever. You can also calculate ϕ(M) for M not prime, and the same idea holds.
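This is easy to test numerically: the repetend length of 1/P is the order of 10 mod P, and it always divides P-1. A Python sketch (the function name is mine, and it assumes gcd(10, M) = 1):

```python
def order_of_ten(M):
    """Smallest r with 10^r = 1 (mod M): the repetend length of 1/M.
    Assumes gcd(10, M) == 1, so such an r exists."""
    r, p = 1, 10 % M
    while p != 1:
        p = (p * 10) % M
        r += 1
    return r

for P in [7, 11, 13, 17, 41]:
    r = order_of_ten(P)
    print(P, r, (P - 1) % r)  # the last column is always 0: r divides P-1
```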

**Deep Gravy**:

Finally, if the denominator is not coprime to 10 (e.g., 3/5, 1/2, 1/14, 71/15, etc.), then things get a little screwed up. If the denominator is nothing but factors of 10, then the decimal is always finite. For example, 1/8 = 1/2^{3} = 0.125.

In general, if the denominator has powers of 2 or 5, then the resulting decimal will be a little messy for the first few digits (equal to the higher of the two powers, for example 8=2^{3}) and after that will follow the rules for the part of the denominator coprime to 10. For example, 1/28 = 1/(2^{2}·7). So, we can expect that after two digits the decimal expansion will settle into a nice six-digit repetend (because ϕ(7)=6).

Fortunately, the system works:

1/28 = 0.03571428571428571428…
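If you’d rather watch it happen digit by digit, here’s a short long-division sketch in Python for 1/28 (the function name is mine): two messy leading digits, then the six-digit repetend forever.

```python
def decimal_digits(numerator, denominator, n):
    """The first n digits after the decimal point of numerator/denominator."""
    digits, r = [], numerator % denominator
    for _ in range(n):
        d, r = divmod(10 * r, denominator)
        digits.append(d)
    return digits

d = decimal_digits(1, 28, 14)
print(d)  # [0, 3, 5, 7, 1, 4, 2, 8, 5, 7, 1, 4, 2, 8]
```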

This can be understood by looking at the powers of ten for each of the factors of the denominator independently. If A and B are coprime, then ℤ_{AB}^{*} ≅ ℤ_{A}^{*} × ℤ_{B}^{*}. This is an isomorphism that works because of the Chinese Remainder Theorem. So, a question about the powers of 10 mod 28 can be explored in terms of the powers of 10 mod 4 *and* mod 7.

Once the powers of 10 are a multiple of all of the 2’s and 5’s in the denominator, they basically disappear and only the coprime component is important.

Numbers are a whole thing. If you can believe it, this was supposed to be a short post.

This is exciting stuff. The reason we have big fancy pictures of the *eight* planets in our solar system is because we’ve sent cameras to them. Hubble and future space telescopes are great, but sometimes you’ve just gotta be there.

Tomorrow morning (July 14, 2015) New Horizons will pass Pluto at mach 48 (48 times the speed of sound, which is a misleading and entirely inappropriate way of measuring speed in space). It will furiously take pictures and measurements for a couple hours and then continue into interstellar space, where its docket will be pretty open for the next few million years.

Already New Horizons has sent us the clearest images of Pluto ever.

Unlike the last post in this vein, there’s nothing for you to personally do. But still: now we get to learn stuff about the planet-turned-dwarf-planet that’s been a bit of an asterisk for 85 years. Good times!

Update (July 15, 2015): Huzzah!

When we picture an atom we usually picture the “Bohr model”: a nucleus made of a bunch of particles packed together (protons and neutrons) with other particles zipping around it (electrons). In this picture, if you make a guess about the size of electrons and calculate how far they are from the nucleus, then you get that weird result about atoms being mostly empty. But that guess is surprisingly hard to make. The “classical electron radius” is an upper-limit guess based on the electron being nothing more than its own electric field, but it’s ultimately just a gross estimate.

However, electrons aren’t really particles (which is why it’s impossible to actually specify their size); they’re waves. Instead of being in a particular place, they’re kinda “smeared out”. If you ring a bell, you can say that there is a vibration in that bell but you can’t say where exactly that vibration is: it’s a wave that’s spread out all over the bell. Parts of the bell will be vibrating more, and parts may not be vibrating at all. Electrons are more like the “ringing” of the bell and less like a fly buzzing around the bell.

Just to be clear, this is a metaphor: atoms are not tiny bells. The math that describes the “quantum wave function” of electrons in atoms and the math that describes vibrations in a bell have some things in common.

So, the space in atoms isn’t empty. A more accurate thing to say is that the overwhelming majority of the matter in an atom is concentrated in the nucleus, which is tiny compared to the region where the electrons are found. However, even in the nucleus the same “problem” crops up; protons and neutrons are just “the ringing of bells” and aren’t simply particles either. The question “where exactly is this electron/proton/whatever?” isn’t merely difficult to answer; the question genuinely doesn’t have an answer. In quantum physics things tend to be spread out between a lot of states (in this case those different states include different positions).

The atom picture is from here.

But there are a lot of physical phenomena that poke holes in it pretty quick. For example, Foucault pendulums (more commonly known as “big pendulums“) swing as though the Earth were turning under them and in a way that exactly corresponds to the way everything in the sky turns overhead (not a coincidence).

The classic way that heliocentrism (the idea that the Sun is at the center of the solar system) is demonstrated to be better than geocentrism (the idea that the Earth is at the center of the solar system) is by looking at the motion of the other planets. This was essentially what Copernicus did: point out that with the Earth at the center the motions of the other planets are crazy, but with the Sun at the center the motions of the planets (including Earth) are simple ellipses. His original argument was essentially just an application of Occam’s razor: simpler is better, so the Sun must be at the center.

Occam’s razor is a great red flag for detecting ad-hoc theories, but it’s *not* science. With that in mind, it’s impressive how much Copernicus got exactly right. Fortunately, about a century and a half after Copernicus, Newton came along and squared that circle. Newtonian physics says a lot more than “gee whiz, but ellipses are pretty”; it actually describes exactly why all of the orbits behave the way they do with a remarkably simple set of laws for gravity and movement in general. Newtonian physics goes even farther, describing not just the motion of the planets, but also why we don’t *directly* notice the motion of our own.

If we still assumed that the Earth was sitting still in the universe, physicists would have spent the last couple centuries desperately trying to explain what’s hauling the Sun (and the rest of the planets) around in such huge circles. We’d need a bunch of extra, mysterious forces to explain away why the center of mass in the solar system doesn’t sit still (or move at a uniform speed), but is instead whipping by overhead daily.

What follows is a bunch of Newtonian stuff.

Position and velocity are both entirely subjective, but acceleration is objective. What that physically means is that there is absolutely no way, whatsoever, to determine where you are or how fast you’re moving by doing tests of any kind. Sure, you can look around and see other things passing by, but even then you’re only measuring your relative velocity (your velocity *relative* to whatever you’re looking at). So, hypothetically, if you’re on a big ball of stuff flying through space, you’d never be able to tell. Acceleration on the other hand is easy to measure.

At first blush it would seem as though there’s no way, from here on Earth, to tell the difference between the Earth moving or sitting still. If the Earth is sitting still, we wouldn’t be able to tell. If the Earth is moving, we also wouldn’t be able to tell. But we’re doing more than just moving; we’re moving in circles and as it happens traveling in a circle requires acceleration. The push you feel when you speed up or slow down comes from the exact same source as the push you feel when you turn a corner or spin around: acceleration.

If the Earth were “nailed to space” and never accelerated, then we’d only have one each of the lunar and solar tides. If the Earth never moved, then the Moon’s gravity would pull the oceans toward it and that’s it. But the Earth does move. The Moon is heavy so, even though the Earth doesn’t move nearly as much, the Earth does execute little circles to balance the Moon’s big circles. The Moon’s big circles generate enough centrifugal force to balance the Earth’s pull on it (that’s what an orbit is), and at the same time the Earth’s little circles balance the Moon’s pull on us.

The same basic thing happens between the Earth and the Sun. Things closer to the Sun orbit faster and things farther away orbit slower. But the Earth has to travel as one big block. The side facing the Sun is about 4,000 miles closer, and traveling slower than it would if it were orbiting at that slightly lower level. As a result, the Sun’s gravity “wins” a little in the “noon region” of the Earth and we get a high tide (pulled toward the Sun). The side facing away is moving a little bit faster than something at that distance from the Sun should, so it’s flung outward a little more than it should be and we get another high tide at midnight. These are called “solar tides” and they’re harder to notice because they’re about half as strong as the lunar (regular) tides. That said, the solar tides are important and they exist because the Earth is traveling in a circle around the Sun.
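That “about half as strong” can be checked on the back of an envelope: tidal acceleration from a body of mass M at distance d scales as 2GMr/d³, so comparing the Sun’s tides to the Moon’s reduces to comparing M/d³ for each body (a Python sketch with rounded astronomical values):

```python
# Tidal acceleration from a body of mass M at distance d scales as 2*G*M*r/d^3,
# so G and the Earth's radius r cancel in the Sun-to-Moon ratio.
M_sun, d_sun = 1.989e30, 1.496e11   # kg, m (1 AU)
M_moon, d_moon = 7.342e22, 3.844e8  # kg, m

ratio = (M_sun / d_sun**3) / (M_moon / d_moon**3)
print(f"{ratio:.2f}")  # about 0.46: solar tides are roughly half the lunar tides
```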

Long story short: If the Earth were stationary (geocentrism) then we’d have to come up with lots of bizarre excuses to explain why Newton’s laws work perfectly here on the ground, but not at all in space, and we’d only have one solar and lunar tide a day. If the Earth is moving (specifically: around the Sun), then Newton’s simple laws can be applied universally without buckets of caveats and asterisks*, and we get two lunar and solar tides a day.

*or even †’s.

**Physicist**: There is! For readers not already familiar with first-year calculus, this post will be a lot of nonsense.

Strictly speaking, the derivative only makes sense in integer increments. But that’s never stopped mathematicians from generalizing. Heck, non-integer exponentiation doesn’t make much sense (I mean, 2^{3.5} is “2 times itself three and a half times”. What is that?), but with a little effort we can move past that.

The derivative of a function is the slope at every point along that function, and it tells you how fast that function is changing. The “2nd derivative” is the derivative of the derivative, and it tells you how fast the slope is changing.

When you want to generalize something like this to non-integer values, you basically need to “connect the dots” between those cases where the math actually makes sense. For something like exponentiation by not-integers there’s a “correct” answer. For not-integer derivatives there really isn’t. One way is to use Fourier Transforms. Another is to use Laplace Transforms. Neither of these is ideal. Just to be clear: non-integral derivatives are nothing more than a matter of choosing “what works” from a fairly short list of options that aren’t terrible.

It turns out (as used in both of those examples) that integrals are a great way of “connecting dots”. When you integrate a function the result is more continuous and more smooth. In order to get something out that’s discontinuous at a given point, the function you put in needs to be infinitely nasty at that point (technically, it has to be so nasty it’s not even a function). So, integrals are a quick way of “connecting the dots”.

To get the idea, take a look at N!. That excited looking N is “N factorial” and it’s defined as N! = N·(N-1)·(N-2)⋯2·1. For example, 4! = 4·3·2·1 = 24. Clearly, it doesn’t make a lot of sense to write “3.5!” or, even worse, “π!”. And yet there’s a cute way to smoothly connect the dots between 3! and 4!.

The Gamma function, Γ(N), (not to be confused with the gamma factor) is defined as: Γ(N+1) = ∫_{0}^{∞} x^{N}e^{-x}dx. Before you ask, I don’t know why Euler decided to use “N+1” instead of “N”. Sometimes decent-enough folk have good reasons for doing confusing things. If you do a quick integration by parts, a pattern emerges:

Γ(N+1) = ∫_{0}^{∞} x^{N}e^{-x}dx = [-x^{N}e^{-x}]_{0}^{∞} + N∫_{0}^{∞} x^{N-1}e^{-x}dx = N·Γ(N)

So, Γ(N+1) has the same defining property that N! has: Γ(N+1) = N·Γ(N) and N! = N·(N-1)!. Even better, Γ(1) = ∫_{0}^{∞} e^{-x}dx = 1, which is the other defining property of N!, 0!=1. We now have a bizarre new way of writing N!. For all natural numbers N, N! = Γ(N+1). Unlike N!, which only makes sense for natural numbers, Γ(N+1) works for any positive real number since you can plug in whatever positive N you like into ∫_{0}^{∞} x^{N}e^{-x}dx.
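Python’s standard library happens to ship this exact function, which makes the dot-connecting easy to see (a quick sketch):

```python
import math

# Gamma(N+1) reproduces N! at the non-negative integers...
for N in range(6):
    assert math.isclose(math.gamma(N + 1), math.factorial(N))

# ...and smoothly fills in the values in between.
print(math.gamma(4.5))  # "3.5!" is about 11.63
```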

Even better, this formulation is “analytic” which means it not only works for any positive real number, but (using analytic continuation) works for any complex number as well (with the exception of those poles at each negative integer where it jumps to infinity).

Long story short, with that integral formulation you can connect the dots between the integer values of N (where N! makes sense) to figure out the values between (where N! doesn’t make sense).

So, here comes a pretty decent way to talk about fractional derivatives: fractional integrals.

If “f ′(x)=f^{(1)}(x)” is the derivative of f, “f^{(N)}(x)” is the Nth derivative of f, and “f^{(-1)}(x)” is the anti-derivative, then by the fundamental theorem of calculus f^{(-1)}(x) = ∫_{0}^{x} f(t)dt. It turns out that f^{(-N)}(x) = (1/(N-1)!)∫_{0}^{x} (x-t)^{N-1}f(t)dt. x-t runs over strictly positive values, so there’s no issue with non-integer powers, and it just so happens that we already have a cute way of dealing with non-integer factorials, so we may as well deal with that factorial cutely: f^{(-N)}(x) = (1/Γ(N))∫_{0}^{x} (x-t)^{N-1}f(t)dt.

Holy crap! We now have a way to describe fractional integrals that works pretty generally. Finally, and this is very round-about, but it turns out that a really good way to do half a derivative is to do half an integral and *then* do a full derivative of the result:

f^{(1/2)}(x) = d/dx [f^{(-1/2)}(x)] = d/dx [(1/√π)∫_{0}^{x} (x-t)^{-1/2}f(t)dt]

That “root pi” is just another math thing: Γ(1/2)=√π. If you want to do, say, a third of a derivative, then you can first find f^{(-2/3)}(x) and then differentiate that. This isn’t the “correct” way to do fractional derivatives, just something that works while satisfying a short wishlist of properties and re-creating regular derivatives without making a big deal about it.
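Here’s a numerical sketch of that recipe in Python (function names and step sizes are mine): do half an integral using the Γ(1/2)=√π formula, then take an ordinary derivative of the result. The substitution t = x-s² tames the (x-t)^{-1/2} singularity so a plain trapezoid rule behaves. For f(x)=x, the half derivative should come out to 2√x/√π:

```python
import math

def half_integral(f, x, n=50_000):
    """f^(-1/2)(x) = (1/sqrt(pi)) * integral from 0 to x of (x-t)^(-1/2) f(t) dt.
    Substituting t = x - s^2 turns this into
    (2/sqrt(pi)) * integral from 0 to sqrt(x) of f(x - s^2) ds,
    which has no singularity, so the trapezoid rule works fine."""
    a = math.sqrt(x)
    h = a / n
    total = 0.5 * (f(x) + f(x - a * a))  # endpoint terms: s = 0 and s = sqrt(x)
    for k in range(1, n):
        s = k * h
        total += f(x - s * s)
    return (2.0 / math.sqrt(math.pi)) * total * h

def half_derivative(f, x, dx=1e-4):
    """Half a derivative: a full derivative of the half integral."""
    return (half_integral(f, x + dx) - half_integral(f, x - dx)) / (2 * dx)

print(half_derivative(lambda t: t, 1.0))  # the half derivative of x, at x=1
print(2 / math.sqrt(math.pi))             # the exact answer: 2*sqrt(1)/sqrt(pi)
```

Doing the half integral twice (or the half derivative twice) numerically recovers the ordinary integral (or derivative), which is the whole point of the construction.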

**Answer Gravy**: You can show that f^{(-N)}(x) = (1/(N-1)!)∫_{0}^{x}(x-t)^{N-1}f(t)dt (or even better, f^{(-N)}(x) = (1/Γ(N))∫_{0}^{x}(x-t)^{N-1}f(t)dt) through induction. The base case is f^{(-1)}(x) = (1/0!)∫_{0}^{x}(x-t)^{0}f(t)dt = ∫_{0}^{x}f(t)dt. This is true by the fundamental theorem of calculus, which says that the anti-derivative (the “-1” derivative) is just the integral. So… check.

To show the equation in general, you demonstrate the (N+1)th case using the Nth case.

f^{(-N-1)}(x) = (f^{(-N)})^{(-1)}(x)
 = ∫_{0}^{x} f^{(-N)}(t)dt
 = ∫_{0}^{x} (1/(N-1)!)∫_{0}^{t} (t-u)^{N-1}f(u)du dt
 = (1/(N-1)!)∫_{0}^{x}∫_{0}^{t} (t-u)^{N-1}f(u)du dt
 = (1/(N-1)!)∫_{0}^{x}∫_{u}^{x} (t-u)^{N-1}dt f(u)du
 = (1/(N-1)!)∫_{0}^{x} ((x-u)^{N}/N) f(u)du
 = (1/N!)∫_{0}^{x} (x-u)^{N} f(u)du

Huzzah! Using the formula for f^{(-N)}(x) we get the formula for f^{(-N-1)}(x).

There’s a subtlety that goes by really quick between the fourth and fifth lines. When you switch the order of integration (dudt to dtdu) it messes up the limits. Far and away the best way to deal with this is to draw a picture. At first, for a given value of t, we integrate u from zero to t, and then integrating t from zero to x. When switching the order we need to make sure we’re looking at the same region. So for a given value of u, we integrate t from u to x and then integrate u from zero to x.

So that’s what happened there.
