Q: How hard would it be to keep the Moon from drifting away?

Physicist: Almost prohibitively!

Ever since the Moon entered the scene 4.5 billion years ago, it’s been slowly drifting away.  Initially it was around 15 times closer, and looked 15 times bigger across in the sky.  The effects that push it away decrease rapidly with distance, so the Moon climbed most of the distance to its current orbit early, but even today it gains about 4 cm per year.

Earth’s tidal bulges (the water distended by the Moon’s tidal forces) are dragged ahead of the Moon by Earth’s rotation.  That extra water pulls the Moon forward a tiny amount, speeding it up and elevating its orbit.

The tides raised by the Moon are made up of a lot of water and that water, like all matter, creates gravity.  The Earth turns in the same direction that the Moon orbits and, since water doesn’t instantly slosh back to where it’s supposed to be, the tidal bulge is dragged a little ahead of the Moon.  The nearly insignificant gravity generated by that extra water pulls the Moon forward and speeds it up.  The effect is especially small, since a nearly identical bulge appears on the opposite side of the Earth, pulling in the wrong direction.  The difference is that the forward water bulge is slightly closer.

If you’re in orbit and you start moving faster, your orbit gets higher (ironically you end up slowing down overall, just higher and slower).  At the same time that the tidal bulge pulls on the Moon, the Moon pulls on that bulge, slowing the Earth.

The effect that gives the Moon more orbital energy is a small effect (a couple extra meters of water on the surface of Earth, slightly leading the Moon) acting over a comparatively large distance.  The overall effect is so small that we could, with fantastic but feasible effort, counteract it.

So while the Moon drifts away, days on Earth get longer.  According to coral records, as recently as half a billion years ago the day was a mere 22 hours.  Corals aren’t inherently clerical, but they do have both daily and yearly growth cycles with corresponding growth rings.  And 4.5 billion years ago, before the Moon had siphoned off so much of our rotational inertia, a day on Earth was a mere 6 hours.  Give or take (there wasn’t any coral at the time to obsessively write it down).  So the Moon is drifting away and in exchange we Earthlings get both lunch and brunch.

A retro-reflector is a clever set of three mutually perpendicular mirrors that reflect any light back toward where it came from, regardless of where that is.  Basically, if someone shoots you with a laser, wearing a suit covered in retro-reflectors is the fastest way to get revenge.  By bouncing lasers off of any of five sets of retro-reflectors left on the Moon between 1969 and 1973 by the USA and USSR, we can determine the distance to the Moon to within millimeters and have found that, at present, the Moon is drifting away at a sedate 4 cm per year.

Left: The Terrestrial half of the Lunar ranging program.  Right: The Lunar half.  Only a handful of photons from the original laser bounces off of that little square and makes it back to be detected.  So on the left, that isn’t a laser going out and coming back; it’s two lasers going out.

Given enough time, the Moon would eventually escape entirely.  Luckily, in about five billion years the Sun will swell up and destroy both the Earth and Moon before that happens.  So: nothing to worry about.

However!  If we’d like to continue being the only planet to have both annular and total solar eclipses (the only objective measure of planetary exceptionalism), we need to act fast in the next several million years to keep the Moon from getting any farther away.

Earth, with our Moon, is the only known planet to see both annular (top) and total (bottom) eclipses.  In a total eclipse the Sun is totally covered, leaving only the fainter corona around it visible.  This is what we stand to lose.

The Moon is “rolling down a hill”, energetically speaking, and energy needs to be expended every year, (effectively) forever, to keep the Moon from “rolling” any farther away.  Unlike practically every other question about planetary-scale engineering, “Project Sisyphus” is not impossible!  Merely so expensive and pointless that it may as well be.

Anything that moves in a circle has angular momentum.  A Moon with mass m orbiting at a radius of r around a planet with mass M, in a universe with gravitational constant G, will have an angular momentum of:

L=m\sqrt{GMr}

As r gets bigger (as the Moon moves away), L increases.  To hold the Moon in place we’d need to counteract that increase and keep the angular momentum of the Moon constant.  To figure out how much L changes over small distances (and 4 cm counts), we just find the differential.  This is what calculus is good at: finding tiny changes in one thing, dL, given tiny changes in something else, dr.

dL=\frac{m}{2}\sqrt{\frac{GM}{r}}dr=\frac{7.35\times10^{22}kg}{2}\sqrt{\frac{(6.67\times10^{-11}\frac{Jm}{kg^2})(5.97\times10^{24}kg)}{3.85\times 10^8m}}(0.04m)=1.49\times10^{24}\frac{m^2kg}{s}
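
If you’d like to check that arithmetic, here’s a quick sketch in Python (not from the original post; the numbers are just the standard values quoted above):

```python
import math

G = 6.67e-11   # gravitational constant, J*m/kg^2
M = 5.97e24    # mass of the Earth, kg
m = 7.35e22    # mass of the Moon, kg
r = 3.85e8     # Earth-Moon distance, m
dr = 0.04      # the Moon's annual recession, m

# dL = (m/2) * sqrt(G*M/r) * dr
dL = (m / 2) * math.sqrt(G * M / r) * dr
print(f"annual angular momentum gain: {dL:.3g} m^2*kg/s")
# annual angular momentum gain: 1.49e+24 m^2*kg/s
```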


Ion drives have exhaust velocities much higher than the escape velocity of the Moon, which is good; otherwise they wouldn’t be rockets moving the Moon, so much as fountains decorating the Moon.  They’re also the most efficient type of space propulsion, maxing out around 80% efficiency.  The most powerful use about 100 kW of power and are capable of generating about 5 N of force (about 1 pound).

One of these drives, pushing for a year, could add or subtract

rFt=(3.85\times 10^8m)(5N)(3.15\times10^7s)=6.07\times10^{16}\frac{m^2kg}{s}

from the Moon’s orbital angular momentum.

So we could counter the Moon’s annual angular momentum gain with a mere 24.5 million of our most powerful ion drives, each coupled with (at least) a 50 m by 50 m array of solar cells and batteries to supply the 100 kW they need (including inefficiencies and the lack of Sun at night).  Powering those drives to keep the Moon in place only requires a modest one sixth of the total energy generated by people every year (as of now).  So if we really, really felt like doing it, this is the sort of “problem” that could be “solved” with little more than the greatest collective effort in human history.
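
The drive count falls out of dividing those two numbers.  A sketch, using the figures above:

```python
dL_per_year = 1.49e24  # the Moon's annual angular momentum gain, m^2*kg/s
r = 3.85e8             # Earth-Moon distance, m
F = 5.0                # thrust of one big ion drive, N
year = 3.15e7          # seconds in a year

dL_per_drive = r * F * year  # angular momentum one drive can remove per year
n_drives = dL_per_year / dL_per_drive
print(f"drives needed: {n_drives:.3g}")                  # ~2.46e+07: about 24.5 million
print(f"total power: {n_drives * 100e3 / 1e12:.1f} TW")  # ~2.5 TW: roughly a sixth of humanity's output
```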

If you lived on the Moon, you could build a statue pointing at the Earth and you’d never have to change it, because the Moon’s orientation is fixed with respect to the Earth.  If that statue is pointing straight up, it’s in the middle of the justifiably-named “near side”.  For exactly the same reason, you could build a statue pointing in the direction the Moon is moving as it orbits.  If the statue were pointing straight up, it would be in the middle of the west side, near Mare Orientale.

The hemispheres of the Moon.  The nearside (upper left) should look familiar.  As seen in this nearside picture, the Moon is moving left toward the western hemisphere (lower right).  Mare Orientale is that central dark crater that may as well say “place braking rockets here”.

If we put all of those ion drives, their batteries, and solar panels in Mare Orientale, they would cover about 75% of it.  Combined with all of the access roads and support structures, there’d be scarcely a single stone left unturned.

Each of those ion drives would need on the order of a ton of gas every year (argon and xenon are popular choices) to use as propellant.  This is the stuff that would literally be thrown in the direction the Moon is moving in order to slow it down.  Summed over the millions of drives, the gas continuously fired from the Moon by Project Sisyphus would be on par with the peak loss of gas in comets as they pass through the inner solar system.  That is: on a clear night we would be able to see the gas as a ghostly comet tail extending, in a very straight band, from the left side of the Moon (from the right side for our southern hemisphere readers.  G’day and bom dia!).
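
That “ton of gas” figure is just thrust = (mass flow) × (exhaust velocity), rearranged.  A rough check in Python, assuming the 5 N drives from earlier and a 50 km/s exhaust (typical of the fastest ion drives):

```python
F = 5.0        # thrust per drive, N
v_e = 50e3     # exhaust velocity, m/s
year = 3.15e7  # seconds in a year

mdot = F / v_e            # mass flow, kg/s, since thrust = mdot * v_e
print(mdot * year / 1e3)  # ~3 tonnes of propellant per drive per year: on the order of a ton
```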

We can harvest enough argon from our atmosphere (it’s about 1% of the air you’re breathing now) to provide propellant for the next 2 billion years, so we’ll have time to figure out another option before we exhaust the supply.  The Falcon Heavy could carry enough argon for about a dozen or two of the 24.5 million drives, so we’d need to launch several thousand of them to the Moon every day until we find a better system.

Compared to that, actually building and maintaining all of the infrastructure on the Moon would be difficult.  It would be a lot easier to harvest everything (steel for structures, silicon for solar cells, etc.) we need from the Moon directly.  So the damage wouldn’t be restricted to just Mare Orientale; there would be deep scars all over the Moon from where we tore up the place for mining.

So the Moon is drifting away, meaning that in a few hundred million years we won’t have total eclipses and the tide will be slightly more mild.  With an achievable, but mind-boggling, effort we could stop the Moon from drifting away and ensure that our descendants can, in several hundred million years, continue to appreciate a slightly wider variety of astronomical phenomena (assuming eyes are still in style).  In ten million years our barely-human descendants will curse us for handing this chore down to them, and in a hundred million years their definitely-not-human descendants will shake their cyber-tentacles at us in fury.

But you guys.  I think we should try it.


For the first time ever, you can buy a book!

Physicist: Over the past year I’ve been putting together a collection of some (fifty-four) of my favorite and most elucidating articles from the past decade, revised, updated, and in book form.  You can get your very own copy here!

I wrote a book! It’s good. You should buy it.  The cover is a false-color x-ray of a chameleon, which is hilarious.

In an effort to plug it, I’ll be a guest on Story Collider, which will be recording at the Tipsy Crow in San Diego on Thursday at 7:00.  It’s free and should be fun, so if you’d like to show up, you can learn more about the whole thing and register here.

And for those of you attending the Joint Mathematics Meeting (Comic Con for nerds) in San Diego this year, I’ll be at Springer’s booth on Friday.


This is Springer’s first foray into “popular science”.  It’s divided into four chapters: “big things”, “small things”, “in between things”, and “not things” (math).

I aimed it at my younger self, who was unimpressed by the vagueness of pop sci and frustrated by the technicalness of actual sci.  The articles in “Do Colors Exist?” cover the important ideas intuitively and without dumbing down, but also assume that you don’t know a bunch of fancy terminology.  Even if physics isn’t your thing, this is exactly the sort of gift you could give a nerd/science friend without embarrassment.  It provides satisfying answers for the man-on-the-street, while including details for the “advanced” reader.


The blurb from the back of the book (which I didn’t write) reads:

Why do polished stones look wet? How does the Twin Paradox work? How can we be sure that pi never repeats? How does a quantum computer break encryption? Discover the answers to these, and other profound physics questions!

This fascinating book presents a collection of articles based on conversations and correspondences between the author and complete strangers about physics and math. The author, a researcher in mathematical physics, responds to dozens of questions posed by inquiring minds from all over the world, ranging from the everyday to the profound.

Rather than unnecessarily complex explanations mired in mysterious terminology and symbols, the reader is presented with the reasoning, experiments, and mathematics in a casual, conversational, and often comical style. Neither over-simplified nor over- technical, the lucid and entertaining writing will guide the reader from each innocent question to a better understanding of the weird and beautiful universe around us.

Advance praise for Do Colors Exist?: “Every high school science teacher should have a copy of this book. The individual articles offer enrichment to those students who wish to go beyond a typical ‘dry curriculum’. The articles are very fun. I probably laughed out loud every 2-3 minutes. This is not easy to do. In fact, my children are interested in the book because they heard me laughing so much.” -Ken Ono, Emory University


Keeping this website ad-free and cost-free is important, so this will be the last time you’ll have to hear about this.


Q: Where is all the anti-matter?

Physicist: Anti-matter is exactly the same as ordinary matter but opposite, in very much the same way that a left hand is exactly the same as a right hand… but opposite.  Every anti-particle has exactly the same mass as its regular-particle counterpart, but with a bunch of its other characteristics flipped.  For example, protons have an electric charge of +1 and a baryon number of +1.  Anti-protons have an electric charge of -1 and a baryon number of -1.  The positive/negativeness of these numbers is irrelevant.  A lot like left and right hands, the only thing that’s important about positive charges is that they’re the opposite of negative charges.

Hydrogen is stable because its constituent particles have opposite charges and opposites attract.  Anti-hydrogen is stable for exactly the same reason.

Anti-matter acts, in (nearly) every way we can determine, exactly like matter.  Light (which doesn’t have an anti-particle) interacts with one in exactly the same way as the other, so there’s no way to just look at something and know which is which.  The one and only exception we’ve found so far is beta decay.  In beta decay a neutron fires a new electron out of its “south pole”, whereas an anti-neutron fires an anti-electron out of its “north pole”.  This is exactly the difference between left and right hands.  Not a big deal.

Left: A photograph of an actual flower made of regular matter. Right: An artistic representation of a flower made of anti-matter.

So when we look out into the universe and see stars and galaxies, there’s no way to tell which matter camp, regular or anti, that they fall into.  Anti-stars would move and orbit the same way and produce light in exactly the same way as ordinary stars.  Like the sign of electrical charge or the handedness of hands, the nature of matter and anti-matter are indistinguishable until you compare them.

But you have to be careful when you do, because when a particle comes into contact with its corresponding anti-particle, the two cancel out and dump all of their mass into energy (usually lots of light).  If you were to casually grab hold of 1 kg of anti-matter, it (along with 1 kg of you) would release about the same amount of energy as the largest nuclear detonation in history.

The Tsar Bomba from 100 miles away.  This is what 2 kg worth of energy can do (when released all at once).

To figure out exactly how much energy is tied up in matter (either kind), just use the little known relation between energy and matter: E=mc^2.  When you do, be sure to use standard units (kilograms for mass, meters and seconds for the speed of light, and Joules for energy) so that you don’t have to sweat the unit conversions.  For 2 kg of matter, E = (2 kg)(3×10^8 m/s)^2 = 1.8×10^17 J.
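
If plugging in numbers by hand isn’t your thing, the same calculation in a couple lines of Python:

```python
m = 2.0  # kg annihilated: 1 kg of anti-matter plus 1 kg of (former) you
c = 3e8  # speed of light, m/s
print(f"{m * c**2:.2g} J")  # 1.8e+17 J: about one Tsar Bomba
```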

When anti-matter and matter collide it’s hard to miss.  We can’t tell whether a particular chunk of stuff is matter or anti-matter just by looking at it, but because we don’t regularly see stupendous space kablooies as nebulae collide with anti-nebulae, we can be sure that (at least in the observable universe) everything we see is regular matter.  Or damn near everything.  Our universe is a seriously unfriendly place for anti-matter.

So why would we even suspect that anti-matter exists?  First, when you re-write Schrödinger’s equation (an excellent way to describe particles and whatnot) to make sense in the context of relativity (the fundamental nature of spacetime) you find that the equation that falls out has two solutions: a sort of left and right form for most kinds of particles (matter and anti-matter).  Second, and more importantly, we can actually make anti-matter.

Very high energy situations, like those in particle accelerators, randomly generate new particles.  But these new particles are always produced in balanced pairs; for every new proton (for example) there’s a new anti-proton.  The nice thing about protons is that they have a charge and can be pushed around with magnets.  Conveniently, anti-protons have the opposite charge and are pushed in the opposite direction by magnets.  So, with tremendous cleverness and care, the shrapnel of high speed particle collisions can be collected and sorted.  We can collect around a hundred million anti-particles at a time using particle accelerators (to create them) and particle decelerators (to stop and store them).

Anti-matter, it’s worth mentioning, is (presently) an absurd thing to build a weapon with.  Considering that it takes the energy of a small town to run a decent particle accelerator, and that a mere hundred million anti-protons have all the destructive power of a single drop of rain, it’s just easier to throw a brick or something.

The highest energy particle interactions we can witness happen in the upper atmosphere; to see them we just have to be patient.  The “Oh My God Particle” arrived from deep space with around ninety million times the energy of the particle beams in CERN, but we only see such ultra-high energy particles every few months and from dozens of miles away.  We bothered to build CERN so we could see (comparatively feeble) particle collisions at our leisure and from really close up.

Those upper atmosphere collisions produce both matter and anti-matter, some tiny fraction of which ends up caught in the Van Allen radiation belts by the Earth’s magnetic field.  In all, there are a few nanograms of anti-matter up there.  Presumably, every planet and star with a sufficient and stable magnetic field has a tiny, tiny amount of anti-matter in orbit just like we do.  So if you’re looking for all natural anti-matter, that’s the place to look.

But if anti-matter and matter are always created in equal amounts, and there’s no real difference between them (other than being different from each other), then why is all of the matter in the universe regular matter?

No one knows.  It’s a total mystery.  Isn’t that exciting?  Baryon asymmetry is a wide open question and, not for lack of trying, we’ve got nothing.


Update: A commenter kindly pointed out that a little anti-matter is also produced during solar flares (which are definitively high-energy) and streams away from the Sun in solar wind.


Q: Is it possible to write a big number using a small number? Is there a limit to how much information can be compressed?

Physicist: Although there are tricks that work in very specific circumstances, in general when you “encode” any string of digits using fewer digits, you lose some information.  That means that when you want to reverse the operation and “decode” what you’ve got, you won’t recover what you started with.  What we normally call “compressed information” might more accurately be called “better bookkeeping”.

When you encode something, you’re translating all of the symbols in one set into another set. The second set needs to be bigger (left) so that you can reverse the encoding. If there are fewer symbols in the new set (right), then there are at least a few symbols (orange dots) in the new set that represent several symbols in the old set.  So, when you try to go back to the old set, it’s impossible to tell which symbol is the right one.

As a general rule of thumb, count up the number of possible symbols (which can be numbers, letters, words, anything really) and make sure that the new set has more.  For example, a single letter is one of 26 symbols (A, B, …, Z), a single digit is one of 10 (0, 1, …, 9), and two digits is one of 10^2=100 (00, 01, …, 99).  That means that no matter how hard you try, you can’t encode a letter with a single digit, but you can easily do it with two (because 10 < 26 < 100).  The simplest encoding in this case is a=1, …, z=26, and the decoding is equally straightforward.  This particular scheme isn’t terribly efficient (because 27-100 remain unused), but it is “lossless” because you’ll always recover your original letter.  No information is lost.

Similarly, the set of every possible twenty-seven-letter word has 26^27 = 1.6×10^38 different permutations in it (from aaaaaaaaaaaaaaaaaaaaaaaaaaa to zzzzzzzzzzzzzzzzzzzzzzzzzzz).  So, if you wanted to encode “Honorificabilitudinitatibus” as a number, you’d need at least 39 numerical digits (because 39 is the first whole number bigger than log10(26^27)=38.204).
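
Both the two-digit letter code and the 39-digit bound are easy to play with in Python (a sketch, not anyone’s official scheme):

```python
import math

# Two digits per letter is plenty, because 10 < 26 < 100:
encode = {chr(ord('a') + i): f"{i + 1:02d}" for i in range(26)}  # a=01, ..., z=26
decode = {v: k for k, v in encode.items()}

word = "honorificabilitudinitatibus"
code = "".join(encode[c] for c in word)  # 54 digits: lossless, but wasteful
assert "".join(decode[code[i:i + 2]] for i in range(0, len(code), 2)) == word

# And no scheme can beat 39 digits for arbitrary 27-letter strings:
print(math.log10(26 ** 27))  # 38.204..., so 39 digits are needed
```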

π gives us a cute example of the impossibility of compressing information.  Like effectively all numbers, you can find any number of any length in π (probably).  Basically, the digits in π are random enough that if you look for long enough, then you’ll find any number you’re looking for in a “million monkeys on typewriters” sort of way.

So, if every number shows up somewhere in π, it seems reasonable to think that you could save space by giving the address of the number (what digit it starts at) and its length.  For example, since π=3.141592653589793238… if your phone number happens to be “415-926-5358“, then you could express it as “10 digits starting at the 2nd” or maybe just “10,2”.  But you and your phone number would be extremely lucky.  While there are some numbers like this, that can be represented using really short addresses, on average the address is just as long as the number you’re looking for.  This website allows you to type in any number to see where it shows up in the first hundred million digits in π.  You’ll find that, for example, “1234567” doesn’t appear until the 9470344th digit of π.  This seven digit number has a seven digit address, which is absolutely typical.
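
If you’d rather generate the digits than trust a website, the arbitrary-precision library mpmath can do it (a sketch; computing ten million digits takes a few minutes, and this is not how the site linked above works):

```python
from mpmath import mp  # pip install mpmath

mp.dps = 10_000_000      # ten million decimal places of pi (slow, but it gets there)
digits = str(mp.pi)[2:]  # "1415926535...", with the leading "3." dropped

def pi_address(number: str) -> int:
    """The 1-indexed decimal place where `number` first appears (0 if never found)."""
    return digits.find(number) + 1

print(pi_address("4159265358"))  # 2: ten digits starting at the 2nd
print(pi_address("1234567"))     # 9470344, as quoted above
```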

The exact same thing holds true for any encoding scheme.  On average, encoded data takes up just as much room as the original data.

However!  It is possible to be clever and some data is inefficiently packaged.  You could use 39 digits to encode every word, so that you could handle any string of 27 or fewer letters.  But the overwhelming majority of those strings are just noise, so why bother having a way to encode them?  Instead you could do something like enumerating all of the approximately 200,000 words in the English dictionary (1=”aardvark”, …, 200000=”zyzzyva”), allowing you to encode any word with only six digits.  It’s not that data is being compressed, it’s that we’re doing a better job keeping track of what is and isn’t a word.

What we’ve done here is a special case of actual data “compression”.  To encode an entire book as succinctly as possible (without losing information), you’d want to give shorter codes to words you’re likely to see (1=”the”), give longer codes to words you’re unlikely to see (174503=”absquatulate“), and no code for words you’ll never see (“nnnfrfj”).

Every individual thing needs to have its own code, otherwise you can’t decode and information is lost.  So this technique of giving the most common things the shortest code is the best you can do as far as compression is concerned.  This is literally how information is defined.  Following this line of thinking, Claude Shannon derived “Shannon Entropy” which describes the density of information in a string of symbols.  The Shannon entropy gives us an absolute minimum to how much space is required for a block of data, regardless of how clever you are about encoding it.


Answer Gravy: There is a limit to cleverness, and in this case it’s Shannon’s “source coding theorem“.  In the densest possible data, each symbol shows up about as often as every other and there are no discernible patterns.  For example, “0000000001000000” can be compressed a lot, while “0100100101101110” can’t.  Shannon showed that the entropy of a string of symbols, sometimes described as the “average surprise per symbol”, tells you how compactly that string can be written.

Incidentally, it’s also a great way to define information, which is exactly what Shannon did in this remarkable (and even fairly accessible) paper.

If the nth symbol in your set shows up with probability Pn, then the entropy in bits (the average number of bits per symbol) is: H=-\sum_n P_n\log_2\left(P_n\right).  The entropy tells you both the average information per character and the highest density that can be achieved.

For example, in that first string of mostly zeros, there are 15 zeros and 1 one.  So, P_0=\frac{15}{16}, P_1=\frac{1}{16}, and H=-\frac{15}{16}\log_2\left(\frac{15}{16}\right)-\frac{1}{16}\log_2\left(\frac{1}{16}\right)\approx0.337.  That means that each digit only uses 0.337 bits on average.  So a sequence like this (or one that goes on a lot longer) could be made about a third as long.

In the second, more balanced string, P_0=P_1=\frac{8}{16}=\frac{1}{2} and H=-\frac{1}{2}\log_2\left(\frac{1}{2}\right)-\frac{1}{2}\log_2\left(\frac{1}{2}\right)=\frac{1}{2}+\frac{1}{2}=1.  In other words, each digit uses about 1 bit of information on average; this sequence is already about as dense as it can get.
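
The whole calculation fits comfortably in a few lines of Python:

```python
from collections import Counter
from math import log2

def entropy_bits(s: str) -> float:
    """Shannon entropy: the average number of bits of information per symbol."""
    return -sum((n / len(s)) * log2(n / len(s)) for n in Counter(s).values())

print(entropy_bits("0000000001000000"))  # ~0.337 bits per symbol: very compressible
print(entropy_bits("0100100101101110"))  # 1.0 bits per symbol: already as dense as it gets
```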

Here the log was done in base 2, but it doesn’t have to be; if you did the log in base 26, you’d know the average number of letters needed per symbol.  In base 2 the entropy is expressed in “bits”, in base e (the natural log) the entropy is expressed in “nats”, and in base π the entropy is in “slices”.  Bits are useful because they describe information in how many “yes/no” questions you need, nats are more natural (hence the name) for things like thermodynamic entropy, and slices are useful exclusively for this one joke about π (its utility is debatable).


Q: Is reactionless propulsion possible?

Physicist: In a word: no.

A reactionless drive is basically a closed box with the ability to just start moving, on its own, without touching or exuding anything.  The classic sci-fi tropes of silent flying cars or hovering UFOs are examples of reactionless drives.

The problem at the heart of all reactionless drives is that they come into conflict with Newton’s famous law “for every action there is an equal and opposite reaction” (hence the name).  To walk in one direction, you push against the ground in the opposite direction.  To paddle your canoe forward, you push water backward.  The stuff you push backward so you can move forward is called the “reaction mass”.

In order to move stuff forward, you need to move other stuff backward.

This is a universal law, so unfortunately it applies in space.  If you want to move in space (where there’s nothing else around) you need to bring your reaction mass with you.  This is why we use rockets instead of propellers or paddles in space; a rocket is a mass-throwing machine.

But mass is at a premium in space.  It presently costs in the neighborhood of $2000/kg to send stuff to low Earth orbit (a huge improvement over just a few years ago).  So, the lighter your rocket, the better.  Typically, a huge fraction of a rocket’s mass is fuel/reaction mass, so the best way to make spaceflight cheaper and more feasible is to cut down on the amount of reaction mass.  The only way to do that at present is to use that mass more efficiently.  If you can throw mass twice as fast, you’ll push your rocket twice as hard.  Traditionally, that’s done by burning fuel hotter and under higher pressure so it comes shooting out faster.

In modern rockets the exhaust is moving on the order of 2-3 km per second.  However, your reaction mass doesn’t need to be fuel; it can be anything.  Ion drives fire ionized gas out of their business end at up to 50 km per second, meaning they can afford to carry far less reaction mass.  Spacecraft with ion drives are doubly advantaged: not only are they throwing their reaction mass much faster, but since they carry less of it, they can be smaller and easier to push.
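
To put numbers on that advantage, the standard Tsiolkovsky rocket equation (not derived here) tells you how much reaction mass a given maneuver costs.  A sketch with made-up but representative numbers:

```python
from math import exp

def propellant_mass(dry_mass: float, delta_v: float, v_exhaust: float) -> float:
    """Reaction mass needed for a velocity change (Tsiolkovsky rocket equation)."""
    return dry_mass * (exp(delta_v / v_exhaust) - 1)

dv = 5e3  # m/s, a respectable deep-space maneuver
print(propellant_mass(1000, dv, 3e3))   # chemical (~3 km/s exhaust): ~4294 kg per 1000 kg of ship
print(propellant_mass(1000, dv, 50e3))  # ion (~50 km/s exhaust):     ~105 kg per 1000 kg of ship
```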

The drawback is that ion drives dole out that reaction mass a tiny bit at a time.  The most powerful ion drives produce about 0.9 ounces of force.  A typical budgie (a small, excitable bird) weighs about 1.2 ounces and, since they can lift themselves, budgies can generate more force than any ion drive presently in production.

Compared to rockets, ion drives pack a greater punch for a given amount of reaction mass.  However, they deliver that punch over a very long time and with less force than a budgie.

Given the limitations and inefficiencies, wouldn’t it be nice to have a new kind of drive that didn’t involve reaction mass at all?  You’d never have to worry about running out of reaction mass; all you’d need is a power supply, and you could fly around for as long as you want.

That’s not to say that propellantless propulsion isn’t possible.  There are ways to move without carrying reaction mass with you.  You can use light as your exhaust (a “photon drive”), but you’ll notice that a flashlight or laser pointer doesn’t have much of a kick.  And you can slingshot around planets, but then the planet is your reaction mass.

The problem with reactionless drives, fundamentally, is that Newton’s third law has no (known) exceptions.  It is one of the most bedrock, absolute rules in any science and a keystone in our understanding of the universe.  On those rare occasions when someone thought they had found an exception, it always turned out to be an issue with failing to take something into account.  For example, when a neutron decays into a proton and electron, the new pair of particles don’t fly apart in exactly opposite directions.  Instead, the pair have a net momentum that the original neutron did not.

When a stationary neutron (gray) decays into a proton (red) and electron (blue), the new pair flies apart, but always favors one direction.  Newton’s laws imply that there must be a third particle moving in the other direction to balance the other two.

The implication (according to Newton’s law) is that there must be another particle to balance things out.  And that’s exactly the case.  Although the “extra particle that’s really hard to detect” theory was first proposed in 1930, it wasn’t until 1956 that neutrinos were finally detected and verified to exist.  The imbalanced momentum, a violation of Newton’s laws, came down to a missing particle.  Today neutrinos are one of the four ways we can learn about space, along with light, gravitational waves, and go-there-yourself.

There are plenty of ideas floating around about how to create reactionless drives, such as the Woodward Effect or the Alcubierre warp drive.  But in no case do these ideas use established science.  The Woodward effect depends on Mach’s principle (that somehow inertia is caused by all the other matter in the universe), and reads like a pamphlet a stranger on the street might give you, while the Alcubierre drive needs lots of negative energy, which flat-out isn’t a thing.

Science is all about learning things we don’t know and trying to prove our own theories wrong.  While scientific discovery is certainly awe inspiring, it is also the exact opposite of wishful thinking.  That said, good science means keeping an open mind much longer than any reasonable person would be willing to.  In the ultimate battle between theoretical and experimental physics, the experimentalists always win.  If someone ever manages to create a self-moving, reactionless drive, then all the theories about why that’s impossible go out the window.  But as of now, those theories (standard physics) are holding firm.  We can expect that for the rest of forever, all spacecraft will have a tail of exhaust behind them.


Q: How can I set up a random gift exchange that’s different from year to year?

The original question was: I’ve got a large family and we do a yearly gift exchange, one person to one person, and I’d like to make an algorithm or something to do random selection without repeating for some time, and to be able to take old data and put it in to avoid repeats.  I’m pretty good at math.  I’m 29 and my trade is being a machinist, so I’ve got some understanding of how things kinda work.


Physicist: A good method should be trivially easy to keep track of from year to year, work quickly and simply, never assign anyone to themselves, make sure that everyone gives to everyone else exactly once, and be unobtrusive enough that it doesn’t bother anybody.  Luckily, there are a few options.  Here’s one of the simplest.

Give each of the N people involved a number from 1 to N.  The only things you’ll need to keep track of from year to year are everyone’s number and an index number, k.  Regardless of who the best gift-giver is, there are no best numbers and no way to game the system, and since no one wants to keep track of a list from one year to the next, you should choose something simple like alphabetical numbering (Aaron Aardvark would be #1 and Zylah von Zyzzyx would be #N).

Draw N dots in a circle then, starting with dot #1, draw an arrow to dot #(1+k), and repeat.  There’s nothing special about any particular dot; since they’re arranged in a circle #1 has just as much a claim to be first as #3 or #N.  When you finally get back to dot #1, you’ll have drawn a star.  Each different value of k, from 1 to N-1, will produce a different star and a different gift-giving pattern.  For example, if N=8 and k=3, then you get a star that describes the pattern {1→4→7→2→5→8→3→6→1} (that is “1 gives to 4 who gives to 7 …”).

When N is prime, or more generally when N and k have no common factors, you’ll hit every dot with a single star.  Otherwise, you have to draw a few different stars (specifically, “the greatest common divisor of k and N” different stars).  For example, if N=8 and k=2, then you need two stars: {1→3→5→7→1} and {2→4→6→8→2}.

Given N points, you can create a star by counting k points and drawing a connection.  This works great if N is a prime (bottom), since you’ll always hit every point, but when N isn’t prime you’ll often need to create several stars (top).

That’s why drawing is a good way of doing this math: it’s easy to see when your star is weird-shaped (your k changed halfway through) and really easy to see when you’ve missed some of the dots.
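
If you’d rather let a computer do the drawing, here’s the same bookkeeping as a short Python sketch (the helper name is made up; this isn’t from the original post):

```python
from math import gcd

def gift_stars(n: int, k: int) -> list:
    """Split people 1..n into gcd(n, k) giving-cycles by stepping k dots at a time."""
    stars = []
    for start in range(1, gcd(n, k) + 1):  # each star contains exactly one of 1..gcd(n, k)
        star, person = [start], (start - 1 + k) % n + 1
        while person != start:
            star.append(person)
            person = (person - 1 + k) % n + 1
        stars.append(star)
    return stars

print(gift_stars(8, 3))  # [[1, 4, 7, 2, 5, 8, 3, 6]]: one star hits everyone
print(gift_stars(8, 2))  # [[1, 3, 5, 7], [2, 4, 6, 8]]: two stars needed
```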

The “star method” gives you N-1 years of “cyclic permutations” (“cyclic” because when you get to the end you just go back to the beginning).  However, for large values of N that’s only a drop in the permutation sea.  Were you so determined, you could cyclically permute N things in (N-1)! ways.

However!  With a family of 6 you’d need 5! = 1x2x3x4x5 = 120 years to get through every permutation.  More than time enough for any family to gain or lose members, or for some helpful soul to start questioning the point.  Moreover!  Each of those permutations are similar enough to others that you’ll start to feel as though you’re doing a lot of repeating.  For example: {1→2→3→4→5→6→1}, {1→2→3→4→6→5→1}, {1→2→3→5→6→4→1}, …

For those obsessive, immortal gift-givers who want to hit every permutation with the least year-to-year change, just to fight off boredom for another thousand years, there’s Heap’s algorithm.  For the rest of us, drawing stars is more than good enough.
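
For completeness, here’s a Python rendition of Heap’s algorithm: every ordering of the list shows up exactly once, and each differs from the last by a single swap (how you map orderings onto giving-cycles is up to you):

```python
def heaps(seq):
    """Generate every permutation of seq, one swap at a time (Heap's algorithm)."""
    a, n = list(seq), len(seq)
    c = [0] * n
    yield tuple(a)
    i = 0
    while i < n:
        if c[i] < i:
            j = 0 if i % 2 == 0 else c[i]  # the swap partner depends on parity
            a[j], a[i] = a[i], a[j]
            yield tuple(a)
            c[i] += 1
            i = 0
        else:
            c[i] = 0
            i += 1

for p in heaps([1, 2, 3]):
    print(p)  # all 3! = 6 orderings, each one swap apart
```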
