## Q: When something falls on your foot, how much force is involved?

Physicist: There’s a cute trick you can use here.  If a falling object starts at rest and ends at rest, then it gains all of its energy from gravity, and all of that energy is deposited in your unfortunate foot.

Kinetic energy is (average) force times distance, whether you’re winding a spring, starting a fire (with friction), firing projectiles, or crushing your foot.  The energy the object gains when falling is equal to its weight (the force of gravity) times the distance it falls.  The energy the object uses to bust metatarsals is equal to the distance it takes for it to come to a stop times the force that does that stopping.  So, $D_{fall}F_{fall} = E = D_{stop}F_{stop}$.

The distance times the force that gets an object moving is equal to the distance times the force that brings that object to a halt.

Of course, the distance over which the object slows down is much smaller than the distance over which it sped up.  As a result, the stopping force required is proportionately larger.  This is one of the reasons why physicists flinch so much during unrealistic action movies (that, and loud noises make us skittish).  Something falling on your foot stops in about half a cm or a quarter inch, what with skin and bones that flex a little.  Give or take.

Bowling balls: keeping podiatrists gainfully employed for 700 years.

So, if you drop a 10 pound ball 4 feet (48 inches), and it stops in a quarter inch, then the force at the bottom of the fall is $F = \frac{48}{0.25}10lbs \approx 2,000lbs$.  This is why padding is so important; if that distance was only an eighth of an inch (seems reasonable) then the force jumps to 4,000lbs, and if that distance is increased to half an inch then the force drops to 1,000 lbs.
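The arithmetic above is easy to play with.  A quick sketch (using the same mixed units as the example, since the units of distance cancel in the ratio):

```python
def stopping_force(weight, fall_distance, stop_distance):
    """Average stopping force from energy conservation:
    D_fall * F_fall = D_stop * F_stop, so F_stop = (D_fall / D_stop) * weight.
    Any units work, as long as the two distances use the same ones."""
    return weight * fall_distance / stop_distance

# A 10 lb ball dropped 48 inches, stopping in a quarter inch:
print(stopping_force(10, 48, 0.25))   # 1920.0 lbs -- roughly 2,000 lbs
print(stopping_force(10, 48, 0.125))  # halve the padding, double the force
print(stopping_force(10, 48, 0.5))    # double the padding, halve the force
```

The padding lesson falls right out of the formula: the stopping distance sits in the denominator.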


Posted in -- By the Physicist, Experiments, Physics | 3 Comments

## Q: If nothing can escape a black hole’s gravity, then how does the gravity itself escape?

Physicist: A black hole is usually described as a singularity, where all the mass is (or isn’t?), which is surrounded by an “event horizon”.  The event horizon is the “altitude” at which the escape velocity is the speed of light, so nothing can escape.  But if gravity is “emitted” by black holes, then how does that “gravity signal” get out?  The short answer is that gravity isn’t “emitted” by matter.  Instead, it’s a property of the spacetime near matter and energy.

It’s worth stepping back and considering where our understanding of black holes, and all of our predictions about their bizarre behavior, come from.  Ultimately, both stem from the math we use to describe them.  The extremely short answer to this question is: the math says nothing can escape, and that the gravity doesn’t “escape” so much as it “persists”.  No problem.

Einstein’s whole thing was considering the results of experiments at face value.  When test after test always showed the speed of light was exactly the same, regardless of how the experiment was moving, Einstein said “hey, what if the speed of light is always the same regardless of how you’re moving?”.  Genius.  There’s special relativity.

It also turns out that no experiment can tell the difference between floating motionless in deep space and falling freely under the pull of gravity (when you fall you’re weightless).  Einstein’s stunning insight (paraphrased) was “dudes!  What if there’s no difference between falling and floating?”.  Amazing stuff.

Sarcasm aside, what was genuinely impressive was the effort it took to turn those singsong statements into useful math.  After a decade of work, and buckets of differential geometry (needed to deal with messed up coordinate systems like the surface of Earth, or worse, curved spacetime) the “Einstein Field Equations” were eventually derived, and presumably named after Einstein’s inspiration: the infamous Professor Field.

This is technically 16 equations (μ and ν are indices that take on 4 values each), however there are tricks to get that down to a more sedate 6 equations.

The left side of this horrible mess describes the shape of spacetime and relates it to the right side, which describes the amount of matter and energy (doesn’t particularly matter which) present.  This equation is based on two principles: “matter and energy make gravity… somehow” and “when you don’t feel a push or pull in any direction, then you’re moving in a straight line”.  That push or pull is defined as what an accelerometer would measure.  So satellites are not accelerating because they’re always in free-fall, whereas you are accelerating right now because if you hold an accelerometer it will read 9.8 m/s² (1 standard Earth gravity).  Isn’t that weird?  The path of a freely falling object (even an orbiting object) is a straight line through a non-flat spacetime.

Moving past the mind-bending weirdness; this equation, and all of the mathematical mechanisms of relativity, work perfectly for every prediction that we’ve been able to test.  So experimental investigation has given General Relativity a ringing endorsement.  It’s not used/taught/believed merely because it’s pretty, but because it works.

Importantly, the curvature described isn’t merely dependent on the presence of “stuff”, but on the curvature of the spacetime nearby.  Instead of being emitted from some distant source, gravity is a property of the space you inhabit right now, right where you are.  This is the important point that the “bowling ball on a sheet” demonstration is trying to get across.

The Einstein Field Equations describe the stretching of spacetime as being caused both by the presence of matter and also by the curvature of nearby spacetime.  Gravity doesn’t “reach out” from the mass any more than the metal ball at the center of the stretched sheet does.

So here’s the point.  Gravity is just a question of the “shape” of spacetime.  That’s affected by matter and energy, but it’s also affected by the shape of spacetime nearby.  If you’re far away from a star (or anything else really) the gravity you experience doesn’t come directly from that star, but from the patch of space you’re sitting in.  It turns out that if that star gets smaller and keeps the same mass, the shape of the space you’re in stays about the same (as long as you stay the same distance away, the density of an object isn’t relevant to its gravity).  Even if that object collapses into a black hole, the gravity field around it stays about the same; the shape of the spacetime is stable and perfectly happy to stay the way it is, even when the matter that originally gave rise to it is doing goofy stuff like being a black hole.

This stuff is really difficult / nigh impossible to grok directly.  All we’ve really got are the experiments and observations, which led to a couple simple statements, which led to some nasty math, which led to some surprising predictions (including those concerning black holes), which so far have held up to all of the observations of known black holes that we can do (which is difficult because they’re dark, tiny, and the closest is around 8,000 light years away, which is not walking-distance).  That said: the math comes before understanding, and the math doesn’t come easy.

It’s funny because it’s true.

Here’s the bad news.  In physics we’ve got lots of math, which is nice, but no math should really be trusted to predict reality without lots of tests and verification and experiment (ultimately that’s where physics comes from in the first place).  Unfortunately no information ever escapes from beyond the event horizon.  So while we’ve got lots of tests that can check the nature of gravity outside of the horizon (the gravity here on Earth behaves in the same way that gravity well above the horizon behaves), we have no way even in theory to investigate the interior of the event horizon.  The existence of singularities, and what’s going on in those extreme scenarios in general, may be a mystery forever.  Maybe.

This probably doesn’t need to be mentioned, but the comic is from xkcd.

Posted in -- By the Physicist, Astronomy, Physics, Relativity | 27 Comments

## Q: Is there a formula for finding primes? Do primes follow a pattern?

Physicist: Primes are, for many purposes, basically random.  It’s not easy to “find the next prime” or determine if a given number is prime, but there are tricks.  Which trick depends on the size of the number.  Some of the more obvious ones are things like “no even numbers (other than 2)” and “the last digit can’t be 5 (other than 5 itself)”; but those just eliminate possibilities instead of confirming them.  Confirming that a number is prime is a lot more difficult.

Small (~10): The Sieve of Eratosthenes finds primes and also does a decent job demonstrating the “pattern” that they form.

Starting with 2, remove every multiple. The first blank is a new prime. Remove every multiple of that new prime. Repeat forever or until bored.

The integers come in 4 flavors: composites, primes, units (1 and -1), and zero.  2 is the first prime and every multiple of it is composite (because they have 2 as a factor).  If you mark every multiple of 2, you’ll be marking only composite numbers.  The first unmarked number is 3 (another prime), and every multiple of 3 is composite.  Continue forever.  This makes a “map” of all of the primes up to a given number (in the picture above it’s 120).  Every composite number has at least one factor less than or equal to its square root, so if the largest number on your map is N, then you only need to check up to √N.  After that, all of the remaining blanks are primes.

This algorithm is great for people (human beings as opposed to computers) because it rapidly finds lots of primes.  However, like most by-hand algorithms it’s slow (by computer standards).  You wouldn’t want to use it to check all the numbers up to, say, 450787.
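As a sketch, the sieve described above translates to a few lines of code.  The inner loop starts marking at p², since every smaller multiple of p has a smaller factor and was already marked — the same √N observation made above.

```python
def sieve(n):
    """Sieve of Eratosthenes: return all primes up to and including n."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False   # 0 and units aren't prime
    p = 2
    while p * p <= n:                   # only need to check up to sqrt(n)
        if is_prime[p]:
            # every multiple of a prime is composite; mark them all
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
        p += 1
    return [k for k, flag in enumerate(is_prime) if flag]

print(sieve(120))  # the same "map" of primes up to 120 described above
```

Fast for a computer at this scale, but the memory and time both grow with N itself (not log N), which is why it stops being useful for big numbers.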

Eratosthenes, in a completely unrelated project, accurately calculated the circumference of the Earth around 2200 years ago using nothing more than the Sun, a little trigonometry, and some dude willing to walk the ~900km between Alexandria and Syene.  This marks one of the earliest recorded instances of grad student abuse.

Medium (~10^10): Fermat’s Little Theorem or AKS.

Fermat’s little theorem (as opposed to Fermat’s last theorem) works like this: if N is prime and A is any number such that 1<A<N, then if $A^{N-1} \, mod \, N\ne1$, then N is definitely composite, and if $A^{N-1} \, mod \, N=1$ then N is very likely to be prime.  “Mod N” means every time you have a value bigger than N, you subtract multiples of N until your number is less than N.  Equivalently, it’s the remainder after division by N.  This test has no false negatives, but it does sometimes have false positives.  The composites that pass the test for every valid A are the “Carmichael numbers”, and they’re more and more rare the larger the numbers being considered.  However, because of their existence we can’t use FLT with impunity.  For most purposes (such as generating encryption keys) FLT is more than good enough.
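In code the test is one line, because Python’s built-in three-argument `pow(a, e, n)` computes $a^e \, mod \, n$ efficiently:

```python
def fermat_test(n, a=2):
    """Fermat's little theorem check: if a^(n-1) mod n != 1, n is definitely
    composite; if it equals 1, n is very likely (but not certainly) prime."""
    return pow(a, n - 1, n) == 1

print(fermat_test(7))      # True: 7 is very likely prime (it is)
print(fermat_test(9, 5))   # False: 9 is definitely composite
print(fermat_test(561))    # True, but 561 = 3*11*17 -- a Carmichael number
```

The last line is the cautionary tale: 561 passes the test for every valid A despite being composite, which is exactly why FLT can’t be used with impunity.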

For a very long time (millennia) there was no efficient way to verify with certainty that a number is prime.  But in 2002 “PRIMES is in P” was published, which introduced AKS (the Agrawal–Kayal–Saxena primality test), which can determine whether or not a number is prime with complete certainty.  The time it takes for both FLT and AKS to work is determined by the log of N (which means they’re fast enough to be useful).

Stupid Big (~10^10^10): Even if you have a fantastically fast technique for determining primality, you can render it useless by giving it a large enough number.  The largest prime found to date (May 2014) is N = 2^57,885,161 − 1.  At 17.4 million digits, this number is around ten times longer than the Lord of the Rings, and about twice as interesting as the Silmarillion.

Number of digits in the largest known prime vs. the year it was verified.

To check that a number this big is prime you need to pick the number carefully.  The reason that 2^57,885,161 − 1 can be written so succinctly (just a power of two minus one) is that it’s one of the Mersenne primes, which have a couple nice properties that make them easy to check.

A Mersenne number is of the form $M_n = 2^n - 1$.  Turns out that if n isn’t prime, then neither is $M_n$.  Just like FLT there are false positives; for example $M_{11} = 2^{11} - 1 = 2047 = 23\times89$, which is clearly composite even though 11 is prime.  Fortunately, there’s yet another cute trick (the Lucas–Lehmer test).  Create the sequence of numbers, $S_k$, defined recursively as $S_k = \left(S_{k-1}\right)^2 - 2$ with $S_0 = 4$.  If $S_{p-2} = 0\,mod\,M_p$, then $M_p$ is prime.  This is really, really not obvious, so be cool.

With enough computer power this is a thing that can be done, but it typically requires more computing power than can reasonably be found in one place.
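As a sketch, the recipe above (known as the Lucas–Lehmer test) is only a few lines; all the computer power goes into the multiplications once the numbers get millions of digits long:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: for an odd prime p, M_p = 2^p - 1 is prime
    exactly when s_(p-2) == 0 mod M_p, where s_0 = 4, s_k = s_(k-1)^2 - 2."""
    m = (1 << p) - 1          # the Mersenne number 2^p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m   # reduce mod M_p at every step to keep s small
    return s == 0

# Prime exponents up to 19; note that p = 11 drops out (M_11 = 23 * 89):
print([p for p in (3, 5, 7, 11, 13, 17, 19) if lucas_lehmer(p)])  # [3, 5, 7, 13, 17, 19]
```

Reducing mod $M_p$ at every step is what keeps this feasible: without it, $S_k$ roughly doubles in length with each squaring.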

Answer Gravy: Fermat’s little theorem is pretty easy to use, but it helps to see an example.  There’s a lot more of this sort of thing (including a derivation) over here.

Example: N=7 and A=2.

$\left[2^{7-1}\right]_7 = \left[2^6\right]_7 = \left[64\right]_7 = \left[64-9\times7\right]_7 = \left[64-63\right]_7 = 1$

So, 7 is most likely prime.

Example: N=9 and A=5.

$\left[5^{9-1}\right]_9 = \left[5^8\right]_9 = \left[25\times25\times25\times25\right]_9 = \left[7\times7\times7\times7\right]_9 = \left[49\times49\right]_9 = \left[4\times4\right]_9 = \left[16\right]_9 = 7$

Astute readers will note that 7 is different from 1, so 9 is definitely not prime.

For bigger numbers a wise nerd will typically exponentiate by squaring.

Example: N=457 and A=2.  First, a bunch of squares:

$\begin{array}{ll}\left[2^2\right]_{457}=\left[4\right]_{457}\\\left[2^4\right]_{457}=\left[4^2\right]_{457}=\left[16\right]_{457}\\\left[2^8\right]_{457} =\left[16^2\right]_{457}=\left[256\right]_{457}\\\left[2^{16}\right]_{457}=\left[256^2\right]_{457}=\left[185\right]_{457}\\\left[2^{32}\right]_{457}=\left[185^2\right]_{457}=\left[407\right]_{457}\\\left[2^{64}\right]_{457}=\left[407^2\right]_{457}=\left[215\right]_{457}\\\left[2^{128}\right]_{457}=\left[215^2\right]_{457}=\left[68\right]_{457}\\\left[2^{256}\right]_{457}=\left[68^2\right]_{457}=\left[54\right]_{457}\end{array}$

As it happens, 457 − 1 = 456 = 256 + 128 + 64 + 8.

$\begin{array}{ll}\left[2^{456}\right]_{457}\\[2mm] =\left[2^{256}\cdot2^{128}\cdot2^{64}\cdot2^{8}\right]_{457}\\[2mm] =\left[54\cdot68\cdot215\cdot256\right]_{457}\\[2mm] =\left[202106880\right]_{457}\\[2mm] =1\end{array}$

So 457 is very likely to be prime (it is).  This can be verified with either some fancy algorithm or (more reasonably) by checking that it’s not divisible by any number up to √457.
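Exponentiation by squaring is exactly what the table above does by hand: square repeatedly, and multiply in the squares that correspond to the binary digits of the exponent.  A sketch:

```python
def mod_exp(a, e, n):
    """Exponentiation by squaring, mod n (what Python's pow(a, e, n) does).
    Walks the bits of e from least to most significant, squaring as it goes."""
    result = 1
    square = a % n
    while e:
        if e & 1:                            # this bit of the exponent is set...
            result = (result * square) % n   # ...so multiply in the current square
        square = (square * square) % n       # square for the next bit
        e >>= 1
    return result

print(mod_exp(2, 456, 457))  # 1, matching the hand computation: 457 is very likely prime
```

Since 456 = 256 + 128 + 64 + 8 in binary, the loop multiplies together exactly the four squares used in the worked example.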

Posted in -- By the Physicist, Math, Number Theory | 10 Comments

## Q: If the number of ancestors you have doubles with each generation going back, you quickly get to a number bigger than the population of Earth. Does that mean we’re all a little inbred?

Physicist: In a word: yes.  But it’s not a problem in large populations.

The original questioner pointed out that in the age of Charlemagne (more or less when everybody’s 40-greats grandfolk were living) the world population was between 200 and 300 million, and yet 2^40 (the number of ancestors you would have with no overlap) is 1,099,511,627,776.  As it happens, 1.1 trillion is bigger than 300 million (math!).  That means that your average ancestor alive 1200 years ago shows up in your 40-generation-tall family tree at least around 4,000 times.  That redundancy is likely to be much higher.  Many of the people alive during the reign of Chuck the Great left no descendants, and while your family tree is probably wider than you might suspect, most of your ancestors probably came from only a few regions of the world.  Most people will start seeing redundancy in their family tree within a dozen generations (small towns and all that).  Fortunately, “redundancy” isn’t an issue as long as the genetic pool is large enough.
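The arithmetic is quick to check (the 300 million here is the high-end population estimate from the question):

```python
generations = 40
slots = 2 ** generations        # ancestor "slots" in the tree, assuming no overlap
population = 300_000_000        # high-end world population circa Charlemagne

print(f"{slots:,}")             # 1,099,511,627,776
print(slots // population)      # 3,665: minimum average appearances per ancestor
```

And that 3,665 is a floor: since many of those 300 million left no descendants at all, the real repetition is spread over far fewer people.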

The biology of living things assumes that things will break and/or mess up frequently.  One of the stop-gaps to keep mistakes in the genetic code from being serious is to keep two different copies around.  This squares the chance of error (which is good).  If one strand of DNA gets things right 90% of the time, then if you have access to two strands that gets bumped up to 99% (of the 10% the first missed, the second picks up 90%).  However, if you have two identical copies, then this advantage goes away because both copies of the DNA will contain the same mistakes.  That’s why (for example) red/green colorblindness is far more common in dudes (who have 1 X chromosome) than in ladies (who have two).  Don’t get too excited ladies; gentlemen still have two copies of all of the other chromosomes.  Also, that 90% thing is just for ease of math; if 1 in 10 genes were errors, then life wouldn’t work.
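The “squares the chance of error” claim is just independence of failures, using the same toy 90% number as above:

```python
p_broken = 0.10                     # toy odds that one copy of a gene has an error

two_different = 1 - p_broken ** 2   # independent copies: BOTH must fail
two_identical = 1 - p_broken        # identical copies share the same mistakes

print(two_different)   # ~0.99: the second copy catches 90% of the first's misses
print(two_identical)   # 0.9: a duplicate of the same errors buys you nothing
```

That gap between 99% and 90% is the whole advantage of carrying two genuinely different copies.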

The two copies that each of us carry around are only combined together in the germline (found in our junk), and that combination is what’s passed on.  What makes the cut into the next generation is pretty random, which helps ensure genetic diversity (and is why siblings look similar, while identical twins look the same).

As long as genes have a chance to mix around, the chance of an error showing up in the same person twice is pretty low.  That said, there are a lot of things that can go “wrong” so, statistically speaking, everybody‘s got at least a few switches flipped backwards.  It happens.  If it weren’t for mistakes, biology would be pretty boring.

An impressive, but somewhat speculative computer model, says it’s likely that we all have a common ancestor (a long-dead someone who is directly related to everyone presently living) a mere few thousand years ago.  That person is very unlikely to be unique, and their genes are so watered down by now that it barely matters who/where they were.  What the computer model is saying is that, given what we know about human migration and travel, a “single drop in the human genetic pool” only takes a few thousand years to diffuse to the farthest corners of the world.

So we all have some repeated ancestry, but it’s no big deal.  You still have lots of ancestors with lots of genetic diversity.

Posted in -- By the Physicist, Biology, Evolution, Probability | 8 Comments

## Q: Why are many galaxies, our solar system, and Saturn’s rings all flat?

Physicist: This may be the shortest answer yet: “accretion”.

Accretion: making stuff flat for billions of years.

Accretion is the process of matter gravitationally collapsing from a cloud of dust or gas or (usually) both.  Before thinking about what a big cloud of gas does when it runs into itself, it’s worth thinking about what happens to just two clumps of dust when they run into each other.

Most collisions are inelastic, which means they lose energy and that the particles’ trajectories are “averaged” a little.  In the most extreme case things will stick together.

In a perfectly elastic collision objects will bounce out at about the same angle that they came in.  Most collisions are inelastic, which means they lose energy and the angle between the objects’ trajectories decreases after a collision.  In the most extreme inelastic case the particles will stick together.  For tiny particles this is more common than you might think.

Table salt, in zero gravity, spontaneously clumping due to electrostatic forces (click image for movie).

Over time collisions release energy (as heat and light).  This loss of energy causes the cloud to physically contract, since losing energy means the bits of dust and gas are moving slower (and that means falling into lower and lower orbits).  But collisions also make things “average” their trajectories.  So while a big puffy cloud may have bits of dust and gas traveling every-which-way, during accretion they eventually settle into the same, average, rotational plane.

Each atom of gas and mote of dust moves along its own orbital loop, pulled by the collaborative gravitational influence of every other atom and mote (there’s no one point where the gravity originates).  While the path of each of these is pretty random, there’s always net rotation in some direction.  The idea is that any cloud in space starts out with at least a little bit of spin.  This isn’t a big claim; pour coffee into a cup, and at least some little bits will be turning.  That same turbulence shows up naturally at all larger-than-coffee-cup scales in the universe (although typically not much smaller).  So, on average, any cloud will be turning in some direction.

Things in the cloud will continue to run into each other until every part of it has done one of three things: 1) escaped, 2) fallen into the center, or 3) settled into the flow.  Most of the cloud ends up in the center.  For example, our Sun makes up 99.86% of the matter in the solar system.  The stuff that stops colliding and goes with the flow forms the ring.  Anything not in the plane of the ring must be on an orbit that passes through it, which means that it will continue hitting things and losing energy.  Eventually, the “incorrectly orbiting” object will either find itself co-orbiting with everything else in the ring, or will lose enough kinetic energy to fall into the planet or star below.  By the way, there’s still a lot of “unaffiliated” junk in our solar system that’s still waiting to “join” a planet.
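The “averaging” effect of inelastic collisions is easy to caricature.  This is a toy sketch, not real gas dynamics: equal-mass particles with random 3D velocities collide in sticky pairs (each pair’s velocity becomes their average).  Total momentum is conserved, but the every-which-way spread of velocities collapses, leaving everything moving together:

```python
import random

random.seed(1)

# 200 particles with random velocities, every-which-way:
velocities = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(200)]

def momentum(vs):
    """Total momentum (equal masses, so just the vector sum of velocities)."""
    return [sum(v[i] for v in vs) for i in range(3)]

def spread(vs):
    """Mean squared deviation of the velocities from their average."""
    n = len(vs)
    avg = [c / n for c in momentum(vs)]
    return sum(sum((v[i] - avg[i]) ** 2 for i in range(3)) for v in vs) / n

momentum_before = momentum(velocities)
spread_before = spread(velocities)

for _ in range(5000):   # many random perfectly inelastic ("sticky") collisions
    a, b = random.sample(range(len(velocities)), 2)
    merged = [(velocities[a][i] + velocities[b][i]) / 2 for i in range(3)]
    velocities[a] = merged
    velocities[b] = merged

print(spread_before, spread(velocities))  # the spread collapses; momentum doesn't
```

Each collision throws away relative-motion energy but keeps momentum, so the cloud is driven toward its one shared average motion — which, with gravity and some initial spin added, is the flat rotating disk.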

Those rings are pretty exciting places themselves.  Inside of them there are bound to be “lumps” of higher density that draw in surrounding material.  Eventually this turns into smaller accretion disks within the larger disk.  Our solar system formed as a disk with all of the planets forming within that disk in the “plane of the ecliptic”.  One of those lumps became Jupiter, which has its own set of moons that also formed in an accretion disk around Jupiter.  In fact, Jupiter’s moons are laid out so much like the rest of the solar system (all orbiting in the same plane) that they helped early astronomers to first understand the entire solar system.  It’s hard to see how the planets are moving from a vantage on the surface of one of those moving planets (Earth), so it’s nice to have a simple boxed example like Jupiter.

The planets always lie within the same plane, “the ecliptic”.  Since the Earth is also in this plane, the ecliptic appears as a band in the sky where the Sun and all of the planets can be found.  Similarly, Jupiter’s moons also lie in a plane.

That all said, those lumps add an element of chaos to the story.  Planets and moons don’t simply orbit the Sun, they also interact with each other.  Sometimes this leads to awesome stuff like planets impacting each other and big explosions.  One of the leading theories behind the formation of our Moon is one such impact.  But these interactions can sometimes slingshot smaller objects into weird, off-plane orbits.  Knowing that planets tend to be found in the same plane makes astronomers’ jobs that much easier.  From Earth, the ecliptic appears as a thin band that none of the other planets stray from.  Pluto was the second dwarf planet found (after Ceres) in part because it orbits close to the plane of all the other planets, inside this band.  The dwarf planet Xena and its moon Gabrielle orbit way off of the ecliptic, which is a big part of why they weren’t found until 2005 (the sky is a big place after all).  Xena and Gabrielle’s official names are “Eris” and “Dysnomia” respectively, but I support the original discoverer’s labels, because they’re amazing.  So things can have wonky orbits, but they need to do it way the crap out there where they don’t inevitably run into something else.  Xena is usually about twice as far out as Pluto, which itself is definitively way the crap out there.

Not all matter forms accretion disks.  In order for a disk to form the matter involved has to interact.  Gas and dust do a great job of that.  But once they’ve formed, stars barely interact at all.  For example, when (not if!) the Andromeda and Milky Way galaxies hit each other, it’s really unlikely that any stars will smack into each other (they’re just too small and far apart).  However, the giant gas clouds in each should slam into each other and spark a flurry of new star formation.  In four billion years the sky will be especially pretty.

Posted in -- By the Physicist, Astronomy, Physics | 8 Comments

## Q: How do you define the derivatives of the Heaviside, Sign, Absolute Value, and Delta functions? How do they relate to one another?

Physicist: These are four standard reference functions.  In the same way that there are named mathematical constants, like π or e, there are named mathematical functions.  These are among the more famous (after the spotlight hogging trig functions).

The Sign, Delta, Absolute Value, and Heaviside functions.  The graphs on top are the slope of the graphs on the bottom (and slope=derivative).

The absolute value function flips the sign of negative numbers, and leaves positive numbers alone.  The sign function is 1 for positive numbers and -1 for negative numbers.  The Heaviside function is very similar; 1 for positive numbers and 0 for negative numbers.  By the way, the Heaviside function, rather than being named after its shape, is named after Oliver Heaviside, who was awesome.

The delta function is a whole other thing.  The delta function is zero everywhere other than at x=0 and at x=0 it’s infinite but there’s “one unit of area” under that spike.  Technically the delta function isn’t a function because it can’t be defined at zero.  The “Dirac delta function” is used a lot in physics (Dirac was a physicist) to do things like describe the location of the charge of single particles.  An electron has one unit of charge, but it’s smaller than basically anything, so describing it as a unit of charge located in exactly one point usually works fine (and if it doesn’t, don’t use a delta function).  This turns out to be a lot easier than modeling a particle as… just about anything else.

The derivative of a function is the slope of that function.  So, the derivative of |x| is 1 for positive numbers (45° up), and -1 for negative numbers (45° down).  But that’s the sign function!  Notice that at x=0, |x| has a kink and the slope can’t be defined (hence the open circles in the graph of sgn(x)).

The derivative of the Heaviside function is clearly zero for x≠0 (it’s completely level), but weird stuff happens at x=0.  There, if you were to insist that somehow the slope exists, you would find that no finite number does the job (vertical lines are “infinitely steep”).  But that sounds a bit like the delta function; zero everywhere, except for an infinite spike at x=0.

It is possible (even useful!) to define the delta function as δ(x) = H’(x).  Using that, you find that sgn’(x) = 2δ(x), simply because the jump is twice the size.  However, how you define derivatives for discontinuous functions is a whole thing, so that’ll be left in the answer gravy.
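These derivative relationships can be checked symbolically.  A sketch using SymPy, which happens to know the distributional derivatives of these reference functions (the last line uses the identity sgn(x) = 2H(x) − 1, which holds everywhere except x = 0):

```python
import sympy as sp

x = sp.symbols('x', real=True)

# The slope of |x| is the sign function:
print(sp.diff(sp.Abs(x), x))                 # sign(x)

# The slope of the Heaviside step is the delta function:
print(sp.diff(sp.Heaviside(x), x))           # DiracDelta(x)

# sgn(x) = 2*H(x) - 1 away from zero, so sgn'(x) = 2*delta(x):
print(sp.diff(2 * sp.Heaviside(x) - 1, x))   # 2*DiracDelta(x)
```

The factor of 2 in the last line is exactly the “jump is twice the size” statement above.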

Answer Gravy: The Dirac delta function really got under the skin of a lot of mathematicians.  Many of them flatly refuse to even call it a “function” (since technically it doesn’t meet all the requirements).  Math folk are a skittish bunch, and when a bunch of handsome/beautiful pioneers (physicists) are using a function that isn’t definable, mathematicians can’t help but be helpful.  When they bother, physicists usually define the delta function as the limit of a series of progressively thinner and taller functions (usually Gaussians).

One of the simplest ways to construct the delta function is the series of functions f_n(x) = n for 0 ≤ x ≤ 1/n, and zero otherwise (the first four of which are shown).  The area under each of these is 1, and most/many of the important properties of delta functions can be derived by looking at the limit as n→∞.
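You can watch this limit do its job numerically.  A sketch: integrate f_n(x)·g(x) by the midpoint rule over the box where f_n is nonzero, and watch the result march toward g(0) as the boxes get thinner and taller:

```python
import math

def f_n(n, x):
    """Box approximation to the delta function: height n on [0, 1/n], else 0."""
    return n if 0 <= x <= 1.0 / n else 0.0

def box_functional(n, g, steps=100_000):
    """Midpoint-rule integral of f_n(x) * g(x); the box [0, 1/n] is the
    only place the integrand is nonzero, so integrate just over it."""
    h = 1.0 / n / steps
    return sum(n * g((k + 0.5) * h) * h for k in range(steps))

for n in (1, 10, 100, 1000):
    print(box_functional(n, math.cos))   # approaches cos(0) = 1 as n grows
```

With g = cos, the exact value of each integral is n·sin(1/n), which tends to 1 as n→∞: the delta function “returns the value at zero”.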

Mathematicians take a different tack.  For those brave cognoscenti, the delta function isn’t a function at all; instead it’s a “distribution”, which is a member of the dual of function space, and it’s used to define a “bounded linear functional”.

So that’s one issue cleared up.

A “functional” takes an entire function as input, and spits out a single number as output.  When you require that the functional is linear (and why not?), you’ll find that the only real option is for the functional to take the form $F(g) = \int f(x)g(x)\,dx$.  This is because of the natural linearity of the integral:

$\begin{array}{ll}F(g + h) \\= \int f(x)\left(g(x)+h(x)\right)\,dx \\= \int f(x)g(x)\,dx + \int f(x)h(x)\,dx \\= F(g) + F(h)\end{array}$

In $F(g) = \int f(x)g(x)\,dx$, F is the functional, f(x) is the distribution corresponding to that functional, and g(x) is the function being acted upon.  The delta function is the distribution corresponding to the functional which simply returns the value at zero.  That is, $\int \delta(x)g(x)\,dx = g(0)$.  So finally, what in the crap does “returning the value at zero” have to do with the derivative of the Heaviside function?  As it happens: buckets!

Assume here that A<0<B,

$\begin{array}{ll} \int_A^B \delta(x)g(x)\,dx \\[2mm] = H(x)g(x)\big|_A^B - \int_A^B H(x)g^\prime(x)\,dx & \textrm{(integration by parts)} \\[2mm] = H(B)g(B) - H(A)g(A) - \int_A^B H(x)g^\prime(x)\,dx \\[2mm] = H(B)g(B) - \int_0^B H(x)g^\prime(x)\,dx & (H(x)=0, x<0) \\[2mm] = g(B) - \int_0^B g^\prime(x)\,dx & (H(x)=1, x>0) \\[2mm] = g(B) - g(x)\big|_0^B & \textrm{(fundamental theorem of calculus)} \\[2mm] = g(B) - \left[g(B) - g(0) \right] \\[2mm] =g(0) \end{array}$

Running through the same process again, you’ll find that this is a halfway decent way of going a step further and defining the derivative of the delta function, δ’(x).

$\begin{array}{ll} \int_A^B \delta^\prime(x)g(x)\,dx \\[2mm] = \delta(x)g(x)\big|_A^B - \int_A^B \delta(x)g^\prime(x)\,dx \\[2mm] = - \int_A^B \delta(x)g^\prime(x)\,dx \\[2mm] = -g^\prime(0) \end{array}$

δ’(x) also isn’t a function, but is instead another profoundly abstract distribution.  And yes: this can be done ad nauseam (or at least ad queasyam) to create distributions that grab higher and higher derivatives of input functions.
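Both functionals can be checked symbolically.  A sketch in SymPy (which implements exactly these distributional integrals), using g(x) = eˣ as the test function since g(0) = g′(0) = 1:

```python
import sympy as sp

x = sp.symbols('x', real=True)
g = sp.exp(x)   # a smooth test function with g(0) = 1 and g'(0) = 1

# The delta distribution plucks out the value at zero: integral = g(0)
print(sp.integrate(sp.DiracDelta(x) * g, (x, -1, 2)))      # 1

# Its derivative plucks out minus the derivative at zero: integral = -g'(0)
print(sp.integrate(sp.DiracDelta(x, 1) * g, (x, -1, 2)))   # -1
```

The limits −1 and 2 play the roles of A and B above; any interval with A < 0 < B gives the same answers, since everything away from zero integrates to nothing.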

Posted in -- By the Physicist, Equations, Math | 3 Comments