*The original question was*: Another one of those "does this make sense" questions. You have first derivatives and second derivatives: f'(x), f''(x), or sometimes dy/dx and d^2y/dx^2. Is there any sensible definition of something like a "half" derivative, or more generally an nth derivative for a non-integer n?

**Physicist**: There is! For readers not already familiar with first year calculus, this post will be a lot of nonsense.

Strictly speaking, the derivative only makes sense in integer increments. But that’s never stopped mathematicians from generalizing. Heck, non-integer exponentiation doesn’t make much sense (I mean, 2^{3.5} is “2 times itself three and a half times”. What is that?), but with a little effort we can move past that.

The derivative of a function is the slope at every point along that function, and it tells you how fast that function is changing. The “2nd derivative” is the derivative of the derivative, and it tells you how fast the slope is changing.

*f(x) is a parabola. f'(x) describes the fact that as you move to the right the parabola's slope increases. Notice that a negative slope means "down hill". f''(x) describes the slope of f'(x), which is constant.*
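If you'd like to poke at this numerically, here's a quick sketch (my own toy example, using f(x) = x^2; the finite-difference helper is not from the post): the slope of a parabola grows as you move right, and the slope of that slope is constant.

```python
def derivative(f, x, h=1e-5):
    """Central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x**2  # a parabola

for x in [-2.0, 0.0, 3.0]:
    slope = derivative(f, x)                               # approximately 2x
    curvature = derivative(lambda t: derivative(f, t), x)  # approximately 2
    print(x, round(slope, 3), round(curvature, 3))
```

Negative slope on the left ("down hill"), positive on the right, and the second derivative sits at 2 no matter where you look.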

When you want to generalize something like this, you basically need to "connect the dots" between those cases where the math actually makes sense. For something like exponentiation by non-integers there's a "correct" answer. For non-integer derivatives there really isn't. One way is to use Fourier Transforms. Another is to use Laplace Transforms. Neither of these is ideal. Just to be clear: non-integer derivatives are nothing more than a matter of choosing "what works" from a fairly short list of options that aren't terrible.

It turns out (as used in both of those examples) that integrals are a great way of "connecting dots". When you integrate a function, the result is more continuous and more smooth. In order to get something out that's discontinuous at a given point, the function you put in needs to be infinitely nasty at that point (technically, it has to be so nasty it's not even a function). That smoothing is exactly what makes integrals good at connecting dots.

To get the idea, take a look at N!. That excited looking N is "N factorial" and it's defined as N! = N·(N-1)·(N-2)⋯2·1. For example, 4! = 4·3·2·1 = 24. Clearly, it doesn't make a lot of sense to write "3.5!" or, even worse, "π!". And yet there's a cute way to smoothly connect the dots between 3! and 4!.

*Γ(N+1) is a fairly natural way of generalizing N! to non-natural numbers. The dotted lines correspond to 1!=1, 2!=2, and 3!=6.*

The Gamma function, Γ(N), (not to be confused with the gamma factor) is defined as: Γ(N) = ∫_0^∞ t^{N-1} e^{-t} dt. Before you ask, I don't know why Euler decided to use "N+1" instead of "N". Sometimes decent-enough folk have good reasons for doing confusing things. If you do a quick integration by parts, a pattern emerges:

Γ(N+1) = ∫_0^∞ t^N e^{-t} dt = [-t^N e^{-t}]_0^∞ + N ∫_0^∞ t^{N-1} e^{-t} dt = N·Γ(N)

So, Γ(N+1) has the same defining property that N! has: Γ(N+1) = N·Γ(N) and N! = N·(N-1)!. Even better, Γ(1) = ∫_0^∞ e^{-t} dt = 1, which is the other defining property of N!, 0!=1. We now have a bizarre new way of writing N!. For all natural numbers N, N! = Γ(N+1). Unlike N!, which only makes sense for natural numbers, Γ(N+1) works for any positive real number since you can plug whatever positive N you like into ∫_0^∞ t^N e^{-t} dt.
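Conveniently, Python's standard library ships this exact function as `math.gamma`, so you can check the dot-connecting yourself (the particular sample values below are just my choices):

```python
import math

# math.gamma is the Γ above, so Γ(N+1) should reproduce N! at the integers
for n in range(6):
    print(n, math.factorial(n), math.gamma(n + 1))

# ...and it quietly connects the dots in between:
print(math.gamma(3.5 + 1))      # a perfectly sensible value for "3.5!"
print(math.gamma(math.pi + 1))  # and even one for "π!"
```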

Even better, this formulation is “analytic” which means it not only works for any positive real number, but (using analytic continuation) works for any complex number as well (with the exception of those poles at each negative integer where it jumps to infinity).

*|Γ(N)|, where N can now take values in the complex plane.*

Long story short, with that integral formulation you can connect the dots between the integer values of N (where N! makes sense) to figure out the values between (where N! doesn’t make sense).

So, here comes a pretty decent way to talk about fractional derivatives: fractional integrals.

If "f'(x) = f^{(1)}(x)" is the derivative of f, "f^{(N)}(x)" is the Nth derivative of f, and "f^{(-1)}(x)" is the anti-derivative, then by the fundamental theorem of calculus f^{(-1)}(x) = ∫_0^x f(t) dt. It turns out that f^{(-N)}(x) = 1/(N-1)! ∫_0^x (x-t)^{N-1} f(t) dt. x-t runs over strictly positive values, so there's no issue with non-integer powers, and it just so happens that we already have a cute way of dealing with non-integer factorials, so we may as well deal with that factorial cutely: f^{(-N)}(x) = 1/Γ(N) ∫_0^x (x-t)^{N-1} f(t) dt.
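You can sanity-check that formula numerically. In this sketch (my own; the test function cos and the crude trapezoid integrator are just convenient choices), integrating twice the slow way should match the single weighted integral with N = 2:

```python
import math

def integrate(g, a, b, n=500):
    """Crude trapezoid rule -- plenty for a sanity check."""
    if b <= a:
        return 0.0
    h = (b - a) / n
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n)))

f = math.cos   # an arbitrary smooth test function
x = 1.0

# The slow way: f^(-2)(x) means integrating f twice
f_minus_1 = lambda t: integrate(f, 0.0, t)
twice = integrate(f_minus_1, 0.0, x)

# The formula with N = 2: 1/(2-1)! times the integral of (x-t)^(2-1) f(t)
once = (1.0 / math.factorial(1)) * integrate(lambda t: (x - t) * f(t), 0.0, x)

print(twice, once)  # both approximate 1 - cos(1)
```

One weighted pass through f replaces the whole nested integration, which is what lets N stop being an integer.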

Holy crap! We now have a way to describe fractional integrals that works pretty generally. Finally, and this is very round-about, but it turns out that a really good way to do half a derivative is to do half an integral and *then* do a full derivative of the result:

f^{(1/2)}(x) = d/dx f^{(-1/2)}(x) = d/dx [1/Γ(1/2) ∫_0^x (x-t)^{-1/2} f(t) dt] = d/dx [1/√π ∫_0^x f(t)/√(x-t) dt]

That "root pi" is just another math thing: Γ(1/2) = √π. If you want to do, say, a third of a derivative, then you can first find f^{(-2/3)}(x) and then differentiate that. This isn't the "correct" way to do fractional derivatives, just something that works while satisfying a short wishlist of properties and re-creating regular derivatives without making a big deal about it.
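Here's a numeric sketch of the recipe (my own; the substitution t = x - s^2 is just a trick to tame the 1/√(x-t) singularity so a plain trapezoid rule works). For f(x) = x, the half derivative has the textbook closed form 2√(x/π), which the half-integral-then-full-derivative approach should reproduce:

```python
import math

def semi_integral(f, x, n=4000):
    """f^(-1/2)(x) = (1/sqrt(pi)) * integral of f(t)/sqrt(x-t), t from 0 to x.
    Substituting t = x - s^2 turns it into (2/sqrt(pi)) * integral of
    f(x - s^2), s from 0 to sqrt(x), which has no singularity."""
    r = math.sqrt(x)
    h = r / n
    g = lambda s: f(x - s * s)
    total = 0.5 * (g(0.0) + g(r)) + sum(g(i * h) for i in range(1, n))
    return 2.0 * h * total / math.sqrt(math.pi)

def half_derivative(f, x, dx=1e-6):
    """Half an integral, then one full (numerical) derivative."""
    return (semi_integral(f, x + dx) - semi_integral(f, x - dx)) / (2 * dx)

f = lambda t: t  # try it on f(x) = x
x = 2.0
print(half_derivative(f, x), 2 * math.sqrt(x / math.pi))  # should agree
```

Notice the half derivative of a straight line isn't a constant; fractional derivatives are strange beasts.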

**Answer Gravy**: You can show that f^{(-N)}(x) = 1/(N-1)! ∫_0^x (x-t)^{N-1} f(t) dt (or even better, f^{(-N)}(x) = 1/Γ(N) ∫_0^x (x-t)^{N-1} f(t) dt) through induction. The base case is f^{(-1)}(x) = ∫_0^x f(t) dt. This is true by the fundamental theorem of calculus, which says that the anti-derivative (the "-1" derivative) is just the integral. So… check.

To show the equation in general, you demonstrate the (N+1)th case using the Nth case:

f^{(-N-1)}(x) = ∫_0^x f^{(-N)}(t) dt
= ∫_0^x [1/(N-1)! ∫_0^t (t-u)^{N-1} f(u) du] dt
= 1/(N-1)! ∫_0^x ∫_0^t (t-u)^{N-1} f(u) du dt
= 1/(N-1)! ∫_0^x ∫_u^x (t-u)^{N-1} f(u) dt du
= 1/(N-1)! ∫_0^x f(u) [(t-u)^N / N]_{t=u}^{t=x} du
= 1/N! ∫_0^x (x-u)^N f(u) du

Huzzah! Using the formula for f^{(-N)}(x) we get the formula for f^{(-N-1)}(x).

There's a subtlety that goes by really quick in the middle of that derivation. When you switch the order of integration (du dt to dt du) it messes up the limits. Far and away the best way to deal with this is to draw a picture. At first, for a given value of t, we integrate u from zero to t, and then integrate t from zero to x. When switching the order we need to make sure we're looking at the same region. So for a given value of u, we integrate t from u to x, and then integrate u from zero to x.

*Integrating over the same region in two different orders.*

So that’s what happened there.
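If the picture doesn't convince you, arithmetic might. This sketch (my own; the test function exp is arbitrary, and the limits mirror the region described above) evaluates the double integral in both orders:

```python
import math

def trapezoid(g, a, b, n=400):
    """Crude trapezoid rule; returns 0 on an empty interval."""
    if b <= a:
        return 0.0
    h = (b - a) / n
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n)))

x, N = 1.0, 3
f = math.exp                                    # arbitrary test function
inner = lambda t, u: (t - u) ** (N - 1) * f(u)  # the integrand from the proof

# du dt: for a given t, u runs 0 -> t; then t runs 0 -> x
du_dt = trapezoid(lambda t: trapezoid(lambda u: inner(t, u), 0.0, t), 0.0, x)

# dt du: for a given u, t runs u -> x; then u runs 0 -> x
dt_du = trapezoid(lambda u: trapezoid(lambda t: inner(t, u), u, x), 0.0, x)

print(du_dt, dt_du)  # same region, same answer
```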