## Q: Is fire a plasma? What is plasma?

Physicist: Generally speaking, by the time a gas is hot enough to be seen, it’s a plasma.

The big difference between regular gas and plasma is that in a plasma a fair fraction of the atoms are ionized.  That is, the gas is so hot, and the atoms are slamming around so hard, that some of the electrons are given enough energy to (temporarily) escape their host atoms.  The most important effect of this is that a plasma gains some electrical properties that a non-ionized gas doesn’t have; it becomes conductive and it responds to electric and magnetic fields.  In fact, this is a great test for whether or not something is a plasma.

For example, our Sun (or any star) is a miasma of incandescent plasma.  One way to see this is to notice that the solar flares that leap from its surface are directed along the Sun’s (generally twisted up and spotty) magnetic fields.

A solar flare as seen in the x-ray spectrum.  The material of the flare, being a plasma, is affected and directed by the Sun’s magnetic field.  Normally this brings it back into the surface (which is for the best).

We also see the conductance of plasma in “toys” like a Jacob’s Ladder.  Spark gaps have the weird property that the higher the current, the more ionized the air in the gap, and the lower the resistance (more plasma = more conductive).  There are even scary machines built using this principle.  Basically, in order for a material to be conductive there need to be charges in it that are free to move around.  In metals those charges are shared by atoms; electrons can move from one atom to the next.  But in a plasma the material itself is free charges.  Conductive almost by definition.

A Jacob’s Ladder.  The electricity has an easier time flowing through the long thread of highly-conductive plasma than it does flowing through the tiny gap of poorly-conducting air.

As it happens, fire passes all these tests with flying colors.  Fire is a genuine plasma.  Maybe not the best plasma, or the most ionized plasma, but it does alright.

The free charges inside of the flame are pushed and pulled by the electric field between these plates, and as those charged particles move they drag the rest of the flame with them.

Even small and relatively cool fires, like candle flames, respond strongly to electric fields and are even pretty conductive.  There’s a beautiful video here that demonstrates this a lot better than this post does.

The candle picture is from here, and the Jacob’s ladder picture is from here.

Posted in -- By the Physicist, Physics | 8 Comments

## Q: Why are determinants defined the weird way they are?

Physicist: This is a question that comes up a lot when you’re first studying linear algebra.  The determinant has a lot of tremendously useful properties, but it’s a weird operation.  You start with a matrix, take one number from every column and multiply them together, then do that in every possible combination, and half of the time you subtract, and there doesn’t seem to be any rhyme or reason why.  Fair warning: this particular post will be a little math heavy.

If you have a matrix, ${\bf M} = \left(\begin{array}{cccc}a_{11} & a_{21} & \cdots & a_{n1} \\a_{12} & a_{22} & \cdots & a_{n2} \\\vdots & \vdots & \ddots & \vdots \\a_{1n} & a_{2n} & \cdots & a_{nn}\end{array}\right)$, then the determinant is $det({\bf M}) = \sum_{\vec{p}}\sigma(\vec{p}) a_{1p_1}a_{2p_2}\cdots a_{np_n}$, where $\vec{p} = (p_1, p_2, \cdots, p_n)$ is a rearrangement of the numbers 1 through n, and $\sigma(\vec{p})$ is the “signature” or “parity” of that arrangement.  The signature is $(-1)^k$, where k is the number of times that pairs of numbers in $\vec{p}$ have to be switched to get to $\vec{p} = (1,2,\cdots,n)$.

For example, if ${\bf M} = \left(\begin{array}{ccc}a_{11} & a_{21} & a_{31} \\a_{12} & a_{22} & a_{32} \\a_{13} & a_{23} & a_{33} \\\end{array}\right) = \left(\begin{array}{ccc}4 & 2 & 1 \\2 & 7 & 3 \\5 & 2 & 2 \\\end{array}\right)$, then

$\begin{array}{ll}det({\bf M}) \\= \sum_{\vec{p}}\sigma(\vec{p}) a_{1p_1}a_{2p_2}a_{3p_3} \\=\left\{\begin{array}{ll}\sigma(1,2,3)a_{11}a_{22}a_{33}+\sigma(1,3,2)a_{11}a_{23}a_{32}+\sigma(2,1,3)a_{12}a_{21}a_{33}\\+\sigma(2,3,1)a_{12}a_{23}a_{31}+\sigma(3,1,2)a_{13}a_{21}a_{32}+\sigma(3,2,1)a_{13}a_{22}a_{31}\end{array}\right.\\=a_{11}a_{22}a_{33}-a_{11}a_{23}a_{32}-a_{12}a_{21}a_{33}+a_{12}a_{23}a_{31}+a_{13}a_{21}a_{32}-a_{13}a_{22}a_{31}\\= 4 \cdot 7 \cdot 2 - 4 \cdot 2 \cdot 3 - 2 \cdot 2 \cdot 2 +2 \cdot 2 \cdot 1 + 5 \cdot 2 \cdot 3 - 5 \cdot 7 \cdot 1\\=23\end{array}$
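The permutation-sum definition is mechanical enough to turn directly into code.  Here’s a quick Python sketch (mine, not the post’s) that builds the determinant exactly as defined above: loop over every rearrangement $\vec{p}$, compute its signature by counting swaps, and add up the signed products.

```python
from itertools import permutations
from math import prod

def signature(p):
    """(-1)^k, where k is the number of pairwise swaps needed to sort p."""
    p, sign = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]  # one swap...
            sign = -sign             # ...flips the sign
    return sign

def det(M):
    """Sum over all rearrangements p of signature(p) * a_{1,p_1} * ... * a_{n,p_n}."""
    n = len(M)
    return sum(signature(p) * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

print(det([[4, 2, 1],
           [2, 7, 3],
           [5, 2, 2]]))  # the matrix from the example; prints 23
```

This is $n \cdot n!$ work, so it’s only sane for small matrices; the point is that it matches the definition term for term, not that it’s efficient.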

Turns out (and this is the answer to the question) that the determinant of a matrix can be thought of as the volume of the parallelepiped created by the vectors that are columns of that matrix (a signed volume, really, since it can come out negative).  In the last example, these vectors are $\vec{v}_1 = \left(\begin{array}{c}4\\2\\5\end{array}\right)$, $\vec{v}_2 = \left(\begin{array}{c}2\\7\\2\end{array}\right)$, and $\vec{v}_3 = \left(\begin{array}{c}1\\3\\2\end{array}\right)$.

The parallelepiped created by the vectors a, b, and c.

Say the volume of the parallelepiped created by $\vec{v}_1, \cdots,\vec{v}_n$ is given by $D\left(\vec{v}_1, \cdots, \vec{v}_n\right)$.  Here come some properties:

1) $D\left(\vec{v}_1, \cdots, \vec{v}_n\right)=0$, if any pair of the vectors are the same, because that corresponds to the parallelepiped being flat.

2) $D\left(a\vec{v}_1,\cdots, \vec{v}_n\right)=aD\left(\vec{v}_1,\cdots,\vec{v}_n\right)$, which is just a fancy math way of saying that doubling the length of any of the sides doubles the volume.  Together with the next property, this makes the determinant linear in each column.

3) $D\left(\vec{v}_1+\vec{w},\cdots, \vec{v}_n\right) = D\left(\vec{v}_1,\cdots, \vec{v}_n\right) + D\left(\vec{w},\cdots, \vec{v}_n\right)$, which means “linear”.  This works the same for all of the vectors in $D$.

Check this out!  By using these properties we can see that switching two vectors in the determinant swaps the sign.

$\begin{array}{ll} D\left(\vec{v}_1,\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right)\\ =D\left(\vec{v}_1,\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right)+D\left(\vec{v}_1,\vec{v}_1, \vec{v}_3\cdots, \vec{v}_n\right) & \textrm{Prop. 1}\\ =D\left(\vec{v}_1,\vec{v}_1+\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right) & \textrm{Prop. 3} \\ =D\left(\vec{v}_1,\vec{v}_1+\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right)-D\left(\vec{v}_1+\vec{v}_2,\vec{v}_1+\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right) & \textrm{Prop. 1} \\ =D\left(-\vec{v}_2,\vec{v}_1+\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right) & \textrm{Prop. 3} \\ =-D\left(\vec{v}_2,\vec{v}_1+\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right) & \textrm{Prop. 2} \\ =-D\left(\vec{v}_2,\vec{v}_1, \vec{v}_3\cdots, \vec{v}_n\right)-D\left(\vec{v}_2,\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right) & \textrm{Prop. 3} \\ =-D\left(\vec{v}_2,\vec{v}_1, \vec{v}_3\cdots, \vec{v}_n\right) & \textrm{Prop. 1} \end{array}$

4) $D\left(\vec{v}_1,\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right)=-D\left(\vec{v}_2,\vec{v}_1, \vec{v}_3\cdots, \vec{v}_n\right)$, so switching two of the vectors flips the sign.  This is true for any pair of vectors in D.  Another way to think about this property is to say that when you exchange two directions you turn the parallelepiped inside-out.
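For three vectors, this signed volume D is just the scalar triple product $\vec{v}_1 \cdot (\vec{v}_2 \times \vec{v}_3)$, so properties 1, 2, and 4 can be checked numerically.  A small sketch (mine, not the post’s), using the vectors from the example above:

```python
def det3(u, v, w):
    """Signed volume of the parallelepiped with edge vectors u, v, w:
    the scalar triple product u . (v x w)."""
    cross = (v[1]*w[2] - v[2]*w[1],
             v[2]*w[0] - v[0]*w[2],
             v[0]*w[1] - v[1]*w[0])
    return u[0]*cross[0] + u[1]*cross[1] + u[2]*cross[2]

v1, v2, v3 = (4, 2, 5), (2, 7, 2), (1, 3, 2)

print(det3(v1, v2, v3))                 # 23, the determinant from before
print(det3(v1, v1, v3))                 # 0: a flat box (property 1)
print(det3([3*x for x in v1], v2, v3))  # 69 = 3*23 (property 2)
print(det3(v2, v1, v3))                 # -23: swapping flips the sign (property 4)
```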

Finally, if $\vec{e}_1 = \left(\begin{array}{c}1\\0\\\vdots\\0\end{array}\right)$, $\vec{e}_2 = \left(\begin{array}{c}0\\1\\\vdots\\0\end{array}\right)$, … $\vec{e}_n = \left(\begin{array}{c}0\\0\\\vdots\\1\end{array}\right)$, then

5) $D\left(\vec{e}_1,\vec{e}_2, \vec{e}_3\cdots, \vec{e}_n\right) = 1$, because a 1 by 1 by 1 by … box has a volume of 1.

Also notice that, for example, $\vec{v}_2 = \left(\begin{array}{c}v_{21}\\v_{22}\\\vdots\\v_{2n}\end{array}\right) = \left(\begin{array}{c}v_{21}\\0\\\vdots\\0\end{array}\right)+\left(\begin{array}{c}0\\v_{22}\\\vdots\\0\end{array}\right)+\cdots+\left(\begin{array}{c}0\\0\\\vdots\\v_{2n}\end{array}\right) = v_{21}\vec{e}_1+v_{22}\vec{e}_2+\cdots+v_{2n}\vec{e}_n$

Finally, with all of that math in place,

$\begin{array}{ll} D\left(\vec{v}_1,\vec{v}_2, \cdots, \vec{v}_n\right) \\ = D\left(v_{11}\vec{e}_1+v_{12}\vec{e}_2+\cdots+v_{1n}\vec{e}_n,\vec{v}_2, \cdots, \vec{v}_n\right) \\ = D\left(v_{11}\vec{e}_1,\vec{v}_2, \cdots, \vec{v}_n\right) + D\left(v_{12}\vec{e}_2,\vec{v}_2, \cdots, \vec{v}_n\right) + \cdots + D\left(v_{1n}\vec{e}_n,\vec{v}_2, \cdots, \vec{v}_n\right) \\= v_{11}D\left(\vec{e}_1,\vec{v}_2, \cdots, \vec{v}_n\right) + v_{12}D\left(\vec{e}_2,\vec{v}_2, \cdots, \vec{v}_n\right) + \cdots + v_{1n}D\left(\vec{e}_n,\vec{v}_2, \cdots, \vec{v}_n\right) \\ =\sum_{j=1}^n v_{1j}D\left(\vec{e}_j,\vec{v}_2, \cdots, \vec{v}_n\right) \end{array}$

Doing the same thing to the second part of D,

$=\sum_{j=1}^n\sum_{k=1}^n v_{1j}v_{2k}D\left(\vec{e}_j,\vec{e}_k, \cdots, \vec{v}_n\right)$

The same thing can be done to all of the vectors in D.  But rather than writing n different summations we can write, $=\sum_{\vec{p}}\, v_{1p_1}v_{2p_2}\cdots v_{np_n}D\left(\vec{e}_{p_1},\vec{e}_{p_2}, \cdots, \vec{e}_{p_n}\right)$, where every term in $\vec{p} = \left(\begin{array}{c}p_1\\p_2\\\vdots\\p_n\end{array}\right)$ runs from 1 to n.

Whenever two of the $\vec{e}_j$’s left in D are the same, D=0 (property 1).  This means that the only non-zero terms left in the summation are rearrangements: those where the elements of $\vec{p}$ are each a number from 1 to n, with no repeats.

All but one of the $D\left(\vec{e}_{p_1},\vec{e}_{p_2}, \cdots, \vec{e}_{p_n}\right)$ will be in a weird order.  Switching the order in D can flip sign, and this sign is given by the signature, $\sigma(\vec{p})$.  So, $D\left(\vec{e}_{p_1},\vec{e}_{p_2}, \cdots, \vec{e}_{p_n}\right) = \sigma(\vec{p})D\left(\vec{e}_{1},\vec{e}_{2}, \cdots, \vec{e}_{n}\right)$, where $\sigma(\vec{p})=(-1)^k$, where k is the number of times that the e’s have to be switched to get to $D(\vec{e}_1, \cdots,\vec{e}_n)$.
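Counting swaps by hand gets tedious.  A handy equivalent (my sketch, not the post’s) is to count inversions: pairs of entries that appear out of order.  Each pairwise swap changes the inversion count by an odd number, so the parity comes out the same as counting swaps.

```python
def signature(p):
    """sigma(p) = (-1)^(number of inversions), where an inversion is a
    pair of positions i < j with p[i] > p[j]."""
    inversions = sum(1 for i in range(len(p))
                       for j in range(i + 1, len(p))
                       if p[i] > p[j])
    return (-1) ** inversions

# The six signs used in the 3x3 example earlier:
for p in [(1,2,3), (1,3,2), (2,1,3), (2,3,1), (3,1,2), (3,2,1)]:
    print(p, signature(p))  # +1, -1, -1, +1, +1, -1
```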

So,

$\begin{array}{ll} det({\bf M})\\ = D\left(\vec{v}_{1},\vec{v}_{2}, \cdots, \vec{v}_{n}\right)\\ =\sum_{\vec{p}}\, v_{1p_1}v_{2p_2}\cdots v_{np_n}D\left(\vec{e}_{p_1},\vec{e}_{p_2}, \cdots, \vec{e}_{p_n}\right) \\ =\sum_{\vec{p}}\, v_{1p_1}v_{2p_2}\cdots v_{np_n}\sigma(\vec{p})D\left(\vec{e}_{1},\vec{e}_{2}, \cdots, \vec{e}_{n}\right) \\ =\sum_{\vec{p}}\, \sigma(\vec{p})v_{1p_1}v_{2p_2}\cdots v_{np_n} \end{array}$

Which is exactly the definition of the determinant!  The other uses for the determinant, from finding eigenvectors and eigenvalues, to determining if a set of vectors are linearly independent or not, to handling the coordinates in complicated integrals, all come from defining the determinant as the volume of the parallelepiped created from the columns of the matrix.  It’s just not always exactly obvious how.

For example: The determinant of the matrix ${\bf M} = \left(\begin{array}{cc}2&3\\1&5\end{array}\right)$ is the same as the area of this parallelogram, by definition.

The parallelepiped (in this case a 2-d parallelogram) created by (2,1) and (3,5).

Using the tricks defined in the post:

$\begin{array}{ll} D\left(\left(\begin{array}{c}2\\1\end{array}\right),\left(\begin{array}{c}3\\5\end{array}\right)\right) \\[2mm] = D\left(2\vec{e}_1+\vec{e}_2,3\vec{e}_1+5\vec{e}_2\right) \\[2mm] = D\left(2\vec{e}_1,3\vec{e}_1+5\vec{e}_2\right) + D\left(\vec{e}_2,3\vec{e}_1+5\vec{e}_2\right) \\[2mm] = D\left(2\vec{e}_1,3\vec{e}_1\right) + D\left(2\vec{e}_1,5\vec{e}_2\right) + D\left(\vec{e}_2,3\vec{e}_1\right) + D\left(\vec{e}_2,5\vec{e}_2\right) \\[2mm] = 2\cdot3D\left(\vec{e}_1,\vec{e}_1\right) + 2\cdot5D\left(\vec{e}_1,\vec{e}_2\right) + 3D\left(\vec{e}_2,\vec{e}_1\right) + 5D\left(\vec{e}_2,\vec{e}_2\right) \\[2mm] = 0 + 2\cdot5D\left(\vec{e}_1,\vec{e}_2\right) + 3D\left(\vec{e}_2,\vec{e}_1\right) + 0 \\[2mm] = 2\cdot5D\left(\vec{e}_1,\vec{e}_2\right) - 3D\left(\vec{e}_1,\vec{e}_2\right) \\[2mm] = 2\cdot5 - 3 \\[2mm] =7 \end{array}$

Or, using the usual determinant-finding-technique, $det\left|\begin{array}{cc}2&3\\1&5\end{array}\right| = 2\cdot5 - 3\cdot1 = 7$.

Posted in -- By the Physicist, Math | 6 Comments

## Q: Are white holes real?

Physicist: The Big Bang is sometimes described as being a white hole.  But if you think of a white hole as something that’s the opposite of a black hole, then no: white holes aren’t real.

They show up when you describe a black hole using some weird coordinates, so they’re essentially just non-real mathematical artifacts.  However, white holes are a cute idea, so they show up a lot in sci-fi.  White holes are a mathematical abstraction that necessarily exists in the infinite past.  That is to say, if you follow the mathematical model that physicists use, you’ll never have a situation where a white hole exists at the same time as anything else.  Its existence happens infinitely long ago.

Spacetime gets seriously messed up near and inside of a black hole.  To make the math easier, and to help make the situation easier to picture, the Kruskal-Szekeres coordinate system was created.

In this (very unintuitive) diagram, straight lines through the center are lines of constant time, with the future roughly up.  The event horizon of the black hole is also the infinite future (from an outside perspective it takes forever to fall all the way into a black hole).  That should make very little sense, but keep in mind: black holes and weird spacetime go together like Colonial Williamsburg and a lingering sense of disappointment.  The black hole’s interior is the upper triangle, the entire universe is the right triangular region, and the white hole is the lower region.

The boundary of this lower region is in the infinite past.  That is: in this goofy mathematical idealization of a static and eternal black hole, a white hole shows up automatically in the infinite past.  One of the issues here is that real black holes need to form at some point (in the finite past).

Taking this model completely seriously and assuming that it implies that white holes are real is a little like saying “imagine an infinite robot-godzilla”, and then worrying about where it came from.  It’s an abstraction used to think about other things.  Physicists love themselves some math, but the love is tempered by the understanding that writing down an equation doesn’t make things real.

Physicists love themselves some math, but (almost always) recognize the scope and limitations of their own equations.

For example, we can talk about the location “North 97°, East 40°”, but that doesn’t make it exist (North 90° is the north pole, the farthest north you can get by definition).

Sci-fi is about the only place you’ll hear people talking about white holes.  White holes are the opposite of black holes: they spit out matter and energy, they’re impossible to enter, they’re very bright, that sort of thing.  In fiction “the opposite of…” is a great way to get weird new ideas (e.g., Bizarro Superman).

The Einstein picture was created here.

Posted in -- By the Physicist, Astronomy, Math, Physics | 4 Comments

## Q: If a photon doesn’t experience time, then how can it travel?

Physicist: It’s a little surprising this hasn’t been a post yet.

Moving from one place to another always takes a little time, no matter how fast you’re traveling.  But “time slows down close to the speed of light”, and indeed at the speed of light no time passes at all.  So how can light get from one place to another?  The short, unenlightening, somewhat irked answer is: look who’s asking.

Time genuinely doesn’t pass from the “perspective” of a photon but, like everything in relativity, the situation isn’t as simple as photons “being in stasis” until they get where they’re going.  Whenever there’s a “time effect” there’s a “distance effect” as well, and in this case we find that infinite time dilation (no time for photons) goes hand in hand with infinite length contraction (there’s no distance to the destination).
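Both infinite effects come from the same Lorentz factor, $\gamma = 1/\sqrt{1 - v^2/c^2}$.  A quick numerical sketch (the 10-light-year trip is a made-up example, not from the post) shows the traveler’s elapsed time and the traveler’s distance both shrinking by the same factor as v approaches c:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def gamma(v):
    """Lorentz factor: 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

earth_distance = 10.0  # a 10 light-year trip, as measured from Earth
for beta in (0.5, 0.9, 0.999, 0.999999):
    g = gamma(beta * C)
    # Both the traveler's elapsed time and the traveler's distance divide by gamma:
    print(f"v = {beta}c: gamma = {g:10.1f}, "
          f"distance for the traveler = {earth_distance / g:.5f} light-years")
```

As beta creeps toward 1, gamma blows up, so the trip takes the traveler less and less time — not because they’re “in stasis”, but because there’s less and less distance (in their frame) to cover.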

At the speed of light there’s no time to cover any distance, but there’s also no distance to cover.  Left: regular, sub-light-speed movement.  Right: “movement” at light speed.

The name “relativity” (as in “theory of…”) comes from the central tenet of relativity, that time, distance, velocity, even the order of events (sometimes) are relative.  This takes a few moments of consideration, but when you say that something’s moving, what you really mean is that it’s moving with respect to you.

Everything has its own “coordinate frame”.  Your coordinate frame is how you define where things are.  If you’re on a train, plane, rickshaw, or whatever, and you have something on the seat next to you, you’d say that (in your coordinate frame) that object is stationary.  In your own coordinate frame you’re never moving at all.

How zen is that?

Everything is stationary from its own perspective.  Movement is something other things do.  When you describe the movement of those other things it’s always in terms of your notion of space and time coordinates.

The last coordinate to consider is time, which is just whatever your clock reads.  One of the very big things that came out of Einstein’s original paper on special relativity is that not only will different perspectives disagree on where things are, and how fast they’re moving, different perspectives will also disagree on what time things happen and even how fast time is passing (following some very fixed rules).

When an object moves past you, you define its velocity by looking at how much of your distance it covers, according to your clock, and this (finally) is the answer to the question.  The movement of a photon (or anything else) is defined entirely from the point of view of anything other than the photon.

One of the terribly clever things about relativity is that we can not only talk about how fast other things are moving through our notion of space, but also “how fast” they’re moving through our notion of time (how fast is their clock ticking compared to mine).

The meditating monk picture is from here.

Posted in -- By the Physicist, Relativity | 38 Comments

## Q: What is energy? What is “pure energy” like?

Physicist: Unfortunately, “pure energy” isn’t really a thing.  Whenever you hear someone talking about something or other being “turned into pure energy”, you’re listening to someone who could stand to be a little more specific about what kind of energy.  And whenever you hear someone talking about something being “made of pure energy”, you’re probably listening to someone who’s mistaken.

“Pure energy” shows up a lot in fiction, and most sci-fi/fantasy fans have some notion of what it’s like, but it isn’t a thing you’ll find in reality.

Energy comes in a hell of a lot of forms, but they’re all pretty mundane.  For example, when “energy is released” in an explosion (most explosions) that energy mostly takes the form of kinetic energy (things moving and heat).  Light is about the closest anything comes to being pure energy, but it’s not pure energy so much as it’s one of the several kinds of energy that isn’t tied up in matter.  It’s “matterless”, sure, but that doesn’t mean that electromagnetic fields (light) are any closer to being pure than, say, gravity fields (another, very different, massless form of energy).  “Pure” energy: nope.  Some form of energy without matter: that happens.

So, energy can change from one form into another into another into another, etc., but the question remains: what is energy?  The answer to that is a little unsatisfying.

There’s this quantity that takes a lot of forms (physical movement, electromagnetic fields, being physically high in a gravitational well, chemical potential, etc., etc.).  We can measure each of them, and we know that the total value across all of the various forms stays constant, and like every constant, measurable thing, it gets a name: energy.

If fusion in the Sun releases energy*, then the amount released is $E = (\Delta m)c^2$ (where $\Delta m$ is the change in mass between the hydrogen input and helium output and c is the speed of light).  If that energy travels from the Sun to the Earth as light, then each photon of that light carries an energy $E = h\nu$ (Planck’s constant times frequency).  If those photons then fall onto a solar panel, that light energy can be converted into electrical energy.  If that electrical energy runs a motor, then the energy used is $E = VIt$ (voltage times current times time).  If that motor is used to compress a spring, then the energy stored in the spring is $E = \frac{1}{2}kA^2$ (where k is the spring constant and A is the distance it’s compressed).  If that spring tosses a stone into the air, then at the top of its flight it will have converted all of that energy into gravitational potential, in the amount of $E = mgh$ (mass of the stone times the acceleration of gravity times height).  When it falls back to the ground that energy will become kinetic energy again, $E = \frac{1}{2}mv^2$ (where m is the stone’s mass and v is its velocity).  If that stone falls into water and stirs it up, then the water will heat up by an amount given by $E = C\Delta T$ (where C is the heat capacity of the water, and $\Delta T$ is the change in temperature).
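To make the bookkeeping concrete, here’s a short Python sketch that follows the same joules through the last few steps of that chain.  The spring constant, compression, and stone mass are invented numbers (not from the post), and perfect efficiency is assumed throughout:

```python
import math

g = 9.81             # gravitational acceleration (m/s^2)
k, A = 500.0, 0.20   # made-up spring constant (N/m) and compression (m)
m = 0.10             # made-up stone mass (kg)
C_water = 4186.0     # heat capacity of 1 kg of water (J/K)

E = 0.5 * k * A**2        # energy stored in the spring: E = (1/2) k A^2
h = E / (m * g)           # height where it's all gravitational potential: E = m g h
v = math.sqrt(2 * E / m)  # speed back at the ground: E = (1/2) m v^2
dT = E / C_water          # temperature rise of 1 kg of water: E = C dT

print(f"{E:.1f} J -> {h:.2f} m up -> {v:.2f} m/s down -> water warms {dT:.4f} K")
```

The number of joules never changes; only which formula it’s plugged into does — which is the whole point.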

The “same energy” is being used at every stage of this example (assuming perfect efficiency).  But there’s no “carry through” that makes it from the beginning to the end.  The only thing that really stays the same is the somewhat artificial constant number that we Humans (or more precisely: Newton) call “energy”.

When you want to explain the heck out of something that’s a little abstract, it’s best to leave it to professional bongo player and sometimes-physicist Richard Feynman:

“There is a fact, or if you wish, a law governing all natural phenomena that are known to date.  There is no known exception to this law – it is exact so far as we know.  The law is called the conservation of energy.  It states that there is a certain quantity, which we call “energy,” that does not change in the manifold changes that nature undergoes.  That is a most abstract idea, because it is a mathematical principle; it says there is a numerical quantity which does not change when something happens.  It is not a description of a mechanism, or anything concrete; it is a strange fact that when we calculate some number and when we finish watching nature go through her tricks and calculate the number again, it is the same.  (Something like a bishop on a red square, and after a number of moves – details unknown – it is still on some red square.  It is a law of this nature.)

(…) It is important to realize that in physics today, we have no knowledge of what energy ‘is’.  We do not have a picture that energy comes in little blobs of a definite amount.  It is not that way.  It is an abstract thing in that it does not tell us the mechanism or the reason for the various formulas.” -Dick Feynman

The Green Lantern picture is from here.

*Every time energy is released from anything, that thing ends up weighing less. It’s just that outside of nuclear reactions (either fission or fusion) the change is so small that it’s not worth mentioning.

Posted in -- By the Physicist, Physics | 3 Comments

## Q: Why is Schrodinger’s cat both dead and alive? Is this not a paradox?

One of the original questions was: A basic rule of logic is that something cannot contradict itself. It is impossible for P to be true and not true. Doesn’t Schrödinger’s cat violate this law and therefore invalidate logic?

Schrödinger proposed this thought-experiment to demonstrate how ridiculous quantum superposition is.  Basically, the multiple states of a single atom (decayed and not decayed) cause a cat to be in multiple states (living and dead).

Physicist: The resolution to this comes from a careful look at what is meant by the “state” of something.  Turns out, logic is safe from Lil’ Schrödinger’s claws.

There’s a big difference between “reasonable” and “logical”.  To see the difference, find a calm, reasonable person and talk to them, and then (this is more difficult) find a professional logician and try to talk to them.

Talking to professional Logicians: among the more frustrating conversations you’ll ever have.

It’s pretty reasonable to say that a single thing must be in one state or another, especially if those states are mutually exclusive.  It’s obvious.  It’s common sense.  In fact, it’s so reasonable/obvious/sensible that disagreeing with it would be a good way of being laughed out of every fancy science salon of the 19th century (or at least the occasional salon with sober members).  Logic, on the other hand, has nothing to do with physical reality (neither does being reasonable for that matter).

Logicians start with a big bucket of postulates and symbols and statements, and then run with them.  None of it needs to be “physically motivated” or even remotely intuitive.

Clearly, $\Diamond P \Leftrightarrow \lnot \Box \lnot P$ and $\Box P \Leftrightarrow \lnot \Diamond \lnot P$ read “P is possibly true if and only if P is not definitely untrue” and “P is definitely true if and only if it is not possible for P to not be true”.

The statement that things must be one way or another (specifically, that each state is mutually exclusive of the others) is a whole new logical statement on its own.  The statement even has a name: “counterfactual definiteness”.  Overly-complicated terms like that are just made up so that people will think that physicists are wizards-of-smartness.  A better term for things needing to be in a definite state is “realism”.  While realism is “obviously true”, it isn’t necessarily true (not “logically true”), and point of fact: isn’t true.

There’s a famous no-go theorem in quantum physics called “Bell’s theorem” that says that, given the results of a variety of experiments involving entanglement, “local realism” is impossible.  This means that things always being in single states requires the exchange of some kind of faster-than-light signals.  Or conversely, if no effects can travel faster than light, then things must be allowed to be in multiple states.
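The CHSH form of Bell’s theorem makes this concrete (this is standard textbook material, not from the post): any local-realist theory keeps a certain combination of correlations, S, between -2 and 2, while quantum mechanics, using the singlet-state correlation $E(a,b) = -\cos(a-b)$, predicts $|S| = 2\sqrt{2}$ at the right measurement angles:

```python
import math

def E(a, b):
    """Quantum correlation between spin measurements at angles a and b on a singlet pair."""
    return -math.cos(a - b)

# The standard CHSH measurement angles:
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) = 2.828..., comfortably past the local-realist bound of 2
```

Experiments agree with the 2.828, not the 2, which is how we know “local realism” has to go.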

It’s pretty natural to jump to the conclusion that things are communicating faster than light.  Losing realism is philosophically, even mathematically, a bitter pill to swallow.  Unfortunately, there are a lot of problems with faster than light stuff (like this one!).

It turns out that the universe doesn’t seem to have any problem dropping realism.  Things are perfectly happy being in multiple states at the same time: particles being in multiple positions or energy states, single events happening at multiple times, or (admittedly reaching a little past our grasp) being in multiple states of living and dead.  The last of course has never been observed in the lab (and probably never will be), but this is a well-studied property otherwise.  We’ve seen multiple-stated-ness in every physical system we’re capable of measuring the effect in.  So far, there doesn’t seem to be any limit to the scale at which quantum weirdness shows up.

In short, it does make sense to say that things must be in one state or another, but it isn’t necessarily “logical”.  The universe couldn’t care less about what makes sense.

Answer gravy: This bit threatened to derail the flow of the post.

Realism is technically a statement that limits the exact nature of what kind of states are allowed.  For example, only the states $|living\rangle$ and $|dead\rangle$ are allowed.  When the cat is both living and dead it’s technically just in multiple states in certain “measurement bases”.  So the cat could be in the single state $\frac{1}{\sqrt{2}}\left(|living\rangle+|dead\rangle\right)$.

We see this all the time in the polarization of light, for example.  A diagonally polarized photon is in a single state, $|\nearrow\rangle$.  But, if you insist on looking at it (measuring it) in terms of horizontal and vertical polarizations, then you find that it must be in multiple states, $|\nearrow\rangle = \frac{1}{\sqrt{2}}\left(|\rightarrow\rangle + |\uparrow\rangle\right)$.  This moves the problem from being a purely philosophical/logical problem, to one of defining what is meant in detail by the word “state”.
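That change-of-basis bookkeeping is just vector arithmetic.  A tiny sketch (mine, not the post’s) with the diagonal photon:

```python
import math

# Polarization states as unit vectors; measurement probabilities are squared overlaps.
H, V = (1.0, 0.0), (0.0, 1.0)
diag = (1 / math.sqrt(2), 1 / math.sqrt(2))  # one single state: diagonal polarization

def prob(state, basis_state):
    """Probability of finding `state` in `basis_state`: |<basis|state>|^2."""
    return (state[0] * basis_state[0] + state[1] * basis_state[1]) ** 2

print(prob(diag, diag))              # ~1: measured diagonally, it's in one definite state
print(prob(diag, H), prob(diag, V))  # ~0.5 and ~0.5: measured in H/V, it's "in both"
```

Same photon, same state; whether it looks like “one state” or “multiple states” depends entirely on which measurement basis you ask the question in.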

The answer to whether Schrödinger’s cat is in multiple states becomes a resounding “Yes!  Unless some very specific measurement is set up, in which case: no!”.

Posted in -- By the Physicist, Logic, Physics, Quantum Theory | 18 Comments