A Quantum Computation Course 2: Quantum Harder

Physicist: The quantum computation course is still where all of my time is going.  Here are five more lessons, each more profound than the last.

When you need to measure a Bell state, but don’t have time to drive all the way across Pennsylvania.

Lecture 6: Density Matrices

Lecture 7: Classical Information

Lecture 8: Entanglement

Lecture 9: Quantum Information I

Lecture 10: Quantum Information II


A Quantum Computation Course

Physicist: I’ve been a little busy to post much here for a while, but you may be interested in what I’m working on, so here it is.  I’m teaching an introductory course on quantum information and computation, and the primary material for the course is these lecture notes.  I’ll post more as they exist more.

This is funny in context.

If you’re interested and are familiar with (or willing to learn) linear algebra, then take a look.

Lecture 1: The Quantum State

Lecture 2: Bra-ket Notation

Lecture 3: Operators and Eigenstates

Lecture 4: The Bloch Sphere and Quantum Circuits

Lecture 5: Universality and Deutsch-Jozsa


Q: Is silicon life possible? Why all the fuss over carbon-based life?

The original question was: If there is life on any other planets, is it highly likely to be Carbon based than any other elements, for instance Silicon?

I am asking this question because almost all the discussion of extraterrestrial life that I have seen assumes it is carbon based.  Whenever a new planet is found, the possibility of life is speculated about on the basis of the existence of liquid water, which is a necessary condition for life forms similar to those on Earth.  I cannot understand: what is this big thing with water and carbon?


Physicist: Silicon and carbon show up in the same column on the periodic table, meaning that they share a lot of chemical properties.  In particular, they both want to form four bonds with other atoms.  So you can be forgiven for thinking that what works for carbon (like biochemistry) should work for silicon.

Left: Methane and silane are an example of carbon and silicon’s (occasionally) similar properties.  That similarity leads to speculation about… Right: silicon-based life, such as the Horta which, like all silicon life, is immune to mind probes.  That’s just science.

We can’t predict what other forms of life may be like, but everything we’ve seen here on Earth says that life needs totally over-the-top chemical complexity.  As far as we know, “lots of carbon” is the only option if you want a huge variety of dynamic, stable, and stunningly complex molecules with thousands of atoms.

Carbon has a couple of things going for it that silicon doesn’t.  Carbon seems to be perfectly happy to form arbitrarily large and complex molecules, while silicon generally doesn’t.  More than that, when carbon oxidizes it forms carbon dioxide and monoxide, which are gases even at low temperatures and are (evidently) fairly easy to break apart.  That property (making gases with oxygen that aren’t too hard to unmake) means that carbon is available as a building block for anything at the bottom of the food chain playing the “anchor and filter” gambit (like plants).

Despite carbon dioxide being a trace gas in the atmosphere, plants filter it out to make food and more plant.  And despite that rock face having a hell of a lot of silicon (comparatively), the tree leafs it where it is.

Silicon, on the other hand, mostly forms solid minerals (which is why you’re not breathing it right now) which are very difficult to chemically break apart.  It wasn’t even known to be an independent element until well into the 19th century, because it’s essentially always found chemically bound to oxygen and is remarkably difficult to isolate.  Even worse, silicon doesn’t bind with itself nearly as well as carbon does.  That’s an unfortunate property to miss, because without it larger/complicated molecules are far less likely and far less stable.

Carbon is a major, early product of stellar fusion, which is why it’s the fourth most common element in the universe.  Because of that, a lot of astronomers suspect the existence of “carbon worlds” (which are exactly what you think they are).  But Earth is decidedly not a carbon world.

Earth’s crust: oxygen (46.6%), silicon (27.7%), some other stuff (25.7%), and way down the list, carbon (0.03%).

Despite what you may have heard (and despite carbon making up about a fifth of what you’re made of) there is surprisingly little carbon on Earth.  Silicon is the second most abundant element in the crust and about a thousand times more common than carbon.  Compared to the universe at large, Earth is very silicon rich and carbon poor.  The biosphere is a razor-thin spiderweb of living stuff hemmed in by a heck of a lot of stuff that can never be alive.

Here’s the point: if silicon life was going to happen anywhere, it should be happening here.  If some kind of alien silicon life were to happen to drop in, it would find Earth a lot more palatable than we do.  It wouldn’t have to scrape by on trace elements, the way poor schlubs like all known life has to.

Unfortunately, we only have a few data points from life here on Earth and they’re all related to each other.  Carbon seems to be uniquely able to form fantastically complex structures and water, while not necessarily unique, is made of some of the most common material in the universe (hydrogen and oxygen) and provides a great medium for life and chemistry.

Speculation is fun, but actual knowledge comes from the world around us.  What we really need is experience with alien environments that have some possibility of supporting non-carbon, non-watery life.  Saturn’s moon Titan has a dense atmosphere, methane and ethane rain, rivers, and oceans, and zero liquid water (it’s -180°C over there), so it’s both very alien and supportive of much greater chemical complexity than, say, our Moon.  Of the places to check out “nearby”, Titan is at the top of the list.  But considering how short that list is, it may be that we’ll see the first evidence of non-standard alien life in the atmospheres of exoplanets.  We’ve found thousands of those (which is less than a drop in the bucket) and, while we can’t actually get a picture of any of those planets, the tiny amount of light that bounces off of them or filters through their atmospheres carries a lot of information (which we can use to see if there’s Earth-ish life present).

Alternatively, we could just get SETI to start sending dinner RSVPs into deep space with a choice between “chicken” and “bucket of sand”, then wait for the responses to roll in.  That should do it.


Q: Why are the laws of quantum mechanics so strange? Does it mean that we’re missing something?

Physicist: We’re definitely missing something, but we’re always missing something.  One of the most famous quotes about quantum physics, often used in lieu of a shrug, is due to St. Feynman: “If you think you understand quantum mechanics, you don’t understand quantum mechanics.”  Which is fair, but it applies equally well to bicycle mechanics.  Or paper airplane design.  Not even cake science has a hope of ever being completely understood.

The classical world (the world we experience) has one set of rules and the quantum world has another set of rules.  But of course, they’re describing the same, one-and-only universe.  The “correspondence principle” says that the quantum laws should, on a large and noisy enough scale, reproduce the classical laws.  So far that seems to be exactly the case; as weird as they are, the laws of quantum mechanics are always compatible with the world we see around us.  In other words, the very particular laws we consider “normal” are a special case of quantum laws.  How that works is unfortunately a case-by-case thing.  There’s no clean “quantum-to-classical translation technique”.  The reason a thrown rock follows a particular path has a very different explanation than the cause of rainbows.  Even the distinction itself, between classical and quantum, is often impossible to nail down.

Quantum mechanical laws are rules for the universe, and in that sense they’re no weirder than gravity or anything that Newton did.  And we explore them in the same way: follow clues, come up with models, test them out, find out that practically all of them are wrong, etc.  When it comes to the actual doing-of-the-math and the exploration-of-the-physics, there’s nothing unusual about quantum physics.  After all, it wasn’t obvious that “for a set of isolated objects, the sum of their masses times the derivative of their positions with respect to time is invariant.”  The math and experiments took a while to develop, but then we gave it a name and teach it to kids as “conservation of momentum” (an object in motion stays in motion…).
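Conservation of momentum is exactly the kind of thing that’s easy to check numerically.  Here’s a minimal sketch using the textbook 1D elastic-collision formulas (the masses and velocities are made up for illustration):

```python
def elastic_collision(m1, v1, m2, v2):
    """Final velocities of two bodies after a 1D elastic collision."""
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

# Arbitrary example: a 2 kg body at 3 m/s hits a 1 kg body at -1 m/s.
m1, v1, m2, v2 = 2.0, 3.0, 1.0, -1.0
v1f, v2f = elastic_collision(m1, v1, m2, v2)

# "The sum of masses times velocities is invariant":
p_before = m1 * v1 + m2 * v2
p_after = m1 * v1f + m2 * v2f
```

However you shuffle the masses and initial velocities, `p_before` and `p_after` come out equal, which is the whole content of the law.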

When you get into the nuts and bolts of physics (the math), no subject is terribly intuitive.  If you study classical mechanics (how stuff moves) you run headfirst into Lagrangians and principles of least time or action, and suddenly conversations about balls bouncing and spinning tops become… abstract.  Honestly, the big difference isn’t in the difficulty of the physics, it’s in the implications.  If you learn about how to predict orbital trajectories, you’ll walk away with the sense that orbits are complicated.  If you learn about how to predict quantum tunneling, you’ll walk away with the distinct sense that either the universe is messing with you or that someone, somewhere made a huge mistake and nobody’s bothered to double check their work.  The difficulty is comparable, the unease is not.

So why is quantum mechanics so unintuitive?  Because we’re not used to it.  It would be difficult to get used to moving around in the world if you were a tiny insect, and drops of water stood up like boulders, or if you were a bird, and the air had a “landscape” of movement.  We think of the world we live in as intuitive because we not only grew up and live in it, but evolved for it over millions of generations.  Quantum effects seem strange to our minds because our brains have never had to deal with them before.

Everything lives in the “normal world”, but can disagree wildly on what “normal” is.  For most of these critters you’d have to carefully explain basic stuff like “what goes up must come down”, “the sky is blue”, “fire is hot”, “water is wet”, etc.  It’s no coincidence that minds are great at handling the environment they find themselves in, and remarkable that we can handle more.

On the other hand (literally), our bodies have had to deal with quantum effects since before they were bodies.  While a human mind is really good at maneuvering a human body through the world, dealing with moving objects, weather, other humans, and making up dirty limericks, there hasn’t been much call (over evolutionary timescales) for us to worry about quantum effects.  But on the scale of biochemistry, quantum effects are incredibly pervasive, and the tiny chemical mechanisms in every one of our cells take advantage of them all the time.  On a chemical level, life has been dealing with quantum phenomena since it began, and (even though we don’t have to think about it) it’s gotten really good at it.  For example, photosynthesis involves maintaining the coherence of incoming light (it acts like a wave, not a particle) until it can be directed to a set of molecules that can actually use the energy to make food.  If plants didn’t take advantage of superposition and coherence (inherently quantum things), they’d need light-as-a-particle to bullseye the tiny receptors; a huge waste of the vast majority of photons that would be off-target.

At a basic level, every chemical process is inherently quantum mechanical.  Chemistry and biochemistry are just applied quantum physics, so if you want to see some ridiculously fancy quantum physics at work, go no farther than your mirror.  Or your dinner for that matter.

That said, your conscious mind isn’t there to understand every detail of how your body works down to the atoms, it’s there to react to information from your senses and direct your physical body in such a way that you live through the day (also fall in love and explore and appreciate beauty or whatever).  So we’re good at that and we’re used to it.  Atoms and quantum mechanics, although not inherently more weird than the “classical world”, take some getting used to.

Fortunately for us, we seem to be pretty adaptable.  For example, the Earth intuitively seems to be flat, motionless, and not a brief exception to an infinite nothingness, when in fact it’s round, spinning, unimaginably old, and hurtling through the void at 30 km/s.  And yet, most people don’t seem to mind.  The nature of the planet we’re on is totally unintuitive, but it’s something you get used to with experience.

The classical world is already incredibly complex and weird and, as “intuitive” as it seems, it still takes a lot of work to understand it (as much as anyone can).  But we get used to it.  Quantum mechanics is weird too and involves plenty of ideas that seem totally bizarre to our squishy hominid brains.  But we get used to it.


Q: What is quantum supremacy? Is it awesome or worrisome?

Physicist: Mostly awesome.  Eventually.

Recently, some of the folk at Google claimed to have achieved “quantum supremacy” (here’s what they had to say about it).  Google has, like many other big companies and nations, been very gung-ho about research into quantum computers, and that research has been coming along remarkably fast in spite of stupefying engineering hurdles in the way.

Quantum computers aren’t “better computers”.  They’re a completely new way to do computations that’s practically unrecognizable to “classical” computer science.  Saying that a quantum computer is better than a classical computer is a little like saying that a bicycle is better than a horse; there are comparisons to be made, but “better” isn’t a particularly useful way to frame them.

As with quantum and classical computers, the best option is typically some combination of both.

For example, if you want to factor big numbers (to break RSA based cryptosystems, for example) or search an unsorted database, then quantum computers promise speeds that are impossible for a classical computer.  On the other hand, if you’d like to add two numbers together or look at pictures of horse-bikes, a classical computer is arguably better (inasmuch as it is hundreds of millions of dollars cheaper).

Quantum supremacy is a historical milestone more than anything, when a quantum computer manages to do a calculation that no classical computer is likely to ever match.  A few weeks ago, Google’s “Sycamore” computer did just that.  But your bank accounts are safe.  Sycamore isn’t nearly powerful enough to break crypto keys and the calculation it did is arguably… a little pointless.

Sycamore was given a series of random quantum circuits to simulate, which it was able to do.  Barely.  According to Google, “Our largest random quantum circuits have 53 qubits, 1113 single-qubit gates, 430 two-qubit gates, and a measurement on each qubit, for which we predict a total fidelity of 0.2%.”  That 0.2% isn’t how often it fails, that’s how often it works.  We’re not launching boldly into the era of quantum supremacy so much as we’re limping across the starting line.  Baby steps.

Even so, in just a few minutes Sycamore did the same calculation 5 million times and, since mistakes are random and not-mistakes are consistent, that 0.2% is good enough to be useful.  The most powerful supercomputers could do the same calculation once in about ten thousand years (brags Google).
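The “mistakes are random, not-mistakes are consistent” logic can be sketched with a toy tally (all the numbers here are invented and much smaller than Sycamore’s): even when the right answer shows up only 0.2% of the time, the wrong answers scatter across so many possible bit strings that the right one still wins the popularity contest.

```python
import random
from collections import Counter

random.seed(0)

n_qubits = 20        # toy size; Sycamore used 53
fidelity = 0.002     # chance that a run returns the right bit string
correct = random.getrandbits(n_qubits)

tallies = Counter()
for _ in range(200_000):
    if random.random() < fidelity:
        tallies[correct] += 1                         # the consistent right answer
    else:
        tallies[random.getrandbits(n_qubits)] += 1    # scattered mistakes

winner, count = tallies.most_common(1)[0]
```

The winner is the correct string, with a few hundred votes, while every wrong string gets a handful at most.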

Simulating random quantum circuits using a quantum computer might seem silly, but only because it is.  This is the sort of thing that you might do if you needed to prove to someone else that you really do have a quantum computer, but Google already knows that.  Because they built it.  Their test boils down to asking Sycamore “Are you really a quantum computer?” to which Sycamore responded “…yes.  Wait, didn’t you build me?”.  We can believe the answer because it would take thousands of years for a classical computer to lie.

It’s worth stepping back to consider what a computer is and what makes quantum computation so alien.  In a moment you’re going to get the overwhelming urge to yell “n’ doy!”, but bear with me.

A computer takes information as input, does something with it, and spits out a result.  What kind of information, how it does what it does, and what the output looks like, all depend on the “architecture” of the computer.  For classical computers (like whatever you’re using right now), information takes the form of a heck of a lot of bits and logic gates that compare those bits and produce new bits.  “Bits” are “binary digits” and each describes some binary choice: 0/1, on/off, trek/wars, up/down, etc.  Logic gates are the simplest, smallest part of a computer, manipulating bits one or two at a time.  For example, the “and gate” does this: \left\{\begin{array}{rcl}00&\to&0\\10&\to&0\\01&\to&0\\11&\to&1\end{array}\right.
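That truth table is easy to mirror in code; a minimal sketch:

```python
def and_gate(a, b):
    """The AND gate: outputs 1 only when both input bits are 1."""
    return a & b

# Reproduces the truth table: 00->0, 10->0, 01->0, 11->1
table = {(a, b): and_gate(a, b) for a in (0, 1) for b in (0, 1)}
```

Every classical computation, no matter how fancy, bottoms out in piles of gates like this one.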

There are a lot of ways to do this.  Today we use transistors, because they’re efficient, fast, and we’ve gotten weirdly good at making them tiny.  In (NPN type) transistors, current is only allowed to flow across when a secondary voltage pushes electrons into the “conductive band”, so they’re available to carry electricity.  That’s an “and gate” right there: unless two voltages are applied, one to make the transistor conductive, and another to actually push current across it, no electricity passes through.  Before transistors phased every other contender out, “relays” were a common way to construct logic gates: current from one source generated a magnetic field that physically grabbed a metal plate and pulled on it to close a switch, so that another current had the opportunity to flow (which is really brute-force logic).  Although they could be loud enough to drive nearby engineers crazy, at least they didn’t wear out all the time or set things on fire the way vacuum tubes did.  For some sense of what relays sound like, watch an old Star Trek.  Somebody at Paramount guessed super wrong about the future (of computers and hair styles at least).

All three of these things serve the same basic function, just in different ways.  Left: In vacuum tubes, an excess of charge on a screen (the black thing) prevents electricity from literally making the jump between a cathode and anode (the plates above and below it).  Middle: In relays, current flows through a solenoid (the coil of wire) generating a magnetic field that physically pulls a switch shut or open, controlling current through a second wire.  Right: In transistors, a tiny voltage provides either electrons or “holes” (a lack of electrons) to carry electrical current across a semi-conductor.

The big things that make quantum computers different are superposition, qubits, and quantum parallelism (which are kinda three sides of the same coin).  Bits are either one of two states (0 or 1).  Qubits, “quantum bits”, are any combination of two states.  That means that even a single qubit is already much more complicated than a single bit.

One bit is as complicated as “this or that”.  One qubit is as complicated as the surface of a “Bloch sphere”: this or that, and everything in between, and also every “complex valued” combination.

In order to simulate a single qubit, a classical computer needs to keep track of two complex numbers to fair accuracy (depending on how accurate the simulation needs to be).  But that’s not what makes simulating quantum computers hard; it’s quantum parallelism.
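In code, that bookkeeping is almost nothing (the amplitudes below are arbitrary): a qubit is just a pair of complex numbers whose squared magnitudes sum to 1, and those squared magnitudes are the measurement probabilities.

```python
# Arbitrary example state: alpha|up> + beta|down>
alpha = complex(0.6, 0.0)
beta = complex(0.0, 0.8)

# A valid state is normalized: |alpha|^2 + |beta|^2 = 1
norm = abs(alpha) ** 2 + abs(beta) ** 2

# Measurement probabilities are the squared magnitudes
p_up, p_down = abs(alpha) ** 2, abs(beta) ** 2
```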

If you’ve heard of “entanglement”, then you’ve probably heard it described as “two particles spookily connected to one another”, which makes it sound like entanglement is about sending signals.  But at its heart, entanglement is about something a bit more profound.  You can’t describe a quantum system by looking at one part at a time, you have to consider all of it together.  Two entangled particles share a single state between them.

A single tiny thing can be in a superposition of states.  If there are only two states, like with an electron’s spin, then you’ve got a qubit and you can describe the superposition of up and down states like this: \alpha|\uparrow\rangle+\beta|\downarrow\rangle.  Physicists like to use Greek letters for complex numbers because it makes their math look fancy.  But if you have, say, three electrons, then your three qubits are more than three times as complicated.  If the three electrons are “separable” (which means they’re not entangled and they can be accurately considered one at a time), then their collective state looks like this:

\left(\alpha|\uparrow\rangle+\beta|\downarrow\rangle\right)\left(\gamma|\uparrow\rangle+\delta|\downarrow\rangle\right)\left(\epsilon|\uparrow\rangle+\zeta|\downarrow\rangle\right)

That’s all three electrons in their own superpositions.  However, if those electrons have been brought together and had a chance to interact, affect each other, and become entangled (which is a typical situation), then their collective state looks like this:

\begin{array}{l}\alpha|\uparrow\rangle|\uparrow\rangle|\uparrow\rangle+\beta|\uparrow\rangle|\uparrow\rangle|\downarrow\rangle+\gamma|\uparrow\rangle|\downarrow\rangle|\uparrow\rangle+\delta|\uparrow\rangle|\downarrow\rangle|\downarrow\rangle\\+\epsilon|\downarrow\rangle|\uparrow\rangle|\uparrow\rangle+\zeta|\downarrow\rangle|\uparrow\rangle|\downarrow\rangle+\eta|\downarrow\rangle|\downarrow\rangle|\uparrow\rangle+\theta|\downarrow\rangle|\downarrow\rangle|\downarrow\rangle\end{array}

For three qubits this isn’t much of a jump, from 2\times3=6 terms to 2^3=8, but that exponent makes a big difference.  100 bits takes, well, 100 bits to describe.  100 qubits takes 2^{100}=1267650600228229401496703205376 complex numbers to describe.  That’s a huge amount of information for even a modest quantum computer to work with.
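To make the jump concrete, here’s a sketch (amplitudes chosen arbitrarily) of the separable case: the joint state of three independent qubits is the tensor (Kronecker) product of three amplitude pairs, which already takes 2³ = 8 numbers, and the count doubles with every added qubit.

```python
def kron(u, v):
    """Tensor (Kronecker) product of two amplitude lists."""
    return [a * b for a in u for b in v]

# Three separable qubits: 3 pairs of amplitudes = 6 numbers...
q1 = [0.6, 0.8]
q2 = [1.0, 0.0]
q3 = [0.8, 0.6]

# ...but their joint state takes 2**3 = 8 amplitudes.
state = kron(kron(q1, q2), q3)

# An entangled state like (|up,up,up> + |down,down,down>)/sqrt(2)
# uses those 8 slots in a way that does NOT factor into three pairs.
amplitudes_for_100_qubits = 2 ** 100
```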

Hemoglobin uses around 10,000 atoms to do two things: grab oxygen and let go of oxygen. Presumably, if it could be done better with fewer atoms, we wouldn’t have hemoglobin, so understanding how all of this mess works together is important.

The same phenomenon that makes quantum computation exponentially complex also makes quantum systems exponentially difficult to simulate.  For example, we can simulate a handful of atoms without too much trouble, but what we’d really like to do is simulate entire proteins, so we can get a better handle on (and maybe monetize?) the nature of life itself at the most basic level.  The problem is that proteins generally have many thousands of atoms, and their collective behavior (like any quantum system) is exponentially more complicated than the sum of their parts.  So this is a great opportunity for quantum computers to step in and do the thing they do best: act quantum.  Simulating proteins, drug interactions, and really anything involving lots of atoms, may be the killer app for quantum computers.  If and when they get off the ground, this may be the big thing we notice (unless, for some reason, I can’t predict the future).

And yet, quantum computers are not usually exponentially more powerful or faster than classical computers.  They don’t actually “check every possibility” when they do a calculation.  If they did, we could ask incredibly broad questions with very specific answers (“what is best in life?“) and then use some protocol to sort them out.  What’s happening inside a quantum computer is more akin to wave interference, where waves from different sources overlap and interfere to create some combined pattern that isn’t the result of any one source in particular.  The output is always the combined result of every input.  The trick is in getting the answers you don’t want to cancel each other out and the answers you do want to add together.  If that sounds difficult and frustrating: yes.  Again, they’re not “better” than classical computers, but so wildly different that they open up new options.

There’s also a pretty spectacular bottleneck between what a quantum computer can think and what it can say.  For example, when you measure an electron’s spin, it’s either up or down regardless of whatever else is going on.  This is true in general: while N qubits can be in 2^N states together, when you get around to measuring them, each will only produce a single bit for N bits total (this is an application of Holevo’s Bound).  With 300 qubits, a quantum computer would have access to more states than there are atoms in the visible universe, and yet its output would be shorter than this sentence.
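A toy version of that bottleneck (the state below is chosen arbitrarily): no matter how many of the 2^N amplitudes are in play, a measurement hands back a single N-bit string, sampled with probability equal to the squared magnitude of the corresponding amplitude (the Born rule).

```python
import random

random.seed(1)

def measure(amplitudes):
    """Collapse an n-qubit state to one n-bit outcome, sampled
    with probability |amplitude|^2 (the Born rule)."""
    n_qubits = len(amplitudes).bit_length() - 1
    probs = [abs(a) ** 2 for a in amplitudes]
    outcome = random.choices(range(len(amplitudes)), weights=probs)[0]
    return format(outcome, f"0{n_qubits}b")  # n bits out, nothing more

# A 3-qubit state (|000> + |111>)/sqrt(2): 8 amplitudes in...
s = 2 ** -0.5
state = [s, 0, 0, 0, 0, 0, 0, s]
bits = measure(state)  # ...but only 3 bits out, "000" or "111"
```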

The big worry with quantum computers is an end to the age of encryption, which would be bad for anyone who wants more privacy than instagram nudists.  We (the people) have two big things going for us.  First, the Shor algorithm requires at least twice as many qubits as the key has bits (realistically, a hell of a lot more to handle error correction) and while the best quantum computers today have only several dozen qubits, a good RSA key is hundreds or thousands of bits long.  So there’s some time.  Second, RSA is not the only kind of encryption.  Governments and companies around the world are already making the move to “post quantum cryptography“.  Even without functioning quantum computers, quantum technology gives us access to “quantum cryptography”, which is basically a method for creating perfectly random numbers (which professional cryptographers love) at two locations without actually sending those numbers, as well as determining whether someone is listening in (which paranoid cryptographers love).


Q: Do we actually live in a computer simulation?

Physicist: This has become a whole thing.  Although it has shown up in a lot of incarnations throughout history, the most recent is something like this: Computers are getting better all the time and some day artificial reality will be indistinguishable from reality.  Also, people love to use computers to simulate stuff, so maybe people in the far future will want to simulate the whole universe.  And if they do, they’ll probably do it lots of times.  So that means that most of the consciousnesses that will ever exist, will exist inside of those simulations.  So all things being equal, you’re probably a simulation.  QED

The thing is… there are issues.  Right off the bat, you can’t do a full-resolution, real-time simulation of an entire universe inside of a similar universe.  Like all simulations, you have to cut corners.  Maybe stars outside of our galaxy really are just dots.  Maybe atoms are only rendered when someone bothers to break out an electron microscope.  As far as anyone will ever be able to tell, there’s no difference between a “universe simulation” and a “perception of the universe simulation”.

Not everything can be swept under the digital rug.  You, personally, must exist in some form because you’re thinking about it (right now in fact).  More accurately, I must exist.  The jury’s still out on you and everyone else.  And as long as RAM is a problem, there’s no point fully rendering everyone else’s mind, so maybe you don’t live in a huge universe simulation, you live in a tiny “you simulation”.

“Maybe we’re all just brains in (super gross) boxes!”

It is not impossible that all of our perceptions, or even our minds, are simulated, but there’s no way to know for sure.  And that’s a huge problem.  The statement “Hey, maybe we’re all just brains in boxes!” isn’t the beginning of a scientific debate, it’s the unceremonious end of it.  The whole point of science is that we can have plenty of theories and ideas, but ultimately the physical universe is what settles debates.  Removing physical reality from consideration means that nobody gets to be right or wrong or even informed.  Instead of furthering human knowledge, we just get locked into a solipsistic cage match with no referee.

“Our avatars were created in our Simulator’s image and exist by Their whim alone” is… conveniently familiar.  But the idea that the super-reality is anything like ours is a hard guess with no solid justification.  Without a physical reality to experimentally bounce ideas off of, the scientific process is rudderless.  There’s nothing we can usefully say about our hosting super-reality, and even less about the motivations of the “people” living in it.  For example, we know that some people do use computers to simulate the universe (nerds), but the overwhelming majority of simulations, especially those with people in them, are used for absurdly unrealistic fantasies (congratulations to Kyle Giersdorf by the way).  Therefore, assuming that we’re simulated and all things being equal, our simulation is totally unlike the super-reality where our universe is most likely running on a phone in someone’s pocket.  Or it’s being simulated in a dream.  Or in a book.  Or a spontaneously spawned theocracy of Boltzmann Brains.  Or any other mechanism that you’re totally free to make up.  How could you be wrong?  Double QED!

So declaring the unreality of our reality isn’t wrong, it’s just… not a useful way to spend time when you’re sober.  If the simulation is convincing (and it seems to be), then the best you can do is a sort of Pascal’s Computer: given the two possibilities, either the universe is real or it isn’t, you may as well at least pretend it’s real.  Keep eating food, don’t walk into traffic, be considerate to animals, etc.  You know: live in the world.

That all said, if we do live in a computer simulation that isn’t explicitly designed to fool us, then hopefully there should be some way to determine that fact: scrolling green Matrix code, the same black cat walking by twice, or maybe something more subtle.

Enter James Gates, who noticed something subtle.  If you’ve ever heard someone mention that some physicist found evidence that we’re living in a simulation, they’re probably talking about Gates (this has kinda been his thing for a while).  The claim has been made that Gates “discovered computer code” in the laws of physics.  Before you get too excited about that implying that some physicist found lines of code floating around in Newton’s laws or get overly worried that someone might accidentally execute a “goto 10”, that’s not what he’s talking about and more importantly, that’s not what he did.

Gates is a string theorist who’s unhappy with the equation-centric way physics is done.  Which is fair.  The universe is described beautifully and (as far as we can tell) perfectly by mathematics, but that doesn’t necessarily mean that equations are the best way to do that math.  For example, you could talk about light flux by saying “Hey everybody, \int_{\partial V}\vec{L}\cdot \vec{da} = \int_V \left(\nabla \cdot \vec{L} \right)\,dv!” or alternatively you could say “Hey everybody, the total amount of light coming out of any bubble is equal to the amount of light produced in that bubble!”.  Gaussian surfaces are a terribly clever way to not do calculus.

Gates found a cute way to talk about the equations of string theory using what he calls “adinkras”, named after the symbols the Akan people used to represent complex ideas.  How this is done is a whole thing, and he spells out the broad picture in the article he wrote for Physics World way better than I can.  Here’s the long and the short of it: when Gates applied his adinkra-nating technique to some of the equations he was looking at, he found that the resulting adinkra had a cute pattern.  That in itself is not unusual.  Patterns show up all the time.

Gates’ adinkra for some string theory equations.  Each ball is an equation and the lines are relationships between them (a variety of different derivatives).  The numbering is an artificial addition.

This pattern, once some binary digits have been slapped on, looks like a similar pattern that shows up when you deal with “doubly even self-dual linear binary error-correcting block codes“.  Not for nothing, it also looks a bit like the skeleton of a hypercube.  Patterns show up.
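For the curious, “doubly even self-dual” isn’t quite as exotic as it sounds: it means every codeword’s weight (number of 1s) is a multiple of 4, and the code is its own dual.  The smallest such code is the extended Hamming [8,4] code; here’s a sketch that verifies both properties by brute force:

```python
from itertools import product

# Generator matrix of the extended Hamming [8,4] code,
# the smallest doubly even self-dual binary code.
G = [
    [1, 0, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 1, 0],
]

def add(u, v):
    """Bitwise sum of two codewords, mod 2."""
    return [(a + b) % 2 for a, b in zip(u, v)]

# All 2^4 = 16 codewords: every mod-2 sum of generator rows
codewords = []
for coeffs in product((0, 1), repeat=4):
    w = [0] * 8
    for c, row in zip(coeffs, G):
        if c:
            w = add(w, row)
    codewords.append(w)

# "Doubly even": every codeword weight is divisible by 4
doubly_even = all(sum(w) % 4 == 0 for w in codewords)

# "Self-dual": every pair of codewords is orthogonal mod 2,
# and the dimension (4) is half the length (8)
self_orthogonal = all(
    sum(a * b for a, b in zip(u, v)) % 2 == 0
    for u in codewords for v in codewords
)
```

Both checks pass, which is what makes the [8,4] code the standard small example of the family whose pattern Gates’ adinkra resembles.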

While “doubly even self-dual linear binary error-correcting block codes” does sound very scienceful, it’s also a long way to go to find a matching pattern.  Especially considering that the equations in question have nothing to do with error correction.  Gates, by way of Wigner, may have said it best:

“…If that sounds crazy to you – well, you could be right. It is certainly possible to overstate mathematical links between different systems: as the physicist Eugene Wigner pointed out in 1960, just because a piece of mathematics is ubiquitous and appears in the description of several distinct systems does not necessarily mean that those systems are related to each other. The number π, after all, occurs in the measurement of circles as well as in the measurement of population distributions. This does not mean that populations are related to circles.”

“Do we live in a computer simulation?” is a great question to ask and a terrible question to answer.  Ultimately the simulation hypothesis is a theological idea, not just because it gives us all kinds of new gods to worry about (“Dear usr1, please do not delete me…”), but specifically because it can neither be proven nor refuted by any physical experiment or investigation.  Until that changes, we should answer it the way we answer other similarly profound questions: “Is there a really fast, quiet person standing behind me all the time?”, “Does the light stay on when I close the refrigerator door?”, “Did the universe suddenly start, as it is, ten minutes ago?”.

Nope.
