# Q: Why can’t we see the lunar landers from the Apollo missions with the Hubble (or any other) telescope?

Physicist: About what you’d expect: they’re just too damn small and too damn far away.  Nothing fancy.  That’s not to say that we can never get images, just that you need to be a lot closer.  The lunar landers are each about 4 meters across and about 384,400,000 meters away, which makes them about as hard to see as a single coin from a thousand miles away.  You gotta squint.
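For the skeptical, the coin comparison checks out with the small-angle approximation (angle ≈ size / distance). A quick sketch, with the caveat that the ~2 centimeter coin diameter is my assumption, not a figure from the post:

```python
# Compare the angle subtended by a coin at 1000 miles
# with the angle subtended by a lunar lander at the Moon.
coin_diameter = 0.02            # m, roughly a small coin (my assumption)
coin_distance = 1000 * 1609.34  # m, a thousand miles
lander_size = 4.0               # m, from the post
lander_distance = 384_400_000   # m, from the post

coin_angle = coin_diameter / coin_distance
lander_angle = lander_size / lander_distance
print(f"coin:   {coin_angle:.2e} rad")
print(f"lander: {lander_angle:.2e} rad")
# The two angles come out within about 20% of each other.
```

So "a single coin from a thousand miles away" really is the right ballpark.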

A picture of the Apollo 17 landing site taken by the Lunar Reconnaissance Orbiter which, as the name implies, was in orbit around the Moon when it took this presumably reconnaissance-related picture.  Those meandering lines are tracks left by a lunar rover.

In fact, a big part of why we (humans) bother to go to the Moon, other planets, and space in general is that photographs from Earth leave a lot to be desired.  In addition to being far from everything else, here on the surface of Earth we’re stuck at the bottom of an ever-moving sea of air.  In exactly the same way that the surface of water scatters light, air makes it difficult for astronomers to practice their dread craft.

Also, not for nothing, telescopes are terrible at retrieving material samples.

The Apollo 17 landing site from even closer.

You and every telescope on Earth (and the Hubble Telescope in low Earth orbit) are all about a quarter million miles from the Moon and the landing sites thereon.  If we ever get around to building something bigger on the Moon, like mines or cities or presidents’ heads, then we shouldn’t have nearly as much trouble seeing it from Earth.

Answer Gravy: It turns out that the best/biggest telescopes we use today on Earth can’t detect things the size and distance of the lunar landers using visible light.  This isn’t due to poor design; the devices we’re using now are, in a word, perfect.  They literally cannot be made appreciably better (at detecting visible light).  The roadblock is more fundamental.

The “resolving power” of a telescope is described in terms of whether or not you can tell the difference between a pair of adjacent points.  If the two points are too close together, then you’ll see them blurred together as one point and they are “not resolved”.  If they’re far enough apart, then you see both points independently.

Whether it uses mirrors or lenses, the resolving power of every telescope is limited by some fundamental constraints determined by the wavelength of the light that’s being observed and by the size of the aperture.

Every point in every image is surrounded by the rapidly diminishing rings of an “Airy disk”, a symptom of light being wave-like.  This is only a problem really close to the diffraction limit.  You don’t see these rings when you take a picture with a regular camera because they’re smaller than the individual pixels in the camera’s CCD (by design).

Because light is a wave it experiences “diffraction” which makes it “ooze around corners” and generally end up going in the wrong directions.  But the larger a telescope’s opening, the more the light waves have a chance to interfere in such a way that they propagate in straight lines, which makes for cleaner images where the light ends up more-or-less where it’s supposed to be when it gets to the film or CCD or your retina or whatever.

It turns out that the relationship between the smallest resolvable angle, θ, the wavelength, λ, and the diameter, D, of the aperture is remarkably simple:

$\theta \approx 1.22\frac{\lambda}{D}$

Visible light has a wavelength of around 0.5 micrometers (about 2,000,000 wavelengths per meter) and the largest visible-spectrum telescopes on Earth are about 10 meters across (Hubble is a more humble 2.4m across).  That means that the absolute best resolution that any of our telescopes can hope to achieve, under absolutely ideal circumstances, is about $\theta \approx 1.22\frac{0.5\times10^{-6}}{10} \approx 0.00000006\textrm{ rad}$.  Or, for the angle buffs out there, about 0.01 arcseconds.  This doesn’t take into account the scattering due to the atmosphere; we can do a little to combat that from the ground, but our techniques aren’t perfect.
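To make that arithmetic concrete, here’s a quick Python sketch (mine, not from the post) of the diffraction-limit formula applied to a 10 meter mirror:

```python
import math

def diffraction_limit_rad(wavelength_m, aperture_m):
    """Smallest resolvable angle (radians) for a circular aperture,
    via the Rayleigh criterion: theta ~ 1.22 * lambda / D."""
    return 1.22 * wavelength_m / aperture_m

RAD_TO_ARCSEC = 180 / math.pi * 3600  # ~206,265 arcseconds per radian

theta = diffraction_limit_rad(0.5e-6, 10.0)  # green light, 10 m mirror
print(f"{theta:.2e} rad = {theta * RAD_TO_ARCSEC:.3f} arcsec")
# roughly 6.1e-08 rad, i.e. about 0.013 arcseconds
```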

By carefully looking at how the atmosphere distorts a laser beam shot upwards from a telescope on the ground, we can take into account how the atmosphere affects light coming into the telescope from space.

The lunar landers are a little over 4 meters across (seen from above) and are about 384,403,000 meters away.  That means that the landers subtend an angle of about 0.002 arcseconds.  In order to see this from Earth, we’d need a telescope that is, at absolute minimum, about 60 meters across.  If we wanted the image to be more than a single pixel, then we’d need a mirror that’s a few miles across.
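Running the numbers the other way, a short sketch (again mine, using the lander size and distance from the text above) computes the angle the lander subtends and the minimum aperture needed to resolve it:

```python
import math

wavelength = 0.5e-6          # m, visible light
lander_size = 4.0            # m, seen from above
moon_distance = 384_403_000  # m

# Angle subtended by the lander (small-angle approximation)
theta_lander = lander_size / moon_distance
arcsec = theta_lander * 180 / math.pi * 3600
print(f"lander subtends ~{arcsec:.4f} arcsec")  # ~0.0021 arcsec

# Minimum aperture to just resolve that angle: D = 1.22 * lambda / theta
D_min = 1.22 * wavelength / theta_lander
print(f"minimum aperture ~{D_min:.0f} m")       # ~59 m
```

And that minimum just gets you a single fuzzy blob; a recognizable picture needs many resolution elements across the lander, hence a mirror miles wide.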

So, don’t expect that anytime soon.

This entry was posted in -- By the Physicist, Physics.

### 9 Responses to Q: Why can’t we see the lunar landers from the Apollo missions with the Hubble (or any other) telescope?

1. David says:

Good morning from the Big Island of Hawaii.

Nice post – and a question I get asked often (mostly by lunar lander hoax believers).

I work for a world class observatory (Subaru Observatory). I have one comment about your statement:

“It turns out that the best/biggest telescopes we use today on Earth are pretty close to being able to detect things the size and distance of the lunar landers. This isn’t due to poor design; the devices we’re using now are, in a word, perfect. They literally cannot be made appreciably better. ”

Subaru has what is considered to be the largest single-piece mirror that can be practically made, at 8.2 meters (27 ft) and weighing a whopping 25 tons (try slewing that around quickly). Larger than that and the mirror properties themselves become problematic (as it is, we have servos in the mirror that correct for distortions that happen due to the weight at various angles).

So observatories such as Keck use segmented mirrors. One very large mirror made from a large number of smaller, hexagonal mirrors (they use software to remove the seams).

With such a design, there are no real limits to size (other than ability to move it).

This is why we have things coming along like TMT – a thirty meter telescope (98+ feet).

The light gathering power of a 98 ft mirror, versus our 27 ft mirror, is staggering.

Now let’s look at the Instruments. Modern telescopes are not like those of long ago, with aged scientists peering through long tubes.

Our telescopes are more like a giant tube waiting for something to happen. That is where the scientific instruments come in. We can hook various sensitive instruments to various areas (prime focus, cassegrain focus, and two nasmyth points). And these instruments can each be different (IR, Visible, spectrum, etc etc).

When an astronomer makes a proposal for viewing, not only do they tell us what target they want to view, but they specify which instruments they want to use to do the viewing.

Perhaps they are looking for exoplanets, and want a spectrograph. Or distant galaxies, so a dust busting IR detector should be involved, etc.

The instruments are constantly improving. We are deploying a new instrument right now that is magnitudes more sensitive than the one it is replacing.

And that gets us to Adaptive Optics. All the world class observatories are using Adaptive Optics, with lasers creating artificial guide stars, to remove atmospheric distortion (goodbye twinkle twinkle little star – no more twinkle).

We are now on our 3rd generation AO system, with the fourth being tested. Each generation is leaps and bounds better than the previous.

AO lets us take images that rival Hubble, which sits outside the atmosphere entirely.

So the comment “They literally cannot be made appreciably better” is vastly incorrect. They can, and are constantly being made appreciably better.

It is true that at some point, observatories on Earth will be mostly replaced by space based observatories. But at our current point the cost and practicality of such a goal remains in the future.

The good news is, things like segmented mirrors can be applied directly to space based solutions – making our new generation of telescopes a good test bed for technology that can be deployed in space.

Happy New Year!

2. Locutus says:

I’ve read that the same limitations exist in creating minimum feature sizes on computer processors. However, despite the fundamental limit being about 50 nanometers at the wavelengths they currently use, they’re able to shrink feature sizes significantly below that (the current best in consumer technology is 14 nm). They use a process called computational photolithography. I don’t really understand the process, but is there anything in it that could enhance the resolving power of telescopes?

https://en.wikipedia.org/wiki/Photolithography#Resolution_in_projection_systems

https://en.wikipedia.org/wiki/Computational_lithography

3. David says:

Locutus:

“They use a process called computational photolithography. I don’t really understand the process, but is there anything in it that could enhance the resolving power of telescopes?”

Not unless you are looking down the wrong end of the telescope 🙂

I’m being serious in the glib answer too. Photolithography and similar techniques are extremely important in astronomy – but not perhaps where you were thinking.

These techniques are directly responsible for our constantly improving and enhanced detectors – the devices that do the actual observing.

Smaller and smaller pixels let us get more resolution and sensitivity in our light gathering area.

So yes, such techniques are very important in astronomy – though the research and end results are being carried out by companies not directly involved in astronomy (we merely reap the bounty).

4. Locutus says:

David:

So practically, what does this mean? Do we get better quality images maybe in terms of color (or less fuzz or something) without actually increasing the resolving power?

5. David says:

Locutus:

It can mean a number of things, depending on the detector. It can mean more pixels – so greater resolution packed into the same area. It can mean more sensitivity – so the ability to detect fainter light. It can mean increased wavelength range (e.g., the ability to see IR better than before).

As an example. I am currently part of a team redesigning one of our instruments. This version is so much more sensitive than previous versions that we are actually detecting alpha particles (radiation) from a coating on one of the lenses. Same lens, just a more sensitive detector than the previous version (since they are alpha particles, they are easy to block with a very thin sheet of glass).

Telescopes are all about light gathering. Larger mirrors mean more light. Better detectors mean more light.

6. Locutus says:

David:

Thank you. That’s very interesting.

7. Ikenna says:

Even the little mathematics you displayed here [θ ≈ 1.22(0.5×10⁻⁶/10) ≈ 0.00000006 rad] puts me off balance, let alone the topics you discussed in the question segment, which include “how a scientist converts ideas into a maths formula”.
THANKS A LOT!

8. Con says:

HA this is all rubbish. NASA hasn’t landed people on the moon but the Russians landed there in 1962 but decided to set up a nuke test site there. That’s what those Lunar Transient Phenomena are. Most of them are nuke tests on the moon. They have built their bases underground too so the orbiters can’t see them. By the way all these lander pics are fakes. All those moon rocks are fakes too.

9. David says:

Con:

As an ex-NASA scientist – I can assure you that we did land people on the moon. Multiple times.

But go ahead and believe the woo-woo if you want.