Why Can't We See Stars During the Day?

What causes our inability to see stars during the day? I always thought sunlight would bounce off the particles in the air, illuminating them so that the stars would no longer stand out. However, people argue that the reason there are no stars in Moon landing pictures is that they were taken during the lunar day. But the Moon has no atmosphere, so I must be wrong.

Rebecca Pitts:

Your thinking is not wrong, merely incomplete: you're applying the same principles to two different situations. Sunlight can scatter off any substance between a light source and a detector (including all the parts of your eyeball in front of your retina), but even with nothing in between to scatter it, it would still be hard to see the stars. The Sun, and bodies that reflect its light, are just too darn bright compared to their surroundings.

To quantify just how much brighter the Sun and the daytime sky are than the stars, let me start by introducing the wonky way astronomers gauge how bright things are relative to each other or to a standard star. It's called the magnitude system, and it barely makes sense today because it's a 2000-year-old hand-me-down from Hipparchus/Ptolemy (it's so old we can't even agree on who's responsible). The relevant details are summed up in the following images:

Astronomy 3130 [Spring 2015] Home Page, Photometry lecture.

(By the way, that infographic is overly optimistic in one regard: the naked-eye limit in most cities is more like 3rd magnitude.)

To put the Sun and Moon on that scale and show you just how far the magnitude system can go into the negatives, look at this:

The daytime sky is bright enough that it outshines anything fainter than magnitude -4. So, yes, on Earth the atmosphere is in fact the problem, because of Rayleigh scattering.

Now what about situations where the atmosphere isn’t a factor?

Combining information from the two figures, the full Moon is at least 25,000 times brighter than Sirius. The Sun is about 400,000 times brighter than that, which makes it roughly 10,000,000,000 times brighter than the brightest star in the night sky. The luminous intensity of a candle, not coincidentally, is about 1 candela (the SI unit of luminous intensity). What's something 10,000,000,000 times brighter than a candle? Try the Luxor Sky Beam in Las Vegas, which shines at 42.3 billion candela. Seeing a star with the Sun in your field of view will always be at least as hard as spotting a handful of candles while staring down the beam of the most powerful spotlight on Earth.
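
If you want to check those factors, the magnitude scale works out to a factor of 100 in brightness for every 5 magnitudes of difference. Here's a minimal Python sketch using commonly quoted apparent magnitudes (roughly -26.7 for the Sun, -12.7 for the full Moon, and -1.46 for Sirius; exact values vary a little by source):

```python
# Convert a difference in apparent magnitude to a brightness ratio.
# By definition, 5 magnitudes correspond to a factor of 100 in brightness.

def brightness_ratio(mag_bright, mag_faint):
    """How many times brighter the first object is than the second."""
    return 100 ** ((mag_faint - mag_bright) / 5)

# Commonly quoted apparent magnitudes (approximate).
SUN, FULL_MOON, SIRIUS = -26.7, -12.7, -1.46

print(f"Full Moon vs. Sirius: {brightness_ratio(FULL_MOON, SIRIUS):,.0f}x")  # ~31,000x
print(f"Sun vs. full Moon:    {brightness_ratio(SUN, FULL_MOON):,.0f}x")     # ~400,000x
print(f"Sun vs. Sirius:       {brightness_ratio(SUN, SIRIUS):.1e}x")         # ~1e10
```

(The "at least 25,000 times" figure above is just the conservative end of that Moon-to-Sirius range.)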

The ratio of signal intensity (brightness, in the case of light) between the faintest detectable signal and the point where your instrument saturates (maxes out) is called dynamic range; it's essentially the maximum contrast ratio. So to photograph the Sun and have another star show up in the same image, your detector needs a dynamic range of 10 billion. The dynamic ranges of existing technologies are as follows (a quick comparison sketch follows the list):

  • Charge-Coupled Devices (CCDs, the detectors in digital cameras): 70,000–500,000 depending on the grade (the 16-bit analog-to-digital conversion that typically accompanies consumer- and education-grade CCDs cuts this to about 50,000)
  • Charge-Injection Devices (the fancier cousin of the CCD where pixels are handled individually rather than by rows and columns): 20 million, as this PDF demonstrates.
  • Human Eye: widely variable, but tops out around 15,000
  • Photographic Film: a few hundred. Yep—that’s it.
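
Setting those figures against the 10-billion requirement shows how large the gap is; here's a rough sketch, with the detector numbers taken from the list above and rounded:

```python
# Compare each detector's dynamic range (figures from the list above, rounded)
# to the ~1e10 contrast between the Sun and the brightest night-sky star.

REQUIRED = 1e10  # Sun vs. Sirius brightness ratio

detectors = {
    "Photographic film":       3e2,
    "Human eye":               1.5e4,
    "Consumer-grade CCD":      5e4,
    "Scientific-grade CCD":    5e5,
    "Charge-injection device": 2e7,
}

for name, dynamic_range in detectors.items():
    shortfall = REQUIRED / dynamic_range
    print(f"{name:<25} {dynamic_range:>12,.0f}   short by ~{shortfall:,.0f}x")
```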

To add insult to injury, film doesn’t even react to 98 to 99 percent of the light that hits it. Your eye is every bit as inefficient, but at least it has a dynamic range closer to that of a CCD than to film. CCDs will register upwards of 90 percent of the incident light. You can read about other advantages of CCDs here (their stat on the dynamic range of film is a tad low). But back in the 1960s, CCDs didn’t exist. NASA had to make do with film. (Here’s a whole article on NASA’s film supplies and their specs during the Apollo Program.)
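
Putting those efficiency figures side by side (the eye's value is an assumption, pegged to the "every bit as inefficient" remark above):

```python
# How much of the incoming light each detector actually records,
# using the efficiency figures quoted above.

incident_photons = 100_000  # arbitrary example

quantum_efficiency = {
    "Photographic film": 0.02,  # reacts to only ~1-2% of the light
    "Human eye":         0.02,  # assumed: "every bit as inefficient" as film
    "CCD":               0.90,  # registers upwards of 90%
}

for detector, qe in quantum_efficiency.items():
    recorded = int(incident_photons * qe)
    print(f"{detector:<18} records ~{recorded:>7,} of {incident_photons:,} photons")
```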

At the Earth’s (and Moon’s) distance from the Sun, the average square meter of surface receives about 342 watts per square meter (W/m^2) of solar power (see Solar Radiation at Earth). With the Sun directly overhead, that number is closer to 1368 W/m^2, but let’s stick with 342 W/m^2, the average over the whole surface, since most of the sunlit side meets the Sun’s rays at an angle anyway. The Moon reflects about 12 percent of the light that hits it. That doesn’t seem like a lot, but for the Apollo astronauts, it was like standing on a surface where every square meter is, on average, about as bright as a typical desk lamp. The astronauts’ white suits and the highly reflective landing modules were even brighter. As far as the film was concerned, the Apollo astronauts were floodlights standing in a lamp shop. That kind of light pollution doesn’t make for good astrophotography.
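
As a quick back-of-the-envelope check on that desk-lamp comparison, using the numbers from the paragraph above:

```python
# Power reflected by an average square meter of the lunar surface.

solar_irradiance_avg  = 342.0   # W/m^2, averaged over the surface
solar_irradiance_noon = 1368.0  # W/m^2, Sun directly overhead
moon_albedo = 0.12              # the Moon reflects about 12% of incident light

print(f"Average square meter reflects ~{solar_irradiance_avg * moon_albedo:.0f} W")    # ~41 W
print(f"With the Sun directly overhead: ~{solar_irradiance_noon * moon_albedo:.0f} W")  # ~164 W
```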

Regardless of the technology used, the correct exposure time is important for getting a good picture of what you want and as little as possible of what you don’t want. The background stars were not important to the Apollo crews’ studies of the Moon, so their exposure times were calculated to get the best images of Moon rocks, astronauts, landing sites, and so on. The upshot is that exposure times for most Apollo photographs were so short that the photo emulsion never received enough light from the background stars to react.
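
To see why short exposures doom the stars: ignoring film quirks like reciprocity failure, the exposure time needed to record a target scales roughly inversely with its brightness. The numbers below are purely illustrative assumptions, not actual Apollo camera settings:

```python
# Illustrative only: how exposure time scales with target brightness.

def required_exposure(reference_exposure_s, times_fainter):
    """Exposure needed for a target `times_fainter` than the reference subject."""
    return reference_exposure_s * times_fainter

surface_exposure_s = 1 / 250   # assumed snapshot exposure for the sunlit lunar surface
star_times_fainter = 1e6       # assumed ratio, purely illustrative

needed = required_exposure(surface_exposure_s, star_times_fainter)
print(f"A star that much fainter would need ~{needed:,.0f} s")
# Thousands of seconds, versus a small fraction of a second for the sunlit scene.
```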

There are, however, images taken by the Apollo crews with stars in them. Stars were never the targets, though, so they don’t look very good, as these UV images from Apollo 16 show:

NASA

NASA (Note: false-color UV photo of Earth’s geocorona in three filters, rather poorly aligned judging by the stars)

This post originally appeared on Quora.