Do Cameras Really Add 10 Pounds?

For everyone who’s ever been unhappy with the way they look in a picture or on video, there’s almost always someone on hand to comfort them by pointing out that the camera “adds ten pounds” to its subjects.

Sometimes this just excuses actual flabbiness, but some people swear up and down that the phenomenon is real and cameras actually fatten us up. What’s going on?

Flash Problems

A few different things, one simply being the way the subject is shot. Strong, flat light directed straight at a person—like from a bad lighting setup or the camera’s flash—flattens the features of a subject by killing shadows. Those head-on shots of you at the family reunion look bad, in part, because your cousin’s camera flash flattened and fattened you.

The camera itself also shoulders some of the blame. Telephoto and wide-angle lenses each distort an image in their own way: telephotos compress the apparent distance between subject and background, while wide-angle lenses exaggerate whatever sits closest to the camera. No matter the type of lens, though, there’s also the problem of a camera having just one of them.

Seeing in Stereo

Most of us look at the world through two eyes, and our brains take what we see with each one and fuse it into a single image, which allows us to perceive depth. With only one eye—its lens—a camera lacks our accurate depth perception. Unless the photographer creates some illusion of depth with distance cues, light and shadow, or careful composition, photos and their subjects come out looking flatter than they really are, and flatter also reads as wider.

Another difference that factors in is the way a two-eyed view of the world and a one-eyed view capture the background behind the subject. Background features hidden from one eye can be seen by its partner, and together the two take in overlapping views that a single eye or camera can’t. That means a subject blocks more of the visible background from a lone lens than it does from two eyes working together, so it covers a bigger share of the backdrop in the single-lens view.
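To put rough numbers on that, here’s a minimal geometry sketch. It isn’t from the article; the mug width, distances, and eye spacing below are illustrative assumptions. It compares how much of a background wall is hidden from a single lens with how much stays hidden from two eyes working together.

```python
# Minimal geometry sketch (illustrative assumptions, not measurements from the article):
# how much background a subject hides from a single lens versus from two eyes together.

def hidden_width_one_eye(subject_width, subject_dist, background_dist):
    # A single viewpoint projects the subject onto the background plane,
    # scaling its width by background_dist / subject_dist (similar triangles).
    return subject_width * background_dist / subject_dist

def hidden_width_two_eyes(subject_width, subject_dist, background_dist, baseline):
    # Only the strip hidden from *both* eyes stays invisible; each eye
    # peeks around a different side of the subject, shrinking the overlap.
    one_eye = hidden_width_one_eye(subject_width, subject_dist, background_dist)
    overlap = one_eye - baseline * (background_dist - subject_dist) / subject_dist
    return max(overlap, 0.0)  # 0 means the eyes can see all the way around the subject

if __name__ == "__main__":
    # Assumed numbers, in centimeters: an 8 cm mug 60 cm away, a wall 100 cm away,
    # and roughly 6.5 cm between the eyes.
    w, d, D, b = 8.0, 60.0, 100.0, 6.5
    print(f"hidden from one lens: {hidden_width_one_eye(w, d, D):.1f} cm")
    print(f"hidden from two eyes: {hidden_width_two_eyes(w, d, D, b):.1f} cm")
```

With those assumed numbers, a lone lens hides about 13 centimeters of wall behind an 8-centimeter mug, while two eyes together hide only about 9, so the mug claims a noticeably smaller share of the backdrop when both eyes are working.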

Michael Richmond, a physics professor at the Rochester Institute of Technology, illustrates this effect with a few photos of a coffee mug against a patterned background sheet. He took one photo straight on, as the lone eye of a camera would see it; one photo four centimeters to the left of center, the way your left eye would see it if your nose were directly at the center; and one photo four centimeters to the right of center, the way your right eye would see it. He then merged the perspectives of the latter two “eyes” by cutting both pictures through the center of the mug and fusing the right side of the right eye's picture with the left side of the left eye's picture to get something like what the brain would create.
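Richmond’s splicing step is easy to approximate in code. The sketch below is a minimal illustration, assuming the Pillow imaging library and two hypothetical files, left_eye.jpg and right_eye.jpg, shot from viewpoints four centimeters left and right of center with the mug centered in each frame. It keeps the left half of the left eye’s picture and the right half of the right eye’s picture, roughly what the brain does when it fuses the two views.

```python
from PIL import Image

# Hypothetical inputs: what each eye would see, with the mug centered in the frame.
left = Image.open("left_eye.jpg")
right = Image.open("right_eye.jpg")

w, h = left.size
mid = w // 2  # cut line: assumes the mug sits at the horizontal center of both shots

# Splice the left eye's left half onto the right eye's right half.
fused = Image.new("RGB", (w, h))
fused.paste(left.crop((0, 0, mid, h)), (0, 0))
fused.paste(right.crop((mid, 0, w, h)), (mid, 0))
fused.save("fused_two_eye_view.jpg")
```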

In both pictures, the mug is the same number of pixels across, but there’s a huge difference in the way the camera view and the combined “two-eyed” view capture the background. In the camera view, the background appears narrower, and the mug looks much “fatter” against it.