Seeing Like a Camera


One of the hardest things to learn as a photographer is what will and won't make a good picture. It's easy (I think) to see things that are beautiful, but will a photograph of them be beautiful, too? There are a number of factors that go into this. They mostly come down to the way a camera sees things, which is different from the way we do. Let's look at some of them.

Cameras see only light and shadow
This is, I think, the biggest difference and the hardest to learn. When we look at something, our eyes perceive light and shadow, but that's not what our brain sees. Our brain sees objects. We process what our eyes take in to compensate for varying illumination, automatically darkening brightly lit areas and brightening darker ones. This is not the same as reducing contrast or adjusting the curves on an image. Your brain actually recognizes varying illumination as different from varying brightness. It implicitly knows how light interacts with objects and uses cues like motion and depth to identify those objects and infer the illumination. Once you have a flat picture, all that is lost: the picture is evenly illuminated, the depth and motion cues are gone, and only the differences in brightness remain. You must train yourself to disregard the objects and see just the light and shadow. Sometimes it helps to sort of "defocus" your eyes, or close one eye, but mostly it's just a matter of practice.

Cameras don't record movement
We're talking still cameras here, not video. Things that are beautiful because of their motion (waves on a beach, for example) lose something when a camera sees them. Without the motion, the beauty is reduced, if not lost entirely. As a photographer, you need to learn to see the static image, not the motion. A corollary to this has to do with the way our brains process images. We cue in to motion, allowing us to detect relatively small objects easily if they are moving against a static background. Thus, a hawk soaring a couple hundred feet above you looks beautiful, but it takes a heck of a telephoto lens to capture it well. Without the movement to cue the eye in, it just looks like a small speck in a big field of sky.
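
To put rough numbers on that, here's a quick back-of-the-envelope sketch in Python. It uses a simple pinhole-projection model, and the wingspan, distance, and framing figures are just illustrative assumptions of mine:

    import math

    wingspan_m = 1.2          # assumed hawk wingspan
    distance_m = 60.0         # "a couple hundred feet" overhead
    sensor_width_mm = 36.0    # full-frame sensor width
    frame_fraction = 0.25     # want the bird to span a quarter of the frame

    # Pinhole projection: image size = focal length * (subject size / distance)
    image_size_mm = frame_fraction * sensor_width_mm
    focal_length_mm = image_size_mm * distance_m / wingspan_m
    print(f"focal length needed: about {focal_length_mm:.0f} mm")   # ~450 mm

    # The bird's angular size, for comparison:
    angle_deg = math.degrees(2 * math.atan(wingspan_m / (2 * distance_m)))
    print(f"angular size: about {angle_deg:.2f} degrees")           # ~1.15 degrees

Roughly a 450 mm lens just to make the bird span a quarter of the frame: a heck of a telephoto indeed.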

Cameras don't record depth
We live in a three-dimensional world, and our eyes and brain are attuned to seeing things in all three dimensions. Photographs are two-dimensional. We have all sorts of tricks for seeing depth. The most obvious is the parallax from our binocular vision (in plain English, the difference between what our left and right eyes see). Close objects look very different to our two eyes; distant objects less so. In addition, the muscles that control the lens of our eye tell our brain how far away the point of focus is. We use familiar objects for size cues, too. We know about how big a car is, so if it looks really small, it must be far away, and vice versa. (The reverse of this is the forced perspective used in movies to put a vast landscape on a small soundstage, or to make a hobbit look shorter than a human.) There are motion cues with familiar objects as well. We know about how fast a person walks, so we can judge a person's distance in part by how fast they traverse our field of vision. If they flash across our view, they must be close (or unusually fast). If it takes longer, they are farther away. All of this is lost to the camera, so we must learn to disregard it when picturing a shot.
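
As a sketch of why binocular parallax only helps up close, here is a small Python calculation. The 65 mm eye spacing is a typical textbook figure, not anything exact:

    import math

    baseline_m = 0.065   # typical spacing between human eyes, about 65 mm

    # Angular difference between the two eyes' views of a point straight ahead
    for distance_m in (0.5, 2.0, 10.0, 100.0):
        parallax_deg = math.degrees(2 * math.atan(baseline_m / (2 * distance_m)))
        print(f"{distance_m:6.1f} m away -> {parallax_deg:6.3f} degrees of parallax")

By ten meters the two eyes see nearly the same image (about a third of a degree apart), and by a hundred meters the difference is negligible, which is why the other cues take over at a distance.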

Cameras have a limited dynamic range
The human eye is amazing in its ability to see detail in both bright light and dark shadow. We can look at a bright sunlit scene with some parts in deep shadow and see fine detail in both areas at the same time. Cameras lack the dynamic range to do this. Many cameras and most printers and monitors can reproduce only 8 bits of brightness in any particular primary color (red, green, and blue for cameras and monitors; cyan, magenta, yellow, and black for printers). Even expensive cameras in RAW mode rarely record more than 12 bits of information per channel. The eye has a static range of only about 6½ bits, but because of the way it adjusts both chemically and physically while taking in an image, the effective dynamic range is more like 20 bits. Thus a scene with a wide range of brightness will not be rendered well by any camera or display. Again, this is something we must learn to see.
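
Treating each bit as roughly one photographic stop (a factor of two in brightness, which is a simplification since real sensors and encodings aren't perfectly linear), those numbers turn into contrast ratios like this:

    # Roughly: usable contrast ratio = 2 ** bits of dynamic range
    for label, bits in [("8-bit JPEG/monitor", 8),
                        ("12-bit RAW file", 12),
                        ("eye, static", 6.5),
                        ("eye, adapting", 20)]:
        print(f"{label:18s}: about {2 ** bits:,.0f}:1")

That's roughly 4,000:1 for a good RAW file against about 1,000,000:1 for the adapting eye, which is why a scene that looks fine to you can come out with blown highlights or blocked-up shadows.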

Cameras have only one white balance per shot
The recorded color of an object depends not only on its own color, but also on the color of the light illuminating it. This is why we must adjust the white balance on our cameras: otherwise, images lit by a "warm" light would look too red, and images lit by a "cool" light would look too blue. That works great as long as all the sources of illumination are the same color. Unfortunately, that's not always the case. If you are taking an interior shot using tungsten illumination, but the subject is also being lit by sunlight from a nearby window, the image will not have a consistent color balance. Again, our brains are amazing at using their sense of objects and illumination to compensate for this. We don't see anything odd about the scene described above, but when we look at a photograph of it, the imbalance of colors jumps out at us. This one is hard to train yourself to see, but if you pay attention to what the illumination sources are, you can reason it out.
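
To make that concrete, here is a small Python sketch. White balance is, to first order, one gain per color channel; the RGB values below are made up for illustration, in linear 0-to-1 terms:

    # A white card photographed under two different lights (illustrative values)
    tungsten_card = (1.00, 0.71, 0.47)   # under warm tungsten: too red
    daylight_card = (0.78, 0.82, 1.00)   # under cool window light: too blue

    # Pick the per-channel gains that make the tungsten shot neutral
    gains = tuple(1.0 / c for c in tungsten_card)

    def apply_white_balance(pixel):
        return tuple(round(c * g, 2) for c, g in zip(pixel, gains))

    print(apply_white_balance(tungsten_card))   # (1.0, 1.0, 1.0)    -- neutral now
    print(apply_white_balance(daylight_card))   # (0.78, 1.15, 2.13) -- far too blue

One set of gains can neutralize one light or the other, but never both at once, which is exactly the mixed-lighting problem.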

Cameras have a limited field of view and a fixed aspect ratio
When we look at something, our brain holds an amazing image. We can see small, precise details, but they are held in the context of our entire field of vision. Because we are always scanning a scene with our eyes, we assemble an image that covers an incredibly wide angle of view but has intricate detail wherever needed. A camera, on the other hand, has just a single angle of view (depending on the lens used), with a uniform level of detail recorded across that angle. Thus vast landscapes with fascinating small details are not easily captured by cameras (particularly without going to medium or large format). The photographer must pick some balance between the vastness and the detail, and that compromise may ultimately ruin the shot.
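
For reference, the angle of view of an ordinary (rectilinear) lens follows a standard approximation, shown here in Python for a full-frame sensor:

    import math

    def angle_of_view_deg(sensor_mm, focal_length_mm):
        # Standard approximation for a rectilinear lens focused at infinity
        return math.degrees(2 * math.atan(sensor_mm / (2 * focal_length_mm)))

    sensor_width_mm = 36.0   # full-frame sensor width
    for f_mm in (24, 50, 200, 450):
        print(f"{f_mm:4d} mm lens -> {angle_of_view_deg(sensor_width_mm, f_mm):5.1f} degrees wide")

A 24 mm wide-angle takes in about 74 degrees but renders small details tiny; a 450 mm telephoto resolves the details but sees less than 5 degrees of the scene. The eye, by scanning, effectively gets both at once.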

Frequently when I am shooting I will stop and stare intently at my subject for a minute or so (this can be very unnerving to models, so I try to warn them about it beforehand!). What I'm doing is trying to see the image the way the camera does. I don't consciously go through the above items (not anymore, anyway), but I am weighing the effect of all of them on the image the camera will actually record. I'm not perfect (as many of my shots will show!), but with practice I'm learning to see what the camera sees.

Good luck and happy shooting!
