Thursday, March 4, 2010

Depth perception and optics

It's commonly assumed that two or more vision sources are required for depth perception. That's not strictly true; on one hand, simple experience can teach a monocular person to judge distances.

On the other hand, there's this idea I had...

*sidetrack*
It stemmed from thinking about image stacking in HDR photos. Image stacking is simply taking several photos of the same scene, where the only change between them is a known, measured setting of the camera. Typically, in HDR, it's your exposure adjustment, or how much light you demand your camera collect before saving off the frame. Since your camera's sensor can only precisely measure a finite range of light levels, taking several frames where you move that range around lets you capture more detail in both the bright and dark areas.

However, image stacking needn't *only* be done with exposure adjustments. The concept of stacking applies to any variable you can measurably control while observing a scene; it just happens that exposure adjustment is the most immediately useful setting to vary.
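To make that concrete, here's a simplified sketch of exposure stacking, assuming a linear sensor response and frames that are already aligned; merge_exposures and its mid-tone weighting are illustrative choices of my own, not any particular HDR tool's method.

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Merge differently exposed frames of one scene into a radiance estimate.

    frames: float arrays scaled to [0, 1], all of the same (static) scene
    exposure_times: relative exposure time used for each frame
    """
    radiance_sum = np.zeros_like(frames[0], dtype=np.float64)
    weight_sum = np.zeros_like(frames[0], dtype=np.float64)
    for frame, t in zip(frames, exposure_times):
        # Trust mid-tone pixels most; clipped shadows and highlights count least.
        weight = 1.0 - 2.0 * np.abs(frame - 0.5)
        radiance_sum += weight * frame / t
        weight_sum += weight
    return radiance_sum / np.maximum(weight_sum, 1e-6)
```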
*end sidetrack*

*second sidetrack*
And now a brief bit about aperture width and depth-of-field. A camera's aperture is the hole that allows light to pass through and land on its sensor. It's exactly analogous to your pupil. The larger your pupil, the more light passes through and lands on your retina. The smaller your pupil, the less light lands on your retina.

One weird side effect of optics and aperture size, though, has to do with focusing. With a narrow aperture (such as when you're outside on a bright, snow-covered day), your depth of field is very large, meaning that you can see near objects in focus just as well as you can see distant objects, with no additional effort on the part of your lens (whether that of your eye or of your camera). On the other hand, when the aperture is very wide (indoors, lights off, etc.), the depth of field is very narrow. That means you (or your camera) need to adjust the configuration of your lens to focus on near or distant objects, regardless of whether you have one eye or two.

All this boils down to one interesting fact: You can know where your field of focus is if you know your aperture size and your lens configuration.
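That fact is just the standard depth-of-field approximation from photography. Here's a small sketch of it in Python, using the usual hyperfocal-distance formulas; focus_limits is a hypothetical helper name of my own, and the 0.03 mm circle of confusion is just a typical full-frame value.

```python
def focus_limits(focal_length_mm, f_number, focus_dist_mm, coc_mm=0.03):
    """Near and far limits of acceptable focus for given lens settings.

    Standard depth-of-field approximation: everything between the two
    returned distances (in mm) renders acceptably sharp.
    """
    f, N, s, c = focal_length_mm, f_number, focus_dist_mm, coc_mm
    hyperfocal = f * f / (N * c) + f
    near = s * (hyperfocal - f) / (hyperfocal + s - 2 * f)
    far = float('inf') if s >= hyperfocal else s * (hyperfocal - f) / (hyperfocal - s)
    return near, far

# A 50 mm lens focused at 3 m:
print(focus_limits(50, 1.8, 3000))  # wide aperture: roughly 2.8 m to 3.2 m in focus
print(focus_limits(50, 16, 3000))   # narrow aperture: roughly 1.9 m to 6.9 m in focus
```

Run it both ways and you see exactly the effect described above: the wide aperture gives a thin slice of focus, the narrow one a deep slab.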
*end second sidetrack*

What all this means is that you could take several images, each with known aperture and lens characteristics, and learn how far away objects in the scene are simply by observing how in-focus each area of the scene is for each known field of focus.

So, yes, you could calculate measured distances to objects in your scene simply by stacking images of that scene, as long as you know the focus range of each of those snaps.
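As a rough sketch of how a machine could do that, assuming aligned grayscale frames and a known focus distance for each one; depth_from_focus and the Laplacian-variance sharpness measure are just one illustrative way to do it:

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def depth_from_focus(stack, focus_distances):
    """Rough per-pixel depth map from a focus-bracketed stack of images.

    stack: grayscale float arrays, one per focus setting
    focus_distances: the known focus distance for each frame
    """
    sharpness = []
    for frame in stack:
        # Local energy of the Laplacian as a simple "how in-focus is this patch" score.
        lap = laplace(frame)
        sharpness.append(uniform_filter(lap * lap, size=9))
    # For every pixel, pick the frame in which it looked sharpest...
    best = np.argmax(np.stack(sharpness), axis=0)
    # ...and report that frame's focus distance as the depth estimate.
    return np.asarray(focus_distances)[best]
```

The whole trick is that a patch is sharpest in the frame whose field of focus contains it, so the argmax over the stack is effectively reading distance off the lens settings.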

Honestly, though, I think this applies more to machine vision than human vision. It's also probably most useful right now for telescopes.
