STUPIDIUM

I have to admit I borrowed this word from a MacWorld writer commenting on the tech bloggers of the day; it's a good word. But I want to apply it to the many photographers who have not researched or understood the basics of digital pixel imaging. It's not that personal computer digital imaging is all that new; I've been working with it for 20 years. But the average photographer of the past, used to film, has been led to believe digital is just another kind of photography, much like film photography, and that the same mind-set works just as well. Well, it works for the camera companies, but not for their customers, who are being encouraged to buy what they may not need, and who are not demanding more of what digital can provide and they could use if it were available.

With film, the light the lens focuses on the frame exposes the film and records the amount of light exactly as the lens has focused it, in fine detail. That exposure forms a physical latent image in the three layers of the color film, making a Red, a Green and a Blue image superimposed on top of one another. Then, after exposure, the film is developed, and those emulsion layers become physical clouds of dye that represent what the lens focused and recorded.

Does a digital sensor capture images similarly? No, it doesn't, absolutely not. To keep the numbers round, let's say the sensor chip has 3,000 x 4,000 sensor sites; that's 12 megapixels, or 12,000,000 sensors. To keep it simple, let's assume the sensors are equally divided between Red, Green and Blue: 4 million of each. (In practice most sensors use a Bayer pattern with twice as many green sites as red or blue, but the principle is the same.) Yet with a 12-megapixel camera you get an image file that has 12 million pixels, each with a full RGB value. So how does that happen? Each sensor site measures only one color, so the two missing values for every pixel are calculated from the surrounding sites of the other colors; an "educated" guess is made by a processor chip as to what the full color of each pixel should be.
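
To make that concrete, here is a minimal sketch of the interpolation idea in Python, assuming a simple RGGB Bayer layout and plain neighbor averaging. The function name and the averaging method are my own illustration; actual cameras use proprietary, far more sophisticated algorithms.

    import numpy as np

    def demosaic_sketch(raw):
        """Fill in the two colors each photosite never measured by
        averaging the nearest sites that did measure them. `raw` is a
        2-D array of single-color readings, one number per sensor site,
        laid out in an assumed RGGB Bayer pattern."""
        h, w = raw.shape
        mask = np.zeros((h, w, 3), dtype=bool)
        mask[0::2, 0::2, 0] = True   # red sites
        mask[0::2, 1::2, 1] = True   # green sites on red rows
        mask[1::2, 0::2, 1] = True   # green sites on blue rows
        mask[1::2, 1::2, 2] = True   # blue sites
        rgb = np.zeros((h, w, 3))
        for c in range(3):
            known = np.where(mask[:, :, c], raw, 0.0)
            count = mask[:, :, c].astype(float)
            total = np.zeros((h, w))
            hits = np.zeros((h, w))
            # Sum each 3x3 neighborhood of measured samples -- the
            # "educated guess" for sites that never saw this color.
            # (np.roll wraps at the frame edges, a simplification.)
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    total += np.roll(np.roll(known, dy, 0), dx, 1)
                    hits += np.roll(np.roll(count, dy, 0), dx, 1)
            rgb[:, :, c] = total / np.maximum(hits, 1.0)
        return rgb  # full RGB at every site, mostly guessed values

Even this crude version shows the point: two of the three color numbers in every finished pixel were never measured at that spot; they were calculated from the neighbors.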

So now let's consider what is in each of these educated pixels and where the information came from. Let's say you are taking a picture across the street of your neighbor's house and yard, and at your lens setting the image area measures 60 feet high and 80 feet wide. When the camera is focused and framed, it is in effect projecting a virtual matrix onto the scene, one cell per sensor site. Each cell is roughly a quarter-inch square: 60 feet is 720 inches, and 720 divided by 3,000 sites is 0.24 inch (the width works out the same). So when the exposure is made, each sensor site measures the Red, Green or Blue light value averaged over its quarter-inch square of the scene's matrix. If a cell contains part of a green leaf and part of a brown fence, the measurement is the average of those light values. So is your digital camera recording an image, or is it measuring a pixel matrix, sensor site by sensor site, and recording the average value for each cell within the matrix? Yes, a digital camera is just that: a measuring device recording number values for the average light reflected from each separate cell in the matrix of the scene. Does any physical picture result? No, just a file of numbers.
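
The arithmetic is simple enough to check in a few lines of Python. Only the scene and sensor dimensions come from the example above; the reflectance and coverage numbers below are made up purely for illustration.

    scene_h_in = 60 * 12           # 60 feet tall = 720 inches
    scene_w_in = 80 * 12           # 80 feet wide = 960 inches
    sites_h, sites_w = 3000, 4000  # the 12-megapixel sensor

    patch_h = scene_h_in / sites_h     # 0.24 inch
    patch_w = scene_w_in / sites_w     # 0.24 inch
    print(f"each site sees a {patch_h:.2f} x {patch_w:.2f} inch patch")

    # A site whose patch is part green leaf, part brown fence records
    # only one blended number (values here are assumed, not measured):
    leaf, fence = 0.30, 0.55       # assumed reflected-light values
    leaf_fraction = 0.4            # assumed share of the patch that is leaf
    reading = leaf_fraction * leaf + (1 - leaf_fraction) * fence
    print(f"recorded value: {reading:.3f}")  # 0.450 -- the detail is gone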

So what does this recording look like? If the file is read by a digital editor, it is line after line of numbers: an X and a Y number for the location of each pixel in the matrix, and a Red, a Green and a Blue number for each of those colors. If the camera is a high-end dSLR, a Raw file can be made pretty much as the information comes off the sensor (a setting called neutral or faithful), with the camera set for no sharpening, no contrast and no saturation; in other words, an essentially unprocessed Raw image as the sensor records it.
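
As a toy illustration of that file of numbers (random values standing in for a real capture), a few lines of Python can print it the way a digital editor presents it:

    import numpy as np

    # A tiny 2 x 3 pixel stand-in for a decoded image file.
    image = np.random.randint(0, 256, size=(2, 3, 3))

    # Printed pixel by pixel, the "picture" is nothing but locations
    # and values: an X, a Y, and a Red, Green and Blue number each.
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            r, g, b = image[y, x]
            print(f"({x},{y})  R={r:3d}  G={g:3d}  B={b:3d}")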

I am a curious person, so with the first dSLR I had that offered this option, I used it, and with the camera on a tripod I also shot the same scene in normal mode, with average sharpening, contrast and saturation enhancement. With both images opened in Photoshop, there was only a vague similarity between the two: the neutral, unsharpened, un-enhanced image was extremely soft, very flat in contrast, and had little color saturation. In fact, using the tools in Photoshop I spent many hours trying to make the off-the-sensor image look like the "normal" shot of the same scene, and never even got close. Taking a small part of each image and zooming in until each pixel could be seen clearly, the pixels looked somewhat alike, in that every pixel in an image is an even, flat square of tone; but the unprocessed, off-the-sensor pixels were extremely low in value and differed little from one another.
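
The direction of the processing matters here. What the camera does in normal mode is roughly sketched below in Python; the tone curve and the multipliers are my own assumed stand-ins, not any manufacturer's recipe. Going the other way, from the flat file back to the finished look, is what took hours in Photoshop.

    import numpy as np

    def enhance(rgb, contrast=1.6, saturation=1.5):
        """Crude stand-ins for the boosts a camera applies in normal
        mode; the parameter values are assumptions for illustration."""
        out = (rgb - 0.5) * contrast + 0.5        # steepen the tone curve
        gray = out.mean(axis=-1, keepdims=True)   # per-pixel luminance
        out = gray + (out - gray) * saturation    # push colors from gray
        return np.clip(out, 0.0, 1.0)             # keep values in range

    # Flat, low-value pixels like the off-the-sensor image:
    flat = np.array([[[0.42, 0.45, 0.44], [0.48, 0.46, 0.43]]])
    print(enhance(flat))  # visibly more contrast and color separation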

What conclusion can one come to after looking carefully at what a dSLR image sensor records, and at what the images look like when "normal" settings are used? I have asked this in several items I have written, and no photographer has acknowledged making the tests I have, nor ever examining what an unprocessed, off-the-sensor raw image looks like. Apparently no one has read about how the scientists at NASA use computing, and how the images made by the Hubble telescope are processed. I think the answers are rather obvious, but it seems to me photographers want to believe using a dSLR is no different from what we did with film well back in the 20th century. I'd like very much to hear from you, and what your thinking is; just send a note to David B. Brooks at: goofotografx@gmail.com