I recently measured the linearity of my Nikon D70 sensor. I'd like to hear from others who have done or would like to do similar measurements to compare notes and discuss methods, issues, pitfalls, etc.
I'm talking about the linearity of the dSLR sensor of course.
I attached my calibrated ColorVision Spyder colorimeter to the back of an 8x10" piece of glass. In front I attached an 8x10" piece of black matboard with a 1.5"-diameter hole in it to expose the colorimeter sensors. Between the matboard and the glass I inserted a piece of white general-purpose printer paper to diffuse the light and avoid reflections. I clamped the whole assembly together with file clamps and set it upright at the same height as my camera on a tripod. I aimed the camera at the white circle and framed it so it filled about half the frame. I positioned a 200W 3200K lamp in a reflector, with a diffuser in front, on a tripod at the same height in an otherwise dark room, making sure no light fell directly on the camera lens.

I set the camera to RAW with manual exposure and focusing, and adjusted the exposure so that the peak of the white circle in the histogram sat as far to the right as possible without any overexposure blinkies. I averaged 3 colorimeter readings (because of reading variations) and took the first shot. I then carefully moved the lamp backwards to get about 95% of the previous reading, again recorded the average of 3 readings, and took a second shot with the same exposure. I repeated this process for 90%, 85%, and so on. At about 15% of the initial reading I ran out of room to move the lamp back any further.
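For anyone wanting to replicate the procedure, the measurement sequence above can be sketched as a couple of trivial helpers (the function names and step size are my own; the source only gives the 100%-to-15% range in 5% steps and the 3-reading average):

```python
def target_levels(start=100, step=5, stop=15):
    """Target lamp levels, in percent of the initial colorimeter reading.

    Counts down from `start` in `step`% decrements until `stop` is reached.
    """
    levels = []
    level = start
    while level >= stop:
        levels.append(level)
        level -= step
    return levels


def average_reading(readings):
    """Average several colorimeter readings to smooth out shot-to-shot
    variation (I used 3 readings per lamp position)."""
    return sum(readings) / len(readings)
```

So `target_levels()` gives the 18 lamp positions from 100% down to 15%, and at each position you record `average_reading([r1, r2, r3])` alongside the RAW shot.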
I processed the shots in Adobe Camera RAW and Photoshop. In ACR I only adjusted the color temperature to 2800K to get the R, G and B peaks of the white circle to overlay as closely as possible; the "white" printer paper was apparently not neutrally white. In Photoshop I selected the white circle with the magic wand, contracted the selection by 50 pixels to avoid the transition from the white circle to the black board, and then averaged the selection using Filter/Blur/Average. I then calculated the brightness value of this selection from the 16-bit RGB values.
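The brightness calculation from the averaged 16-bit RGB values is the only step I did in a spreadsheet; here is a minimal sketch of it, assuming a simple per-channel mean (a luminance-weighted sum like 0.2126 R + 0.7152 G + 0.0722 B would be another reasonable choice; the post does not say which was used):

```python
def brightness_16bit(r, g, b):
    """Brightness of the averaged patch, as a plain mean of the
    16-bit R, G, B channel values (assumption, see lead-in)."""
    return (r + g + b) / 3.0


def normalized_brightness(r, g, b, max_val=65535):
    """Same brightness scaled to [0, 1], convenient for comparing
    against relative colorimeter readings."""
    return brightness_16bit(r, g, b) / max_val
```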
I put the colorimeter values and the calculated brightness values in a spreadsheet. ACR (like most RAW processors) applies a tone curve with a gamma of 1/2.2 to the linear sensor RAW data to compensate for the non-linearity of the human eye, so I needed to calculate what brightness to expect after this conversion and compare those numbers to the brightness values calculated from the images. The resulting error, or non-linearity, was less than 1.1% after curve optimization, which is extremely low in my opinion given the possible sources of error: sensor non-linearity, colorimeter error, shot-to-shot variations in camera exposure time and aperture opening, and lamp stability.
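The expected-brightness step boils down to applying the 1/2.2 gamma to each relative light level and taking the percent deviation. A minimal sketch (the function names are mine; the gamma value is the one stated above):

```python
def expected_brightness(relative_light, gamma=1 / 2.2):
    """Expected image brightness after a gamma-1/2.2 tone curve, for a
    light level given relative to the first (brightest) shot, both in
    [0, 1]. A perfectly linear sensor should match this curve."""
    return relative_light ** gamma


def nonlinearity_percent(measured, relative_light, gamma=1 / 2.2):
    """Percent deviation of the measured brightness from the expected
    gamma-corrected value; the post reports this stayed under 1.1%."""
    expect = expected_brightness(relative_light, gamma)
    return 100.0 * (measured - expect) / expect
```

For example, a shot at 50% of the initial light level should come out at about 0.5 ** (1/2.2), roughly 73% of the first shot's brightness, not 50%; that is why the raw colorimeter and image numbers cannot be compared directly.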
Others, such as a group at Stanford University, have used special software to read the RGBG Bayer data in Nikon NEF files directly, avoiding the RAW tone curve conversion and its inherent inaccuracies. They published the code, but I have no idea how to use it.
I'm very interested to hear from others who have done or want to do similar measurements.