
Simple Autoexposure algorithm for an Image sensor

Started by Bryan October 6, 2004
Hi,

I'm involved in a project where we are building a still image capture
device. We have a CMOS image sensor (Pixart PAS106B) connected via I2C
to a Philips LPC2106 (ARM7) microcontroller.

I'm able to capture images without trouble, however, I notice -
depending on the ambient light - my image is either too light or too
dark. To correct this problem, I want to write a small (hopefully
simple) and fast autoexposure routine to adjust the exposure setting
and hopefully get a more consistent image when the light varies.

I was hoping someone might be able to point me to some resources
(books, news groups, source code, people, etc) that might be able to
help me understand the methods and best practices for accomplishing
this.

I was hoping that I could just get a "luminance" reading from the
image sensor, but I guess I can't. So, I'm pretty sure that I need to
evaluate (somehow) the light/darkness of the image, change the image
sensor's exposure settings, grab another frame, test it, and so on.

If anyone has any experience or knows where I might go to get a grip
on the best way to do this, I'd be grateful.

Thanks
 
Bryan
Bryan wrote:

> Hi,
>
> I'm involved in a project where we are building a still image capture
> device. We have a CMOS image sensor (Pixart PAS106B) connected via I2C
> to a Philips LPC2106 (ARM7) microcontroller.
<snip>
> If anyone has any experience or knows where I might go to get a grip
> on the best way to do this, I'd be grateful.
I had this same problem to work on for a commercial product. I couldn't
find any reference material directly on the subject, so I just
implemented a PID controller. I averaged selected pixels to get a
measure of the tone of the scene.

This approach (reflective metering) is of course limited by assuming
that the scene is of average reflectance. It won't work too well for
highly reflective or unreflective scenes. Look in any good photography
book for a more detailed explanation. I also suggest looking at control
theory books (or the web) for details on closed-loop control.

Felix
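A minimal sketch of that kind of loop in C, assuming a hypothetical
sensor_set_exposure() I2C write, an 8-bit grayscale frame buffer, and
placeholder gains and register limits (the PAS106B's real exposure
registers and ranges would have to come from its datasheet):

/* Meter the scene by averaging selected pixels, then drive the
 * sensor's exposure register toward a mid-gray target with a simple
 * PI loop (the 'D' term is rarely needed for exposure control). */
#include <stdint.h>

#define TARGET_MEAN  128      /* aim for mid-gray on an 8-bit scale   */
#define KP           0.5f     /* proportional gain (tune empirically) */
#define KI           0.1f     /* integral gain (tune empirically)     */
#define EXP_MIN      1
#define EXP_MAX      1023     /* placeholder exposure register range  */

extern void sensor_set_exposure(uint16_t value);  /* hypothetical I2C write */

/* Average every 'step'th pixel rather than all of them to save time. */
static uint8_t scene_mean(const uint8_t *img, uint32_t npix, uint32_t step)
{
    uint32_t sum = 0, count = 0;
    for (uint32_t i = 0; i < npix; i += step) {
        sum += img[i];
        count++;
    }
    return (uint8_t)(sum / count);
}

/* Call once per captured frame until the error settles. */
void autoexpose_step(const uint8_t *img, uint32_t npix)
{
    static float integral = 0.0f;
    static float exposure = 256.0f;   /* current setting */

    float error = (float)TARGET_MEAN - (float)scene_mean(img, npix, 16);
    integral += error;

    exposure += KP * error + KI * integral;
    if (exposure < EXP_MIN) exposure = EXP_MIN;
    if (exposure > EXP_MAX) exposure = EXP_MAX;

    sensor_set_exposure((uint16_t)exposure);
}

Sampling every 16th pixel keeps the metering cheap on an ARM7, and the
integral term removes the steady-state error a pure P loop would leave.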
On 6 Oct, in article
     <a01fd7ec.0410061138.581728eb@posting.google.com>
     clapper@bluewin.ch "Bryan" wrote:
>Hi,
>
>I'm involved in a project where we are building a still image capture
>device. We have a CMOS image sensor (Pixart PAS106B) connected via I2C
>to a Philips LPC2106 (ARM7) microcontroller.
>
>I'm able to capture images without trouble, however, I notice -
>depending on the ambient light - my image is either too light or too
>dark. To correct this problem, I want to write a small (hopefully
>simple) and fast autoexposure routine to adjust the exposure setting
>and hopefully get a more consistent image when the light varies.
Can you take some images BEFORE the capture you want, or capture
continuously, to calibrate the image levels?
>I was hoping someone might be able to point me to some resources
>(books, news groups, source code, people, etc) that might be able to
>help me understand the methods and best practices for accomplishing
>this.
I suggest either looking at an additional light sensor, or seeing how
video camera AGC circuits average the light as they go.
>I was hoping that I could just get a "luminance" reading from the
>image sensor, but I guess I can't. So, I'm pretty sure that I need to
>evaluate (somehow) the light/darkness of the image, change the image
>sensor's exposure settings, grab another frame, test it, and so on.
This will only tell you how much luminance has been reflected or
directly transmitted to the sensor; a dedicated light sensor will give
you more general light levels.
>If anyone has any experience or knows where I might go to get a grip
>on the best way to do this, I'd be grateful.
Look up video AGC (Automatic Gain Control) implementations.

--
Paul Carpenter | paul@pcserviceselectronics.co.uk
<http://www.pcserviceselectronics.co.uk/> PC Services
<http://www.gnuh8.org.uk/> GNU H8 & mailing list info
<http://www.badweb.org.uk/> For those web sites you hate
Bryan <clapper@bluewin.ch> wrote:

> I'm able to capture images without trouble, however, I notice -
> depending on the ambient light - my image is either too light or too
> dark.
Here's a nasty question: how do you know that? Who/what defines when an image is "too" light or dark?
> To correct this problem, I want to write a small (hopefully
> simple) and fast autoexposure routine to adjust the exposure setting
> and hopefully get a more consistent image when the light varies.
Ouch. You may not be aware of it, but you've just introduced another
criterion which actually contradicts the previous ones to some extent.
Anyway, what do you plan on being consistent *with*, then?

About the only auto-exposure algorithm you could use that uses no
additional sensors at all, and no knowledge about the sensitivity curve
of your image sensor either, is:

    repeat {
        l1 := brightness of <n1>th lightest pixel in the entire image
        l2 := brightness of <n2>th darkest pixel in the entire image
        modify exposure by <factor> * (l1 + l2 - (black + white))
    } until (exposure correction < <tolerance>)
    if ((l1 - l2) < <min_contrast>) complain

The problem is that the first two steps grow quite complex if <n1> and
<n2> are not very small numbers, and that you need at least one
complete "dummy" shot plus analysis time just to calibrate.

The repeat loop is, of course, nothing else but a crude rendition of a
closed-loop control algorithm (only the 'P' part of a PID controller),
with (l1+l2)/2 as the controlled term, which you want to be equal to
50% gray.

--
Hans-Bernhard Broeker (broeker@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
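For what it's worth, one possible C rendition of that loop, using a
256-bin histogram so that finding the <n1>th lightest and <n2>th
darkest pixels stays cheap even for large <n1> and <n2>; the gain and
thresholds are placeholders to be tuned:

/* Histogram-based version of the percentile metering above. */
#include <stdint.h>
#include <string.h>

#define BLACK         0
#define WHITE         255
#define N_LIGHT       100     /* <n1>: skip the 100 brightest pixels */
#define N_DARK        100     /* <n2>: skip the 100 darkest pixels   */
#define FACTOR        0.25f   /* <factor>: proportional gain         */
#define MIN_CONTRAST  32      /* complain below this spread          */

static void histogram(const uint8_t *img, uint32_t npix, uint32_t h[256])
{
    memset(h, 0, 256 * sizeof h[0]);
    for (uint32_t i = 0; i < npix; i++)
        h[img[i]]++;
}

/* Brightness of the n-th darkest pixel (count up from bin 0). */
static uint8_t nth_darkest(const uint32_t h[256], uint32_t n)
{
    uint32_t seen = 0;
    for (int b = 0; b < 256; b++) {
        seen += h[b];
        if (seen > n)
            return (uint8_t)b;
    }
    return WHITE;
}

/* Brightness of the n-th lightest pixel (count down from bin 255). */
static uint8_t nth_lightest(const uint32_t h[256], uint32_t n)
{
    uint32_t seen = 0;
    for (int b = 255; b >= 0; b--) {
        seen += h[b];
        if (seen > n)
            return (uint8_t)b;
    }
    return BLACK;
}

/* One iteration of the repeat loop: returns <factor>*(l1+l2-(black+white)).
 * Positive means the image is brighter than mid-gray, so the caller
 * should reduce exposure by that amount (and vice versa), looping
 * until the correction is within tolerance. */
float exposure_correction(const uint8_t *img, uint32_t npix, int *low_contrast)
{
    uint32_t h[256];
    histogram(img, npix, h);

    uint8_t l1 = nth_lightest(h, N_LIGHT);
    uint8_t l2 = nth_darkest(h, N_DARK);

    *low_contrast = ((int)l1 - (int)l2) < MIN_CONTRAST;
    return FACTOR * (float)((int)l1 + (int)l2 - (BLACK + WHITE));
}

The histogram turns the two percentile lookups into a single pass over
the image plus two short bin scans, which sidesteps the complexity
concern above.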
On 6 Oct 2004 12:38:06 -0700, clapper@bluewin.ch (Bryan) wrote:

>I'm involved in a project where we are building a still image capture
>device. We have a CMOS image sensor (Pixart PAS106B) connected via I2C
>to a Philips LPC2106 (ARM7) microcontroller.
How many frames/s can you transfer over the I2C link? Does the
microcontroller have enough RAM to store the image at 8-16 bits/pixel?

Shooting multiple frames and integrating each frame pixel by pixel into
the frame store will increase the signal-to-noise ratio as more and
more frames are added. Stop the imaging when the first pixel hits the
maximum value. The frame update rate should be such that the brightest
pixel does not saturate the sensor or ADC, but on the other hand, there
should be enough (1 LSB) dark current noise to act as dither.

I have seen some astronomical imaging in which in the beginning only a
few bright stars were visible, but after a few seconds more and more
details became visible as more frames were added.

Paul
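A sketch of that integration loop in C, assuming a hypothetical
capture_frame() driver call and a window small enough to fit in the
LPC2106's 64 KB of RAM (the frame size below is only a placeholder):

/* Accumulate successive 8-bit frames into a 16-bit buffer, stopping
 * before the brightest accumulated pixel can overflow. */
#include <stdint.h>
#include <string.h>

#define NPIX  (88 * 72)   /* placeholder window; pick one that fits RAM */

extern void capture_frame(uint8_t frame[NPIX]);   /* hypothetical driver */

/* Returns the number of frames actually integrated. */
uint32_t integrate_frames(uint16_t acc[NPIX], uint32_t max_frames)
{
    static uint8_t frame[NPIX];
    memset(acc, 0, NPIX * sizeof acc[0]);

    for (uint32_t n = 0; n < max_frames; n++) {
        capture_frame(frame);
        uint16_t peak = 0;
        for (uint32_t i = 0; i < NPIX; i++) {
            acc[i] += frame[i];
            if (acc[i] > peak)
                peak = acc[i];
        }
        /* The next frame adds at most 255 per pixel, so stop while
         * the brightest pixel still has headroom. */
        if (peak > 0xFFFFu - 255u)
            return n + 1;
    }
    return max_frames;
}

The SNR gain comes from uncorrelated noise growing only as the square
root of the number of frames while the signal grows linearly.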
Paul Keinanen wrote:
> On 6 Oct 2004 12:38:06 -0700, clapper@bluewin.ch (Bryan) wrote:
>
>> I'm involved in a project where we are building a still image capture
>> device. We have a CMOS image sensor (Pixart PAS106B) connected via I2C
>> to a Philips LPC2106 (ARM7) microcontroller.
>
> How many frames/s can you transfer over the I2C link? Does the
> microcontroller have enough RAM to store the image at 8-16 bits/pixel?
>
> Shooting multiple frames and integrating each frame pixel by pixel
> into the frame store will increase the signal-to-noise ratio as more
> and more frames are added. Stop the imaging when the first pixel hits
> the maximum value. The frame update rate should be such that the
> brightest pixel does not saturate the sensor or ADC, but on the other
> hand, there should be enough (1 LSB) dark current noise to act as
> dither.
>
> I have seen some astronomical imaging in which in the beginning only
> a few bright stars were visible, but after a few seconds more and
> more details became visible as more frames were added.
This is much like the old CAT (Computer of Average Transients) that I
built 40 years ago. On a sync signal (internal or external) it inputs a
time-based set of input values, and sums these over some total number
of scans. You have to limit the number to avoid overflow. The result is
that the noise averages out, and after a few scans (for appropriate
values of few) the signal appears out of the noise. With the
appropriate display you can watch the signal appearing out of the noise
during the processing.

The CAT was useful for such things as picking out brainwaves resulting
from external stimuli, such as a sharp noise. It is much like a
sampling scope.

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
clapper@bluewin.ch (Bryan) wrote in message news:<a01fd7ec.0410061138.581728eb@posting.google.com>...
> I'm able to capture images without trouble, however, I notice -
> depending on the ambient light - my image is either too light or too
> dark. To correct this problem, I want to write a small (hopefully
> simple) and fast autoexposure routine to adjust the exposure setting
> and hopefully get a more consistent image when the light varies.
<snip>
> I was hoping that I could just get a "luminance" reading from the
> image sensor, but I guess I can't. So, I'm pretty sure that I need to
> evaluate (somehow) the light/darkness of the image, change the image
> sensor's exposure settings, grab another frame, test it, and so on.
The first thing to keep in mind is that "auto exposure" from behind the
lens is not actually generally possible unless you have some idea of
what you're looking at. The problem is that unless you know how much
light is *supposed* to be reflected, there's no way of telling how
brightly the object is lit, and therefore you can't determine the
correct exposure (which is properly based on the level of illumination,
*not* the level of reflection). You'll notice that serious
photographers are always running around with incident light meters,
which measure the intensity of light falling on the object(s) in
question, and they compute their proper exposure from there. In short,
a certain intensity of light falling on a black object should get the
same (film/sensor) exposure as the same amount of light falling on a
white object, even though there will be much more light reflected
towards the camera in the latter case.

So the best you can do is fake it. The simplest scheme, used for many
decades in cameras with built-in exposure meters, is to assume that the
scene has a certain reflectivity, and just measure the intensity of all
the light coming through the lens, or perhaps just the center portion.
The standard in photography is to balance against an 18% gray card
(basically just a gray card of the correct density to reflect 18% of
the incident light - available at any decent photo supply place). The
"18% gray" card is supposed to be equivalent in density to a "typical"
scene (which explains why it's used).

For an application like this, you'd calibrate your sensor against such
a card, and then adjust the exposure so that the brightness, averaged
over the entire sampling region, came out to the same value as you
calibrated for the gray card. Trivially, convert all the pixel values
to a linear scale, add them all up, divide by the number of pixels, and
adjust the exposure by the ratio of that number to the value you'd see
for the reference card.

One simple enhancement is to use a center-weighted metering scheme,
where you bias the exposure towards the center of the frame. High-end
cameras perform quite complex schemes to try and "understand" a frame
enough to measure it accurately, often breaking up the scene into many
individual segments and applying some balancing algorithm to those.
Some even go so far as to attempt image analysis to try and recognize
objects and apply reflectivity values to them.

I'd start with something simple, maybe a basic center-weighted scheme,
where the inner ninth (the middle third in both dimensions, or
something more-or-less in that range) counts for two thirds of the
exposure weight and the remainder of the frame for the other third, and
play from there; a sketch follows below. Given that sensor responses
are not really linear, this may cycle over several frames before it
stabilizes (and you probably want to add some sort of damping to this
function). I don't know if it's an issue for your application, but one
thing to watch out for is that if you're far enough off scale to lose
all detail (either via extreme under- or overexposure), you'll want to
have a fast step function to get the exposure back into a "reasonable"
range.
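As one possible starting point, a C sketch of that center-weighted
meter, assuming an 8-bit grayscale frame and a GRAY_REF value
previously measured against an 18% gray card (the dimensions and
constants are placeholders):

/* Center-weighted metering: the inner ninth of the frame contributes
 * two thirds of the metered value, the remainder one third.  The
 * returned ratio is the factor by which to scale the exposure,
 * assuming a roughly linear sensor response. */
#include <stdint.h>

#define W         352        /* placeholder frame width (CIF)    */
#define H         288        /* placeholder frame height         */
#define GRAY_REF  118.0f     /* calibrated 18% gray card reading */

float metered_exposure_ratio(const uint8_t img[H][W])
{
    uint32_t center_sum = 0, outer_sum = 0;
    uint32_t center_n = 0, outer_n = 0;

    for (uint32_t y = 0; y < H; y++) {
        for (uint32_t x = 0; x < W; x++) {
            /* Middle third in both dimensions = the inner ninth. */
            int in_center = (x >= W / 3 && x < 2 * W / 3 &&
                             y >= H / 3 && y < 2 * H / 3);
            if (in_center) { center_sum += img[y][x]; center_n++; }
            else           { outer_sum  += img[y][x]; outer_n++;  }
        }
    }

    float metered = (2.0f / 3.0f) * ((float)center_sum / (float)center_n)
                  + (1.0f / 3.0f) * ((float)outer_sum  / (float)outer_n);

    if (metered < 1.0f)      /* avoid blowing up on an all-black frame */
        metered = 1.0f;

    /* >1 means the scene metered darker than the gray-card reference,
     * so increase exposure by this factor; damp it before applying. */
    return GRAY_REF / metered;
}

Damping the returned ratio (for example, applying only half of the
correction each frame) is one way to get the stabilizing behavior
mentioned above.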
