EmbeddedRelated.com
Forums

16-bits ADC anyone?

Started by Bruno Richard June 5, 2007
On 2007-06-07, Colin Paul Gloster <Colin_Paul_Gloster@ACM.org> wrote:
> I do not understand the distinction.  I agree that absolute accuracy
> is not always important and that the ten most significant digits of a
> low quality 16 bit analog to digital converter might not be as
> faithful as a high quality ten bit analog to digital converter, and I
> agree that the least significant bits of an analog to digital
> converter are less likely to be as faithful as the most significant
> bits, but I do not believe that a sixteen bit ADC is equivalent to
> nothing except a ten bit ADC whose output is leftshifted by six bits.
> That would result in a datatype which has a resolution of 16 bits but
> clearly no more accuracy than a reading of ten bits.  I believe
> Andrew Smallshaw was talking about something else but I do not
> understand what.  Would you care to explain?
What I was referring to was that in many circumstances, the absolute value of whatever is being measured isn't particularly important compared to the ability to finely distinguish small changes in the input.  Since I used audio as an example I'll continue with it - you can get away with many subtle distortions that won't be particularly noticeable.  For instance, your analog input stage may be slightly frequency dependent, with the result that low frequencies are reproduced too loudly.  That isn't too noticeable.  What is important is that the waveforms have more or less the right shape.  That means that the gap between distinct levels must be small.
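To put rough numbers on how small those gaps are, here is a quick sketch; the 2 V full-scale range is an assumed figure, purely for illustration:

```python
# Quantization step (LSB size) of an ideal ADC over a given input range.
def lsb_volts(full_scale_v, bits):
    """Voltage difference between adjacent ADC codes."""
    return full_scale_v / (2 ** bits)

full_scale = 2.0  # volts -- an assumed example range, not from the post
print(f"8-bit LSB:  {lsb_volts(full_scale, 8) * 1e3:.2f} mV")   # 7.81 mV
print(f"16-bit LSB: {lsb_volts(full_scale, 16) * 1e6:.1f} uV")  # 30.5 uV
```

Even if the converter's absolute accuracy is mediocre, the 16-bit version still places adjacent levels roughly 256 times closer together, which is what preserves the waveform shape.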
> "[..] the > eye is only good for eight bits [..] > > [..]" > > I do not know what the limit is, but I believe that it is > significantly above sixteen bits and below 33 bits. I believe > that much true color graphical work is done at 24 bits.
I was talking there strictly about monochrome image data, and did say as much in my original post, although I could have been more explicit about it.  You're right that 24 bit colour is generally accepted as 'true colour' (although that is simplifying things slightly).  That's eight bits each for red, green, and blue.  If you're talking about monochrome, obviously you only need eight bits in total for black through to white.  32 bit colour is actually quite rare.  What is usually meant is 24 bit colour in a 32 bit format, because many computers make it easier to deal with 32 bits than 24.

-- 
Andrew Smallshaw
andrews@sdf.lonestar.org
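A minimal sketch of the 24-in-32-bit layout described above; the channel ordering here is an arbitrary choice, not anything specified in the post:

```python
# "32-bit colour" as commonly meant: 8 bits each of R, G, B in the low
# 24 bits of a 32-bit word, with the top byte unused (or used as alpha).
def pack_xrgb(r, g, b):
    return (r & 0xFF) << 16 | (g & 0xFF) << 8 | (b & 0xFF)

def unpack_xrgb(pixel):
    return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

p = pack_xrgb(0x12, 0x34, 0x56)
print(hex(p))   # 0x123456
```

The padding byte is the whole point of the format: a 32-bit pixel can be loaded or stored in one aligned access, while genuinely 24-bit pixels straddle word boundaries.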
On 2007-06-07, Meindert Sprang <ms@NOJUNKcustomORSPAMware.nl> wrote:
> "Andrew Smallshaw" <andrews@sdf.lonestar.org> wrote in message > news:slrnf6e6hp.rr4.andrews@sdf.lonestar.org... >> >> While I'm certainly not an expert, as an example I'm reasonably >> familiar with astronomical imaging, which is often done with 16 >> bit (monochrome) CCDs. It might sound overkill considering the >> eye is only good for eight bits but there is often a heck of a lot >> of processing after image capture. > > As long as your captured image lies within the dynamic range of an 8 bit > converter, it is of course rediculous to use a 16 bit converter just to give > you the dynamic range for calculations.
A simple example.  Let's say I have an image of something or other, and the background sky is not true black due to the effects of skyglow caused by street lighting.  I decide to improve my image by removing that and making the sky black, by adjusting the contrast so that pixels below a certain value are scaled to make them darker.  Values above that value must now be scaled to fill in the gap in the scale.  That could mean that a difference of one in the input becomes a difference of two or three in the output - effectively we have lost some bits in the processing.  If we had an eight bit sensor, we can now see the difference between the individual levels that were detected.  If we have a 16 bit type, the steps are still too small to be noticed.

-- 
Andrew Smallshaw
andrews@sdf.lonestar.org
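The kind of stretch described can be sketched as follows; the threshold and the "clip to black" choice are made-up numbers for illustration, not anything from the post:

```python
# Hypothetical contrast stretch: clip skyglow below `threshold` to
# black, then rescale [threshold, max_val] to fill [0, max_val].
# With these made-up numbers the slope is about 2.7, so one input step
# becomes two or three output steps -- bits lost in processing, visible
# as banding with an 8-bit sensor but not with a 16-bit one.
def stretch(pixel, threshold=160, max_val=255):
    if pixel <= threshold:
        return 0
    return (pixel - threshold) * max_val // (max_val - threshold)

print(stretch(200), stretch(201))   # adjacent inputs land 3 levels apart
```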
On  5-Jun-2007, Vladimir Vassilevsky <antispam_bogus@hotmail.com> wrote:

>>> I am working on a project where I need some 16 bits ADC to retrieve
>>> information from a sensor.  I also need a small microcontroller such as
>>> a PIC, AVR or 8051, and I got surprising quotes for the ADC: around $5
>>> (qty 1000), which is 5 times more expensive than the controller!
>>
>> The problem with microcontrollers with on-chip ADC/DACs is that you
>> might not get the nominal 98 dB SNR due to the noise from the
>> controller.
>
> Besides, the high performance ADCs and the microcontrollers are two
> different technologies.  The MCUs with the good ADC/DACs usually contain
> two separate dies in one package.  For that reason they are more
> expensive than the equivalent micro + equivalent ADC/DAC.
>
>> Is DC accuracy (drifts) important in your application?
>>
>> Is this ADC part of a control loop, in which case it would be
>> preferable that the ADC is monotonic?
>
> I am wondering what could be a sensor which requires an ADC with
> true 16-bit accuracy.  For a sensor application, that sounds
> unreasonable to me, especially considering that the rest of the
> application is handled by a small micro.  Apparently there is a
> problem with the concept.
I do analog stuff at 16 bits all the time, where 12 or even 10 bits would have matched the sensor.  It helps sell the product (which is more important than saving a buck).
On Thu, 7 Jun 2007 13:54:09 +0000 (UTC), the renowned Andrew Smallshaw
<andrews@sdf.lonestar.org> wrote:

> I was talking there strictly about monochrome image data, and did
> say as much in my original post, although I could have been more
> explicit about it.  You're right that 24 bit colour is generally
> accepted as 'true colour' (although that is simplifying things
> slightly).  That's eight bits each for red, green, and blue.  If
> you're talking about monochrome obviously you only need eight bits
> in total for black through to white.  32 bit colour is actually
> quite rare.  What is usually meant is 24 bit colour in a 32 bit
> format because many computers make it easier to deal with 32 than
> 24.
Eight bits is a bit dubious for images, particularly if there is much manipulation to be done (for example, you'll lose detail in the highlights or dark areas that cannot be brought back again).  My digiSLR allows 12 bits/color (36 bits per pixel), which is significantly better.  When all the manipulation is done, it can be converted to 8 bit (24 bits/pixel) with little or no visible loss of quality.

Best regards,
Spehro Pefhany

-- 
"it's the network..."                          "The Journey is the reward"
speff@interlog.com             Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog  Info for designers: http://www.speff.com
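In the simplest case the final 12-bit to 8-bit conversion mentioned above is just discarding the low bits; a sketch (real raw converters also apply white balance and gamma, which is omitted here):

```python
# Reduce a 12-bit sample to 8 bits by dropping the four least
# significant bits -- the detail they carried is gone for good,
# which is why the manipulation is done *before* this step.
def to_8bit(sample12):
    return sample12 >> 4

print(to_8bit(0xFFF), to_8bit(0x800))   # 255 128
```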
On Jun 7, 10:08 am, "Hershel" <hershelr_s...@spamcop.net> wrote:
> On 5-Jun-2007, Vladimir Vassilevsky <antispam_bo...@hotmail.com> wrote:
>
>>>> I am working on a project where I need some 16 bits ADC to retrieve
>>>> information from a sensor.  I also need a small microcontroller such as
>>>> a PIC, AVR or 8051, and I got surprising quotes for the ADC: around $5
>>>> (qty 1000), which is 5 times more expensive than the controller!
>>>
>>> The problem with microcontrollers with on-chip ADC/DACs is that you
>>> might not get the nominal 98 dB SNR due to the noise from the
>>> controller.
>>
>> Besides, the high performance ADCs and the microcontrollers are two
>> different technologies.  The MCUs with the good ADC/DACs usually contain
>> two separate dies in one package.  For that reason they are more
>> expensive than the equivalent micro + equivalent ADC/DAC.
>>
>>> Is DC accuracy (drifts) important in your application?
>>>
>>> Is this ADC part of a control loop, in which case it would be
>>> preferable that the ADC is monotonic?
>>
>> I am wondering what could be a sensor which requires an ADC with
>> true 16-bit accuracy.  For a sensor application, that sounds
>> unreasonable to me, especially considering that the rest of the
>> application is handled by a small micro.  Apparently there is a
>> problem with the concept.
>
> I do analog stuff at 16 bits all the time, where 12 or even 10 bits
> would have matched the sensor.  It helps sell the product (which is
> more important than saving a buck).
Yes, we use faster A/Ds and more memory than needed for the same reason - well, some of the time anyway.  Some products sell because they are the cheapest and barely do what needs to be done; other products sell because they are way over-designed, and the customer wants way more than he actually needs, for whatever reason.  I wish I knew how to reliably distinguish the two potential customer demands in the design phase, but it's not very predictable.
"Andrew Smallshaw" <andrews@sdf.lonestar.org> wrote in message
news:slrnf6g45h.gpb.andrews@sdf.lonestar.org...
> On 2007-06-07, Meindert Sprang <ms@NOJUNKcustomORSPAMware.nl> wrote:
>> "Andrew Smallshaw" <andrews@sdf.lonestar.org> wrote in message
>> news:slrnf6e6hp.rr4.andrews@sdf.lonestar.org...
>>
>> As long as your captured image lies within the dynamic range of an 8 bit
>> converter, it is of course ridiculous to use a 16 bit converter just to
>> give you the dynamic range for calculations.
>
> A simple example.  Let's say I have an image of something or other
> and the background sky is not true black due to the effects of
> skyglow caused by street lighting.  I decide to improve my image
> by removing that and making the sky black by adjusting the contrast
> so that pixels below a certain value are scaled to make them darker.
Wait a minute - you already have an image, you say.  That can of course be processed in 16 bit for better results.  But that has nothing to do with the dynamic range of the original video signal captured from the sensor.  If that signal has a S/N ratio of less than 8 bits, it brings you zip when sampled with a 16 bit converter.  But you are free to extend the word size of the already digitized image in order to give you more room for calculated results.

Meindert
Hershel wrote:

> I do analog stuff at 16 bits all the time, where 12 or even 10 bits
> would have matched the sensor.  It helps sell the product (which is
> more important than saving a buck).
You appear to have rather unusual customers.  For most of us out here, saving a buck is *way* more important than giving marketing a meaningless bullet point to brag about.  FWIW, you could be put out of business by a copycat who saves that buck, and then *pretends* to have a 16-bit ADC in there --- nobody could tell the difference anyway.


Hans-Bernhard Bröker wrote:
> Hershel wrote:
>
>> I do analog stuff at 16 bits all the time, where 12 or even 10 bits
>> would have matched the sensor.  It helps sell the product (which is
>> more important than saving a buck).
>
> You appear to have rather unusual customers.  For most of us out here,
> saving a buck is *way* more important than giving marketing a
> meaningless bullet point to brag about.  FWIW, you could be put out of
> business by a copycat who saves that buck, and then *pretends* to have
> a 16-bit ADC in there --- nobody could tell the difference anyway.
When I was working at Mattel making toys, saving a penny was *huge*.  When I was at Parker Aerospace making test fixtures, the engineering costs made the difference in cost between a 12-bit and a 16-bit ADC disappear in the noise; I just picked the part with the best specs.  I never worked for Rolex, but I imagine that for them, giving marketing a meaningless bullet point to brag about is more important than saving a buck.

-- 
Guy Macon <http://www.guymacon.com/>
On 7-Jun-2007, Hans-Bernhard Bröker <HBBroeker@t-online.de> wrote:

>> I do analog stuff at 16 bits all the time, where 12 or even 10 bits
>> would have matched the sensor.  It helps sell the product (which is
>> more important than saving a buck).
>
> You appear to have rather unusual customers.  For most of us out here,
> saving a buck is *way* more important than giving marketing a
> meaningless bullet point to brag about.  FWIW, you could be put out of
> business by a copycat who saves that buck, and then *pretends* to have
> a 16-bit ADC in there --- nobody could tell the difference anyway.
Not everybody here is designing consumer products.  It's more of an industry preference than a customer preference.  I've got a small number of competitors in a specialized area of industrial control.  When I visit with a customer, I generally know which of my competitors were there the week before, and (in this case) the resolution of their ADC or whatever.  The math is really simple: if you spend an extra buck on a device that sells for $2K with a 70% PM, and you sell 5% more, then you make more money.
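Those sums, spelled out; the $2K price and 70% margin are the poster's figures, while the 100-unit baseline volume is an assumption for illustration:

```python
unit_price = 2000.0   # $2K selling price (from the post)
margin = 0.70         # 70% profit margin (from the post)
extra_cost = 1.0      # the extra buck spent on the better ADC
base_units = 100      # assumed baseline sales volume

base_profit = base_units * unit_price * margin
new_profit = base_units * 1.05 * (unit_price * margin - extra_cost)

print(base_profit, new_profit)  # the extra dollar more than pays for itself
```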
In article <1181030631.316540.197960@g4g2000hsf.googlegroups.com>, 
bruno.richard.fr@gmail.com says...
> Hi all,
>
> I am working on a project where I need some 16 bits ADC to retrieve
> information from a sensor. I also need a small microcontroller such as
> a PIC, AVR or 8051, and I got surprising quotes for the ADC: Around $5
> (qty 1000), which is 5 times more expensive than the controller!
>
> Does anyone have an idea about how I can get some low cost ADC-
> Controller solution? I need only few dozens of samples per second, so
> some of you may have nice tricks to do that (op-amps, capacitor charge
> time stuff and the like).
>
> Thanks, Bruno
You haven't said what type of sensor, and whether its output is ratiometric to the power supply.  If it is not ratiometric, then your ADC needs a precision reference, and that will bump up the ADC cost significantly.

Mark Borgerson
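A sketch of why ratiometric operation avoids the precision reference; the transfer function is an idealized one (no offset, gain error or noise), and the supply values are illustrative:

```python
# Idealized ADC transfer function: output code is the input voltage
# as a fraction of the reference.
def adc_code(v_in, v_ref, bits=16):
    return int(v_in / v_ref * (2 ** bits - 1))

# A ratiometric sensor outputs a fixed fraction of its own supply.
# If the ADC reference is that same supply, supply drift cancels:
fraction = 0.5
for supply in (4.9, 5.0, 5.1):        # supply drifting by +/-2%
    print(supply, adc_code(fraction * supply, supply))  # same code each time
```

With a non-ratiometric sensor the numerator no longer tracks the reference, so the reference itself has to be accurate and stable, and at 16 bits that accuracy is what drives up the cost.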