EmbeddedRelated.com
Forums

16-bits ADC anyone?

Started by Bruno Richard June 5, 2007

mw wrote:

> Vladimir Vassilevsky wrote:
>
>> I am wondering of what could be a sensor which requires the ADC with
>> the true 16-bit accuracy. For the sensor application, that sounds
>> unreasonable to me. Especially considering that the rest of
>> application is handled by a small micro. Apparently there is a problem
>> with the concept.
>
> Plenty of sensors are read with 16-bit ADCs. Examples: pressure sensor,
> strain gauge, position sensors, etc.
And there is absolutely no need for 16-bit accuracy in any of those cases, because the sensors are only accurate to somewhere around 0.1% at the very best. All that is required is a 10-bit ADC with the proper gain and offset.
> Just take a look at the Analog Devices app notes.
If I were making 100-bit ADCs, I am sure I would recommend a 100-bit ADC.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
Vladimir Vassilevsky wrote:
>
> mw wrote:
>
>> Vladimir Vassilevsky wrote:
>>
>>> I am wondering of what could be a sensor which requires the ADC with
>>> the true 16-bit accuracy. For the sensor application, that sounds
>>> unreasonable to me. Especially considering that the rest of
>>> application is handled by a small micro. Apparently there is a
>>> problem with the concept.
>>
>> Plenty of sensors are read with 16-bit ADCs. Examples: pressure
>> sensor, strain gauge, position sensors, etc.
>
> And there is absolutely no need for the 16 bit accuracy in all of those
> cases, because the sensors are only accurate to somewhat 0.1% at the
> very best. All that required is a 10-bit ADC with the proper gain and
> offset.
Yes, but you might want to do some of the gain in the ADC to simplify the design (0.1% precision VGAs are not cheap either), and then there are repeatability and granularity to consider, so a designer might want to ensure the 'weak link' is the sensor. Combine all those, and designers can rightly choose 12-14 bit ADCs for systems with 10-bit sensor precision.

-jg
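A rough numerical sketch of that margin argument, in C, with assumed numbers rather than anything from Jim's post: a 0.1% sensor is worth about 10 bits, but once the signal only occupies part of the ADC range (no gain stage) and the converter's steps are to sit a few times below the sensor's error, the requirement lands in the 12-14 bit region.

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double sensor_accuracy = 0.001; /* 0.1 % of full scale (sensor spec)    */
    const double span_used       = 0.25;  /* assumed: signal uses 1/4 of the ADC
                                             range because there is no gain stage */
    const double margin          = 4.0;   /* assumed: ADC steps 4x finer than
                                             the sensor error                     */

    double sensor_bits = log2(1.0 / sensor_accuracy);
    double adc_bits    = log2(margin / (sensor_accuracy * span_used));

    printf("sensor alone is worth about %.1f bits\n", sensor_bits);          /* ~10 */
    printf("ADC needed so it is not the weak link: about %.1f bits\n",
           adc_bits);                                                         /* ~14 */
    return 0;
}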
On 2007-06-06, CBFalconer <cbfalconer@yahoo.com> wrote:
> mw wrote:
>>
>> Plenty of sensors are read with 16-bit ADCs. Examples: pressure
>> sensor, strain gauge, position sensors, etc. Just take a look
>> at the Analog Devices app notes.
>
> Yes, but the question is why? That gives a result to 1 part in
> 65535 over the range, and even if we assign the high order bit to
> allow overruns by a factor of 2, it is still 1 part in 32767. I
> find such precision requirements to be extremely rare.
Often the requirement is simply for resolution; absolute accuracy isn't too important. This is the case for audio, for instance. Other applications can have higher accuracy requirements for unexpected reasons.

While I'm certainly not an expert, as an example I'm reasonably familiar with astronomical imaging, which is often done with 16-bit (monochrome) CCDs. It might sound like overkill considering the eye is only good for eight bits, but there is often a heck of a lot of processing after image capture. Brightness/contrast is almost invariably tweaked, with rounding errors as a result. More advanced techniques can combine hundreds or even thousands of images, and the rounding errors can accumulate easily. This can apply to many other areas where a lot of processing is performed: better to start off with much more than you need so that you have enough usable bits left after all the manipulations have been done.

--
Andrew Smallshaw
andrews@sdf.lonestar.org
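A small C sketch of the processing-headroom point (the 4x darken/brighten round trip is an arbitrary illustrative operation, not anything from Andrew's workflow): if the intermediate result is held at the same 8-bit granularity as the display, most of the distinct grey levels collapse and cannot be recovered, while a 16-bit intermediate keeps them all.

#include <stdio.h>
#include <math.h>

/* Darken a 0..255 gradient by 4x, quantize the intermediate to 'step',
 * brighten it back, and count how many distinct levels survive. */
static int distinct_levels_after_roundtrip(double step)
{
    int seen[256] = {0}, count = 0;

    for (int v = 0; v < 256; v++) {
        double dark    = round((v * 0.25) / step) * step; /* darken, quantize */
        int    restore = (int)round(dark * 4.0);          /* brighten back    */
        if (restore > 255) restore = 255;
        if (!seen[restore]) { seen[restore] = 1; count++; }
    }
    return count;
}

int main(void)
{
    printf("8-bit intermediate : %d distinct levels survive\n",
           distinct_levels_after_roundtrip(1.0));          /* ~65 of 256 */
    printf("16-bit intermediate: %d distinct levels survive\n",
           distinct_levels_after_roundtrip(1.0 / 256.0));  /* all 256    */
    return 0;
}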
On Wed, 06 Jun 2007 02:53:28 -0400, CBFalconer <cbfalconer@yahoo.com>
wrote:

> mw wrote:
>> Vladimir Vassilevsky wrote:
>>
>>> I am wondering of what could be a sensor which requires the ADC
>>> with the true 16-bit accuracy. For the sensor application, that
>>> sounds unreasonable to me. Especially considering that the rest
>>> of application is handled by a small micro. Apparently there is
>>> a problem with the concept.
>>
>> Plenty of sensors are read with 16-bit ADCs. Examples: pressure
>> sensor, strain gauge, position sensors, etc. Just take a look
>> at the Analog Devices app notes.
>
> Yes, but the question is why? That gives a result to 1 part in
> 65535 over the range, and even if we assign the high order bit to
> allow overruns by a factor of 2, it is still 1 part in 32767. I
> find such precision requirements to be extremely rare.
This assumes that all 65536 possible values occur with equal likelihood. However, if the signal has an exponential distribution, the interesting values are at one end of the scale. In the case of audio, the interesting part is the values in the mid-range. For instance, to generate the telephone u-law/A-law signal with 8-bit message words, a 12-bit linear ADC should be used.

You have to know something about the signal distribution in order to decide whether 16 (linear) bits are enough or not.

Paul
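As a concrete example of that kind of exponential mapping, here is the textbook form of the G.711 mu-law encoder in C: it packs a wide linear sample into an 8-bit codeword by spending most of the codes near zero. This is a generic sketch, not code from anyone in this thread; it takes the 14-bit linear sample range the G.711 segment tables are defined for.

#include <stdint.h>

#define MULAW_BIAS  33       /* standard bias added before segmenting    */
#define MULAW_CLIP  0x1FFF   /* 13-bit magnitude limit (14-bit signed)   */

uint8_t linear14_to_mulaw(int16_t sample)   /* sample in -8192..8191 */
{
    uint8_t sign = 0;
    int mag = sample;

    if (mag < 0) { sign = 0x80; mag = -mag; }
    mag += MULAW_BIAS;
    if (mag > MULAW_CLIP) mag = MULAW_CLIP;

    /* segment = position of the highest set bit in bits 12..5 */
    int exponent = 7;
    for (int mask = 0x1000; (mag & mask) == 0 && exponent > 0; mask >>= 1)
        exponent--;

    int mantissa = (mag >> (exponent + 1)) & 0x0F;
    return (uint8_t)~(sign | (exponent << 4) | mantissa);
}

Near zero the step size is 2 counts, at the top of the range it is 256 counts, which is why the few interesting small values keep their resolution in only 8 transmitted bits.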
On Jun 6, 10:52 am, Jim Granville <no.s...@designtools.maps.co.nz>
wrote:
> Vladimir Vassilevsky wrote:
>
> > mw wrote:
> >
> >> Vladimir Vassilevsky wrote:
> >>
> >>> I am wondering of what could be a sensor which requires the ADC with
> >>> the true 16-bit accuracy. For the sensor application, that sounds
> >>> unreasonable to me. Especially considering that the rest of
> >>> application is handled by a small micro. Apparently there is a
> >>> problem with the concept.
> >
> >> Plenty of sensors are read with 16-bit ADCs. Examples: pressure
> >> sensor, strain gauge, position sensors, etc.
> >
> > And there is absolutely no need for the 16 bit accuracy in all of those
> > cases, because the sensors are only accurate to somewhat 0.1% at the
> > very best. All that required is a 10-bit ADC with the proper gain and
> > offset.
>
> Yes, but you might want to do some gain in the ADC, to simplify the
> design (0.1% precision VGAs are not cheap either), and then there
> is repeatability and granularity, to consider, so a designer might want
> to ensure the 'weak link' is dominated by the Sensor.
> Combine all those, and designer's can rightly choose 12-14b ADCs, for
> systems with 10b sensor precisions.
>
> -jg
Yeah, we use 12-bit A/Ds all the time with 1% (7-bit) sensors, so we don't have to use any offset/gain stages.
On Wed, 06 Jun 2007 15:37:01 -0700, the renowned steve
<bungalow_steve@yahoo.com> wrote:

>On Jun 6, 10:52 am, Jim Granville <no.s...@designtools.maps.co.nz>
>wrote:
>> Vladimir Vassilevsky wrote:
>>
>> > mw wrote:
>>
>> >> Vladimir Vassilevsky wrote:
>>
>> >>> I am wondering of what could be a sensor which requires the ADC with
>> >>> the true 16-bit accuracy. For the sensor application, that sounds
>> >>> unreasonable to me. Especially considering that the rest of
>> >>> application is handled by a small micro. Apparently there is a
>> >>> problem with the concept.
>>
>> >> Plenty of sensors are read with 16-bit ADCs. Examples: pressure
>> >> sensor, strain gauge, position sensors, etc.
>>
>> > And there is absolutely no need for the 16 bit accuracy in all of those
>> > cases, because the sensors are only accurate to somewhat 0.1% at the
>> > very best. All that required is a 10-bit ADC with the proper gain and
>> > offset.
>>
>> Yes, but you might want to do some gain in the ADC, to simplify the
>> design (0.1% precision VGAs are not cheap either), and then there
>> is repeatability and granularity, to consider, so a designer might want
>> to ensure the 'weak link' is dominated by the Sensor.
>> Combine all those, and designer's can rightly choose 12-14b ADCs, for
>> systems with 10b sensor precisions.
>>
>> -jg
>
>yea we use 12 bit A/D's all the time with 1% (7 bit ) sensors, so we
>don't have to use any offset/gain stages
There you go - digital calibration.

Calculating the derivative (or even, sometimes, the 2nd derivative) of the PV is difficult when you have enormous quantization steps, far above the noise level of an analog system. So you need high resolution, not necessarily extreme accuracy.

Best regards,
Spehro Pefhany

--
"it's the network..."                          "The Journey is the reward"
speff@interlog.com             Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog  Info for designers: http://www.speff.com
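A quick C sketch of why coarse steps wreck a derivative term (the ramp rate and bit depths are assumed purely for illustration): a noiseless ramp of 0.0002 full-scale per sample moves by less than one 10-bit count per sample, so the first difference is mostly zeros with occasional full-LSB jumps, while at 16 bits it tracks the true slope closely.

#include <stdio.h>
#include <math.h>

/* Quantize a 0..1 full-scale value to the given number of bits. */
static double quantize(double x, int bits)
{
    double steps = ldexp(1.0, bits);   /* 2^bits */
    return floor(x * steps) / steps;
}

int main(void)
{
    const double slope = 0.0002;       /* true derivative, full-scale per sample */
    double prev10 = 0.0, prev16 = 0.0;

    printf("sample   d/dt @10-bit   d/dt @16-bit   true\n");
    for (int n = 1; n <= 10; n++) {
        double x   = n * slope;        /* noiseless ramp */
        double q10 = quantize(x, 10);
        double q16 = quantize(x, 16);
        printf("%4d     %11.6f    %11.6f    %.4f\n",
               n, q10 - prev10, q16 - prev16, slope);
        prev10 = q10;
        prev16 = q16;
    }
    return 0;
}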
Vladimir Vassilevsky wrote:
>
> Paul Keinanen wrote:
>
>> The problem with microcontrollers with on-chip ADC/DACs is that you
>> might not get the nominal 98 dB SNR due to the noise from the
>> controller.
>
> Besides, the high performance ADCs and the microcontrollers are two
> different technologies. The MCUs with the good ADC/DACs usually contain
> two separate dies in one package. For that reason they are more
> expensive than the equivalent micro + equivalent ADC/DAC.
>
>> Is DC accuracy (drifts) important in your application ?
>>
>> Is this ADC part of a control loop, in which case it would be
>> preferable that the ADC is monotonic.
>
> I am wondering of what could be a sensor which requires the ADC with the
> true 16-bit accuracy. For the sensor application, that sounds
> unreasonable to me. Especially considering that the rest of application
> is handled by a small micro. Apparently there is a problem with the
> concept.
Vladimir, apparently you have little contact with sensing. It is a dream come true to be able to measure an NTC, a platinum RTD or a thermocouple with the same ADC, thus offering true flexibility for the customer to choose sensors, or to use the same design for multiple applications. Meaning, I'm really fond of the 20-bit converters. And yes, my stuff is connected to an AVR; no need for something bigger.

Rene
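A back-of-envelope C sketch of why the extra bits buy that flexibility. The 2.5 V reference and the ~41 uV/degC type-K thermocouple sensitivity are illustrative assumptions, not details of Rene's design: the point is only that a high-resolution converter resolves a raw thermocouple to a useful fraction of a degree with no per-sensor gain stage, where a 10-bit one cannot.

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double vref    = 2.5;     /* volts, assumed ADC reference            */
    const double tc_sens = 41e-6;   /* V per degC, type-K near room temp       */

    for (int bits = 10; bits <= 20; bits += 5) {
        double lsb = vref / ldexp(1.0, bits);   /* volts per count */
        printf("%2d-bit: LSB = %8.2f uV  ->  ~%6.2f degC per count "
               "on a raw thermocouple\n",
               bits, lsb * 1e6, lsb / tc_sens);
    }
    return 0;
}

At 10 bits one count is roughly 2.4 mV, or tens of degrees on an unamplified thermocouple; at 20 bits it is a few microvolts, well under 0.1 degC.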
"Andrew Smallshaw" <andrews@sdf.lonestar.org> wrote in message
news:slrnf6e6hp.rr4.andrews@sdf.lonestar.org...
> Often the requirement is simply for resolution, absolute accuracy
> isn't too important. This is the case for audio for instance.
> Other applications can have higher accuracy requirements for
> unexpected reasons.
>
> While I'm certainly not an expert, as an example I'm reasonably
> familiar with astronomical imaging, which is often done with 16
> bit (monochrome) CCDs. It might sound overkill considering the
> eye is only good for eight bits but there is often a heck of a lot
> of processing after image capture.
As long as your captured image lies within the dynamic range of an 8-bit converter, it is of course ridiculous to use a 16-bit converter just to give you the dynamic range for calculations.

Meindert
In news:slrnf6e6hp.rr4.andrews@sdf.lonestar.org timestamped Wed, 6 Jun
2007 20:34:07 +0000 (UTC), Andrew Smallshaw <andrews@sdf.lonestar.org>
posted:
     "[..]
     
     Often the requirement is simply for resolution, absolute accuracy
     isn't too important.  This is the case for audio for instance.
     [..]"

Hello,

I do not understand the distinction. I agree that absolute accuracy is
not always important and that the ten most significant bits of a low
quality 16 bit analog to digital converter might not be as faithful as
a high quality ten bit analog to digital converter, and I agree that
the least significant bits of an analog to digital converter are less
likely to be as faithful as the most significant bits, but I do not
believe that a sixteen bit ADC is equivalent to nothing except a ten
bit ADC whose output is left-shifted by six bits. That would result in
a datatype which has a resolution of 16 bits but clearly no more
accuracy than a reading of ten bits. I believe Andrew Smallshaw was
talking about something else but I do not understand what. Would you
care to explain?
     
     "[..] the
     eye is only good for eight bits [..]
     
     [..]"

I do not know what the limit is, but I believe that it is
significantly above sixteen bits and below 33 bits. I believe
that much true color graphical work is done at 24 bits.

Regards,
Colin Paul Gloster
Colin Paul Gloster wrote:
> In news:slrnf6e6hp.rr4.andrews@sdf.lonestar.org timestamped Wed, 6 Jun
> 2007 20:34:07 +0000 (UTC), Andrew Smallshaw <andrews@sdf.lonestar.org>
<snip>
>
>      "[..] the
>      eye is only good for eight bits [..]
>
>      [..]"
>
> I do not know what the limit is, but I believe that it is
> significantly above sixteen bits and below 33 bits. I believe
> that much true color graphical work is done at 24 bits.
>
I don't want to sound like a net-nanny (we have others in this group), but would you *please* learn to use your newsreader and stop your absurd quotation style?

Andrew was referring to monochrome resolution, and yes, 8 bits is a reasonable guess for the eye. The dynamic range of the eye, however, is very much larger - a monochrome CCD with no options to control the shutter speed or aperture would need close to 32 bits to get the full range an eye can work with. So a 16-bit CCD sounds like a reasonable compromise.

The reason high-end graphics work is done using more than 8-bit resolution is to have overhead for working with the picture without losing accuracy, and so that it can be displayed (on-screen or in print) in higher resolution, letting the viewer see full contrast no matter which part of the picture he concentrates on at the time.
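The arithmetic behind that "close to 32 bits" claim is just bits = log2(max/min). The contrast ratios in this little C sketch are commonly quoted ballpark figures, used only to show how the conversion works, not measurements from anyone in this thread.

#include <stdio.h>
#include <math.h>

/* Convert a luminance contrast ratio into the equivalent number of ADC bits. */
static void show(const char *what, double ratio)
{
    printf("%-40s %12.0f : 1  ->  %4.1f bits\n", what, ratio, log2(ratio));
}

int main(void)
{
    show("eye, single adapted scene (approx.)",  1e4);      /* ~13 bits */
    show("eye, full adaptation range (approx.)", 1e9);      /* ~30 bits */
    show("16-bit CCD (quantization range only)", 65536.0);  /*  16 bits */
    return 0;
}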