ADC input leakage current
Started by ●March 5, 2006

Does anyone (or Philips Techs) know the max input leakage current of the ADC channel input pins?

The LPC2148 datasheet says that the input leakage current is max 4uA, but I am thinking that this MAY apply just to the digital signal pins. We are trying to use one of the ADC channels in conjunction with a very accurate voltage divider for determining battery voltage for a battery powered application.

We need the actual SPECIFICATION max of the input leakage current (and the polarity if it can be depended upon) so as to be able to calculate a worst case error.
Reply by ●March 5, 2006
Sutton Mehaffey wrote:
>Does anyone (or Philips Techs) know the max input current leakage of
>the ADC channel input pins?
>
>The LPC2148 datasheet says that the input leakage current is max 4uA,
>but I am thinking that this MAY apply just to the digital signal pins.
>We are trying to use one of the ADC channels in conjunction with a
>very accurate voltage divider for determining battery voltage for a
>battery powered application.
>
>We need the actual SPECIFICATION max of the input leakage current
>(and the polarity if it can be depended upon) so as to be able to
>calculate a worst case error.

Just some observations from the peanut gallery, FWIW. I, too, have noticed what appears to be a definite avoidance of giving complete specs regarding pin loading / source / sink current for the LPC2000 processors. I generally see DC specs which contain a lot more detail than those of the LPC2000s. IMHO, this is deliberate?

If they are using an approximation technique for measuring the voltage, you may not have a consistent load presented by the ADC input. The assumption that they are using a fixed resistance ladder with FET switches gating the samples within the ladder may not hold. Therefore, the act of performing a conversion may present you with a varying load.

I mean, if you are *that* sensitive to changes in load, then why not eliminate them (Philips) from the unknowns and use a unity gain buffer? Place a simple op amp buffer between your voltage divider and the input to the ADC. Something with a FET input. Done. BUT, you still have the common-mode spec of the op amp to contend with...

Otherwise, you may have to lower the "source" resistance (increase the current through the divider) such that the divider current is 10..20 times the max load presented by the ADC input.
Look at it this way: if the voltage divider is drawing 1mA and the input to the ADC is at 0.004mA (source vs load ratio of 250), how much shift in the voltage would you see if the ADC input varied by 4uA (connect / disconnect)? Having a load 1/250th of the source is pretty darn negligible! Kirchhoff's law...

If you are *that* sensitive to the 4uA load, perhaps your units would need to be individually calibrated to match the processor's ADC input load, op amp common mode, etc. With the degree of accuracy you suggest, this is probably the case in any event.

Regards,

TomW

--
Tom Walsh - WN3L - Embedded Systems Consultant
http://openhardware.net, http://cyberiansoftware.com
"Windows? No thanks, I have work to do..."
----------------
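[Editor's note] TomW's ratio argument can be put into numbers with a quick sketch. All component values below are assumed for illustration (a 15 V battery, a divider sized to draw ~1 mA and tap at ~3 V); they are not from Sutton's actual design:

```python
# Shift at the divider tap caused by worst-case ADC input leakage.
# Divider values are hypothetical: 12k over 3k from a 15 V battery,
# so the divider itself draws 15 V / 15 kOhm = 1 mA.
R_TOP = 12e3      # ohm
R_BOT = 3e3       # ohm
V_BAT = 15.0      # V
I_LEAK = 4e-6     # A, datasheet worst-case input leakage

v_unloaded = V_BAT * R_BOT / (R_TOP + R_BOT)   # 3.0 V at the tap
r_thevenin = R_TOP * R_BOT / (R_TOP + R_BOT)   # 2.4 kOhm seen by the leakage
v_shift = I_LEAK * r_thevenin                  # ~9.6 mV worst case
print(v_shift / v_unloaded)                    # ~0.3% shift: small, as TomW says
```

Note that what matters is the Thevenin impedance at the tap (2.4k here), not the 100k Sutton mentions later; a stiffer divider buys a proportionally smaller leakage error at the cost of standing current.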
Reply by ●March 5, 2006
Tom,

I expect to put a tantalum cap of perhaps 1uF or so on the voltage divider junction and sample very infrequently, perhaps once per second. Perhaps it is "no worries" about what the analog input leakage current is in such circumstances. If it is 4uA then the offset on a 100k source Z would be only about 40mV, and reflected up the divider for a 15 volt max reading would equal a measurement error of about 200mV, which is NOT inclusive of resistor tolerance errors. However, if the leakage were actually specified and I knew the polarity of the analog input leakage current (and it did not change direction under any circumstances), I would be able to design for a typical setting with knowledge of the worst case parameters.

Without such information, it is a "shot in the dark" as to what the unit-to-unit error spread in production will be. I do not like to use pots or select-at-test resistors. I would much rather have a fully characterized part.

As to the option of a unity gain amp in front of the input: added cost, plus the added power drain, make that option unattractive unless I cannot get it to work otherwise.

Sutton

--- In lpc2000@lpc2..., Tom Walsh <tom@...> wrote:
> [...]
Reply by ●March 5, 2006
--- In lpc2000@lpc2..., "Sutton Mehaffey" <sutton@...>
wrote:
>
> [...]
> If it is 4uA then the offset on a
> 100k source Z would be only about 40mv and reflected up the divider
> for a 15 volt max reading would equal a measurement error of about
> 200mv which is NOT inclusive of resistor tolerance errors.
> [...]
>
> Sutton
Other manufacturers (notably Microchip) specify that the source
impedance of analog signals must be below 10k ohms. I certainly
wouldn't be driving the converter with 100k.
Richard
Reply by ●March 5, 2006
At 11:35 PM 3/5/2006 +0000, rtstofer wrote:
>--- In lpc2000@lpc2..., "Sutton Mehaffey" <sutton@...> wrote:
> > I expect to put a tantalum cap of perhaps 1uf or so on the voltage
> > divider junction and sample very infrequently, perhaps once per
> > second. Perhaps it is "no worries" about what the analog input leakage
> > current is in such circumstances. If it is 4uA then the offset on a
> > 100k source Z would be only about 40mv and reflected up the divider
>
>Other manufacturers (notably Microchip) specify that the source
>impedance of analog signals must be below 10k ohms. I certainly
>wouldn't be driving the converter with 100k.

He won't be driving the input with 100K (from what I read of his description, anyway); he'll be driving it with a 1uF cap. It presumably has an impedance well below 100K ohms. That should also swamp the input capacitance of the sample and hold. Now he just needs to be concerned about the leakage across the cap as well.

Robert

" 'Freedom' has no meaning of itself. There are always restrictions, be they legal, genetic, or physical. If you don't believe me, try to chew a radio signal. " -- Kelvin Throop, III
http://www.aeolusdevelopment.com/
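[Editor's note] Robert's "swamp the sample and hold" point is easy to quantify. Assuming a 1 uF reservoir cap and an internal sample capacitance in the few-pF range (8 pF is a deliberately pessimistic assumption; the exact value is not given here):

```python
# Charge sharing between the external reservoir cap and the ADC's
# internal sample-and-hold cap, worst case (S/H cap starts at 0 V).
C_EXT = 1e-6      # F, external tantalum on the divider tap
C_SH = 8e-12      # F, assumed pessimistic sample-and-hold capacitance
V0 = 3.0          # V on the external cap just before the sample

# Charge conservation: Q = C_EXT*V0 redistributes over both caps.
v_after = V0 * C_EXT / (C_EXT + C_SH)
droop = V0 - v_after
print(droop)      # a few tens of microvolts per sample: negligible
```

So each conversion steals only microvolts from the reservoir; the dominant error term goes back to being the DC leakage through the 100k divider, exactly as the thread goes on to discuss.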
Reply by ●March 5, 2006
Hi Sutton,
Just one small problem, 4uA on 100K is 400mV, not 40mV, which when
reflected up the divider likely means an error of 2V.
Mike Anton
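[Editor's note] Mike's correction, written out. A 5:1 divider (15 V battery scaled to ~3 V at the pin) is assumed here to match the 15 V max reading Sutton describes; the 100k is the source impedance he quoted:

```python
# Worst-case DC error from ADC input leakage through the divider's
# source impedance, then referred back up to the battery voltage.
I_LEAK = 4e-6           # A, datasheet max input leakage
R_SOURCE = 100e3        # ohm, source impedance Sutton quoted
DIV_RATIO = 15.0 / 3.0  # assumed 5:1 divider (15 V battery, ~3 V at pin)

v_offset_pin = I_LEAK * R_SOURCE          # 0.4 V, not 0.04 V
v_error_battery = v_offset_pin * DIV_RATIO
print(v_offset_pin, v_error_battery)      # ~0.4 V at the pin, ~2 V at the battery
```

Hence the factor-of-ten slip matters: 400 mV at the pin is an enormous error on a 3 V range, which is why the later replies push toward a much lower source impedance or a buffer.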
> -----Original Message-----
> From: lpc2000@lpc2...
> [mailto:lpc2000@lpc2...]On Behalf
> Of Sutton Mehaffey
> Sent: Sunday, March 05, 2006 4:24 PM
> To: lpc2000@lpc2...
> Subject: [lpc2000] Re: ADC input leakage current
>
> [...]
> If it is 4uA then the offset on a
> 100k source Z would be only about 40mv and reflected up the divider
> for a 15 volt max reading would equal a measurement error of about
> 200mv which is NOT inclusive of resistor tolerance errors.
> [...]
>
> Sutton
Reply by ●March 6, 2006
Hi Richard,
Microchip do more than that - they give an explanation of why they
suggest that impedance.
The Microchip switched-capacitor successive-approximation ADC
takes a sample by charging up an internal capacitor (through an
analog switch).
When you tell it to convert, the analog switch is opened and then
the conversion takes place on the voltage stored on that
capacitor. (I do not know if the voltage on the capacitor remains
unchanged or if it is left at a random voltage).
When the conversion is complete, the analog switch is closed again.
The impedance of your analog source (and the switch impedance)
determine the time it takes to charge the internal capacitor before
you dare start the next conversion.
This is known as the acquisition time.
It is up to YOU to provide the delay for the acquisition time. The ADC
has no hardware to assist you.
Also if you change the analog source, you have to wait the acquisition
time before initiating a conversion - or you will get a reading
somewhere between the old source voltage and the new source
voltage.
I have not seen anything that tells me the LPC family does A-to-D
conversion in a similar way. But figure 7 of the LPC2148 data sheet
gives values which can be interpreted in the "switched-capacitor"
way:
5pF is stray capacitance of the pin, the bond pad and the bond wire.
20k is the on-resistance of the analog switch
3pF is the capacitor you are trying to charge during acquisition.
So the acquisition time is the time it takes to charge that 3pF
through (20k + your source impedance) to the desired precision.
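[Editor's note] Danish's model turns directly into a settling-time estimate. A minimal sketch, assuming the figure-7 values he quotes (3 pF sampling cap, 20k switch resistance) and a 10-bit conversion settled to within 1/2 LSB:

```python
import math

# Acquisition-time estimate for a switched-capacitor ADC front end:
# the 3 pF sampling cap charges through (20k switch + source impedance).
C_SAMPLE = 3e-12   # F, internal sampling capacitor (figure 7)
R_SWITCH = 20e3    # ohm, analog switch on-resistance (figure 7)
N_BITS = 10        # LPC2148 ADC resolution

def acquisition_time(r_source):
    """Time for the RC exponential to settle within 1/2 LSB of N_BITS."""
    tau = (R_SWITCH + r_source) * C_SAMPLE
    # Require e^(-t/tau) < 2^-(N_BITS + 1)  =>  t > tau * ln(2^(N_BITS+1))
    return tau * math.log(2 ** (N_BITS + 1))

print(acquisition_time(100e3))   # ~2.7 us with a 100k source
print(acquisition_time(10e3))    # ~0.7 us with a 10k source
```

Under this (assumed) model, even a 100k source only needs microseconds of acquisition, so the real argument against high source impedance is the DC leakage error discussed above, not the charging time, provided you actually allow the acquisition delay.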
Leakage current would be dominated by the "digital" leakage
of +/- 3uA.
Does anyone know how the Philips 8051 family ADC behaved?
I suspect the LPC2000 ADC is based on that one.
Hope this helps,
Danish
--- In lpc2000@lpc2..., "rtstofer" <rstofer@...> wrote:
>
>
> Other manufacturers (notably Microchip) specify that the source
> impedance of analog signals must be below 10k ohms. I certainly
> wouldn't be driving the converter with 100k.
>
> Richard
>
Reply by ●March 6, 2006
At 09:01 PM 3/6/2006 +0000, Danish Ali wrote:
>The microchip switched-capacitor successive-approximation ADC
>takes a sample by charging up an internal capacitor (through an
>analog switch).
>When you tell it to convert, the analog switch is opened and then
>the conversion takes place on the voltage stored on that
>capacitor. (I do not know if the voltage on the capacitor remains
>unchanged or if it is left at a random voltage).

That's a description of a sample-and-hold circuit, not of a switched-capacitor SA converter. Most A/Ds work this way; the biggest exceptions would be those that require an external sample-and-hold circuit, and possibly flash converters. It would be possible to have an op-amp buffer in front of the sample and hold, but I don't know of any micros offhand that do (OK, there's an opportunity for someone :)

A switched-capacitor converter is probably the most common microcontroller converter, and I would be somewhat surprised if the LPC converter were something different, but that's masked by the sample-and-hold circuitry.

Note that the S/H cap also means that if you scan across channels too quickly you can introduce crosstalk between channels.

Robert