> In article <p72dndn_Y4-U2ubSnZ2dnUVZ_r6dnZ2d@web-ster.com>,
> tim@seemywebsite.com says...
>>
>> On Fri, 30 Mar 2012 04:08:38 -0700, j.m.granville wrote:
>>
>> > On Wednesday, March 28, 2012 6:56:49 AM UTC+12, Walter Banks wrote:
>> >> I did a fixed point support package for our 8 bit embedded systems
>> >> compilers and one interesting metric came out of the project.
>> >>
>> >> Given a number of bits in a number and similar error checking fixed or
>> >> float took very similar amounts of execution time and code size in
>> >> applications.
>> >>
>> >> For example 32 bit float and 32 bit fixed point. They are not exact but
>> >> they are close. In the end much to my surprise the choice is dynamic
>> >> range or resolution.
>> >
>> > That makes sense for 8 bit cores, but there is another issue besides
>> > speed the OP may need to consider, and that is granularity.
>> >
>> > We had one application where floating point was more convenient, but
>> > gave lower precision than a 32*32:64/32 because the float uses 23+1
>> > bits to store the number. The other bits are exponent, and give dynamic
>> > range, but NOT precision.
>> >
>> > With 24b ADCs that may start to matter and certainly with 32 bit ADCs,
>> > you would need to watch it very carefully.
>>
>> If you do any filtering at all, the 25 bits of precision often matter
>> with a _16_ bit ADC, when they aren't a show-stopper altogether. It
>> wouldn't be sensible to even _think_ about filtering the output of a 24-
>> bit ADC with single-precision floating point data paths unless the ADC
>> had been exceedingly poorly chosen or applied, and had essentially
>> useless content in the last several bits.
>
> I agree with your point about filtering with 16-bit ADCs. I generally
>> implement FIRs with about 20 taps---which is easily done
> with a 16 x 16 -> 32-bit MAC. There's no real advantage to floating
> point there, and with 16-bit data inputs, dynamic range is not
> a problem.
>
> I've usually found that getting the full 24 bits from a 24-bit ADC is
> next to impossible. The CS5534 that I've used comes with a table that
>> lists the effective number of bits vs cycle time. IIRC, you need to go
>> to 7-1/2 conversions per second to get over 20 bits. At 30 or 60
>> conversions per second, you're down in the 18-bit range. However, the
> built-in 60Hz rejection is quite helpful for some applications.
>
>> Floating point does have its uses though--where dynamic range is high
>> and some of the numbers start out very large---as in chemistry
>> calculations where you may start with constants like 6.02214x10^23.
>> 32-bit floating point may not be suitable for exactly counting the
>> hydrogen ions in a beaker of analyte, but it can give you reasonable
>> results within the limits of chemical sensors you might use
>> (such as a pH meter with a 4-digit display).
I find it can be nice for generating the final "result" when a
complicated formula is involved. Or even when the formula is not that
complicated, but there is some horrible mixture of units involved:
convert everything to floating-point SI units and just do the
calculation, instead of carefully scaling everything and checking for
loss of precision and overflows at every sub-step.
--
John Devereux
Reply by Mark Borgerson●April 4, 2012
In article <p72dndn_Y4-U2ubSnZ2dnUVZ_r6dnZ2d@web-ster.com>,
tim@seemywebsite.com says...
>
> On Fri, 30 Mar 2012 04:08:38 -0700, j.m.granville wrote:
>
> > On Wednesday, March 28, 2012 6:56:49 AM UTC+12, Walter Banks wrote:
> >> I did a fixed point support package for our 8 bit embedded systems
> >> compilers and one interesting metric came out of the project.
> >>
> >> Given a number of bits in a number and similar error checking fixed or
> >> float took very similar amounts of execution time and code size in
> >> applications.
> >>
> >> For example 32 bit float and 32 bit fixed point. They are not exact but
> >> they are close. In the end much to my surprise the choice is dynamic
> >> range or resolution.
> >
> > That makes sense for 8 bit cores, but there is another issue besides
> > speed the OP may need to consider, and that is granularity.
> >
> > We had one application where floating point was more convenient, but
> > gave lower precision than a 32*32:64/32 because the float uses 23+1
> > bits to store the number. The other bits are exponent, and give dynamic
> > range, but NOT precision.
> >
> > With 24b ADCs that may start to matter and certainly with 32 bit ADCs,
> > you would need to watch it very carefully.
>
> If you do any filtering at all, the 25 bits of precision often matter
> with a _16_ bit ADC, when they aren't a show-stopper altogether. It
> wouldn't be sensible to even _think_ about filtering the output of a 24-
> bit ADC with single-precision floating point data paths unless the ADC
> had been exceedingly poorly chosen or applied, and had essentially
> useless content in the last several bits.
I agree with your point about filtering with 16-bit ADCs. I generally
implement FIRs with about 20 taps---which is easily done
with a 16 x 16 -> 32-bit MAC. There's no real advantage to floating
point there, and with 16-bit data inputs, dynamic range is not
a problem.
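A minimal C sketch of such a filter, assuming Q15 coefficients. The tap
values here are a hypothetical 1/20 moving average (32768/20 ~= 1638),
not taken from any real design:

```c
#include <stdint.h>

#define NTAPS 20

/* Hypothetical Q15 coefficients: a plain 1/20 moving average.
 * A real filter's taps would come from a design tool. */
static const int16_t coeff[NTAPS] = {
    1638, 1638, 1638, 1638, 1638, 1638, 1638, 1638, 1638, 1638,
    1638, 1638, 1638, 1638, 1638, 1638, 1638, 1638, 1638, 1638
};

/* One output sample: a 16 x 16 -> 32-bit MAC over the taps.
 * Worst case |acc| < 32768 * 20 * 1638 < 2^31, so the 32-bit
 * accumulator cannot overflow with these coefficients. */
int16_t fir_step(const int16_t history[NTAPS])  /* history[0] = newest */
{
    int32_t acc = 0;
    for (int i = 0; i < NTAPS; i++)
        acc += (int32_t)coeff[i] * history[i];  /* 16x16 -> 32 MAC */
    return (int16_t)(acc >> 15);                /* drop the Q15 scaling */
}
```

With all-1000 input samples this returns 999 (the Q15 truncation costs
one count on a DC input); a rounding add of `1 << 14` before the shift
would fix that if it mattered.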
I've usually found that getting the full 24 bits from a 24-bit ADC is
next to impossible. The CS5534 that I've used comes with a table that
lists the effective number of bits vs cycle time. IIRC, you need to go
to 7-1/2 conversions per second to get over 20 bits. At 30 or 60
conversions per second, you're down in the 18-bit range. However, the
built-in 60Hz rejection is quite helpful for some applications.
Floating point does have its uses though--where dynamic range is high
and some of the numbers start out very large---as in chemistry
calculations where you may start with constants like 6.02214x10^23.
32-bit floating point may not be suitable for exactly counting the
hydrogen ions in a beaker of analyte, but it can give you reasonable
results within the limits of chemical sensors you might use
(such as a pH meter with a 4-digit display).
Mark Borgerson
Reply by Paul●April 4, 2012
In article <87sjgkj0bs.fsf@devereux.me.uk>, john@devereux.me.uk says...
>
> Anders.Montonen@kapsi.spam.stop.fi.invalid writes:
>
> > John Devereux <john@devereux.me.uk> wrote:
> >
>> The only actual chip I have heard of is a sigma-delta from TI. Of
>> course 8-10 of these bits are marketing. I would look it up for you but
>> the flash selection tool is still "initializing" for me on their site...
> >
> > Off-topic, but as far as I can tell TI are not using Flash in any of
> > their selection tools, only HTML5. Unfortunately their backend sometimes
> > glitches out, usually when you need to look up one of their
> > components.
>
> Oh really? Good for them. I apologise to TI, I admit I was using quite
> an old browser.
>
> In fact it seems to work very well in a slightly more modern one. It is
> one of the few such manufacturer "selection tools" that uses the whole
> width of the browser window. Most are crippled to uselessness by some
> stupid marketeer's desire to exactly control appearance.
Reply by ●April 3, 2012
John Devereux <john@devereux.me.uk> wrote:
> The only actual chip I have heard of is a sigma-delta from TI. Of
> course 8-10 of these bits are marketing. I would look it up for you but
> the flash selection tool is still "initializing" for me on their site...
Off-topic, but as far as I can tell TI are not using Flash in any of
their selection tools, only HTML5. Unfortunately their backend sometimes
glitches out, usually when you need to look up one of their components.
Anyway, their ADS1281/1282 advertise a 31 bit resolution. The ADS1282-HT
high-temperature variant is even available in DIP packaging for the low,
low price of $218.75 ea.
-a
Reply by John Devereux●April 3, 2012
Mark Borgerson <mborgerson@comcast.net> writes:
> In article <18231389.1481.1333105718864.JavaMail.geo-discussion-
> forums@yneo2>, j.m.granville@gmail.com says...
>>
>> On Wednesday, March 28, 2012 6:56:49 AM UTC+12, Walter Banks wrote:
>> > I did a fixed point support package for our 8 bit embedded systems
>> > compilers and one interesting metric came out of the project.
>> >
>> > Given a number of bits in a number and similar error checking fixed
>> > or float took very similar amounts of execution time and code size
>> > in applications.
>> >
>> > For example 32 bit float and 32 bit fixed point. They are not exact
>> > but they are close. In the end much to my surprise the choice is
>> > dynamic range or resolution.
>>
>> That makes sense for 8 bit cores, but there is another issue besides speed the OP may need to consider, and that is granularity.
>>
>> We had one application where floating point was more convenient, but gave lower precision than a 32*32:64/32 because the float uses 23+1 bits to store the number. The other bits are exponent, and give dynamic range, but NOT precision.
>>
>> With 24b ADCs that may start to matter and certainly with 32 bit ADCs, you would need to watch it very carefully.
>>
> Have you actually found and used a 32-bit ADC? For an ADC with a 5V
> range, that would mean just a few nanovolts per LSB!!!
The only actual chip I have heard of is a sigma-delta from TI. Of
course 8-10 of these bits are marketing. I would look it up for you but
the flash selection tool is still "initializing" for me on their site...
The best ADC I have seen is in an HP 3458A meter, the equivalent of a
28-bit chip ADC.
It might just be possible to make a 32-bit ADC using a Josephson
junction array, if you have a liquid helium supply handy :)
[...]
--
John Devereux
Reply by Mark Borgerson●April 3, 2012
In article <18231389.1481.1333105718864.JavaMail.geo-discussion-
forums@yneo2>, j.m.granville@gmail.com says...
>
> On Wednesday, March 28, 2012 6:56:49 AM UTC+12, Walter Banks wrote:
> > I did a fixed point support package for our 8 bit embedded systems
> > compilers and one interesting metric came out of the project.
> >
> > Given a number of bits in a number and similar error checking fixed
> > or float took very similar amounts of execution time and code size
> > in applications.
> >
> > For example 32 bit float and 32 bit fixed point. They are not exact
> > but they are close. In the end much to my surprise the choice is
> > dynamic range or resolution.
>
> That makes sense for 8 bit cores, but there is another issue besides speed the OP may need to consider, and that is granularity.
>
> We had one application where floating point was more convenient, but gave lower precision than a 32*32:64/32 because the float uses 23+1 bits to store the number. The other bits are exponent, and give dynamic range, but NOT precision.
>
> With 24b ADCs that may start to matter and certainly with 32 bit ADCs, you would need to watch it very carefully.
>
Have you actually found and used a 32-bit ADC? For an ADC with a 5V
range, that would mean just a few nanovolts per LSB!!!
> Compiler suppliers for 32-bit cores really should provide optimised libraries for gain/scale-type calibrations that use a 64-bit result in the intermediate steps.
My experience is that I'm lucky to get 20 noise-free bits on any system
actually connected to an MPU (for a single conversion). Still, that
would push the limits on FP with only 24 bits in the mantissa if I were
to do any significant oversampling. I remember professors in
chemistry and physics warning me that the uncertainty in my final result
should have error limits corresponding to the precision of my inputs.
Still, roundoff errors could eventually degrade the result past the
limits of the input for some calculations.
The reality of the oceanographic sensors I work with is that 16 bits
gets you right into the noise level of the real world for most
experiments.
However, if you are doing long-term integrations of variable inputs,
roundoff error could come back to haunt you.
Mark Borgerson
Reply by Clifford Heath●April 1, 2012
On 03/29/12 03:20, Tim Wescott wrote:
> But on the x86 -- which is the _only_ processor that I've tried it that
> had floating point -- 32-bit fractional arithmetic is slower than 64-bit
> floating point.
I think I recall that transition point occurring around 1994.
I was writing a scalable vector graphics subsystem, and carefully using
integer (sometimes fixed-point) math wherever possible, only to find that,
when I changed the basic type of the coordinate to float (or double, I
can't recall), the system actually rendered *faster*.
The integer unit was busy computing addresses and array offsets, and
being interrupted with *coordinate* math, while the FPU lay idle.
This was still in the Pentium days, before even the 686 and PII.
On a modern note, has anyone tried to use the TI OMAP ARM CPUs?
I haven't looked at the DSP instruction set, but the hardware FP is sweet.
Clifford Heath.
Reply by ●March 30, 2012
On Wednesday, March 28, 2012 6:56:49 AM UTC+12, Walter Banks wrote:
> I did a fixed point support package for our 8 bit embedded systems
> compilers and one interesting metric came out of the project.
>
> Given a number of bits in a number and similar error checking fixed
> or float took very similar amounts of execution time and code size
> in applications.
>
> For example 32 bit float and 32 bit fixed point. They are not exact
> but they are close. In the end much to my surprise the choice is
> dynamic range or resolution.
That makes sense for 8 bit cores, but there is another issue besides speed the OP may need to consider, and that is granularity.
We had one application where floating point was more convenient, but gave lower precision than a 32*32:64/32 because the float uses 23+1 bits to store the number. The other bits are exponent, and give dynamic range, but NOT precision.
With 24b ADCs that may start to matter, and certainly with 32 bit ADCs you would need to watch it very carefully.
Compiler suppliers for 32-bit cores really should provide optimised libraries for gain/scale-type calibrations that use a 64-bit result in the intermediate steps.
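Such a library routine might look like the sketch below: a gain held in
Q2.30 fixed point, applied through a 64-bit intermediate so the full
32-bit input precision survives the multiply. The Q2.30 format choice
is illustrative, not from any particular compiler library:

```c
#include <stdint.h>

/* Apply a calibration gain and offset to a raw reading.
 * gain_q30 is signed Q2.30 fixed point (1.0 == 1 << 30), giving a
 * gain range of about +/-2 with ~1e-9 resolution. */
int32_t calibrate(int32_t raw, int32_t gain_q30, int32_t offset)
{
    int64_t acc = (int64_t)raw * gain_q30;   /* 32 x 32 -> 64 bits */
    return (int32_t)(acc >> 30) + offset;    /* scale back, then offset */
}
```

For example, a gain of 1.5 is `3 << 29` in Q2.30, and
`calibrate(1000, 3 << 29, 5)` yields 1505 with no intermediate
rounding loss.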