converting float to ascii w/o printf

Started by Ron Blancarte March 27, 2007
Now, I have googled this some, without much in the way of results, so I
thought I would ask here to see what kind of luck I would have.

I am using the Analog Devices ADuC831 (8052 ISA) part to do data
acquisition and transmission.  I have some data that is being held as
floating point (single precision IEEE 754 format).  Some of this data
needs to be outputted to serial on request.  This is where my request
comes in.  Currently I am using sprintf() to format this output into
my buffer.  HOWEVER, the footprint of this code is making it less than
desirable to use (over 1100 bytes of code space and nearly 30 bytes of
memory).  So I am looking to do the output on my own.

And to this end, I have succeeded, to an extent.  So far I have
removed my need for sprintf() in printing HEX format, as well as ints
and chars.  My problem is with floats.  I have not yet attempted
standard notation:
			3.1415926
But I did write an exponent formula.
			3.141593E+00
I even got the rounding.  HOWEVER, this is using simple multiplies and
divides, by 10, to achieve its goals.  And this introduces my
problem.  Since a few of the numbers go out to 6 digits of precision,
I am having problems with these last digits outputting correctly.  For
example:

Value entered:		6.791775E-08
Value displayed:	6.791777E-08
Hex value on chip:	  0x3391DA2E (6.7917753e-8)

Clearly, the entered value is being stored correctly and the displayed
value, while close, is just slightly off.  So is there a way to do
this without this error induced by my multiplications by 10 (and still
not using sprintf())?

RonB


--------------------------------------------------
"It is human nature to take shortcuts in thinking"
--------------------------------------------------
On Tue, 27 Mar 2007 15:45:49 -0500, Ron Blancarte
<ron@---TAKETHISOUT---.blancarte.com> wrote:

>So is there a way to do this without this error induced by my
>multiplications by 10 (and still not using sprintf())?
Depending on exactly which development environment you are using,
check if you have the ftoa function available. Quite a lot of the
compilers for the 8 bit MCUs include this function.

Do you really need to store the information in floating point format?
The 8051 is not particularly suitable for handling floating point.
Keeping the data in integer or fixed point format, and only converting
to floating point when necessary, should help reduce your code
footprint.

Regards
  Anton Erasmus
"Ron Blancarte" <ron@---TAKETHISOUT---.blancarte.com> wrote in message 
news:c0ui03dctcb8t4s04fmrs65v837jimars1@4ax.com...

> Clearly, the entered value is being stored correctly and the displayed
> value, while close, is just slightly off.  So is there a way to do
> this without this error induced by my multiplications by 10 (and still
> not using sprintf())?
The correct way of doing this is to only do one normalizing multiply
or divide by a power of 10, so you only get one roundoff error. Powers
of 10 are exact in floats up to 10^10; for a wider dynamic range you
get multiple roundoff errors unless you use more precision. After
normalization you use integer arithmetic to extract the digits.

My advice would be to use integer-only arithmetic for normalization;
this way you get less roundoff error over much larger ranges.

Getting this right is non-trivial; that is why floating point printf
is so large. If you don't mind a large roundoff error then you can
make it smaller, but it is extremely hard to make it both small and
accurate...

Wilco
Wilco Dijkstra wrote:
> The correct way of doing this is to only do one normalizing multiply
> or divide by a power of 10, so you only get one roundoff error. Powers
> of 10 are exact in floats up to 10^10; for a wider dynamic range you
> get multiple roundoff errors unless you use more precision. After
> normalization you use integer arithmetic to extract the digits.
Agreed.
> My advice would be to use integer-only arithmetic for normalization;
> this way you get less roundoff error over much larger ranges.
Hmmm, I don't understand the recommendation here. I would say multiply
or divide by the proper (exact) power of 10 to get a number in the
range -9999999..9999999, add a rounding factor (-0.5 or +0.5), convert
to a 32-bit integer, convert that to a 7-digit character string, then
format.

-- 
Thad
On Tue, 27 Mar 2007 23:19:29 +0200, Anton
Erasmus wrote:
>Depending on exactly which development environment you are using,
>check if you have the ftoa function available. Quite a lot of the
>compilers for the 8 bit MCUs include this function.
>Do you really need to store the information in floating point format?
>The 8051 is not particularly suitable for handling floating point.
>Keeping the data in integer or fixed point format, and only converting
>to floating point when necessary should help reduce your code
>footprint.
Well I have to pick my poison. Since this is for a measurement
application that does involve trig, I will take my chances with FP in
this case. Besides, that portion of my code is not the problem. Around
half of my code (in size) is a command handler, of which this output
is part. I have been able to take chunks out of it slowly, cutting
nearly 8k out of it, with some optimization and pulling out standard
functions (like replacing strcmp with my own version).

And unfortunately Keil doesn't appear to have the ftoa() function
included with it. So that is out.

RonB

--------------------------------------------------
"It is human nature to take shortcuts in thinking"
--------------------------------------------------
On Tue, 27 Mar 2007 15:45:49 -0500, Ron Blancarte
<ron@---TAKETHISOUT---.blancarte.com> wrote:

>I am using the Analog Devices ADuC831 (8052 ISA) part to do data
>acquisition and transmission. I have some data that is being held as
>floating point (single precision IEEE 754 format). Some of this data
>needs to be outputted to serial on request.
Is this serial port connected to a bigger system, such as a PC? Could
you run your own receiver program on this system?

If this is the case and you don't want to send binary data directly
over the UART, simply split the 32 bit float into eight 4 bit fields,
convert each field to hex and send over the serial line. At the
receiver end of the line, do the opposite conversion.

Since the receiving system would most likely also use IEEE 754 floats,
no extra conversion on the receiver end is needed and the value can be
displayed using printf on the receiving device.

Paul
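Paul's scheme needs no numeric conversion at all, only bit shuffling, so it is exact by construction. A minimal sketch with invented function names, assuming IEEE 754 single precision on both ends and uppercase hex on the wire:

```c
#include <stdint.h>
#include <string.h>

/* Sender side: emit the raw IEEE 754 bit pattern as eight ASCII hex
 * digits, high nibble first.  No arithmetic touches the value. */
static void float_to_hex(float f, char out[9])
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);             /* reinterpret the bits */
    for (int i = 0; i < 8; i++) {
        unsigned nib = (unsigned)((bits >> (28 - 4 * i)) & 0xFu);
        out[i] = (char)(nib < 10 ? '0' + nib : 'A' + nib - 10);
    }
    out[8] = '\0';
}

/* Receiver side: rebuild the identical bit pattern.  Uppercase hex
 * digits assumed. */
static float hex_to_float(const char *s)
{
    uint32_t bits = 0;
    for (int i = 0; i < 8; i++) {
        char c = s[i];
        bits = (bits << 4) |
               (uint32_t)(c <= '9' ? c - '0' : c - 'A' + 10);
    }
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}
```

Because only bit patterns travel, the round trip is bit-exact: the value from the thread comes back as precisely 0x3391DA2E, and the PC's printf does the hard formatting work.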
> And unfortunately Keil doesn't appear to have the ftoa() function
> included with it. So that is out.
A moderate amount of digging on the web will get you loads of C source
code for just about all those classic runtime C lib functions that
you're missing. That stuff has been in the public domain for years.

There are efficient and not so efficient versions of sprintf() and
ftoa() to be found, so you'll want to evaluate several to find the one
that suits your needs. The bonus is that you're getting source code,
so you can customize to your heart's content.

JJS
"Thad Smith" <ThadSmith@acm.org> wrote in message 
news:4609b2ec$0$47168$892e0abb@auth.newsreader.octanews.com...
> Hmmm, I don't understand the recommendation here. I would say multiply
> or divide by the proper (exact) power of 10 to get a number in the
> range -9999999..9999999, add a rounding factor (-0.5 or +0.5), convert
> to a 32-bit integer, convert that to a 7-digit character string, then
> format.
What I mean is that if you do the normalization multiply/divide using
integer arithmetic, you get more precision, e.g. 32 bits rather than
24 when using float. This gives you larger powers of 10 that can be
represented exactly and a more accurate result after normalization
(and more control over rounding). Integer arithmetic makes even more
sense if you use emulated floating point.

Wilco
fcvt and ecvt might also bear thinking about.

"John Speth" <johnspeth@yahoo.com> wrote in message 
news:eudu8n$s8b$1@aioe.org...
> There are efficient and not so efficient versions of sprintf() and
> ftoa() to be found so you'll want to evaluate several to find the one
> that suits your needs.
"Wilco Dijkstra" <Wilco_dot_Dijkstra@ntlworld.com> writes:
> What I mean is that if you do the normalization multiply/divide using
> integer arithmetic, you get more precision, e.g. 32 bits rather than
> 24 when using float. This gives you larger powers of 10 that can be
> represented exactly and a more accurate result after normalization
> (and more control over rounding). Integer arithmetic makes even more
> sense if you use emulated floating point.
And don't forget to look at the accuracy of the data before getting wound around the axle of precision. If the source of the data is an 8-bit ADC, for instance, no amount of fancy footwork is going to get you three decimal place accuracy. If the problem is ill-conditioned, not much of anything will help.
In article <pub5135u3ht8o5i56brfu83rq5vu5n5phb@4ax.com>, Anton Erasmus <nobody@spam.prevent.net> writes:
> As far as I am aware there is no main stream language available from
> small 8-bit micros up to 64 bit processors that has support for fixed
> point. This is true whether one considers Ada a main stream language
> or not. C, and maybe Forth, are probably the only languages which are
> supported across the full range of micros.
Ada is now available on the AVR. What I don't know, because I don't
use the AVR, is if Ada's fixed point packages are also available on
the AVR.

See http://avr-ada.sourceforge.net for further details if you are
interested.

Simon.

-- 
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980's technology to a 21st century world
On Mon, 02 Apr 2007 13:29:31 +0300, Niklas Holsti
<niklas.holsti@nospam.please> wrote:

>Anton Erasmus wrote:
>> A pity that none of the main stream languages has support for fixed
>> point.
>
>Ada supports fixed point. I consider it a "main stream language", but
>some perhaps don't...
As far as I am aware there is no main stream language available from
small 8-bit micros up to 64 bit processors that has support for fixed
point. This is true whether one considers Ada a main stream language
or not. C, and maybe Forth, are probably the only languages which are
supported across the full range of micros.

Regards
  Anton Erasmus
On Mon, 02 Apr 2007 13:29:31 +0300, Niklas Holsti
<niklas.holsti@nospam.please> wrote:

>Anton Erasmus wrote:
>> A pity that none of the main stream languages has support for fixed
>> point.
>
>Ada supports fixed point. I consider it a "main stream language", but
>some perhaps don't...
So do COBOL and PL/1, but are these main stream languages in the
embedded world ;-).

Paul
On Fri, 30 Mar 2007 09:28:38 -0600, Not Really
Me wrote:
>Ron,
>
>I emailed you some sample code that we wrote for the same problem on
>another member of the ADuC8xx family. Let me know if you did not
>receive the direct email.
>
>Scott
I didn't get it (or my spam filters got to it first). Try sending it
to ron dot blancarte at benchtree dot net. That is my work address and
would be where I was looking for this solution.

-R
Anton Erasmus wrote:
> A pity that none of the main stream languages has support for fixed point.
Ada supports fixed point. I consider it a "main stream language", but
some perhaps don't...

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi . @ .
On Sun, 01 Apr 2007 18:00:07 -0400, CBFalconer <cbfalconer@yahoo.com>
wrote:

>Grant Edwards wrote:
>> I've been using floating point on 8-bit platforms for 15 years,
>> and I've got no complaints.
>
>I think Anton is including omission of detection and propagation of
>INF and NAN as "short cuts". The other approach is to trap those
>operations at occurrence.
In-band signalling of special conditions is useful when there are only
a limited number of special cases. In more complex situations,
however, when the signal is processed at various stages, getting any
meaningful information through the whole chain is quite hard.

With out-of-band signalling, on the other hand, a status field is
attached to the actual value. This status field can express many more
situations, such as sensor open/short circuit, out of range etc. For
instance, in the Profibus PA fieldbus, each signal is represented by 5
bytes: four bytes for the actual floating point value and one byte for
the actual status ("quality") of the signal.

So if you are planning to use a floating point value with the sole
purpose of using in-band signalling with NAN, +INF and -INF, it might
make more sense to use a 16 or 24 bit integer value field and an 8 bit
status field, which has the same or smaller memory footprint than the
32 bit float. On an 8 bitter, less memory access is required to check
the status byte than to determine whether the float is a very large
value, INF or NAN, which requires testing up to 4 bytes. This is
significant, since the exception conditions should be checked every
time before using the value.

Paul
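The value-plus-status layout Paul describes can be sketched as a 4-byte struct. The field names and status codes below are invented for illustration (Profibus PA defines its own encoding):

```c
#include <stdint.h>

/* Out-of-band quality signalling: a scaled integer reading plus a
 * status byte, in the same 32-bit footprint as a bare float.
 * Status codes are illustrative only. */
enum {
    SIG_GOOD         = 0,
    SIG_SENSOR_OPEN  = 1,
    SIG_SENSOR_SHORT = 2,
    SIG_OUT_OF_RANGE = 3
};

typedef struct {
    int16_t value;      /* fixed-point reading, scaling chosen per channel */
    uint8_t status;     /* quality byte, checked before every use */
    uint8_t spare;      /* keeps the struct at 4 bytes on typical ABIs */
} Signal;

/* The whole validity test is one byte compare, versus having to
 * inspect the exponent and mantissa fields of a float for INF/NAN. */
static int signal_usable(const Signal *s)
{
    return s->status == SIG_GOOD;
}
```

On an 8051 the single byte compare in signal_usable() is the point: the check is cheap enough to run before every use of the value, which is exactly the discipline the in-band NAN approach makes expensive.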
On 2007-04-01, CBFalconer <cbfalconer@yahoo.com> wrote:
> ... snip ...
>>> Doing a robust floating point implementation that handles all
>>> the exceptions properly is non-trivial. Typical 8-bit
>>> implementations take short cuts.
>>
>> I've been using floating point on 8-bit platforms for 15 years,
>> and I've got no complaints.
>
> I think Anton is including omission of detection and propagation of
> INF and NAN as "short cuts". The other approach is to trap those
> operations at occurrence.
All of the implementations I've used handled INF and NAN properly.
That includes processors as small as a 6811 back in 1989 and about 8
others since then. Perhaps others are less careful when choosing their
tools?

-- 
Grant Edwards  grante at visi.com
Yow! One FISHWICH coming up!!
Grant Edwards wrote:
> On 2007-04-01, Anton Erasmus <nobody@spam.prevent.net> wrote: >
... snip ...
>>> Doing a robust floating point implementation that handles all
>>> the exceptions properly is non-trivial. Typical 8-bit
>>> implementations take short cuts.
>>
>> I've been using floating point on 8-bit platforms for 15 years,
>> and I've got no complaints.
I think Anton is including omission of detection and propagation of
INF and NAN as "short cuts". The other approach is to trap those
operations at occurrence.

-- 
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
On Sun, 01 Apr 2007 15:34:23 -0000, Grant Edwards <grante@visi.com>
wrote:

>On 2007-04-01, Anton Erasmus <nobody@spam.prevent.net> wrote:
>
>> Going to floating point because one does not properly define system
>> limits and the required ranges is just asking for trouble. If one
>> ends up with NAN or INF, this tends to propagate.
>
>I know, and that's one of the best features of IEEE floating
>point. It prevents you from having bogus (but apparently
>valid) outputs when one of the inputs is missing or invalid.
>
>> On a system with an OS, this is trapped, and one gets an
>> error. On a typical embedded system one ends up with total
>> garbage.
>
>No, you end up with INF or NAN.
Yes, but if this is not checked, and the INF or NAN is cast to an
integer to drive a DAC for example, then one is driving the DAC with
garbage. Typical embedded floating point implementations will happily
give a result if one casts NAN or INF to an integer. AFAIK there is no
guarantee that INF cast to an integer will be the maximum integer
value.

Even if one uses an OS which actually provides floating point
exception handling, it is quite difficult to stay in control if one
suddenly gets a NAN or INF because one did a
number/(big_number - (big_number - delta)) calculation and ended up
with number/0, which in turn caused the value to be NAN. To avoid this
one has to scale the whole algorithm in any case, so one can just as
well use fixed point maths.

On some systems one has to reset the floating point co-processor and
basically restart the system. For some control systems this is a VERY
bad idea.
>> Then there is the fact that with integer one can do variable+1
>> for the full range and always have the correct value.
>
>How do you represent infinity or an invalid value in an integer?
Make sure that one has enough bits to represent the range, and then
saturate at the maximum or minimum. For many control systems this is
good enough. If one gets a huge error, commanding the maximum
corrective response is all one can do in any case. The only other
invalid case then normally is divide by 0, and this can be tested for
and handled, often again by commanding the maximum corrective
response.
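The saturate-at-the-rails approach can be sketched in a few lines of C; 16-bit signed values are assumed here, as might feed a DAC:

```c
#include <stdint.h>

/* Saturating 16-bit add: widen so the true sum always fits, then clamp
 * to the rails instead of wrapping.  An out-of-range error therefore
 * commands the maximum corrective response rather than garbage. */
static int16_t sat_add16(int16_t a, int16_t b)
{
    int32_t sum = (int32_t)a + (int32_t)b;
    if (sum > INT16_MAX) return INT16_MAX;   /* pin high */
    if (sum < INT16_MIN) return INT16_MIN;   /* pin low  */
    return (int16_t)sum;
}
```

The same widen-then-clamp pattern works for subtract and multiply; the cost is a handful of instructions per operation, versus the four-byte INF/NAN tests a float-based scheme needs before every use.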
>> With floating point one quickly gets to a stage where the
>> variable does not have the right value at all.
>
>No, you _do_ end up with the right value: INF or NAN. That's
>the whole point. If an output is being calculated from a set
>of inputs where one of the inputs is invalid (NAN), then the
>output is invalid (NAN). That's very difficult to do with
>integer representations.
In a typical control system handling INF is problematical; handling
NAN is a nightmare. I have worked on a system where floating point was
used in the control algorithms as well as in a background debugging
task that displayed various floating point parameters. The system had
hardware floating point support using a co-processor. The control task
ran in a timer interrupt, while the debug task used all the left over
CPU time. Great care had to be taken to switch the co-processor state
each time the timer int routine was entered and exited again. The
debug task caused a NAN which propagated to the control task even
though it was in a separate thread. One had to re-initialise the
co-processor to get out of the NAN state.
>> Trying to do scaled integer implementations in the minimum
>> theoretical bit size needed is difficult. With gcc supporting 64 bits
>> even on something like the AVR, life is much easier. There are few
>> applications where scaled 64-bit integers do not provide enough
>> precision and range.
>>
>>>>> That's very difficult to do right. When I started working in
>>>>> the instrumentation business 15 years ago, most products were
>>>>> done in fixed point. A _lot_ of time was spent trying to get
>>>>> the scaling and representation right. And it never was quite
>>>>> right. There were always problems in the field with wrap
>>>>> around and over/underflow in cases that the developers didn't
>>>>> foresee.
>>>>
>>>> Poor system design / specification?
>>>
>>>Being able to provide a robust working solution when given a
>>>poor system design and specification can be a very good thing.
>>>:)
>>
>> Doing a robust floating point implementation that handles all
>> the exceptions properly is non-trivial. Typical 8-bit
>> implementations take short cuts.
>
>I've been using floating point on 8-bit platforms for 15 years,
>and I've got no complaints.
Support for floating point on 8-bit platforms used to run major OSes
tends to be robust and sorted out. On embedded micros shortcuts are
taken for speed and size reasons.

What was the main driving force for the development of floating point?
(This is a serious question - not a troll.) One can represent the
range and precision in fixed point, and AFAIK it is much easier to
code this in assembler than to code a robust floating point
implementation. The only reason I can think of was that memory was so
expensive that people had to do everything possible to minimize memory
usage. Today memory is orders of magnitude cheaper and one has
relatively a lot even on quite small micros.

On bigger systems, using fixed point representations with hardware
acceleration of 256-bit or 1024-bit or even bigger would be a lot less
complex and should be a lot faster than floating point hardware. A
pity that none of the main stream languages has support for fixed
point.

Regards
  Anton Erasmus
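For a taste of what language-level fixed point boils down to, here is a minimal Q16.16 multiply sketch; the format choice and names are illustrative, not from any particular library:

```c
#include <stdint.h>

/* Q16.16 fixed point: 16 integer bits, 16 fraction bits, stored in a
 * plain int32_t.  Multiply widens to 64 bits, multiplies, then shifts
 * the radix point back (arithmetic shift assumed for negatives, as on
 * gcc and most embedded toolchains). */
typedef int32_t q16_16;

static q16_16 q_from_int(int n)
{
    return (q16_16)(n * 65536L);        /* place the radix point */
}

static q16_16 q_mul(q16_16 a, q16_16 b)
{
    return (q16_16)(((int64_t)a * (int64_t)b) >> 16);
}
```

Add, subtract, and comparison on Q16.16 values are just the native integer operations, which is why fixed point is so much cheaper than emulated floating point on a micro with no FPU; the price is that the programmer, not the representation, tracks the range.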
On 2007-04-01, Anton Erasmus <nobody@spam.prevent.net> wrote:

>>> Any design worthy of the name should be able to define system
>>> limits and provide recovery for offlimit values.
>>
>>Maybe it's just the places I've worked, but in practice that
>>seems to be rather rare.
>
> Going to floating point because one does not properly define system
> limits and the required ranges is just asking for trouble. If one
> ends up with NAN or INF, this tends to propagate.
I know, and that's one of the best features of IEEE floating point. It
prevents you from having bogus (but apparently valid) outputs when one
of the inputs is missing or invalid.
> On a system with an OS, this is trapped, and one gets an
> error. On a typical embedded system one ends up with total
> garbage.
No, you end up with INF or NAN.
> Then there is the fact that with integer one can do variable+1
> for the full range and always have the correct value.
How do you represent infinity or an invalid value in an integer?
> With floating point one quickly gets to a stage where the
> variable does not have the right value at all.
No, you _do_ end up with the right value: INF or NAN. That's the whole
point. If an output is being calculated from a set of inputs where one
of the inputs is invalid (NAN), then the output is invalid (NAN).
That's very difficult to do with integer representations.
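The propagation property described here is easy to demonstrate: a NaN injected anywhere upstream survives ordinary arithmetic, so a single check on the final result catches an invalid input at any stage (host-side sketch, assuming IEEE 754 doubles):

```c
#include <math.h>

/* NaN in, NaN out: no per-input validity checks are needed, because
 * IEEE arithmetic carries the "invalid" marker through the sums and
 * the divide all by itself. */
static double average3(double a, double b, double c)
{
    return (a + b + c) / 3.0;
}
```

The integer equivalent would need an explicit sentinel value plus a test at every operation, which is exactly the bookkeeping the fixed-point camp in this thread is arguing about.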
> Doing a robust floating point implimentation that handles all
> the exceptions properly is non-trivial. Typical 8-bit
> implimentations takes short cuts.
I've been using floating point on 8-bit platforms for 15 years,
and I've got no complaints.

-- 
Grant Edwards  grante at visi.com
Yow! I smell like a wet reducing clinic on Columbus Day!