
converting float to ascii w/o printf

Started by Ron Blancarte March 27, 2007
Grant Edwards wrote:

> Maybe it's just the places I've worked, but in practice that
> seems to be rather rare.
I have worked on several projects where the "requirements spec" developed in parallel with the software, but where there are big changes it can mean rewriting major sections of code simply because the original software architecture is unable to support the new features. It's a dangerous path and always adds expense and time to market for the client. Guess who gets the blame when the project is late or the product unreliable? It's up to engineers to hold the line against such sloppy project management, even if it means walking away from the job. You always have a choice.

There are a lot of small to medium sized companies that pay lip service to good development practice and quality, but it gets lost in the political noise, especially if marketing, finance and engineering are at war with each other (pick any two, or even all three), which is not uncommon. Lack of understanding, ethics, communication, ego and mistrust all contribute to this.
> Being able to provide a robust working solution when given a
> poor system design and specification can be a very good thing. :)
Agreed, and it is to be expected, but there are limits. There are more constructive ways to be challenged by your work than being a hero or martyr :-)...

Chris
Vladimir Vassilevsky <antispam_bogus@hotmail.com> writes:
> ChrisQuayle wrote:
>> One of the main problems with the use of C library floating point is
>> that you are then more or less forced to use all the library functions
>> that support the internal format.
>
> I would be really surprised if the internal format is not IEEE-754.
Although this is true, most do not support the infinities, (signalling/non-signalling) NaNs, and denormalized values of the full standard.
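[For what it's worth, C99's classification macros make these gaps easy to probe. A minimal sketch, assuming a C99 <math.h>; what it prints depends on how complete the vendor library is, which is the point:]

#include <math.h>
#include <stdio.h>

int main(void)
{
    volatile double zero = 0.0;  /* volatile keeps this a run-time divide */
    double inf = 1.0 / zero;     /* +INF on a conforming IEEE system */
    double nan = inf - inf;      /* INF - INF yields NAN */
    double den = 4.9e-324;       /* near the smallest double denormal */

    printf("isinf(inf): %d\n", isinf(inf));
    printf("isnan(nan): %d\n", isnan(nan));
    printf("denormal kept: %d\n", fpclassify(den) == FP_SUBNORMAL);
    return 0;
}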
> The only major problem that I have encountered with the vendor floating
> point libraries is that in some cases the library is not reentrant.
> There were quite a few small bugs though.
There are buggy users also. I once had a vendor complain that tan(pi/2 - one bit) didn't agree to seven decimal places with a value obtained from another source. I had to do a lot of explaining to convince them that, at that point, even the first decimal place is questionable, never mind the seventh.
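[The complaint is easy to reproduce. A minimal sketch, assuming a C99 <math.h>; near pi/2, tan(x) behaves like 1/(pi/2 - x), so a one-ULP change in the argument moves the result by a large factor. Link with -lm:]

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = acos(-1.0) / 2.0;   /* the double nearest to pi/2 */
    double y = nextafter(x, 0.0);  /* one ULP closer to zero */

    printf("tan(x) = %.17g\n", tan(x));  /* around 1.6e16 */
    printf("tan(y) = %.17g\n", tan(y));  /* differs already in the first digit */
    return 0;
}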
On Fri, 30 Mar 2007 09:34:36 PST,
mojaveg@mojaveg.lsan.mdsg-pacwest.com (Everett M. Greene) wrote:

> Vladimir Vassilevsky <antispam_bogus@hotmail.com> writes:
>> ChrisQuayle wrote:
>>> One of the main problems with the use of C library floating point is
>>> that you are then more or less forced to use all the library functions
>>> that support the internal format.
>>
>> I would be really surprised if the internal format is not IEEE-754.
>
> Although this is true, most do not support the infinities,
> (signalling/non-signalling) NaNs, and denormalized values
> of the full standard.
Unless strict conformance to IEEE-754 is required, at least for a software floating point emulation on an 8-bitter it would make sense to use some bit layout and interpretation other than the messy IEEE format. For instance, the hidden-bit normalisation adds extra overhead, and I am not convinced that sign/magnitude representation for the mantissa (instead of 2's complement), excess notation for the exponent, or even base 2 itself are the only possible options.

It might even make sense to use a full 32-bit mantissa (depending on the availability of 8/16 bit multiply instructions) and 8-16 bits for the exponent, but this would make sizeof(float) 5 or 6, which at the very least would cause problems with unions. An example of such a software implementation was the Borland Turbo Pascal "real" data type, from the days before the 8087 floating point co-processor became widely available.

Paul
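[A sketch of the kind of layout Paul describes; the field names and widths here are invented for illustration, not Borland's actual format:]

#include <stdint.h>

/* Hypothetical software float with no hidden bit: a full 32-bit
 * magnitude mantissa kept normalized (top bit set), an excess-128
 * exponent, and a separate sign byte. */
typedef struct {
    uint32_t mant;  /* bit 31 set unless the value is zero */
    uint8_t  exp;   /* excess-128 */
    uint8_t  sign;  /* 0 positive, 1 negative */
} soft_float;

/* sizeof(soft_float) is 6 rather than 4 -- exactly the union
 * compatibility problem mentioned above. */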
Paul Keinanen <keinanen@sci.fi> writes:
> mojaveg@mojaveg.lsan.mdsg-pacwest.com (Everett M. Greene) wrote:
>> Vladimir Vassilevsky <antispam_bogus@hotmail.com> writes:
>>> ChrisQuayle wrote:
>>>> One of the main problems with the use of C library floating point is
>>>> that you are then more or less forced to use all the library functions
>>>> that support the internal format.
>>>
>>> I would be really surprised if the internal format is not IEEE-754.
>>
>> Although this is true, most do not support the infinities,
>> (signalling/non-signalling) NaNs, and denormalized values
>> of the full standard.
>
> Unless strict conformance to IEEE-754 is required, at least for a
> software floating point emulation on an 8-bitter it would make sense
> to use some bit layout and interpretation other than the messy IEEE
> format. For instance, the hidden-bit normalisation adds extra
> overhead, and I am not convinced that sign/magnitude representation
> for the mantissa (instead of 2's complement), excess notation for
> the exponent, or even base 2 itself are the only possible options.
Amazingly, sign-magnitude notation works quite well. I once questioned its use as well, but experience with interpreting float operations on small processors shows it to be a help, not a hindrance.
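[One concrete reason it helps: in multiply and divide, sign handling collapses to a single XOR and all the mantissa arithmetic stays unsigned. A hedged sketch, reusing the hypothetical soft_float layout from above; rounding and the possible one-bit renormalisation are omitted:]

#include <stdint.h>

typedef struct { uint32_t mant; uint8_t exp; uint8_t sign; } soft_float;

static soft_float sf_mul(soft_float a, soft_float b)
{
    soft_float r;
    r.sign = a.sign ^ b.sign;                 /* the whole sign story */
    r.exp  = (uint8_t)(a.exp + b.exp - 128);  /* excess-128 exponents add */
    r.mant = (uint32_t)(((uint64_t)a.mant * b.mant) >> 32);
    return r;
}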
> It might even make sense to use a full 32-bit mantissa (depending on
> the availability of 8/16 bit multiply instructions) and 8-16 bits for
> the exponent, but this would make sizeof(float) 5 or 6, which at the
> very least would cause problems with unions. An example of such a
> software implementation was the Borland Turbo Pascal "real" data type,
> from the days before the 8087 floating point co-processor became
> widely available.
I once worked with a machine that had a 32/32 hardware float format. It gave something between single- and double-precision for the mantissa and severe overkill for the exponent range.
On Thu, 29 Mar 2007 18:50:40 -0000, Grant Edwards <grante@visi.com>
wrote:

> On 2007-03-29, ChrisQuayle <nospam@devnul.co.uk> wrote:
>> Grant Edwards wrote:
>>
>>> I disagree. Doing something in fixed-point instead of floating
>>> point will greatly increase the design work required and make
>>> that work more difficult. It will cause bugs. There will be
>>> overflow and wrap-around bugs.
>>
>> Any design worthy of the name should be able to define system
>> limits and provide recovery for offlimit values.
>
> Maybe it's just the places I've worked, but in practice that
> seems to be rather rare.
Going to floating point because one does not properly define system limits and the required ranges is just asking for trouble. If one ends up with NAN or INF, this tends to propagate. On a system with an OS this is trapped and one gets an error; on a typical embedded system one ends up with total garbage. Then there is the fact that with an integer one can do variable+1 over the full range and always have the correct value. With floating point one quickly gets to a stage where the variable does not have the right value at all.

Trying to do scaled integer implementations in the theoretical minimum bit size is difficult, but gcc supporting 64-bit integers even on something like the AVR makes life much easier. There are few applications where scaled 64-bit integers do not provide enough precision and range.
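[The variable+1 point takes only a few lines to demonstrate; a minimal sketch in standard C, using 2^53, the first point at which the spacing of doubles exceeds 1:]

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    double   d = 9007199254740992.0;  /* 2^53: above this, doubles step by 2 */
    uint64_t i = 9007199254740992u;

    printf("%.0f\n", d + 1.0);  /* prints 9007199254740992 -- the +1 vanished */
    printf("%llu\n", (unsigned long long)(i + 1));  /* 9007199254740993, exact */
    return 0;
}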
>>> That's very difficult to do right. When I started working in
>>> the instrumentation business 15 years ago, most products were
>>> done in fixed point. A _lot_ of time was spent trying to get
>>> the scaling and representation right. And it never was quite
>>> right. There were always problems in the field with wrap
>>> around and over/underflow in cases that the developers didn't
>>> foresee.
>>
>> Poor system design / specification?
>
> Being able to provide a robust working solution when given a
> poor system design and specification can be a very good thing. :)
Doing a robust floating point implementation that handles all the exceptions properly is non-trivial. Typical 8-bit implementations take short cuts.

Regards
Anton Erasmus

Anton Erasmus wrote:

> Going to floating point because one does not properly define system
> limits and the required ranges is just asking for trouble. If one ends
> up with NAN or INF, this tends to propagate. On a system with an OS
> this is trapped and one gets an error; on a typical embedded system
> one ends up with total garbage. Then there is the fact that with an
> integer one can do variable+1 over the full range and always have the
> correct value. With floating point one quickly gets to a stage where
> the variable does not have the right value at all.
With integers, it is possible to verify the implementation of an algorithm against the model by bit-for-bit comparison. That is not so trivial with floating point because of implementation-dependent precision issues.
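[The idea in miniature; fixed_filter_dut and fixed_filter_model are hypothetical names for the shipped code and the reference model of the same algorithm:]

#include <assert.h>
#include <stdint.h>

extern int32_t fixed_filter_dut(int16_t sample);    /* hypothetical target */
extern int32_t fixed_filter_model(int16_t sample);  /* hypothetical model  */

/* With integer arithmetic the two must agree bit for bit on every input. */
static void verify(const int16_t *in, int n)
{
    for (int i = 0; i < n; i++)
        assert(fixed_filter_dut(in[i]) == fixed_filter_model(in[i]));
}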
> Trying to do scaled integer implementations in the theoretical minimum
> bit size is difficult, but gcc supporting 64-bit integers even on
> something like the AVR makes life much easier. There are few
> applications where scaled 64-bit integers do not provide enough
> precision and range.
I know one practical application where it may be necessary to use integers of more than 64 bits: CIC filters. Can you give another example?
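[For scale: Hogenauer's register-growth bound for a CIC decimator is input bits plus N*log2(R*M), with N stages, decimation ratio R and differential delay M. A small worked sketch with illustrative parameters:]

#include <math.h>
#include <stdio.h>

int main(void)
{
    int input_bits = 16, N = 5, R = 4096, M = 1;
    int width = input_bits + (int)ceil(N * log2((double)R * M));

    printf("accumulator width: %d bits\n", width);  /* 76 -- already past 64 */
    return 0;
}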
> Doing a robust floating point implementation that handles all the
> exceptions properly is non-trivial. Typical 8-bit implementations
> take short cuts.

This is more of an application problem than a float library problem. It is difficult to foresee and handle all of the special cases even for a moderately complicated system of equations. However, it would be impossible to approach a task of that level of complexity with integer arithmetic.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
On 2007-04-01, Anton Erasmus <nobody@spam.prevent.net> wrote:

>>> Any design worthy of the name should be able to define system
>>> limits and provide recovery for offlimit values.
>>
>> Maybe it's just the places I've worked, but in practice that
>> seems to be rather rare.
>
> Going to floating point because one does not properly define system
> limits and the required ranges is just asking for trouble. If one ends
> up with NAN or INF, this tends to propagate.
I know, and that's one of the best features of IEEE floating point. It prevents you from getting bogus (but apparently valid) outputs when one of the inputs is missing or invalid.
> On a system with an OS this is trapped and one gets an
> error; on a typical embedded system one ends up with total
> garbage.
No, you end up with INF or NAN.
> Then there is the fact that with an integer one can do variable+1
> over the full range and always have the correct value.
How do you represent infinity or an invalid value in an integer?
> With floating point one quickly gets to a stage where the
> variable does not have the right value at all.
No, you _do_ end up with the right value: INF or NAN. That's the whole point. If an output is being calculated from a set of inputs where one of the inputs is invalid (NAN), then the output is invalid (NAN). That's very difficult to do with integer representations.
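[The propagation takes one line to see; a minimal sketch assuming C99's NAN macro from <math.h>:]

#include <math.h>
#include <stdio.h>

int main(void)
{
    double sensor = NAN;                 /* e.g. a reading that never arrived */
    double output = 0.5 * sensor + 3.0;  /* NAN flows through the arithmetic */

    printf("output is NAN: %d\n", isnan(output));  /* prints 1 */
    return 0;
}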
> Trying to do scaled integer implementations in the theoretical minimum
> bit size is difficult, but gcc supporting 64-bit integers even on
> something like the AVR makes life much easier. There are few
> applications where scaled 64-bit integers do not provide enough
> precision and range.
>
>>>> That's very difficult to do right. When I started working in
>>>> the instrumentation business 15 years ago, most products were
>>>> done in fixed point. A _lot_ of time was spent trying to get
>>>> the scaling and representation right. And it never was quite
>>>> right. There were always problems in the field with wrap
>>>> around and over/underflow in cases that the developers didn't
>>>> foresee.
>>>
>>> Poor system design / specification?
>>
>> Being able to provide a robust working solution when given a
>> poor system design and specification can be a very good thing.
>> :)
>
> Doing a robust floating point implementation that handles all
> the exceptions properly is non-trivial. Typical 8-bit
> implementations take short cuts.
I've been using floating point on 8-bit platforms for 15 years, and I've got no complaints.

--
Grant Edwards <grante at visi.com>
Yow! I smell like a wet reducing clinic on Columbus Day!
On Sun, 01 Apr 2007 15:34:23 -0000, Grant Edwards <grante@visi.com>
wrote:

> On 2007-04-01, Anton Erasmus <nobody@spam.prevent.net> wrote:
>
>>>> Any design worthy of the name should be able to define system
>>>> limits and provide recovery for offlimit values.
>>>
>>> Maybe it's just the places I've worked, but in practice that
>>> seems to be rather rare.
>>
>> Going to floating point because one does not properly define system
>> limits and the required ranges is just asking for trouble. If one ends
>> up with NAN or INF, this tends to propagate.
>
> I know, and that's one of the best features of IEEE floating
> point. It prevents you from getting bogus (but apparently
> valid) outputs when one of the inputs is missing or invalid.
>
>> On a system with an OS this is trapped and one gets an
>> error; on a typical embedded system one ends up with total
>> garbage.
>
> No, you end up with INF or NAN.
Yes, but if this is not checked, and the INF or NAN is cast to an integer to drive a DAC for example, then one is driving the DAC with garbage. Typical embedded floating point implementations will happily give a result if one casts NAN or INF to an integer, and AFAIK there is no guarantee that INF cast to an integer will be the maximum integer value.

Even if one uses an OS which actually provides floating point exception handling, it is quite difficult to keep things under control if one suddenly gets a NAN or INF because one did a number/(big_number - (big_number - delta)) calculation and ended up with number/0, which in turn turned the value into INF or NAN. To avoid this one has to scale the whole algorithm in any case, so one can just as well use fixed point maths. On some systems one has to reset the floating point co-processor and basically restart the system. For some control systems this is a VERY bad idea.
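[One defensive pattern for the DAC case; a sketch in which DAC_MAX and the fail-low policy are illustrative. Casting NAN or INF to an integer is undefined behaviour in C, so the checks must come first:]

#include <math.h>
#include <stdint.h>

#define DAC_MAX 4095u  /* illustrative 12-bit full scale */

static uint16_t float_to_dac(double v)
{
    if (isnan(v))             return 0;        /* policy decision: fail low */
    if (v <= 0.0)             return 0;        /* also clamps -INF */
    if (v >= (double)DAC_MAX) return DAC_MAX;  /* also clamps +INF */
    return (uint16_t)v;
}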
>> Then there is the fact that with an integer one can do variable+1
>> over the full range and always have the correct value.
>
> How do you represent infinity or an invalid value in an integer?
Make sure that one has enough bits to represent the range, and then saturate at the maximum or minimum. For many control systems this is good enough: if one gets a huge error, commanding the maximum corrective response is all one can do in any case. The only other invalid case then is normally divide-by-zero, and this can be tested for and handled, often again by commanding the maximum corrective response.
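[Saturation of this kind is short to write; a sketch for 32-bit signed values:]

#include <stdint.h>

/* Widen to 64 bits so the true sum always fits, then clamp at the
 * rails instead of wrapping. */
static int32_t sat_add32(int32_t a, int32_t b)
{
    int64_t sum = (int64_t)a + b;
    if (sum > INT32_MAX) return INT32_MAX;
    if (sum < INT32_MIN) return INT32_MIN;
    return (int32_t)sum;
}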
>> With floating point one quickly gets to a stage where the
>> variable does not have the right value at all.
>
> No, you _do_ end up with the right value: INF or NAN. That's
> the whole point. If an output is being calculated from a set
> of inputs where one of the inputs is invalid (NAN), then the
> output is invalid (NAN). That's very difficult to do with
> integer representations.
In a typical control system, handling INF is problematic; handling NAN is a nightmare. I have worked on a system where floating point was used in the control algorithms as well as in a background debugging task that displayed various floating point parameters. The system had hardware floating point support using a co-processor. The control task ran in a timer interrupt, while the debug task used all the left-over CPU time. Great care had to be taken to save and restore the co-processor state each time the timer interrupt routine was entered and exited. The debug task caused a NAN which propagated to the control task even though it was in a separate thread. One had to re-initialise the co-processor to get out of the NAN state.
>> Trying to do scaled integer implementations in the theoretical minimum
>> bit size is difficult, but gcc supporting 64-bit integers even on
>> something like the AVR makes life much easier. There are few
>> applications where scaled 64-bit integers do not provide enough
>> precision and range.
>>
>>>>> That's very difficult to do right. When I started working in
>>>>> the instrumentation business 15 years ago, most products were
>>>>> done in fixed point. A _lot_ of time was spent trying to get
>>>>> the scaling and representation right. And it never was quite
>>>>> right. There were always problems in the field with wrap
>>>>> around and over/underflow in cases that the developers didn't
>>>>> foresee.
>>>>
>>>> Poor system design / specification?
>>>
>>> Being able to provide a robust working solution when given a
>>> poor system design and specification can be a very good thing.
>>> :)
>>
>> Doing a robust floating point implementation that handles all
>> the exceptions properly is non-trivial. Typical 8-bit
>> implementations take short cuts.
>
> I've been using floating point on 8-bit platforms for 15 years,
> and I've got no complaints.
Floating point support on 8-bit platforms that used to run major OSes tends to be robust and sorted out. On embedded micros, shortcuts are taken for speed and size reasons.

What was the main driving force for the development of floating point? (This is a serious question - not a troll.) One can represent the same range and precision in fixed point, and AFAIK it is much easier to code that in assembler than to code a robust floating point implementation. The only reason I can think of is that memory was so expensive that people had to do everything possible to minimize memory usage. Today memory is orders of magnitude cheaper, and one has relatively a lot even on quite small micros. On bigger systems, fixed point representations with hardware acceleration of 256-bit or even 1024-bit arithmetic would be a lot less complex and should be a lot faster than floating point hardware. A pity that none of the mainstream languages has support for fixed point.

Regards
Anton Erasmus
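[What built-in fixed point might look like can be approximated by hand; a hypothetical Q16.16 type in C, with 16 integer and 16 fraction bits -- the name and format are illustrative:]

#include <stdint.h>

typedef int32_t q16_16;
#define Q16_ONE ((q16_16)1 << 16)

static q16_16 q_mul(q16_16 a, q16_16 b)
{
    /* widen to 64 bits so the intermediate product cannot overflow */
    return (q16_16)(((int64_t)a * b) >> 16);
}

static q16_16 q_div(q16_16 a, q16_16 b)
{
    return (q16_16)(((int64_t)a << 16) / b);  /* caller keeps b nonzero */
}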
Grant Edwards wrote:
> On 2007-04-01, Anton Erasmus <nobody@spam.prevent.net> wrote:
... snip ...
>> Doing a robust floating point implementation that handles all
>> the exceptions properly is non-trivial. Typical 8-bit
>> implementations take short cuts.
>
> I've been using floating point on 8-bit platforms for 15 years,
> and I've got no complaints.
I think Anton is including the omission of detection and propagation of INF and NAN as "short cuts". The other approach is to trap those operations at occurrence.

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
On 2007-04-01, CBFalconer <cbfalconer@yahoo.com> wrote:
> ... snip ...
>
>>> Doing a robust floating point implementation that handles all
>>> the exceptions properly is non-trivial. Typical 8-bit
>>> implementations take short cuts.
>>
>> I've been using floating point on 8-bit platforms for 15 years,
>> and I've got no complaints.
>
> I think Anton is including the omission of detection and propagation of
> INF and NAN as "short cuts". The other approach is to trap those
> operations at occurrence.
All of the implementations I've used handled INF and NAN properly. That includes processors as small as a 6811 back in 1989 and about 8 others since then. Perhaps others are less careful when choosing their tools?

--
Grant Edwards <grante at visi.com>
Yow! One FISHWICH coming up!!