Reply by Simon Clubley April 4, 2007
In article <pub5135u3ht8o5i56brfu83rq5vu5n5phb@4ax.com>, Anton Erasmus <nobody@spam.prevent.net> writes:
>
> As far as I am aware there is no main stream language, available for
> everything from small 8-bit micros up to 64 bit processors, that has
> support for fixed point. This is true whether one considers Ada a main
> stream language or not. C, and maybe Forth, are probably the only
> languages which are supported across the full range of micros.
>
Ada is now available on the AVR. What I don't know, because I don't use
the AVR, is whether Ada's fixed point packages are also available on the
AVR. See http://avr-ada.sourceforge.net for further details if you are
interested.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980's technology to a 21st century world
Reply by Anton Erasmus April 3, 2007
On Mon, 02 Apr 2007 13:29:31 +0300, Niklas Holsti
<niklas.holsti@nospam.please> wrote:

>Anton Erasmus wrote:
>> A pity that none of the main stream languages has support for fixed
>> point.
>
>Ada supports fixed point. I consider it a "main stream language", but
>some perhaps don't...
As far as I am aware there is no main stream language, available for
everything from small 8-bit micros up to 64 bit processors, that has
support for fixed point. This is true whether one considers Ada a main
stream language or not. C, and maybe Forth, are probably the only
languages which are supported across the full range of micros.

Regards
  Anton Erasmus
Reply by Paul Keinanen April 3, 2007
On Mon, 02 Apr 2007 13:29:31 +0300, Niklas Holsti
<niklas.holsti@nospam.please> wrote:

>Anton Erasmus wrote:
>> A pity that none of the main stream languages has support for fixed
>> point.
>
>Ada supports fixed point. I consider it a "main stream language", but
>some perhaps don't...
So do COBOL and PL/1, but are these main stream languages in the
embedded world ;-).

Paul
Reply by Ron Blancarte April 2, 2007
On Fri, 30 Mar 2007 09:28:38 -0600 (while OU was sucking), Not Really
Me wrote:
> >"Ron Blancarte" <ron@---TAKETHISOUT---.blancarte.com> wrote in message >news:c0ui03dctcb8t4s04fmrs65v837jimars1@4ax.com... >> Now, i have googled this some, with not much as far as results, so I >> though I would ask here to see what kind of luck I would have. >> >> I am using the Analog Devices ADuC831 (8052 ISA) part to do data >> acquisition and transmission. I have some data that is being held as >> floating point (single precision IEEE 754 format). Some of this data >> needs to be outputted to serial on request. This is where my request >> comes in. Currently I am using sprintf() to format this output into >> my buffer. HOWEVER, the footprint of this code is making it less than >> desirable to use (over 1100 bytes of code space and nearly 30 bytes of >> memory). So I am looking to do the output on my own. >> >> And to this means, I have succeeded, to an extent. So far I have >> removed my need for sprintf() in printing HEX format, as well as ints >> and chars. My problem is with floats. I have not yet attempted >> standard notation: >> 3.1415926 >> But I did write an exponent formula. >> 3.141593E+00 >> I even got the rounding. HOWEVER, this is using simple multiplies and >> divides, by 10, to achieve it's goals. And this introduces my >> problem. Since a few of the numbers go out to 6 digits of precision, >> I am having problems with these last digits outputting correctly. For >> example: >> >> Value entered: 6.791775E-08 >> Value displayed: 6.791777E-08 >> Hex value on chip: 0x3391DA2E (6.7917753e-8) >> >> Clearly, the entered value is being stored correctly and the displayed >> value, while close, is just slightly off. So is there a way to do >> this without this error induced by my multiplications by 10 (and still >> not using sprintf()? >> >> RonB >> >> >> -------------------------------------------------- >> "It is human nature to take shortcuts in thinking" >> -------------------------------------------------- > >Ron, > >I emailed you some sample code that we wrote for the same problem on another >member of the ADuC8xx family. Let me know if you did not receive the direct >email. > >Scott
I didn't get it (or my spam filters got to it first). Try sending it to
ron dot blancarte at benchtree dot net. That is my work address and
would be where I was looking for this solution.

-R
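A minimal C sketch of the kind of routine Ron is after; the function
name, the 14-byte buffer and the use of double are assumptions, not
taken from the posted code. The idea is to scale the value into [1, 10)
first, round once into a 7-digit integer, and then peel the digits off
with exact integer arithmetic, so rounding errors cannot accumulate
digit by digit. On toolchains where double is the same size as float,
the single rounding step still removes most of the per-digit drift.

/* Hedged sketch: format a float as "d.ddddddE+dd" without sprintf().
 * Assumption: the compiler provides 32-bit long arithmetic; double may
 * or may not be wider than float on an 8052 toolchain. */
static void fmt_exp(float value, char out[14])
{
    double v = value;          /* wider if the compiler has a real double */
    long   m;                  /* 7 significant digits as an integer      */
    long   div;
    int    e = 0, neg = 0;
    char  *p = out;

    if (v < 0.0) { neg = 1; v = -v; }

    if (v != 0.0) {            /* normalise into [1.0, 10.0) */
        while (v >= 10.0) { v /= 10.0; e++; }
        while (v <  1.0)  { v *= 10.0; e--; }
    }

    m = (long)(v * 1000000.0 + 0.5);        /* the single rounding step  */
    if (m >= 10000000L) { m /= 10; e++; }   /* rounding may carry over   */

    if (neg) *p++ = '-';
    *p++ = (char)('0' + m / 1000000L);      /* leading digit             */
    *p++ = '.';
    m %= 1000000L;
    for (div = 100000L; div >= 1L; div /= 10L) {   /* six fraction digits */
        *p++ = (char)('0' + m / div);
        m %= div;
    }
    *p++ = 'E';
    *p++ = (char)((e < 0) ? '-' : '+');
    if (e < 0) e = -e;
    *p++ = (char)('0' + e / 10);
    *p++ = (char)('0' + e % 10);
    *p   = '\0';
}

Called as fmt_exp(6.791775e-8f, buf), this should produce "6.791775E-08"
as long as the stored value rounds that way; the 14-byte buffer covers
sign, seven digits, the point, "E", the exponent sign, two exponent
digits and the terminator.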
Reply by Niklas Holsti April 2, 2007
Anton Erasmus wrote:
> A pity that none of the main stream languages has support for fixed point.
Ada supports fixed point. I consider it a "main stream language", but
some perhaps don't...

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
      .      @      .
Reply by Paul Keinanen April 2, 2007
On Sun, 01 Apr 2007 18:00:07 -0400, CBFalconer <cbfalconer@yahoo.com>
wrote:

>Grant Edwards wrote:
>> On 2007-04-01, Anton Erasmus <nobody@spam.prevent.net> wrote:
>
>... snip ...
>>>
>>> Doing a robust floating point implementation that handles all
>>> the exceptions properly is non-trivial. Typical 8-bit
>>> implementations take short cuts.
>>
>> I've been using floating point on 8-bit platforms for 15 years,
>> and I've got no complaints.
>
>I think Anton is including omission of detection and propagation of
>INF and NAN as "short cuts". The other approach is to trap those
>operations at occurrence.
In-band signalling of special conditions is useful when there are only a
limited number of special cases. In more complex situations, where the
signal is processed at various stages, getting any meaningful information
through the whole chain is quite hard.

With out-of-band signalling, a status field is attached to the actual
value. This status field can express many more situations, such as
sensor open/short circuit, out of range etc. For instance, in the
Profibus PA fieldbus, each signal is represented by 5 bytes: four bytes
for the actual floating point value and one byte for the status
("quality") of the signal.

So if you are planning to use a floating point value with the sole
purpose of using in-band signalling with NAN, +INF and -INF, it might
make more sense to use a 16 or 24 bit integer value field and an 8 bit
status field, which has the same or a smaller memory footprint than the
32 bit float. On an 8 bitter, checking the status byte takes fewer
memory accesses than trying to determine whether the float is a very
large value, INF or NAN, which requires testing up to 4 bytes. This is
significant, since the exception conditions should be checked every time
before the value is used.

Paul
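A minimal C sketch of the value-plus-status idea Paul describes; the
status codes, field names and scaling below are illustrative, not the
Profibus PA definitions.

#include <stdint.h>

enum signal_status {            /* out-of-band status, one byte */
    SIG_GOOD = 0,
    SIG_SENSOR_OPEN,
    SIG_SENSOR_SHORT,
    SIG_OVER_RANGE,
    SIG_UNDER_RANGE,
    SIG_NOT_UPDATED
};

typedef struct {
    int16_t value;              /* e.g. temperature in 0.01 degC steps  */
    uint8_t status;             /* one of enum signal_status            */
} signal_t;                     /* 3 bytes, versus 4 for a bare float   */

/* Consumers check a single byte before trusting the value. */
static int16_t signal_or_default(const signal_t *s, int16_t fallback)
{
    return (s->status == SIG_GOOD) ? s->value : fallback;
}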
Reply by Grant Edwards April 1, 2007
On 2007-04-01, CBFalconer <cbfalconer@yahoo.com> wrote:
> ... snip ...
>>>
>>> Doing a robust floating point implementation that handles all
>>> the exceptions properly is non-trivial. Typical 8-bit
>>> implementations take short cuts.
>>
>> I've been using floating point on 8-bit platforms for 15 years,
>> and I've got no complaints.
>
> I think Anton is including omission of detection and propagation of
> INF and NAN as "short cuts". The other approach is to trap those
> operations at occurrence.
All of the implementations I've used handled INF and NAN properly. That
includes processors as small as a 6811 back in 1989 and about 8 others
since then. Perhaps others are less careful when choosing their tools?

--
Grant Edwards                   grante             Yow! One FISHWICH coming
                                  at               up!!
                               visi.com
Reply by CBFalconer April 1, 2007
Grant Edwards wrote:
> On 2007-04-01, Anton Erasmus <nobody@spam.prevent.net> wrote:
>
... snip ...
>>
>> Doing a robust floating point implementation that handles all
>> the exceptions properly is non-trivial. Typical 8-bit
>> implementations take short cuts.
>
> I've been using floating point on 8-bit platforms for 15 years,
> and I've got no complaints.
I think Anton is including omission of detection and propagation of
INF and NAN as "short cuts". The other approach is to trap those
operations at occurrence.

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>

--
Posted via a free Usenet account from http://www.teranews.com
Reply by Anton Erasmus April 1, 2007
On Sun, 01 Apr 2007 15:34:23 -0000, Grant Edwards <grante@visi.com>
wrote:

>On 2007-04-01, Anton Erasmus <nobody@spam.prevent.net> wrote:
>
>>>> Any design worthy of the name should be able to define system
>>>> limits and provide recovery for offlimit values.
>>>
>>>Maybe it's just the places I've worked, but in practice that
>>>seems to be rather rare.
>>
>> Going to floating point because one does not properly define system
>> limits and the required ranges is just asking for trouble. If one ends
>> up with NAN or INF, this tends to propagate.
>
>I know, and that's one of the best features of IEEE floating
>point. It prevents you from having bogus (but apparently
>valid) outputs when one of the inputs is missing or invalid.
>
>> On a system with an OS, this is trapped, and one gets an
>> error. On a typical embedded system one ends up with total
>> garbage.
>
>No, you end up with INF or NAN.
Yes, but if this is not checked, and the INF or NAN is cast to an
integer to drive a DAC for example, then one is driving the DAC with
garbage. Typical embedded floating point implementations will happily
give a result if one casts NAN or INF to an integer. AFAIK there is no
guarantee that INF cast to an integer will be the maximum integer value.

Even if one uses an OS which actually provides floating point exception
handling, it is quite difficult to stay in control if one suddenly gets
a NAN or INF because one did a number/(big_number - (big_number - delta))
calculation and ended up with number/0, which in turn caused the value
to become NAN. To avoid this one has to scale the whole algorithm in any
case, so one can just as well use fixed point maths. On some systems one
has to reset the floating point co-processor and basically restart the
system. For some control systems this is a VERY bad idea.
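A minimal C sketch of the guard Anton is arguing for, so a NAN or INF
never reaches the DAC. The 12-bit DAC_MAX, the hold-last-good-value
policy and isfinite() (a C99 macro, not available on every 8-bit
toolchain) are assumptions for the example, not from the thread.

#include <math.h>       /* isfinite(); C99 */
#include <stdint.h>

#define DAC_MAX 4095u   /* 12-bit DAC assumed for the example */

static uint16_t last_good = DAC_MAX / 2u;

static uint16_t float_to_dac(float demand)
{
    if (!isfinite(demand))              /* NAN or +/-INF: hold last good  */
        return last_good;
    if (demand <= 0.0f)                 /* clamp instead of casting blind */
        demand = 0.0f;
    else if (demand >= (float)DAC_MAX)
        demand = (float)DAC_MAX;
    last_good = (uint16_t)demand;
    return last_good;
}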
>> Then there is the fact that with integer one can do variable+1
>> for the full range and always have the correct value.
>
>How do you represent infinity or an invalid value in an integer?
Make sure that one has enough bits to represent the range, and then
saturate at the maximum or minimum. For many control systems this is
good enough. If one gets a huge error, commanding the maximum corrective
response is all one can do in any case. The only other invalid case is
then normally divide by 0, and this can be tested for and handled, often
again by commanding the maximum corrective response.
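A minimal C sketch of the saturating arithmetic Anton describes; the
16-bit values widened to 32 bits are an assumption, and any width works
the same way.

#include <stdint.h>

static int16_t sat_add16(int16_t a, int16_t b)
{
    int32_t sum = (int32_t)a + (int32_t)b;  /* widen, then clamp */
    if (sum > INT16_MAX) return INT16_MAX;
    if (sum < INT16_MIN) return INT16_MIN;
    return (int16_t)sum;
}

/* Division by zero is tested for explicitly and answered with the
 * maximum corrective response, as described above. */
static int16_t sat_div16(int16_t num, int16_t den)
{
    int32_t q;
    if (den == 0)
        return (num >= 0) ? INT16_MAX : INT16_MIN;
    q = (int32_t)num / den;      /* INT16_MIN / -1 would overflow 16 bits */
    if (q > INT16_MAX) return INT16_MAX;
    return (int16_t)q;
}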
>
>> With floating point one quickly gets to a stage where the
>> variable does not have the right value at all.
>
>No, you _do_ end up with the right value: INF or NAN. That's
>the whole point. If an output is being calculated from a set
>of inputs where one of the inputs is invalid (NAN), then the
>output is invalid (NAN). That's very difficult to do with
>integer representations.
In a typical control system handling INF is problematical; handling NAN
is a nightmare. I have worked on a system where floating point was used
in the control algorithms as well as in a background debugging task that
displayed various floating point parameters. The system had hardware
floating point support using a co-processor. The control task ran in a
timer interrupt, while the debug task used all the left-over CPU time.
Great care had to be taken to switch the co-processor state each time
the timer interrupt routine was entered and exited. The debug task
caused a NAN which propagated to the control task even though it was in
a separate thread. One had to re-initialise the co-processor to get out
of the NAN state.
>> Trying to do scaled integer implementations in the minimum theoretical
>> bit size needed is difficult. With gcc supporting 64 bits even on
>> something like the AVR, life is much easier. There are few applications
>> where scaled 64 bit integers do not provide enough precision and range.
>>
>>>>> That's very difficult to do right. When I started working in
>>>>> the instrumentation business 15 years ago, most products were
>>>>> done in fixed point. A _lot_ of time was spent trying to get
>>>>> the scaling and representation right. And it never was quite
>>>>> right. There were always problems in the field with wrap
>>>>> around and over/underflow in cases that the developers didn't
>>>>> foresee.
>>>>
>>>> Poor system design / specification ?.
>>>
>>>Being able to provide a robust working solution when given a
>>>poor system design and specification can be a very good thing.
>>>:)
>>
>> Doing a robust floating point implementation that handles all
>> the exceptions properly is non-trivial. Typical 8-bit
>> implementations take short cuts.
>
>I've been using floating point on 8-bit platforms for 15 years,
>and I've got no complaints.
Support for floating point on 8-bit platforms that used to run major
OSes tends to be robust and sorted out. On embedded micros shortcuts are
taken for speed and size reasons.

What was the main driving force for the development of floating point?
(This is a serious question - not a troll.) One can represent the same
range and precision in fixed point, and AFAIK it is much easier to code
this in assembler than to code a robust floating point implementation.
The only reason I can think of was that memory was so expensive that
people had to do everything possible to minimize memory usage. Today
memory is orders of magnitude cheaper, and one has relatively a lot even
on quite small micros. On bigger systems, fixed point representations
with hardware acceleration of 256-bit or 1024-bit or even bigger widths
would be a lot less complex and should be a lot faster than floating
point hardware.

A pity that none of the main stream languages has support for fixed
point.

Regards
  Anton Erasmus
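A minimal C sketch of the scaled-integer (fixed point) arithmetic being
discussed, in Q16.16 format; the format choice and helper names are
illustrative, and the 64-bit intermediates lean on exactly the gcc
support mentioned above.

#include <stdint.h>

typedef int32_t q16_16;             /* 16 integer bits, 16 fraction bits */

#define Q_ONE   ((q16_16)1 << 16)   /* 1.0 in Q16.16 */

static q16_16 q_from_int(int16_t x)
{
    return (q16_16)x << 16;
}

static q16_16 q_mul(q16_16 a, q16_16 b)
{
    /* widen to 64 bits so the product cannot overflow before the shift;
       assumes arithmetic right shift of negative values */
    return (q16_16)(((int64_t)a * b) >> 16);
}

static q16_16 q_div(q16_16 a, q16_16 b)
{
    /* caller must guarantee b != 0, e.g. with an explicit test as above */
    return (q16_16)(((int64_t)a * 65536) / b);
}

/* Example: 1.5 * 2.25 == 3.375, which is exactly representable:
   q_mul(Q_ONE + Q_ONE / 2, 2 * Q_ONE + Q_ONE / 4) == 3 * Q_ONE + (3 * Q_ONE) / 8 */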
Reply by Grant Edwards April 1, 2007
On 2007-04-01, Anton Erasmus <nobody@spam.prevent.net> wrote:

>>> Any design worthy of the name should be able to define system
>>> limits and provide recovery for offlimit values.
>>
>>Maybe it's just the places I've worked, but in practice that
>>seems to be rather rare.
>
> Going to floating point because one does not properly define system
> limits and the required ranges is just asking for trouble. If one ends
> up with NAN or INF, this tends to propagate.
I know, and that's one of the best features of IEEE floating
point. It prevents you from having bogus (but apparently
valid) outputs when one of the inputs is missing or invalid.
> On a system with an OS, this is trapped, and one gets an
> error. On a typical embedded system one ends up with total
> garbage.
No, you end up with INF or NAN.
> Then there is the fact that with integer one can do variable+1
> for the full range and always have the correct value.
How do you represent infinity or an invalid value in an integer?
> With floating point one quickly gets to a stage where the
> variable does not have the right value at all.
No, you _do_ end up with the right value: INF or NAN. That's
the whole point. If an output is being calculated from a set
of inputs where one of the inputs is invalid (NAN), then the
output is invalid (NAN). That's very difficult to do with
integer representations.
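A minimal C sketch of the propagation Grant describes, with made-up
sensor names and an arbitrary scale factor: one NAN input poisons the
derived output, so a single isnan() check at the point of use catches
any invalid input upstream (NAN, isnan() and sqrtf() are C99 features).

#include <math.h>
#include <stdio.h>

static float flow_estimate(float dp, float density)
{
    /* NAN in either input propagates through the divide, sqrtf and multiply */
    return 42.0f * sqrtf(dp / density);
}

int main(void)
{
    float bad_dp = NAN;                  /* e.g. a failed pressure sensor */
    float q = flow_estimate(bad_dp, 1.2f);

    if (isnan(q))                        /* one check covers the whole chain */
        puts("flow invalid - an upstream input was NAN");
    else
        printf("flow = %f\n", q);
    return 0;
}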
> Trying to do scaled integer implementations in the minimum theoretical
> bit size needed is difficult. With gcc supporting 64 bits even on
> something like the AVR, life is much easier. There are few applications
> where scaled 64 bit integers do not provide enough precision and range.
>
>>>> That's very difficult to do right. When I started working in
>>>> the instrumentation business 15 years ago, most products were
>>>> done in fixed point. A _lot_ of time was spent trying to get
>>>> the scaling and representation right. And it never was quite
>>>> right. There were always problems in the field with wrap
>>>> around and over/underflow in cases that the developers didn't
>>>> foresee.
>>>
>>> Poor system design / specification ?.
>>
>>Being able to provide a robust working solution when given a
>>poor system design and specification can be a very good thing.
>>:)
>
> Doing a robust floating point implementation that handles all
> the exceptions properly is non-trivial. Typical 8-bit
> implementations take short cuts.
I've been using floating point on 8-bit platforms for 15 years,
and I've got no complaints.

--
Grant Edwards                   grante             Yow! I smell like a wet
                                  at               reducing clinic on Columbus
                               visi.com            Day!