
two fpu compared

Started by alb February 12, 2015
On 17/02/15 11:35, alb wrote:
> Hi Paul,
>
> Colin Paul de Gloucester <not_a_real_email@address.com> wrote:
> []
>> |------------------------------------------------------------------------|
>> |"Not that it will help me solve my problems with the customer but |
>> |it's a good start ;-) |
>> | |
>> |Al" |
>> |------------------------------------------------------------------------|
>>
>> You can't have everything :)
>
> <OT>
> I did not know alpine could quote in such a 'fancy' way! Do you use
> anything on top of it? What about double quoting?
> </OT>
>
> Al
>
Colin's style of quotation is ugly and non-standard, messes up threading and branches, and is completely useless when there is more than one level of quotation. He has been asked many times, by many people, to stop using it - and has always disregarded them. So please do not attempt to copy it.
On Tue, 17 Feb 2015 12:18:59 +0100, David Brown
<david.brown@hesbynett.no> wrote:

>On 17/02/15 11:35, alb wrote:
>> Hi Paul,
>>
>> Colin Paul de Gloucester <not_a_real_email@address.com> wrote:
>> []
>>> |------------------------------------------------------------------------|
>>> |"Not that it will help me solve my problems with the customer but |
>>> |it's a good start ;-) |
>>> | |
>>> |Al" |
>>> |------------------------------------------------------------------------|
>>>
>>> You can't have everything :)
>>
>> <OT>
>> I did not know alpine could quote in such a 'fancy' way! Do you use
>> anything on top of it? What about double quoting?
>> </OT>
>>
>> Al
>>
>
>Colin's style of quotation is ugly and non-standard, messes up threading
>and branches, and is completely useless when there is more than one
>level of quotation. He has been asked many times, by many people, to
>stop using it - and has always disregarded them. So please do not
>attempt to copy it.
It looks like HTML quoting rendered into plain text. Could be the editor and he doesn't know how to turn it off.

George
On 17/02/15 15:26, George Neuner wrote:
> On Tue, 17 Feb 2015 12:18:59 +0100, David Brown
> <david.brown@hesbynett.no> wrote:
>
>> On 17/02/15 11:35, alb wrote:
>>> Hi Paul,
>>>
>>> Colin Paul de Gloucester <not_a_real_email@address.com> wrote:
>>> []
>>>> |------------------------------------------------------------------------|
>>>> |"Not that it will help me solve my problems with the customer but |
>>>> |it's a good start ;-) |
>>>> | |
>>>> |Al" |
>>>> |------------------------------------------------------------------------|
>>>>
>>>> You can't have everything :)
>>>
>>> <OT>
>>> I did not know alpine could quote in such a 'fancy' way! Do you use
>>> anything on top of it? What about double quoting?
>>> </OT>
>>>
>>> Al
>>>
>>
>> Colin's style of quotation is ugly and non-standard, messes up threading
>> and branches, and is completely useless when there is more than one
>> level of quotation. He has been asked many times, by many people, to
>> stop using it - and has always disregarded them. So please do not
>> attempt to copy it.
>
> It looks like HTML quoting rendered into plain text. Could be the
> editor and he doesn't know how to turn it off.
>
He's using a text-based email client/newsreader (alpine, a descendant of the venerable pine), so it is unlikely that HTML is involved directly - although it could be a poor attempt at imitating HTML quotations.

Either he thinks it is "cool" to be different from everyone else, or someone else has configured his newsreader and he doesn't know how to fix it.
On 15.2.2015 г. 16:33, glen herrmannsfeldt wrote:
> Dimiter_Popoff <dp@tgi-sci.com> wrote:
>
> (snip on validating floating point calculations, where someone wrote)
>
>>>>>> assuming two differently implemented FPU, both validated against the
>>>>>> IEEE754, and limiting operations to the golden 5 (+,-, *, /, sqr), can I
>>>>>> be sure that provided the inputs operands are the same the result would
>>>>>> be the same bit wise?
>
> (snip, then I wrote)
>
>>> Are you sure the calculation should be done in floating point?
>>> Without more details, I suspect it could be done in fixed point,
>>> the results would not vary depending on implementations, and
>>> everyone would be happy.
>
>> Sometimes the choice to do something using an FPU is not because things
>> cannot be done using the integer unit but just because one needs the
>> horsepowers the FPU adds to the chip. Was the case with me on my
>> netMCA-3 design - the FPU did make things a bit more complicated but
>> using it was the only way to do what I did. Apart from the sheer
>> ops per clock cycle it (being 64 bit of course, 32 bit FPU-s are
>> pretty much useless) gave me the dynamic range I needed (32 bits
>> integers would not have sufficed and multiprecision would have
>> slowed things down beyond any practical usability).
>
> Many science and engineering problems have a large dynamic
> range, and require results with a relative error (uncertainty).
>
> Lengths can be from nanometers to gigameter, times from picoseconds
> to gigayears, and masses from eV to the mass of large stars.
> One can measure the atomic spacing in a crystal lattice or the
> distance between planets with a relative uncertainty of about
> one part in a million. Floating point is great for this.
Yes, this is the primary purpose of FP. I just gave an example of how it can be practical to use an FPU for some other reason than the primary one: one can make use of the horsepower it provides in parallel to the integer unit even though the calculations could be done quite well in integers. (The first time I implemented that same conversion algorithm of mine was on a TI 5420 DSP, using integers.)
>
> There are a fair number of problems in mathematics where one
> wants a modulo (remainder) operation with this property.
I did not realize you were talking integer divide, I thought you were referring to FP division, hence me being puzzled. FP (IEEE FP at least) maintains the mantissa as an absolute value and the sign is not a part of it, so there is no remainder etc. :-).

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
On 17.2.2015 г. 17:47, David Brown wrote:
> On 17/02/15 15:26, George Neuner wrote:
>> On Tue, 17 Feb 2015 12:18:59 +0100, David Brown
>> <david.brown@hesbynett.no> wrote:
>>
>>> On 17/02/15 11:35, alb wrote:
>>>> Hi Paul,
>>>>
>>>> Colin Paul de Gloucester <not_a_real_email@address.com> wrote:
>>>> []
>>>>> |------------------------------------------------------------------------|
>>>>> |"Not that it will help me solve my problems with the customer but |
>>>>> |it's a good start ;-) |
>>>>> | |
>>>>> |Al" |
>>>>> |------------------------------------------------------------------------|
>>>>>
>>>>> You can't have everything :)
>>>>
>>>> <OT>
>>>> I did not know alpine could quote in such a 'fancy' way! Do you use
>>>> anything on top of it? What about double quoting?
>>>> </OT>
>>>>
>>>> Al
>>>>
>>>
>>> Colin's style of quotation is ugly and non-standard, messes up threading
>>> and branches, and is completely useless when there is more than one
>>> level of quotation. He has been asked many times, by many people, to
>>> stop using it - and has always disregarded them. So please do not
>>> attempt to copy it.
>>
>> It looks like HTML quoting rendered into plain text. Could be the
>> editor and he doesn't know how to turn it off.
>>
>
> He's using a text-based email client/newsreader (alpine, a descendant of
> the venerable pine), so it is unlikely that HTML is involved directly -
> although it could be a poor attempt at imitating HTML quotations.
>
> Either he thinks it is "cool" to be different from everyone else, or
> someone else has configured his newsreader and he doesn't know how to
> fix it.
>
Oh, everybody, please do not start such nonsense, I thought we were past that.

Dimiter
Dimiter_Popoff <dp@tgi-sci.com> wrote:

(snip, I wrote)

>> Many science and engineering problems have a large dynamic
>> range, and require results with a relative error (uncertainty).

>> Lengths can be from nanometers to gigameter, times from picoseconds
>> to gigayears, and masses from eV to the mass of large stars.
>> One can measure the atomic spacing in a crystal lattice or the
>> distance between planets with a relative uncertainty of about
>> one part in a million. Floating point is great for this.

> Yes, this is the primary purpose of FP. I just gave an example how
> it can be practical to use an FPU for some other reason, not the
> primary one (i.e. one can make use of the horsepowers it provides
> in parallel to the integer unit even though the calculations can
> be done quite well in integers (the first time I implemented that
> same conversion algorithm of mine was on a TI 5420 DSP,
> using integers).
Yes, but you want to be sure that it gives the right answer.
>> There are a fair number of problems in mathematics where one
>> wants a modulo (remainder) operation with this property.

> I did not realize you were talking integer divide, I thought you
> were referring to FP division hence me being puzzled. FP (IEEE FP
> at least) maintains the mantissa as an absolute value and the sign
> is not a part of it, then there is no remainder etc :-).
If a problem is defined in fixed point, and you want to do it in floating point instead, you want to be sure it will give the right result. Floating point rounding can give a different result than integer divide would, after converting the result to integer.

The discussion on remainder has to do with the way integer divide works.

It should always be true that (a/b)*b+a%b==a (C notation). That is, divide and modulo (remainder) have to be consistent, though in the case of negative operands there are two possible choices.

But even with both positive, floating point can round in an unexpected way.

Assuming you avoid the problems of negative division, fixed point should always give the same result, but floating point might not.

-- glen
On 18.2.2015 &#1075;. 03:55, glen herrmannsfeldt wrote:
> Dimiter_Popoff <dp@tgi-sci.com> wrote:
>
> (snip, I wrote)
>
>>> Many science and engineering problems have a large dynamic
>>> range, and require results with a relative error (uncertainty).
>
>>> Lengths can be from nanometers to gigameter, times from picoseconds
>>> to gigayears, and masses from eV to the mass of large stars.
>>> One can measure the atomic spacing in a crystal lattice or the
>>> distance between planets with a relative uncertainty of about
>>> one part in a million. Floating point is great for this.
>
>> Yes, this is the primary purpose of FP. I just gave an example how
>> it can be practical to use an FPU for some other reason, not the
>> primary one (i.e. one can make use of the horsepowers it provides
>> in parallel to the integer unit even though the calculations can
>> be done quite well in integers (the first time I implemented that
>> same conversion algorithm of mine was on a TI 5420 DSP,
>> using integers).
>
> Yes, but you want to be sure that it gives the right answer.
Well of course. But then arithmetic is not such a high science, you know. All you have to do is do the respective homework :-).
>
>>> There are a fair number of problems in mathematics where one
>>> wants a modulo (remainder) operations with this property.
>
>> I did not realize you were talking integer divide, I thought you
>> were referring to FP division hence me being puzzled. FP (IEEE FP
>> at least) maintains the mantissa as an absolute value and the sign
>> is not a part of it, then there is no remainder etc :-).
>
> If a problem is defined in fixed point, and you want to do it
> in floating point instead, you want to be sure it will give the
> right result. Floating point rounding can give a different result
> than integer divide would, after converting the result to integer.
Hmmmm, not really. As long as there is no precision lost during normalization/denormalization the result should be the same, rounded to the nearest (or whatever the rounding mode has been set to, if settable). Integer divide would round to the smaller value if it returns quotient and remainder, it is up to the programmer to resolve the remainder and round to the nearest (I usually do it this way unless there are other considerations).
> The discussion on remainder has to do with the way integer divide
> works.
>
> It should always be true that (a/b)*b+a%b==a (C notation).
> That is, divide and modulo (remainder) have to be consistent,
> though in the case of negative operands there are two possible
> choices.
Yes, I have lost count of the divides I have written for various processors over the years; there is nothing special about it.
> But even with both positive, floating point can round in an
> unexpected way.
Ouch, no. Different, yes; unexpected, no - one has to know how the FPU behaves, otherwise it is just broken.
>
> Assuming you avoid the problems of negative division, fixed point
> should always give the same result, but floating point might not.
Hmmm, I can't think of a way the FPU would yield two different results for the same calculation without its controls being altered. Especially if the operation is doable using integer division, this would mean no mantissa bits will be lost during normalization/denormalization (assuming 32 bit integer divide; obviously one can write integer divide for a longer word than the typical 53 bits of FP).

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
Dimiter_Popoff <dp@tgi-sci.com> wrote:

(snip regarding integer divide)

>> Yes, but you want to be sure that it gives the right answer.
> Well of course. But then arithmetic is not such a high science,
> you know. All you have to do is do the respective homework :-).
(snip)
>> If a problem is defined in fixed point, and you want to do it
>> in floating point instead, you want to be sure it will give the
>> right result. Floating point rounding can give a different result
>> than integer divide would, after converting the result to integer.

> Hmmmm, not really. As long as there is no precision lost during
> normalization/denormalization the result should be the same, rounded
> to the nearest (or whatever the rounding mode has been set to, if
> settable). Integer divide would round to the smaller value if
> it returns quotient and remainder, it is up to the programmer to
> resolve the remainder and round to the nearest (I usually do it
> this way unless there are other considerations).
I haven't thought of the fine details recently, but I believe that if you have round-to-nearest floating point, it is not so easy to get the appropriate truncated integer quotient. From the double rounding rule, I suspect that it can be done if the floating point quotient is more than twice the needed length. Of course, for efficient use of hardware you don't want it that long.

(snip)
> Yes, I have lost the count of divides I have written for various
> processors over the years, there is nothing special about it.

>> But even with both positive, floating point can round in an
>> unexpected way.

> Ouch, no. Different yes, unexpected - no, one has to know how
> the FPU behaves, otherwise it is just broken.
Given a 53 bit quotient of two integers, can you find the correct 32 bit integer quotient?
>> Assuming you avoid the problems of negative division, fixed point
>> should always give the same result, but floating point might not.

> Hmmm, I can't think of a way the FPU would yield two different
> results for the same calculation without its controls being altered.
> Especially if the operation is doable using integer division,
> this would mean no mantissa bits will be lost during
> normalization/denormalization (assuming 32 bit integer divide,
> obviously one can write integer divide for a longer word than
> the typical 53 bits on FP).
The favorite for many years was the x87 temporary real. You couldn't (from a high-level language) be sure that your values were consistently kept at 53 bits or 64 bits, so you sometimes got different results. As the compiler might keep values in registers between statements, even assigning to a double wasn't always enough.

Otherwise, an older favorite was the Cray machine with non-commutative multiply: A*B-B*A might not be zero.

-- glen
Am 18.02.2015 um 17:52 schrieb glen herrmannsfeldt:

> Otherwise, an older favorite was the Cray machine with non-commutative
> multiply. A*B-B*A might not be zero.
If memory serves, Seymour Cray also gained some notoriety for building machines where A*1 might not be equal to A.
On 18.2.2015 г. 18:52, glen herrmannsfeldt wrote:
> Dimiter_Popoff <dp@tgi-sci.com> wrote:
>
> (snip regarding integer divide)
>
>>> Yes, but you want to be sure that it gives the right answer.
>
>> Well of course. But then arithmetic is not such a high science,
>> you know. All you have to do is do the respective homework :-).
>
> (snip)
>>> If a problem is defined in fixed point, and you want to do it
>>> in floating point instead, you want to be sure it will give the
>>> right result. Floating point rounding can give a different result
>>> than integer divide would, after converting the result to integer.
>
>> Hmmmm, not really. As long as there is no precision lost during
>> normalization/denormalization the result should be the same, rounded
>> to the nearest (or whatever the rounding mode has been set to, if
>> settable). Integer divide would round to the smaller value if
>> it returns quotient and remainder, it is up to the programmer to
>> resolve the remainder and round to the nearest (I usually do it
>> this way unless there are other considerations).
>
> I haven't thought of the fine details recently, but I believe that
> if you have round to nearest floating point, it is not so easy to
> get the appropriate truncated integer quotient.
Absolutely correct of course. Setting the FPU rounding mode "to zero" would solve this (if available, otherwise it would take some work). I got bitten not so long ago by a similar, simpler error I had made: instead of using "convert FP to integer and round to zero" (there is such a Power Architecture opcode) I had used just "move FP to integer". The latter rounds to nearest and I had to locate and fix it... :-).
> ...
>> Yes, I have lost the count of divides I have written for various
>> processors over the years, there is nothing special about it.
>
>>> But even with both positive, floating point can round in an
>>> unexpected way.
>
>> Ouch, no. Different yes, unexpected - no, one has to know how
>> the FPU behaves, otherwise it is just broken.
>
> Given a 53 bit quotient of two integers, can you find the correct 32
> bit integer quotient?
Hmmmm, you make me scratch my head. I think yes, using the correct rounding modes etc., but I would not claim anything without thinking about it in "doing work" mode, which I can't do at the moment (head busy doing other things).
>>> Assuming you avoid the problems of negative division, fixed point
>>> should always give the same result, but floating point might not.
>
>> Hmmm, I can't think of a way the FPU would yield two different
>> results for the same calculation without its controls being altered.
>> Especially if the operation is doable using integer division,
>> this would mean no mantissa bits will be lost during
>> normalization/denormalization (assuming 32 bit integer divide,
>> obviously one can write integer divide for a longer word than
>> the typical 53 bits on FP).
>
> The favorite for many years was the x87 temporary real. You
> couldn't (from a high-level language) be sure that your values
> were consistently kept at 53 bits or 64 bits, so sometimes got
> different results.
>
> As the compiler might keep values in registers between statements,
> even assigning to a double wasn't always enough.
Well this is a compiler issue, not an FP one, although related.
>
> Otherwise, an older favorite was the Cray machine with non-commutative
> multiply. A*B-B*A might not be zero.
Hah! Now such a bug could drive one insane if one has to discover it :D .

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/