
Use of soft float?

Started by Philipp Klaus Krause May 20, 2018
On 22/05/18 07:36, upsidedown@downunder.com wrote:
> On Mon, 21 May 2018 12:12:26 +0200, David Brown
> <david.brown@hesbynett.no> wrote:
>
>> On 20/05/18 11:56, Philipp Klaus Krause wrote:
>>> While those who make heavy use of floating-point computations in
>>> embedded systems probably go for systems with hardware support for
>>> floating-point calculations, there are plenty of systems without
>>> such support out there.
>>> And the existence of soft float implementations indicates that there
>>> is a real need for doing floating-point computations on systems
>>> without hardware floating-point support. I'd like to know a bit more
>>> about that.
>>>
>>> * What kind of computations do people use soft float for? Any
>>>   real-world examples?
>>> * Which types of soft float do they use? IEEE 32- or 64-bit? Some
>>>   other C standard-compliant type? 16-bit IEEE? Other 16-bit?
>>
>> Any C compiler will provide software floating point for IEEE 32-bit
>> "float". Most also provide 64-bit "double" support, but for 8-bit
>> targets it is not unusual to have non-standard 32-bit "double".
>>
>> I would expect the vast majority of software floating point usage to
>> be directly from a C compiler. Some few people write their own
>> functions optimised to their particular needs (such as unusual
>> accuracy requirements), but that is rare. For the basic arithmetic,
>> you are unlikely to make significantly better general-purpose
>> floating point functions than the ones provided by the compiler, and
>> you have the huge advantage of convenience of use.
>>
>> Maths functions (like trig functions) are a different matter. In
>> embedded systems, even those with floating point hardware, it is not
>> uncommon to have your own functions with a more appropriate balance
>> between speed and accuracy than the slow but IEEE bit-perfect
>> standard functions.
>
> What is so "bit perfect" about IEEE floats?
It's not that they're perfect. It's that they're predictably imperfect, so the same program yields the same results on different implementations.
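For illustration, a minimal C sketch of that predictability (the expected bit pattern assumes strict IEEE 754 single-precision arithmetic with round-to-nearest, as on SSE2 x86 or a conforming soft-float library):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* 0.1f + 0.2f is not exactly 0.3 - but IEEE 754 defines the
       correctly rounded result of every basic operation, so it is
       the same "wrong" value on host and target alike. */
    float x = 0.1f + 0.2f;
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);
    printf("0.1f + 0.2f = %.9g (bits 0x%08X)\n", x, (unsigned)bits);
    /* expected everywhere: 0.300000012 (bits 0x3E99999A) */
    return 0;
}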
> In practice, it is not a big issue using any proprietary (non-IEEE)
> float library internally, unless you need to handle INF or other
> special values or need to communicate binary floats to/from the
> external world.
Or you want to develop & test the program on a different computer from the one it will run on. My AD9969 Arduino code runs its tests on Intel just fine; it is much easier to diagnose issues there before moving to target hardware.

Clifford Heath.
On 21/05/18 23:36, upsidedown@downunder.com wrote:
> On Mon, 21 May 2018 12:12:26 +0200, David Brown
> <david.brown@hesbynett.no> wrote:
>
> <snip: same exchange as quoted above>
>
> What is so "bit perfect" about IEEE floats? The only thing that I can
> think of is handling of infinity and denorms and other special
> values. It seems that the IEEE standard committee thought that it is
> completely OK to have INF intermediate results and still continue
> doing further calculations with such sick values.
IEEE specifies rounding, errors and limits for all sorts of operations. The idea is that if you have two IEEE-compatible implementations (hardware, software, doesn't matter) and pick the same rounding settings, you will get exactly the same (or perhaps an LSB out) results for many operations. In particular, if you have a maths library that is IEEE compliant and use a function like "sin", you will get an accuracy to within a bit or two. Following C standards, IEEE standards, and common practice, the "sin" function is typically done at 64-bit double resolution - thus your "sin" function will be bit-perfect to 52 bits. This is, of course, utterly pointless if you are using that function for driving a motor and have a 10-bit PWM resolution.
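To make that last point concrete, here is a sketch (not from the thread) of the usual alternative: a classic parabola-based sine approximation, good to roughly 11 bits - coarse compared with a 52-bit libm sin(), but more than enough for a 10-bit PWM and far cheaper as soft float:

#include <math.h>
#include <stdio.h>

#define PI 3.14159265f

/* Parabola-based approximation of sin(x) for x in [-PI, PI].
   A couple of multiply-adds per stage; max error about 1e-3. */
static float fast_sinf(float x)
{
    const float B =  4.0f / PI;
    const float C = -4.0f / (PI * PI);
    float y = B * x + C * x * fabsf(x);      /* crude parabola, ~6 bits  */
    return 0.225f * (y * fabsf(y) - y) + y;  /* one refinement, ~11 bits */
}

int main(void)
{
    /* Check against libm on a 10-bit grid (link with -lm). */
    float max_err = 0.0f;
    for (int i = 0; i < 1024; i++) {
        float x = -PI + (2.0f * PI * (float)i) / 1023.0f;
        float e = fabsf(fast_sinf(x) - sinf(x));
        if (e > max_err) max_err = e;
    }
    printf("max abs error: %g\n", max_err);  /* about 1e-3 */
    return 0;
}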
> Some "IEEE" soft libraries handle ordinary values quite well, but
> might not handle these special cases properly.
>
> In an embedded system you usually try to clean the mathematics before
> coding in order to avoid divide by zero or having extremely small
> (less than 1E-38) intermediate results.
Absolutely agreed. <snip>
> In practice, it is not a big issue using any proprietary (non-IEEE)
> float library internally, unless you need to handle INF or other
> special values or need to communicate binary floats to/from the
> external world.
Using IEEE makes it far simpler to work with tools, because they all give the same results. In particular, it means your compiler (if it is smart enough, and the right optimisations are enabled) can often do calculations at compile time, knowing the results will be the same as at run time.

Very often you only actually need /approximate/ IEEE compatibility. For gcc (and clang), the flag "-ffast-math" is very useful - it tells the compiler that you don't care about infinities, denormals, and other such awkwardness, and you are happy to assume the floating point maths is associative, commutative, etc. This can make results vary marginally depending on the optimisation details, but can give you much smaller and faster code. You still need to write your own transcendental functions if you need speed, however.
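A small sketch of the kind of licence "-ffast-math" grants (compare the generated code with and without the flag):

/* fastmath.c - compare:  gcc -O2 -S fastmath.c
               versus:   gcc -O2 -ffast-math -S fastmath.c */

float scaled(float x)
{
    /* Without -ffast-math this must stay a true division.  With it,
       the compiler may multiply by the reciprocal instead - not
       bit-identical, but cheaper, which matters when every float
       operation is a soft-float library call. */
    return x / 3.0f;
}

float sum3(float a, float b, float c)
{
    /* -ffast-math also lets the compiler treat addition as
       associative and reorder this sum to suit the target. */
    return (a + b) + c;
}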
Philipp Klaus Krause <e> wrote:

> * What kind of computations do people use soft float for? Any
>   real-world examples?
> * Which types of soft float do they use? IEEE 32- or 64-bit? Some
>   other C standard-compliant type? 16-bit IEEE? Other 16-bit?
Probably not what you're asking, but... Ada has always, AFAIK, had fixed point types as well as floating point types. Fixed point is implemented with integers.
On Wednesday 06 June 2018 11:37, Luke A. Guest wrote:

> Philipp Klaus Krause <e> wrote:
>
>> <snip: the soft-float questions quoted above>
>
> Probably not what you're asking, but... Ada has always, AFAIK, had
> fixed point types as well as floating point types. Fixed point is
> implemented with integers.
Nothing new. It has been in PL/I since the 1960s: binary fixed(p,q).

-- Reinhardt
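For C programmers, the same integers-underneath idea as Ada's fixed point types or PL/I's binary fixed(p,q) can be sketched by hand - here a hypothetical Q16.16 layout, not any particular library:

#include <stdint.h>
#include <stdio.h>

/* Q16.16: 16 integer bits, 16 fraction bits, stored in an int32_t. */
typedef int32_t q16_16;

#define Q_ONE (1 << 16)
#define Q(x)  ((q16_16)((x) * Q_ONE))   /* convert a constant */

static q16_16 q_mul(q16_16 a, q16_16 b)
{
    /* Widen so the intermediate product cannot overflow, then drop
       the extra 16 fraction bits. */
    return (q16_16)(((int64_t)a * b) >> 16);
}

int main(void)
{
    q16_16 a = Q(1.5);        /* 0x00018000 */
    q16_16 b = Q(2.25);       /* 0x00024000 */
    q16_16 p = q_mul(a, b);   /* 1.5 * 2.25 = 3.375 */
    printf("%f\n", (double)p / Q_ONE);   /* prints 3.375000 */
    return 0;
}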
