On 2016-04-06, Tauno Voipio <tauno.voipio@notused.fi.invalid> wrote:
> On 6.4.16 19:00, Wouter van Ooijen wrote:
>
>> Reality check: what is the use of having the values in floating point
>> format, if you want to avoid using an FP library? What are you going to
>> do with those values?
>
> The OP said that the rest of the system expects the direction values
> in single-precision floating point format.

Presumably the OP is just shipping the FP values out via some
interface to another piece of the system that does have FP support of
some sort. That's pretty common for embedded firmware in "sensor" type
applications: you do all of the data acquisition and signal
conditioning using fixed point, and then convert it to IEEE FP before
shipping it out the door. On devices like that, often the only FP
operation that's available is a custom-written "convert from fixed
point to floating point with some hard-wired scaling" function.

--
Grant Edwards               grant.b.edwards at gmail.com
Yow! Now I understand advanced MICROBIOLOGY and th' new TAX REFORM laws!!
Integer/Fixedpoint to 32 bit float
Started by ●April 6, 2016
Reply by ●April 6, 2016
Reply by ●April 6, 2016
On 06.4.2016 г. 20:56, Grant Edwards wrote:
> On 2016-04-06, Robert Wessel <robertwessel2@yahoo.com> wrote:
>
>> [16-bit signed to IEEE single conversion algorithm]
>
> And then write a test program that tests and verifies the conversion
> function for all 65536 possible input values.
>
> You could try to figure out a minimal set of inputs that provides 100%
> test coverage, but
>
> a) you'll be wrong[*]
>
> b) with only a 16-bit input space, it'll be faster and easier to
>    just test all possible inputs
>
> [*] At least that's been my experience with stuff like this.

Oh come on, the only special case with a 16 bit integer -> 32 bit IEEE
FP conversion is the case of 0, which is obvious enough (or will make
itself obvious if one tries to shift left and count until finding the
1...). The scenario where you can be wrong is possible if we throw in
all the NaNs and tiny values and rounding etc., but these just do not
apply to this case; it is straightforward.

Dimiter
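Grant's exhaustive-test suggestion quoted above is cheap to sketch: with only 65536 inputs, just compare every result against the compiler's own conversion, bit for bit. This is my own sketch, not code from the thread; `convert_under_test` is a hypothetical placeholder that here simply wraps the compiler's conversion, so the harness itself is demonstrably correct.

```c
#include <stdint.h>
#include <string.h>

/* Placeholder for whatever hand-rolled routine is being verified.
   Here it wraps the compiler's own conversion and scaling, so the
   harness reports zero mismatches by construction. */
static uint32_t convert_under_test(int16_t x)
{
    float f = (float)x / 16384.0f;   /* replace with the bit-twiddled version */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    return bits;
}

/* Compare all 65536 possible inputs against the compiler's conversion;
   returns the number of mismatching bit patterns. */
static int verify_all(void)
{
    int mismatches = 0;
    for (int32_t i = -32768; i <= 32767; i++) {
        float expect = (float)i / 16384.0f;
        uint32_t want;
        memcpy(&want, &expect, sizeof want);
        /* Compare bit patterns, not floats, so -0.0 vs +0.0 and any
           NaN encodings would be caught too. */
        if (convert_under_test((int16_t)i) != want)
            mismatches++;
    }
    return mismatches;
}
```

On a desktop host this loop runs in well under a millisecond, which is why exhaustively testing beats hunting for a "minimal" input set here.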
Reply by ●April 6, 2016
On Wed, 06 Apr 2016 15:17:05 +0200, Vincent vB wrote:
> Currently I'm in the process of replacing a custom compass /
> accelerometer with an ST LSM303D. The 'old' custom device produced
> single precision floats. Without parsing, the values were just
> passed inside a UDP packet.
>
> Unfortunately the LSM303D produces 16 bit signed integers. So, the
> embedded system needs to convert these values to floats. The scaling
> itself is quite simple: values -32768..32767 need to be scaled to [-2,2).
>
> Now, my hardware has no floating point support. However doing the
> following:
>
>     float output = ( (float)input ) / 16384.0f;
>
> will require quite a bit of FP magic. I would imagine that this:
>
>     const float scale = 1.0f / 16384.0f;
>     float output = ( (float)input ) * scale;
>
> may be faster, but still requires FP multiply support.
>
> Is there a simple and fast way which I can use to convert these integers
> to floats without the aid of an FP library? I have not found much code
> in this respect.

What processor are you using that does not have easy-to-find floating
point software? I would expect that to come with the tools.

IIRC, the IEEE floating point standard has a whole bunch of stuff that
you need to do to handle exceptions, but doing your conversion should be
pretty direct. You'll need to find the leading significant digit (which
will probably be what takes the longest), shift, and then shove mantissa,
sign, and exponent into floating point format.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
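The steps Tim outlines (find the leading 1, shift, pack sign/exponent/mantissa) can be sketched with pure integer operations. This is a minimal sketch, not the thread's code: the function name is mine, and I've folded the fixed /16384 scaling into the exponent bias so no FP arithmetic is needed at all.

```c
#include <stdint.h>

/* Sketch only: build the IEEE-754 single-precision bit pattern for
   input/16384 (range [-2,2)) using only integer operations. */
uint32_t int16_to_float_bits(int16_t input)
{
    uint32_t sign = 0;
    uint32_t mag = (uint32_t)input;
    if (input < 0) {
        sign = 0x80000000u;
        mag = (uint32_t)(-(int32_t)input);     /* magnitude: 1..32768 */
    }
    if (mag == 0)
        return 0;                              /* +0.0f, the one special case */

    /* Find the position of the leading 1 (0..15); a CLZ instruction or
       __builtin_clz would replace this loop on real hardware. */
    int msb = 15;
    while (!(mag & (1u << msb)))
        msb--;

    /* Value is mag * 2^-14 == 1.xxx * 2^(msb-14), so the biased
       exponent is (msb - 14) + 127 = msb + 113. */
    uint32_t exponent = (uint32_t)(msb + 113);

    /* Drop the implicit leading 1 and left-align the rest into the
       23-bit mantissa field. No rounding is needed: at most 16
       significant bits always fit exactly. */
    uint32_t mantissa = (mag << (23 - msb)) & 0x007FFFFFu;

    return sign | (exponent << 23) | mantissa;
}
```

For example, 16384 maps to 0x3F800000 (1.0f) and -32768 to 0xC0000000 (-2.0f). Since every input has at most 16 significant bits, the conversion is exact, which is why Dimiter's "the only special case is 0" observation holds.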
Reply by ●April 6, 2016
On 07.4.2016 г. 00:05, Tim Wescott wrote:
> On Wed, 06 Apr 2016 15:17:05 +0200, Vincent vB wrote:
>
>> Currently I'm in the process of replacing a custom compass /
>> accelerometer with an ST LSM303D. The 'old' custom device produced
>> single precision floats. Without parsing, the values were just
>> passed inside a UDP packet.
>>
>> Unfortunately the LSM303D produces 16 bit signed integers. So, the
>> embedded system needs to convert these values to floats. The scaling
>> itself is quite simple: values -32768..32767 need to be scaled to [-2,2).
>>
>> Now, my hardware has no floating point support. However doing the
>> following:
>>
>>     float output = ( (float)input ) / 16384.0f;
>>
>> will require quite a bit of FP magic. I would imagine that this:
>>
>>     const float scale = 1.0f / 16384.0f;
>>     float output = ( (float)input ) * scale;
>>
>> may be faster, but still requires FP multiply support.
>>
>> Is there a simple and fast way which I can use to convert these integers
>> to floats without the aid of an FP library? I have not found much code
>> in this respect.
>
> What processor are you using that does not have easy-to-find floating
> point software? I would expect that to come with the tools.
>
> IIRC, the IEEE floating point standard has a whole bunch of stuff that
> you need to do to handle exceptions, but doing your conversion should be
> pretty direct. You'll need to find the leading significant digit (which
> will probably be what takes the longest), shift, and then shove mantissa,
> sign, and exponent into floating point format.

Hi Tim,

finding the leading 1 does not necessarily take the longest; on Power
it is 1 cycle (cntlz, count leading zeroes). The 68020 had "bitfield
find first 1" (I don't remember the exact mnemonic; I did miss that
on the CPU32, which was more or less a downgraded 020). I expect ARM
has that too in some form (?); it is key to being able to, say, find
the largest contiguous block in a bitmap-allocated space.
Dimiter

------------------------------------------------------
Dimiter Popoff, TGI             http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
Reply by ●April 6, 2016
Dimiter_Popoff wrote:
> On 07.4.2016 г. 00:05, Tim Wescott wrote:
>> On Wed, 06 Apr 2016 15:17:05 +0200, Vincent vB wrote:
>>
>>> Currently I'm in the process of replacing a custom compass /
>>> accelerometer with an ST LSM303D. The 'old' custom device produced
>>> single precision floats. Without parsing, the values were just
>>> passed inside a UDP packet.
>>>
>>> Unfortunately the LSM303D produces 16 bit signed integers. So, the
>>> embedded system needs to convert these values to floats. The scaling
>>> itself is quite simple: values -32768..32767 need to be scaled to [-2,2).
>>>
>>> Now, my hardware has no floating point support. However doing the
>>> following:
>>>
>>>     float output = ( (float)input ) / 16384.0f;
>>>
>>> will require quite a bit of FP magic. I would imagine that this:
>>>
>>>     const float scale = 1.0f / 16384.0f;
>>>     float output = ( (float)input ) * scale;
>>>
>>> may be faster, but still requires FP multiply support.
>>>
>>> Is there a simple and fast way which I can use to convert these integers
>>> to floats without the aid of an FP library? I have not found much code
>>> in this respect.
>>
>> What processor are you using that does not have easy-to-find floating
>> point software? I would expect that to come with the tools.
>>
>> IIRC, the IEEE floating point standard has a whole bunch of stuff that
>> you need to do to handle exceptions, but doing your conversion should be
>> pretty direct. You'll need to find the leading significant digit (which
>> will probably be what takes the longest), shift, and then shove mantissa,
>> sign, and exponent into floating point format.
>
> Hi Tim,
>
> finding the leading 1 does not necessarily take the longest; on Power
> it is 1 cycle (cntlz, count leading zeroes). The 68020 had "bitfield
> find first 1" (I don't remember the exact mnemonic; I did miss that
> on the CPU32, which was more or less a downgraded 020). I expect ARM
> has that too in some form (?); it is key to being able to, say, find
> the largest contiguous block in a bitmap-allocated space.
>
> Dimiter
>
> ------------------------------------------------------
> Dimiter Popoff, TGI             http://www.tgi-sci.com
> ------------------------------------------------------
> http://www.flickr.com/photos/didi_tgi/

CLZ. Available for ARM v.5 and above in full instruction mode (not
available in Thumb).

--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order. See above to fix.
Reply by ●April 6, 2016
On 07/04/16 07:30, Rob Gaddi wrote:
> CLZ. Available for ARM v.5 and above in full instruction mode (not
> available in Thumb).

gcc has "int __builtin_clz (unsigned int x)", also long and long-long
versions. These map to whatever is most efficient for your hardware.

It's a pity that there's no integer equivalent of ldexp; maybe called
ldiexp.

To the OP: If your endian-ness and compiler bit-fields work out, you can
use this (works for me on x64 with gcc) for building and breaking float
values.

    typedef union {
        float f;
        struct {
            uint32_t mantissa:23;
            uint32_t exponent:8;
            uint32_t sign:1;
        };
    } FloatU;

Note that building a floating point value like this is likely to be
slower than just saying "(float)l" - with any decent compiler. But it
will help you understand what's going on.

Clifford Heath.
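To illustrate how Clifford's union is used, here is a small hedged example: the helper names are mine, and as he warns, the bit-field layout is compiler-dependent, so this is only known to work where he tested it (gcc-style little-endian targets; the anonymous struct also needs C11 or GNU extensions).

```c
#include <stdint.h>

/* The union from the post; layout assumption: mantissa in the low
   bits, then exponent, then sign (little-endian gcc behaviour). */
typedef union {
    float f;
    struct {
        uint32_t mantissa:23;
        uint32_t exponent:8;
        uint32_t sign:1;
    };
} FloatU;

/* Build 1.5f by hand: sign 0, biased exponent 127 (i.e. 2^0),
   mantissa contributing 0.5 on top of the implicit leading 1. */
float make_one_point_five(void)
{
    FloatU u;
    u.sign = 0;
    u.exponent = 127;
    u.mantissa = 1u << 22;   /* top mantissa bit = 0.5 */
    return u.f;
}

/* Pull a float apart: e.g. -2.0f has sign 1, biased exponent 128
   (i.e. 2^1), mantissa 0. */
uint32_t exponent_of(float f)
{
    FloatU u;
    u.f = f;
    return u.exponent;
}
```

Breaking a few known constants apart like this is also a handy way to sanity-check the bit-field order on a new compiler before trusting the union in production code.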
Reply by ●April 6, 2016
On Thu, 07 Apr 2016 00:23:42 +0300, Dimiter_Popoff wrote:
> On 07.4.2016 г. 00:05, Tim Wescott wrote:
>> On Wed, 06 Apr 2016 15:17:05 +0200, Vincent vB wrote:
>>
>>> Currently I'm in the process of replacing a custom compass /
>>> accelerometer with an ST LSM303D. The 'old' custom device produced
>>> single precision floats. Without parsing, the values were just
>>> passed inside a UDP packet.
>>>
>>> Unfortunately the LSM303D produces 16 bit signed integers. So, the
>>> embedded system needs to convert these values to floats. The scaling
>>> itself is quite simple: values -32768..32767 need to be scaled to
>>> [-2,2).
>>>
>>> Now, my hardware has no floating point support. However doing the
>>> following:
>>>
>>>     float output = ( (float)input ) / 16384.0f;
>>>
>>> will require quite a bit of FP magic. I would imagine that this:
>>>
>>>     const float scale = 1.0f / 16384.0f;
>>>     float output = ( (float)input ) * scale;
>>>
>>> may be faster, but still requires FP multiply support.
>>>
>>> Is there a simple and fast way which I can use to convert these
>>> integers to floats without the aid of an FP library? I have not found
>>> much code in this respect.
>>
>> What processor are you using that does not have easy-to-find floating
>> point software? I would expect that to come with the tools.
>>
>> IIRC, the IEEE floating point standard has a whole bunch of stuff that
>> you need to do to handle exceptions, but doing your conversion should
>> be pretty direct. You'll need to find the leading significant digit
>> (which will probably be what takes the longest), shift, and then shove
>> mantissa, sign, and exponent into floating point format.
>
> Hi Tim,
>
> finding the leading 1 does not necessarily take the longest; on Power
> it is 1 cycle (cntlz, count leading zeroes). The 68020 had "bitfield
> find first 1" (I don't remember the exact mnemonic; I did miss that
> on the CPU32, which was more or less a downgraded 020).
>
> I expect ARM has that too in some form (?); it is key to being able
> to, say, find the largest contiguous block in a bitmap-allocated space.

Based solely on the fact that he doesn't have floating point support
already, I'm assuming that he's using some dire little 8-bitter. Which
could be horribly wrong, but -- no floating point support?

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
Reply by ●April 6, 2016
Reply by ●April 7, 2016
On 6-4-2016 at 20:47, Grant Edwards wrote:
> On 2016-04-06, Tauno Voipio <tauno.voipio@notused.fi.invalid> wrote:
>> On 6.4.16 19:00, Wouter van Ooijen wrote:
>
>>> Reality check: what is the use of having the values in floating point
>>> format, if you want to avoid using an FP library? What are you going to
>>> do with those values?
>>
>> The OP said that the rest of the system expects the direction values
>> in single-precision floating point format.
>
> Presumably the OP is just shipping the FP values out via some
> interface to another piece of the system that does have FP support of
> some sort. That's pretty common for embedded firmware in "sensor" type
> applications: you do all of the data acquisition and signal
> conditioning using fixed point, and then convert it to IEEE FP before
> shipping it out the door. On devices like that, often the only FP
> operation that's available is a custom-written "convert from fixed
> point to floating point with some hard-wired scaling" function.

Indeed, I'm afraid I cannot change the output format. The older modules
also need to be supported.
Reply by ●April 7, 2016
On 7-4-2016 at 2:03, lasselangwadtchristensen@gmail.com wrote:
> http://locklessinc.com/articles/i2f/
>
> tweak the exponent to get scale to +-2
>
> -Lasse

Thanks for the link! Very informative.
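The linked article's trick, adapted here to this thread's /16384 scaling (my adaptation with constants worked out by hand, not code from the article): plant the biased sample into the mantissa of a float whose exponent fixes the scale, then subtract the resulting constant offset. Only one FP subtraction remains, and it is cheap to do even in software because both operands always share the same exponent.

```c
#include <stdint.h>
#include <string.h>

/* Convert a signed 16-bit sample to a float in [-2, 2) using the
   "magic number" technique from the linked i2f article (constants
   adapted for the 1/16384 scaling, so treat them as a sketch). */
float sample_to_float(int16_t x)
{
    /* Flip the sign bit: maps -32768..32767 onto unsigned 0..65535. */
    uint32_t u = (uint16_t)x ^ 0x8000u;

    /* Biased exponent 0x88 (136 -> 2^9) makes one mantissa LSB worth
       2^(9-23) = 2^-14 = 1/16384, so this bit pattern represents
       512.0 + u/16384.0 exactly. */
    uint32_t bits = 0x44000000u | u;

    float f;
    memcpy(&f, &bits, sizeof f);   /* reinterpret bits, no conversion */

    /* Remove the 512 offset plus the 32768/16384 = 2 sign bias. Both
       operands lie in [512, 516), same exponent, so the subtraction is
       exact; the result is x/16384 with no rounding at all. */
    return f - 514.0f;
}
```

For instance, input 16384 yields 1.0f and -32768 yields -2.0f. On a soft-float target the subtraction of a same-exponent constant can be specialized into a couple of integer operations, which is the article's point.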