
converting float to ascii w/o printf

Started by Ron Blancarte March 27, 2007
On Wed, 28 Mar 2007 10:22:05 PST (while OU was sucking), Everett M.
Greene wrote:
>"Wilco Dijkstra" <Wilco_dot_Dijkstra@ntlworld.com> writes: >> "Thad Smith" <ThadSmith@acm.org> wrote in message >> > Wilco Dijkstra wrote: >> >> "Ron Blancarte" <ron@---TAKETHISOUT---.blancarte.com> wrote in message >> >> >> >>>Clearly, the entered value is being stored correctly and the displayed >> >>>value, while close, is just slightly off. So is there a way to do >> >>>this without this error induced by my multiplications by 10 (and still >> >>>not using sprintf()? >> >> >> >> The correct way of doing this is to only do one normalizing multiply >> >> or divide by a power of 10, so you only get one roundoff error. Powers >> >> of 10 are exact in floats up to 10^10, for a wider dynamic range you >> >> get multiple roundoff errors unless you use more precision. After >> >> normalization you use integer arithmetic to extract the digits. >> > >> > Agreed. >> > >> >> My advice would be to use integer-only arithmetic for normalization, >> >> this way you get less roundoff error over much larger ranges. >> > >> > Hmmm, I don't understand the recommendation here. I would say multiply or divide by the >> > proper (exact) power of 10 to get a number in the range -9999999..9999999, add a >> > rounding factor (-0.5 or +0.5), convert to a 32-bit integer, convert that to a 7-digit >> > character string, then format. >> >> What I mean is that if you do the normalization multiply/divide using >> integer arithmetic, you get more precision, eg. 32 bits rathern than 24 >> when using float. This gives you larger powers of 10 that can be >> represented exactly and a more accurate result after normalization >> (and more control over rounding). Integer arithmetic makes even more >> sense if you use emulated floating point. > >And don't forget to look at the accuracy of the data >before getting wound around the axle of precision. >If the source of the data is an 8-bit ADC, for >instance, no amount of fancy footwork is going to >get you three decimal place accuracy. If the >problem is ill-conditioned, not much of anything >will help.
What I am doing here is a tool for measurements during drilling applications. Data sources are a set of three accelerometers and three magnetometers. Conversion is handled via a 12-bit ADC that is temperature corrected during acquisition. Generally one acquisition cycle is about 100 samples per second for around 5-10 seconds.

As far as my problem goes, I ended up giving Thad's solution a shot. It worked great, better than the multiply/divide-by-10 solution. And while mildly slower than sprintf(), it is more than workable for this application.

RonB
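For reference, a minimal sketch of the single-normalization approach Thad describes above: scale by an exact power of 10 into the range [1000000, 10000000), round, convert to a 32-bit integer, and peel off digits with integer arithmetic. This is illustrative code written for this writeup, not the code actually used in the thread; the name float_to_exp is made up, there is no NaN/Inf handling, and values whose decimal exponent is more than about 10 away from zero still need more than one scaling operation (and so pick up more than one rounding), as Wilco noted.

    #include <stdint.h>

    /* Format val as "d.ddddddE+xx" into buf (at least 14 chars). */
    void float_to_exp(float val, char *buf)
    {
        static const float p10[] = {      /* exact in IEEE-754 single */
            1e0f, 1e1f, 1e2f, 1e3f, 1e4f, 1e5f,
            1e6f, 1e7f, 1e8f, 1e9f, 1e10f
        };
        int32_t mant;
        int exp10 = 6;                    /* printed exponent if no scaling */
        int i;

        if (val < 0.0f) { *buf++ = '-'; val = -val; }

        if (val != 0.0f) {
            while (val >= 1e7f) {         /* too big: divide by one exact power */
                for (i = 10; val / p10[i] < 1e6f; i--)
                    ;
                val /= p10[i];
                exp10 += i;
            }
            while (val < 1e6f) {          /* too small: multiply by one exact power */
                for (i = 10; val * p10[i] >= 1e7f; i--)
                    ;
                val *= p10[i];
                exp10 -= i;
            }
        } else {
            exp10 = 0;
        }

        mant = (int32_t)(val + 0.5f);     /* round to a 7-digit integer */
        if (mant >= 10000000L) {          /* rounding may carry, e.g. 9999999.6 */
            mant /= 10;
            exp10++;
        }

        {                                 /* emit "d.dddddd" */
            char tmp[7];
            for (i = 6; i >= 0; i--) { tmp[i] = (char)('0' + mant % 10); mant /= 10; }
            *buf++ = tmp[0];
            *buf++ = '.';
            for (i = 1; i <= 6; i++) *buf++ = tmp[i];
        }

        *buf++ = 'E';                     /* emit "E+xx" / "E-xx" */
        if (exp10 < 0) { *buf++ = '-'; exp10 = -exp10; }
        else           { *buf++ = '+'; }
        *buf++ = (char)('0' + exp10 / 10);
        *buf++ = (char)('0' + exp10 % 10);
        *buf   = '\0';
    }

Called as float_to_exp(3.1415926f, buf) this would produce "3.141593E+00"; the point of the single scaling step is that the intermediate value is rounded once rather than once per decade.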
Ron Blancarte wrote:
> Now, I have googled this some, with not much as far as results, so I
> thought I would ask here to see what kind of luck I would have.
>
> I am using the Analog Devices ADuC831 (8052 ISA) part to do data
> acquisition and transmission. I have some data that is being held as
> floating point (single precision IEEE 754 format). Some of this data
> needs to be output to serial on request. This is where my request
> comes in. Currently I am using sprintf() to format this output into
> my buffer. HOWEVER, the footprint of this code is making it less than
> desirable to use (over 1100 bytes of code space and nearly 30 bytes of
> memory). So I am looking to do the output on my own.
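(For illustration only: the repeated multiply/divide-by-10 normalization the quoted post describes boils down to something like the host-side fragment below. This is hypothetical code, not Ron's; sign and zero handling are omitted, and the exact last-digit error depends on the float library, but the mechanism -- one roundoff per multiply or divide -- is the same.)

    #include <stdio.h>

    int main(void)
    {
        float v     = 6.791775e-8f;   /* example value from the thread */
        int   exp10 = 0;              /* decimal exponent being built  */

        /* Each pass rounds the intermediate float result. */
        while (v >= 10.0f) { v /= 10.0f; exp10++; }
        while (v <  1.0f)  { v *= 10.0f; exp10--; }

        /* v should now be 6.791775 with exp10 == -8, but the eight
         * successive multiplies each introduce roundoff, and the
         * accumulated error shows up in the last printed digit or two
         * (the original post saw 6.791777E-08). */
        printf("%f E%+03d\n", v, exp10);
        return 0;
    }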
For embedded work, the rule here is: never use floating point at all if it can be avoided. If you know what the range of values and required accuracy is, scaled integer arithmetic will be faster and more efficient, especially if you can arrange the data to allow shifts rather than multiplies and divides. If you need trig functions, then write your own using lookup tables, perhaps adding simple interpolation as a trade-off against table size. If you do this, you can define your own internal representation and design it for ease of use within the system and for conversion to human-readable / external format.

Floating point on small machines is slow and inefficient, and it adds complexity and rounding errors everywhere it's used; you can usually get better control and predictable accuracy using other methods. It's a bit of a sledgehammer, and if I were your project manager, I would encourage you to find a more creative solution :-).

Having said all that, 1100 bytes doesn't sound too bad for a sprintf, especially on an 8-bit machine. Still wouldn't use it though...

Chris
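A sketch of the sort of table-driven trig Chris describes (illustrative only; the 256-entry full-cycle table, the Q15 scaling, and the names sin_tab_init / sin_q15 are choices made for this example, not anything from the thread):

    #include <stdint.h>
    #include <math.h>

    /* One full sine cycle, 256 samples in Q15 (value = sin * 32767).  On a
     * real 8051 target this would be a precomputed const table in code ROM;
     * it is filled at startup here only to keep the example self-contained. */
    static int16_t sin_tab[256];

    void sin_tab_init(void)
    {
        int i;
        for (i = 0; i < 256; i++)
            sin_tab[i] = (int16_t)(32767.0 * sin(6.283185307 * i / 256.0));
    }

    /* angle: 0..65535 maps to 0..2*pi.  Returns sin(angle) in Q15, using
     * the table plus linear interpolation between adjacent entries. */
    int16_t sin_q15(uint16_t angle)
    {
        uint8_t i    = (uint8_t)(angle >> 8);      /* coarse table index        */
        uint8_t frac = (uint8_t)(angle & 0xFF);    /* position between entries  */
        int16_t a = sin_tab[i];
        int16_t b = sin_tab[(uint8_t)(i + 1)];     /* index wraps 255 -> 0      */

        /* a + (b - a) * frac/256, widened to 32 bits to avoid overflow */
        return (int16_t)(a + (((int32_t)(b - a) * frac) >> 8));
    }

With this layout sin_q15(0x4000) returns 32767 (sin 90 degrees) and sin_q15(0x8000) returns 0, at a cost of a 512-byte table and a handful of integer operations per call.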

ChrisQuayle wrote:


> For embedded work, the rule here is never use floating point at all if
> it can be avoided.
I strongly disagree. Floating point greatly simplifies the development and the support of the program. I also like it when the physical parameters are represented as natural values, such as Volts, Amperes, and Seconds, and not in weird units like 12345*ADC_RESOLUTION/CPU_CLOCK_RATE.
> If you know what the range of values and required
> accuracy is, scaled integer arithmetic will be faster and more
> efficient,
Yes. On an 8-bitter, float is something like 20 times slower than 16-bit integer arithmetic. It also costs several kilobytes of ROM. However, that doesn't matter in many practical cases.
> especially if you can arrange the data to allow shifts,
> rather than multiply and divide. If you need trig functions, then write
> your own using lookup tables, perhaps adding simple interpolation as a
> tradeoff against table size.
Yes, you can. But what for?
> If you do this, you can define your own
> internal representation and design it for ease of use within the system
> and conversion to human readable / external format.
Reinventing the wheel. Wasted effort.
> Floating point on small machines is slow, inefficient, adds complexity
> and rounding errors everywhere it's used and you can usually get better
> control and predictable accuracy using other methods.
When you are using a micro with 16k of ROM or more, the floating point overhead doesn't really matter. Of course, it does matter on machines with less than 8k.
> It's a bit of a
> sledgehammer and if I were your project manager, I would encourage you
> to find a more creative solution :-).
Your job as a manager is to get the project done on time, within budget, and for good. There is no place for religious beliefs and super optimization.
> Having said all that, 1100 bytes doesn't sound too bad for a sprintf,
> especially on an 8-bit machine. Still wouldn't use it though...
Is there any reasonable argument for not using sprintf?

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
On 2007-03-29, ChrisQuayle <nospam@devnul.co.uk> wrote:

> For embedded work, the rule here is never use floating point at all if
> it can be avoided.
I disagree. Doing something in fixed point instead of floating point will greatly increase the design work required and make that work more difficult. It will cause bugs. There will be overflow and wrap-around bugs. If you've got the code space and the CPU cycles, use floating point: it's far easier to get the right answer, and it results in far fewer bugs.
> If you know what the range of values and required accuracy is,
The problem is that many people _think_ they know. But they don't.
> scaled integer arithmetic will be faster and more efficient,
> especially if you can arrange the data to allow shifts,
> rather than multiply and divide. If you need trig functions,
> then write your own using lookup tables, perhaps adding simple
> interpolation as a tradeoff against table size.
Why not design your own CPU, fab your own chips, and write your own compiler too? ;)
> If you do this, you can define your own internal
> representation and design it for ease of use within the system
> and conversion to human readable / external format.
That's very difficult to do right. When I started working in the instrumentation business 15 years ago, most products were done in fixed point. A _lot_ of time was spent trying to get the scaling and representation right. And it never was quite right. There were always problems in the field with wrap-around and over/underflow in cases that the developers didn't foresee.

Today, almost all of those projects use floating point. The result? Much faster development time and far fewer bugs. It requires a few KB more code space, but code space has gotten very, very cheap. The main rub is speed when you're running with little power at clock speeds <1MHz.
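A concrete example of the kind of wrap-around bug being described here, assuming a Q15 fixed-point format (a hypothetical snippet, not code from any of the products mentioned):

    #include <stdint.h>

    int16_t q15_mul_wrong(int16_t a, int16_t b)
    {
        /* Looks plausible, but on a compiler with 16-bit int (typical for
         * the 8051) a*b is computed in 16 bits and wraps before the shift. */
        return (int16_t)((a * b) >> 15);
    }

    int16_t q15_mul_right(int16_t a, int16_t b)
    {
        /* Widen before multiplying so the full 30-bit product survives. */
        return (int16_t)(((int32_t)a * (int32_t)b) >> 15);
    }

With a = b = 29491 (about 0.9 in Q15), the first version returns garbage on a 16-bit-int target while the second returns roughly 26541 (about 0.81); bugs like this hide until the inputs happen to be large enough to overflow.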
> Floating point on small machines is slow, inefficient,
In many applications it doesn't matter whether a calculation takes 50 ms or 5 ms. Trading complexity, development time, and bugs for useless speed is a false economy.
> adds complexity
I find exactly the opposite: trying to do things in fixed point is far more complex than doing them in floating point.
> and rounding errors everywhere it's used and you can usually
> get better control and predictable accuracy using other
> methods. It's a bit of a sledgehammer and if I were your
> project manager, I would encourage you to find a more creative
> solution :-).
Sounds like you're a big fan of premature optimization to me. Do it the easy, obvious way first (that usually means floating point for measurement and control apps). Only _after_ you've measured performance and decided it's going to be too slow do you profile the code and decide if the extra work and bugs of fixed point are worthwhile.
> Having said all that, 1100 bytes doesn't sound too bad for a
> sprintf, especially on an 8-bit machine. Still wouldn't use it
> though...
--
Grant Edwards   grante at visi.com
Yow! All right, you degenerates! I want this place evacuated in 20 seconds!
Vladimir Vassilevsky wrote:
> ChrisQuayle wrote:
>
>> For embedded work, the rule here is never use floating point at all if
>> it can be avoided.
>
> I strongly disagree. Floating point greatly simplifies the development
> and the support of the program. I also like it when the physical
> parameters are represented as natural values, such as Volts, Amperes,
> and Seconds, and not in weird units like
> 12345*ADC_RESOLUTION/CPU_CLOCK_RATE.
One of the main problems with the use of C library floating point is that you are then more or less forced to use all the library functions that support the internal format. As library implementations vary between vendors, so do the bugs and implementation details, and you then have a whole wedge of code in your project whose internals are invisible (do you have the sources?) and cannot easily be verified, etc.

For small-system embedded work, it's better (IMHO, YMMV etc.) to develop more efficient libraries over several projects / years. You then have fully debugged, efficient, tested code that you have complete control over. Such libraries can of course include scaled integer support and are actually surprisingly trivial to write, as are table-based trig functions.
>
> When you are using a micro with 16k of ROM or more, the floating point
> overhead doesn't really matter. Of course, it does matter on
> machines with less than 8k.
Memory space is often not the problem. Rather, it's CPU throughput issues that limit its use. There's also the knock-on effect in terms of design constraints, inappropriate standard library data types, conversion function overhead, etc. For a lot of embedded work, much of the standard C library is unusable because of such issues.

Example: a recent PAT tester project, 8051 legacy hardware, quarter-VGA graphics, loads of text and icons, 3-decimal-place live readout of current and voltage, etc., which was unusably slow until we ditched all the C lib floating point code. This thread was about 8051-class processors. Arguably different for embedded Linux on ARM, but that's not what was being discussed.
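As an illustration of the kind of integer-only path that can replace the float code for a readout like the one above: keep the value as an integer number of millivolts and format it directly. This is a sketch with invented names and scaling, not the code from that project.

    #include <stdint.h>

    /* Format a value held as integer millivolts (e.g. 12345 -> "12.345")
     * into buf; returns a pointer to the terminating '\0'.  buf must hold
     * at least 8 characters; mv in -32767..32767 assumed.  No floating
     * point involved anywhere. */
    char *format_mV(int16_t mv, char *buf)
    {
        uint16_t u;

        if (mv < 0) { *buf++ = '-'; u = (uint16_t)(-mv); }
        else        { u = (uint16_t)mv; }

        /* whole volts (0..32) */
        if (u >= 10000) *buf++ = (char)('0' + u / 10000);
        *buf++ = (char)('0' + (u / 1000) % 10);
        *buf++ = '.';

        /* three decimal places */
        *buf++ = (char)('0' + (u / 100) % 10);
        *buf++ = (char)('0' + (u / 10) % 10);
        *buf++ = (char)('0' + u % 10);
        *buf   = '\0';
        return buf;
    }

format_mV(12345, buf) yields "12.345" and format_mV(500, buf) yields "0.500"; the whole routine is a handful of 16-bit divides, which is cheap even on an 8051.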
>> It's a bit of a sledgehammer and if I were your project manager, I
>> would encourage you to find a more creative solution :-).
>
> Your job as a manager is to get the project done on time, within budget,
> and for good. There is no place for religious beliefs and super
> optimization.
Right, get it out the door and stuff the quality, because all we had was a hammer and no imagination. Nothing to do with religious belief, actually...

Chris
Grant Edwards wrote:

> I disagree. Doing something in fixed-point instead of floating
> point will greatly increase the design work required and make
> that work more difficult. It will cause bugs. There will be
> overflow and wrap-around bugs.
Any design worthy of the name should be able to define system limits and provide recovery for out-of-limit values.
> That's very difficult to do right. When I started working in
> the instrumentation business 15 years ago, most products were
> done in fixed point. A _lot_ of time was spent trying to get
> the scaling and representation right. And it never was quite
> right. There were always problems in the field with wrap
> around and over/underflow in cases that the developers didn't
> foresee.
Poor system design / specification?
>
> Sounds like you're a big fan of premature optimization to me.
Not quite sure what you mean by that, but good design starts at the beginning, taking into consideration all software and hardware issues.
>
> Do it the easy, obvious way first (that usually means floating
> point for measurement and control apps). Only _after_ you've
> measured performance and decided it's going to be too slow do
> you profile the code and decide if the extra work and bugs of
> fixed point are worthwhile.
Depends on the company culture, product, and other issues. Quick-hack proof-of-concept code can become the product when the business sees it working and declares development over. Best to do it right to start with: a better product, and it saves time and money in the end...

Chris

ChrisQuayle wrote:


>>> For embedded work, the rule here is never use floating point at all
>>> if it can be avoided.
>> Floating point greatly simplifies the development
>> and the support of the program. I also like it when the physical
>> parameters are represented as natural values, such as Volts,
>> Amperes, and Seconds, and not in weird units like
>> 12345*ADC_RESOLUTION/CPU_CLOCK_RATE.
> One of the main problems with the use of C library floating point is
> that you are then more or less forced to use all the library functions
> that support the internal format.
I would be really surprised if the internal format is not IEEE-754. The only major problem that I have encountered with the vendor floating point libraries is that in some cases the library is not reentrant. There were quite a few small bugs though.

> As library implementations vary
> between vendors, so do the bugs and implementation details, and the fact
> that you then have a whole wedge of code in your project whose internals
> are invisible (do you have the sources?), cannot easily be verified, etc.
The less I have to look under the hood, the better. I don't understand the open source philosophy.
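Since the discussion assumes an IEEE-754 single-precision internal format, here is a small sketch of pulling a float apart into its fields without any library help (illustrative only; the union-based access is the usual idiom, and NaN/denormal handling is glossed over):

    #include <stdint.h>

    /* Split an IEEE-754 single into sign, biased exponent and mantissa.
     * For normal numbers: value = (-1)^sign * 1.mantissa * 2^(exp - 127). */
    void float_fields(float f, uint8_t *sign, uint8_t *exp, uint32_t *mant)
    {
        union { float f; uint32_t u; } x;

        x.f   = f;
        *sign = (uint8_t)(x.u >> 31);
        *exp  = (uint8_t)((x.u >> 23) & 0xFF);
        *mant = x.u & 0x007FFFFFUL;
    }

For the 0x3391DA2E value quoted earlier in the thread this gives sign 0, biased exponent 0x67 (i.e. 2^-24 after subtracting the bias of 127) and mantissa 0x11DA2E, which is the raw material a hand-written float-to-ASCII routine has to work with.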
> Memory space is often not the problem. Rather, it's CPU throughput
> issues that limit its use. There's also the knock-on effect in terms of
> design constraints, inappropriate standard library data types, conversion
> function overhead, etc. For a lot of embedded work, much of the standard
> C library is unusable because of such issues. Example: a recent PAT
> tester project, 8051 legacy hardware, quarter-VGA graphics, loads of text
> and icons, 3-decimal-place live readout of current and voltage, etc.,
> which was unusably slow until we ditched all the C lib floating point
> code. This thread was about 8051-class processors.
In many cases, the 8-bit software looks like this:

    for (;;) {
        Do_Everything();
    }

Instead of trying to do everything in one loop, employ a preemptive multitasker. That way the slow floating point calculation is spread out over time, which avoids the annoying delays for the other processes.
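A sketch of that task split under a small preemptive kernel. All of the names here (os_create_task, os_sleep, the helper functions, the priorities) are invented for illustration and do not refer to any particular RTOS; the point is only that the slow formatting lives in a low-priority task that can be preempted by the time-critical one.

    #include <stdint.h>

    /* Hypothetical kernel API and helpers -- names invented for this sketch. */
    void  os_create_task(void (*fn)(void), uint8_t priority);
    void  os_sleep(uint16_t ms);
    void  sample_sensors(void);                 /* fast, integer-only work  */
    float latest_reading(void);
    void  float_to_exp(float v, char *buf);     /* slow float formatting    */
    void  uart_send(const char *s);

    static char txbuf[16];

    static void control_task(void)      /* high priority: short and regular */
    {
        for (;;) {
            sample_sensors();
            os_sleep(1);                /* 1 ms period                      */
        }
    }

    static void report_task(void)       /* low priority: may take a while   */
    {
        for (;;) {
            float_to_exp(latest_reading(), txbuf);  /* preempted as needed  */
            uart_send(txbuf);
            os_sleep(100);              /* report ten times a second        */
        }
    }

    void main_init(void)
    {
        os_create_task(control_task, 1);    /* higher priority              */
        os_create_task(report_task,  2);    /* lower priority               */
    }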
> Arguably different
> for embedded Linux on ARM, but that's not what was being discussed.
I use multithreading and floating point calculations with the AVR and HC12. Works fine. Of course, the speed-critical parts are optimized, but there are not many of them.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
"Vladimir Vassilevsky" <antispam_bogus@hotmail.com> wrote in message 
news:ijTOh.3155$YL5.1609@newssvr29.news.prodigy.net...
> I would be really surprised if the internal format is not IEEE-754.
> The only major problem that I have encountered with the vendor floating
> point libraries is that in some cases the library is not reentrant. There
> were quite a few small bugs though.
My favorite was a compiler library that supported the floating point functions that we called "sometimes less than" and "maybe greater than". This was the finest example of truly fuzzy logic I've come across.

Mark Walsh
On 2007-03-29, ChrisQuayle <nospam@devnul.co.uk> wrote:
> Grant Edwards wrote:
>
>> I disagree. Doing something in fixed-point instead of floating
>> point will greatly increase the design work required and make
>> that work more difficult. It will cause bugs. There will be
>> overflow and wrap-around bugs.
>
> Any design worthy of the name should be able to define system
> limits and provide recovery for out-of-limit values.
Maybe it's just the places I've worked, but in practice that seems to be rather rare.
>> That's very difficult to do right. When I started working in
>> the instrumentation business 15 years ago, most products were
>> done in fixed point. A _lot_ of time was spent trying to get
>> the scaling and representation right. And it never was quite
>> right. There were always problems in the field with wrap
>> around and over/underflow in cases that the developers didn't
>> foresee.
>
> Poor system design / specification?
Being able to provide a robust working solution when given a poor system design and specification can be a very good thing. :)

--
Grant Edwards   grante at visi.com
Yow! Why was I BORN?
"Ron Blancarte" <ron@---TAKETHISOUT---.blancarte.com> wrote in message 
news:c0ui03dctcb8t4s04fmrs65v837jimars1@4ax.com...
> Now, I have googled this some, with not much as far as results, so I
> thought I would ask here to see what kind of luck I would have.
>
> I am using the Analog Devices ADuC831 (8052 ISA) part to do data
> acquisition and transmission. I have some data that is being held as
> floating point (single precision IEEE 754 format). Some of this data
> needs to be output to serial on request. This is where my request
> comes in. Currently I am using sprintf() to format this output into
> my buffer. HOWEVER, the footprint of this code is making it less than
> desirable to use (over 1100 bytes of code space and nearly 30 bytes of
> memory). So I am looking to do the output on my own.
>
> And to this end, I have succeeded, to an extent. So far I have
> removed my need for sprintf() in printing HEX format, as well as ints
> and chars. My problem is with floats. I have not yet attempted
> standard notation:
>   3.1415926
> but I did write an exponent formula:
>   3.141593E+00
> I even got the rounding. HOWEVER, this is using simple multiplies and
> divides, by 10, to achieve its goals. And this introduces my
> problem. Since a few of the numbers go out to 6 digits of precision,
> I am having problems with these last digits outputting correctly. For
> example:
>
>   Value entered:       6.791775E-08
>   Value displayed:     6.791777E-08
>   Hex value on chip:   0x3391DA2E (6.7917753e-8)
>
> Clearly, the entered value is being stored correctly and the displayed
> value, while close, is just slightly off. So is there a way to do
> this without this error induced by my multiplications by 10 (and still
> not using sprintf())?
>
> RonB
>
> --------------------------------------------------
> "It is human nature to take shortcuts in thinking"
> --------------------------------------------------
Ron,

I emailed you some sample code that we wrote for the same problem on another member of the ADuC8xx family. Let me know if you did not receive the direct email.

Scott
