EmbeddedRelated.com
The 2024 Embedded Online Conference

C18 basic question

Started by Dennis September 12, 2011
On 12/09/2011 09:44, Arlet Ottens wrote:
> On 09/12/2011 09:31 AM, Tim wrote:
>> On Mon, 12 Sep 2011 09:16:27 +0200, Arlet Ottens wrote:
>>
>>>> I need to use this in the following calculation where "Data" is the
>>>> 14 bit raw sensor data consisting of Data[0]*16 + Data[1]:
>>>
>>> You probably mean Data[0] * 256 + Data[1] instead, or (Data[0] << 8) +
>>> Data[1], if your compiler isn't smart enough.
>>>
>> C18 may not automatically cast to integer type for the computation, so
>> it's best to force it to be explicit:
>
> According to the user guide, if you use the -Oi command line option, the
> C18 compiler will follow standard integer promotion rules. For a novice
> programmer, I'd recommend enabling this option.
>
> I think Microchip should have made that the default, and implemented a
> command line option to deviate from the standard, or made their compiler
> smart enough to recognize where this optimization would be safe.
Why do some manufacturers provide these pseudo-C compilers that don't follow fundamental C standards? I can understand why a toolchain writer might want to provide the option of something that lets you generate smaller or faster code than you can get with a full C standard implementation - but it must be a non-default option that the user explicitly picks, and it must be something that actually makes sense (integer promotion rules may /look/ like they produce worse code on an 8-bit machine, but that's only if the optimiser is not good enough).

From this post it sounds like C18 is broken-by-design, and a trap for the unwary - novices and experts alike. It's like TI's Code Composer and their stupid "don't zero the bss" startup code.
On Mon, 12 Sep 2011 11:10:56 +0200, David Brown
<david@westcontrol.removethisbit.com> wrote:

>On 12/09/2011 09:44, Arlet Ottens wrote:
>> <snip>
>>
>> According to the user guide, if you use the -Oi command line option, the
>> C18 compiler will follow standard integer promotion rules. For a novice
>> programmer, I'd recommend enabling this option.
>>
>> I think Microchip should have made that the default, and implemented a
>> command line option to deviate from the standard, or made their compiler
>> smart enough to recognize where this optimization would be safe.
>
>Why do some manufacturers provide these pseudo-C compilers that don't
>follow fundamental C standards? I can understand why a toolchain writer
>might want to provide the option of something that lets you generate
>smaller or faster code than you can get with a full C standard
>implementation - but it must be a non-default option that the user
>explicitly picks, and it must be something that actually makes sense
>(integer promotion rules may /look/ like they produce worse code on an
>8-bit machine, but that's only if the optimiser is not good enough).
>
> From this post it sounds like C18 is broken-by-design, and a trap for
>the unwary - novices and experts alike. It's like TI's Code Composer
>and their stupid "don't zero the bss" startup code.
I used Microchip's C18 version 1.0 c compiler -- still have a number of editions from that era floating about here. I also had many chances to talk to one of the authors on the phone and actually meet him when he flew out here on vacation for a week.

I have my own very limited perspective through which I 'see' this. So keep in mind it may not be 'truth,' but just a distorted view of a piece of it. And there are a few viewers here who know a LOT MORE about it than I do because they were also developing such compilers long ago, too. Maybe they will provide some better context.

The general class of compiler at that time came out of the PIC16 and PIC17 era. I'm not sure if anyone these days remembers the PIC17 much, or the PIC16C54, C55, C56, and C57 of that time. I used all of them in EPROM and OTP form, in the late '80s and beyond. (And Microchip still makes the darned things. One of Microchip's defining features is to support products, long term.)

Early on -- and I can't remember when I first started using their c compiler, but I think it would have been in the early '90s -- the same compiler supported these early chips with a two-level hardware stack, as little as 512 instruction words and not that much more on the high end, plus a massive 24 bytes of sram, and a W register.

It wasn't an easy target, and my guess is that they focused on getting something working. It's possible, though I've no specific memory of it anymore, that the first edition of their c compiler predates the 1990 c standard. If not, it wasn't too long after it. And there were many things that needed doing which were difficult or "too expensive" to do when talking about a target with 24 bytes of sram and a two-level hardware stack (inaccessible, as well) for a PC.

The usual integral promotion stuff, such as widening 'char' parameters to functions into 'int', would actually make customers _MORE_ angry, not less, and would likely turn a program that _might_ fit into one that wouldn't have any chance at all and would force another choice.

This was a very early time in the ownership period for the new venture capitalists who bought up the old IP and made the decision to compete very differently as Arizona MicroChip Technology and enter the embedded control market. It's likely that they would have lost potential wins at this critical juncture if they had directed their compiler developers to stand on some formal c-pedestal. It was about taking what they had at the time, one FAB in Chandler, AZ, and making some tricky balancing decisions about combining EPROM with a very old cpu design and putting together docs and tools and various promotional materials; and knowing that perhaps the one other thing they could mix in was to support what they made and ensure end users would succeed rather than fail in their own product efforts. I'm guessing, but part of that focus sifted down to the developer group as well, and held their own focus on a c compiler that would at the very least enable the widest possible range of minimalist applications to succeed. And this, I would further guess, meant not necessarily following integer promotions -- most especially "as a default." They couldn't be sure that customers' programmers would all be the smartest and most talented from the operating-system-hosted crowd; more likely they would be just electronic design engineers who know about latches and registers and so on and would "think that way" about what they were coding.

I want you to also keep in mind that the times were very different in another way, which is why I bring up "electronic design engineers" in this context. Designers, at that time, were _less often_ familiar with coding. Many of them _then_ had little or no experience there. Today, of course, it's almost a requirement to have at least some basic capability there. But back then, fewer did. They had their hands full enough with existing design issues and often shuffled the programming load onto coders.

Also, microcontrollers without external bus systems were a kind of new deal. Prior to about 1985 or so, and say with the 8031/32 core, one would design a system with external EPROM and SRAM along with the big cpu with lots of pins. To add a micro meant adding a lot of chips and taking up a lot of space. Once Microchip came into the market (and one or two others around that period), their parts were brought out as self-contained systems without external buses -- but with I/O pins with specialized functions and everything else needed for small applications contained inside the chip. And this opened the door for a whole new set of cost- and size-constrained application spaces. For example, thermometers.

Imagine a hardware engineer who has carefully looked at the hardware details of the PIC16C54, determined that the thresholds and drive capabilities of the I/O are acceptable, that the number of I/O pins matches the need, and that 25 bytes of sram is more than enough for the concept. Package size is right, cost is right, etc. Using it will give them "an edge" over the competition.

And now imagine some c compiler which, if it had chosen to follow the c standard by default (defaults and possible overrides the poor engineer wouldn't know much about, because they were, almost by definition, relatively ignorant of programming issues like this back then), would cause the 25 bytes to be exceeded by various integer promotions "by rule."

Now imagine that Microchip made the decision to set up the defaults so that the c compiler uses the least amount of sram possible -- doesn't promote 'char' to 'int' in parameter passing, for example. The engineer succeeds without even knowing why. But succeeds per plan. The c compiler didn't dictate rules poorly understood or cared about. It just worked as expected.

It is the difference between a design win and success -- and there is no doubt about Microchip's success, in hindsight -- and furious customers with expensive lessons learned but no products using microchip controllers.

Standing on ceremony, at that time, would have been unwise.

I guess I'm not all that bothered. We are talking about 8 bit cpus, some of them with as close to zero resources as it is possible to have and still have a microcontroller worthy of the name. If this were resource-rich 32-bit cpus? Yes, I'd be all over it. But I don't see this from a Marie Antoinette "Why don't they just eat cake if there's no more bread left on the table?" point of view.

Perhaps that's because I remember being on the other end of such pious views, in the trenches with nothing much to 'eat' and glad that there was a c compiler at all. I didn't gnash my teeth because it didn't fit some standards org's idea of 'proper.' I was just glad to have any meal at all.

Jon
On 12/09/2011 16:10, Jon Kirwan wrote:
> On Mon, 12 Sep 2011 11:10:56 +0200, David Brown
> <david@westcontrol.removethisbit.com> wrote:
>
>> <snip>
>
> I used Microchip's C18 version 1.0 c compiler -- still have
> a number of editions from that era floating about here. I
> also had many chances to talk to one of the authors on the
> phone and actually meet him when he flew out here on vacation
> for a week.
>
> I have my own very limited perspective through which I 'see'
> this. So keep in mind it may not be 'truth,' but just a
> distorted view of a piece of it. And there are a few viewers
> here who know a LOT MORE about it than I do because they were
> also developing such compilers long ago, too. Maybe they
> will provide some better context.
>
> The general class of compiler at that time came out of the
> PIC16 and PIC17 era. I'm not sure if anyone these days
> remembers the PIC17 much, or the PIC16C54, C55, C56, and C57
> of that time. I used all of them in EPROM and OTP form, in
> the late '80s and beyond. (And Microchip still makes the
> darned things. One of Microchip's defining features is to
> support products, long term.)
>
I've used PIC16 devices, programmed in assembly. With a few macros to give sane names to the opcodes, they were quite usable. The paging for the memory was "fun", especially because the emulator didn't quite match the real devices when it came to interrupts and paging.
> Early on, and I can't remember when I first started using
> their c compiler but I think it would have been in the early
> '90s, the same compiler supported these early chips with a
> two-level hardware stack, as little as 512 instruction words
> and not that much more on the high end, plus a massive 24
> bytes of sram, and a W register.
>
It's okay to make a C-like compiler for such brain-dead architectures as the PIC - as long as you don't call it a C compiler, and you make it clear where it differs from standard C. Then it is a useful tool.

The PIC18 architecture is perfectly capable of supporting C with reasonable efficiency - this is one of the main selling points of it (over the PIC16 which - practically speaking - did not support C, and the PIC17 which supported it inefficiently).
> It wasn't an easy target and my guess is that they focused on
> getting something working. It's possible, though I've no
> specific memory of it anymore, that the first edition of
> their c compiler predates the 1990 c standard. If not, it
> wasn't too long after it. And there were many things that
> needed doing which were difficult to do or "too expensive" to
> do when talking about a target with 24 bytes of sram and a
> two level hardware stack (inaccessible, as well) for a PC.
>
Integer promotion stems from K&R's first version.
> The usual integral promotion stuff, such as widening 'char'
> parameters to functions into 'int', would actually make
> customers _MORE_ angry, not less, and would likely turn a
> program that _might_ fit into one that wouldn't have any
> chance at all and would force another choice.
>
I disagree. Some compilers for 8-bit processors offer an "8-bit int" flag, which is a much more sensible way to handle this situation. It's a flag that customers can use to generate smaller and faster code if they understand its limitations, and it lets the compiler follow all the rules - including integer promotion - except for the single "minimum size of int" rule.

A compiler should produce /correct/ code, according to the expectations a programmer has when coding to the C standards. It is fine to have switches allowing it to produce smaller and faster code in a non-standard way - but that cannot be the default for a C compiler.
> This was a very early time in the ownership period for the
> new venture capitalists who bought up the old IP and made the
> decision to compete very differently as Arizona MicroChip
> Technology and enter into the embedded control market. It's
> likely that they would have lost potential wins at this
> critical juncture if they had directed their compiler
> developers to stand on some formal c-pedestal. It was about
> taking what they had at the time, one FAB in Chandler, AZ,
> and making some tricky balancing decisions about combining
> EPROM with a very old cpu design and putting together docs
> and tools and various promotional materials; and knowing that
> perhaps the one other thing they could mix in was to support
> what they made and ensure end users would succeed rather than
> fail in their own product efforts. I'm guessing, but part of
> that focus sifted down to the developer group, as well, and
> held their own focus on a c compiler that would at the very
> least enable the widest possible range of minimalist
> applications to succeed. And this, I would further guess,
> meant not necessarily following integer promotions -- most
> especially "as a default." Because they couldn't be sure
> that customers' programmers would all be the smartest and
> most talented from the operating-system-hosted crowd, but
> instead more likely just electronic design engineers who know
> about latches and registers and so on and would "think that
> way" about what they were coding.
>
> I want you to also keep in mind that the times here were very
> different in another way, which is why I bring up "electronic
> design engineers" in this context. Designers, at that time,
> were _less often_ familiar with coding. Many of them _then_
> had little or no experience there. Today, of course, it's
> almost a requirement -- to have at least some basic
> capability there. But back then, fewer did. They had their
> hands full enough with existing design issues and often
> shuffled the programming load onto coders.
>
The lack of experience of your target users does not give you the right to confuse them more by deliberately breaking standards! It means you have to be even more careful to give people what they expect - after all, your users will be newbie programmers armed with /standard/ C books.

Unlike many developers these days, embedded programmers at that time were much more willing to read manuals and documentation (being brought up as hardware engineers on written datasheets, rather than modern "programmers" used to clicking on guis). I don't think it would have been much of a challenge for people to read about switches to make the compiler generate smaller and faster code for certain types of program.
> Also, microcontrollers without external bus systems were a
> kind of new deal. Prior to about 1985 or so, and say with
> the 8031/32 core, one would design a system with external
> EPROM and SRAM along with the big cpu with lots of pins. To
> add a micro meant adding a lot of chips and taking up a lot
> of space. Once Microchip came into the market (and one or
> two others around that period), their parts were brought out
> as self-contained systems without external buses -- but with
> I/O pins with specialized functions and everything else needed
> for small applications contained inside the chip. And this
> opened the door for a whole new set of cost and size
> constrained application spaces. For example, thermometers.
>
> Imagine a hardware engineer who has carefully looked at the
> hardware details of the PIC16C54, determined that the
> thresholds and drive capabilities of the I/O are acceptable,
> that the number of I/O pins match the need, and that 25 bytes
> of sram is more than enough for the concept. Package size is
> right, cost is right, etc. Using it will give them "an edge"
> over the competition.
>
> And now imagine some c compiler, which if it had chosen to
> follow the c standard by default (defaults and possible
> overrides about which the poor engineer wouldn't know much
> about because they were, almost by definition, relatively
> ignorant of programming issues like this back then) would
> cause the 25 bytes to become exceeded with various integer
> promotions "by rule."
>
I still fail to see why users should not be expected to read the manual - or read the options on the checkboxes in Microchip's IDE. It really is not rocket science - Microchip had an IDE with a project manager before they had a compiler.

I also fail to understand why you think integer promotion is such an evil on these devices. With a naive implementation, integer promotion can lead to some overhead - but not as much as you seem to think. And once the compiler implements some basic patterns here, most of that disappears. Integer promotion is really only a problem in cases where the compiler front-end is pre-written and targeted for 16-bit or 32-bit systems and doesn't keep track of the original size of the object.
> Now imagine that Microchip made the decision to set up the
> defaults so that the c compiler uses the least amount of sram
> possible -- doesn't promote 'char' to 'int' in parameter
> passing, for example. The engineer succeeds without even
> knowing why. But succeeds per plan. The c compiler didn't
> dictate rules poorly understood or cared about. It just
> worked as expected.
>
"Integral promotion" refers mainly to arithmetic promotion, such as changing (un)signed chars into (un)signed ints in arithmetic. You are, I think, referring to the default types of parameters in functions - if a function is used without a prototype then its parameters are assumed to be integers. If the function was defined properly and has its arguments given as "char" in its prototype, then no integral promotion is performed. So what you are recommending here is that the compiler should silently break properly written code by knowledgeable programmers, so that sloppily written code by amateurs might work slightly faster.
> It is the difference between a design win and success -- and
> there is no doubt about Microchip's success, in hindsight --
> and furious customers with expensive lessons learned but no
> products using microchip controllers.
>
> Standing on ceremony, at that time, would have been unwise.
>
> I guess I'm not all that bothered. We are talking about 8
> bit cpus, some of them, with as close to zero resources as it
> is possible to have and still have a microcontroller worthy
> of the name. If this were resource-rich 32-bit cpus? Yes.
> I'd be all over it. But I guess I don't see this from a
> Marie Antoinette "Why don't they just eat the cakes if they
> don't have any more bread left on the table?" point of view.
>
> Perhaps because I remember being on the other end of such
> pious views and in the trenches with nothing much to 'eat'
> and glad that there was a c compiler, at all. I didn't gnash
> my teeth because it didn't fit some standards org's idea of
> 'proper.' I was just glad to have any meal, at all.
>
> Jon
Microchip made a lot of bad decisions with their "C" compilers, and were renowned for their poor implementation of C standards and features, and poor quality code. They have, in the past, sold tools for a great deal of money that are incapable of handling an array of structs.

If we were talking about a tool from the nineties, then I would have a bit more sympathy. But Microchip sells their C18 compiler /today/ - and selling a compiler as a "C compiler" that doesn't follow basic and reasonable C standards by default is not something that can be justified by claims about what some users expect.

(Obviously there are some C standard rules that an embedded compiler will not follow - you can't support the level of function nesting required, or the size of objects required, on a system with small ram. And you can't expect full stdlib support, etc.)
On Mon, 12 Sep 2011 11:10:56 +0200, David Brown wrote:

> On 12/09/2011 09:44, Arlet Ottens wrote:
>> On 09/12/2011 09:31 AM, Tim wrote:
>>> On Mon, 12 Sep 2011 09:16:27 +0200, Arlet Ottens wrote:
>>>
>> snip <<
>> I think Microchip should have made that the default, and implemented a
>> command line option to deviate from the standard, or made their
>> compiler smart enough to recognize where this optimization would be
>> safe.
>
> Why do some manufacturers provide these pseudo-C compilers that don't
> follow fundamental C standards?
The answer is easy, and is backed up by evidence from the market place: because it sells. No one branded C so that a compiler had to be compliant before it could be marketed as such -- and if they had, maybe there'd be some other language that covers the market.

At any rate, it's "buyer beware" when getting any C compiler that doesn't tout itself as ANSI C compliant.

--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com
On Mon, 12 Sep 2011 17:05:57 +0200, David Brown
<david@westcontrol.removethisbit.com> wrote:

><snip>
>I disagree.
><snip>
I think we will have to leave it there and for others to imagine and worry about. I just wanted to offer one person's perspective, not debate it. I started out saying it was my own distorted perspective. So there it is.

Jon
On Mon, 12 Sep 2011 17:05:57 +0200, David Brown
<david@westcontrol.removethisbit.com> wrote:

>> It wasn't an easy target and my guess is that they focused on
>> getting something working. It's possible, though I've no
>> specific memory of it anymore, that the first edition of
>> their c compiler predates the 1990 c standard. If not, it
>> wasn't too long after it. And there were many things that
>> needed doing which were difficult to do or "too expensive" to
>> do when talking about a target with 24 bytes of sram and a
>> two level hardware stack (inaccessible, as well) for a PC.
>>
>
>Integer promotion stems from K&R's first version.
I am very much aware of that. I worked on the Unix v6 kernel, in the '70s.

Jon
Jon Kirwan wrote:
[ ... ]
> Now imagine that Microchip made the decision to set up the
> defaults so that the c compiler uses the least amount of sram
> possible -- doesn't promote 'char' to 'int' in parameter
> passing, for example. The engineer succeeds without even
> knowing why. But succeeds per plan. The c compiler didn't
> dictate rules poorly understood or cared about. It just
> worked as expected.
Granted, those were the wild times, that was the frontier, and everything was being done for the first time. Nowadays that market optimization turns around to bite projects in the butt. Programs that work for reasons nobody has thought about are dangerous.

My example of a modern design win is the project I prototyped with an AVR, since I had the stuff to work with AVRs. The client, who only had the stuff for PICs, took my code, changed a half-dozen small functions that amounted to a Hardware-Abstraction-Layer, and implemented it on PIC18s. That's what can be done with real C.

Back when I was working with those old PICs I learned the environment and never thought thereafter about using anything beyond assembler. I'd got a copy of the Cross-32 Meta Assembler and used it on all the micro platforms I did. It would have been awkward if I'd ever got to DSPs, but since I was spared that it worked out fine.

Mel.
On Mon, 12 Sep 2011 16:11:22 +0800, Dennis wrote:

> "Tim" <tim@seemywebsite.please> wrote in message > news:19OdnQX0Q7vDKPDTnZ2dnUVZ_tKdnZ2d@web-ster.com... >> On Mon, 12 Sep 2011 09:16:27 +0200, Arlet Ottens wrote: >> >>>> I need to used this in the following calculation where "Data" is the >>>> 14 bit raw sensor data consisting of Data[0]*16 + Data[1]: >>> >>> You probably mean Data[0] * 256 + Data[1] instead, or (Data[0] << 8) + >>> Data[1], if your compiler isn't smart enough. >>> >> C18 may not automatically cast to integer type for the computation, so >> it's best to force it to be explicit: >> >> (((unsigned int)Data[0]) << 8) + Data[1], or if the right header is in >> there, >> (((uint32_t)Data[0]) << 8) + Data[1]. >> >> >>>> Temp = 17572L*Data/65536 - 4685; //calc temperature using raw sensor >>>> data >>>> >>>> What is the correct way to achieve this. It seems messy to convert >>>> the Data[] values using arithmetic. Is there a nice way ? >>> >>> I wouldn't worry about it. Just use multiply or shift, like above. The >>> performance will likely be determined by your long multiply and divide >>> anyway. >> >> Note that the above calculations will have problems with underflow -- >> any 16-bit unsigned, divided by 65536, will equal 0 with a large >> remainder. Since the compiler doesn't enforce order per the standard, >> you need to: >> >> Temp = (17572L*Data)/65536 - 4685 >> >> Note that I would do: >> >> Temp = (17572L * (long int)Data)/65536L - 4685L >> >> just to make sure that the compiler didn't get too clever on me. >> >> Consider also that temperatures vary slowly: if you have enough memory, >> you may well have the time to do everything in floating point after you >> collect the data. That's your design decision to make, however. >> >> -- >> Tim Wescott >> Control system and signal processing consulting www.wescottdesign.com > > Thanks Tim. I wasn't aware of the issue of the potential overflow. 
At > some stage I was going to try and write some test code to try and > simulate all possible values and export the result from MPLAB to check > for error modes I was not aware of. > > I probably could use floating point - maybe that is the next step. > Readings are likely to be taken on intevals of at least 10s of seconds > so time is not really an issue!
Floating point is good because it is less likely that you'll be confused when writing it, and because it is less likely that someone who comes after you will be confused (and therefore either screw the code up royally, or refuse to work on it).

Floating point is bad because on many processors (particularly small ones) it is much slower than integer arithmetic, and it is memory hungry. It is also bad because with many tool chains it either doesn't work exactly right, or it doesn't work at all! "Not exactly right" can include not being IEEE-compliant (and we already know how firmly wedded C18 is to standards compliance), not being reentrant (which causes no end of problems if you have an RTOS and two tasks that use floating point), or just having subtle bugs (Code Composter, for a while and for the TMS320F2812, shipped with code that would occasionally be off by nearly a factor of 2 when converting from a floating point number that was close to 2^n to an integer).

All in all, these days, I use floating point when I can and if I trust it.

--
www.wescottdesign.com
On 12/09/11 18:47, Jon Kirwan wrote:
> On Mon, 12 Sep 2011 17:05:57 +0200, David Brown
> <david@westcontrol.removethisbit.com> wrote:
>
>> <snip>
>> I disagree.
>> <snip>
>
> I think we will have to leave it there and for others to
> imagine and worry about. I just wanted to offer one person's
> perspective, not debate it. I started out saying it was my
> own distorted perspective. So there it is.
>
OK. Having wasted a lot of time because of stupid and unnecessary non-standard "features" in a supposed "C compiler", my own perspective is distorted in a different direction.
On Mon, 12 Sep 2011 20:59:17 +0200, David Brown
<david.brown@removethis.hesbynett.no> wrote:

>On 12/09/11 18:47, Jon Kirwan wrote:
>> On Mon, 12 Sep 2011 17:05:57 +0200, David Brown
>> <david@westcontrol.removethisbit.com> wrote:
>>
>>> <snip>
>>> I disagree.
>>> <snip>
>>
>> I think we will have to leave it there and for others to
>> imagine and worry about. I just wanted to offer one person's
>> perspective, not debate it. I started out saying it was my
>> own distorted perspective. So there it is.
>
>OK. Having wasted a lot of time because of stupid and unnecessary
>non-standard "features" in a supposed "C compiler", my own perspective
>is distorted in a different direction.
Oh, I never said I hadn't spent a lot of time like you on such things. You asked a question and I just tried to provide some context.

Jon
