
LPC900/80C51 Compiler Toolchain

Started by Unknown June 20, 2007
CBFalconer wrote:

> Paul Taylor wrote:
>> wilco.dijkstra wrote:
>>
>>> and emitting frame pointers when few compilers do so today do not
>>> instill a professional image.
>>
>> The embedded code I have written is mostly C code, and I very
>> rarely look at the assembler code. But that comment caught my
>> attention because I thought that almost all compilers, particularly
>> 32-bit ones, use stack frames and with frame pointers - or am I
>> interpreting that comment incorrectly?
>
> If the machine uses a stack, and the compiler keeps careful track
> of the state of that stack, it can generate SP relative addresses.
> However this normally requires other restrictions on the generated
> code.

BTW, about 30 years ago, when I did that for the 8080, I thought I was breaking new ground. Tweren't so.

-- 
<http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>
<http://www.securityfocus.com/columnists/423>
<http://www.aaxnet.com/editor/edit043.html>
cbfalconer at maineline dot net
Posted via a free Usenet account from http://www.teranews.com
On 26 Jun, 23:10, CBFalconer <cbfalco...@yahoo.com> wrote:
> Paul Taylor wrote:
>> wilco.dijkstra wrote:
>>
>>> and emitting frame pointers when few compilers do so today do not
>>> instill a professional image.
>>
>> The embedded code I have written is mostly C code, and I very
>> rarely look at the assembler code. But that comment caught my
>> attention because I thought that almost all compilers, particularly
>> 32-bit ones, use stack frames and with frame pointers - or am I
>> interpreting that comment incorrectly?
>
> If the machine uses a stack, and the compiler keeps careful track
> of the state of that stack, it can generate SP relative addresses.
> However this normally requires other restrictions on the generated
> code.
Most compilers stopped using frame pointers a long time ago. They are inefficient and don't actually provide any benefit. Rather than changing SP repeatedly inside a function, SP is adjusted only on entry and exit of the function, further improving efficiency. This also makes it easier to track stack variables in debuggers (as offsets from SP are fixed). The drawback is that stack size can grow in some circumstances. Functions containing alloca or C99 variable-length arrays could still use a frame pointer.

Wilco
On Tue, 26 Jun 2007 21:31:26 -0700, wilco.dijkstra wrote:

> On 26 Jun, 23:10, CBFalconer <cbfalco...@yahoo.com> wrote:
>> Paul Taylor wrote:
>>> wilco.dijkstra wrote:
>>>> and emitting frame pointers when few compilers do so today do not
>>>> instill a professional image.
>>>
>>> The embedded code I have written is mostly C code, and I very
>>> rarely look at the assembler code. But that comment caught my
>>> attention because I thought that almost all compilers, particularly
>>> 32-bit ones, use stack frames and with frame pointers - or am I
>>> interpreting that comment incorrectly?
>>
>> If the machine uses a stack, and the compiler keeps careful track
>> of the state of that stack, it can generate SP relative addresses.
>> However this normally requires other restrictions on the generated
>> code.
>
> Most compilers stopped using frame pointers a long time ago. They are
> inefficient and don't actually provide any benefit. Rather than
> changing SP repeatedly inside a function, SP is adjusted only on entry
> and exit of the function, further improving efficiency. This also
> makes it easier to track stack variables in debuggers (as offsets from
> SP are fixed). The drawback is that stacksize can grow in some
> circumstances. Functions containing alloca or C99 arrays could still
> use a frame pointer.
OK - you have lost me..... :-)

My understanding is that a frame pointer only gets set up at entry of a function and is restored at exit? And variables are easy to track because with a frame pointer offsets to the variables are fixed?

Regards,
Paul.
Paul Taylor wrote:
> On Tue, 26 Jun 2007 21:31:26 -0700, wilco.dijkstra wrote:
>> On 26 Jun, 23:10, CBFalconer <cbfalco...@yahoo.com> wrote:
>>> Paul Taylor wrote:
>>>> wilco.dijkstra wrote:
>>>>> and emitting frame pointers when few compilers do so today do not
>>>>> instill a professional image.
>>>> The embedded code I have written is mostly C code, and I very
>>>> rarely look at the assembler code. But that comment caught my
>>>> attention because I thought that almost all compilers, particularly
>>>> 32-bit ones, use stack frames and with frame pointers - or am I
>>>> interpreting that comment incorrectly?
>>> If the machine uses a stack, and the compiler keeps careful track
>>> of the state of that stack, it can generate SP relative addresses.
>>> However this normally requires other restrictions on the generated
>>> code.
>> Most compilers stopped using frame pointers a long time ago. They are
>> inefficient and don't actually provide any benefit. Rather than
>> changing SP repeatedly inside a function, SP is adjusted only on entry
>> and exit of the function, further improving efficiency. This also
>> makes it easier to track stack variables in debuggers (as offsets from
>> SP are fixed). The drawback is that stacksize can grow in some
>> circumstances. Functions containing alloca or C99 arrays could still
>> use a frame pointer.
>
> OK - you have lost me..... :-)
>
> My understanding is that a frame pointer only gets set up at entry of a
> function and is restored at exit? And variables are easy to track because
> with a frame pointer offsets to the variables are fixed?
Yes, that's exactly the point of the frame pointer. It gives you a fixed base so that local variables have constant offsets from the FP, while the SP may change during the function as things are pushed onto or popped off the stack. (Contrary to Wilco's wild generalisations, the SP *is* adjusted during function execution - if another function is called which requires parameters on the stack, then it is very likely that the SP will be changed, especially on processors with push and pop primitives.)

However, since the compiler knows (hopefully!) what code it has produced, at any given time the frame pointer is a fixed offset from the stack pointer. Thus you can save a little of the function prologue and epilogue, as well as freeing an extra register to play with, if you access the stack as offsets from the stack pointer rather than having an explicit frame pointer (using a "virtual frame pointer", if you like).

There are several situations where frame pointers are still useful, however. While compilers are generally now clever enough to keep track of the "virtual frame pointer", debuggers are not necessarily so - some find the frame pointer useful, especially if they don't have full debugging information about the code in question. On some processors, such as the AVR, there is little or no support for (SP + offset) addressing modes - a frame pointer in a pointer register solves that problem. And sometimes (as Wilco suggested) there is not quite such a neat relationship between the stack pointer and the frame pointer, such as after using alloca() or variable-length local arrays. Finally, a frame pointer can make the code shorter or faster for some types of code - exceptions, gotos, or other jumps across large parts of the code may be best implemented with a frame pointer, so that individual branches can manipulate the stack pointer (for function calls) while the exception code still knows where everything is.
Even with simple branches, a frame pointer may let the compiler leave arguments piled up on the stack without cleaning up after each function call, leaving the tidying to the epilogue (which uses the frame pointer to restore the stack). That might or might not be smaller and faster - it depends on the architecture and the code. In most practical cases, however, the best code is generated without using a frame pointer.

mvh.,

David
In news:46813e27$0$8383$8404b019@news.wineasy.se timestamped Tue, 26
Jun 2007 18:53:30 +0200, David Brown
<david@westcontrol.removethisbit.com> posted:
     "wilco.dijkstra@ntlworld.com wrote:
     > On 26 Jun, 14:23, "Michael N. Moran" <mnmo...@bellsouth.net> wrote:
     >> wilco.dijks...@ntlworld.com wrote:
     >>> On 25 Jun, 15:50, "Michael N. Moran" <mnmo...@bellsouth.net> wrote:
     [..]
     >>> I'd be surprised if the number of paid contributors is larger than the
     >>> unpaid ones, or are you counting employees of companies whose main
     >>> business is not open source?
     >> Why wouldn't I count those whose main business is not
     >> open source? Many have an interest in having their products
     >> supported by GCC, and so they invest.
     >
     > The companies that hire fulltime staff to work on GCC are often just
     > supporting their own products (eg. a backend for their proprietary
     > CPU) and don't improve competing targets or GCC as a whole. Few large
     > companies hire fulltime staff to improve the core of GCC, especially
     > if they already have their in-house compiler. If resources are
     > constrained, which is going to win?
     >
     
     Companies with a particular interest in the performance of gcc for a
     given backend will support improvements to that backend, and to gcc as a
     whole, as that's what benefits them.  You are correct that they have
     little interest in improving other backends, but front-end improvements
     help them too.
     
     >>> Big businesses have their reasons
     >>> for contributing, but most have their own commercial compilers already
     >>> - and that is where much of the effort goes.
     >> Or perhaps they have found that having their own compiler is
     >> unjustified when they could instead simply invest in the
     >> community and have a comparable or better product by drawing
     >> on a larger expertise.
     >
     > That is true for smaller companies who cannot afford to put a full
     > compiler team in place. I know GCC is very popular with startups.
     > However when you dig deeper many would create their own compiler if
     > they could afford it as they are not that happy with the code quality
     > they get. I do not believe that if you want comparable quality to
     > commercial compilers that GCC would ultimately be a cheaper option.
     >"

Less than five years ago, one of the main companies in the Symbian consortium sent out a recruitment advertisement for a short term contract which interested me. I applied, but by the time someone with technical knowledge spoke with me, he revealed that upon reconsideration it had proved so difficult to find a suitable person that the notion of a short term contract had been replaced with a longer term job. This did not suit me, as I would (and did) go to a planned better position which could not begin until a number of months later. So I never worked on the job of the advertisement, nor the real job which replaced it. However, I was told during the technical conversation that the job would have entailed replacing parts of the Symbian consortium's own in-house C++ compiler with parts of the GNU C++ compiler (at least the front end). The Symbian consortium's recruitment advertisements which I looked at tended to offer rates of pay of approximately a few hundred pounds sterling (approximately a few hundred dollars) a day or a week or a month (I do not remember which, but even at a few hundred pounds sterling a month it is not a very low rate of pay). I do not remember whether the rate of pay for this job was supposed to be comparable.


     "I'm sure Altera, Xilinx, and Atmel, amongst others, appreciate you
     referring to them as "startups" or implying they have gone for the
     cheapo option"

They have gone for what would naively seem to be a cheap option.

     "because they are unwilling or incapable of "digging deeper".
     
     Of course, they may perhaps have actively chosen to work on gcc ports on
     the basis of past successes,"

In fairness, apparently all of the compilers for Atmel AVRs other than GCC (with the possible exception of the Pascal compiler) are significantly better than GCC. Atmel actively favored one compiler vendor in order to get a good compiler for the AVR, eventually became supportive of GCC for the AVR (even when GCC was one of the worst compilers for that target), and went much further by actively porting GCC to the AVR32 itself before the first AVR32 was released. People in Atmel may realize that, whatever the price, if a cross compiler is not gratis then many people will naively prefer to spend the money on an Internet connection and a bigger chip instead of paying for the compiler.

     " expected future successes, value for their
     investment in time and money, customer pressure, and supported source
     code (such as a linux port to the architecture in question).  In
     particular, it is extremely unlikely that both Altera and Xilinx would
     have made such total commitments to their gcc ports (there are, as far
     as I know, no non-gcc compilers for their soft processors) if they
     thought that a non-gcc compiler (in-house or external) would be
     significantly better.  Their competitiveness runs too deep to miss out
     on such an opportunity - especially if, as you claim, it would be
     cheaper overall."

I do not believe this. Altera and Xilinx could vastly improve items
essential to their businesses (e.g. their synthesis backends) in order
to be competitive but they do not. Third parties could write their own
compilers for NiosII and MicroBlaze if they wanted to.


     "[..]
     
     [..] No one who knows
     what they are doing compiles with -O0 on any compiler, gcc or otherwise.

     [..]"

Symbian runs on ARMs. The Symbian person I mentioned above seemed to think that machine code is the lowest level to which anyone can go (in fairness, in that job it probably would not have been possible to go any lower). An out of date webpage, written after I spoke to him, is
WWW.Symbian.com/developer/techlib/v9.2docs/doc_source/faqsdk/faq_1026.html
which is clearly written by someone who did not know how to type (or who wrote for people who do not know how to type)
info gcc "Invoking GCC" "Optimize Options"
as
WWW.Symbian.com/developer/techlib/v9.2docs/doc_source/faqsdk/faq_1026.html
contains:
"[..]
Created: 04/05/2004 Modified: 10/17/2005
[..]

[..]

Question:
I'm porting some code from standard C++ to Symbian OS and I'm using the newer GCC 3.x, which has some really good optimisation options. Can I use this for Symbian OS ?

Answer:
[..]

The short answer is 'No' - you can't use any GCC version beyond 2.9.
[..]

Now, the reasons why Symbian chose not to use anything other than -o0 is because that GCC 2.9x wouldn't handle some ARM optimisations very well, in many cases. Moreover, -o0 doesn't mean it is less optimised than -o2 for example; the 1-2-3 switches denote different kinds of optimisation (one is for speed, the other is for space, etc.)

Maybe if you really want to optimise some functions, you could compile them to assembler source first. In particular for number-crunching function that you may have written, you should consider compiling the source to assembler first and optimize by hand the few critical paths to gain the improvements you require."

Regards,
Colin Paul Gloster
In news:V4SfCNGtmufGFALu@phaedsys.demon.co.uk timestamped Sun, 24 Jun
2007 23:01:17 +0100, Chris Hills <chris@phaedsys.org> posted:
     "[..]
     
     [..] much better compression [..]
     I have seen several programs that the Keil 2/4/8k limited compilers
     could get to run on a 51 that the unlimited SDCC could not.
     
     The problem is when they want to add "just one more feature"  without
     changing the whole design.  For example... smart cards  and mobile SIMS
     and many other things. Especially when by law you need to add something
     to an old system or to change some IO because they are new sensors.
     
     Ideally you would scrap the whole system and start again to ad a small
     change to an end of life product."

If you intend to have some spare memory then the smallest output from
the compilers is not necessarily the most important criterion.


     "[..]
     
     I do find it strange that people are arguing so strongly for using
     second rate tools in their profession."

Who has argued for using second rate tools in their profession?
Provide exact references to justify that claim.

     "What would you think of a doctor,
     dentist, aeronautical engineer who argued the same?"

Microcontrollers running software are used by such people. Should they
use instead things such as FPGAs because FPGAs are better? FPGAs are
used by such people. Should they use ASICs instead because FPGAs are
inferior? Electronics can suffer from electromagnetic
interference. Should such people not use electronics?

Chris Hills said in
news:7s7UXLAVp7YGFAG4@phaedsys.demon.co.uk
in the thread "Re: What's more important optimisations or debugging?" on 2007 June 4th:
"[..]

Note some safety critical systems do not permit optimisations."

Chris Hills said in
news:27nnSYEog9bGFAec@phaedsys.demon.co.uk
in the thread "Re: What's more important optimisations or debugging?" on 2007 June 13th:
"[..]

[..] disable all compiler optimisations."

Does Chris Hills berate people for disabling all optimizations?

Regards,
Colin Paul Gloster
Paul Taylor wrote:
> On Mon, 25 Jun 2007 22:28:05 -0700, wilco.dijkstra wrote:
>> Other things like -O0 generating ridiculously inefficient code
>
> I'm possibly showing my naivety here - I'm certainly not an expert on
> these matters, but isn't there a step in the compilation process just
> before optimisations (after tokenizing/parsing) that gets an internal
> representation of the source code (RTL with GCC?), and where that step
> in the process is essentially a "dumb" part of the process, with the
> really clever bit, the optimisations, occurring *after* this step?
There are optimisations done at all stages. The front-end can do some optimisations (like turning "x = y + 2*4" into "x = y + 8"). Then the code is turned into an intermediate representation, and some optimisations are done during this "middle-end" part (like turning "x = y + 2; z = y + 2;" into "x = y + 2; z = x;"). Then comes the back-end which does the code generation, and some optimisations are applied after that (like turning "jsr foo; ret" into "jmp foo").
> If so, then a compiler writer surely is going to need access to this
> unoptimised code at least for unit tests (or whatever gcc uses)? In which
> case -O0 is how you get at it. Unoptimised code is always going to be
> ridiculously inefficient, and you really wouldn't want to use it. Again
> I'm not an expert on the compilation process, but the above is my
> understanding as of now - please enlighten me :-)
There are a few reasons for using unoptimised code, that I can think of. One is that it is easy for source code debugging (not assembly level debugging - that is easier with a bit of optimisation). Another is, as you say, during testing and development of the tools themselves.
Colin Paul Gloster wrote:

Would you *please* learn to use a newsreader?  You have some interesting 
things to say, some of which warrant response, but have such an absurd 
quoting "style" that it is impossible to hold a proper thread of 
conversation with you.

> Question:
> I'm porting some code from standard C++ to Symbian OS and I'm using
> the newer GCC 3.x, which has some really good optimisation options.
> Can I use this for Symbian OS ?
>
> Answer: [..]
>
> The short answer is 'No' - you can't use any GCC version beyond 2.9.
> [..]
>
> Now, the reasons why Symbian chose not to use anything other than -o0
> is because that GCC 2.9x wouldn't handle some ARM optimisations very
Gcc 2.9x for the ARM was very poor, as pretty much everyone knows. Many proponents of closed source alternatives are so happy about this that they make all sorts of claims of generating code that is several times faster than gcc's code, when they really mean the early ARM gcc.
> well, in many cases. Moreover, -o0 doesn't mean it is less optimised
> than -o2 for example; the 1-2-3 switches denote different kinds of
> optimisation (one is for speed, the other is for space, etc.)
That's total and utter drivel - certainly regarding gcc, but also regarding any other compiler I have ever used.
> Maybe if you really want to optimise some functions, you could
> compile them to assembler source first. In particular for
> number-crunching function that you may have written, you should
> consider compiling the source to assembler first and optimize by hand
> the few critical paths to gain the improvements you require."
With modern compilers (gcc or otherwise), there is seldom good reason for hand-optimising your assembly unless you are taking advantage of specific features that your compiler is unaware of. It is often more useful to compile to assembly, study the assembly, and then modify your source code to get better results.
wilco.dijkstra@ntlworld.com wrote:
> ... snip ...
>
> Most compilers stopped using frame pointers a long time ago. They
> are inefficient and don't actually provide any benefit. Rather
> than changing SP repeatedly inside a function, SP is adjusted only
> on entry and exit of the function, further improving efficiency.
> This also makes it easier to track stack variables in debuggers
> (as offsets from SP are fixed). The drawback is that stacksize can
> grow in some circumstances. Functions containing alloca or C99
> arrays could still use a frame pointer.
You are confused as to the use of an 'SP'. This is a stack pointer, which is altered whenever a value is pushed or popped. It can't be constant, so the compiler has to keep track of it. You are probably thinking of a 'BP', or block pointer, which normally holds specific SP values such as the value on function entry.

-- 
cbfalconer at maineline dot net
David Brown wrote: ** to Colin Paul Gloster, top-posted **
> Would you *please* learn to use a newsreader? You have some
> interesting things to say, some of which warrant response, but
> have such an absurd quoting "style" that it is impossible to hold
> a proper thread of conversation with you.
I second that motion.

-- 
cbfalconer at maineline dot net
