EmbeddedRelated.com
Forums

Interrupt occurs in ISR

Started by agab...@... August 15, 2005
Sure Augusto, the NMI can overflow the stack! This is the nature of this
beast :) By having it maskable, TI has defeated the true nature and purpose
of it :( It can no longer be called an NMI :(
A true NMI handler is meant to be very fast, only one or two lines of code.
It is an EMERGENCY interrupt!
One could say that the difference between the master and the student is the
handling of a TRUE NMI!

Alex




----- Original Message ----- 
From: "augusto einsfeldt" <aee@aee@...>
To: <msp430@msp4...>
Sent: Tuesday, August 16, 2005 2:29 PM
Subject: RE: [msp430] Re: Interrupt occurs in ISR A


> Alex, regarding NMI there is one more thing: the NMI's handler must reset
> the NMI's sources (OFIE, NMIE, ACCVIE), because otherwise it may nest
> itself and overflow the stack, since they are level triggered and not
> edge triggered.
> -Augusto
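
To make that concrete, here is a minimal sketch of the kind of NMI handler
Augusto describes, written in C and assuming the msp430x1xx register and
bit names (IE1, IFG1, OFIE, NMIIE, ACCVIE) and the IAR/CCS interrupt-pragma
convention; the point is simply to disarm the level-sensitive sources on
entry and re-arm them only after their flags have been cleared:

#include <msp430.h>

#pragma vector = NMI_VECTOR
__interrupt void nmi_isr(void)
{
    IE1 &= ~(OFIE | NMIIE | ACCVIE);  /* disarm the level-sensitive sources   */

    if (IFG1 & OFIFG) {               /* oscillator fault                     */
        IFG1 &= ~OFIFG;               /* keep it short: clear, flag, get out  */
    }
    if (IFG1 & NMIIFG) {              /* RST/NMI pin event                    */
        IFG1 &= ~NMIIFG;
    }
    /* a real handler would also check and clear ACCVIFG in FCTL3 here */

    IE1 |= OFIE | NMIIE | ACCVIE;     /* re-arm only once the flags are clear */
}

Whether the enables are restored at the end of the handler or deferred to
main-line code is a judgment call; the essential part is never re-enabling
a source whose flag is still set.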



> Hi Clifford. Actually there is (or was) at least one compiler out there

That's why I used the qualifier "good" compiler. Proper compilers that use
a BURG (bottom-up rewriting grammar - an LR parser that evaluates possible
translations) code generator can always generate locally optimal code. The
differences between such compilers lie in how good their register
allocation strategies are, and the resultant degree of global optimisation.
The term "global" here usually means function-global, though that can
extend to program-global.

In the non-embedded world, there are compilers that compile your code with
instrumentation, allow you to run a standard workload through it, then
recompile with knowledge of how many times each loop was executed and each
branch taken - across the entire program. Such compilers are unbeatable by
the best human programmers. I've never heard of one in the embedded world.
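
As an aside, the profile-guided flow described above looks roughly like the
following with a present-day gcc; the -fprofile-generate / -fprofile-use
flags are gcc's, shown purely as an illustration, and nothing comparable
existed for small MCUs at the time:

/* Sketch of a profile-guided build:
 *
 *   gcc -O2 -fprofile-generate count.c -o count   (instrumented build)
 *   ./count < typical_input.txt                   (run a representative workload)
 *   gcc -O2 -fprofile-use count.c -o count        (recompile using the counts)
 *
 * The second compile knows how often each branch below was actually taken
 * and can lay out and optimise the code accordingly.
 */
#include <stdio.h>

int main(void)
{
    long newlines = 0, others = 0;
    int c;

    while ((c = getchar()) != EOF) {
        if (c == '\n')        /* the profile tells the compiler which side */
            newlines++;       /* of this branch dominates for real input   */
        else
            others++;
    }
    printf("%ld newlines, %ld other characters\n", newlines, others);
    return 0;
}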

> >emotional, stylised and aesthetic.
> You mean C programmers don't do the same?

Of course, but a good global optimiser will produce excellent code in
almost any case. And that's the point - you're free to express yourself in
code (i.e., write your program for *humans*), but you get a result that's
good for the *machine* anyway. There's a tradeoff, though not a big one;
nevertheless, how you lean depends on whether you have a greater need to
pander to people or to the machine. When you're working in a team of ten
or twenty, on 2 million lines of code (as I am), the people issues matter
a *lot*.

> The code posted here to the
> group says differently. It ranges from plain/open to almost qualifying
> for the obfuscated C competition.

Yes. Embedded programmers' frequent failure to learn to program properly
is the reason why my mother-in-law *still* can't find a VCR she can
program, and why my microwave oven has *firmware* bugs. Same whether they
write in C or assembly. I'm not saying that run-of-the-mill programmers of
non-embedded systems are any better, but given that they build systems
that are 3-6 *orders of magnitude* more complex, the tools they use have
to be given some credit!

> system they are designing. As you state below. I would therefore
> disagree with the ONLY EVER comment.

Well, as they say, you could make a compiler foolproof, if only fools
weren't so d^%n ingenious :-). But generally, when a compiler is treated
vaguely sensibly, I stand by my claim.

Clifford Heath.

Um, maybe neither here nor there, but BURS and BURG are not based on LR
parsing. You may be thinking of Susan Graham's work using LR parsing for
code generation back in... the early 1980s. There are a few commercial
compilers that use that technique.


// richard (This email is for mailing lists. To reach me directly, please 
use richard at imagecraft.com) 


> Um, maybe neither here nor there, but BURS and BURG are not based on LR
> parsing.

Look at lcc - doesn't that use a BURG? - IIRC the tool is called lburg
or iburg. The interesting part of using an LR parser this way is that
it must consider and cost many "ambiguities", which are alternate
forms of code that match the AST. So it's not like yacc or whatever,
which can get away with one or two levels of look-ahead to resolve
conflicts. It turns into an implementation of the heuristic search algo,
where you look at all reachable states considered so far and explore
the options from the state that looks closest.

Even gcc effectively uses a BURG, but I believe that the parser is
mostly hand-written, rather than generated from the grammar. The grammar
here is the set of production rules that say "when you see a tree that
looks like this, you can emit *this* code at some cost and change the
tree like this". It's interesting that a parser can be used like this,
which seems to be "in reverse".

Clifford.

Um, don't think so. I am somewhat of a small authority on LCC since I have
been using it as the basis of our compilers for the last 10+ years. I even
did a modified LBURG for the first HP Itanium code generator. The bottom-up
tree rewriting system doesn't deal with look-ahead to resolve conflicts
etc., it just does a bottom-up tree walk and finds the minimum tree
covering of the nodes.

GCC uses RTL rewriting, based more on peephole-style rewriting rules than
on minimum covering or parsing per se.


// richard (This email is for mailing lists. To reach me directly, please 
use richard at imagecraft.com) 


Ok, interesting. I found all this stuff about 6 years ago while looking
for solutions to another problem. Got interested enough to download a
bunch of stuff but never used it :-). I was thinking of creating a
generic protocol (and protocol-transformation) engine based on modified
grammars.

So how do your compilers achieve minimum cost, if you just do simple
bottom-up matching? Or is that what limits the power of your optimiser?
I guess if you have enough transformation rules you can get pretty close
with most architectures in any case.

Clifford Heath.

BURS/BURG answers exactly the question of how you find the minimum tree
covering. The most interesting cases are of course architectures with
interesting addressing modes. On a pure load/store RISC model, there aren't
a whole lot of possible tree coverings, so the minimum is usually the
obvious one or just about the only one. This site answers it well enough:
http://www.program-transformation.org/Transform/BURG
lburg generates the dynamic programming needed for minimal cost calculation
at lburg compile time, so in theory it's slower to generate the lburg code
but faster at compiler runtime. In practice, it's not a problem at all...
lburg is also limited to constant costs, but again that has not been an
issue at all...
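
To make the minimum-covering idea concrete, here is a toy bottom-up
labeller in C. It is not lburg - the node kinds, rules and costs are
invented for the example - but it shows the dynamic-programming flavour:
each node records the cheapest rule that gets its value into a register,
and a base+offset pattern can swallow three nodes for the price of one
instruction:

#include <stdio.h>

enum kind { CONST, ADD, MEM };

struct node {
    enum kind    kind;
    struct node *left, *right;   /* children (NULL when unused)            */
    int          cost;           /* min cost to get this value into a reg  */
    const char  *rule;           /* rule that achieves that cost           */
};

static void label(struct node *n)
{
    if (!n) return;
    label(n->left);
    label(n->right);

    switch (n->kind) {
    case CONST:                              /* reg <- CONST               */
        n->cost = 1; n->rule = "load-imm";
        break;
    case ADD:                                /* reg <- ADD(reg, reg)       */
        n->cost = n->left->cost + n->right->cost + 1;
        n->rule = "add-reg";
        if (n->right->kind == CONST &&       /* reg <- ADD(reg, CONST)     */
            n->left->cost + 1 < n->cost) {
            n->cost = n->left->cost + 1;
            n->rule = "add-imm";
        }
        break;
    case MEM:                                /* reg <- MEM(reg)            */
        n->cost = n->left->cost + 1;
        n->rule = "load-indirect";
        if (n->left->kind == ADD &&          /* reg <- MEM(ADD(reg,CONST)) */
            n->left->right->kind == CONST &&
            n->left->left->cost + 1 < n->cost) {
            n->cost = n->left->left->cost + 1;
            n->rule = "load-base+offset";
        }
        break;
    }
}

int main(void)
{
    /* Build MEM(ADD(MEM(CONST 100), CONST 4)), i.e. *(*(100) + 4) */
    struct node c100 = { CONST, NULL,  NULL };
    struct node m1   = { MEM,   &c100, NULL };
    struct node c4   = { CONST, NULL,  NULL };
    struct node add  = { ADD,   &m1,   &c4  };
    struct node root = { MEM,   &add,  NULL };

    label(&root);
    printf("cheapest covering of the root: %s, total cost %d\n",
           root.rule, root.cost);   /* load-base+offset, cost 3 */
    return 0;
}

On a target with only register-indirect loads you would delete the
base+offset rule and the labeller falls back to the obvious covering,
which is the load/store RISC point made above.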

As for the optimizer, that's usually done separately. You can have
post-code-generation peephole optimizations, or (usually)
pre-code-generation global optimization. We have peephole optimizations
already and we are finishing up our global (function-level) optimizer.
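
And for contrast, a toy post-code-generation peephole pass in C (the
instruction representation is invented for the example): it slides a
two-instruction window over the output and drops a load that merely
re-reads, into the same register, the location the previous instruction
just stored:

#include <stdio.h>
#include <string.h>

enum op { STORE, LOAD, OTHER };

struct insn {
    enum op op;
    int     reg;        /* register operand        */
    char    addr[16];   /* symbolic memory operand */
};

/* Drop insns[i+1] when it just reloads what insns[i] stored; return new count. */
static int peephole(struct insn *insns, int n)
{
    int out = 0;
    for (int i = 0; i < n; i++) {
        insns[out++] = insns[i];
        if (i + 1 < n &&
            insns[i].op == STORE && insns[i + 1].op == LOAD &&
            insns[i].reg == insns[i + 1].reg &&
            strcmp(insns[i].addr, insns[i + 1].addr) == 0)
            i++;                            /* skip the redundant load */
    }
    return out;
}

int main(void)
{
    struct insn code[] = {
        { STORE, 15, "&count" },            /* mov  r15, &count              */
        { LOAD,  15, "&count" },            /* mov  &count, r15  (redundant) */
        { OTHER,  0, ""       },            /* something unrelated           */
    };
    int n = peephole(code, 3);
    printf("%d instructions remain after the peephole pass\n", n);   /* 2 */
    return 0;
}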


// richard (This email is for mailing lists. To reach me directly, please 
use richard at imagecraft.com) 


Dear Clifford,
This group server is still refusing to work properly, or I'd have replied
to you earlier. I had to go to the Groups.Yahoo web site to be able to
get all the posts, since I received less than half of them by email.

So, answering your very first question: No, I don't think C compiler
writers are idiots, and I would not use that kind of impolite language.
From your other posts you seem to be an intelligent man, but now you also
seem to be very shallow.
I don't know of any C compiler made by God, and even one from Vulcan
could only *optimize* register usage. Full optimization can be done
only in machine language, like the kind of control you have when
designing the CPU's hardware. This is the difference between
implementing hardware in an FPGA or in an ASIC. Both work well, but an
ASIC uses less silicon and a fraction of the power. But you must be
very good to implement a good ASIC design, while the FPGA's silicon is
already done and you can go to market faster.
I don't think you can implement Al's oyster grader in plain C and in
the same MCU.
Anyway, C should be a standard language, and compilers must follow
safety rules that avoid using different sets of registers in different
functions. Also, C is stack-intensive, and for standards reasons it may
not use registers to pass and return function parameters. The ones that
do will, of course, optimize timing and memory usage.
Using memory addressing modes to handle data without using any
register is a good way to avoid changing registers during an ISR, and
would save the memory and time of pushing/popping context in short ISR code.
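
As an illustration of that point, here is a minimal C ISR of that shape,
assuming the IAR/CCS vector pragma and the TIMERA0_VECTOR name from the
x1xx headers; a decent MSP430 compiler can turn the increment into a
single instruction operating directly on the memory location, and saves
only whatever registers the handler actually clobbers, so there is no
wholesale push/pop of context:

#include <msp430.h>

volatile unsigned int tick_count;    /* updated in place, in memory */

#pragma vector = TIMERA0_VECTOR
__interrupt void timer_a0_isr(void)
{
    tick_count++;   /* can compile to one memory-to-memory increment; no registers to save */
}
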
Since I am not a C compiler producer I don't know what their approach
is. I guess some of them just take the easier way.
Maybe I was arrogant in my comment regarding a simplistic view of what
a plain compiler would do, wasting time and registers. I do apologize
for such an abrupt comment.
If you are a C compiler writer you could answer that with facts 
instead of just saying that you are above any other person who writes 
in assembler.
Be polite and you may last longer.
-Augusto



--- In msp430@msp4..., "Clifford Heath" <cjh@m...> wrote:
> Augusto,
>
> You seem to think C compiler writers must be idiots...? The compiler
> obviously uses registers, and when an interrupt occurs, it will save
> only the registers it uses. How could it work any other way? The
> compilers know the cost of using or not using each register, and the
> cost of saving it. They make a decision based on *calculated facts*
> from the architecture's instruction cycle timings, unlike the decisions
> made by assembly code developers, which *may* be optimal, but are often
> emotional, stylised and aesthetic. Assembly code is only ever better
> than a good compiler because the developer has more global context, and
> is able to detect and work around the consequences of a wrong
> assumption.
>
> So please refrain from uninformed speculation about what C might or
> might not do. Try to write some C, enable full optimization, and look
> at the generated code, then you might have grounds to complain.
> Meantime, think on my motto: it's better to keep your mouth shut and be
> thought an idiot than open it and remove all doubt :-).
>
> Clifford Heath.




Alex, indeed the true NMI's nature was lost here.



> So, answering your very first question: No, I don't think C compiler
> writers are idiots, and I would not use that kind of impolite language.

I don't believe I was impolite to you. Your unsupported statements
required some response. You said:

: In C I have no idea what compilers do with registers but, I guess,
: all context is saved (what a waste of time) and then you would have
: no problems there, too.

So you think that C compiler writers are too stupid to spot a "waste of
time" and avoid it? ... and...

: Thinking again, I wonder if C uses registers at all. Its nature is
: for memory/stack handling and there may be a waste of not used
: registers.

You've already said you know nothing about it, so why offer an opinion?
I mean, to wonder is a wonderful thing, but in this case, you can just
*ask*.
There's no need to wonder. There are people here who can answer.

> From your other posts you seem to be an intelligent man, but now you
> also seem to be very shallow.

Well, it was your apparent shallowness which triggered my response.
Specifically when you say things like:

> Full optimization can be done only in machine language...
> This is the difference between implementing hardware in an FPGA or in
> an ASIC.
> I don't think you can implement Al's oyster grader in plain C and in
> the same MCU.
> Also, C [...] for standards reasons may not use registers to pass and
> return function parameters.

You're entitled to hold your opinions. But when you try to defend them
publicly by decrying a language of which you apparently have no
experience, you seem shallow and should expect to be challenged.

For what it's worth, I don't believe that any of your 4 statements (quoted
above) is correct. And I can relate actual experiences and examples that
support my opinion. Can you?

> I guess some of them just go to the easier way.

Certainly true, and a good reason to be careful when choosing tools.

> instead of just saying that you are above any other person who writes
> in assembler.

If you read, you'll see that I made *no claim* about myself at all.
I made claims about C compilers with which I am intimately familiar.
I studied and ported the very first C compiler (by Ritchie) in 1979
and have used C ever since.

> Be polite and you may last longer.

Support your speculations with either facts or experiences and you may
also.

Clifford.