
Instrumenting ISR's

Started by D Yuniskis November 11, 2010
Hi,

Subject line gives the gist of the issue -- though
the reality is a bit more "involved" (of course!  :> )

I'm trying to figure out how to instrument (certain)
"ISR's" to more accurately track resource usage in
an application.  "Time" is the easiest to envision
but the concept applies to other resources as well.
[I've been struggling to come up with application
neutral examples that folks can wrap their heads around]

For "long running" applications (e.g., many found in
embedded systems instead of more transient load-and-run
desktop applications), some tasks/threads/processes
(sidestepping the formal definition, here) have
one-to-one relationships with other tasks (et al.)
sometimes to the detriment of those other tasks.

E.g., a task may set up an I/O action, blocking
on its completion.  That action may cause an ISR to
be triggered some (fixed) time later (e.g., the I/O
action primes the Tx register in a UART which, one
character time later, signals a Tx IRQ).  The "cost"
of that ISR is borne by whichever "task" happens to
be running one character time AFTER the instigating
task blocked.

[in an application that can often be synchronous in
nature -- do this, do that, lather, rinse, repeat -- this
can result in task A's operations *always* penalizing
task B (because of the scheduling priorities, etc.)]

The same sort of "problem" exists with many network
stack implementations -- task A initiates some traffic
and task N(etwork) "suddenly" becomes ready to *process*
that traffic... at the "expense" of task B (by either
deferring task B's execution or by scheduling network
I/O that will result in ISR's stealing from task B).

I can handle the latter case (my RTOS "charges" task A
with task N's related costs) but the ISR issue slips
through the cracks -- it's too small to easily track
PER INSTANCE costs (though the overall time spent in the
ISR can be significant).

The naive workaround is just to "fudge" quanta for the
affected tasks and "hope for the best".  But, this isn't
an *engineered* solution as much as a kludge.  Or, if you
have a surplus of resources, you can just choose to
"ignore the problem" (i.e., "derate appropriately").

The best solution I've been able to come up with is to
wrap each ISR in a preamble-postamble that takes snapshots
of a high-speed timer at the start and end of each activation
(you have to reset the timer to reflect the impact of
nested ISRs, etc.).  Of course, this doesn't completely
solve the problem as ISRs may have causal relationships...
:-/
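The preamble/postamble wrapper described above might look like the
sketch below.  `read_hires_timer()`, the stats struct, and the depth
bookkeeping are all hypothetical names; rather than resetting the
timer, this version has each nested handler subtract its elapsed time
from the handler it interrupted, which amounts to the same correction:

```c
/* Sketch of per-ISR time accounting with a free-running counter.
   Replace read_hires_timer() with your platform's high-speed timer. */
#include <stdint.h>

typedef struct {
    uint32_t total_ticks;   /* accumulated time spent in this ISR */
    uint32_t activations;   /* number of times the ISR ran        */
} isr_stats_t;

static volatile uint32_t fake_timer;    /* stand-in for a hw counter */
static uint32_t read_hires_timer(void) { return fake_timer; }

/* Depth counter lets nested ISRs subtract their time from the
   interrupted handler, so each handler is charged only its own work. */
static uint32_t nested_ticks[8];
static int      isr_depth;

static uint32_t isr_enter(void)
{
    nested_ticks[++isr_depth] = 0;      /* no nested time seen yet */
    return read_hires_timer();          /* snapshot at activation  */
}

static void isr_exit(isr_stats_t *s, uint32_t t_start)
{
    uint32_t elapsed = read_hires_timer() - t_start;
    uint32_t own     = elapsed - nested_ticks[isr_depth];
    s->total_ticks  += own;
    s->activations++;
    /* Charge our whole elapsed time against whoever we interrupted. */
    nested_ticks[--isr_depth] += elapsed;
}
```

A handler then calls `isr_enter()` first thing, does its work, and
hands the returned snapshot to `isr_exit()` on the way out.  This still
doesn't solve the causal-relationship problem, but it does keep a
preempting handler's time out of the preempted handler's tally.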

Are there any other schemes I can employ to adjust for
these issues -- preferably at design time (instead of
run time)?

--don
On Nov 11, 11:24 am, D Yuniskis <not.going.to...@seen.com> wrote:
> Hi,
>
> Subject line gives the gist of the issue -- though
> the reality is a bit more "involved" (of course!  :> )
>
> I'm trying to figure out how to instrument (certain)
> "ISR's" to more accurately track resource usage in
> an application.  "Time" is the easiest to envision
> but the concept applies to other resources as well.
> [I've been struggling to come up with application
> neutral examples that folks can wrap their heads around]
For time at least I still prefer a logic analyzer or oscilloscope. That allows easy measurement of min, max, latency and other parameters with minimal overhead. And with multiple lines you can easily visualize the interaction between tasks and ISRs. I really like an MSO for this; it makes it easy to see the effect of an ISR on external events as well.
> Are there any other schemes I can employ to adjust for
> these issues -- preferably at design time (instead of
> run time)?
I usually just regard ISRs as a negative CPU. If you can determine a maximum load from the ISRs, just subtract it from the CPU and use the remaining CPU for your scheduling. I.e., if your clock timer interrupt uses 1% of your CPU, you effectively have a CPU 99% as fast as you would w/o a clock timer interrupt. This, of course, assumes that the interrupts are fast wrt your tasks.

Robert
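Robert's rule of thumb can be applied at design time by summing
worst-case rate times worst-case execution time over every interrupt
source.  A minimal sketch (the interrupt sources and numbers in the
usage example below are made-up illustrations, not measurements):

```c
/* Design-time "negative CPU" estimate: the fraction of the CPU
   consumed by ISRs in the worst case is the sum of each source's
   worst-case activation rate times its worst-case execution time. */
#include <stddef.h>

struct isr_load {
    const char *name;
    double rate_hz;     /* worst-case activation rate (Hz)        */
    double wcet_s;      /* worst-case execution time (seconds)    */
};

/* Returns the worst-case CPU fraction stolen by the given ISRs;
   schedule your tasks against (1.0 - this) of the nominal CPU. */
double total_isr_load(const struct isr_load *v, size_t n)
{
    double load = 0.0;
    for (size_t i = 0; i < n; i++)
        load += v[i].rate_hz * v[i].wcet_s;
    return load;
}
```

For example, a 1 kHz tick at 10 us per activation plus a 115200-baud
UART Tx interrupt (one per character, ~11520/s) at 4 us give roughly
0.01 + 0.046, so about 5.6% ISR load and a ~94.4% schedulable budget.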
"D Yuniskis" <not.going.to.be@seen.com> wrote in message 
news:ibh4sv$fg4$1@speranza.aioe.org...
> [original post snipped]
>
> Are there any other schemes I can employ to adjust for
> these issues -- preferably at design time (instead of
> run time)?
If you have money to spend on the solution try www.rapitasystems.com - not cheap but quite powerful.

Michael Kellett
