
Stack analysis tools that really work?

Started by pozz July 29, 2021
On 8/7/2021 4:09 AM, Niklas Holsti wrote:
> On 2021-08-07 2:04, Don Y wrote:
>> On 8/6/2021 12:58 PM, Niklas Holsti wrote:
>
>>>> OK. I'll have to build a more current version of GNAT.
>>>
>>> For playing around, I would just use the GNAT Community Edition. Or the
>>> FSF GNAT that comes with MinGW32.
>>
>> Ugh! No, I'll just build a fresh copy. I do all my development
>> under NetBSD so have everything I want/need ('cept the Ada tools)
>> there, already.
>
> You are braver than I am. Note that the front-end of GNAT is implemented
> in Ada, so it has to be bootstrapped with an Ada compiler. And the current
> version of GNAT requires quite an up-to-date Ada compiler for the
> bootstrap. In practice, I think people have found that only GNAT can
> build GNAT.
>
> Many Linux distributions come with GNAT pre-built. Debian-derived
> distributions have good support, I hear.
I don't run Linux. I already support Solaris/SPARC, Windows and NetBSD
hosts. Adding yet another just makes things harder.

I build all of the "programs" ("packages") on my NetBSD boxes. In the
past, the userland was built by folks of dubious abilities. They'd just
try to "get it to compile" (not even "compile without warnings"). So,
there would often be bugs and failures in the ported code.

[I recall a prebuilt gnuplot that didn't even successfully pass the
COMPREHENSIVE test suite -- because the person porting it didn't know
what certain functions should look like, graphically!]

Building myself lets me see the warnings/errors thrown. And, also lets
me inspect the sources as, often, the configuration options are not
completely documented (but apparent *if* you look at the sources).

As building package Q may rely on packages B and L, which, in turn, rely
on F, G and Z, this is often a bit involved. But, if those dependencies
can be used by other packages, that's less work, later.

[I keep track of what I build, the order I build them, any patches I
make to the sources, etc.]
>>> The analysis works on the control-flow graph (per subprogram). The WCET
>>> for each basic block is the sum of the WCETs of the instructions in
>>> that block. The WCET for an execution path through the graph (from
>>> entry to return) is the sum of the WCETs of the basic blocks in the
>>> path, plus the WCETs assigned to each edge (control transfer) between
>>> basic blocks in the path.
>>
>> So, the costs of conditional control transfers are handled separately (?)
>
> I'm not sure I understand your question. Each edge in the control-flow
> graph, that is, each transition from one instruction or one basic block
> to a possible next instruction or block, is assigned a WCET value when
> the instructions are decoded and entered in the control-flow graph.
> Usually this WCET is zero for a fall-through edge from a non-branch
> instruction to the next, and is non-zero only for edges from branch
> instructions, whether conditional or unconditional. For a conditional
> branch, the "not taken" edge usually has a zero WCET and the "taken"
> edge has some non-zero WCET, as specified in the processor documentation
> (remember we are assuming a simple processor).
OK, that's what I was addressing; namely, that the two outcomes of a (conditional) branch can have different costs -- yet those appeared to be "outside" the blocks that you were describing.
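[Editorially, to make that block/edge cost model concrete: a minimal C
sketch of the per-path WCET arithmetic described above, assuming an
acyclic control-flow graph (loops already bounded) and invented block and
edge costs. An illustration of the sums involved, not Bound-T's
implementation.]

    #include <stdio.h>

    #define NBLOCKS 4

    /* WCET of each basic block = sum of its instructions' WCETs (given
       here directly; a real tool computes them from decoded opcodes). */
    static const int block_wcet[NBLOCKS] = { 3, 2, 5, 1 };

    /* edge_wcet[i][j]: cost of the transfer i -> j, or -1 if no edge.
       Note the "taken" edge 0 -> 2 costs more than the fall-through
       edge 0 -> 1, as in the conditional-branch case above. */
    static const int edge_wcet[NBLOCKS][NBLOCKS] = {
        { -1,  0,  2, -1 },
        { -1, -1, -1,  0 },
        { -1, -1, -1,  1 },
        { -1, -1, -1, -1 },
    };

    int main(void)
    {
        /* Worst path cost from entry (block 0) to each block; blocks
           are numbered in topological order, so one forward pass works. */
        int worst[NBLOCKS] = { block_wcet[0], -1, -1, -1 };

        for (int j = 1; j < NBLOCKS; j++)
            for (int i = 0; i < j; i++)
                if (edge_wcet[i][j] >= 0 && worst[i] >= 0) {
                    int cand = worst[i] + edge_wcet[i][j] + block_wcet[j];
                    if (cand > worst[j]) worst[j] = cand;
                }

        printf("WCET bound, entry to exit: %d cycles\n", worst[NBLOCKS - 1]);
        return 0;
    }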
>>>> I assume this is semi table-driven (?)
>>>
>>> I don't understand that question. Please clarify.
>>
>> A large number of opcodes, each with particular costs.
>> Do you build a jungle of conditionals to subdivide the
>> "opcode space" into groups of similar-cost operations?
>> Or, do you just have a table of:
>>
>>    {opcode, mask, type[1], cost}
>>
>> [1] used to apply some further heuristics to your
>> refinement of "cost"
>
> I would say that the approach to translating instructions into their
> analysis models (elements in the control-flow graph) is usually not
> table-driven, but emulates the field-by-field decoding and case analysis
> that would be done by a disassembler or emulator (and Bound-T can
> produce a disassembly of the code).
Oh! I would have opted for a (possibly big) table. But, that's just
personal preference (I like table-driven algorithms).
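[For what it's worth, that {opcode, mask, type, cost} table could look
like this in C -- a hypothetical sketch with invented encodings for an
imaginary 16-bit instruction set, not how Bound-T (or any real tool)
decodes instructions.]

    #include <stdint.h>
    #include <stdio.h>

    enum op_type { OP_ALU, OP_LOAD, OP_STORE, OP_BRANCH };

    struct op_entry {
        uint16_t pattern;   /* opcode bits to match after masking    */
        uint16_t mask;      /* which bits of the word are the opcode */
        enum op_type type;  /* used for further cost heuristics      */
        int      cost;      /* base WCET in cycles                   */
    };

    /* Made-up encodings; a real table would have many more rows. */
    static const struct op_entry op_table[] = {
        { 0x1000, 0xF000, OP_ALU,    1 },
        { 0x2000, 0xF000, OP_LOAD,   3 },
        { 0x3000, 0xF000, OP_STORE,  3 },
        { 0x4000, 0xF000, OP_BRANCH, 2 },
    };

    /* Linear scan; first matching entry wins. Returns -1 for unknown
       opcodes, which a real tool must treat as an analysis error. */
    static int op_cost(uint16_t insn)
    {
        for (size_t i = 0; i < sizeof op_table / sizeof op_table[0]; i++)
            if ((insn & op_table[i].mask) == op_table[i].pattern)
                return op_table[i].cost;
        return -1;
    }

    int main(void)
    {
        printf("cost of 0x2042: %d cycles\n", op_cost(0x2042)); /* load: 3 */
        return 0;
    }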
> The internal model includes not only the cost of the instruction, but
> also its effect on the computation. For example, an instruction like
> "increment the accumulator", INC A, would be decoded into a model that
> says
>
>     WCET: 1 cycle.
>     Control flow: to next instruction, unconditional.
>     Effect: A := A + 1.
>
> and also the effect, if any, on condition flags.
Why do you care about the "effect"? Are you trying to determine which branches will be taken? How many iterations through loops? etc. Doesn't this require a fairly complete model of the processor?
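[For readers following along: one way to picture the decoded model Niklas
describes is as a record per instruction. A loose C sketch with invented
field names -- Bound-T itself is written in Ada, and its real
representation is surely richer.]

    #include <stdint.h>

    enum flow_kind { FLOW_FALL_THROUGH, FLOW_JUMP, FLOW_COND_BRANCH };

    struct effect {             /* e.g. "A := A + 1"              */
        int target_reg;         /* register being assigned        */
        int source_reg;         /* register read (or -1 if none)  */
        int addend;             /* constant added                 */
    };

    struct insn_model {
        int            wcet;        /* cycles, e.g. 1 for INC A          */
        enum flow_kind flow;        /* where control can go next         */
        uint32_t       flow_target; /* branch target address, if any     */
        struct effect  eff;         /* arithmetic effect on the state    */
        uint8_t        flags_set;   /* bitmask of condition flags changed */
    };

    /* INC A as in the example: 1 cycle, falls through, A := A + 1. */
    static const struct insn_model inc_a = {
        .wcet = 1, .flow = FLOW_FALL_THROUGH,
        .eff = { .target_reg = 0 /* A */, .source_reg = 0, .addend = 1 },
        .flags_set = 0x03,  /* say, Z and N */
    };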
>> I don't rely on "live" tests for my code. Rather, I use tools to
>> generate good test coverage and then verify the results are what I
>> expect. I find this easier and more easily extensible (I can test
>> ARM code by running an x86 port of that code)
>
> At my former job, implementing SW for ESA satellite applications, we
> almost always tested the code first on normal PCs, in a simulated I/O
> environment, and then on the embedded target. Easier for Ada than for C.
Historically, it's been hard for me to have access to real hardware. So,
my development style pushes all the I/Os out to the fringes of the
algorithms; so there is little more than a "copy" that occurs between the
algorithm and the I/O. So, I can debug most of my code (95+%) just by
simulating input values and capturing output values. It's no worse than
testing a math library. Then, to move to real hardware, just ensure that
the transfers in and out are atomic/intact/correct.

E.g., for my speech synthesizers, I just had them generate DAC values
that I captured using some test scaffolding. Then, massaged the captured
data into a form that a media player could process and "played" them on
a PC.

For my gesture recognizer, I captured/simulated input from the various
types of input devices in a form that the algorithms expected. Then, just
passed the data to the algorithms and noted their "conclusions".
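[That style is easy to mimic: keep the algorithm a pure buffer-in /
buffer-out function and confine hardware access to a thin shim. A
hypothetical C sketch -- the function names, the TARGET_BUILD macro, and
the trivial "algorithm" are all invented for illustration.]

    #include <stddef.h>
    #include <stdint.h>

    /* Pure core: no hardware access, fully testable on a PC by feeding
       it captured or simulated samples. */
    void synth_block(const int16_t *in, int16_t *out, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            out[i] = in[i] / 2;      /* stand-in for the real algorithm */
    }

    /* Thin target-only shim: little more than a "copy" between the core
       and the hardware. The host build replaces this file with one that
       reads/writes files instead. (n <= 64 assumed in this sketch.) */
    #ifdef TARGET_BUILD
    extern int16_t dac_read_input(void);    /* hypothetical drivers */
    extern void    dac_write_output(int16_t sample);

    void synth_run_once(size_t n)
    {
        int16_t in[64], out[64];
        for (size_t i = 0; i < n; i++) in[i] = dac_read_input();
        synth_block(in, out, n);
        for (size_t i = 0; i < n; i++) dac_write_output(out[i]);
    }
    #endif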
>>> For processors where cache misses are much slower than cache hits
>>> (which is fast coming to mean almost all processors) IMO an I-cache
>>> analysis is necessary for static WCET analysis to be useful.
>>
>> I look at it the other way around. Assume there is NO cache. You
>> know that your code WILL run in less time than this case. Regardless
>> of the frequency or presence of competing events.
>>
>> Processors are fast enough, now, that you can usually afford to "step
>> up" in capability for comparably little cost.
>
> Sometimes, but not always. The ESA applications we made were usually
> running (worst case) at about 70% processor load, and faster processors
> for space applications were very expensive, not only in euros but also
> in power, volume and mass, which are scarce resources in space.
Yes, of course. I'm speaking of the types of applications that I have dealt with, not "universally". My "cost constrained" days were back in the days of the "small/cheap CPU" where you had to predict performance accurately -- but, doing so was relatively easy (because processors had very predictable instruction execution times).
On 2021-08-07 18:09, Don Y wrote:
> On 8/7/2021 4:09 AM, Niklas Holsti wrote:
(On how the Bound-T tool models instructions for stack and WCET analysis:)
>> The internal model includes not only the cost of the instruction, but
>> also its effect on the computation. For example, an instruction like
>> "increment the accumulator", INC A, would be decoded into a model that
>> says
>>
>>     WCET: 1 cycle.
>>     Control flow: to next instruction, unconditional.
>>     Effect: A := A + 1.
>>
>> and also the effect, if any, on condition flags.
>
> Why do you care about the "effect"? Are you trying to determine
> which branches will be taken? How many iterations through loops?
> etc.
The effect is necessary for any analysis that depends on run-time values. Loop bounds are the prime example for WCET analysis, but stack-usage analysis also needs it. For example, the effect of a push instruction is to increment the "local stack height" model variable by some number that reflects the size of the pushed data (which size can, in principle, be non-static and computed at run time). The stack-usage analysis finds the maximum possible value of the local stack height in each subprogram, both globally and at each call site within the subprogram, then adds up the local stack heights along each possible call path to find the total usage for that call path, and then finds and reports the path with the largest total usage.
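[As a rough illustration of that roll-up -- not Bound-T's algorithm --
here is a C sketch with hand-supplied per-subprogram numbers. A real tool
derives them from the instruction effects, and the recursion assumes an
acyclic call graph, as static stack analysis requires. All names and
figures are invented.]

    #include <stdio.h>

    struct callee;   /* forward declaration */

    struct call_site {
        int height_at_call;        /* local stack height at the call  */
        const struct callee *to;
    };

    struct callee {
        const char *name;
        int local_max;             /* worst-case local stack height   */
        int n_calls;
        const struct call_site *calls;
    };

    /* Worst total usage from 'sub' down: either its own local maximum,
       or height-at-call plus the callee's worst usage, whichever is
       larger over all call sites. */
    static int worst_usage(const struct callee *sub)
    {
        int worst = sub->local_max;
        for (int i = 0; i < sub->n_calls; i++) {
            int via = sub->calls[i].height_at_call
                    + worst_usage(sub->calls[i].to);
            if (via > worst) worst = via;
        }
        return worst;
    }

    int main(void)
    {
        static const struct callee leaf = { "leaf", 16, 0, NULL };
        static const struct call_site m_calls[] = { { 8, &leaf } };
        static const struct callee main_sub = { "main", 12, 1, m_calls };

        /* max(12, 8 + 16) = 24 */
        printf("worst-case stack: %d bytes\n", worst_usage(&main_sub));
        return 0;
    }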
> Doesn't this require a fairly complete model of the processor?
Yes indeed. The model could be somewhat simpler for stack-usage analysis than for WCET analysis. For stack usage, the relevant effect of an instruction is just to increase or decrease the local stack height by a static constant. If push/pop happens in loops they are usually balanced so that a loop iteration has no net effect on stack usage, making loop iteration bounds unnecessary.
> My "cost constrained" days were back in the days of the "small/cheap > CPU" where you had to predict performance accurately -- but, doing so > was relatively easy (because processors had very predictable > instruction execution times).
And that is the kind of processor that Bound-T was designed to support for WCET analysis. Stack-usage analysis is not so sensitive.
Oops, a correction:

On 2021-08-07 19:50, Niklas Holsti wrote:

> For stack usage, the relevant effect of an instruction is just to
> increase or decrease the local stack height by a static constant.
I meant to say that the change in local stack height is _often_ a static constant, not that it _always_ is static.
On 8/7/2021 4:20 AM, Niklas Holsti wrote:
> On 2021-08-07 2:06, Don Y wrote:
>> On 8/6/2021 4:04 PM, Don Y wrote:
>>>> That said, for some processors it is easy to recognize at decode-time
>>>> most of the instructions that access the stack, and some versions of
>>>> Bound-T let one specify different access times for stack accesses and
>>>> for general (unclassified) accesses. That can be useful if the stack
>>>> is located in fast memory, but other data are in slower memory.
>>>
>>> I'm thinking, specifically, about I/Os -- which are increasingly
>>> memory mapped (including register spaces).
>>
>> Sorry, I should be more clear. I'm looking at the issues that would
>> affect *my* needs in *my* environment -- realizing these may be
>> different than those you've previously targeted. (e.g., VMM, FPU
>> emulation, etc.)
>
> Ok. If you describe your needs, perhaps I can comment.
I'm targeting some of the bigger ARM offerings -- A53/55. My impression is they go through more "contortions" to eke out additional performance than some of the smaller processors.
> In my experience, access to memory-mapped registers tends to be not much
> slower than access to memory. Also, quite often the addresses in MMIO
> accesses are static or easy to derive in static analysis, so the WCET
> tool could choose the proper access time, if the tool knows enough about
> the system architecture.
Yes, that last phrase being the kicker. Can this be simplified to something as crude as a "memory map"?
> If by "FPU emulation" you mean SW-implemented FP instructions, of course
> we have encountered those. They are often hand-coded assembler with very
> complex and tricky control flow, which makes it hard to find loop bounds
> automatically, so manual annotations must be used instead.
>
> What is VMM?
Virtual Memory Management. I.e., when an opcode fetch (or argument
reference) can not only take longer than a cache miss... but
*considerably* longer as the physical memory is mapped in while the
instruction stream "stalls".

[Note that a page fault need not map physical memory in the traditional
sense. It can also cause some "special" function to be invoked to provide
the requisite data/access. So, the cost of a fault can vary depending on
*what* is faulting and which pager is handling that fault.]
On 8/7/2021 9:50 AM, Niklas Holsti wrote:
> On 2021-08-07 18:09, Don Y wrote:
>> On 8/7/2021 4:09 AM, Niklas Holsti wrote:
>
> (On how the Bound-T tool models instructions for stack and WCET analysis:)
>
>>> The internal model includes not only the cost of the instruction, but
>>> also its effect on the computation. For example, an instruction like
>>> "increment the accumulator", INC A, would be decoded into a model that
>>> says
>>>
>>>     WCET: 1 cycle.
>>>     Control flow: to next instruction, unconditional.
>>>     Effect: A := A + 1.
>>>
>>> and also the effect, if any, on condition flags.
>>
>> Why do you care about the "effect"? Are you trying to determine
>> which branches will be taken? How many iterations through loops?
>> etc.
>
> The effect is necessary for any analysis that depends on run-time
> values. Loop bounds are the prime example for WCET analysis, but
> stack-usage analysis also needs it. For example, the effect of a push
> instruction is to increment the "local stack height" model variable by
> some number that reflects the size of the pushed data (which size can,
> in principle, be non-static and computed at run time).
OK. But, you're not trying to emulate/simulate the complete algorithm;
just handle the side effects of value changes.

But wouldn't you *have* to do a more thorough emulation?

    foo(count) {
        for (i = 0; i < 5 * count; i++) {
            diddle()
        }
    }
> The stack-usage analysis finds the maximum possible value of the local
> stack height in each subprogram, both globally and at each call site
> within the subprogram, then adds up the local stack heights along each
> possible call path to find the total usage for that call path, and then
> finds and reports the path with the largest total usage.
>
>> Doesn't this require a fairly complete model of the processor?
>
> Yes indeed. The model could be somewhat simpler for stack-usage analysis
> than for WCET analysis. For stack usage, the relevant effect of an
> instruction is just to increase or decrease the local stack height by a
> static constant. If push/pop happens in loops they are usually balanced
> so that a loop iteration has no net effect on stack usage, making loop
> iteration bounds unnecessary.
    reverse(count, string) {
        while (count-- > 0) {
            reverse(count-1, &string[1])
            emit(string[0])
        }
    }

No, that's a shitty example. I'll have to think on it when I have more
time...
>> My "cost constrained" days were back in the days of the "small/cheap
>> CPU" where you had to predict performance accurately -- but, doing so
>> was relatively easy (because processors had very predictable
>> instruction execution times).
>
> And that is the kind of processor that Bound-T was designed to support
> for WCET analysis. Stack-usage analysis is not so sensitive.
Ah, OK. Time to get my neighbor's lunch ready...
On 2021-08-07 20:26, Don Y wrote:
> On 8/7/2021 4:20 AM, Niklas Holsti wrote:
(In a discussion about WCET analysis more than stack analysis:)
>> Ok. If you describe your needs, perhaps I can comment.
>
> I'm targeting some of the bigger ARM offerings -- A53/55. My
> impression is they go through more "contortions" to eke out
> additional performance than some of the smaller processors.
Out of scope for Bound-T, I'm sure. Even the AbsInt tool has no support for static WCET analysis of those processors and their ARM page suggests a hybrid tool instead (https://www.absint.com/ait/arm.htm).
>> In my experience, access to memory-mapped registers tends to be not
>> much slower than access to memory. Also, quite often the addresses in
>> MMIO accesses are static or easy to derive in static analysis, so the
>> WCET tool could choose the proper access time, if the tool knows
>> enough about the system architecture.
>
> Yes, that last phrase being the kicker. Can this be simplified to
> something as crude as a "memory map"?
It seems so, if the access time is simply a function of the accessed address.
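[I.e., something as simple as a lookup over address ranges. A C sketch of
that crude "memory map" idea, with invented regions and timings.]

    #include <stddef.h>
    #include <stdint.h>

    struct mem_region {
        uint32_t base, limit;   /* address range [base, limit) */
        int      cycles;        /* access time in cycles       */
    };

    /* Invented system layout: fast SRAM, slower DRAM, slow MMIO. */
    static const struct mem_region memory_map[] = {
        { 0x00000000, 0x00010000, 1 },  /* on-chip SRAM   */
        { 0x20000000, 0x20100000, 4 },  /* external DRAM  */
        { 0x40000000, 0x40001000, 6 },  /* MMIO registers */
    };

    /* Access time as a pure function of the accessed address, with a
       worst-case fallback for addresses the tool cannot classify. */
    static int access_cycles(uint32_t addr)
    {
        size_t n = sizeof memory_map / sizeof memory_map[0];
        for (size_t i = 0; i < n; i++)
            if (addr >= memory_map[i].base && addr < memory_map[i].limit)
                return memory_map[i].cycles;
        return 10;
    }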
>> What is VMM?
>
> Virtual Memory Management. I.e., when an opcode fetch (or argument
> reference) can not only take longer than a cache miss... but
> *considerably* longer as the physical memory is mapped in while the
> instruction stream "stalls".
I don't recall seeing any static WCET analysis for page faults, but there may be some for Translation Look-aside Buffer misses. Out of my competence, and out of scope for Bound-T, certainly.
> [Note that a page fault need not map physical memory in the
> traditional sense. It can also cause some "special" function to be
> invoked to provide the requisite data/access. So, the cost of a
> fault can vary depending on *what* is faulting and which pager is
> handling that fault.]
You may be able to map some of that into a very capable schedulability analyzer, one that can handle chains of "tasks" passing data/messages to each other. But translating the application logic and system behaviour into a model for such a schedulability analyzer is not trivial.
On 2021-08-07 20:34, Don Y wrote:
> On 8/7/2021 9:50 AM, Niklas Holsti wrote:
>> On 2021-08-07 18:09, Don Y wrote:
>>> On 8/7/2021 4:09 AM, Niklas Holsti wrote:
>>
>> (On how the Bound-T tool models instructions for stack and WCET
>> analysis:)
>>
>>>> The internal model includes not only the cost of the instruction,
>>>> but also its effect on the computation. For example, an instruction
>>>> like "increment the accumulator", INC A, would be decoded into a
>>>> model that says
>>>>
>>>>     WCET: 1 cycle.
>>>>     Control flow: to next instruction, unconditional.
>>>>     Effect: A := A + 1.
>>>>
>>>> and also the effect, if any, on condition flags.
>>>
>>> Why do you care about the "effect"? Are you trying to determine
>>> which branches will be taken? How many iterations through loops?
>>> etc.
>>
>> The effect is necessary for any analysis that depends on run-time
>> values. Loop bounds are the prime example for WCET analysis, but
>> stack-usage analysis also needs it. For example, the effect of a push
>> instruction is to increment the "local stack height" model variable by
>> some number that reflects the size of the pushed data (which size can,
>> in principle, be non-static and computed at run time).
>
> OK. But, you're not trying to emulate/simulate the complete algorithm;
> just handle the side effects of value changes.
>
> But wouldn't you *have* to do a more thorough emulation?
>
> foo(count) {
>     for (i = 0; i < 5 * count; i++) {
>         diddle()
>     }
> }
The point and aim of _static_ analysis is to abstract the computation to
model only the aspects that are relevant for the goals of the analysis,
and especially to avoid any step-by-step emulation or interpretation --
that would become "dynamic analysis".

For that example, analysing the subprogram foo stand-alone, the WCET
analysis in Bound-T would conclude that, in the loop:

- The variable "i" is an induction variable that increases by one on
  each iteration. This looks promising, and means that Bound-T can model
  the value of "i" as the initial value (zero) plus the loop iteration
  number (starting the count at iteration number zero).

- But the analysis will not find a numeric upper bound on the iteration
  number, because the value of "count" is unknown.

If this analysis of foo occurs while analysing some higher-level
subprogram that calls foo, Bound-T would next try to compute bounds for
the actual value of "count" in each such call. Say that the call is
simply

    foo (33);

This would make Bound-T reanalyze foo in the context count=33, which
would show that the loop can be repeated only under the condition

    (iteration number) < 5 * 33 = 165

which bounds the number of loop iterations at 165 and lets Bound-T
produce a WCET bound for this specific call of foo (assuming that Bound-T
can produce a WCET bound for the "diddle" subprogram too).

Such repeated analyses down a call-path with an increasing amount of
context will of course increase the analysis time.
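[A toy rendering of that context-dependent step, with invented
interfaces: given an interval bound on the parameter "count", bound the
iterations of the loop above, where "i" is known to step by +1 from 0.]

    #include <stdio.h>

    struct interval { long lo, hi; };   /* inclusive bounds on a value */

    /* Iterations of "for (i = 0; i < 5 * count; i++)": the loop runs
       while i < 5*count, so the bound is max(5 * count, 0), taken at
       the upper end of the context interval. */
    static long loop_bound(struct interval count)
    {
        long limit = 5 * count.hi;
        return limit > 0 ? limit : 0;
    }

    int main(void)
    {
        struct interval count = { 33, 33 };   /* the call foo(33) */
        printf("iteration bound: %ld\n", loop_bound(count));  /* 165 */
        return 0;
    }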
On 8/7/2021 10:46 AM, Niklas Holsti wrote:
> On 2021-08-07 20:26, Don Y wrote:
>> On 8/7/2021 4:20 AM, Niklas Holsti wrote:
>
> (In a discussion about WCET analysis more than stack analysis:)
>
>>> Ok. If you describe your needs, perhaps I can comment.
>>
>> I'm targeting some of the bigger ARM offerings -- A53/55. My
>> impression is they go through more "contortions" to eke out
>> additional performance than some of the smaller processors.
>
> Out of scope for Bound-T, I'm sure. Even the AbsInt tool has no support
> for static WCET analysis of those processors and their ARM page suggests
> a hybrid tool instead (https://www.absint.com/ait/arm.htm).
You can see why I favor hand-waving away all of the "tricks" the processors can play to improve performance! Granted, the numbers that result are TRULY "worst case" -- and likely significantly inflated! But, if you size for that, then all of the "noncritical" stuff runs "for free"!
>>> In my experience, access to memory-mapped registers tends to be not
>>> much slower than access to memory. Also, quite often the addresses in
>>> MMIO accesses are static or easy to derive in static analysis, so the
>>> WCET tool could choose the proper access time, if the tool knows
>>> enough about the system architecture.
>>
>> Yes, that last phrase being the kicker. Can this be simplified to
>> something as crude as a "memory map"?
>
> It seems so, if the access time is simply a function of the accessed
> address.
>
>>> What is VMM?
>>
>> Virtual Memory Management. I.e., when an opcode fetch (or argument
>> reference) can not only take longer than a cache miss... but
>> *considerably* longer as the physical memory is mapped in while the
>> instruction stream "stalls".
>
> I don't recall seeing any static WCET analysis for page faults, but
> there may be some for Translation Look-aside Buffer misses. Out of my
> competence, and out of scope for Bound-T, certainly.
Unlike a cache miss, it's not easy to predict those costs without an
intimate model of the software. E.g., two "closely located" addresses can
have entirely different behaviors (timing) on a page fault. And, you
likely can't predict where those addresses will reside as that's a
function of how they are mapped at runtime.

Unlike a conventional VMM system (faulting in/out pages from a
secondary/disk store), I rely on the mechanism heavily for communication
and process container hacks.

I.e., there's a good reason to over-specify the hardware -- so you can be
inefficient in your use of it! :-/
>> [Note that a page fault need not map physical memory in the
>> traditional sense. It can also cause some "special" function to be
>> invoked to provide the requisite data/access. So, the cost of a
>> fault can vary depending on *what* is faulting and which pager is
>> handling that fault.]
>
> You may be able to map some of that into a very capable schedulability
> analyzer, one that can handle chains of "tasks" passing data/messages to
> each other. But translating the application logic and system behaviour
> into a model for such a schedulability analyzer is not trivial.
As my workload is dynamically defined (can grow or shrink,
algorithmically), I don't really fret this. It's not like a closed system
where what's inside *has* to work. If something isn't working, I can shed
load (or, move it to another processor). And, if something is
underutilized, *add* load (possibly moved from another processor).

You typically do this intuitively on your workstation; if things start to
run sluggishly, you kill off some applications (and make a mental note to
run them, again, later). And, if the workstation isn't "doing anything",
you *add* application processes. If you needed to "make world", you might
power up another workstation so it didn't impede the activities of the
"normal" workstation you're using.
On Sat, 7 Aug 2021 13:40:19 +0300, Niklas Holsti
<niklas.holsti@tidorum.invalid> wrote:

> On 2021-08-07 5:55, George Neuner wrote:
>
>> More importantly, how do you handle chains of conditional branches
>> and/or switch/case constructs which can mispredict at every decision
>> point? The branch targets may not be in cache - neither mispredicted
>> targets nor the actual one. Worst case for a horribly mispredicted
>> switch/case can be absolutely dreadful.
>
> I don't know if those questions are addressed to me (re the Bound-T
> tool) or to WCET analysis in general.
A little of both really. I was hoping you had some smart(er) approach to estimating misprediction effects in systems that use dynamic prediction - even if it's just heuristic. A lot of more powerful chips now are being used even in 'small' systems, and some of them do have dynamic prediction. Note that Don is talking about Cortex A53, A55, etc.
> The advanced WCET tools, like the ones from AbsInt, do try to cover such
> cases by their cache analysis to provide a safe upper bound on the WCET,
> but the degree of pessimism (over-estimation) increases with the
> complexity of the code.
>
> That said, in many programs most of the processing time is taken by
> relatively simple loops, for which the present I-cache analyses work
> quite well, and for which even D-cache analysis can (I believe) work
> satisfactorily if the memory addressing patterns are regular.
Dynamic prediction handles loop control quite well ... it's all the other
branching code that is the problem.

WCET has use outside the 'embedded' world also. A lot of my work was in
QA/QC vision systems: tons of code, tons of features, workstation class
processors, and still having to be /hard real time/.

George
On 2021-08-09 1:34, George Neuner wrote:
> On Sat, 7 Aug 2021 13:40:19 +0300, Niklas Holsti
> <niklas.holsti@tidorum.invalid> wrote:
>
>> On 2021-08-07 5:55, George Neuner wrote:
>>
>>> More importantly, how do you handle chains of conditional branches
>>> and/or switch/case constructs which can mispredict at every decision
>>> point? The branch targets may not be in cache - neither mispredicted
>>> targets nor the actual one. Worst case for a horribly mispredicted
>>> switch/case can be absolutely dreadful.
>>
>> I don't know if those questions are addressed to me (re the Bound-T
>> tool) or to WCET analysis in general.
>
> A little of both really. I was hoping you had some smart(er) approach
> to estimating misprediction effects in systems that use dynamic
> prediction - even if it's just heuristic.
Sorry, but no.
> A lot of more powerful chips now are being used even in 'small' > systems, and some of them do have dynamic prediction. Note that Don > is talking about Cortex A53, A55, etc.
Indeed, and therefore static WCET analysis is waning, and hybrid (partly measurement-based) WCET estimation is waxing.
> Dynamic prediction handles loop control quite well ... it's all the > other branching code that is the problem. > > WCET has use outside the 'embedded' world also. A lot of my work was > in QA/QC vision systems: tons of code, tons of features, workstation > class processors, and still having to be /hard real time/.
(I would call that "embedded", because it is a computerized system used
for a single purpose.)

One could say, perhaps meanly, that that is an ill-posed problem, with
the wrong choice of processors. But cost matters, of course...

If you are not satisfied with Don's approach (extreme over-provision of
processor power) you could try the hybrid WCET-estimation tools (RapiTime
or TimeWeaver) which do not need to model the processors, but need to
measure fine-grained execution times (on the basic-block level). The
problem with such tools is that they cannot guarantee to produce an upper
bound on the WCET, only a bound that holds with high probability. And,
AIUI, at present that probability cannot be computed, and certainly
depends on the test suite being measured. For example, on whether those
tests lead to mispredictions in chains of conditional branches.
