
Stack analysis tool that really works?

Started by pozz July 29, 2021
On 8/4/2021 7:43 AM, pozz wrote:
>> As I said, the first step is understanding what the dynamics of execution
>> in your task happen to be. You may be surprised to see functions being
>> dragged in that you'd not have thought were part of the mix! ("why is
>> printf() being called, here?? and, couldn't I use something like itoa()
>> instead?")
>
> Oh yes, but a simple tool that generates automatically a call graph could be
> very useful for this.
How "simple" does it have to be? There are already tools that will do this for you, even with GUI output. How "pretty" the result is remains a matter of debate, given that it's hard to make an algorithm that can find the "prettiest" way of drawing such a graph.
>> The compiler won't know anything about your run-time environment. It won't
>> know if you have a separate stack for ISRs, if the function you are analyzing
>> runs as the "root" of a particular process tree, etc. Do you know how much
>> work is done BEFORE the first line of your function executed?
>>
>> It also won't know which execution paths *will* be exercised in practice.
>> You may have paths that can't be executed (given a particular set of
>> inputs). How will you know to recognize their impact on any figures
>> reported?
>
> I know that compiler can't know everything, but *I* can instruct the tool with
> that kind of info.
When does your effort fall to the level of being "by hand"? Note that you really would like a tool to do everything (at least, everything IMPORTANT) for you so you don't screw up or misapply it.
>>> -fstack-usage is not usable "by hands".
>>
>> No, but you can use that data *with* the call tree to get an idea of
>> where your maximum lies -- assuming no recursion and worst-case path
>> coverage.
>
> I think this job could be done by a single tool that creates a call graph and
> fills in the values from stack usage.
Have you looked at egypt? There are other similar tools (that may require more than one command to build the output)
>> You can also fill the stack space with 0x2B00B1E5, run your COMPREHENSIVE
>> test case suite that exercises ALL paths through the code and check to see
>> what the high-water mark was. (you *are* testing the code, right?)
>
> This is the dynamic approach, I was exploring the static approach.
My point is that you still have to perform tests that exercise every path through your code. So, the test suite can be instrumented to harvest this data KNOWING that every option has been examined.
>> It costs money to engage in any business (non-hobbyist) endeavor.
>> Expecting all of your tools to be "free" is a bit naive.
>
> Anyway there are plenty of complex and good free and open-source software (gcc
> is one of them).
> So it's strange there isn't a similar tool for stack analysis. That's all, it's
> strange for me, but I don't pretend all my preferred tools must be free.
What's available is what SOMEONE decided they wanted to develop *and* offer up to others. You typically have fewer choices for "free" than "for pay". Developing any tool (and maintaining it -- or, providing enough information that OTHERS can maintain it without you) isn't a trivial job.

And, *a* free tool will likely soak up much of the effort that others may have spent on some OTHER approach to that free tool. So, the first drives/hinders future development; it's hard to completely redesign an existing tool -- esp if others have bought into it on the maintenance side!

I build lots of tools to address *my* needs. But, rarely publish them (beyond close colleagues) because I don't want to "pay the tax" of supporting others -- even if they just have simple questions and aren't looking for some change to the tool. I have my own interests; maintaining a tool FOR OTHERS isn't one of them! <frown>
On 2021-08-04 20:50, Don Y wrote:
> On 8/4/2021 10:00 AM, Niklas Holsti wrote:
...
>> I am the main author of a WCET-analysis tool, Bound-T, that also does
>> stack analysis and is now free. However, there is (as yet) no port for
>> ARM Cortex-M. See http://www.bound-t.com/.
>
> But not free as in open (?)
The source code is downloadable; see http://bound-t.com/download/src. The copyright text is of my own writing, but I'm open to switching to some better-known copyright such as GPL or even some non-viral version. However, you may want to read about the state of the tool at http://bound-t.com/status.html.
On 2021-08-03 12:28, Don Y wrote:
> On 8/3/2021 1:43 AM, Niklas Holsti wrote:
>> On 2021-08-03 3:56, Don Y wrote:
>>> On 7/29/2021 9:22 AM, pozz wrote:
>>>> arm gcc and Cortex-Mx MCUs embedded systems.
>>>>
>>>> Is there a compilation-time (static) tool for stack analysis that
>>>> really works?
>>>
>>> What do you mean by "really works"? It's a (potentially) unbounded problem
>>> as the compiler can't know what the inputs will drive the code to do, in
>>> practice.
>>
>> There are practical static-analysis methods to compute an upper bound
>> on the stack usage that work well for many embedded programs. An upper
>> bound is usually all one needs. The exact worst case is less interesting.
>
> That depends on the actual algorithm being implemented, the style of the
> developer and, often, the inputs provided.
>
> For example, I often use recursive algorithms for pattern matching (because
> they are easier to "get right"). Looking at the algorithm, you can't know
> how deep the recursion will be -- without seeing the input it will be fed.
Well, most embedded programmers avoid recursion. And if you implement recursion in a memory-constrained system, your design and code should set a hard limit on the recursion depth and reject input that would require deeper recursion. Some (admittedly complex) static analyses can discover such limits from the machine code, in the same way as they discover loop iteration limits for WCET analysis. (In fact, I believe that the AbsInt tools translate loops into recursions before the analysis.)
>> Some problems arise for programs that branch to dynamically computed
>> addresses, for example by calling functions via function pointers.
>> Static analyses may not be able to find the set of possible target
>> addresses. When that happens you have to guide the analysis by
>> specifying where such control transfers can end up. For typical
>> embedded code that is not hard to do, but a program that relies
>> extensively on virtual function calls may be hard to analyze.
>
> The same is true of a program that is driven by external events.
> The code doesn't (typically) "know" the constraints placed on those
> events. (one can argue as to whether or not this is "good practice")
> So, it can't "find the end".
Again, if some sequences of external events might use an unbounded amount of stack, the design and code should set a hard limit. I have not encountered such programs. The typical design is to allow some fixed maximum nesting of event processing (eg. a finite set of interrupt priority levels) and otherwise enqueue the events for sequential processing, which does not require an unbounded stack.
>>> There are general tools that you can probably instrument to coax a
>>> number out of the source (if you don't have access to the source, you're
>>> typically SoL) but they will be expensive in time or money.
>>
>> The exact stack usage of a subprogram can't be derived from source
>> code without knowing what machine code the compiler and linker will
>> produce. The static stack-analysis tools typically work on the final
>> executable code (which also means that they can be independent of the
>> programming language).
>
> If you don't have the sources, you can't *do* anything with the results
> of the analysis -- other than define a minimum stack size (which may
> be a surprise to you *and* your hardware!)
If the program is single-threaded, the allocated stack size is usually set in the linker command file. Sure, you need the source for that file if you want to change the stack size. Of course you are screwed if your machine does not have enough memory for the size of stack you want.
> If you don't have the sources, you likely haven't a clue as to how
> the code behaves, under the complete set of I/O conditions.
I've analyzed the stack usage (upper bounds) of many libraries without access to source code. Typically embedded libraries have call-graphs that can be extracted from the machine code without assumptions about the I/O or other run-time values. But that may no longer be the case for libraries written in C++ or other languages with run-time binding of calls. I agree that for complete programs one sometimes needs more insight, in particular if there are calls through function pointers or local variables with dynamic (data-dependent) size.
>> But (as I said in an earlier post) the tools I am aware of, for the
>> OP's target, are not free.
>
> You can possibly instrument some DSE-like tools.
Sorry, what is a "DSE-like" tool? DSE = ?
> How efficient are those you've used? Do they operate in "near compile
> time"?
For stack-usage analysis the tools are quite fast, typically, because embedded programs tend not to allocate dynamically-sized local variables. The main time is spent in reading the executable and decoding the instructions. For programs that allocate local variables of dynamic size the analysis becomes much more complex, can need lots of time and memory, and can often fail.
> I.e., are they practical to use iteratively? On an entire
> application?
Yes, for many applications. And if you are implementing a safety-critical system, you design your application to be analysable.
On 8/4/2021 12:14 PM, Niklas Holsti wrote:
> On 2021-08-03 12:28, Don Y wrote:
>> On 8/3/2021 1:43 AM, Niklas Holsti wrote:
>>> On 2021-08-03 3:56, Don Y wrote:
>>>> On 7/29/2021 9:22 AM, pozz wrote:
>>>>> arm gcc and Cortex-Mx MCUs embedded systems.
>>>>>
>>>>> Is there a compilation-time (static) tool for stack analysis that really
>>>>> works?
>>>>
>>>> What do you mean by "really works"? It's a (potentially) unbounded problem
>>>> as the compiler can't know what the inputs will drive the code to do, in
>>>> practice.
>>>
>>> There are practical static-analysis methods to compute an upper bound on the
>>> stack usage that work well for many embedded programs. An upper bound is
>>> usually all one needs. The exact worst case is less interesting.
>>
>> That depends on the actual algorithm being implemented, the style of the
>> developer and, often, the inputs provided.
>>
>> For example, I often use recursive algorithms for pattern matching (because
>> they are easier to "get right"). Looking at the algorithm, you can't know
>> how deep the recursion will be -- without seeing the input it will be fed.
>
> Well, most embedded programmers avoid recursion. And if you implement recursion
> in a memory-constrained system, your design and code should set a hard limit on
> the recursion depth and reject input that would require deeper recursion.
Yes, but you don't need that limit to be enforced by a "depth counter". Instead, it can be encoded in the data that *drives* the recursion. A *human* understands that there is a limit (and exactly what that limit is, for any dataset). But, an algorithm may be taxed with trying to determine this. I shouldn't have to adopt a coding style just for the sake of a tool.
> Some (admittedly complex) static analyses can discover such limits from the
> machine code, in the same way as they discover loop iteration limits for WCET
> analysis. (In fact, I believe that the AbsInt tools translate loops into
> recursions before the analysis.)
>
>>> Some problems arise for programs that branch to dynamically computed
>>> addresses, for example by calling functions via function pointers. Static
>>> analyses may not be able to find the set of possible target addresses. When
>>> that happens you have to guide the analysis by specifying where such control
>>> transfers can end up. For typical embedded code that is not hard to do, but
>>> a program that relies extensively on virtual function calls may be hard to
>>> analyze.
>>
>> The same is true of a program that is driven by external events.
>> The code doesn't (typically) "know" the constraints placed on those
>> events. (one can argue as to whether or not this is "good practice")
>> So, it can't "find the end".
>
> Again, if some sequences of external events might use an unbounded amount of
> stack, the design and code should set a hard limit. I have not encountered such
> programs.
>
> The typical design is to allow some fixed maximum nesting of event processing
> (eg. a finite set of interrupt priority levels) and otherwise enqueue the
> events for sequential processing, which does not require an unbounded stack.
Events need not be interrupts. They may be user actions/keystrokes. But, the "grammar" (odd choice of words) that defines the range of permissible actions can limit how deeply the "generic" algorithm recurses. Again, something that a human can see and verify but that could evade anything short of a sophisticated analysis of code+data.
>>>> There are general tools that you can probably instrument to coax a
>>>> number out of the source (if you don't have access to the source, you're
>>>> typically SoL) but they will be expensive in time or money.
>>>
>>> The exact stack usage of a subprogram can't be derived from source code
>>> without knowing what machine code the compiler and linker will produce. The
>>> static stack-analysis tools typically work on the final executable code
>>> (which also means that they can be independent of the programming language).
>>
>> If you don't have the sources, you can't *do* anything with the results
>> of the analysis -- other than define a minimum stack size (which may
>> be a surprise to you *and* your hardware!)
>
> If the program is single-threaded, the allocated stack size is usually set in
> the linker command file. Sure, you need the source for that file if you want to
> change the stack size. Of course you are screwed if your machine does not have
> enough memory for the size of stack you want.
But you can't *change* your algorithm, having discovered that it uses more stack than you had anticipated (or, that other consumers have left available for its use). Without the sources, your only option is to change the hardware to make more stack available. [And, even that may not be possible; e.g., multithreaded in a single address space]
>> If you don't have the sources, you likely haven't a clue as to how
>> the code behaves, under the complete set of I/O conditions.
>
> I've analyzed the stack usage (upper bounds) of many libraries without access
> to source code. Typically embedded libraries have call-graphs that can be
> extracted from the machine code without assumptions about the I/O or other
> run-time values. But that may no longer be the case for libraries written in
> C++ or other languages with run-time binding of calls.
>
> I agree that for complete programs one sometimes needs more insight, in
> particular if there are calls through function pointers or local variables with
> dynamic (data-dependent) size.
>
>>> But (as I said in an earlier post) the tools I am aware of, for the OP's
>>> target, are not free.
>>
>> You can possibly instrument some DSE-like tools.
>
> Sorry, what is a "DSE-like" tool? DSE = ?
Dynamic Symbolic Execution.
>> How efficient are those you've used? Do they operate in "near compile time"?
>
> For stack-usage analysis the tools are quite fast, typically, because embedded
> programs tend not to allocate dynamically-sized local variables. The main time
> is spent in reading the executable and decoding the instructions.
>
> For programs that allocate local variables of dynamic size the analysis becomes
> much more complex, can need lots of time and memory, and can often fail.
Ah. The symbolic tools (alluded to above) are *really* resource intensive. You can spend HOURS analyzing a piece of code -- with a fast workstation. (my point being this is a tool that you'd use to VERIFY your assumptions; relying on it to tell you what you don't yet know -- esp if your code is evolving -- is way too costly!)
>> I.e., are they practical to use iteratively? On an entire
>> application?
>
> Yes, for many applications. And if you are implementing a safety-critical
> system, you design your application to be analysable.
But, wouldn't you inherently *know* where your design is headed? And, just need the tool to confirm that? (and put real numbers on it)
On 8/4/2021 11:50 AM, Niklas Holsti wrote:
> On 2021-08-04 20:50, Don Y wrote:
>> On 8/4/2021 10:00 AM, Niklas Holsti wrote:
>
> ...
>
>>> I am the main author of a WCET-analysis tool, Bound-T, that also does stack
>>> analysis and is now free. However, there is (as yet) no port for ARM
>>> Cortex-M. See http://www.bound-t.com/.
>>
>> But not free as in open (?)
>
> The source code is downloadable; see http://bound-t.com/download/src. The
> copyright text is of my own writing, but I'm open to switching to some
> better-known copyright such as GPL or even some non-viral version.
Ah, OK. I may have a look at it to see how well it works and how much work to port to ARM objects. Given that you've not done so, already (despite your familiarity with the codebase), I suspect that's not a trivial task?
> However, you may want to read about the state of the tool at
> http://bound-t.com/status.html.
Hmmm... that sounds ominous! :> (or, am I just too much of a Cynic?)
On 2021-08-04 21:06, Don Y wrote:
> On 8/4/2021 7:43 AM, pozz wrote:
>>> As I said, the first step is understanding what the dynamics of
>>> execution in your task happen to be. You may be surprised to see
>>> functions being dragged in that you'd not have thought were part
>>> of the mix! ("why is printf() being called, here?? and,
>>> couldn't I use something like itoa() instead?")
>>
>> Oh yes, but a simple tool that generates automatically a call graph
>> could be very useful for this.
>
> How "simple" does it have to be? There are already tools that will
> do this for you. Even GUI out. How "pretty" is a matter of debate
> given that it's hard to make an algorithm that can find the "prettiest"
> way of drawing such a graph.
The "dot" tool in the GraphViz package does a good job of graph lay-out. See https://en.wikipedia.org/wiki/Graphviz. Bound-T generates its graphs in "dot" format. The AbsInt tools have their own graph-layout package; it may even be sold separately. However, as I said earlier, a source-level call-graph is often not complete with respect to the calls that happen at run time.
>> This is the dynamic approach, I was exploring the static approach.
>
> My point is that you still have to perform tests that exercise every path
> through your code.
That (full path coverage) is almost never done, because it is exponential in the number of branches. In case the sizes of local variables depend on input data, it may be very hard to find the input values that lead to the actual worst-case stack-usage path. Safe upper bounds are easier to find by analysis.
> What's available is what SOMEONE decided they wanted to develop *and*
> offer up to others. You typically have fewer choices for "free" than
> "for pay". Developing any tool (and maintaining it -- or, providing enough
> information that OTHERS can maintain it without you) isn't a trivial job.
My experience is that rather few developers favour the static-analysis approach to resource analysis -- most just instrument and test. There are a few application domains that require better guarantees -- aerospace, automotive, nuclear -- and they are also prepared to pay (a bit) for the tools. However, tools for such domains usually need some kind of formal qualification or certification, which can be expensive.

Also, while static analysis is still possible for stack usage, it is becoming impossible for WCET, because of the complex, dynamic architecture of current high-end embedded processors (out-of-order execution, complex caches, multi-core with shared-resource conflicts, and so on and on). The field seems to be moving towards hybrid analysis methods that combine measured execution times with static flow analysis, for example the tools from Rapita (https://www.rapitasystems.com/).
On 2021-08-04 20:50, Don Y wrote:
> On 8/4/2021 10:00 AM, Niklas Holsti wrote:
>> On 2021-08-03 12:49, pozz wrote:
>>> On 03/08/2021 02:56, Don Y wrote:
>>>> On 7/29/2021 9:22 AM, pozz wrote:
>>>>> arm gcc and Cortex-Mx MCUs embedded systems.
>>>>>
>>>>> Is there a compilation-time (static) tool for stack analysis that
>>>>> really works?
>>>>
>>>> What do you mean by "really works"? It's a (potentially) unbounded
>>>> problem as the compiler can't know what the inputs will drive the
>>>> code to do, in practice.
>>>
>>> I mean a simple tool that can be instructed, even manually, to
>>> produce a good result.
>>>
>>> -fstack-usage is not usable "by hands".
>>>
>>> You need at least a call-graph (generated by a tool), fill each
>>> branch (function) with a stack usage (generated by the compiler),
>>> fill every branch that is not known at compilation time by the tools
>>> (call functions by pointers, recursive, and so on).
>>>
>>> It's surprising to me there isn't a single non-expensive tool that
>>> helps in this, considering there are a multitude of good free tools
>>> for developers.
>>
>> One reason is that for good results, such a tool has to analyze the
>> executable code, and therefore must be target-specific, or at least
>> have ports to the various targets, increasing the effort to implement
>> and maintain the tool. The gnatstack tool gets around that, to some
>> extent, by relying on stack-usage and call information from the
>> compiler (gcc).
>
> I am seeing an increasing number of tools relying on intermediate
> encodings (e.g., via LLVM) to give more portability to their
> application.
LLVM IR and other similar program representations are a good "level" for some semantic analyses, such as value analysis (finding possible ranges of variable values) and control-flow analysis. But it is typically too high a level for analysing machine resources such as stack usage and WCET. If one can set up a reliable mapping between the IR entities (variables and control flow) and the machine-level entities (registers, memory locations, branch instructions) the two levels of analysis can support each other. Unfortunately that mapping is defined by the compiler and linker and is usually described only incompletely in the debugging information emitted from the compiler and linker.
On 2021-08-05 1:46, Don Y wrote:
> On 8/4/2021 11:50 AM, Niklas Holsti wrote:
>> On 2021-08-04 20:50, Don Y wrote:
>>> On 8/4/2021 10:00 AM, Niklas Holsti wrote:
>>
>> ...
>>
>>>> I am the main author of a WCET-analysis tool, Bound-T, that also
>>>> does stack analysis and is now free. However, there is (as yet) no
>>>> port for ARM Cortex-M. See http://www.bound-t.com/.
>>>
>>> But not free as in open (?)
>>
>> The source code is downloadable; see http://bound-t.com/download/src.
>> The copyright text is of my own writing, but I'm open to switching to
>> some better-known copyright such as GPL or even some non-viral version.
>
> Ah, OK. I may have a look at it to see how well it works and
> how much work to port to ARM objects. Given that you've not
> done so, already (despite your familiarity with the codebase),
> I suspect that's not a trivial task?
There is an ARM TDMI port (32-bit ARM code and THUMB, pre Cortex), but of course that architecture is out of date. I started a port to ARM Cortex-M, but did not (yet) finish it for the reasons described on the status page.

Note that the tool is implemented in Ada. Porting to a new target processor means writing procedures to decode the machine instructions and translate them to the internal representation used by the analysis. Sometimes it is also necessary to modify or extend the tool parts that read the executable files and especially the debugging information -- some of that may be compiler-specific, unfortunately.

It is a fair amount of work and requires a good understanding both of the target processor and of the Bound-T internal representation. And Ada, of course, but every programmer should know Ada, IMO :-)
>> However, you may want to read about the state of the tool at
>> http://bound-t.com/status.html.
>
> Hmmm... that sounds ominous! :> (or, am I just too much of a Cynic?)
The problems described on the status page are more relevant to WCET analysis than to stack analysis, but can affect stack analysis too, in some cases.
On 8/5/2021 2:00 AM, Niklas Holsti wrote:
> On 2021-08-04 21:06, Don Y wrote:
>>> This is the dynamic approach, I was exploring the static approach.
>>
>> My point is that you still have to perform tests that exercise every path
>> through your code.
>
> That (full path coverage) is almost never done, because it is exponential in
> the number of branches.
It's impractical for a "whole program" but is relatively easy to accomplish for tasks designed to be small and single-focus. A solution that is implemented in this sort of "decomposed" manner is easier to analyze whereas something "monolithic" is hard to (later) "chop up" to yield testable SMALLER/simpler pieces.
> In case the sizes of local variables depend on input data, it may be very hard
> to find the input values that lead to the actual worst-case stack-usage path.
> Safe upper bounds are easier to find by analysis.
>
>> What's available is what SOMEONE decided they wanted to develop *and*
>> offer up to others. You typically have fewer choices for "free" than
>> "for pay". Developing any tool (and maintaining it -- or, providing enough
>> information that OTHERS can maintain it without you) isn't a trivial job.
>
> My experience is that rather few developers favour the static-analysis approach
> to resource analysis -- most just instrument and test. There are a few
> application domains that require better guarantees -- aerospace, automotive,
> nuclear -- and they are also prepared to pay (a bit) for the tools. However,
> tools for such domains usually need some kind of formal qualification or
> certification, which can be expensive.
Ditto medical/pharma/gaming. In addition to tool qualification, there are also *process* qualification issues. This tends to reduce the number of variants/releases of a "product" as each release requires running through the validation effort, again.

[And, a release begs the question: "Do you mean there were things that DID NOT WORK in the prior release? If so, then how comprehensive was the previous validation effort??? Assure me that your process isn't inherently flawed..."]

The fact that this effort exists (is required) means you will spend money to lessen its cost AND increase the apparent integrity of your process to customers/agencies. Esp as you will be doing this repeatedly (for this product or others).

Imagine having to *manually* repeat the entire effort from the previous release PLUS the changes brought about by the new release... for EVERY successive release!

[No, you can't just *claim* that all of the stuff you validated before STILL WORKS!]
> Also, while static analysis is still possible for stack usage, it is becoming
> impossible for WCET, because of the complex, dynamic architecture of current
> high-end embedded processors (out-of-order execution, complex caches,
> multi-core with shared-resource conflicts, and so on and on). The field seems
> to be moving towards hybrid analysis methods that combine measured execution
> times with static flow analysis, for example the tools from Rapita
> (https://www.rapitasystems.com/).
I've taken a different approach on my current (real-time) project: assume any task can fail to meet its deadlines and provide mechanisms to handle those overruns. (a "hard" real-time task has a very simple deadline handler: kill the task! :> ) But, that's because the resource load (and resource complement) is not known a priori so you can't make *any* guarantees.
On 8/5/2021 3:18 AM, Niklas Holsti wrote:
> On 2021-08-04 20:50, Don Y wrote:
>> On 8/4/2021 10:00 AM, Niklas Holsti wrote:
>>> On 2021-08-03 12:49, pozz wrote:
>>>> On 03/08/2021 02:56, Don Y wrote:
>>>>> On 7/29/2021 9:22 AM, pozz wrote:
>>>>>> arm gcc and Cortex-Mx MCUs embedded systems.
>>>>>>
>>>>>> Is there a compilation-time (static) tool for stack analysis that
>>>>>> really works?
>>>>>
>>>>> What do you mean by "really works"? It's a (potentially) unbounded
>>>>> problem as the compiler can't know what the inputs will drive the
>>>>> code to do, in practice.
>>>>
>>>> I mean a simple tool that can be instructed, even manually, to
>>>> produce a good result.
>>>>
>>>> -fstack-usage is not usable "by hands".
>>>>
>>>> You need at least a call-graph (generated by a tool), fill each
>>>> branch (function) with a stack usage (generated by the compiler),
>>>> fill every branch that is not known at compilation time by the tools
>>>> (call functions by pointers, recursive, and so on).
>>>>
>>>> It's surprising to me there isn't a single non-expensive tool that
>>>> helps in this, considering there are a multitude of good free tools
>>>> for developers.
>>>
>>> One reason is that for good results, such a tool has to analyze the
>>> executable code, and therefore must be target-specific, or at least have
>>> ports to the various targets, increasing the effort to implement and
>>> maintain the tool. The gnatstack tool gets around that, to some extent, by
>>> relying on stack-usage and call information from the compiler (gcc).
>>
>> I am seeing an increasing number of tools relying on intermediate
>> encodings (e.g., via LLVM) to give more portability to their
>> application.
>
> LLVM IR and other similar program representations are a good "level" for some
> semantic analyses, such as value analysis (finding possible ranges of variable
> values) and control-flow analysis.
Yes. I use such tools for automatically determining test coverage conditions. "Look at my code and figure out the inputs necessary to 'go everywhere'".
> But it is typically too high a level for
> analysing machine resources such as stack usage and WCET.
Timing, agreed. But, I suspect there might be a way to add hooks to the analysis that "expose" the current SP to each function -- and then extract that. (I've not thought about it beyond conceptually; the tool WILL visit each variant of function invocation so if it can be coerced to take note of SP then it would simply be a matter of looking for max() -- and, it would be able to tell you *how* it got to that point!)
> If one can set up a reliable mapping between the IR entities (variables and
> control flow) and the machine-level entities (registers, memory locations,
> branch instructions) the two levels of analysis can support each other.
> Unfortunately that mapping is defined by the compiler and linker and is usually
> described only incompletely in the debugging information emitted from the
> compiler and linker.
