On Sat, 13 Dec 2014 15:22:15 +0100, Hans-Bernhard Bröker <HBBroeker@t-online.de> wrote:

>Am 13.12.2014 um 13:56 schrieb Simon Clubley:
>> On 2014-12-13, Paul Rubin <no.email@nospam.invalid> wrote:
>>> Les Cargill <lcargill99@comcast.com> writes:
>>>> - Only allocate utility counters ( i, j, k ) on the stack. Use static
>>>> for everything else you can.
>
>>> Why do you say this?  Just to avoid having to analyze stack depth?
>
>> I've done this myself in tightly constrained environments (8-bit MCUs)
>> even though it goes against all my natural instincts for nicely modular
>> coding with variables only defined in the scope(s) they are needed.
>
>I'm afraid your reasoning is backwards in this case.  It's precisely in
>memory-starved environments that you can not afford doing this.  Making
>variables static when they don't need to be blows up memory consumption
>considerably.
>
>The stack isn't your enemy.  It's the cheapest memory usage conservation
>technology there is, so use it.

The problem with many small 8-bitters is that they do not have good stack-pointer-relative addressing modes. You either have to sacrifice an index register (if you have one) or use fixed address references calculated at compile/link time.
Languages, is popularity dominating engineering?
Started by ●December 12, 2014
Reply by ●December 13, 2014
Reply by ●December 13, 2014
Hi Simon,

On 12/13/2014 6:08 AM, Simon Clubley wrote:
> On 2014-12-13, Don Y <this@is.not.me.com> wrote:
>>
>> Braces really help sort out nesting on if-else.
>
> In my personal coding standards, _everything_ (ie: single statements)
> gets placed between braces in brace orientated languages.

Agreed. It is too difficult to be tricked (e.g., by indent) into THINKING braces exist where they don't. Especially if your coding style tries to minimize newlines. E.g., (my preference): if () { ... } or even if () { ... } vs. if () { ... }

One of the criteria that (on initial exposure) seemed "arbitrary" in my first language design class was "be able to write functions/subroutines on a single page" (which was never formally defined). It doesn't take long to realize why this can be A Good Thing. However, languages can also use this goal to justify being unduly cryptic and overly reliant on syntactic sugar. The same sort of thing can be said for parens in expressions. Being redundant isn't necessarily A Bad Thing.

>> Even though you "know stuff" (about the language, application, etc.)
>> it can't hurt to put that in writing to make sure others also know it.
>
> :-)
>
> My former employer for my day job (embedded work is a hobby for me) has
> just written in a reference that I like to document things. He's right. :-)

I use lots of pointers -- they tend to let me make the code tighter and cut down on resource requirements (information is encoded in the pointer's *position* in addition to the thing it points *at*). So, a big issue (for me) is keeping track of what the pointer is currently referencing.

The biggest downside to adding documentation is there is no way for the compiler (or any other tool) to ensure it is kept up to date AND coincides with what the code is ACTUALLY doing. Many folks get bit because they "debug the *comments*" (i.e., what those CLAIM the code is doing) and not "debug the *code*". Documenting what each statement is trying to do gets to be clutter.
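Returning to the always-brace rule above: it guards against a classic C trap where indentation suggests a statement is controlled by an "if" when it isn't. A minimal sketch (function names are purely illustrative):

```c
#include <assert.h>

/* The indentation suggests both statements are guarded by the "if",
 * but without braces only the first one is. */
int clamp_buggy(int x)
{
    if (x < 0)
        x = 0;
        x += 1;     /* always executes -- NOT part of the if! */
    return x;
}

/* With mandatory braces, the scope of the "if" cannot be misread. */
int clamp_braced(int x)
{
    if (x < 0) {
        x = 0;
        x += 1;
    }
    return x;
}
```

For a non-negative input the two versions disagree: clamp_buggy(5) returns 6 while clamp_braced(5) returns 5 -- exactly the kind of divergence that hides behind "nice" indentation.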
Instead, document what the function/block of code's goal happens to be. Then, salt the code with details to anchor particular lines to that description (easier to maintain -- if someone changes the approach, they don't have to go through and remove/update/replace lots of individual comments but, rather, reformulate the description for the block. Or, elide it entirely if appropriate/lazy).

I now document algorithms in PDFs and *describe* the code (in the sources) in a manner consistent with that (external) documentation. This lets me be more thorough AND draw on alternative media in my presentation. E.g., graphs, illustrations, animations, sound clips, etc. -- things that just aren't practical in a "text" source file.

>> Browse the objects periodically to be sure WYSIWYG.
>
> By this, do you mean using objdump and friends to give the generated code
> a once-over just to make sure you haven't done something that's hopelessly
> inefficient or invalid ?
>
> If yes, it's nice to see I'm not the only one who does this. :-)

Yes (let the compiler produce a .s file that you can peruse. Many will annotate that file -- as much as is practical).

I'm not saying you should *distrust* the compiler. Rather, use this as a means of verifying that what you *thought* was happening was, in fact, the case. E.g., if adding a line of code suddenly makes a dramatic change in the size/complexity/speed of the resulting code, you should wonder "why WAS that the case?". (It is always amusing to see embedded newbies add a printf() and watch the size of their code mushroom: "Yikes! printf() is THAT BIG???")

Similarly, when looking at the .s version, if lots of YOUR code has apparently been elided (optimized away), you should think about the reasons for that and its consequences -- did that code really NOT need to be present, here? Has the compiler seen something that I've missed (i.e., a "D'oh" moment)? Or, have I specified something in a way that allows the compiler to elide it WHEN IT SHOULDN'T HAVE (because I wrote something incorrectly)?
Reply by ●December 13, 2014
Hi Simon,

On 12/13/2014 8:49 AM, Simon Clubley wrote:
> On 2014-12-13, Hans-Bernhard Bröker <HBBroeker@t-online.de> wrote:
>> Am 13.12.2014 um 13:56 schrieb Simon Clubley:
>>> In my case, pulling them out into .bss means it's easy to look at the
>>> linker map and see, at compile time, exactly how much memory is required
>>> for the variables making it far easier and reliable to analyze memory
>>> usage.
>>
>> The problem is that it doesn't just make memory consumption easier to
>> see ... it also makes it larger than it needs to be.  So there's a good
>> chance you'll run out of memory _because_ you wanted to figure out
>> if/when you run out of memory.
>
> OTOH, it's a lot better than having to deal with subtle memory trashing
> errors because your now larger stack has descended into the space
> allocated to .bss (or even .data) and you find out the hard way that
> your code is now too big for the resources on the MCU you are currently
> using.

Statics are a real downer if you're writing reentrant code. You have to ("manually") ensure (by design) that no two consumers can access <whatever> is reliant on that static "at the same time". Even if the static isn't "required" to preserve data between function invocations (e.g., like strtok).

A developer then needs intimate familiarity with every "library" that he calls upon (i.e., code written by the guy in the next cubicle) to ensure he isn't exposing his code to one of those (typical) "intermittents" that you never manage to track down (because it's impractical to reproduce the EXACT conditions that caused it to manifest).

> I prefer to try and find out at compile time if the available resources
> are insufficient rather than have to find out the hard way at run-time.
> I accept what you say about the memory size increasing may be true in
> some cases, but if you are close enough to a resource boundary for this
> to make a difference, then maybe it's time for a larger MCU anyway.

A cheap way of checking stack penetration is to use a "fence" on the stack and reexamine it, periodically.

E.g., during development, my "create_task()" fills the stack with a regular pattern. At each reschedule(), I look at the value of the stack pointer that my context switch will now preserve and:
- determine if it is deeper into the stack than any previously recorded instance
- if so, store that value (deepest_stack_pointer) and
- verify that it is within the range of valid addresses for the memory allocated for the stack (if not, fall into the debugger before the "contamination" spreads, obfuscating the underlying *cause*)

Then, periodically, explore the region *beyond* the stack pointer to see how much of this "regular pattern" has been obliterated *between* reschedule()'s.

One system that I designed allowed me to instrument every function call. So, I could perform these tests in a much finer-grained manner -- on entry and exit from each function. In the larger systems I am working with currently, I track which memory is faulted in for the stack.

> After all, code doesn't always remain static and quite often has
> functionality added to it, so you may hit the resource limit even with
> your stack based approach anyway.

You also have to examine your algorithms to verify that their behavior is appropriately bounded. E.g., I rely on recursive algorithms a lot (simple, elegant). But, have to ensure the constraints governing the recursion are known a priori.

One of my speech synthesizers does a recursive pattern match. But, controls the match (and recursion) by the CONST TEMPLATE in the code and not the VARIABLE INPUT TEXT (that is completely unconstrained). So, I can guarantee the maximum recursion the code will ever experience at compile time regardless of the input that it may encounter "in use".

> I suppose the major thing driving me here is to use development techniques
> which allow me a better chance to find out about these potential issues in
> a controlled deterministic manner as early on in the development process
> as possible.

The first time you have to chase down this sort of "problem" will pay for every precaution you ever take against it in future efforts! :-/

> OTOH, as a hobbyist, I'm not churning these devices out by the thousands
> so there may be a cost tradeoff for you (in terms of using a cheaper
> less resource rich MCU that's a few pence cheaper) that simply doesn't
> exist for me.
>
> If that's true however, I would ask if the additional cost of your
> development time outweighs the savings from the cheaper MCU when you have
> to start debugging run-time resource availability issues.
>
> BTW, in my makefiles (especially the 8-bit target ones) there's a size
> command executed as part of the makefile target so I can see how the
> resources needed are increasing as I add functionality. It's a nice way
> to keep an eye on resource use without any additional manual effort.
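The fill-pattern technique described above can be sketched in portable C. A plain array stands in for a task stack here; stack_paint() plays the role of the poster's create_task() fill, and stack_unused_words() is the periodic check of how much of the pattern survives. All names are illustrative -- a real implementation would operate on the actual task stack, not a demo array:

```c
#include <stddef.h>
#include <stdint.h>

#define STACK_WORDS 64
#define FILL_WORD   0xDEADBEEFu   /* pattern unlikely to occur by chance */

static uint32_t stack_area[STACK_WORDS];   /* stand-in for a task stack */

/* At task creation: paint the whole stack with the fill pattern. */
void stack_paint(void)
{
    for (size_t i = 0; i < STACK_WORDS; i++)
        stack_area[i] = FILL_WORD;
}

/* Periodic check: the stack grows downward from the high end of the
 * array, so the untouched region (still holding FILL_WORD) sits at the
 * low end.  Count it to get a high-water mark for stack usage. */
size_t stack_unused_words(void)
{
    size_t n = 0;
    while (n < STACK_WORDS && stack_area[n] == FILL_WORD)
        n++;
    return n;
}

/* Demo: simulate a task that dirtied the deepest 10 words it used. */
size_t demo_high_water(void)
{
    stack_paint();
    for (size_t i = STACK_WORDS - 10; i < STACK_WORDS; i++)
        stack_area[i] = 0;
    return stack_unused_words();
}
```

If the unused count ever reaches zero, the fence has been breached -- time, as the poster says, to fall into the debugger before the contamination spreads.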
Reply by ●December 13, 2014
Paul Rubin wrote:
> Les Cargill <lcargill99@comcast.com> writes:
>> The subset of 'C' you really need is rather small:
>
> [interesting list, some comments]
>
>> - Resource Acquisition Is Initialization. Holds for 'C', too. Use
>> ternary operators or "constructors" to achieve this.
>
> I don't understand this: RAII is a C++ idiom that relies on C++'s
> exceptions calling object destructors

Sort of. I don't like the idea that RAII is only specific to C++ even though that's where it came from. The point of it is to make sure everything is properly initialized to a reasonable value. I understand RAII was developed to close this hole in C++, but I think there's a more general principle inside that.

> in case of abnormal return from
> some lower level of the program.  How do you do something comparable in C?

Using 'C' idioms:

  const double numerator   = ((x*z)+y);
  const double denominator = (....);
  // we may want range checking to return error codes or something here.
  const double ratio = (fabs(denominator) > t_epsi) ?
                           (numerator/denominator) : LARGENUM;

The point is to break calculations, especially those that invoke division, into manageable chunks for clarity and to control divide-by-zero problems. Have the declarations tell the story of how the ratio is derived one step at a time.

-- or --

  char *thng(....)
  {
      static char beast[n] = {0};
      sprintf(beast,...);
      ...
      if (cond) return NULL;
      return beast;
  }

The point is to use one-time rules to manage what would be exceptions in C++. What you want of this is to have all unhappy paths be completely covered by unit tests.

>> - Only use "for" loops for integer-index loops.
>> - Use while (...) { ... if (...) break; } for everything else.
>
> Hmm, ok a lot of the time, though idioms like
> for (p = list_head; p != NULL; p = p->next) { ... }
> seem perfectly fine.

I am being very specific to 'C' here. It's an iterator, so it goes well with the integer-index approach.
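The guarded-division idiom above, made compilable. T_EPSI, LARGENUM, and the denominator expression are filled in purely for illustration (the original elides them):

```c
#include <math.h>

#define T_EPSI   1e-9
#define LARGENUM 1e30    /* sentinel for "ratio undefined" */

/* Each value is a const initialized exactly once; the divide-by-zero
 * guard is explicit in the initializer of `ratio`. */
double safe_ratio(double x, double y, double z)
{
    const double numerator   = (x * z) + y;
    const double denominator = x - z;          /* illustrative */
    const double ratio = (fabs(denominator) > T_EPSI)
                             ? (numerator / denominator)
                             : LARGENUM;
    return ratio;
}
```

E.g., safe_ratio(2.0, 1.0, 1.0) yields 3.0, while a vanishing denominator (x == z) yields the LARGENUM sentinel instead of a division by zero.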
What I've found is that "for every time you need to use allocated pointers, there is a cleaner implementation using static arrays and indexing into them." You need good array bounds checking, but I find that less tiresome than exceptions.

>> - Early Return Is The Right Way; enumerate and prioritize constraint
>> testing in this way. Happy path at the bottom...
>
> OK, but what do you do with the return code in the error case?

It depends. Default signature of functions is void return. I prefer to have function returns be used for a list of constraint violations until the last one, which is the happy path.

>> - Serialization of internal state is the Path to True Enlightenment
>
> This is interesting and I haven't seen it put like that before.  I'll
> give it some thought.  A currently trendy practice (functional
> programming) is to minimize internal state and localize it to the extent
> possible, segregating stateful from stateless functions using the type
> system in the case of languages like Haskell.

This is another fine approach, but it's not one I think you can use as much in 'C'. I don't automatically assume "stateful is bad"; it's just something that must be managed properly. That probably means "kept to a minimum."

>> ( properly factored code cannot be understood statically ).
>
> What do you mean by that?  It sounds like the way OOP obscures control
> flow.

Not so much.

>> - Callbacks rule when you need variant behavior.
>
> You mean instead of an old fashioned switch statement?

Mostly, yes. But suppose you have a configuration option to use metric instead of English units. I find it somewhat cleaner to have the "metric" version of the calculation separated from the English version, and switched by a callback.

>> Serialization of callback state is part of the Path of True
>> Enlightenment.
>
> Not sure what you mean by that.
>
>> - Only allocate utility counters ( i, j, k ) on the stack. Use static
>> for everything else you can.

I would also add "intermediate calculation values" to the list.

> Why do you say this?  Just to avoid having to analyze stack depth?

You can often allocate buffers and intermediate values statically and this helps with serializing for testing.

-- Les Cargill
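The "static arrays and indexing instead of allocated pointers" point above can be sketched concretely: a linked structure built from a static pool and integer indices. The whole state is one flat, serializable block, and every access is trivially bounds-checkable. Names and sizes are illustrative:

```c
#include <assert.h>

#define NNODES 8
#define NIL    (-1)

struct node {
    int value;
    int next;          /* index of the next node, or NIL */
};

static struct node pool[NNODES];

/* Walk the chain starting at `head`, summing values. */
int sum_chain(int head)
{
    int sum = 0;
    for (int i = head; i != NIL; i = pool[i].next) {
        assert(i >= 0 && i < NNODES);   /* cheap bounds check */
        sum += pool[i].value;
    }
    return sum;
}

/* Build a 3-node chain 1 -> 4 -> 9 in the pool; returns the head index. */
int demo_build(void)
{
    pool[0] = (struct node){ .value = 1, .next = 2 };
    pool[2] = (struct node){ .value = 4, .next = 5 };
    pool[5] = (struct node){ .value = 9, .next = NIL };
    return 0;
}
```

Because the pool is a contiguous array of plain structs with no pointers, dumping it to a file (or a test fixture) and reloading it preserves the structure intact -- which is the serialization payoff being argued for.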
Reply by ●December 13, 2014
Hans-Bernhard Bröker wrote:
> Am 13.12.2014 um 13:56 schrieb Simon Clubley:
>> On 2014-12-13, Paul Rubin <no.email@nospam.invalid> wrote:
>>> Les Cargill <lcargill99@comcast.com> writes:
>>>> - Only allocate utility counters ( i, j, k ) on the stack. Use static
>>>> for everything else you can.
>
>>> Why do you say this?  Just to avoid having to analyze stack depth?
>
>> I've done this myself in tightly constrained environments (8-bit MCUs)
>> even though it goes against all my natural instincts for nicely modular
>> coding with variables only defined in the scope(s) they are needed.
>
> I'm afraid your reasoning is backwards in this case.  It's precisely in
> memory-starved environments that you can not afford doing this.  Making
> variables static when they don't need to be blows up memory consumption
> considerably.

This varies.

> The stack isn't your enemy.  It's the cheapest memory usage conservation
> technology there is, so use it.

Mmmmm... maybe. I find that 8 bit programs will lend themselves to a small number of globals to be used. It, of course, depends.

This being said, I haven't seen an 8 bit micro in some time. Even for PIC, they've been 16 or 32 bit and not all that memory constrained. And the programs on them are small enough that you can more or less keep the state of the thing in your head.

>> In my case, pulling them out into .bss means it's easy to look at the
>> linker map and see, at compile time, exactly how much memory is required
>> for the variables making it far easier and reliable to analyze memory
>> usage.
>
> The problem is that it doesn't just make memory consumption easier to
> see ... it also makes it larger than it needs to be.  So there's a good
> chance you'll run out of memory _because_ you wanted to figure out
> if/when you run out of memory.

That's true. You may have to play a little game with yourself in memory-constrained environments.

-- Les Cargill
Reply by ●December 13, 2014
Grant Edwards wrote:
> On 2014-12-13, Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
>> On 2014-12-13, Hans-Bernhard Bröker <HBBroeker@t-online.de> wrote:
>>> Am 13.12.2014 um 13:56 schrieb Simon Clubley:
>>>> In my case, pulling them out into .bss means it's easy to look at the
>>>> linker map and see, at compile time, exactly how much memory is required
>>>> for the variables making it far easier and reliable to analyze memory
>>>> usage.
>>>
>>> The problem is that it doesn't just make memory consumption easier to
>>> see ... it also makes it larger than it needs to be.  So there's a good
>>> chance you'll run out of memory _because_ you wanted to figure out
>>> if/when you run out of memory.
>>
>> OTOH, it's a lot better than having to deal with subtle memory trashing
>> errors because your now larger stack has descended into the space
>> allocated to .bss (or even .data) and you find out the hard way that
>> your code is now too big for the resources on the MCU you are currently
>> using.
>
> Making everything static creates all sorts of restrictions: you have
> to write separate versions of functions for foreground and interrupt
> use, you can't use threads, you can't use coroutines, you can't use
> recursion, etc.

I've never had a lick of trouble with it. YMMV.

Coroutines in particular are "run to completion". Likewise, threads generally need to have as spare an interface as possible -- the main program only writes to the buffer, the thread only reads. Semaphores may be necessary, but usually aren't. Having interrupt routines have their own buffers is pretty good practice anyway. There are, of course, lots of approaches. Enforcing producer/consumer roles is pretty important if you use shared buffers.

My default habit is to declare it static, then move to a malloc/free or stack regime if that's all that can work and I can prove out that it'll never leak. The point is to not reach for dynamic memory right off; escalate to it.

-- Les Cargill
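The producer/consumer discipline described above -- one side only writes the head index, the other only writes the tail -- is exactly what makes a lock-free single-producer/single-consumer queue work on a static buffer. A sketch with illustrative names; on real hardware the indices would need `volatile` or C11 atomics:

```c
#include <stdbool.h>
#include <stdint.h>

#define QSIZE 16u               /* power of two, so wraparound is a mask */

static uint8_t  qbuf[QSIZE];
static unsigned qhead;          /* written only by the producer */
static unsigned qtail;          /* written only by the consumer */

bool q_put(uint8_t b)           /* producer side (e.g., an ISR) */
{
    unsigned next = (qhead + 1u) & (QSIZE - 1u);
    if (next == qtail)
        return false;           /* full: caller drops or counts overruns */
    qbuf[qhead] = b;
    qhead = next;
    return true;
}

bool q_get(uint8_t *b)          /* consumer side (e.g., the main loop) */
{
    if (qtail == qhead)
        return false;           /* empty */
    *b = qbuf[qtail];
    qtail = (qtail + 1u) & (QSIZE - 1u);
    return true;
}

/* Convenience wrapper for demonstration: -1 when empty. */
int q_get_val(void)
{
    uint8_t b;
    return q_get(&b) ? (int)b : -1;
}
```

Because each index has exactly one writer, no semaphore is needed for correctness of the queue itself -- which is the "spare interface" being advocated.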
Reply by ●December 13, 2014
Don Y wrote:> Hi Simon, > > On 12/13/2014 8:49 AM, Simon Clubley wrote: >> On 2014-12-13, Hans-Bernhard Bröker <HBBroeker@t-online.de> wrote: >>> Am 13.12.2014 um 13:56 schrieb Simon Clubley: >>>> In my case, pulling them out into .bss means it's easy to look at the >>>> linker map and see, at compile time, exactly how much memory is >>>> required >>>> for the variables making it far easier and reliable to analyze memory >>>> usage. >>> >>> The problem is that it doesn't just make memory consumption easier to >>> see ... it also makes it larger than it needs to be. So there's a good >>> chance you'll run out of memory _because_ you wanted to figure out >>> if/when you run out of memory. >> >> OTOH, it's a lot better than having to deal with subtle memory trashing >> errors because your now larger stack has descended into the space >> allocated to .bss (or even .data) and you find out the hard way that >> your code is now too big for the resources on the MCU you are currently >> using. > > Statics are a real downer if you're writing reentrant code. You have to > ("manually") ensure (by design) that no two consumers can access <whatever> > is reliant on that static "at the same time". Even if the static isn't > "required" to preserve data between function invocations (e.g., like > strtok). > > A developer then needs intimate familiarity with every "library" that he > calls upon (i.e., code written by the guy in the next cubicle) to ensure > he isn't exposing his code to one of those (typical) "intermittents" that > you never manage to track down (because its impractical to reproduce the > EXACT conditions that caused it to manifest). >I just wonder what you guys are doing where you have these problems. :) I wouldn't use too many statics in *library* code. This being said, the 'C' library uses them all over the place. Obviously, if you have contention over a memory object/reentrancy issues, you can't do this. But mainly it's about each ... 
thread/routine suite having its own memory - and not using dynamic memory when you don't need it. And when in doubt, use a semaphore.

Arrange the larger structure of the piece to where things interact minimally and you'll have no problems. *This* buffer is only used for *this* purpose. That's part of the point. If you're memory constrained, you just have to use stack or globals. But hopefully, the problem to be solved is enough smaller that this doesn't render the thing incomprehensible.

Systems I've seen lately, each "thread" is pretty much completely isolated from other "threads" except for a few variables that manage interaction. You can "prove out" the interface with grep and a piece of paper. When that gets too messy, you add interface routines.

>> I prefer to try and find out at compile time if the available resources
>> are insufficient rather than have to find out the hard way at run-time.
>> I accept what you say about the memory size increasing may be true in
>> some cases, but if you are close enough to a resource boundary for this
>> to make a difference, then maybe it's time for a larger MCU anyway.
>
> A cheap way of checking stack penetration is to use a "fence" on the stack
> and reexamine it, periodically.

That's somewhat unreliable.

> E.g., during development, my "create_task()" fills the stack with a regular
> pattern.  At each reschedule(), I look at the value of the stack pointer
> that my context switch will now preserve and:
> - determine if it is deeper into the stack than any previously recorded
> instance
> - if so, store that value (deepest_stack_pointer) and
> - verify that it is within the range of valid addresses for the memory
> allocated for the stack (if not, fall into the debugger before the
> "contamination" spreads, obfuscating the underlying *cause*)
> Then, periodically, explore the region *beyond* the stack pointer
> to see how much of this "regular pattern" has been obliterated *between*
> reschedule()'s.

Oy.
I think you're explaining exactly why I prefer the way I do it :) I'll also make sure that plenty of stack is allocated; over do it.> One system that I designed allowed me to instrument every function call. > So, I could perform these tests in a much finer-grained manner -- on entry > and exit from each function. > > In the larger systems I am working with currently, I track which memory is > faulted in for the stack. > >> After all, code doesn't always remain static and quite often has >> functionality added to it, so you may hit the resource limit even with >> your stack based approach anyway. > > You also have to examine your algorithms to verify that their behavior > is appropriately bounded. E.g., I rely on recursive algorithms a lot > (simple, elegant). But, have to ensure the constraints governing the > recursion are known a priori. >Ah, Well, I only use recursion very sparingly if at all.> One of my speech synthesizers does a recursive pattern match. But, > controls > the match (and recursion) by the CONST TEMPLATE in the code and not the > VARIABLE INPUT TEXT (that is completely unconstrained). So, I can > guarantee > the maximum recursion the code will ever experience at compile time > regardless > of the input that it may encounter "in use". > >> I suppose the major thing driving me here is to use development >> techniques >> which allow me a better chance to find out about these potential >> issues in >> a controlled deterministic manner as early on in the development process >> as possible. > > The first time you have to chase down this sort of "problem" will pay for > every precaution you ever take against it in future efforts! :-/ >I haven't seen a stack overflow in decades, excepting where I'm porting code.>> OTOH, as a hobbyist, I'm not churning these devices out by the thousands >> so there may be a cost tradeoff for you (in terms of using a cheaper >> less resource rich MCU that's a few pence cheaper) that simply doesn't >> exist for me. 
>> >> If that's true however, I would ask if the additional cost of your >> development time outweighs the savings from the cheaper MCU when you have >> to start debugging run-time resource availability issues. >> >> BTW, in my makefiles (especially the 8-bit target ones) there's a size >> command executed as part of the makefile target so I can see how the >> resources needed are increasing as I add functionality. It's a nice way >> to keep an eye on resource use without any additional manual effort. >-- Les Cargill
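Don's point above about bounding recursion by the constant template rather than the unconstrained input can be illustrated with a toy '*' matcher: each recursive call advances the template by one character, so the maximum depth is the template's length, known at compile time, no matter what text arrives at run time. This is a simplified sketch, not the poster's synthesizer code:

```c
/* Match `text` against `tmpl`, where '*' matches any run of characters
 * and '?' matches any single character.  The '*' case iterates over the
 * text and recurses only on the REST of the template, so recursion
 * depth is bounded by strlen(tmpl) regardless of the input text. */
int match(const char *tmpl, const char *text)
{
    if (*tmpl == '\0')
        return *text == '\0';
    if (*tmpl == '*') {
        for (const char *t = text; ; t++) {
            if (match(tmpl + 1, t))     /* template shrinks: depth bounded */
                return 1;
            if (*t == '\0')
                return 0;
        }
    }
    if (*text != '\0' && (*tmpl == '?' || *tmpl == *text))
        return match(tmpl + 1, text + 1);
    return 0;
}
```

Since the template is a compile-time constant in the application, the worst-case stack use of match() can be budgeted up front -- the property Don is relying on.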
Reply by ●December 13, 2014
On Fri, 12 Dec 2014 12:02:32 -0800 (PST), Ed Prochak <edprochak@gmail.com> wrote:

>As I began my career in software and systems,
>choosing a programming language was at times serious.
>Over the years it seems that choosing a programming language has become:
>what is popular (or perceived popular by management).

One should consider the expected lifetime of the software. If the software's expected lifetime is one or more decades, one must think about the number of competent programmers available at the end of that period. Using some exotic language, or something gaining popularity rapidly (and possibly falling off as quickly), would be a risk. Use some mainstream language (such as C/C++) and there will still be competent programmers for a few decades.

I haven't done COBOL since the Y2K issues, but still encounter Fortran applications written two decades ago, with the users wondering what to do during the next decade and when to rewrite them; thus the existing code base needs maintenance during the next 0-10 years. If those applications had been written in some exotic language, or using some special vendor-specific extensions, maintenance would become harder each year.
Reply by ●December 13, 2014
On 12/13/2014 1:48 PM, Les Cargill wrote:
> Don Y wrote:
>> Hi Simon,
>>
>> On 12/13/2014 8:49 AM, Simon Clubley wrote:
>>> On 2014-12-13, Hans-Bernhard Bröker <HBBroeker@t-online.de> wrote:
>>>> Am 13.12.2014 um 13:56 schrieb Simon Clubley:
>>>>> In my case, pulling them out into .bss means it's easy to look at the
>>>>> linker map and see, at compile time, exactly how much memory is
>>>>> required
>>>>> for the variables making it far easier and reliable to analyze memory
>>>>> usage.
>>>>
>>>> The problem is that it doesn't just make memory consumption easier to
>>>> see ... it also makes it larger than it needs to be.  So there's a good
>>>> chance you'll run out of memory _because_ you wanted to figure out
>>>> if/when you run out of memory.
>>>
>>> OTOH, it's a lot better than having to deal with subtle memory trashing
>>> errors because your now larger stack has descended into the space
>>> allocated to .bss (or even .data) and you find out the hard way that
>>> your code is now too big for the resources on the MCU you are currently
>>> using.
>>
>> Statics are a real downer if you're writing reentrant code.  You have to
>> ("manually") ensure (by design) that no two consumers can access <whatever>
>> is reliant on that static "at the same time".  Even if the static isn't
>> "required" to preserve data between function invocations (e.g., like
>> strtok).
>>
>> A developer then needs intimate familiarity with every "library" that he
>> calls upon (i.e., code written by the guy in the next cubicle) to ensure
>> he isn't exposing his code to one of those (typical) "intermittents" that
>> you never manage to track down (because it's impractical to reproduce the
>> EXACT conditions that caused it to manifest).
>
> I just wonder what you guys are doing where you have these problems. :)

Simple: dealing with others that aren't as skilled/disciplined/motivated/etc. Usually, folks from desktop environments where they don't have to worry much about the code they are writing.
Over the years, I have learned to write my code so others can't break *it*. Tired of having to prove that *my* code is working properly -- by finding the bug in someone *else's* code (e.g., accessing a private struct in my code that isn't exported by the interface; failing to observe the contract for a particular piece of code, etc.)

> I wouldn't use too many statics in *library* code. This being said, the 'C'
> library uses them all over the place.

Most functions with internal state (e.g., strtok, asctime/localtime, et al.) have obvious workarounds (foo_r, etc.). Still others *obviously* need to be munged to support reentrancy/multithreaded use (errno, malloc, signal et al., etc.)

I've encountered things like printf() that choke badly (due to static buffers used for conversion). And, even floating point support ("helper routines") that precluded use in multithreaded environments (i.e., you have to treat the state of those helpers AS IF they were registers in an FPU).

> Obviously, if you have contention over a memory object/reentrancy issues, you
> can't do this. But mainly it's about each ... thread/routine
> suite having its own memory - and not using dynamic memory when you don't need
> it. And when in doubt, use a semaphore.

If the language doesn't inherently (explicitly) support concurrency, those hooks can be costly. E.g., up to and including a trap. If the developer doesn't *know* he's being screwed (because the interface for the library doesn't *disclose* this sort of detail!), he won't know to protect "shared objects" (and the objects won't know how to protect themselves).

> Arrange the larger structure of the piece to where things interact minimally
> and you'll have no problems. *This* buffer is only used
> for *this* purpose. That's part of the point.

Then why have it persistent? It only needs to be around for *that* purpose so let it go away afterwards.

> If you're memory constrained, you just have to use stack or globals.

Globals are The Root of All (most) Evil.
You always want to control the exposure of every datum/object. E.g., if you (a function/subr) have to rely on me to give you a pointer (reference) to an object, then you won't stomp on it without my knowledge of that.> But hopefully, the problem to be solved is enough smaller that this > doesn't render the thing incomprehensible. > > Sytstems I've seen lately, each "thread" is pretty much completely isolated > form other "threads" except for a few variables that manage interaction. You > can "prove out" the interface with grep and a piece of paper. When that gets > too messy, you add interface routines.I am a huge fan of true isolation (protection domains). It makes coding and debugging *so* much easier -- step out of line and the OS brings down the hammer on you! But, this is a more costly feature (e.g., processes vs. threads). The "easy way out" is to adopt things like C-S relationships between producers and consumers. *Formalize* their interactions (at some cost for the interface). In my current designs, most interfaces can (potentially) span processor boundaries. This is a blessing, of sorts, in that it forces the ENTIRE interface to be specified in the IDL. There are no "back doors" whereby data can leak *around* the interface (you simply don't have physical access to it unless made available via the IDL!)>>> I prefer to try and find out at compile time if the available resources >>> are insufficient rather than have to find out the hard way at run-time. >>> I accept what you say about the memory size increasing may be true in >>> some cases, but if you are close enough to a resource boundary for this >>> to make a difference, then maybe it's time for a larger MCU anyway. >> >> A cheap way of checking stack penetration is to use a "fence" on the stack >> and reexamine it, periodically. > > That's somewhat unreliable.It's not intended to be the cat's meow. 
It's meant to give you an idea of what your stack penetration is so you can
verify it is on a par with what you expected *or* completely out of whack.
When I write ASM code, in addition to describing the call/return interface
in terms of "changes to the machine's state" (registers going in vs. out,
memory altered, etc.) I also quantify the maximum stack penetration (as a
function of inputs, if thusly related)

>> E.g., during development, my "create_task()" fills the stack with a
>> regular pattern. At each reschedule(), I look at the value of the stack
>> pointer that my context switch will now preserve and:
>> - determine if it is deeper into the stack than any previously recorded
>>   instance
>> - if so, store that value (deepest_stack_pointer) and
>> - verify that it is within the range of valid addresses for the memory
>>   allocated for the stack (if not, fall into the debugger before the
>>   "contamination" spreads, obfuscating the underlying *cause*)
>> Then, periodically, explore the region *beyond* the stack pointer
>> to see how much of this "regular pattern" has been obliterated *between*
>> reschedule()'s.
>
> Oy. I think you're explaining exactly why I prefer the way I do it :)
> I'll also make sure that plenty of stack is allocated; over do it.

I live with tightly constrained resources. It's important for me to size
large objects (e.g., stacks, heaps, etc.) to fit their actual *needs* and
not just "throw memory at them".

E.g., my memory allocator lets me "trim" allocations (instead of being
forced to release them in the same chunks that they were allocated) as well
as *extend* existing allocations (the policy that the allocator uses can be
specified in the allocation request -- e.g., find me a piece of memory that
adjoins *this* piece). This allows me to "move" memory between a consumer
and producer.
E.g., arrange for the producer and consumer's memory to be contiguous and
"free" it from one while "alloc"ing it to the other.

>>> After all, code doesn't always remain static and quite often has
>>> functionality added to it, so you may hit the resource limit even with
>>> your stack based approach anyway.
>>
>> You also have to examine your algorithms to verify that their behavior
>> is appropriately bounded. E.g., I rely on recursive algorithms a lot
>> (simple, elegant). But, have to ensure the constraints governing the
>> recursion are known a priori.
>
> Ah, well, I only use recursion very sparingly, if at all.

The iterative/recursive duality implies you can always avoid it. But, often
iterative solutions are much more difficult to "get right" than their
equivalent recursive solutions (too much manual housekeeping).

>>> I suppose the major thing driving me here is to use development
>>> techniques which allow me a better chance to find out about these
>>> potential issues in a controlled deterministic manner as early on in
>>> the development process as possible.
>>
>> The first time you have to chase down this sort of "problem" will pay
>> for every precaution you ever take against it in future efforts! :-/
>
> I haven't seen a stack overflow in decades, excepting where I'm porting
> code.

Then you haven't been tasked with trimming your stack to fit the needs of
the routines using it! Or your heaps. :>

One of the "problems" with "embedded" is it groups a wide variety of
application domains and implementation constraints into a single subject.
E.g., in the same application/codebase, I have devices that use Q10.14
while others use "Big Rationals" based on the resources available to each.
An unfortunate consequence of many languages is that you're pretty much
"stuck" with the types that the language gives you.
This forces you to express "other" things in unnatural ways -- that are
more prone to error (e.g., it would be nice to be able to use infix
notation on ints, BCDs, floats, Qs, bigrats, decimals, etc. -- even
INTERCHANGEABLY!). If you're working in a resource rich environment, this
isn't an issue. But, as you get more resource constrained, you tend to
*need* to do these more often *and* have less capability to do them
portably/safely/intuitively.
Reply by ●December 13, 2014
On 13.12.2014 at 21:48, Les Cargill wrote:

> I wouldn't use too many statics in *library* code. This being said, the
> 'C' library uses them all over the place.

It's really not that bad, actually. Yes, there are a few such things, but
mostly in functions that I, for one, have never felt tempted to use in
embedded code. And most of those have been deprecated by thread-safe
alternatives since back then.

> Arrange the larger structure of the piece to where things interact
> minimally and you'll have no problems. *This* buffer is only used
> for *this* purpose.

But it'll still occupy resources even while the program is doing *that*,
which has nothing to do with *this* whatsoever. That's where this approach
becomes wasteful.

> If you're memory constrained, you just have to use stack or globals.

I do wonder how you expect the distinction between statics and globals to
have any effect at all regarding memory size constraints.







