
Languages, is popularity dominating engineering?

Started by Ed Prochak December 12, 2014
On 12/14/2014 4:33 AM, tim..... wrote:
> "Don Y" <this@is.not.me.com> wrote in message
> news:m6i1dm$d2c$1@speranza.aioe.org...
>> Hi Simon,
>>
>> On 12/13/2014 6:08 AM, Simon Clubley wrote:
>>> On 2014-12-13, Don Y <this@is.not.me.com> wrote:
>>>>
>>>> Braces really help sort out nesting on if-else.
>>>
>>> In my personal coding standards, _everything_ (ie: single statements)
>>> gets placed between braces in brace orientated languages.
>>
>> Agreed. It is too difficult to be tricked (e.g., by indent) into THINKING
>> braces exist where they don't. Especially if your coding style tries to
>> minimize newlines.
>
> Please explain
>
> how much does a new line cost on your machine?
Newlines cause your source to take up more vertical space (on a display device).

Now, for the moment, consider how easy it is to read this compared to the preceding sentence... Imagine if its contents included structural syntactic elements that affected its meaning (besides just a bunch of sequential words). E.g., without scrolling back up, how many *words* did I type? How many were on contiguous lines? Any punctuation/capitalization errors in that sentence?

(No, *this* isn't important. But, other "little details" of comparably simple complexity -- locations of braces, parens, etc. -- *do* affect source code's meaning.)
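To make the trap concrete, a minimal C sketch (names invented for illustration) of being tricked by indent into THINKING braces exist:

    #include <stdio.h>

    enum { OK = 0 };
    static int retry_count;

    static void log_error(int status) { printf("error %d\n", status); }

    static void check(int status)
    {
        /* Without braces, indentation lies: only the first statement
           is controlled by the if; log_error() runs on EVERY call. */
        if (status != OK)
            retry_count++;
            log_error(status);      /* looks guarded -- isn't */

        /* With braces around everything, what you see is what runs: */
        if (status != OK) {
            retry_count++;
            log_error(status);
        }
    }

    int main(void) { check(1); return 0; }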
On 2014-12-14, tim..... <tims_new_home@yahoo.co.uk> wrote:
> "Don Y" <this@is.not.me.com> wrote in message
> news:m6i1dm$d2c$1@speranza.aioe.org...
>> Hi Simon,
>>
>> On 12/13/2014 6:08 AM, Simon Clubley wrote:
>>> On 2014-12-13, Don Y <this@is.not.me.com> wrote:
>>>>
>>>> Braces really help sort out nesting on if-else.
>>>
>>> In my personal coding standards, _everything_ (ie: single statements)
>>> gets placed between braces in brace orientated languages.
>>
>> Agreed. It is too difficult to be tricked (e.g., by indent) into THINKING
>> braces exist where they don't. Especially if your coding style tries to
>> minimize newlines.
>
> Please explain
>
> how much does a new line cost on your machine?
I wonder if Don's goal is to try and collapse his code so that more of it
appears on the screen at the same time.

In my case, my personal brace style is the Whitesmiths brace style so I am
pretty much the opposite of Don here. However, I still wrap braces around
everything because I think it makes things clearer even at the expense of
getting slightly less code on the screen at any one time.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
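(For anyone unfamiliar, Whitesmiths style indents the braces along with the statements they enclose -- a minimal sketch, names invented:)

    enum { OK = 0 };
    static int retry_count;
    static void log_error(int status) { (void)status; }  /* stub */

    static void check(int status)
        {
        if (status != OK)
            {
            retry_count++;
            log_error(status);
            }
        }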
On 14.12.2014 at 10:38, upsidedown@downunder.com wrote:
> On Sun, 14 Dec 2014 01:21:13 +0100, Hans-Bernhard Bröker <HBBroeker@t-online.de> wrote:
>> Not really. Supposing you start out with a correct program and
>> compiler, making previously automatic variables static can never reduce
>> memory consumption. It can only increase it.
>
> I have been working with computers that did not even have any concept
> of stack.
Which is why I chose the wording "automatic variables", rather than "stack variables".
> Of course, these low memory locations can be reused as local variables
> in multiple functions, as long as they do not directly or indirectly
> call each other. I do not see why a static allocation would be any
> larger than stack allocation.
Because that's not the meaning of "static" Les, Simon and myself were
talking about. You're talking about the allocation schemes for _automatic_
variables employed by compilers for essentially stack-less machines (these
days the 8051 may be the most prominent example). We were talking about
flagging those variables _static_, at source level.

> In processors without stack pointer relative addressing modes, this
> will significantly reduce the code size.
Of course it will. If you allow the compiler to apply it. Making them "static" at source level forbids this optimization, causing excess RAM usage.
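A minimal C sketch of that distinction (function and buffer names invented):

    /* On a stack-less part (the classic 8051, say), a compiler may
       overlay the storage for automatic locals of functions that can
       never be active at the same time: */
    void f(void)
    {
        char buf_a[32];     /* automatic: may share RAM with buf_b */
        /* ... */
    }

    void g(void)            /* never calls, nor is called by, f() */
    {
        char buf_b[32];     /* automatic: may share RAM with buf_a */
        /* ... */
    }

    /* Write `static char buf_a[32];` instead, and the value must now
       survive between calls -- so each buffer owns its RAM permanently
       and the overlay optimization is forbidden. */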
Hi Simon,

On 12/14/2014 6:28 AM, Simon Clubley wrote:

>>>>> Braces really help sort out nesting on if-else.
>>>>
>>>> In my personal coding standards, _everything_ (ie: single statements)
>>>> gets placed between braces in brace orientated languages.
>>>
>>> Agreed. It is too difficult to be tricked (e.g., by indent) into THINKING
>>> braces exist where they don't. Especially if your coding style tries to
>>> minimize newlines.
>>
>> Please explain
>>
>> how much does a new line cost on your machine?
>
> I wonder if Don's goal is to try and collapse his code so that more of it
> appears on the screen at the same time.
Exactly. As should have been clear from my (elided) comment:

"One of the criteria that (on initial exposure) seemed "arbitrary" in my first language design class was "be able to write functions/subroutines on a single page" (which was never formally defined). It doesn't take long to realize why this can be A Good Thing."

And, expounding on that to indicate that syntactic sugar that tries to cram *too* much onto a single line can be counterproductive. The example I gave in another reply, up-thread:

    s := <- cmd;
    (n, str) := ops->tokenize(s, "\t;, \n");
    case hd str {
    "foo" =>
        spawn do_foo(tl str);
    "baz" or "bar" =>
        do_bar(str);
    "move" =>
        x = int hd tl str;
        y = int hd tl tl str;
        rest = tl tl tl str;
        move(x,y);
        eval(rest);
    * =>
        die();
    }

is very expressive and "tight". But, intimidating and error prone (omit a '>' and "=>" becomes '=', etc.)
> In my case, my personal brace style is the Whitesmiths brace style so I
> am pretty much the opposite of Don here. However, I still wrap braces
> around everything because I think it makes things clearer even at the
> expense of getting slightly less code on the screen at any one time.
A similar question would be: "how much do braces cost on your machine?"

When writing (code or prose), I tend to use large screens ("windows") so I can opt for a "longer (and usually wider!) page size". With scrolling back and forth, it's just too easy to miss an indent level of a structure or a line of code at the "page crease", etc.
On 2014-12-13, Don Y <this@is.not.me.com> wrote:
> Hi Simon,
>
> On 12/13/2014 8:49 AM, Simon Clubley wrote:
>> On 2014-12-13, Hans-Bernhard Bröker <HBBroeker@t-online.de> wrote:
>>> On 13.12.2014 at 13:56, Simon Clubley wrote:
>>>> In my case, pulling them out into .bss means it's easy to look at the
>>>> linker map and see, at compile time, exactly how much memory is required
>>>> for the variables making it far easier and reliable to analyze memory
>>>> usage.
>>>
>>> The problem is that it doesn't just make memory consumption easier to
>>> see ... it also makes it larger than it needs to be. So there's a good
>>> chance you'll run out of memory _because_ you wanted to figure out
>>> if/when you run out of memory.
>>
>> OTOH, it's a lot better than having to deal with subtle memory trashing
>> errors because your now larger stack has descended into the space
>> allocated to .bss (or even .data) and you find out the hard way that
>> your code is now too big for the resources on the MCU you are currently
>> using.
>
> Statics are a real downer if you're writing reentrant code. You have to
> ("manually") ensure (by design) that no two consumers can access <whatever>
> is reliant on that static "at the same time". Even if the static isn't
> "required" to preserve data between function invocations (e.g., like strtok).
>
> A developer then needs intimate familiarity with every "library" that he
> calls upon (i.e., code written by the guy in the next cubicle) to ensure
> he isn't exposing his code to one of those (typical) "intermittents" that
> you never manage to track down (because its impractical to reproduce the
> EXACT conditions that caused it to manifest).
Don't forget that in the message which started this sub-thread I mentioned this was for 8 bit MCUs with limited memory resources available, and hence the goal for me is to try and use development techniques which expose problems as early on in the process as possible and do it in as predictable and deterministic a way as possible.

In 32 bit MCUs with more resources available I go for a much more traditional stack based approach and it's only the big buffers I tend to keep as static. In even larger 32 bit MCUs even the large buffers tend to get dynamically created at run-time in my code.

I am also aware that as a hobbyist I am not building thousands of devices, so you may have to make tradeoffs I don't, such as saving a few pence by using a more resource limited MCU; hence a technique which may increase memory consumption slightly may not be available to you.

However, there was a question I asked earlier, and that is: how do the costs of debugging a stack trashing .bss/.data compare with the costs of using a slightly larger MCU and different development techniques in the first place?

BTW, I am also very aware that this technique, designed to produce reliable code on small resource constrained MCUs, is the same technique which can produce hard to maintain code on much larger systems.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
On 14.12.2014 at 14:55, Simon Clubley wrote:

> the goal for me is to try and use development techniques which expose
> problems as early on in the process as possible and do it in as
> predictable and deterministic a way as possible.
The problem remains that your approach doesn't just expose those problems: it makes them worse. There's a common turn of phrase about a cure that's worse than the disease. In your case, even the diagnostic is worse than the disease.
> However, there was a question I asked earlier, and that is: how do the
> costs of debugging a stack trashing .bss/.data compare with the costs
> of using a slightly larger MCU and different development techniques
> in the first place?
I had answered that, but apparently that message didn't make it out into the net:

That's making the assumption that using automatic variables where they can be used has to increase development time. I strongly doubt that assumption.

Stack overflow is something you have to protect against, anyway. The actual amount of data in automatic variables doesn't change that in any meaningful way. Both the cost of stack checking and that of debugging a stack overflow remain essentially the same regardless of how much stuff is on the stack.

That being said, let's just state that 50 million cents do indeed pay for quite a bit of effort.
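For what it's worth, that stack checking can be as cheap as a painted canary word -- a minimal sketch, assuming a descending stack and a linker-provided symbol (the symbol name is invented):

    #include <stdint.h>

    /* Lowest word of the stack region; in a real project this comes
       from the linker script. */
    extern uint32_t __stack_limit[];

    #define CANARY 0xDEADBEEFu

    void stack_check_init(void)
    {
        __stack_limit[0] = CANARY;
    }

    int stack_ok(void)              /* call periodically, e.g. from */
    {                               /* the idle loop                */
        return __stack_limit[0] == CANARY;
    }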
Hi Simon,

On 12/14/2014 6:55 AM, Simon Clubley wrote:
> On 2014-12-13, Don Y <this@is.not.me.com> wrote:
>> On 12/13/2014 8:49 AM, Simon Clubley wrote:
>>> On 2014-12-13, Hans-Bernhard Bröker <HBBroeker@t-online.de> wrote:
>>>> On 13.12.2014 at 13:56, Simon Clubley wrote:
>>>>> In my case, pulling them out into .bss means it's easy to look at the
>>>>> linker map and see, at compile time, exactly how much memory is required
>>>>> for the variables making it far easier and reliable to analyze memory
>>>>> usage.
>>>>
>>>> The problem is that it doesn't just make memory consumption easier to
>>>> see ... it also makes it larger than it needs to be. So there's a good
>>>> chance you'll run out of memory _because_ you wanted to figure out
>>>> if/when you run out of memory.
>>>
>>> OTOH, it's a lot better than having to deal with subtle memory trashing
>>> errors because your now larger stack has descended into the space
>>> allocated to .bss (or even .data) and you find out the hard way that
>>> your code is now too big for the resources on the MCU you are currently
>>> using.
>>
>> Statics are a real downer if you're writing reentrant code. You have to
>> ("manually") ensure (by design) that no two consumers can access <whatever>
>> is reliant on that static "at the same time". Even if the static isn't
>> "required" to preserve data between function invocations (e.g., like strtok).
>>
>> A developer then needs intimate familiarity with every "library" that he
>> calls upon (i.e., code written by the guy in the next cubicle) to ensure
>> he isn't exposing his code to one of those (typical) "intermittents" that
>> you never manage to track down (because its impractical to reproduce the
>> EXACT conditions that caused it to manifest).
>
> Don't forget that in the message which started this sub-thread I mentioned
> this was for 8 bit MCUs with limited memory resources available, and hence
> the goal for me is to try and use development techniques which expose
> problems as early on in the process as possible and do it in as
> predictable and deterministic a way as possible.
>
> In 32 bit MCUs with more resources available I go for a much more
> traditional stack based approach and it's only the big buffers I tend
> to keep as static.
It really doesn't matter how big the processor is. Do you only have to verify the brakes work on "fast cars" but not "slow cars"? The stack can overflow on a big MCU just as easily as on a small MCU. (probably *more* likely as you may have many more stacks on that big MCU -- any of which can be problematic)
> In even larger 32 bit MCUs even the large buffers tend to get dynamically
> created at run-time in my code.
Let's be clear: there are three types of memory in play, here.

Dynamically allocated memory is created by explicit actions in your code. You call malloc/new and some part of (some) heap is (or is not) allocated for your needs.

Statics are durable pieces of memory that are present at all times. They may not always be *accessible* (e.g., a static inside a function can only be accessed when that function is executing) but they always consume a fixed amount of resources.

Automatic variables are "automatically" created on the stack when a function/block is entered. They can only be accessed within that function/block AND DISAPPEAR AUTOMATICALLY when the function is terminated/exits. These are, in a sense, dynamically allocated, *by* the compiler -- but ON THE STACK, not on the heap.

There is a camp that frowns upon use of (true) dynamic memory allocation (because of the "run with scissors" argument: you can get hurt if you aren't careful). It, however, gives the programmer the most run-time flexibility over memory usage (you can create a persistent object *in* a function -- like a static would do; *or* an object with limited lifetime -- like an auto variable; you can control that object's visibility -- by only "telling" the folks you want to have access to it where it is located; etc.)

But, aside from forgetting to free() every allocation (memory leak), you can also forget to verify that each allocation succeeds and, when faced with a failed allocation, end up dereferencing a NULL pointer. Or, you could just fail to have given consideration to how you react/recover from this condition ("Ohmigod! The sky is falling!!")

OTOH, you tend to be far more aware of the amount of memory you are *expecting* to acquire in this way (from *a* heap). You wouldn't, for example, create a heap of size X if you *know* you will be requesting Y>X bytes from it!

Using statics, you can get the compiler (linkage editor) to tell you how much "data" you are consuming. If this ever exceeds the amount of "RAM" in your system, you're screwed. But, what about when it *doesn't* exceed the amount of RAM? How do you decide how large the stack(s) and heap(s) should be? (I mean this seriously! Do you just make an arbitrary GUESS and see if the code runs? If it does, how confident are you that every possible combination of execution paths/orders will *still* yield a functional system?)

You still have to know what your maximum stack penetration will be (and, in many environments, stack and heap share a memory region; stack can grow if heap is shrinking -- retreating towards the opposite end of the region -- so this is a tougher metric to evaluate).

Assume you use statics exclusively! (no recursion, no reentrancy, single threaded, etc.) How do you "count" the stack space consumed by each function invocation? I.e., the "return address" silently pushed onto the stack? Do you have some metric that will TELL you the maximum *number* of nested subroutine/function invocations? Or, do you have to look at the code to determine that?

See, you need to understand where your code is *likely* to go in order to accurately assess its memory needs.
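One common way to get an *empirical* handle on maximum stack penetration is to paint the stack region with a known pattern at reset, exercise the system hard, and then measure how much of the pattern survived. A minimal sketch -- symbols are invented (a real project takes them from the linker script, and paints only up to the current stack pointer):

    #include <stddef.h>
    #include <stdint.h>

    /* Stack assumed to grow down from __stack_top. */
    extern uint32_t __stack_bottom[], __stack_top[];

    #define FILL 0xA5A5A5A5u

    void stack_paint(void)          /* call from startup code, BEFORE */
    {                               /* the stack region is in use     */
        for (uint32_t *p = __stack_bottom; p < __stack_top; p++)
            *p = FILL;
    }

    size_t stack_high_water(void)   /* bytes ever consumed */
    {
        const uint32_t *p = __stack_bottom;
        while (p < __stack_top && *p == FILL)
            p++;
        return (size_t)((char *)__stack_top - (char *)p);
    }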
> I am also aware that as a hobbyist I am not building thousands of devices,
> so you may have to make tradeoffs I don't, such as saving a few pence by
> using a more resource limited MCU; hence a technique which may increase
> memory consumption slightly may not be available to you.
I have no idea what sort of code you write so I can't comment on how much it may increase your effective memory usage (well, let's call it "allocation" because that's what it is; the memory may not be "used much" but it is permanently "allocated" by the use of statics).

I write multithreaded code -- almost exclusively. So, virtually *every* function/procedure can be invoked multiple times, simultaneously. Anything that is static will get clobbered when another instance of the same function tries to access that SINGLE static (i.e., I would have to write every such function to actively *share* that object -- mutex -- in order to declare it as static).

[One of my OS's doesn't suffer from this constraint -- but it is much more heavyweight, creating *processes* instead of threads]

OTOH, if I create auto variables on the stack, then each thread has its own *private* copy of each such variable -- because each thread has its own stack! Sharing isn't inherent unless the thread explicitly takes action (and responsibility) for sharing an object.

[similarly, if I dynamically allocate objects on the heap -- even a *shared* heap! -- then I can control their visibility by just not sharing the reference to a particular object with anyone with whom I'm not prepared to explicitly share that access.]

Dynamic allocation (either via explicit heap actions *or* automatic allocation in stack frames) has a big advantage in that it allows your RAM to be used more efficiently. When X is done using all of the memory that it needs (i.e., when X exits), all of that memory *magically* becomes available for Y to use -- without any explicit action on Y's part!
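The hazard in miniature -- a sketch with invented functions (strtok vs. strtok_r is the textbook real-world pair). The static version hands every caller the SAME buffer; the reentrant version pushes the storage out onto the *caller's* stack:

    char *fmt_hex_unsafe(unsigned v)        /* NOT reentrant */
    {
        static char buf[9];                 /* ONE buffer, shared by all */

        for (int i = 7; i >= 0; i--, v >>= 4)
            buf[i] = "0123456789ABCDEF"[v & 0xFu];
        buf[8] = '\0';
        return buf;                         /* two threads in here at once
                                               clobber each other's result */
    }

    char *fmt_hex_r(unsigned v, char buf[9])  /* reentrant */
    {
        for (int i = 7; i >= 0; i--, v >>= 4)
            buf[i] = "0123456789ABCDEF"[v & 0xFu];
        buf[8] = '\0';
        return buf;                         /* state lives on the caller's
                                               stack; no sharing unless the
                                               caller *chooses* to share buf */
    }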
> However, there was a question I asked earlier, and that is: how do the
> costs of debugging a stack trashing .bss/.data compare with the costs
> of using a slightly larger MCU and different development techniques
> in the first place?
Again, you're missing the point.

Tell me how big your stack *needs* to be in order to GUARANTEE that it won't overflow (i.e., it will never, ever overflow regardless of ANY conditions that it encounters while you're not babysitting it). 1KB? 10KB? 10MB? Are you *picking* a number based on how "comfortable" you feel with the unlikelihood of it being wrong? Or, do you have "science" to back up your assessment, and the number reflects a true understanding of your code's design, operation and performance?

It's like clowns who design (electronic) forms and allow N characters for a first name (or last name, street name, etc.). How do they *know* that N is big enough? "Bob Smith" might think "4 or 5" is a good number for first name with "7 or 8" for a surname; "Esmerelda Humperdink-Ticonderoga" may think "10 or 12" for first and *30* for surname!

If you don't care about folks whose names "don't fit" (i.e., if the form doesn't HAVE TO WORK), this is an easy decision. OTOH, if the form HAS to work, what do you do? 80 characters for each? (i.e., the equivalent of "picking" 10MB as your stack size -- "to be safe")
> BTW, I am also very aware that this technique, designed to produce reliable
> code on small resource constrained MCUs, is the same technique which can
> produce hard to maintain code on much larger systems.
It's really the same problem.

Would you sleep well at night knowing your child/spouse was scheduled for a robotically assisted surgery in the morning, and you had written some of the code that controls that robot, and had just "picked a very big number, hoping it was big enough" for the stack size? Would you pick a "worst case" number for the current limit for the servo that will be driving the actuator arm based on "presumed overkill"?

"We're sorry, Mr Clubley -- we did all that we could! But, one of the tendons was just too thick for the robot to cut through. The servo kept faulting. And, by the time we recognized the problem and got the robot out of the way..."

[of course, I am exaggerating]

Or, would you study the problem and implementation and come up with hard numbers -- backed by data -- that indicate WHY each of your design decisions (coding decisions) is appropriate?
On Sun, 14 Dec 2014 11:07:43 -0700, Don Y <this@is.not.me.com> wrote:

>There is a camp that frowns upon use of (true) dynamic memory allocation
>(because of the "run with scissors" argument: you can get hurt if you
>aren't careful). It, however, gives the programmer the most run-time
>flexibility over memory usage (you can create a persistent object *in*
>a function -- like a static would do; *or* an object with limited
>lifetime -- like an auto variable; you can control that object's
>visibility -- by only "telling" the folks you want to have access to
>it where it is located; etc.)
In real time control systems, I use some malloc() but try not to use free(), and the system runs for years without reboots :-).

With small systems, there is always the risk of dynamic memory fragmentation. Frequently allocating and freeing variable sized objects, you can easily end up in a situation in which there is no single _contiguous_ region of memory for new allocations, even if there would be a lot of free heap bytes available. For this reason frequent allocation/deallocation should be avoided.

Alternatively some form of garbage collection/memory compacting would be needed, but C doesn't provide it and, if available, it would harm the real-time performance with unpredictable latencies. In a HRT system, the program is faulty if the computation result is not delivered at the specified time.
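The usual way around that on small systems is to give up on variable sized blocks: with a fixed-size-block pool, any freed block satisfies any later request, so the "plenty of free bytes but no contiguous run" failure can't arise. A minimal sketch (no locking; names invented):

    #include <stddef.h>

    #define BLK_SIZE  64
    #define BLK_COUNT 32

    static union blk { union blk *next; char mem[BLK_SIZE]; }
        pool[BLK_COUNT], *free_list;

    void pool_init(void)
    {
        for (size_t i = 0; i < BLK_COUNT - 1; i++)
            pool[i].next = &pool[i + 1];
        pool[BLK_COUNT - 1].next = NULL;
        free_list = pool;
    }

    void *pool_alloc(void)          /* O(1), no fragmentation possible */
    {
        union blk *b = free_list;
        if (b != NULL)
            free_list = b->next;
        return b;
    }

    void pool_free(void *p)
    {
        union blk *b = p;
        b->next = free_list;
        free_list = b;
    }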
>But, aside from forgetting to free() every allocation (memory leak),
>you can also forget to verify that each allocation succeeds and, when
>faced with a failed allocation, end up dereferencing a NULL pointer.
>Or, you could just fail to have given consideration to how you
>react/recover from this condition ("Ohmigod! The sky is falling!!")
Except for large dynamic memory allocations, any failed dynamic memory allocation would be catastrophic. Assuming you want to make a 100 byte dynamic memory allocation and it fails, how do you expect to continue from this? Any fprintf to stderr or crash dump routine would potentially use dynamic memory, which again will cause allocation failures etc., creating a vicious circle. Thus, the only safe thing to do if a small dynamic memory allocation fails is to halt or restart the processor.

For larger allocations, returning a NULL pointer makes sense, since it might be perfectly reasonable to try a smaller allocation in some cases.
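Sketched as a policy (xmalloc/big_alloc/panic are local conventions here, not standard functions): small "must work" allocations treat failure as fatal, large elastic requests degrade instead:

    #include <stdlib.h>

    extern void panic(const char *msg);   /* assumed: report and halt */

    void *xmalloc(size_t n)               /* small, "must work" case */
    {
        void *p = malloc(n);
        if (p == NULL)
            panic("out of memory");
        return p;
    }

    /* Large, elastic case: retry with progressively smaller sizes. */
    void *big_alloc(size_t want, size_t min, size_t *got)
    {
        for (;;) {
            void *p = malloc(want);
            if (p != NULL) { *got = want; return p; }
            if (want == min)
                return NULL;              /* caller decides what's next */
            want /= 2;
            if (want < min)
                want = min;
        }
    }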
On 13-Dec-14 21:14, Les Cargill wrote:
> Paul Rubin wrote:
>> Les Cargill <lcargill99@comcast.com> writes:
>>> The subset of 'C' you really need is rather small:
>>
>> [interesting list, some comments]
>>
>>> - Resource Acquisition Is Initialization. Holds for 'C', too. Use
>>>   ternary operators or "constructors" to achieve this.
>>
>> I don't understand this: RAII is a C++ idiom that relies on C++'s
>> exceptions calling object destructors
>
> Sort of. I don't like the idea that RAII is only specific to C++
> even though that's where it came from. The point of it is to make
> sure everything is properly initialized to a reasonable value.
The RAII idiom has more to do with ensuring proper cleanup in all cases than with initialization. If it were just about initialization, RAII wouldn't be a big deal.

In C++ the destructor of an object will always be called when that object goes out of scope, regardless of what caused it to go out of scope (return statement, exception...). This makes it possible to automate resource management, ensuring resources are never leaked, without having to rely on the caller to do the right thing in every possible flow.

Since C has no destructors or similar mechanism, saying "RAII holds for 'C', too" makes no sense.
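The closest C gets is funneling every exit path through hand-written cleanup -- which is exactly the bookkeeping RAII automates. A sketch (file name and sizes are arbitrary):

    #include <stdio.h>
    #include <stdlib.h>

    int do_transfer(void)
    {
        int rc = -1;

        FILE *f = fopen("log.txt", "a");
        if (f == NULL)
            return rc;

        char *buf = malloc(512);
        if (buf == NULL)
            goto close_file;

        /* ... real work; every failure path must `goto free_buf`,
           and nothing but discipline enforces that ... */
        rc = 0;

    free_buf:
        free(buf);
    close_file:
        fclose(f);
        return rc;
    }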
> I understand RAII was developed to close this hole in C++, but I think
> there's a more general principle inside that.
Destructors have been a natural and integral part of the C++ language from pretty much the beginning, rather than a stopgap measure to "close a hole". When (much later) exceptions were added to the C++ language, the RAII idiom became rather essential. One might argue that without destructors there would be no point in having exceptions in the C++ language, because resource management would become even more error prone, to the point of being impractical to get right in every possible flow.
On 12/14/2014 12:15 PM, upsidedown@downunder.com wrote:
> On Sun, 14 Dec 2014 11:07:43 -0700, Don Y <this@is.not.me.com> wrote:
>
>> There is a camp that frowns upon use of (true) dynamic memory allocation
>> (because of the "run with scissors" argument: you can get hurt if you
>> aren't careful). It, however, gives the programmer the most run-time
>> flexibility over memory usage (you can create a persistent object *in*
>> a function -- like a static would do; *or* an object with limited
>> lifetime -- like an auto variable; you can control that object's
>> visibility -- by only "telling" the folks you want to have access to
>> it where it is located; etc.)
>
> In real time control systems, I use some malloc() but try not to use
> free() and the system runs for years without reboots :-).
The devil is *always* in the details.

With more "modern" languages (where dynamic allocation is done "for you"), you tend to end up with lots of smaller alloc/free actions -- every object instantiation potentially poking a hole in the heap's freelist. In C (explicit allocation/release), the programmer has more control over where these allocations are done. E.g., you almost assuredly wouldn't malloc 4 bytes for an int -- and then free it some time later.
> With small systems, there is always the risk of dynamic memory
> fragmentation. Frequently allocating and freeing variable sized
> objects, you can easily end up in a situation in which there is no
> single _contiguous_ region of memory for new allocations, even if
> there would be a lot of free heap bytes available.
Again, depends on the allocation pattern.

I have a character-based UI that I frequently use in small products. It lets me create menus, list boxes, radio buttons, check boxes, etc. "on the cheap". It would be foolish to statically allocate each POSSIBLE UI "control/widget" and just let *most* of them sit idle while the interface is running (and ALL of them sit idle while the interface is OFF!). Each object is a different size (as each menu, list, etc. can vary based on whatever the developer thinks appropriate for *this* control when invoked in *this* manner from *this* menu, etc.).

*BUT*, objects tend to be created and deleted (free'd) in complementary orders. So, you don't create 1, 2, 3, 4 and free 2, 4, 1, 3 (which could lead to the fragmentation problem you describe). Rather, 1, 2, 3, 4 are deleted as 4, 3, 2, 1. I.e., a LIFO/stack ordering.
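When lifetimes really do nest LIFO like that, the allocator can degenerate into a pointer bump with a mark/release pair, and fragmentation becomes impossible by construction. A minimal sketch (single-threaded; alignment handling omitted; names invented):

    #include <stddef.h>

    static char   arena[4096];
    static size_t arena_top;                /* index of next free byte */

    void *arena_alloc(size_t n)
    {
        if (arena_top + n > sizeof arena)
            return NULL;
        void *p = &arena[arena_top];
        arena_top += n;
        return p;
    }

    size_t arena_mark(void)        { return arena_top; }
    void   arena_release(size_t m) { arena_top = m; }  /* "frees" 4,3,2,1
                                                          in one rewind  */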
> For this reason frequent allocation/deallocation should be avoided.
>
> Alternatively some form of garbage collection/memory compacting would
> be needed, but C doesn't provide it and if available, would harm the
> real-time performance with unpredictable latencies. In a HRT system,
> the program is faulty, if the computation result is not delivered at
> the specified time.
>
>> But, aside from forgetting to free() every allocation (memory leak),
>> you can also forget to verify that each allocation succeeds and, when
>> faced with a failed allocation, end up dereferencing a NULL pointer.
>> Or, you could just fail to have given consideration to how you
>> react/recover from this condition ("Ohmigod! The sky is falling!!")
>
> Except for large dynamic memory allocations, any failed dynamic memory
> allocation would be catastrophic.
[What's "large"? "small"?]

That depends on what the code "expects" and how willing it is to accommodate the failed allocation.

E.g., one of the allocation strategies I implement in my heaps is "get largest" (vs. "get smallest", "get at least", "get adjoining", etc.). So, an algorithm can issue a request for the largest contiguous block of memory in a particular heap. If this is sufficient, use it. If excessive, point to a portion of the allocated block (front or back) and free it (telling the memory manager what strategy to use when reintegrating that chunk into the heap's free list). Note "sufficient" and "excessive" need not be the same value!

This allows me to enhance an algorithm's *performance* by exploiting larger buffers (etc.) WHEN AVAILABLE without *forcing* them to be used ALWAYS.
> Assuming you want to make a 100 byte dynamic memory allocation and it
> fails, how do you expect to continue from this ?
What if you can get by with 90 bytes? What if you can reschedule the operation to a later time? (as long as you meet your FINAL deadline, the fact that it doesn't get done "now" isn't necessarily fatal) [This is why SRT is *harder* than HRT!]
> Any fprintf to stderr or crash dump routine would potentially use
> dynamic memory, which again will cause allocation failures etc.,
> creating a vicious circle.
That's why you have special routines for reporting errors! They *can't* fail due to resource issues.
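E.g., an error reporter that cannot itself fail for lack of resources: no heap, no printf machinery, just static data and raw byte output. A sketch -- the register address and uart_putc() are invented stand-ins for whatever the platform provides:

    #define UART_TX (*(volatile unsigned char *)0x4000C000u)  /* invented */

    static void uart_putc(char c)
    {
        UART_TX = (unsigned char)c;
    }

    void panic(const char *msg)       /* cannot fail for lack of RAM: */
    {                                 /* no malloc, no stdio, no heap */
        static const char prefix[] = "PANIC: ";

        for (const char *p = prefix; *p != '\0'; p++)
            uart_putc(*p);
        for (const char *p = msg; *p != '\0'; p++)
            uart_putc(*p);
        uart_putc('\n');

        for (;;)
            ;                         /* halt */
    }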
> Thus, the only safe thing to do if a small dynamic memory allocation
> fails is to halt or restart the processor.
I don't agree (devil, details). If an HRT *task* fails to meet its deadline, then the HRT *task* has failed. The "system" hasn't, necessarily.

[Incoming ballistic missile. Defensive intercept fails to destroy it (missed deadline). Silly to waste any more effort on that missile -- the deadline has passed so any additional effort is for naught. Incoming missile destroys intercept's launcher -- too bad, so sad. Incoming missile destroys some *other* target, leaving launcher intact. In each case, the missed deadline doesn't mean the "system" has failed -- and should be rebooted!]
> For larger allocations, returning a NULL pointer makes sense, since it
> might be perfectly reasonable to try a smaller allocation in some
> cases.
Or delay the attempt and try again. The language has no knowledge of how it will be applied.

Languages that silently manage dynamic objects leave the programmer either blissfully ignorant of the potential perils or litter the code with all sorts of exception handling ("what if this object can't be instantiated, *here*? how do I handle *this* case?"). IME, this results in the default exception handler(s) being used which, typically, just crash the app.

What assurances do you have that *restarting* the app won't result in the same failure? In the same place? Or, elsewhere? E.g., if the stack overflows and stomps on your heap or your "data", what assurance do you have that restarting the app won't result in the same failure?