The 2026 Embedded Online Conference

Languages, is popularity dominating engineering?

Started by Ed Prochak December 12, 2014
On 12/14/2014 11:36 PM, upsidedown@downunder.com wrote:
> On Sun, 14 Dec 2014 12:56:08 -0700, Don Y <this@is.not.me.com> wrote:
>
>> On 12/14/2014 12:15 PM, upsidedown@downunder.com wrote:
>>> On Sun, 14 Dec 2014 11:07:43 -0700, Don Y <this@is.not.me.com> wrote:
>>>
>>>> There is a camp that frowns upon use of (true) dynamic memory allocation
>>>> (because of the "run with scissors" argument: you can get hurt if you
>>>> aren't careful). It, however, gives the programmer the most run-time
>>>> flexibility over memory usage (you can create a persistent object *in*
>>>> a function -- like a static would do; *or* an object with limited
>>>> lifetime -- like an auto variable; you can control that object's
>>>> visibility -- by only "telling" the folks you want to have access to
>>>> it where it is located; etc.)
>>>
>>> In real time control systems, I use some malloc() but try not to use
>>> free() and the system runs for years without reboots :-).
>>
>> The devil is *always* in the details.
>>
>> With more "modern" languages (where dynamic allocation is done "for you"),
>> you tend to end up with lots of smaller alloc/free actions -- every object
>> instantiation potentially poking a hole in the heap's freelist.
>>
>> In C (explicit allocation/release), the programmer has more control over
>> where these allocations are done. E.g., you almost assuredly wouldn't
>> malloc 4 bytes for an int -- and then free it some time later.
>>
>>> With small systems, there is always the risk of dynamic memory
>>> fragmentation. Frequently allocating and freeing variable sized
>>> objects, you can easily end up in a situation in which there is no
>>> single _contiguous_ memory region for a new allocation, even if there
>>> would be a lot of free heap bytes available.
>>
>> Again, depends on the allocation pattern. I have a character-based UI
>> that I frequently use in small products. It lets me create menus, list
>> boxes, radio buttons, check boxes, etc. "on the cheap". It would be
>> foolish to statically allocate each POSSIBLE UI "control/widget" and just
>> let *most* of them sit idle while the interface is running (and ALL of
>> them sit idle while the interface is OFF!).
>>
>> Each object is a different size (as each menu, list, etc. can vary based
>> on whatever the developer thinks appropriate for *this* control when
>> invoked in *this* manner from *this* menu, etc.).
>>
>> *BUT*, objects tend to be created and deleted (free'd) in complementary
>> orders. So, you don't create 1, 2, 3, 4 and free 2, 4, 1, 3 (which could
>> lead to the fragmentation problem you describe). Rather, 1, 2, 3, 4 are
>> deleted as 4, 3, 2, 1. I.e., a LIFO/stack ordering.
>
> If you are going to have such simple stack access, why don't you use
> automatic variables, possibly using alloca()?
They're not (simple) "variables" but, rather, (variable sized) structs. The UI code is *tiny* and invariant wrt the actual content of the UI. It parses a table and builds the structs that the interface needs on the fly. Then, "registers" them with the actual interface code. So, the cost of the interface is basically just the text that will be displayed *in* it. This means you don't have to debug each new menu, listbox, etc. but, rather, just "fill in the blanks" and let the existing code get it on the screen and interact with the user.
> With multiple threads and a single heap, there are no guarantees in
> which order memory segments are allocated or released.
That isn't necessarily true, either. If the code imposes dependencies on what each thread does (and when), then you can ensure X has happened before Y based on those dependencies. Sticking with the UI example, if the "action routine" associated with a menu selection is not invoked until the menu has been torn down, then the action routine *knows* that the memory allocated for that menu has already been freed before the first statement executes.

By contrast, languages that "hide" these actions (well, let's say "make it more of an effort to be perpetually aware of *every* call to the memory management system") -- performing them "automatically" for the user and in a fine-grained manner -- make it impractical for a developer to know what the state of a shared heap is likely to be at any point in the developer's code.
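The LIFO create/free ordering described above can be made explicit with a "mark/release" arena allocator. The following is a minimal sketch in C -- the names, the arena size, and the alignment policy are all illustrative, not anything from the thread. Tearing down a whole group of variably-sized objects is a single pointer rewind, so fragmentation cannot occur:

```c
#include <assert.h>
#include <stddef.h>

#define ARENA_SIZE 1024

static unsigned char arena[ARENA_SIZE];
static size_t arena_top = 0;          /* next free byte */

/* Carve n bytes (8-byte aligned) off the top of the arena. */
void *arena_alloc(size_t n)
{
    n = (n + 7u) & ~(size_t)7u;       /* round up for alignment */
    if (arena_top + n > ARENA_SIZE)
        return NULL;                  /* out of space */
    void *p = &arena[arena_top];
    arena_top += n;
    return p;
}

/* Remember the current level... */
size_t arena_mark(void)
{
    return arena_top;
}

/* ...and free everything allocated since that mark, in one step. */
void arena_release(size_t mark)
{
    arena_top = mark;
}
```

Because releases must come in the reverse order of the marks, this only fits workloads with the stack-like discipline Don describes; it is not a general-purpose malloc() replacement.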
On 13/12/14 20:04, Don Y wrote:
> Hi Simon, > > On 12/13/2014 8:49 AM, Simon Clubley wrote: >> On 2014-12-13, Hans-Bernhard Br&ouml;ker <HBBroeker@t-online.de> wrote: >>> Am 13.12.2014 um 13:56 schrieb Simon Clubley: >>>> In my case, pulling them out into .bss means it's easy to look at the >>>> linker map and see, at compile time, exactly how much memory is >>>> required >>>> for the variables making it far easier and reliable to analyze memory >>>> usage. >>> >>> The problem is that it doesn't just make memory consumption easier to >>> see ... it also makes it larger than it needs to be. So there's a good >>> chance you'll run out of memory _because_ you wanted to figure out >>> if/when you run out of memory. >> >> OTOH, it's a lot better than having to deal with subtle memory trashing >> errors because your now larger stack has descended into the space >> allocated to .bss (or even .data) and you find out the hard way that >> your code is now too big for the resources on the MCU you are currently >> using. > > Statics are a real downer if you're writing reentrant code. You have to > ("manually") ensure (by design) that no two consumers can access <whatever> > is reliant on that static "at the same time". Even if the static isn't > "required" to preserve data between function invocations (e.g., like > strtok). >
One thing I haven't seen in this thread is comments on the efficiency (code space and run-time) of using statics vs. auto data. Non-static locals are not necessarily allocated on the stack (or /a/ stack) - they can be in registers, or they can be eliminated or combined by the compiler.

Good compilers for brain-dead processors like the 8051 will automatically turn local variables into statics because the stack access is so slow - but they will do so cleverly, re-using the same static slots for different functions in order to reduce memory and/or banking. And on decent processors, a large proportion of local variables stay in registers - giving the smallest and fastest code with the least memory usage (stack or static).

Using locals also means you can feel free to break complex expressions into smaller parts with local names, rather than writing it all as one line - the compiler will combine them. Minimising the scope /and/ lifetimes of variables is always good programming practice, and gives the compiler the best chance at finding flaws and generating good code.

For large variables (i.e., arrays, and perhaps large structures), it can make a lot of sense to make them static - such data is usually accessed more efficiently as statics, and you can see the memory usage more clearly in the map files. But for everything else, use local (auto) storage when possible.
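As a minimal illustration of the point about named temporaries and tight scopes (the function names here are invented for the example), an optimizing compiler will typically keep all of these locals in registers, or eliminate them entirely, so the clearer form costs nothing at run time:

```c
#include <assert.h>

/* Intermediate results get descriptive names in the tightest scope
 * that works; the optimizer folds them away. */
int scaled_midpoint(int lo, int hi, int scale)
{
    int span = hi - lo;               /* lives only in this function */
    int mid  = lo + span / 2;
    return mid * scale;
}

long checksum(const unsigned char *buf, int len)
{
    long sum = 0;
    for (int i = 0; i < len; i++) {   /* i is scoped to the loop (C99) */
        int widened = buf[i];         /* scoped to one iteration */
        sum += widened;
    }
    return sum;
}
```

On an 8051-class part, a compiler of the kind described above may overlay these locals onto fixed addresses instead, but the source-level discipline is the same.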
On Fri, 12 Dec 2014 12:02:32 -0800 (PST), Ed Prochak
<edprochak@gmail.com> wrote:

>As I began my career in software and systems, choosing a programming
>language was at times serious. Over the years it seems that choosing a
>programming language has become: what is popular (or perceived popular
>by management).
>
>So a couple questions/topics to spark the discussions.
>
>Does this match your experience? Are you using a language because it is
>popular? Was it your choice or was it forced by management?
>
>How many languages do you know? Which language would you choose for a
>large pattern matching project? Which would you choose for a hard real
>time project?
>
>I'll sit back a bit before throwing in my thoughts.
>
> ed
This may be a bit oblique to the discussion. Most platforms nowadays employ something at least similar to a 'standard' language. However, the language is only a small part of the learning curve, and generally contributes relatively little to the difficulties that often accompany a build. Far more significant are the specifics of each environment: things like built-in data structures and methods, and the myriad platform-specific gotchas that lurk for the unwary. It can take years to become confident and fluent in a single environment; dealing with several can be a major career challenge.
Am 15.12.2014 um 09:26 schrieb David Brown:

> Good compilers for brain-dead processors like the 8051 will
> automatically turn local variables into statics because the stack access
> is so slow - but they will do so cleverly, re-using the same static
> slots for different functions in order to reduce memory and/or banking.
It's actually slightly misleading to state that the compiler turned those automatic variables into "statics", because they obtain only one of the two key properties the variable would get by flagging it "static": they will have a build-time fixed address, but no static storage duration, i.e. their value will usually _not_ be conserved from one entry into their scope to the next. A better description of this kind of data overlaying in terms of C would be that the linker silently builds (a hierarchy of) static _unions_ out of automatic variables from separate branches of the call tree.
On 15/12/14 16:11, Hans-Bernhard Bröker wrote:
> Am 15.12.2014 um 09:26 schrieb David Brown:
>
>> Good compilers for brain-dead processors like the 8051 will
>> automatically turn local variables into statics because the stack access
>> is so slow - but they will do so cleverly, re-using the same static
>> slots for different functions in order to reduce memory and/or banking.
>
> It's actually slightly misleading to state that the compiler turned
> those automatic variables into "statics", because they obtain only one
> of the two key properties the variable would get by flagging it
> "static": they will have a build-time fixed address, but no static
> storage duration, i.e. their value will usually _not_ be conserved from
> one entry into their scope to the next.
>
> A better description of this kind of data overlaying in terms of C would
> be that the linker silently builds (a hierarchy of) static _unions_ out
> of automatic variables from separate branches of the call tree.
>
That would be more accurate, yes.
Hi David,

On 12/15/2014 1:26 AM, David Brown wrote:
>> Statics are a real downer if you're writing reentrant code. You have to
>> ("manually") ensure (by design) that no two consumers can access <whatever>
>> is reliant on that static "at the same time". Even if the static isn't
>> "required" to preserve data between function invocations (e.g., like
>> strtok).
>
> One thing I haven't seen in this thread is comments on the efficiency
> (code space and run-time) of using statics vs. auto data. Non-static
> locals are not necessarily allocated on the stack (or /a/ stack) - they
> can be in registers, or they can be eliminated or combined by the
> compiler.
>
> Good compilers for brain-dead processors like the 8051 will
> automatically turn local variables into statics because the stack access
> is so slow - but they will do so cleverly, re-using the same static
> slots for different functions in order to reduce memory and/or banking.
That's not really a "static". Rather, more like a "register". It can *break* code unless the developer is aware of it and takes steps to treat these "locations" AS IF part of the processor's state.

It's comparable to "helper functions" used to implement floating point on processors without FP hardware (i.e., you can't interrupt the flow of execution and begin *another* FP operation without deliberately preserving the contents of those "hidden globals").
> And on decent processors, a large proportion of local variables stay in
> registers - giving the smallest and fastest code with the least memory
> usage (stack or static).
>
> Using locals also means you can feel free to break complex expressions
> into smaller parts with local names, rather than writing it all as one
> line - the compiler will combine them.
As I said up-thread, let the compiler do the optimizing. Concentrate on *clearly* writing what you intend. If the compiler is clever enough to extract some nugget of efficiency from what you've written and *equivalently* transform your code into something "better", let *it* do that -- instead of *you* trying to be cryptically clever in what you *write*.

Spend time finding good *algorithms* rather than micro-optimizing instructions. E.g., I haven't used "register" in ages -- figuring the compiler has a much better chance of figuring out what *should* go in registers than *I* do.

OTOH, a compiler isn't going to know that the accuracy of the algorithm can be satisfied with integers (or fixed point) instead of "floats".
> Minimising the scope /and/ lifetimes of variables is always good
> programming practice, and gives the compiler the best chance at finding
> flaws and generating good code.
+1 IME, "block scope" is too infrequently exploited.
> For large variables (i.e., arrays, and perhaps large structures), it can
> make a lot of sense to make them static - such data is usually accessed
> more efficiently as statics, and you can see the memory usage more
> clearly in the map files. But for everything else, use local (auto)
> storage when possible.
IMO, "static" should be read as "persistent". The size of the object, frequency of access, etc. shouldn't drive its use but, rather, the *required* lifetime (and, visibility). But, then again, in my applications, the goal is *not* to keep things around (unnecessarily). YMMV.
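The non-reentrant strtok() mentioned up-thread is the canonical example of the hidden-static hazard: its parse position lives in internal static state, so two interleaved tokenizing loops (or two threads) corrupt each other. A sketch of the usual cure, the POSIX strtok_r() variant, which makes that state an explicit caller-owned pointer (the function name and delimiter here are illustrative):

```c
#define _POSIX_C_SOURCE 200809L  /* for strtok_r() */
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Count comma-separated fields; each call carries its own parser
 * state, so the function is reentrant (note: strtok_r modifies
 * the buffer in place, as strtok does). */
size_t count_fields(char *line)
{
    size_t n = 0;
    char *save = NULL;                       /* per-call parser state */
    for (char *tok = strtok_r(line, ",", &save);
         tok != NULL;
         tok = strtok_r(NULL, ",", &save))
        n++;
    return n;
}
```

With plain strtok(), a second tokenizing loop started inside this one (say, from an "action routine") would silently hijack the first loop's position.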
Paul Rubin wrote:
> Les Cargill <lcargill99@comcast.com> writes:
>> Sort of. I don't like the idea that RAII is only specific to
>> C++... The point of it is to make sure everything is properly
>> initialized to a reasonable value.
>
> I think that misstates what RAII means, at least in the C++ world,
> as Dombo explained. For example, in C you might open a file and do
> stuff with the contents like this:
>
>     void foo(char *filename) {
>         FILE *fd = fopen(filename, "r");
>         // compute stuff and read from the file
>         fclose(fd);
>     }
>
> but that leaves you with the issue of what happens in case of an
> abnormal return, a return from the middle of the function, etc. You
> have to carefully navigate all those possibilities to make sure the file
> gets closed instead of leaking the file descriptor. RAII style in C++
> looks like this:
>
>     void foo(string &filename) {
>         std::ifstream fs(filename);
>         // compute stuff and read from the stream
>     }
>
> Note the absence of any call to explicitly close the stream before
> returning. That's because the ifstream object destructor automatically
> closes it when the ifstream goes out of scope. That means if the
> function returns from the middle or some lower level throws an
> exception, the file still gets closed. There's not an equivalent for
> this in C unless you build a bunch of special machinery into your
> application.
>
So your top example does not manage the "can't open the file" case at all:

    void foo(char *filename) {
        FILE *fd = fopen(filename, "r");
        if (fd == NULL) {
            // set things to indicate that the open failed
            return;
        }
        // compute stuff and read from the file
        fclose(fd);
    }

I think we have to separate this into two issues:

- Some language systems (not 'C') provide for automatic calling of destructors.
- We wish to have the constraint that something is opened/allocated be managed as quickly as possible and in a nicely localized fashion.

Yes, dynamic allocation leads to states where partial allocation/opening occurs. RAII was specifically developed to combat this. But there's an even more *general* case - one exemplified by perhaps a more esoteric ... thing - the "as if simultaneous" rule in SNMP agents: resolution of all varbinds (attribute value pairs) within a single PDU must be atomic. This forces separation of concern between "real work" and evaluation of constraints. Specifically, if anything cannot be done properly within an SNMP agent evaluating a single PDU containing multiple varbinds, the state of the agent is to be exactly as it was before the PDU was received.

So I think there's a way of doing the latter in 'C' by arranging things carefully, possibly using early return.
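One common way to arrange this in plain C is the single-exit cleanup ladder: acquire resources in order, and on any failure jump to a cleanup label that releases whatever was successfully acquired. A sketch under invented names (the file name, buffer size, and byte-summing task are all illustrative):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* RAII-like cleanup in C: every exit path funnels through "cleanup",
 * which releases exactly what was acquired.  Returns 0 on success,
 * -1 on any failure, leaving no resource leaked. */
int sum_file_bytes(const char *filename, long *out)
{
    int rc = -1;
    FILE *fp = NULL;
    char *buf = NULL;
    long sum = 0;
    size_t n;

    fp = fopen(filename, "rb");
    if (fp == NULL)
        goto cleanup;                 /* nothing acquired yet */

    buf = malloc(4096);
    if (buf == NULL)
        goto cleanup;                 /* must still close fp */

    while ((n = fread(buf, 1, 4096, fp)) > 0)
        for (size_t i = 0; i < n; i++)
            sum += (unsigned char)buf[i];

    *out = sum;
    rc = 0;                           /* success */

cleanup:
    free(buf);                        /* free(NULL) is a harmless no-op */
    if (fp != NULL)
        fclose(fp);
    return rc;
}
```

The releases run in reverse order of acquisition, just as C++ destructors would; the discipline is manual, but it is localized in one place per function.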
>> The point is to break calculations, especially those that invoke
>> division, into manageable chunks for clarity and to control
>> divide-by-zero problems. Have the declarations tell the story
>> of how the ratio is derived one step at a time.
>
> Sure, that's good style, resembling functional programming; but the term
> RAII usually means something different, described above.
>
I expect it's simpler ( and perhaps less simple ) than it's made to be. I think it generalizes beyond C++.
>>> Hmm, ok a lot of the time, though idioms like
>>>     for (p = list_head; p != NULL; p = p->next) { ... }
>>> seem perfectly fine.
>> I am being very specific to 'C' here.
>
> The above is idiomatic C, I think.
>
>> It's an iterator, so it goes well with the integer-index
>> approach. What I've found is that "for every time you need to
>> use allocated pointers, there is a cleaner implementation using
>> static arrays and indexing into them."
>
> It depends on what you're doing though yeah, dynamic structures are
> probably less important in MCU applications.
>
I'd like to see them used less in general. If the set of constraints for an operation is complete and closed, then evaluate each one in order and do the "real work" at the bottom. You might even "bag up" all the constraint checking in a separate routine so the flow is better.

I feel like - and this would take a long time to fully write out - perhaps we can limit the number of times new() or malloc() are called within many software systems and improve reliability a smidge that way. This isn't a sore spot - the codebases I use now use nearly no dynamic allocations, and then only when you're *guaranteed* not to leak memory, enforceable by inspection.
>> It depends. Default signature of functions is void return. I prefer
>> to have function returns be used for a list of constraint violations
>> until the last one, which is the happy path.
>
> I'd be interested in seeing an example application in this style, if
> you've got one you can release.
>
I feel like an example would make things worse right now. :)
>> This is another fine approach, but it's not one I think you can use as
>> much in 'C'. I don't automatically assume "stateful is bad"; it's
>> just something that must be managed properly. That probably means "kept
>> to a minimum."
>
> It's easier with garbage collection, but those environments aren't well
> suited to small embedded systems.
>
>> You can often allocate buffers and intermediate values statically and
>> this helps with serializing for testing.
>
> True.
>
That's one of the big wins here. -- Les Cargill
On 15/12/14 18:41, Don Y wrote:
> Hi David,
>
> On 12/15/2014 1:26 AM, David Brown wrote:
>>> Statics are a real downer if you're writing reentrant code. You have to
>>> ("manually") ensure (by design) that no two consumers can access
>>> <whatever>
>>> is reliant on that static "at the same time". Even if the static isn't
>>> "required" to preserve data between function invocations (e.g., like
>>> strtok).
>>
>> One thing I haven't seen in this thread is comments on the efficiency
>> (code space and run-time) of using statics vs. auto data. Non-static
>> locals are not necessarily allocated on the stack (or /a/ stack) - they
>> can be in registers, or they can be eliminated or combined by the
>> compiler.
>>
>> Good compilers for brain-dead processors like the 8051 will
>> automatically turn local variables into statics because the stack access
>> is so slow - but they will do so cleverly, re-using the same static
>> slots for different functions in order to reduce memory and/or banking.
>
> That's not really a "static". Rather, more like a "register". It
> can *break* code unless the developer is aware of it and takes steps to
> treat these "locations" AS IF part of the processor's state.
>
> It's comparable to "helper functions" used to implement floating point
> on processors without FP hardware (i.e., you can't interrupt the flow of
> execution and begin *another* FP operation without deliberately preserving
> the contents of those "hidden globals").
>
>> And on decent processors, a large proportion of local variables stay in
>> registers - giving the smallest and fastest code with the least memory
>> usage (stack or static).
>>
>> Using locals also means you can feel free to break complex expressions
>> into smaller parts with local names, rather than writing it all as one
>> line - the compiler will combine them.
>
> As I said up-thread, let the compiler do the optimizing. Concentrate on
> *clearly* writing what you intend. If the compiler is clever enough to
> extract some nugget of efficiency from what you've written and
> *equivalently* transform your code into something "better", let *it* do
> that -- instead of *you* trying to be cryptically clever in what you
> *write*.
Absolutely. Compilers are better able to optimise from clear code than cryptic code - for example, they can do a better job with array expressions or multiplies than when someone has tried to "help" by using pointers or shifts.
> Spend time finding good *algorithms* rather than micro-optimizing
> instructions.
> E.g., I haven't used "register" in ages -- figuring the compiler has a much
> better chance of figuring out what *should* go in registers than *I* do.
>
> OTOH, a compiler isn't going to know that the accuracy of the algorithm can
> be satisfied with integers (or fixed point) instead of "floats".
>
>> Minimising the scope /and/ lifetimes of variables is always good
>> programming practice, and gives the compiler the best chance at finding
>> flaws and generating good code.
>
> +1 IME, "block scope" is too infrequently exploited.
Not by me it isn't! I am also a fan of C99's mixing of declarations and statements, so that you don't have to declare variables until you actually need them.
>> For large variables (i.e., arrays, and perhaps large structures), it can
>> make a lot of sense to make them static - such data is usually accessed
>> more efficiently as statics, and you can see the memory usage more
>> clearly in the map files. But for everything else, use local (auto)
>> storage when possible.
>
> IMO, "static" should be read as "persistent". The size of the object,
> frequency of access, etc. shouldn't drive its use but, rather, the
> *required* lifetime (and, visibility).
>
> But, then again, in my applications, the goal is *not* to keep things
> around (unnecessarily). YMMV.
>
Hi David,

On 12/15/2014 12:24 PM, David Brown wrote:
> On 15/12/14 18:41, Don Y wrote:
>> As I said up-thread, let the compiler do the optimizing. Concentrate on
>> *clearly* writing what you intend. If the compiler is clever enough to
>> extract some nugget of efficiency from what you've written and
>> *equivalently*
>> transform your code into something "better", let *it* do that -- instead of
>> *you* trying to be cryptically clever in what you *write*.
>
> Absolutely. Compilers are better able to optimise from clear code than
> cryptic code - for example, they can do a better job with array
> expressions or multiplies than when someone has tried to "help" by using
> pointers or shifts.
I think a lot of this (programmer) behavior comes from the days of naive compilers -- folks got used to "being clever" with their source to try to force particular opcodes to be generated. *Now*, the behavior just obfuscates what the programmer is *really* trying to "say" (do).
>>> Minimising the scope /and/ lifetimes of variables is always good
>>> programming practice, and gives the compiler the best chance at finding
>>> flaws and generating good code.
>>
>> +1 IME, "block scope" is too infrequently exploited.
>
> Not by me it isn't! I am also a fan of C99's mixing of declarations and
> statements, so that you don't have to declare variables until you
> actually need them.
Again, I think this is a legacy behavior. People get used to declaring variables at the top of a function -- because you *had* to. One of the hazards (inconveniences? inefficiencies?) of dealing with compilers of different vintage, different languages, etc. is the effort required to keep "best practices" in tune with the *current* environment. E.g., I have to make a conscious effort to alter my coding style to exploit tuples or lists under Limbo; then suffer the opposite problem when I carry over that same style to C and find the compiler "unreasonably" complaining. "Whaddya mean, 'syntax error'???" :-/
On Fri, 12 Dec 2014 14:48:39 -0700, Don Y <this@is.not.me.com> wrote:

>On 12/12/2014 1:02 PM, Ed Prochak wrote:
>
>> As I began my career in software and systems, choosing a programming
>> language was at times serious. Over the years it seems that choosing a
>> programming language has become: what is popular (or perceived popular by
>> management).
>
>Or, what the diploma mills are churning out!
Which would be Java and Python.
>> Are you using a language because it is popular?
>
>Not "popular" in the sense of "en vogue" but, typically, "reasonably well
>known" (even if not EXPERTLY known by the audience I have to address).
>A *great* language that is obscure is typically of very little value
>(what happens when your "investment" -- programmer -- moves on to
>another employer?)
But then how do you make yourself indispensable? 8-) When time to market is paramount, there can be benefits to using the "great" language even if it is obscure. If the boss and/or the client doesn't object, why unduly burden yourself by using inferior languages? [he says, tongue planted firmly in cheek]
>Often, there are other concerns that factor into a language choice
>besides "code efficiency", "intuitiveness", "expressiveness", "popularity",
>etc.
Outside the embedded and RBS (really big science) arenas, efficiency is hardly even a consideration any more. Witness the proliferation of bytecode interpreted languages and resource managed execution environments. If you find yourself making extensive use of C libraries from your oh!_so_wonderful high level language, you probably should have been writing in C to begin with. OTOH, the majority of programmers today have never seen a malloc() and wouldn't know what to do with one if they did. For most programmers, GC is not a nicety but a necessity without which they cannot write a functioning program.
>You also have to consider which features of your *environment* need to
>be supported *in* the language (else you end up relying on "libraries"
>to augment the language's capabilities -- often in sub-optimal ways).
>
>E.g., do you need to support concurrency? What sorts of communications
>(a *huge* aspect of a reliable system design) do you support? Does
>your communication mechanism impose type checking on the data that it
>passes? Or, is it an "untyped byte stream" and you rely on the
>targeted process/device/etc. to MANUALLY perform all of that testing
>on the incoming data?
>
>For example, the scripting language (above) can't expect a neophyte to
>be diligent and check for zero denominators. Or, to order operators
>to preserve maximum precision in the result. As a result, the *language*
>has to do this (at run-time or compile-time). I.e.,
>
> 12334235234534635645674754675675675675678567/2354234029348293492384
>
>should yield a *different* result from:
>
> (1+12334235234534635645674754675675675675678567)/2354234029348293492384
>
>because the neophyte would *expect* it to produce a different result!
>(imagine this is a subexpression in a larger expression) The neophyte
>should be tasked with indicating the level of "detail" (precision) he
>seeks in the output -- not the language (which would require educating the
>neophyte as to where calculations could "blow up", etc.)
There is near consensus that a _safe_ language, by default, should provide arbitrary precision, base 10 arithmetic. The majority of programmers today have had little or no mathematics education and are unable to identify or fix potential problems caused by fixed precision and/or range in numerical calculations.
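As a concrete illustration of what fixed-precision arithmetic does to the unwary (and what an arbitrary-precision default would hide), here is a small C sketch; the function names are invented for the example:

```c
#include <assert.h>
#include <stdint.h>

/* 1. Integer division truncates, so adding 1 to a large numerator can
 *    be invisible in the quotient -- the neophyte surprise described
 *    above, just at machine scale. */
int64_t ratio(int64_t num, int64_t den)
{
    return num / den;                 /* truncates toward zero */
}

/* 2. An intermediate sum can overflow even when the final value fits;
 *    widening the intermediate is the usual manual fix that a
 *    mathematically naive programmer would not know to apply. */
int32_t safe_average(int32_t a, int32_t b)
{
    return (int32_t)(((int64_t)a + (int64_t)b) / 2);
}
```

A language with arbitrary-precision, base-10 defaults makes both hazards disappear, at the cost of run-time machinery that small embedded targets can rarely afford.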
>> Was it your choice or was it forced by management?
"Pascal is a voluntarily worn straight jacket. You use it precisely because it won't let you do certain things. It's a PITA when you're writing the code, but a godsend later when you're trying to debug it." -- Marvin Minsky Management has a vested interest in not getting stuck with a useless pile of [code] if you leave, but that has to be weighed against the value of a developer's time. In many projects, software development is the major time expenditure and the major monetary expense. Anything which makes software developers more efficient should be welcome.
>> How many languages do you know?
>
>That's sort of a silly question. Define "know". I.e., one such definition
>might be "able to sit down and write EFFECTIVE/bug-free code NOW".
By that definition, few people *know* any language. I certainly can read/understand more languages than I can sit down and immediately start to work with (there are only about a half dozen of those). But generally what prevents me using a new language quickly is not its grammar but its vocabulary: i.e., before I can do anything useful I have to learn about its "standard library". But so far as writing effective, bug-free code the first time: that's a fine ideal, but it really isn't that important. What is important is that buggy code not escape from the development environment and that the final code be effective. Obviously, the straighter the path, the better ... but the important thing is the result, not how you got to it.
>> Which language would you choose for a large pattern matching project?
>
>That would depend on the project. As most of my projects are real-time
>(ignore the SRT/HRT distinction as most folks are misinformed, there)
>and severely resource constrained, "extra/unused resources" represent
>excess cost. (You can't grow functionality to consume them after
>the fact as that alters the Specification -- potentially compromising the
>entire design. So, they can only be applied ex post factum to improve
>performance... ABOVE that which was Specified).
>
>My speech synthesizer is little more than a pattern matching project.
>But, it's RT and resource constraints dictate a small, tight implementation
>("Gee, wouldn't this be *so* much easier to code in LISP??")
Lisp is ok, but Prolog would be a more natural choice. However, naively written Prolog can be slower than a molasses popsicle and it can yield unexpected results if you don't cut appropriately. For best trade off of development time, program size and execution speed, you'd probably want to use OCaml.
>> I'll sit back a bit before throwing in my thoughts.
>
>IME, the language isn't as important as a clear definition of the problem.
>I've met lots of "experts" (in particular technologies, languages, etc.)
>whose lack of knowledge of the *application* domain rendered most of their
>knowledge *useless* (in-APPLICABLE).
>
>Some languages force you to clearly define THE IMPLEMENTATION. But, if
>this still has a poor match to the actual *problem*, then the language's
>features/facilities/CONSTRAINTS don't do anything to improve the quality
>of the product ("Our code is 100% bug-free -- machine validated!" "Yeah,
>but it doesn't *do* what we set out to do!")
The definition of "bug" is operation deviating from specification. When in doubt, change the specification. 8-) George