
arm-gcc: pointer to constant string

Started by pozz September 14, 2018
On 22/09/18 05:15, Les Cargill wrote:
> David Brown wrote:
>> On 18/09/18 04:24, Les Cargill wrote:
>>> David Brown wrote:
>>>> On 16/09/18 15:33, Tom Gardner wrote:
>>>>> On 16/09/18 14:00, David Brown wrote:
>>> <snip>
>>>>
>>>> There are few cases where formal proofs are worth the time and costs
>>>> involved.  But I like - when possible - to write my code with a view
>>>> that a formal proof /could/ be made if someone were to go to the
>>>> effort.  It is rare that you could prove a whole program - but not
>>>> uncommon that you could (if you wanted to) prove many of the functions
>>>> involved.  Thinking in terms of pre-conditions, invariants,
>>>> post-conditions, variants, specifications - that is all good stuff for
>>>> writing solid code, even if you don't do so formally.
>>>>
>>>
>>> If you stick to something akin to the Haskell Actor model and
>>> everything is an event, it's not too onerous to use combinators to
>>> generate a significant subset of all messages. With the proper level
>>> of introspection, you can "prove" the program works to a limit
>>> enumerated by the test vectors and that it continues to do so as
>>> changes are made to the code base.
>>>
>>
>> Testing can /never/ prove a program is correct - unless you can
>> enumerate /all/ inputs and states of your code and go through them all.
>> Testing can prove a program is wrong (and it is vital for that), but it
>> cannot prove a program is correct.
>>
>
> So what you can to is find the meat of what is possible and do that.
>
> I must say - I am rather tired of this old warhorse; an appeal to Godel
> in the form of an economics question. All I would say is
> that you should test as well as you can. You should use a dragon to
> fight the dragon.
No one will argue against the benefits of testing as well as you
reasonably can.  Testing is often good enough to reach the level of
confidence you want for code.

But you are simply /wrong/ to think that non-exhaustive testing can
somehow "prove" that code works.

And there are some classes of problem for which testing is extremely
unlikely to identify the problem.  In particular, for anything involving
race conditions or timing, you cannot /test/ your way to working code.
You have to write the code correctly in the first place, because the
chances of spotting the microsecond window of disaster in your lab
testing are tiny.  (I am not saying you need to formally prove the code -
again, this is all about appropriate balances of time and effort - but
testing does not help.)
>
> And so if your system is adept at rejecting inputs it cannot expect,
> then what?
>
> It's all about the ability to enforce constraints. Sort your
> constraints by the ability to do damage.
>
>> A functional programming language like Haskell gives you a much better
>> chance of actually proving the correctness of your code than an
>> imperative language like C - that is one of the reasons for the
>> existence of functional programming languages (it is the lack of state
>> and side-effects that help).  But if you want to prove the correctness
>> of your Haskell program, you do it by an analysis of the code, or by
>> deriving correct code in the first place - you don't do it by testing.
>>
>
> That's because we'd like to pretend that "analysis" is some
> process that's not automatable. Or, at least, not somehow aided by
> automation.
Analysis is much easier when you have restrictions on what can happen.
In a pure functional programming language, your function takes some
input and works solely on that input, and the only effect of the code
is to generate the output.  That means you can analyse it in isolation.
In general C code, your function might use or change global state - it
all quickly becomes a lot more complicated.

<snip>
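To make the contrast concrete, a throwaway illustration (my own, not
from the thread):

    // Illustrative only: the same computation written two ways.  The
    // pure version can be analysed (and tested) in isolation; the
    // impure one cannot, because its result depends on who touched
    // `scale` last.
    int scale = 2;                   // global state

    int impure_double(int x) {
        return x * scale;            // meaning depends on external state
    }

    int pure_double(int x) {
        return x * 2;                // depends only on its input
    }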
> For what it is worth, If I knew what I was trying to way, I'd say it :)
>
OK. If you figure it out, we can take it up again - otherwise I think any replies I'd make would add to the confusion (mine, yours, or everyone else's).
David Brown wrote:

> But you are simply /wrong/ to think that non-exhaustive testing can
> somehow "prove" that code works.
I hope we all agree on this.
> And there are some classes of problem for which testing is extremely
> unlikely to identify the problem.  In particular, for anything
> involving race conditions or timing, you cannot /test/ your way to
> working code.
If you know the possible origin of race conditions, you can actually
detect them using faked versions of selected functions in an API.  It
still doesn't prove anything, and I'm not sure how often it is cost
effective, but it is worth being aware of the possibility.

We've done it on one legacy project I've worked on.  The system is
database-heavy, and has a high risk of problems due to race-conditions
on database modifications.  We created a modified version of the
database API implementation, where we can easily inject what will appear
as competing modifications of the database from other processes.  It
doesn't find all the problems, but it easily reveals the worst blunders.

I am pretty sure you can do similar things in other cases - like file
system interactions - but I'm not sure how often it is worth the effort.
Maybe such a modified test implementation should be distributed with all
libraries where there is a serious risk of race-conditions in using the
API.  On the other hand, if you know that much when you create a
library, you might as well try to write the API so it (to the extent
possible) lets the user avoid race-conditions completely.

Greetings,

Jacob
--
"I remember being impressed with Ada because you could write an
infinite loop without a faked up condition.  The idea being that in
Ada the typical infinite loop would normally be terminated by
detonation."  -- Larry Wall
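A rough sketch of what such a fake might look like, assuming a minimal
key/value-style interface (all the names here - DbApi, FakeDb, the
"intruder" callback - are invented for illustration, not taken from any
real library):

    // Test double: behaves like the real API, but fires a simulated
    // competing modification from "another process" in the middle of a
    // caller's read-modify-write sequence.
    #include <functional>
    #include <map>
    #include <string>
    #include <utility>

    struct DbApi {
        virtual std::string read(const std::string& key) = 0;
        virtual void write(const std::string& key,
                           const std::string& value) = 0;
        virtual ~DbApi() = default;
    };

    class FakeDb : public DbApi {
        std::map<std::string, std::string> rows_;
        std::function<void(DbApi&)> intruder_;   // the "other process"
    public:
        explicit FakeDb(std::function<void(DbApi&)> intruder)
            : intruder_(std::move(intruder)) {}

        std::string read(const std::string& key) override {
            return rows_[key];
        }

        void write(const std::string& key,
                   const std::string& value) override {
            if (intruder_) {                    // inject the competing
                auto f = std::move(intruder_);  // write once, just before
                intruder_ = nullptr;            // this write lands
                f(*this);
            }
            rows_[key] = value;
        }
    };

A test that goes read-modify-write through this fake will silently
clobber the intruder's change - precisely the lost-update blunder such
a fake is meant to expose.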
On 22/09/18 12:16, Jacob Sparre Andersen wrote:
> David Brown wrote:
>
>> But you are simply /wrong/ to think that non-exhaustive testing can
>> somehow "prove" that code works.
>
> I hope we all agree on this.
>
>> And there are some classes of problem for which testing is extremely
>> unlikely to identify the problem.  In particular, for anything
>> involving race conditions or timing, you cannot /test/ your way to
>> working code.
>
> If you know the possible origin of race conditions, you can actually
> detect them using faked versions of selected functions in an API.  It
> still doesn't prove anything, and I'm not sure how often it is cost
> effective, but it is worth being aware of the possibility.
>
> We've done it on one legacy project I've worked on.  The system is
> database-heavy, and has a high risk of problems due to race-conditions
> on database modifications.  We created a modified version of the
> database API implementation, where we easily can inject what will appear
> as competing modifications of the database from other processes.  It
> doesn't find all the problems, but it easily reveals the worst blunders.
>
> I am pretty sure you can do similar things in other cases - like file
> system interactions - but I'm not sure how often it is worth the effort.
> Maybe such a modified test implementation should be distributed with all
> libraries, where there is a serious risk of race-conditions in using the
> API.  On the other hand, if you know that much, when you create a
> library, you might as well try to write the API so it (to the extent
> possible) lets the user avoid race-conditions completely.
>
That's an interesting idea.  I have done something a little similar when
tracing timing problems.  Basically, if your problem occurs when event A
happens during processing B, then stick a big slow loop in the middle of
B and the crash becomes far more likely.  (Tests can't prove your code
has no bugs - but they /can/ prove your code /does/ have bugs, and can
be helpful in finding them.)
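As a made-up miniature of the trick (RACE_STRESS and the 1 ms figure
are arbitrary choices for a test build, and shared_state stands in for
whatever event A's handler also touches):

    #include <chrono>
    #include <thread>

    volatile int shared_state;      // assumed shared with event A's handler

    void process_B(void) {
        int local = shared_state;   // start of the vulnerable region

    #ifdef RACE_STRESS
        // Test builds only: hold the window open ~1 ms instead of a few
        // hundred nanoseconds, so event A is far more likely to land here.
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    #endif

        shared_state = local + 1;   // end of the vulnerable region
    }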
On 18-09-22 12:53, David Brown wrote:
> On 22/09/18 05:15, Les Cargill wrote:
[on testing vs proof and analysis]
> No one will argue against the benefits of testing as well as you
> reasonably can. Testing is often good enough to reach the level of
> confidence you want for code.
>
> But you are simply /wrong/ to think that non-exhaustive testing can
> somehow "prove" that code works.
>
> And there are some classes of problem for which testing is extremely
> unlikely to identify the problem. In particular, for anything involving
> race conditions or timing, you cannot /test/ your way to working code.
> You have to write the code correctly in the first place, because the
> chances of spotting the microsecond window of disaster in your lab
> testing is tiny.
As you probably know, several static-analysis tools have the ability to
detect possible race conditions, such as accesses to unprotected shared
variables from two or more threads/tasks.

I have used the AdaControl tool for this (and other) analysis of Ada
code, and found real race conditions.

The tool-based race-condition analyses are most likely imperfect (for
example, because shared variables are accessed via pointers), but
nevertheless useful.

--
Niklas Holsti
Tidorum Ltd

niklas holsti tidorum fi
      .      @       .
David Brown wrote:
> On 22/09/18 05:15, Les Cargill wrote:
>> David Brown wrote:
>>> On 18/09/18 04:24, Les Cargill wrote:
>>>> David Brown wrote:
>>>>> On 16/09/18 15:33, Tom Gardner wrote:
>>>>>> On 16/09/18 14:00, David Brown wrote:
>>>> <snip>
>>>>>
>>>>> There are few cases where formal proofs are worth the time and costs
>>>>> involved.  But I like - when possible - to write my code with a view
>>>>> that a formal proof /could/ be made if someone were to go to the
>>>>> effort.  It is rare that you could prove a whole program - but not
>>>>> uncommon that you could (if you wanted to) prove many of the
>>>>> functions involved. Thinking in terms of pre-conditions, invariants,
>>>>> post-conditions, variants, specifications - that is all good stuff
>>>>> for writing solid code, even if you don't do so formally.
>>>>>
>>>>
>>>> If you stick to something akin to the Haskell Actor model and
>>>> everything is an event, it's not too onerous to use combinators to
>>>> generate a significant subset of all messages. With the proper level
>>>> of introspection, you can "prove" the program works to a limit
>>>> enumerated by the test vectors and that it continues to do so as
>>>> changes are made to the code base.
>>>>
>>>
>>> Testing can /never/ prove a program is correct - unless you can
>>> enumerate /all/ inputs and states of your code and go through them
>>> all.  Testing can prove a program is wrong (and it is vital for that),
>>> but it cannot prove a program is correct.
>>>
>>
>> So what you can to is find the meat of what is possible and do that.
>>
>> I must say - I am rather tired of this old warhorse; an appeal to
>> Godel in the form of an economics question. All I would say is
>> that you should test as well as you can. You should use a dragon to
>> fight the dragon.
>
> No one will argue against the benefits of testing as well as you
> reasonably can.  Testing is often good enough to reach the level of
> confidence you want for code.
>
> But you are simply /wrong/ to think that non-exhaustive testing can
> somehow "prove" that code works.
>
"Proof" isn't that interesting outside of certain contexts. By "proof" I mean fully formal verification that it's 100% known to be correct. We want to find a break-even variation on that. We want the cost and the risk to be commensurate. Turns out, just measuring the risk is pretty hard.
> And there are some classes of problem for which testing is extremely
> unlikely to identify the problem.  In particular, for anything involving
> race conditions or timing, you cannot /test/ your way to working code.
Not in harness, but you can prove out the approach being used as a
demonstration and reasoning tool. And it depends on the problem - you
might be able to generate test vectors that get darned close.

Your point is probably "yes, but how close?" :) And it's a good one.

As an example, I had a "one producer/one consumer" buffer. I wasn't
100% sure if it needed semaphores - one producer, one consumer, right?

I thought about it a bit, built a driver loop and was able to prove
that I did indeed need semaphores. To be fair, I was leaning that way
anyway.
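For illustration, here is a reconstruction (mine, not Les's actual
code) of what such a driver loop can look like: a naive
single-producer/single-consumer ring buffer with nothing but volatile
indices, hammered from two threads while the consumer checks for an
intact 0, 1, 2, ... sequence. This has a genuine data race - whether
the check ever fires depends on your compiler's and CPU's reordering,
which is rather the point about testing for races:

    #include <cstdio>
    #include <thread>

    constexpr unsigned N = 256;
    static unsigned buf[N];
    static volatile unsigned head = 0, tail = 0;  // no fences, no atomics

    int main() {
        const unsigned total = 10'000'000;
        std::thread producer([&] {
            for (unsigned i = 0; i < total; ) {
                if ((head + 1) % N != tail) {   // not full
                    buf[head] = i++;            // may be reordered past the
                    head = (head + 1) % N;      // index update by the compiler
                }
            }
        });
        unsigned errors = 0;
        for (unsigned expect = 0; expect < total; ) {
            if (tail != head) {                 // not empty
                if (buf[tail] != expect)
                    ++errors;                   // out-of-order or torn data
                ++expect;
                tail = (tail + 1) % N;
            }
        }
        producer.join();
        std::printf("%u corrupted slots\n", errors);
        return 0;
    }

The repair - acquire/release atomics, semaphores, or disabled
interrupts around the index updates - is a design decision the driver
loop can motivate but not make for you.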
> You have to write the code correctly in the first place, because the
> chances of spotting the microsecond window of disaster in your lab
> testing is tiny.  (I am not saying you need to formally prove the code -
> again, this is all about appropriate balances of time and effort - but
> testing does not help.)
>
Maybe it's bias from seeing stuff online, but I get the feeling that
many people (and I'm pretty sure you, David, are not one of them) just
sort of despair of the whole thing.
>
>>
>> And so if your system is adept at rejecting inputs it cannot expect,
>> then what?
>>
>> It's all about the ability to enforce constraints. Sort your
>> constraints by the ability to do damage.
>>
>>> A functional programming language like Haskell gives you a much better
>>> chance of actually proving the correctness of your code than an
>>> imperative language like C - that is one of the reasons for the
>>> existence of functional programming languages (it is the lack of state
>>> and side-effects that help).  But if you want to prove the correctness
>>> of your Haskell program, you do it by an analysis of the code, or by
>>> deriving correct code in the first place - you don't do it by testing.
>>>
>>
>> That's because we'd like to pretend that "analysis" is some
>> process that's not automatable. Or, at least, not somehow aided by
>> automation.
>
> Analysis is much easier when you have restrictions on what can happen.
You said it man. :)
> In a pure functional programming language, your function takes some
> input and works solely on that input, and the only effect of the code is
> to generate the output.  That means you can analyse it in isolation.  In
> general C code, your function might use or change global state - it all
> quickly becomes a lot more complicated.
>
But the thing you must learn with C is how to constrain the use of
global state.

And for functional - in C.A.R. "Tony" Hoare's lecture on "The Billion
Dollar Mistake", a guy stands up at the end with the message "Haskell
is all fun and games until you invoke the I/O monad" :)
> <snip>
>
>> For what it is worth, If I knew what I was trying to way, I'd say it :)
>>
>
> OK.  If you figure it out, we can take it up again - otherwise I think
> any replies I'd make would add to the confusion (mine, yours, or
> everyone else's).
>
It would be - in my opinion, we're obligated to do everything within
our power to get things as close to correct as possible. And, for some
reason, there isn't that much taught about how to do this, so we're
probably on our own. But the fact remains that we must do this.

This begins with design to provide information to help determine that
things work as expected. It continues with spending time on frameworks,
whether built or bought, to help check things.

--
Les Cargill
On 22/09/18 21:31, David Brown wrote:
> On 22/09/18 12:16, Jacob Sparre Andersen wrote:
>> David Brown wrote:
>>
>>> But you are simply /wrong/ to think that non-exhaustive testing can
>>> somehow "prove" that code works.
>>
>> I hope we all agree on this.
>>
>>> And there are some classes of problem for which testing is extremely
>>> unlikely to identify the problem.  In particular, for anything
>>> involving race conditions or timing, you cannot /test/ your way to
>>> working code.
>>
>> If you know the possible origin of race conditions, you can actually
>> detect them using faked versions of selected functions in an API.  It
>> still doesn't prove anything, and I'm not sure how often it is cost
>> effective, but it is worth being aware of the possibility.
>>
>> We've done it on one legacy project I've worked on.  The system is
>> database-heavy, and has a high risk of problems due to race-conditions
>> on database modifications.  We created a modified version of the
>> database API implementation, where we easily can inject what will
>> appear as competing modifications of the database from other
>> processes.  It doesn't find all the problems, but it easily reveals
>> the worst blunders.
>>
>> I am pretty sure you can do similar things in other cases - like file
>> system interactions - but I'm not sure how often it is worth the
>> effort.  Maybe such a modified test implementation should be
>> distributed with all libraries, where there is a serious risk of
>> race-conditions in using the API.  On the other hand, if you know that
>> much, when you create a library, you might as well try to write the
>> API so it (to the extent possible) lets the user avoid race-conditions
>> completely.
>>
>
> That's an interesting idea.  I have done something a little similar when
> tracing timing problems.  Basically, if your problem occurs when event A
> happens during processing B, then stick a big slow loop in the middle of
> B and the crash becomes far more likely.  (Tests can't prove your code
> has no bugs - but they /can/ prove your code /does/ have bugs, and can
> be helpful in finding them.)
There was a threading library I read about which had a debug/trace
version you could link with. It recorded all thread scheduling based on
its synchronisation primitives. If you hit an error case, you could
replay the same test case and it would replay the exact same scheduling
decisions for you, so you could debug the cause of the error.

Presumably when a thread asked for a resource that was available (but
which wasn't on the faulty run) it would block the thread until the
correct sequence was able to be continued.

I never used it, but that seems like a really good idea.

Clifford Heath.
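As a toy sketch of the record/replay idea (my own illustration, not the
library described above - real implementations use stable logical
thread ids rather than raw std::thread::id, and also record condition
waits, timeouts and so on, all glossed over here):

    #include <condition_variable>
    #include <cstddef>
    #include <mutex>
    #include <thread>
    #include <vector>

    class ReplayMutex {
        std::mutex m_;
        // One global schedule shared by all ReplayMutex instances.
        static inline std::mutex sched_m_;
        static inline std::condition_variable cv_;
        static inline std::vector<std::thread::id> schedule_;
        static inline std::size_t cursor_ = 0;
        static inline bool replaying_ = false;
    public:
        void lock() {
            std::unique_lock<std::mutex> lk(sched_m_);
            if (replaying_) {
                // Block until the log names this thread as the next
                // acquirer, then advance the schedule.
                cv_.wait(lk, [] {
                    return cursor_ < schedule_.size() &&
                           schedule_[cursor_] == std::this_thread::get_id();
                });
                ++cursor_;
                cv_.notify_all();
            } else {
                // Record mode.  (In this toy the logged order can drift
                // from the true acquisition order of m_ - real libraries
                // log inside the primitive itself.)
                schedule_.push_back(std::this_thread::get_id());
            }
            lk.unlock();
            m_.lock();
        }
        void unlock() { m_.unlock(); }

        static void start_replay() {
            std::lock_guard<std::mutex> lk(sched_m_);
            cursor_ = 0;
            replaying_ = true;
        }
    };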
On 22/09/18 19:06, Niklas Holsti wrote:
> On 18-09-22 12:53, David Brown wrote:
>> On 22/09/18 05:15, Les Cargill wrote:
>
> [on testing vs proof and analysis]
>
>> No one will argue against the benefits of testing as well as you
>> reasonably can. Testing is often good enough to reach the level of
>> confidence you want for code.
>>
>> But you are simply /wrong/ to think that non-exhaustive testing can
>> somehow "prove" that code works.
>>
>> And there are some classes of problem for which testing is extremely
>> unlikely to identify the problem. In particular, for anything involving
>> race conditions or timing, you cannot /test/ your way to working code.
>> You have to write the code correctly in the first place, because the
>> chances of spotting the microsecond window of disaster in your lab
>> testing is tiny.
>
> As you probably know, several static-analysis tools have the ability to
> detect possible race conditions, as accesses to unprotected shared
> variables from two or more threads/tasks.
>
> I have used the AdaControl tool for this (and other) analysis of Ada
> code, and found real race conditions.
>
> The tool-based race-condition analyses are most likely imperfect (for
> example, because shared variables are accessed via pointers), but
> nevertheless useful.
>
If you can detect problems via static analysis tools, that is always
good - finding them early means you can fix them early.

Another way to avoid some kinds of problems is through smart use of
programming languages.  C++ "tag" classes are useful here - you can use
them to make some kinds of multitasking errors into type errors.

For example, suppose you have two locks - A and B - and want to be sure
that you can't take lock "B" unless you also have lock "A".  Calls to
function "foo" should only be done when you have lock "A", and calls to
function "bar" should only be made when you have lock "B".

You can make a class "GotLockA" whose constructor acquires lock A and
whose destructor releases it.  This class can actually be empty - it
does not have any values.  (It is /not/ a wrapper around lock objects -
it is a type for a /particular/ lock.)  Function foo then takes a
parameter of type GotLockA (it doesn't do anything with the parameter) -
and now any code that tries to call "foo" without holding lock A is a
compile-time error.  Similarly, you make a class GotLockB whose
constructor requires a GotLockA instance and acquires lock B.  You can
then only call "bar" if you've got lock B, which also implies you have
lock A.

There is work to be done deciding about copying and moving these
classes.  But the principle is that you match the dynamic, run-time
state of locks to static scope and types.
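A minimal sketch of the scheme, assuming lockA and lockB are ordinary
mutexes (and sidestepping the copy/move policy questions by simply
deleting both):

    #include <mutex>

    std::mutex lockA, lockB;

    // Empty "proof" types: holding an instance is compile-time evidence
    // that the corresponding lock is held.
    class GotLockA {
    public:
        GotLockA()  { lockA.lock(); }
        ~GotLockA() { lockA.unlock(); }
        GotLockA(const GotLockA&) = delete;
        GotLockA& operator=(const GotLockA&) = delete;
    };

    class GotLockB {
    public:
        explicit GotLockB(const GotLockA&) { lockB.lock(); }  // B demands A
        ~GotLockB() { lockB.unlock(); }
        GotLockB(const GotLockB&) = delete;
        GotLockB& operator=(const GotLockB&) = delete;
    };

    void foo(const GotLockA&) { /* work that needs lock A */ }
    void bar(const GotLockB&) { /* work that needs lock B (hence A) */ }

    void example(void) {
        GotLockA a;      // acquires lock A
        foo(a);          // OK
        GotLockB b(a);   // acquiring B requires proof of A
        bar(b);          // OK
        // Calling foo() or bar() without the proof objects, or
        // constructing a GotLockB without a GotLockA, will not compile.
    }                    // b then a released here, in reverse order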
On 22/09/18 20:50, Les Cargill wrote:
> David Brown wrote:
>> On 22/09/18 05:15, Les Cargill wrote:
>>> David Brown wrote:
>>>> On 18/09/18 04:24, Les Cargill wrote:
>>>>> David Brown wrote:
>>>>>> On 16/09/18 15:33, Tom Gardner wrote:
>>>>>>> On 16/09/18 14:00, David Brown wrote:
>>>>> <snip>
>>> So what you can to is find the meat of what is possible and do that.
>>>
>>> I must say - I am rather tired of this old warhorse; an appeal to
>>> Godel in the form of an economics question. All I would say is
>>> that you should test as well as you can. You should use a dragon to
>>> fight the dragon.
>>
>> No one will argue against the benefits of testing as well as you
>> reasonably can. Testing is often good enough to reach the level of
>> confidence you want for code.
>>
>> But you are simply /wrong/ to think that non-exhaustive testing can
>> somehow "prove" that code works.
>>
>
> "Proof" isn't that interesting outside of certain contexts. By "proof"
> I mean fully formal verification that it's 100% known to be correct.
>
> We want to find a break-even variation on that. We want the cost and
> the risk to be commensurate.
>
Agreed.
> Turns out, just measuring the risk is pretty hard.
>
Agreed.
>> And there are some classes of problem for which testing is extremely
>> unlikely to identify the problem.  In particular, for anything
>> involving race conditions or timing, you cannot /test/ your way to
>> working code.
>
> Not in harness, but you can prove out the approach being used as a
> demonstration and reasoning tool. And it depend on the problem - you
> might be able to generate test vectors that get darned close.
>
> You point is probably "yes, but how close?" :) And it's a good one.
>
Exactly, yes.

Let's take an example of where people get things wrong.  We have all
seen code like this:

    // Make counter volatile so that it is atomic and safe for interrupts
    // even though we have a 16-bit cpu
    volatile uint32_t counter;

    void timerInterrupt(void) {
        counter++;       // It's volatile - this is safe
    }

    uint32_t readCounterFromMainLoop(void) {
        return counter;  // It's volatile - reading is atomic
    }

Hopefully everyone reading this thread knows what the author's
misunderstanding is here.  Yet we have all seen code written this way.

How much testing do you need before you are likely to catch a problem
here?

If the timer is run at 1000 Hz, you'll get a 16-bit overflow once per
minute.  If you have something like a 10 MHz msp430 processor, you've
got maybe a 0.2 microsecond window where there is a danger while reading
the counter.  Perhaps you read 100 times a second.  That means you spend
1/50,000 of your time in the danger zone.  So every minute, you have a
1/50,000 chance of a disaster.  If my sums are right, you'll have a 50%
chance of trouble after about 35,000 minutes - or 24 days.  And that's
assuming the problem is immediately obvious.

You are /never/ going to catch this in any practical lab testing.  But
if you sell 10,000 units, it /will/ occur in live installations.

So how close is your testing to showing the code is good?  Absolutely
useless.

This is not an unrealistic example.  And you don't even need something
this unlikely.  Boeing made engine controllers that failed if they ran
for something like 600 days without a restart - no one could spot that
problem in testing.  Microsoft's Win 95 crashed reliably after 49 days
when a 32-bit millisecond counter overran - it took /ten years/ to spot
the problem, despite the huge number of systems in use.

There are things in code that you simply have to get right - you
/cannot/ test that they are correct.  (As others have said, you might
catch them with static analysis tools.)
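For completeness, one conventional repair, sketched under the
assumptions above (the ISR is the only writer, it cannot preempt
itself, and the tick rate is far too slow for the counter to wrap
between two adjacent reads) - the other common option is simply to
disable interrupts around the read:

    #include <stdint.h>

    volatile uint32_t counter;   // written only by the timer ISR

    uint32_t readCounterFromMainLoop(void) {
        // On a 16-bit CPU the 32-bit read is two bus accesses, so a
        // single read can be torn by the interrupt.  Two successive
        // reads that agree cannot both be torn the same way at 1 kHz,
        // so a matching pair is a consistent snapshot.
        uint32_t a, b;
        do {
            a = counter;         // volatile: the reads stay in order
            b = counter;
        } while (a != b);        // retry if an increment intervened
        return a;
    }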
>> As an example, I had a "one producer/one consumer" buffer. I wasn't
>> 100% sure if it needed semaphores - one producer, one consumer, right?
>>
>> I thought about it a bit, built a driver loop
>> and was able to prove that I did indeed need semaphores. To be fair,
>> I was leaning that way anyway.
>>
Simplification of your structures and interactions can make it vastly easier to figure out what is correct and what is not.
>> You have to write the code correctly in the first place, because the
>> chances of spotting the microsecond window of disaster in your lab
>> testing is tiny.  (I am not saying you need to formally prove the code
>> - again, this is all about appropriate balances of time and effort -
>> but testing does not help.)
>>
>
> Maybe it's bias from seeing stuff online, but I get the feeling
> that many people (and I'm pretty sure you, David, are not one of them)
> just sort of despair of the whole thing.
I am certainly not in despair! I am merely concerned when people apparently think that good testing is all they need to be sure their code works correctly.
>
>>
>>>
>>> And so if your system is adept at rejecting inputs it cannot expect,
>>> then what?
>>>
>>> It's all about the ability to enforce constraints. Sort your
>>> constraints by the ability to do damage.
>>>
>>>> A functional programming language like Haskell gives you a much better
>>>> chance of actually proving the correctness of your code than an
>>>> imperative language like C - that is one of the reasons for the
>>>> existence of functional programming languages (it is the lack of state
>>>> and side-effects that help).  But if you want to prove the correctness
>>>> of your Haskell program, you do it by an analysis of the code, or by
>>>> deriving correct code in the first place - you don't do it by testing.
>>>>
>>>
>>> That's because we'd like to pretend that "analysis" is some
>>> process that's not automatable. Or, at least, not somehow aided by
>>> automation.
>>
>> Analysis is much easier when you have restrictions on what can happen.
>
> You said it man. :)
>
>> In a pure functional programming language, your function takes some
>> input and works solely on that input, and the only effect of the code
>> is to generate the output.  That means you can analyse it in
>> isolation.  In general C code, your function might use or change
>> global state - it all quickly becomes a lot more complicated.
>>
>
> But the thing you must learn with C is how to constrain the
> use of global state.
>
> And for functional - in C.A.R. Tony Hoare's lecture on "The Billion
> Dollar Mistake", a guy stands up at the end with the message "Haskell is
> all fun and games until you invoke the I/O monad" :)
>
That is the challenge with functional programming languages - for real life code, you usually need some state!
>> <snip>
>>
>>> For what it is worth, If I knew what I was trying to way, I'd say it :)
>>>
>>
>> OK.  If you figure it out, we can take it up again - otherwise I think
>> any replies I'd make would add to the confusion (mine, yours, or
>> everyone else's).
>>
>
> It would be - in my opinion, we're obligated to do everything within our
> power to get things as close to correct as possible. And, for some
> reason, there isn't that much taught about how to do this, so we're
> probably on our own. But the fact remains that we must do this.
>
Agreed.
> This begins with design to provide information to
> help determine that things work as expected. It continues with spending
> time on frameworks, whether built or bought, to help check things.
>
These points are true in themselves - but they miss 90% of the more
important issues.  Leaving the paragraph so unfinished gives entirely
the wrong impression.
On 23/09/18 14:21, David Brown wrote:
> On 22/09/18 20:50, Les Cargill wrote:
>> David Brown wrote:
>>> On 22/09/18 05:15, Les Cargill wrote:
>>>> David Brown wrote:
>>>>> On 18/09/18 04:24, Les Cargill wrote:
>>>>>> David Brown wrote:
>>>>>>> On 16/09/18 15:33, Tom Gardner wrote:
>>>>>>>> On 16/09/18 14:00, David Brown wrote:
>>>>>> <snip>
>
>>>> So what you can to is find the meat of what is possible and do that.
>>>>
>>>> I must say - I am rather tired of this old warhorse; an appeal to
>>>> Godel in the form of an economics question. All I would say is
>>>> that you should test as well as you can. You should use a dragon to
>>>> fight the dragon.
>>>
>>> No one will argue against the benefits of testing as well as you
>>> reasonably can. Testing is often good enough to reach the level of
>>> confidence you want for code.
>>>
>>> But you are simply /wrong/ to think that non-exhaustive testing can
>>> somehow "prove" that code works.
>>>
>>
>> "Proof" isn't that interesting outside of certain contexts. By "proof"
>> I mean fully formal verification that it's 100% known to be correct.
>>
>> We want to find a break-even variation on that. We want the cost and
>> the risk to be commensurate.
>>
>
> Agreed.
>
>> Turns out, just measuring the risk is pretty hard.
>>
>
> Agreed.
>
>>> And there are some classes of problem for which testing is extremely
>>> unlikely to identify the problem. In particular, for anything
>>> involving race conditions or timing, you cannot /test/ your way to
>>> working code.
>>
>> Not in harness, but you can prove out the approach being used as a
>> demonstration and reasoning tool. And it depend on the problem - you
>> might be able to generate test vectors that get darned close.
>>
>> You point is probably "yes, but how close?" :) And it's a good one.
>>
>
> Exactly, yes.
>
> Let's take an example of where people get things wrong.  We have all
> seen code like this:
>
>     // Make counter volatile so that it is atomic and safe for interrupts
>     // even though we have a 16-bit cpu
>     volatile uint32_t counter;
>
>     void timerInterrupt(void) {
>         counter++;       // It's volatile - this is safe
>     }
>
>     uint32_t readCounterFromMainLoop(void) {
>         return counter;  // It's volatile - reading is atomic
>     }
>
> Hopefully everyone reading this thread knows what the author's
> misunderstanding is here.  Yet we have all seen code written this way.
>
> How much testing do you need before you are likely to catch a problem
> here?
>
> If the timer is run at 1000 Hz, you'll get a 16-bit overflow once per
> minute.  If you have something like a 10 MHz msp430 processor, you've
> got maybe a 0.2 microsecond window where there is a danger while reading
> the counter.  Perhaps you read 100 times a second.  That means you spend
> 1/50,000 of your time in the danger zone.  So every minute, you have a
> 1/50,000 chance of a disaster.  If my sums are right, you'll have a 50%
> chance of trouble after about 35,000 minutes - or 24 days.  And that's
> assuming the problem is immediately obvious.
>
> You are /never/ going to catch this in any practical lab testing.  But
> if you sell 10,000 units, it /will/ occur in live installations.
>
> So how close is your testing to showing the code is good?  Absolutely
> useless.
>
> This is not an unrealistic example.  And you don't even need something
> this unlikely.  Boeing made engine controllers that failed if they ran
> for something like 600 days without a restart - no one could spot that
> problem in testing.  Microsoft's Win 95 crashed reliably after 49 days
> when a 32-bit millisecond counter overran - it took /ten years/ to spot
> the problem, despite the huge number of systems in use.
>
> There are things in code that you simply have to get right - you
> /cannot/ test that they are correct.  (As others have said, you might
> catch them with static analysis tools.)
Just so.

And for another example, consider the Patriot missile failure:
http://www-users.math.umn.edu/~arnold/disasters/patriot.html

The other significant problem is that developing good tests is /at
least/ as difficult as developing the application.  It is hard enough to
get developers to write tests that adequately test "happy days" (?daze?)
normal operation.  Getting them to develop good tests for fault
conditions is more difficult.  Hell, many times they don't even
/conceive/ of failure modes, and /actively/ argue "that can't happen,
the framework takes care of it".  Oh, so the framework has solved the
Byzantine Generals' problem, has it?

And too many other examples.
On Sunday, 23 September 2018 17:06:44 UTC+2, Tom Gardner wrote:

> It is hard enough to get developers to write tests that
> adequately test "happy days" (?daze?) normal operation.
> Getting them to develop good tests for fault conditions
> is more difficult.
"And beside the point anyway, since a (good) spec is supposed to cover
all relevant cases to begin with" - replied the developer.

Good tests are a necessary condition for risk management, yet not by
any stretch a sufficient condition for writing quality code.

Julio
