Reply by Les Cargill September 28, 2018
David Brown wrote:
> On 23/09/18 19:04, Les Cargill wrote:
>> David Brown wrote:
>>> Let's take an example of where people get things wrong.  We have all
>>> seen code like this:
>>>
>>> // Make counter volatile so that it is atomic and safe for interrupts
>>> // even though we have a 16-bit cpu
>>> volatile uint32_t counter;
>>>
>>> void timerInterrupt(void) {
>>>     counter++;    // It's volatile - this is safe
>>> }
>>>
>>> uint32_t readCounterFromMainLoop(void) {
>>>     return counter;    // It's volatile - reading is atomic
>>> }
>>>
>>> Hopefully everyone reading this thread knows what the author's
>>> misunderstanding is here.  Yet we have all seen code written this way.
>>>
>>> How much testing do you need before you are likely to catch a problem
>>> here?
>>>
>>> If the timer is run at 1000 Hz, you'll get a 16-bit overflow once per
>>> minute.  If you have something like a 10 MHz msp430 processor, you've
>>> got maybe a 0.2 microsecond window where there is a danger while reading
>>> the counter.  Perhaps you read 100 times a second.  That means you spend
>>> 1/50,000 of your time in the danger zone.  So every minute, you have a
>>> 1/50,000 chance of a disaster.  If my sums are right, you'll have a 50%
>>> chance of trouble after about 35,000 minutes - or 24 days.  And that's
>>> assuming the problem is immediately obvious.
>>>
>>> You are /never/ going to catch this in any practical lab testing.  But
>>> if you sell 10,000 units, it /will/ occur in live installations.
>>>
>> Excellent example. It's the sort of thing we would hope would be
>> covered in an undergraduate's section on semaphores.
>>
>>> So how close is your testing at showing the code is good?  Absolutely
>>> useless.
>>>
>> Oh, I think either of us could devise a mechanism to test it :)
>>
> /How/ would you test for this occurring, in a way that does not imply
> enough understanding to know how to write the code so that the problem
> never occurs?
Crank up a couple of high-frequency ISRs, plus a bit of boilerplate to steer them and detect the failure. In the end, though, that specific case is one for design.
> There is no point in knowing how to test for a bug that you would never
> have written in the first place.
Ah, but we're looking for unknown unknowns with testing.

--
Les Cargill
Reply by David Brown September 24, 2018
On 24/09/18 12:53, Jacob Sparre Andersen wrote:
> David Brown wrote:
>> There is no point in knowing how to test for a bug that you would
>> never have written in the first place.
>
> Well. We are human, so we may occasionally make mistakes. (I have
> enough experience to know that I can make errors I never really would
> make.) There is also the matter of protecting the construct against
> later (incorrect) modification.
Oh, we can all make mistakes. But some mistakes are easier to avoid than others. I can appreciate that some programmers don't realise that "volatile" does not mean "atomic", and will - as in my example - not realise that reading a 32-bit value on a 16-bit cpu is not atomic. (Even on a 32-bit cpu, changing the value is unlikely to be atomic - but in the example, the change was safe.)

And people can make mistakes even when they /do/ know about such things - perhaps code was ported from a 32-bit system to a 16-bit system, or the type was originally 16-bit and later changed to 32-bit.

However, I find it hard to believe that someone would know a good way to test for such mistakes in the code without also knowing how to avoid them - or that someone would think that this variable needs a test, without also thinking that this variable needs to be read more carefully.
Reply by Jacob Sparre Andersen September 24, 2018
David Brown wrote:

> There is no point in knowing how to test for a bug that you would
> never have written in the first place.
Well. We are human, so we may occasionally make mistakes. (I have enough experience to know that I can make errors I never really would make.) There is also the matter of protecting the construct against later (incorrect) modification.

Greetings,

Jacob
--
"make it easy for folks to make changes that remain consistent with the
documentation and discourage them from those changes that would result
in the comments being invalidated"
Reply by Tom Gardner September 23, 2018
On 23/09/18 20:05, julio@diegidio.name wrote:
> On Sunday, 23 September 2018 19:20:35 UTC+2, Tom Gardner wrote:
>> On 23/09/18 16:56, julio@diegidio.name wrote:
>>> On Sunday, 23 September 2018 17:49:17 UTC+2, Tom Gardner wrote:
>>>> On 23/09/18 16:41, julio@diegidio.name wrote:
>>>>> On Sunday, 23 September 2018 17:06:44 UTC+2, Tom Gardner wrote:
>>>>>
>>>>>> It is hard enough to get developers to write tests that
>>>>>> adequately test "happy days" (?daze?) normal operation.
>>>>>> Getting them to develop good tests for fault conditions
>>>>>> is more difficult.
>>>>>
>>>>> And beside the point anyway, while a (good) spec is supposed to cover all
>>>>> relevant cases to begin with: replied the developer.
>>>>
>>>> Good luck getting the customer to specify all the relevant
>>>> fault conditions!
>>>
>>> That's just not the customer's job! It's rather called (good) analysis:
>>> replied again the developer, though conscious of the decades of relentless
>>> demolition campaigns of analysis to begin with... i.e. of the very premise
>>> for doing any job properly (and for taking responsibility for results).
>>
>> Ah. Someone that hasn't experienced the religious
>> varieties of XP/Agile methodology!
>
> I hope you are kidding, or the one who is still not doing the proverbial
> two plus two is just you.
Regrettably I'm not kidding, but the emphasis in my statement is on "religious". Agile/XP - when applied correctly and sensibly - does have some advantages and can work.
>> Apart from the religious zealots, is /anyone/ surprised
>> that XP/Agile has some disadvantages?
>
> That's too weak even as an euphemism.
>
> The so called democratisation of programming,
Democratisation isn't the word I would use. I would probably use "downskilling". The problem is that there is a limited supply of people that are skilled and can think and have experience. And aren't too expensive.
> when suddenly from being few
> top notch professionals we were flooded by hordes of clueless bullshitters,
> was immediately followed by the vendors and then the consultancies and then
> the self-appointed gurus launching their band-wagons:
Exactly. Full stop. Those are the <expletive deleteds> that push the religion, so they can say "oh no wonder it failed, you only did 11 of the 12 necessary things". That isn't limited to programming. Politicians are adept at it (e.g. excuses for Brexit being a clusterfuck), and town planners (e.g. "shared space"/"reduced demarcation" zones).
> initially as pure tool
> based development, later as the nonsense that they have dared call agile (true
> agility is a good thing but it is just something else). And these speculations
> have not just been selling broken solutions to an entire industry for decades
> now, these speculations have systematically denied and lied about the very state
> of the art since day one, because only by defaming and lying about the state of
> the art they could ever get away with so much unmitigated crap! Indeed, sorry
> for all this nonsense: I am a "just" a developer... Yet people keep buying
> into it, because so is the human race nowadays: demented.
Yes.
>> Main technical problem in uninteresting jobs :)
>
> You choose your job... The gigantic technical and social and economic costs
> are in the complete fuck up of the entire software industry. And we all pay.
All but one of my jobs were very interesting; somehow I managed to make some luck for myself :)
Reply by September 23, 2018
On Sunday, 23 September 2018 19:20:35 UTC+2, Tom Gardner  wrote:
> On 23/09/18 16:56, julio@diegidio.name wrote:
>> On Sunday, 23 September 2018 17:49:17 UTC+2, Tom Gardner wrote:
>>> On 23/09/18 16:41, julio@diegidio.name wrote:
>>>> On Sunday, 23 September 2018 17:06:44 UTC+2, Tom Gardner wrote:
>>>>
>>>>> It is hard enough to get developers to write tests that
>>>>> adequately test "happy days" (?daze?) normal operation.
>>>>> Getting them to develop good tests for fault conditions
>>>>> is more difficult.
>>>>
>>>> And beside the point anyway, while a (good) spec is supposed to cover all
>>>> relevant cases to begin with: replied the developer.
>>>
>>> Good luck getting the customer to specify all the relevant
>>> fault conditions!
>>
>> That's just not the customer's job! It's rather called (good) analysis:
>> replied again the developer, though conscious of the decades of relentless
>> demolition campaigns of analysis to begin with... i.e. of the very premise
>> for doing any job properly (and for taking responsibility for results).
>
> Ah. Someone that hasn't experienced the religious
> varieties of XP/Agile methodology!
I hope you are kidding, or the one who is still not doing the proverbial two plus two is just you.
> Apart from the religious zealots, is /anyone/ surprised
> that XP/Agile has some disadvantages?
That's too weak even as an euphemism.

The so called democratisation of programming, when suddenly from being few top notch professionals we were flooded by hordes of clueless bullshitters, was immediately followed by the vendors and then the consultancies and then the self-appointed gurus launching their band-wagons: initially as pure tool based development, later as the nonsense that they have dared call agile (true agility is a good thing but it is just something else). And these speculations have not just been selling broken solutions to an entire industry for decades now, these speculations have systematically denied and lied about the very state of the art since day one, because only by defaming and lying about the state of the art they could ever get away with so much unmitigated crap! Indeed, sorry for all this nonsense: I am a "just" a developer... Yet people keep buying into it, because so is the human race nowadays: demented.
> Main technical problem in uninteresting jobs :)
You choose your job... The gigantic technical and social and economic costs are in the complete fuck up of the entire software industry. And we all pay.

Julio
Reply by David Brown September 23, 2018
On 23/09/18 19:04, Les Cargill wrote:
> David Brown wrote:
>> Let's take an example of where people get things wrong.  We have all
>> seen code like this:
>>
>> // Make counter volatile so that it is atomic and safe for interrupts
>> // even though we have a 16-bit cpu
>> volatile uint32_t counter;
>>
>> void timerInterrupt(void) {
>>     counter++;    // It's volatile - this is safe
>> }
>>
>> uint32_t readCounterFromMainLoop(void) {
>>     return counter;    // It's volatile - reading is atomic
>> }
>>
>> Hopefully everyone reading this thread knows what the author's
>> misunderstanding is here.  Yet we have all seen code written this way.
>>
>> How much testing do you need before you are likely to catch a problem
>> here?
>>
>> If the timer is run at 1000 Hz, you'll get a 16-bit overflow once per
>> minute.  If you have something like a 10 MHz msp430 processor, you've
>> got maybe a 0.2 microsecond window where there is a danger while reading
>> the counter.  Perhaps you read 100 times a second.  That means you spend
>> 1/50,000 of your time in the danger zone.  So every minute, you have a
>> 1/50,000 chance of a disaster.  If my sums are right, you'll have a 50%
>> chance of trouble after about 35,000 minutes - or 24 days.  And that's
>> assuming the problem is immediately obvious.
>>
>> You are /never/ going to catch this in any practical lab testing.  But
>> if you sell 10,000 units, it /will/ occur in live installations.
>>
> Excellent example. It's the sort of thing we would hope would be
> covered in an undergraduate's section on semaphores.
>
>> So how close is your testing at showing the code is good?  Absolutely
>> useless.
>>
> Oh, I think either of us could devise a mechanism to test it :)
/How/ would you test for this occurring, in a way that does not imply enough understanding to know how to write the code so that the problem never occurs? There is no point in knowing how to test for a bug that you would never have written in the first place.
Reply by Tom Gardner September 23, 2018
On 23/09/18 18:04, Les Cargill wrote:
> David Brown wrote:
>> On 22/09/18 20:50, Les Cargill wrote:
>>> David Brown wrote:
>>>> On 22/09/18 05:15, Les Cargill wrote:
>>>>> David Brown wrote:
>>>>>> On 18/09/18 04:24, Les Cargill wrote:
>>>>>>> David Brown wrote:
>>>>>>>> On 16/09/18 15:33, Tom Gardner wrote:
>>>>>>>>> On 16/09/18 14:00, David Brown wrote:
>>>>>>> <snip>
>>
>>>>> So what you can to is find the meat of what is possible and do that.
>>>>>
>>>>> I must say - I am rather tired of this old warhorse; an appeal to
>>>>> Godel in the form of an economics question. All I would say is
>>>>> that you should test as well as you can. You should use a dragon to
>>>>> fight the dragon.
>>>>
>>>> No one will argue against the benefits of testing as well as you
>>>> reasonably can.  Testing is often good enough to reach the level of
>>>> confidence you want for code.
>>>>
>>>> But you are simply /wrong/ to think that non-exhaustive testing can
>>>> somehow "prove" that code works.
>>>>
>>> "Proof" isn't that interesting outside of certain contexts. By "proof"
>>> I mean fully formal verification that it's 100% known to be correct.
>>>
>>> We want to find a break-even variation on that. We want the cost and
>>> the risk to be commensurate.
>>>
>> Agreed.
>>
>>> Turns out, just measuring the risk is pretty hard.
>>>
>> Agreed.
>>
>>>> And there are some classes of problem for which testing is extremely
>>>> unlikely to identify the problem.  In particular, for anything
>>>> involving race conditions or timing, you cannot /test/ your way to
>>>> working code.
>>>
>>> Not in harness, but you can prove out the approach being used as a
>>> demonstration and reasoning tool. And it depend on the problem - you
>>> might be able to generate test vectors that get darned close.
>>>
>>> You point is probably "yes, but how close?" :)  And it's a good one.
>>>
>> Exactly, yes.
>>
>> Let's take an example of where people get things wrong.  We have all
>> seen code like this:
>>
>> // Make counter volatile so that it is atomic and safe for interrupts
>> // even though we have a 16-bit cpu
>> volatile uint32_t counter;
>>
>> void timerInterrupt(void) {
>>     counter++;    // It's volatile - this is safe
>> }
>>
>> uint32_t readCounterFromMainLoop(void) {
>>     return counter;    // It's volatile - reading is atomic
>> }
>>
>> Hopefully everyone reading this thread knows what the author's
>> misunderstanding is here.  Yet we have all seen code written this way.
>>
>> How much testing do you need before you are likely to catch a problem here?
>>
>> If the timer is run at 1000 Hz, you'll get a 16-bit overflow once per
>> minute.  If you have something like a 10 MHz msp430 processor, you've
>> got maybe a 0.2 microsecond window where there is a danger while reading
>> the counter.  Perhaps you read 100 times a second.  That means you spend
>> 1/50,000 of your time in the danger zone.  So every minute, you have a
>> 1/50,000 chance of a disaster.  If my sums are right, you'll have a 50%
>> chance of trouble after about 35,000 minutes - or 24 days.  And that's
>> assuming the problem is immediately obvious.
>>
>> You are /never/ going to catch this in any practical lab testing.  But
>> if you sell 10,000 units, it /will/ occur in live installations.
>>
> Excellent example. It's the sort of thing we would hope would be
> covered in an undergraduate's section on semaphores.
Yes, one would /hope/ that. Expect it, even. Unfortunately in the real world... I've been unpleasantly surprised by some of the people that were hired before my time.
>> So how close is your testing at showing the code is good?  Absolutely
>> useless.
>>
> Oh, I think either of us could devise a mechanism to test it :)
Indeed, but that presumes the possibility is understood. And that's becoming more difficult where people "design" via stackexchange+cut'n'paste for the framework du jour (and du project). And if the tests show a "green light", that is taken as proving it is working. Seriously :(
>> This is not an unrealistic example.  And you don't even need something
>> this unlikely.  Boeing made engine controllers that failed if they ran
>> for something like 600 days without a restart - no one could spot that
>> problem in testing.  Microsoft's Win 95 crashed reliably after 49 days
>> when a 32-bit millisecond counter overran - it took /ten years/ to spot
>> the problem, despite the huge numbers of systems in use.
>>
> I attribute those to "the fog of war". If anyone had been afforded the
> time to think about them properly, they would have found them (by
> inspection) but the phone rings, you have meetings, yadda yadda.
XP/Agile leaves it up to the customer to decide whether there is time. Guess what happens next?
Reply by Tom Gardner September 23, 2018
On 23/09/18 16:56, julio@diegidio.name wrote:
> On Sunday, 23 September 2018 17:49:17 UTC+2, Tom Gardner wrote:
>> On 23/09/18 16:41, julio@diegidio.name wrote:
>>> On Sunday, 23 September 2018 17:06:44 UTC+2, Tom Gardner wrote:
>>>
>>>> It is hard enough to get developers to write tests that
>>>> adequately test "happy days" (?daze?) normal operation.
>>>> Getting them to develop good tests for fault conditions
>>>> is more difficult.
>>>
>>> And beside the point anyway, while a (good) spec is supposed to cover all
>>> relevant cases to begin with: replied the developer.
>>
>> Good luck getting the customer to specify all the relevant
>> fault conditions!
>
> That's just not the customer's job! It's rather called (good) analysis:
> replied again the developer, though conscious of the decades of relentless
> demolition campaigns of analysis to begin with... i.e. of the very premise
> for doing any job properly (and for taking responsibility for results).
Ah. Someone that hasn't experienced the religious varieties of XP/Agile methodology!

There the customer is king, and they define what's "good enough" even when they can't understand the consequences and won't take responsibility :(

Apart from the religious zealots, is /anyone/ surprised that XP/Agile has some disadvantages?
>> I've seen too many bad and inadequate tests.
>
> If that's your main problem, you haven't seen nearly enough...
Main technical problem in uninteresting jobs :)
> HTH (and EOD),
>
> Julio
Reply by Les Cargill September 23, 2018
David Brown wrote:
> On 22/09/18 20:50, Les Cargill wrote:
>> David Brown wrote:
>>> On 22/09/18 05:15, Les Cargill wrote:
>>>> David Brown wrote:
>>>>> On 18/09/18 04:24, Les Cargill wrote:
>>>>>> David Brown wrote:
>>>>>>> On 16/09/18 15:33, Tom Gardner wrote:
>>>>>>>> On 16/09/18 14:00, David Brown wrote:
>>>>>> <snip>
>
>>>> So what you can to is find the meat of what is possible and do that.
>>>>
>>>> I must say - I am rather tired of this old warhorse; an appeal to
>>>> Godel in the form of an economics question. All I would say is
>>>> that you should test as well as you can. You should use a dragon to
>>>> fight the dragon.
>>>
>>> No one will argue against the benefits of testing as well as you
>>> reasonably can.  Testing is often good enough to reach the level of
>>> confidence you want for code.
>>>
>>> But you are simply /wrong/ to think that non-exhaustive testing can
>>> somehow "prove" that code works.
>>>
>> "Proof" isn't that interesting outside of certain contexts. By "proof"
>> I mean fully formal verification that it's 100% known to be correct.
>>
>> We want to find a break-even variation on that. We want the cost and
>> the risk to be commensurate.
>>
> Agreed.
>
>> Turns out, just measuring the risk is pretty hard.
>>
> Agreed.
>
>>> And there are some classes of problem for which testing is extremely
>>> unlikely to identify the problem.  In particular, for anything
>>> involving race conditions or timing, you cannot /test/ your way to
>>> working code.
>>
>> Not in harness, but you can prove out the approach being used as a
>> demonstration and reasoning tool. And it depend on the problem - you
>> might be able to generate test vectors that get darned close.
>>
>> You point is probably "yes, but how close?" :)  And it's a good one.
>>
> Exactly, yes.
>
> Let's take an example of where people get things wrong.  We have all
> seen code like this:
>
> // Make counter volatile so that it is atomic and safe for interrupts
> // even though we have a 16-bit cpu
> volatile uint32_t counter;
>
> void timerInterrupt(void) {
>     counter++;    // It's volatile - this is safe
> }
>
> uint32_t readCounterFromMainLoop(void) {
>     return counter;    // It's volatile - reading is atomic
> }
>
> Hopefully everyone reading this thread knows what the author's
> misunderstanding is here.  Yet we have all seen code written this way.
>
> How much testing do you need before you are likely to catch a problem here?
>
> If the timer is run at 1000 Hz, you'll get a 16-bit overflow once per
> minute.  If you have something like a 10 MHz msp430 processor, you've
> got maybe a 0.2 microsecond window where there is a danger while reading
> the counter.  Perhaps you read 100 times a second.  That means you spend
> 1/50,000 of your time in the danger zone.  So every minute, you have a
> 1/50,000 chance of a disaster.  If my sums are right, you'll have a 50%
> chance of trouble after about 35,000 minutes - or 24 days.  And that's
> assuming the problem is immediately obvious.
>
> You are /never/ going to catch this in any practical lab testing.  But
> if you sell 10,000 units, it /will/ occur in live installations.
Excellent example. It's the sort of thing we would hope would be covered in an undergraduate's section on semaphores.
> So how close is your testing at showing the code is good?  Absolutely
> useless.
Oh, I think either of us could devise a mechanism to test it :)
> This is not an unrealistic example.  And you don't even need something
> this unlikely.  Boeing made engine controllers that failed if they ran
> for something like 600 days without a restart - no one could spot that
> problem in testing.  Microsoft's Win 95 crashed reliably after 49 days
> when a 32-bit millisecond counter overran - it took /ten years/ to spot
> the problem, despite the huge numbers of systems in use.
I attribute those to "the fog of war". If anyone had been afforded the time to think about them properly, they would have found them (by inspection) but the phone rings, you have meetings, yadda yadda.
> There are things in code that you simply have to get right - you
> /cannot/ test that they are correct.  (As others have said, you might
> catch them with static analysis tools.)
>
>> As an example, I had a "one producer/one consumer" buffer. I wasn't
>> 100% sure if it needed semaphores - one producer, one consumer, right?
>>
>> I thought about it a bit, built a driver loop
>> and was able to prove that I did indeed need semaphores. To be fair,
>> I was leaning that way anyway.
>>
> Simplification of your structures and interactions can make it vastly
> easier to figure out what is correct and what is not.
Very much so.
>>> You have to write the code correctly in the first place, because the
>>> chances of spotting the microsecond window of disaster in your lab
>>> testing is tiny.  (I am not saying you need to formally prove the code
>>> - again, this is all about appropriate balances of time and effort -
>>> but testing does not help.)
>>>
>> Maybe it's bias from seeing stuff online, but I get the feeling
>> that many people (and I'm pretty sure you, David, are not one of them)
>> just sort of despair of the whole thing.
>
> I am certainly not in despair!  I am merely concerned when people
> apparently think that good testing is all they need to be sure their
> code works correctly.
Oh, I didn't consider you to be in despair. Others, however, seem to be.
>>>> And so if your system is adept at rejecting inputs it cannot expect,
>>>> then what?
>>>>
>>>> It's all about the ability to enforce constraints. Sort your
>>>> constraints by the ability to do damage.
>>>>
>>>>> A functional programming language like Haskell gives you a much better
>>>>> chance of actually proving the correctness of your code than an
>>>>> imperative language like C - that is one of the reasons for the
>>>>> existence of functional programming languages (it is the lack of state
>>>>> and side-effects that help).  But if you want to prove the correctness
>>>>> of your Haskell program, you do it by an analysis of the code, or by
>>>>> deriving correct code in the first place - you don't do it by testing.
>>>>>
>>>> That's because we'd like to pretend that "analysis" is some
>>>> process that's not automatable. Or, at least, not somehow aided by
>>>> automation.
>>>
>>> Analysis is much easier when you have restrictions on what can happen.
>>
>> You said it man. :)
>>
>>> In a pure functional programming language, your function takes some
>>> input and works solely on that input, and the only effect of the code
>>> is to generate the output.  That means you can analyse it in
>>> isolation.  In general C code, your function might use or change
>>> global state - it all quickly becomes a lot more complicated.
>>>
>> But the thing you must learn with C is how to constrain the
>> use of global state.
>>
>> And for functional - in C.A.R. Tony Hoare's lecture on "The Billion
>> Dollar Mistake", a guy stands up at the end with the message "Haskell is
>> all fun and games until you invoke the I/O monad" :)
>>
> That is the challenge with functional programming languages - for real
> life code, you usually need some state!
>
>>> <snip>
>>>
>>>> For what it is worth, If I knew what I was trying to way, I'd say it :)
>>>>
>>> OK.  If you figure it out, we can take it up again - otherwise I think
>>> any replies I'd make would add to the confusion (mine, yours, or
>>> everyone else's).
>>>
>> It would be - in my opinion, we're obligated to do everything within our
>> power to get things as close to correct as possible. And, for some
>> reason, there isn't that much taught about how to do this, so we're
>> probably on our own. But the fact remains that we must do this.
>>
> Agreed.
>
>> This begins with design to provide information to
>> help determine that things work as expected. It continues with spending
>> time on frameworks, whether built or bought, to help check things.
>>
> These points are true in themselves - but miss 90% of the more important
> issues.  If you leave the paragraph so unfinished, it gives entirely the
> wrong impression.
I expect that anything beyond those quite general statements would require more domain-specific information.

--
Les Cargill
Reply by September 23, 2018
On Sunday, 23 September 2018 17:49:17 UTC+2, Tom Gardner  wrote:
> On 23/09/18 16:41, julio@diegidio.name wrote:
>> On Sunday, 23 September 2018 17:06:44 UTC+2, Tom Gardner wrote:
>>
>>> It is hard enough to get developers to write tests that
>>> adequately test "happy days" (?daze?) normal operation.
>>> Getting them to develop good tests for fault conditions
>>> is more difficult.
>>
>> And beside the point anyway, while a (good) spec is supposed to cover all
>> relevant cases to begin with: replied the developer.
>
> Good luck getting the customer to specify all the relevant
> fault conditions!
That's just not the customer's job! It's rather called (good) analysis: replied again the developer, though conscious of the decades of relentless demolition campaigns of analysis to begin with... i.e. of the very premise for doing any job properly (and for taking responsibility for results).
> I've seen too many bad and inadequate tests.
If that's your main problem, you haven't seen nearly enough...

HTH (and EOD),

Julio