
[OT] I got a JOB!!!

Started by Tim Wescott May 16, 2017
On 5/16/2017 3:51 PM, Tom Gardner wrote:
> The other dysfunctional aspect of unit tests is that,
> while they are very useful when making incremental
> improvements during design, they can be a real
> impediment in a few months/years time. The problem is
> that over time people forget which tests demonstrate
> required properties, and which are merely ensuring
> behaviour of implementation artifacts. At that point
> people are afraid to make changes that break tests,
> even if the tests are unimportant. At that point the
> codebase has become ossified.
But that's true any time you (attempt to) nail down an aspect of a design. Moreso if you don't provide a rationale for WHY the requirement (whether it be from a spec or a test) was imposed. It's like having a law go before a judge and the judge having only the text of the law on which to base his ruling -- he needs context as to the *intent* to more correctly understand it.

I've taken to moving towards "prose" descriptions instead of "legalese specsmanship" in defining how systems "should" work: describe particular examples and how the "ought to's" apply instead of just listing a set of SHALLs and SHALL NOTs. Ditto with regression tests ("WHY is this test here?").

The kinds of gross perversions that a particular piece of software is expected to not just *tolerate* but expeditiously *accommodate* are unheard of in most other disciplines. Like taking a 10W wall wart and deciding it should now support a wider range of input voltages, an additional output, and three times the power rating -- without changing the size of the package or its selling price. Or adding another floor to a preexisting building. Etc.
> Classic anti-patterns warning of that: unit tests on
> getters/setters, and/or changing visibility solely
> to enable unit tests.
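To make the point about recording the WHY concrete, here is a minimal sketch (plain C; the device, the requirement number, and the bug history are invented for illustration) of a regression test that carries its own rationale, so a later maintainer can tell a required property from an implementation artifact:

    /* Hypothetical regression test: the comments record WHY each check
     * exists, so it is clear which behaviour is a requirement and which
     * is merely an artifact of the current implementation. */
    #include <assert.h>
    #include <stdint.h>

    /* Stand-in for the code under test: scale a 10-bit ADC reading to
     * millivolts, 3300 mV full scale. */
    static uint16_t scale_adc(uint16_t raw)
    {
        return (uint16_t)(((uint32_t)raw * 3300u) / 1023u);
    }

    int main(void)
    {
        /* WHY: (invented) requirement SR-041 says full-scale input must
         * report exactly 3300 mV; an earlier rounding bug returned 3296.
         * This pins the requirement, not the implementation. */
        assert(scale_adc(1023u) == 3300u);

        /* WHY: zero input must read zero (no-offset requirement). */
        assert(scale_adc(0u) == 0u);

        return 0;
    }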
On 17/05/17 00:41, Clifford Heath wrote:
> On 17/05/17 09:32, Tim Wescott wrote:
>> On Tue, 16 May 2017 23:51:26 +0100, Tom Gardner wrote:
>>
>>> On 16/05/17 23:11, Tim Wescott wrote:
>>>> On Tue, 16 May 2017 22:05:49 +0100, Tom Gardner wrote:
>>>>
>>>>> On 16/05/17 21:24, Tim Wescott wrote:
>>>>>> On Tue, 16 May 2017 20:17:17 +0000, eric.jacobsen wrote:
>>>>>>
>>>>>>> On Tue, 16 May 2017 13:49:28 -0500, Tim Wescott
>>>>>>> <seemywebsite@myfooter.really> wrote:
>>>>>>>
>>>>>>>> Since this is a newsgroup, and this is news...
>>>>>>>>
>>>>>>>> Wescott Design Services is going into remission, while I pursue a
>>>>>>>> day job. Job title is Software Designer 5 at Planar Systems -- so
>>>>>>>> any circuit design or control systems jones will have to be
>>>>>>>> satisfied by hobby work or on the side.
>>>>>>>
>>>>>>> Software Designer 5? Sounds a little like being in Sector 7-G?
>>>>>>
>>>>>> "Really Senior Embedded Guy".
>>>>>>
>>>>>>>> In the near term I'll be finishing up current work with current
>>>>>>>> customers; in the longer term I'll probably concentrate on the
>>>>>>>> educational videos and maybe hobby stuff.
>>>>>>>>
>>>>>>>> Lots of embedded Linux work in my near future, and possibly TDD
>>>>>>>> proselytizing.
>>>>>>>
>>>>>>> Time Division Duplex?
>>>>>>
>>>>>> Test Driven Design.
>>>>>
>>>>> Be prepared to meet some people that believe X works /because/ all the
>>>>> unit tests for X are passed and the console shows a green light.
>>>>
>>>> Well, yes. The two main good things about TDD for me is that it makes
>>>> me think early about how something really should work, and there are
>>>> finer-grained tests to make sure that if I did something really
>>>> dumbass it gets caught.
>>>>
>>>> Even with TDD, I still find errors, so I don't live under the delusion
>>>> that you can test in quality.
>>>>
>>>>> Usually they have never been introduced to the concept that "you can't
>>>>> test quality into a product".
>>>>> Unit tests developed as part of TDD are highly beneficial, but are not
>>>>> sufficient.
>>>>>
>>>>> But I'm sure you know that!
>>>>
>>>> Yea verily!!
>>>
>>> The other dysfunctional aspect of unit tests is that,
>>> while they are very useful when making incremental improvements during
>>> design, they can be a real impediment in a few months/years time. The
>>> problem is that over time people forget which tests demonstrate required
>>> properties, and which are merely ensuring behaviour of implementation
>>> artifacts. At that point people are afraid to make changes that break
>>> tests, even if the tests are unimportant. At that point the codebase has
>>> become ossified.
>>>
>>> Classic anti-patterns warning of that: unit tests on getters/setters,
>>> and/or changing visibility solely to enable unit tests.
>>
>> That's an interesting point. I haven't been using TDD long enough for
>> that to be an issue. Good to know!
>
> TDD works where it gives you *another way* to state your expectations.
> Testing getter/setters never says more about the getter/setter than
> is said by their declaration, so the tests have zero value.
Er, that's not TDD, that is Unit Tests. To be overly simplistic, TDD is a strategy for generating Unit Tests.
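As a minimal sketch of that distinction (invented names, plain C with assert): the first check merely restates what the getter/setter declarations already say -- the zero-value anti-pattern mentioned above -- while the second states a required behaviour up front, which is the kind of test TDD is meant to drive out:

    #include <assert.h>

    struct motor { int setpoint_rpm; };

    static void motor_set_rpm(struct motor *m, int rpm) { m->setpoint_rpm = rpm; }
    static int  motor_get_rpm(const struct motor *m)    { return m->setpoint_rpm; }

    /* Requirement: requested rpm is clamped to the mechanically safe range. */
    static int clamp_rpm(int rpm)
    {
        if (rpm > 6000) return 6000;
        if (rpm < 0)    return 0;
        return rpm;
    }

    int main(void)
    {
        struct motor m;

        /* Anti-pattern: says nothing the declarations don't already say. */
        motor_set_rpm(&m, 100);
        assert(motor_get_rpm(&m) == 100);

        /* TDD-style: states the required property, written before clamp_rpm. */
        assert(clamp_rpm(9000) == 6000);
        assert(clamp_rpm(-5)   == 0);
        assert(clamp_rpm(3000) == 3000);

        return 0;
    }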
> The important thing is to say "how else can I state this requirement?".
> If you're using truly succinct code, such as strongly-typed Haskell,
> there often is simply *no other way* to describe the expected behavior.
> That's why FP aficionados scoff at TDD zealots. Programming with strong
> types is always better than using TDD.
Strong typing is very beneficial, but is completely insufficient. As one of an infinite number of trivial examples, consider testing for X>Y when you should be testing for Y>X.
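A trivial sketch of that point (hypothetical example, plain C): both orderings of the comparison type-check identically, so the type system cannot tell them apart, while a test with concrete values can:

    #include <assert.h>
    #include <stdbool.h>

    /* Intended behaviour: the payload fits if it is no heavier than the
     * capacity. Writing the comparison the wrong way round would compile
     * just as cleanly. */
    static bool fits(int payload_kg, int capacity_kg)
    {
        return payload_kg <= capacity_kg;
    }

    int main(void)
    {
        assert(fits(10, 50));    /* light payload, large capacity: must fit     */
        assert(!fits(50, 10));   /* heavy payload, small capacity: must not fit */
        return 0;
    }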
On 17/05/17 00:32, Tim Wescott wrote:
> On Tue, 16 May 2017 23:51:26 +0100, Tom Gardner wrote:
>
>> [...]
>>
>> The other dysfunctional aspect of unit tests is that, while they are
>> very useful when making incremental improvements during design, they
>> can be a real impediment in a few months/years time. The problem is
>> that over time people forget which tests demonstrate required
>> properties, and which are merely ensuring behaviour of implementation
>> artifacts. At that point people are afraid to make changes that break
>> tests, even if the tests are unimportant. At that point the codebase
>> has become ossified.
>>
>> Classic anti-patterns warning of that: unit tests on getters/setters,
>> and/or changing visibility solely to enable unit tests.
>
> That's an interesting point. I haven't been using TDD long enough for
> that to be an issue. Good to know!
The simple point to bear in mind is that the results of TDD are only as good as the quality of the tests. Test the wrong/unimportant thing, or don't test important behaviour, and the outcome can be "suboptimal".

That's not a difficult point (to put it mildly!), but it is horrifying how it is ignored by zealots and/or not appreciated by the inexperienced.

The best defense is, to quote one of the two mottoes worth a damn, "Think". No change there, then!
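A small invented illustration of "test the wrong thing and stay green" (plain C): the test exercises the code and passes, but checks an incidental property rather than the behaviour that matters, so the defect survives:

    #include <assert.h>

    /* Supposed to return the average of two sensor readings;
     * the division is missing. */
    static int average(int a, int b)
    {
        return a + b;               /* defect: should be (a + b) / 2 */
    }

    int main(void)
    {
        /* Weak test: only checks that the result is non-negative for
         * non-negative inputs -- true even with the defect, so the
         * suite stays green. */
        assert(average(4, 8) >= 0);

        /* The test that matters would have been:
         *     assert(average(4, 8) == 6);
         * which fails and exposes the defect. */
        return 0;
    }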
On 17/05/17 01:56, Don Y wrote:
> On 5/16/2017 3:51 PM, Tom Gardner wrote:
>> The other dysfunctional aspect of unit tests is that,
>> while they are very useful when making incremental
>> improvements during design, they can be a real
>> impediment in a few months/years time. The problem is
>> that over time people forget which tests demonstrate
>> required properties, and which are merely ensuring
>> behaviour of implementation artifacts. At that point
>> people are afraid to make changes that break tests,
>> even if the tests are unimportant. At that point the
>> codebase has become ossified.
>
> But that's true any time you (attempt to) nail down
> an aspect of a design. Moreso if you don't provide a
> rationale for WHY the requirement (whether it be from a
> spec or a test) was imposed.
Of course. Nonetheless, it happens - and zealots sometimes refuse to admit it.
AT Wednesday 17 May 2017 16:42, Tom Gardner wrote:

> On 17/05/17 01:56, Don Y wrote:
>> On 5/16/2017 3:51 PM, Tom Gardner wrote:
>>> [...]
>>
>> But that's true any time you (attempt to) nail down
>> an aspect of a design. Moreso if you don't provide a
>> rationale for WHY the requirement (whether it be from a
>> spec or a test) was imposed.
>
> Of course. Nonetheless, it happens - and zealots
> sometimes refuse to admit it.
Therefore any good development process demands that requirements (high-level or low-level) must be traceable to system requirements, or must be marked as design decisions. Then you have the WHY.

A test cannot be the source of a requirement. A test should also be traceable to sys-reqs. And hopefully the sys-reqs are good.

-- 
Reinhardt
On 17/05/17 18:35, Tom Gardner wrote:
> On 17/05/17 00:41, Clifford Heath wrote:
>> [...]
>>
>> TDD works where it gives you *another way* to state your expectations.
>> Testing getter/setters never says more about the getter/setter than
>> is said by their declaration, so the tests have zero value.
>
> Er, that's not TDD, that is Unit Tests. To be overly
> simplistic, TDD is a strategy for generating Unit Tests.
Good code, in the best languages, can be read like a spec. When that happens, your code and your test is the same thing, expressed in the same way. Nothing is achieved by writing it twice.
>> The important thing is to say "how else can I state this requirement?".
>> If you're using truly succinct code, such as strongly-typed Haskell,
>> there often is simply *no other way* to describe the expected behavior.
>> That's why FP aficionados scoff at TDD zealots. Programming with strong
>> types is always better than using TDD.
>
> Strong typing is very beneficial, but is completely
> insufficient. As one of an infinite number of trivial
> examples, consider testing for X>Y when you should
> be testing for Y>X.
If you got the code wrong, you'll get the test wrong too. That's what I mean by "another way to describe" expected behaviour.
In article <B0USA.14491$8r1.938@fx42.am4>, spamjunk@blueyonder.co.uk says...
.......
> The simple point to bear in mind is that the results of TDD
> are only as good as the quality of the tests. Test the
> wrong/unimportant thing, or don't test important behaviour,
> and the outcome can be "suboptimal".
>
> That's not a difficult point (to put it mildly!), but it
> is horrifying how it is ignored by zealots and/or not
> appreciated by inexperienced.
>
> The best defense is, to quote one of the two mottoes
> worth a damn, "Think". No change there, then!
The Titanic sank, but I bet nearly all the individual parts passed their unit tests.

Or the video I saw on Dev Humor a while back of a sliding door and the bolt to lock the door fitted the wrong way. Each part passed its unit tests.

-- 
Paul Carpenter | paul@pcserviceselectronics.co.uk
<http://www.pcserviceselectronics.co.uk/>    PC Services
<http://www.pcserviceselectronics.co.uk/LogicCell/>    Logic Gate Education
<http://www.pcserviceselectronics.co.uk/fonts/>    Timing Diagram Font
<http://www.badweb.org.uk/>    For those web sites you hate
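A toy sketch of that failure mode (entirely invented, plain C): each unit is correct against its own spec and passes its own test, but the two units disagree about units of measure, so the assembled system misbehaves:

    #include <assert.h>
    #include <stdio.h>

    /* Unit A: measures the door gap; its author documents the result
     * in millimetres. */
    static int measure_gap(void) { return 30; }          /* 30 mm */

    /* Unit B: decides whether the bolt can engage; its author expects
     * the gap in centimetres. */
    static int bolt_engages(int gap_cm) { return gap_cm <= 5; }

    int main(void)
    {
        /* Unit tests: both pass in isolation. */
        assert(measure_gap() == 30);
        assert(bolt_engages(4) == 1);
        assert(bolt_engages(6) == 0);

        /* Integration: 30 (mm) is fed straight into bolt_engages(),
         * which reads it as 30 cm and refuses to lock a door with a
         * 3 cm gap. */
        printf("bolt engages: %d (expected 1)\n",
               bolt_engages(measure_gap()));
        return 0;
    }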
On 17/05/17 10:03, Clifford Heath wrote:
> On 17/05/17 18:35, Tom Gardner wrote:
>> [...]
>>
>> Er, that's not TDD, that is Unit Tests. To be overly
>> simplistic, TDD is a strategy for generating Unit Tests.
>
> Good code, in the best languages, can be read like a spec.
> When that happens, your code and your test is the same thing,
> expressed in the same way. Nothing is achieved by writing it
> twice.
Sigh. Consider a spec such as "95th percentile latency of less than 10ms". Good luck expressing that in your code; testing it is difficult enough.

More generally, consider that specifications normally deal with what needs to be achieved, and shouldn't specify how it is to be achieved. That is particularly apparent in hardware/software/mechanical systems, where the implementation of required behaviour could be in discrete transistors, HDL, software, or sheets of metal.

Having pointed that out, I know what you are trying to say and it is worth achieving. But in the real world it is never that simple.
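For what it's worth, a rough sketch (plain C; measure_once_us() is a stand-in for timing the real operation) of how such a spec ends up being checked by sampling and measurement rather than being expressed in the code itself:

    #include <assert.h>
    #include <stdlib.h>

    #define SAMPLES 1000

    /* Placeholder: pretend each call times one invocation of the
     * operation, in microseconds. A real harness would measure the
     * actual system under its intended load. */
    static unsigned measure_once_us(void)
    {
        return 2000u + (unsigned)(rand() % 5000);   /* simulated 2..7 ms */
    }

    static int cmp_unsigned(const void *a, const void *b)
    {
        unsigned ua = *(const unsigned *)a, ub = *(const unsigned *)b;
        return (ua > ub) - (ua < ub);
    }

    int main(void)
    {
        unsigned lat[SAMPLES];

        for (int i = 0; i < SAMPLES; ++i)
            lat[i] = measure_once_us();

        qsort(lat, SAMPLES, sizeof lat[0], cmp_unsigned);

        unsigned p95 = lat[(SAMPLES * 95) / 100];   /* 95th percentile sample */
        assert(p95 < 10000u);                       /* requirement: < 10 ms   */

        return 0;
    }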
>>> The important thing is to say "how else can I state this requirement?".
>>> If you're using truly succinct code, such as strongly-typed Haskell,
>>> there often is simply *no other way* to describe the expected behavior.
>>> That's why FP aficionados scoff at TDD zealots. Programming with strong
>>> types is always better than using TDD.
>>
>> Strong typing is very beneficial, but is completely
>> insufficient. As one of an infinite number of trivial
>> examples, consider testing for X>Y when you should
>> be testing for Y>X.
>
> If you got the code wrong, you'll get the test wrong too.
> That's what I mean by "another way to describe" expected behaviour.
Not necessarily. I suggest you read comp.risks for many many many examples where your presumptions are too simplistic in the real world. I get the feeling your experience in this area is with academic problems - which are valuable pedagogical examples, but no more.
On 17/05/17 09:57, Reinhardt Behm wrote:
> AT Wednesday 17 May 2017 16:42, Tom Gardner wrote:
>
>> [...]
>>
>> Of course. Nonetheless, it happens - and zealots
>> sometimes refuse to admit it.
>
> Therefore any good development process demands that requirements
> (high-level or low-level) must be traceable to system requirements,
> or must be marked as design decisions. Then you have the WHY.
> A test cannot be the source of a requirement. A test should also be
> traceable to sys-reqs.
> And hopefully the sys-reqs are good.
We are in violent agreement. But my point is about the XP/Agile/TDD/Unit Test zealots that overstate the benefits of those techniques and ignore their dysfunctional aspects.
On 17/05/17 10:10, Paul wrote:
> In article <B0USA.14491$8r1.938@fx42.am4>, spamjunk@blueyonder.co.uk
> says...
> .......
>> [...]
>>
>> The best defense is, to quote one of the two mottoes
>> worth a damn, "Think". No change there, then!
>
> The Titanic sank, but I bet nearly all the individual parts passed their
> unit tests.
>
> Or the video I saw on Dev Humor a while back of a sliding door and the
> bolt to lock the door fitted the wrong way. Each part passed its unit
> tests.
Yup. None of this is difficult, but it does seem to escape some people :(