
Software's evil

Started by Lanarcam March 27, 2008
An extract from "Embedded systems dictionary" by Jack Ganssle and Michael
Barr:

"Unfortunately, as hardware design more and more resembles
software development, the hardware inherits all of software's evils:
late delivery, bugs and misinterpreted specs."

If that is the case, the problem of software doesn't lie in
the intrinsic properties of software components but
in the development process.

Hardware blocks developed as parts of FPGAs
have the same functional characteristics as equivalent discrete
hardware parts. If they suffer from software's evils, this is
only due to the development process.

Is it possible to improve the development process
of hardware blocks so that they don't suffer from these evils,
and would it be possible to apply these techniques to
software? Is it a problem of costs only?


Lanarcam wrote:
> An extract from "Embedded systems dictionary" by Jack Ganssle and Michael
> Barr:
>
> "Unfortunately, as hardware design more and more resembles
> software development, the hardware inherits all of software's evils:
> late delivery, bugs and misinterpreted specs."
>
> If that is the case, the problem of software doesn't lie in
> the intrinsic properties of software components but
> in the development process.
I think it is simpler than that. As designs become more complex, you inevitably get more late deliveries, bugs and misinterpreted specs. This applies equally well to software and hardware.

-- Pertti
Pertti Kellomäki wrote:
> Lanarcam wrote:
>> An extract from "Embedded systems dictionary" by Jack Ganssle and
>> Michael Barr:
>>
>> "Unfortunately, as hardware design more and more resembles
>> software development, the hardware inherits all of software's evils:
>> late delivery, bugs and misinterpreted specs."
>>
>> If that is the case, the problem of software doesn't lie in
>> the intrinsic properties of software components but
>> in the development process.
>
> I think it is simpler than that. As designs become more complex,
> you inevitably get more late deliveries, bugs and misinterpreted specs.
> This applies equally well to software and hardware.
I think this is only part of the problem. A microcontroller is a complex design and generally without bugs. If a design is complex, you can decompose it into manageable units with well specified interfaces. This requires time and rigour and is not compatible with tight schedules and scarce resources.
On Mar 27, 6:42 am, Lanarcam <lanarc...@yahoo.fr> wrote:

> > I think it is simpler than that. As designs become more complex,
> > you inevitably get more late deliveries, bugs and misinterpreted specs.
> > This applies equally well to software and hardware.
>
> If a design is complex, you can decompose it into manageable units
> with well specified interfaces. This requires time and
> rigour and is not compatible with tight schedules and
> scarce resources.
Volumes and volumes have been written on this. But in general, consider that the tools available for software verification are not as mature as, and are CONSIDERABLY more labor-intensive than, the tools available for hardware verification. Further consider that the costs to be invested up front in verifying something "hard" (silicon, spacecraft, etc.) vs. something "soft" (which can be field-updated for free) represent very different value propositions.

Consider also that a company designing a microcontroller is designing a general-purpose device that must be precisely characterized in order to be saleable. Would you buy a micro if the datasheet said that every parameter was TBD? A product that uses the microcontroller, on the other hand, is going to have a limited range of use and will not, as a rule, be as completely characterized - in fact it's unlikely to have any characterization data at all outside the intended use cases.
"Lanarcam" <lanarcam1@yahoo.fr> wrote in message 
news:47eb7ad1$0$27901$426a74cc@news.free.fr...
> Pertti Kellomäki wrote:
>> Lanarcam wrote:
>>> An extract from "Embedded systems dictionary" by Jack Ganssle and
>>> Michael Barr:
>>>
>>> "Unfortunately, as hardware design more and more resembles
>>> software development, the hardware inherits all of software's evils:
>>> late delivery, bugs and misinterpreted specs."
>>>
>>> If that is the case, the problem of software doesn't lie in
>>> the intrinsic properties of software components but
>>> in the development process.
>>
>> I think it is simpler than that. As designs become more complex,
>> you inevitably get more late deliveries, bugs and misinterpreted specs.
>> This applies equally well to software and hardware.
>
> I think this is only part of the problem. A microcontroller
> is a complex design and generally without bugs. If a design
> is complex, you can decompose it into manageable units
> with well specified interfaces. This requires time and
> rigour and is not compatible with tight schedules and
> scarce resources.
You are right about that, though it will only be possible to get a 100% guarantee if you can manage to simulate every possible state in which something can be. That may be possible with some simple combinatorial logic circuit, but it rapidly becomes impossible as your design grows more complex.

And testing only manageable units is not always good enough either. Suppose one of them has an error which does not show up when you examine it, because it only alters some bit somewhere else in memory (and all "replies" you get from your unit when testing it are what you expect them to be). However, when you run the system as a whole, that bit might be part of a variable in another manageable unit which also turned out to be OK when you tested it in isolation. So you would have to test the units separately and together, creating every possible state that your system might ever stumble upon, and that is impossible.

I do not know if it is true in all countries, but there are many countries in which certain electronic devices, like anaesthesiological equipment used to keep a patient asleep during an operation, are not allowed to have software in the circuits that control vital functions. This does not rule out the fact that many errors remain in software because no "as-decent-as-possible" testing is done.

Yours sincerely,
Rene
larwe wrote:
> But in general, consider
> that the tools available for software verification are not as mature
> as, and CONSIDERABLY more labor-intensive than, the tools available for
> hardware verification.
One of the biggest differences between verifying software and verifying hardware is that software as a rule deals with dynamically allocated, complex data structures. Hardware, while complicated, is at least fixed.

One should also keep in mind the level of granularity. Verifying a microprocessor basically means verifying that if the device starts from a state within its specs and executes one instruction, then the processor ends up in the state that the spec prescribes. I have been in formal verification myself, so I don't mean to imply that this is a trivial task. However, the software equivalent would be to verify that each function satisfies its postcondition if the arguments satisfy the precondition. But the properties one really wants to verify are much more abstract, such as "does this piece of software land my plane safely?".

-- Pertti
On Thu, 27 Mar 2008 10:19:23 +0100, "Lanarcam" <lanarcam1@yahoo.fr>
wrote:

>An extract from "Embedded systems dictionary" by Jack Ganssle and Michael
>Barr:
>
>"Unfortunately, as hardware design more and more resembles
>software development, the hardware inherits all of software's evils:
>late delivery, bugs and misinterpreted specs."
>
>If that is the case, the problem of software doesn't lie in
>the intrinsic properties of software components but
>in the development process.
>
>Hardware blocks developed as parts of FPGAs
>have the same functional characteristics as equivalent discrete
>hardware parts. If they suffer from software's evils, this is
>only due to the development process.
Even if the same quality of people were working on them as on the discrete parts, and they were subjected to equivalent testing and field evaluation by as many customers, there would still be more problems, because there are more variables between the VHDL and the final functionality.
>Is it possible to improve the development process
>of hardware blocks so that they don't suffer from these evils,
>and would it be possible to apply these techniques to
>software? Is it a problem of costs only?
I think it's mostly a matter of costs (including time to market). If you're willing to do a spacecraft level of documentation and design, then it can be pretty much perfect the first time it escapes out the door, but it will cost orders of magnitude more money and take many times longer.

In particular, the cost of doing a very high quality design seems to increase very rapidly with complexity - maybe the square or the cube of complexity. Something 10x-50x more complex than a program that could be created bug-free in a few months by a single person might take 100-500 people and 3-10 years.

Best regards,
Spehro Pefhany

--
"it's the network..."                "The Journey is the reward"
speff@interlog.com       Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog  Info for designers: http://www.speff.com
larwe wrote:
> On Mar 27, 6:42 am, Lanarcam <lanarc...@yahoo.fr> wrote:
>>> I think it is simpler than that. As designs become more complex,
>>> you inevitably get more late deliveries, bugs and misinterpreted specs.
>>> This applies equally well to software and hardware.
>>
>> If a design is complex, you can decompose it into manageable units
>> with well specified interfaces. This requires time and
>> rigour and is not compatible with tight schedules and
>> scarce resources.
>
> Volumes and volumes have been written on this. But in general, consider
> that the tools available for software verification are not as mature
> as, and are CONSIDERABLY more labor-intensive than, the tools available
> for hardware verification. Further consider that the costs to be
> invested up front in verifying something "hard" (silicon, spacecraft,
> etc.) vs. something "soft" (which can be field-updated for free)
> represent very different value propositions.
This is certainly an oft-discussed subject, but the fact is that, imho, the state of software development has not really improved over the years; it is probably worse now than it was a few years ago due to more complexity, tighter schedules, etc. The software crisis is still with us.

The fact that programmable hardware is going the same way can be an opportunity to discover new causes, even if solutions are far off. There is also the fact that hardware engineers who knew how to do it right the first time now meet, with programmable components, the same sort of problems as software developers do. They could certainly understand what has changed in their process.
> Consider also that a company designing a microcontroller is designing
> a general-purpose device that must be precisely characterized in order
> to be saleable. Would you buy a micro if the datasheet said that every
> parameter was TBD? A product that uses the microcontroller, on the
> other hand, is going to have a limited range of use and will not, as a
> rule, be as completely characterized - in fact it's unlikely to have
> any characterization data at all outside the intended use cases.
This can indeed be a problem if the product is later used for other applications, as reusable components are. The point is how to produce truly reusable components, fully characterized and free of side effects. Another point is deciding when it is worth investing in producing reusable components.
Rene wrote:
> You are right about that, though it will only be possible to get a 100%
> guarantee if you can manage to simulate every possible state in which
> something can be. That may be possible with some simple combinatorial
> logic circuit, but it rapidly becomes impossible as your design grows
> more complex. And testing only manageable units is not always good
> enough either. Suppose one of them has an error which does not show up
> when you examine it, because it only alters some bit somewhere else in
> memory (and all "replies" you get from your unit when testing it are
> what you expect them to be). However, when you run the system as a
> whole, that bit might be part of a variable in another manageable unit
> which also turned out to be OK when you tested it in isolation. So you
> would have to test the units separately and together, creating every
> possible state that your system might ever stumble upon, and that is
> impossible.
This is indeed impossible to manage unless you find a way to fully test "manageable units" and make sure that they are free of side effects. Needless to say I don't have the solution.
> I do not know if it is true in all countries, but there are many
> countries in which certain electronic devices, like anaesthesiological
> equipment used to keep a patient asleep during an operation, are not
> allowed to have software in the circuits that control vital functions.
Some industries are reluctant to admit software into their safety products; railway signalling, for instance, still relies on relays.
> This does not rule out the fact that many errors remain in software
> because no "as-decent-as-possible" testing is done.
Spehro Pefhany wrote:

> I think it's mostly a matter of costs (including time to market). If
> you're willing to do a spacecraft level of documentation and design,
> then it can be pretty much perfect the first time it escapes out the
> door, but it will cost orders of magnitude more money and take many
> times longer.
>
> In particular, the cost of doing a very high quality design seems to
> increase very rapidly with complexity - maybe the square or the cube
> of complexity. Something 10x-50x more complex than a program that
> could be created bug-free in a few months by a single person might
> take 100-500 people and 3-10 years.
It certainly requires time to do a high quality design, but there are (must be) ways of separating parts so that the cost is less than for the design as a single piece.
