EmbeddedRelated.com
Forums

Safety-Critical Software Design

Started by Randy Yates July 17, 2016
Randy Yates wrote:
> Hi Everyone,
>
> Are there any formal requirements, guidelines, or recommendations for
> software which will run in a safety-critical environment in the United
> States or world-wide?
>
> By a "safety-critical" environment I mean an environment in which a
> failure can lead to loss of, or serious injury to, human life. For
> example, automobile navigation systems, medical devices, lasers, etc.
There are, and they are all different. For medical devices there is IEC 62304; it's a holistic process standard.
> I know there is the MISRA association and MISRA C. I am wondering if
> there are others.
One such approach is outlined in books by Bruce Powel Douglass. It has a bit of an "executable UML" aroma, but the principles apply. "Safety critical" really expands to fill the entire process sphere of the project; it goes well beyond tool and paradigm selection.
> My gut and experience tells me there should NEVER be software DIRECTLY
> controlling signals of devices that might lead to human injury. Rather,
> such devices should be controlled by discrete hardware, perhaps as
> complex as an FPGA. There is always going to be a chance that a real
> processor that, e.g., controls the enable signal to a laser is going to
> crash with the signal enabled.
>
> I realize that hardware-only control is subject to failures as well,
> but they wouldn't seem to be nearly as likely as a coding failure.
While I'm sympathetic to this sentiment, I think it's rather an odd one. I've fixed too many hardware and FPGA problems in software to be too sympathetic.
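The usual compromise between the two positions is defense in depth: software never sits directly in the enable path, but an external hardware watchdog does, and firmware pets the watchdog only while its safety invariants hold. If the processor crashes with the enable asserted, the pets stop, the watchdog times out, and the hardware drops the line. A minimal host-testable sketch of that pattern; all names and thresholds here are hypothetical:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical plant state; the thresholds below are illustrative only. */
typedef struct {
    bool     interlock_closed;  /* safety cover/door interlock */
    int      temp_c;            /* coolant temperature, degrees C */
    uint32_t last_sensor_ms;    /* timestamp of last good sensor reading */
    uint32_t now_ms;            /* current time */
} SystemState;

/* Every invariant must hold; stale sensor data counts as unsafe. */
bool safe_to_run(const SystemState *s)
{
    return s->interlock_closed
        && s->temp_c < 60
        && (s->now_ms - s->last_sensor_ms) < 100u;
}

/* Called from the main loop: pet the external watchdog only when the
 * invariants hold. If the CPU crashes or an invariant fails, the pets
 * stop, the watchdog times out, and the hardware de-asserts the
 * laser-enable line with no software in that path. */
void service_watchdog(const SystemState *s, void (*pet_watchdog)(void))
{
    if (safe_to_run(s))
        pet_watchdog();   /* deliberately the only call site */
}
```

The point of funneling every pet through one call site is that a review can verify, by inspection, the complete set of conditions under which the watchdog stays fed.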
> Let me get even more specific: would it be acceptable to use a processor
> running linux in such an application? My gut reaction is "Not only no,
> but HELL no," but I'm not sure if I'm being overly cautious.
I'd rather use Linux as a comms concentrator fronting a set of actual microcontrollers with much smaller code bases - hopefully code bases verging on being provably correct (for some weak version of "proof"). Hopefully you know what I mean by that.

This being said, Tiny Linux seems pretty stable. I've seen (and coded for) platforms using Linux for industrial control which had the aroma of safety criticality, but the "safety" bits were often as you say - discrete signals for emergency stop and the like. All the observed failures were systems/hardware failures, not software (after an appropriate field test and significant pre-field testing). Not that there were no bugs, just that the bugs were not critical.

One warning: the operators were trained and employed by the same firm that developed the software.
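Part of what makes the "Linux as concentrator, microcontrollers doing the work" split defensible is that the microcontroller treats every frame from the Linux box as untrusted and falls back to a safe state on any doubt. A tiny sketch of that idea; the 3-byte frame layout and XOR checksum are made up for illustration (a real link would use a proper CRC):

```c
#include <stdint.h>
#include <stddef.h>

enum {
    CMD_SAFE_STOP = 0,   /* the default: stop and de-energize */
    CMD_MAX       = 16   /* highest command id this firmware knows */
};

/* XOR checksum stands in for a real CRC in this sketch. */
static uint8_t frame_checksum(const uint8_t *buf, size_t n)
{
    uint8_t c = 0;
    for (size_t i = 0; i < n; i++)
        c ^= buf[i];
    return c;
}

/* Frame layout (hypothetical): [cmd][arg][checksum].
 * Anything malformed, corrupted, or unknown decodes to CMD_SAFE_STOP,
 * so a flaky comms link can only ever make the machine stop. */
uint8_t decode_frame(const uint8_t *buf, size_t n)
{
    if (n != 3)
        return CMD_SAFE_STOP;
    if (frame_checksum(buf, 2) != buf[2])
        return CMD_SAFE_STOP;
    if (buf[0] > CMD_MAX)
        return CMD_SAFE_STOP;
    return buf[0];
}
```

The design choice worth noting: the decoder has no error codes, only a single safe default, so there is no "unhandled error" path for a crashed concentrator to exploit.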
> Any guidance, suggestions, comments, etc., would be appreciated.
-- Les Cargill
Don Y wrote:
> On 7/17/2016 3:09 AM, Don Y wrote:
> <snip>

It's really important to write "appliances" that run outside the DUT to verify all the wiring and other assumed-correct things. Yes, you can do it by hand, but it saves a lot of money to spend the time to do this.

-- Les Cargill
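That wiring "appliance" idea can be mechanized as a loopback sweep: drive each output, read back the pin it is supposed to reach, and flag anything that doesn't follow. A small sketch with injected drive/sense callbacks, so the same check runs against real GPIO on the bench or a host-side simulation; every name here is hypothetical:

```c
#include <stdbool.h>
#include <stddef.h>

/* Callbacks let the same sweep run on target hardware or on a host sim. */
typedef void (*drive_fn)(int pin, bool level);
typedef bool (*sense_fn)(int pin);

/* One expected connection from an output pin to an input pin. */
typedef struct { int out; int in; } WirePair;

/* A pair passes only if the input follows the output both high and low. */
static bool check_pair(WirePair p, drive_fn drive, sense_fn sense)
{
    drive(p.out, true);
    if (!sense(p.in)) return false;
    drive(p.out, false);
    return !sense(p.in);
}

/* Sweep the whole table; returns the number of failing pairs. */
size_t check_wiring(const WirePair *pairs, size_t n,
                    drive_fn drive, sense_fn sense)
{
    size_t failures = 0;
    for (size_t i = 0; i < n; i++)
        if (!check_pair(pairs[i], drive, sense))
            failures++;
    return failures;
}

/* --- host-side simulation of a loopback harness, for illustration --- */
enum { NPINS = 8 };
static bool sim_levels[NPINS];
/* Pretend wiring: output pin i is physically connected to input sim_map[i]. */
static int sim_map[NPINS] = { 1, 0, 3, 2, 5, 4, 7, 6 };

static void sim_drive(int pin, bool level) { sim_levels[sim_map[pin]] = level; }
static bool sim_sense(int pin)             { return sim_levels[pin]; }
```

Run against the real harness, a nonzero return gives you a count of miswired pairs before any firmware is trusted on the DUT.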