
The three laws of safe embedded systems

Michael J. Pont, November 12, 2015

This short article is part of an ongoing series in which I aim to explore some techniques that may be useful for developers and organisations that are beginning their first safety-related embedded project.

In the last two weeks, I’ve had the opportunity to discuss the contents of my previous article on this site with a group of very smart and enthusiastic engineers in Cairo (Egypt). As part of this discussion, it has become clear that I should add a few more details to explain the work required to create a first prototype of a safety-related embedded system: this information will appear in a future article (I will aim to do this before the end of 2015).

In the present article, I want to take a slightly different perspective on Step 3 from my previous post:


          3. Document the main hazards / threats / risks

In summary, the goal of “Step 3” is to: [i] consider the ways in which your system could fail; and [ii] consider what impact such a failure might have, not least (in a safety-critical system) on the users of the system, or on those in the vicinity. The goal of the rest of the development process is then to work out how you can reduce the risk of system failure to a level that is acceptable for the given system.
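For teams that like to see things in a more concrete form, the sketch below shows (in C) one way in which the output of this step might be captured. The record layout, field names and example entry are purely illustrative assumptions on my part; they are not a substitute for the hazard-analysis documentation that a safety standard will require.

/* Minimal sketch of a hazard-log entry: how the system could fail,
   what the impact would be, and how the risk will be reduced.
   All names and fields here are illustrative assumptions. */

#include <stddef.h>
#include <stdio.h>

typedef enum { SEV_MINOR, SEV_SERIOUS, SEV_FATAL } severity_t;
typedef enum { LIK_RARE, LIK_OCCASIONAL, LIK_FREQUENT } likelihood_t;

typedef struct {
    const char  *failure_mode;   /* how the system could fail             */
    const char  *impact;         /* who or what is harmed if it does      */
    severity_t   severity;       /* worst credible outcome                */
    likelihood_t likelihood;     /* estimated frequency before mitigation */
    const char  *mitigation;     /* how the risk will be reduced          */
} hazard_record_t;

static const hazard_record_t hazard_log[] = {
    { "Steering lock engages while the vehicle is moving",
      "Driver loses control; occupants and bystanders at risk",
      SEV_FATAL, LIK_RARE,
      "Interlock: lock command ignored unless the vehicle is stationary" },
};

int main(void)
{
    for (size_t i = 0; i < sizeof hazard_log / sizeof hazard_log[0]; i++) {
        printf("Hazard: %s\n  Impact: %s\n  Mitigation: %s\n",
               hazard_log[i].failure_mode,
               hazard_log[i].impact,
               hazard_log[i].mitigation);
    }
    return 0;
}

In practice, each entry would also be traced through to the design decisions and tests that address it.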

Step 3 is clearly an essential part of the development process.  Occasionally, during early design discussions, it appears that the developers may be overlooking potential failure modes for the systems that they are analysing.  In these circumstances, I have sometimes found it useful to take a step back and look at the system in a different way, by applying what I call the “Asimov Test” (or “The Three Laws of Safe Embedded Systems”).

As you may well have guessed by now, my test is based on the “Three Laws of Robotics” that the author Isaac Asimov introduced in 1942 (that’s right – the laws were published during the Second World War).

Asimov’s laws are as follows:

  1. A robot must not injure a human being, or - through inaction - allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws.

[These laws first appeared in the short story “Runaround” in 1942.  If you want to read this story, the best starting point is probably the book “I, Robot”: this collects several of Asimov’s robot stories together and presents them in a logical sequence.  Alternatively - if you must - you could start with the film version of “I, Robot” from 2004.]

Borrowing ruthlessly from these laws, a system that passes my Asimov Test must meet the following conditions:

  1. An embedded system must not injure a human being, or - through inaction - allow a human being to come to harm.
  2. An embedded system must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. An embedded system must protect its own existence, as long as such protection does not conflict with the First or Second Laws.
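Before we look at an example, here is a minimal sketch (in C) of how the strict precedence of these laws might be expressed in code. The structure and function names are my own assumptions, and - to keep things simple - the “through inaction” clause of the First Law is not modelled.

/* Check a proposed action against the three laws, in strict priority
   order.  The flags and names are illustrative assumptions. */

#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool may_harm_human;    /* First Law:  would the action risk harming a person? */
    bool ordered_by_human;  /* Second Law: was the action commanded by a user?     */
    bool protects_system;   /* Third Law:  does the action protect the system?     */
} proposed_action_t;

static bool action_permitted(const proposed_action_t *a)
{
    if (a->may_harm_human) {
        return false;            /* The First Law always takes precedence.   */
    }
    if (a->ordered_by_human) {
        return true;             /* Second Law: obey, since the First Law
                                    is satisfied.                            */
    }
    return a->protects_system;   /* Third Law: self-protection is allowed
                                    only when nothing above forbids it.      */
}

int main(void)
{
    /* A user command that would put a person at risk must be refused. */
    const proposed_action_t risky_order = { true, true, false };
    printf("Risky order permitted? %s\n",
           action_permitted(&risky_order) ? "yes" : "no");
    return 0;
}

The key point is simply that the checks are made in order, so a lower-priority concern can never override a higher-priority one.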

If we apply these laws to an autonomous vehicle, then we quickly reach a logical conclusion that may be unpalatable for some vehicle owners (and vehicle manufacturers).

Suppose that you are the only passenger in an autonomous vehicle (to be clear: everyone is a passenger in such a vehicle - there is no human "driver" in the current sense of the term).

Further suppose that a group of three people rush into the road in front of your vehicle.

I will assume that your vehicle has only two options: [i] continue ahead and kill the people in the road; [ii] swerve into the edge of the road, with the risk that you - the vehicle occupant - will be severely injured or killed.

There is - of course - only one logical outcome, and that is to swerve the vehicle (because this minimises the risk to human beings).
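In code form, the reasoning above amounts to nothing more than a comparison of the expected harm to human beings. The sketch below is a deliberately simplified illustration: the harm estimates are assumed inputs, not something a real vehicle could compute this easily.

/* Choose the manoeuvre with the lower expected harm to human beings.
   The numbers used here are assumed inputs for the purpose of the
   example only. */

#include <stdio.h>

typedef struct {
    const char *name;
    double      expected_human_harm;   /* e.g. expected number of people
                                          severely injured or killed      */
} manoeuvre_t;

static const manoeuvre_t *safer_option(const manoeuvre_t *a,
                                       const manoeuvre_t *b)
{
    /* First Law: minimise harm to humans - the occupant included. */
    return (a->expected_human_harm <= b->expected_human_harm) ? a : b;
}

int main(void)
{
    const manoeuvre_t continue_ahead = { "continue ahead", 3.0 };  /* three pedestrians */
    const manoeuvre_t swerve         = { "swerve",         1.0 };  /* one occupant      */

    printf("Chosen manoeuvre: %s\n",
           safer_option(&continue_ahead, &swerve)->name);
    return 0;
}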

Applying these three laws may be particularly relevant as embedded systems become increasingly autonomous, but - in my experience - the questions raised by these laws can help to provoke a useful discussion during the early stages of the development of many safety-related and safety-critical systems.

Let's consider another (simpler) automotive example.  

Suppose that we have a "security bolt" that is inserted into the vehicle steering mechanism when the vehicle is stationary, in order to prevent theft.

This is a useful security feature.  Unfortunately, it may also have safety implications, since locking the steering while the vehicle is in motion may have very serious consequences.

If we apply our "three laws", then the embedded system should never accept the command to engage the steering lock unless it can be demonstrated that doing so will not introduce a risk of harm to the users of the vehicle.
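A minimal sketch of this interlock is shown below. The speed threshold, signal names and the idea of also checking the ignition state are my own assumptions, not a description of any particular vehicle.

/* Steering-lock interlock: the lock command (Second Law - obey the
   user) is accepted only when it cannot put the occupants at risk
   (First Law).  Threshold and signal names are illustrative. */

#include <stdbool.h>
#include <stdio.h>

#define STATIONARY_SPEED_KPH  0.5   /* treat anything below this as "parked" */

typedef struct {
    double speed_kph;     /* current road speed, from a trusted sensor */
    bool   ignition_on;   /* drive system state                        */
} vehicle_state_t;

static bool lock_command_permitted(const vehicle_state_t *v)
{
    /* First Law check: never engage the steering lock while the
       vehicle could still be in motion. */
    if (v->ignition_on || v->speed_kph >= STATIONARY_SPEED_KPH) {
        return false;
    }
    /* Second Law: with no risk of harm, obey the user's lock request. */
    return true;
}

int main(void)
{
    const vehicle_state_t moving = { 50.0, true  };
    const vehicle_state_t parked = {  0.0, false };

    printf("Lock while moving: %s\n",
           lock_command_permitted(&moving) ? "accepted" : "refused");
    printf("Lock while parked: %s\n",
           lock_command_permitted(&parked) ? "accepted" : "refused");
    return 0;
}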

Overall, considering designs in terms of these "three laws" is proposed here simply as one (slightly more lighthearted) way of encouraging a team that is new to the development of safety-related systems to consider some of the potential risks and hazards.




Comment by jkvasan, April 15, 2016
Michael,
Nice analogy drawn. This would apply to medical devices even more, say, in the case of an Automatic External Defibrillator. These laws always form part of the Risk Management Strategy.
