EmbeddedRelated.com
Forums

I don't use an RTOS because...

Started by Unknown January 12, 2005
"Jonathan Kirwan" <jkirwan@easystreet.com> wrote in message
news:pc9ju0peggnab4mmi09f66b83u4vcbdvd4@4ax.com...
> Like many things, experience helps in judging when, and when not, to use
> an O/S and what form it should take. It depends.
Nice post. As I've said a number of times in other similar threads: what we're dealing with here is tools (and skills) for managing complexity. So long as it works, and you get to sleep nights, I don't care if it's an RTOS or a clockwork tomato.

Steve
http://www.fivetrees.com
Tim Wescott wrote:

> I don't -- but all the SQA folks and software developers who I know
> who've ever worked on really life-critical stuff (fly by wire and
> medical) tend to view RTOS's with deep suspicion bordering on paranoia.
If what needs to be done requires response to asynchronous events that contend for resources and have hard real-time deadlines, the software developers I know who've ever worked on really life-critical stuff (safety of flight or nuclear reactor control) tend to view *not* using one particular RTOS (QNX) with a deep suspicion bordering on paranoia. If what needs to be done can be done with a loop, they tend to view using any RTOS - including QNX - with a deep suspicion bordering on paranoia.

The general feeling is that making the wrong decision when deciding whether to use an RTOS is an important diagnostic of the system engineer; anyone stupid enough to get this part wrong will get many other parts wrong as well.

(This is not to say that QNX is the *only* good RTOS, but rather that QNX is *a* good RTOS, and is the one that the people I have worked with are most familiar with.)

-- 
Guy Macon <http://www.guymacon.com/>
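The "can be done with a loop" case is the classic superloop. A minimal sketch of the idea, with hypothetical names (`adc_ready`, `process_adc`, etc.) purely for illustration:

```c
#include <stdbool.h>

/* Hypothetical event flags, normally set from interrupt handlers. */
volatile bool adc_ready;
volatile bool uart_rx_ready;

/* Hypothetical handlers; each clears the flag it services. */
void process_adc(void)  { adc_ready = false; }
void process_uart(void) { uart_rx_ready = false; }

/* One pass of the superloop: poll each event flag in turn.
 * In a real system this runs inside for(;;) { ... } -- no scheduler,
 * no tasks, no priority inversion to reason about. */
void superloop_step(void)
{
    if (adc_ready)
        process_adc();
    if (uart_rx_ready)
        process_uart();
}
```

When every handler is short relative to the deadlines, this structure needs no RTOS at all, which is exactly the point being made above.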
Paul E. Bennett wrote:
> joep wrote:
>
>> Distributing the problem over more processors simplifies each
>> controller, but you still have to make the total system work. It seems
>> you're just moving the complexity somewhere else (interface
>> specification) and not really making the total system any less complex.
>
> You obviously haven't considered the same system from two different design
> strategies the way I have. I can assure you that, in a system whose
> requirements specification is very complex, there is a great deal of
> benefit in the multiple processor approach.
>
> When I was developing a very large robotic system I explored the "what-if"
> of using a multitude of processors instead of a dual-processor system. I
> used a Fault Tree Analysis software package that performed the
> probabilistic failure-rate calculation for the total system. The
> dual-processor approach used to take a very long time to calculate its
> probabilistic failure rate due to the common-mode calculations that were
> required. With the greater number of processors, distributed amongst
> specific functions and using a decent interface technique, the calculation
> took much less time (1 day instead of 5 days).
>
> Knowing this from that work, I have looked at the whole-system
> architecture aspect for multiple-processor systems. A processor per
> actuator, a group controller and a central controller in a multi-layer
> system always seems to simplify the whole system and, although the
> functionality of the whole still seems complicated, the understandability
> of the total system is very much eased. By using multiple simple
> processors, the overall system functionality is factored into much simpler
> sub-functions that are easier to fully understand, easier to test for
> compliance and easier to maintain.
...for the same reasons that Object Oriented Programs are easier to fully understand, easier to test for compliance and easier to maintain. Alas, just like OOP, it is possible for a sufficiently clueless engineer/programmer to make a bad product using good tools. That's no reason not to have good tools, though.

-- 
Guy Macon <http://www.guymacon.com/>
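Paul's fault-tree numbers above turn on common-mode terms. As a rough illustration (this toy calculation is mine, not from the FTA package he mentions): when nodes fail *independently*, the failure probability of a series system is a one-line product, and common-mode coupling is precisely what breaks this closed form and blows up the analysis.

```c
/* Unavailability of n independent nodes in series: the system fails
 * if any node fails, so
 *     P(system fails) = 1 - prod_i (1 - p_i).
 * Independence is exactly what common-mode failures violate; with
 * coupling, this product form no longer applies and the fault-tree
 * evaluation becomes far more expensive -- Paul's point above. */
double series_failure_prob(const double *p, int n)
{
    double all_ok = 1.0;
    for (int i = 0; i < n; i++)
        all_ok *= 1.0 - p[i];
    return 1.0 - all_ok;
}
```

For example, two independent nodes at 1% each give about 1.99% rather than 2%, and adding more nodes stays trivially cheap to evaluate as long as independence holds.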
joep wrote:
> [Paul E. Bennett's post, quoted in full above, snipped]

I guess I don't understand why you couldn't create "virtual processors" (tasks) within a single processor using the same protocol/interface as your multiple-processor system. The virtual processors would have the same functionality as your multiple simple processors, and you have removed the high-failure-rate physical connections. Also, the practical matters of a system upgrade (multiple downloads, tracking software version compatibility between processors), large development toolsets required (if using different types of processors), obsolescence headaches, multiple environmental issues and complex test simulations are also big negatives.

Now, I have worked with/designed distributed systems, but they were distributed because a single processor couldn't handle the throughput, or a chunk of the problem was already solved by someone else and I could buy it off the shelf. If a single processor can do it all, it's a no-brainer for me which architecture to choose. However, like you said, if you do go with a distributed system, it helps to make one processor very smart and all the others very very dumb (dictatorship is a good model to follow).
Ah! Another post to comp.arch.embedded! Let's see what this one has to say... (opens post, whistling a happy tune)

********** WHAT THE.....**BAM!!!** **********

(SFX: sound of parts falling off of a recently-crashed automobile.)

... Honest, officer, I was cruising along at the speed limit when I ran into this giant block of text right in the middle of the newsgroup! No paragraphs, no whitespace, just a dense square block of text... Yes, I tried to stop, but the information superhighway was slippery. Someone had filled the road with this huge greasy sheet of quoted text full of random ">>>" and ">" and ">>" strings. I think I saw a .sig in there as well.

No, I didn't get the license number of the fellow who spilled the toxic post, but I remember that he was driving this old SUV - maybe a joep? - that was making an annoying "www...www...www...www...www..." sound, and on the side of it I saw the words "http://groups.google.com" spray painted over a quite attractive but faded "DejaNews" sign. Officer, please catch him before he kills another thread!

-- 
Guy Macon <http://www.guymacon.com/>

: ) : ) : ) : ) : ) : ) : ) : ) : )
joep wrote:

> I guess I don't understand why you couldn't create "virtual processors"
> (tasks) within a single processor using the same protocol/interface as
> your multiple processor system. The virtual processors would have the
> same functionality as your multiple simple processors and you have
> removed the high failure rate physical connections.
Overall system resilience can be achieved much more easily despite the number of physical connections. I also indicated that the system would have been (at minimum) a dual-processor system in any case: one system for control, the other running a permit-to-operate. Even so, with the calculations performed on the system integrity, I favoured the many-processors design because it was easier to prove that it met the integrity requirements. This was due to a lack of common-mode issues between the various tasks that would have plagued the development within a single (or just two) processor(s).
> Also the practical matters of a system upgrade (multiple downloads,
> tracking software version compatibility between processors), large
> development toolsets required (if using different types of processors),
> obsolescence headaches, multiple environmental issues, complex test
> simulations are also big negatives.
Not really that much of a problem for the kind of systems I deal with (mostly robotics or automation for nuclear energy and transportation systems). I have one development environment and a code library that I can use on a wide range of processors (we are talking real re-use here). Also note that many of my systems have no need for upgrade over time, as they are specified for specific tasks over long periods of expected operation. I can build the same type of node with several different processors and its functionality would be exactly the same in each case.
> Now I have worked with/designed distributed systems but they were
> distributed because a single processor couldn't handle the throughput
> or a chunk of the problem was already solved by someone else and I
> could buy it off the shelf. If a single processor can do it all it's a
> no-brainer for me which architecture to choose. However, like you said,
> if you do go with a distributed system, it helps to make one processor
> very smart and all the others very very dumb (dictatorship is a good
> model to follow).
I expect that you could also find a considerable difference in costs between the two approaches. To run the sort of control I deal with, at the Integrity Levels demanded of my systems, you would probably consider processors with high MIPS ratings, high cost per chip, masses of fast memory and some RTOS that you do not have the code for. With a requirement for 100% coverage testing you would be tied up for ages proving your system is safe.

I, on the other hand, count how many actuators there are, note what type they are, and can see my way to using simple, cheap microcontrollers that are fully committed to really looking after the needs of the actuator in meeting the demands of the system. Occasionally I may use two processors per actuator (one for control and one for comms). All such nodes also perform some limited data-logging (for diagnostic purposes) and have a range of self-checking features that signal the node's health status up to the group controller.

I have all the source code for my systems and I can provide full certification for it. Testing the individual nodes is quite simple, and once installed the system will usually operate for its required lifetime (25 years+) without hidden faults. (Actually, to date only one of my systems has been installed long enough to reach decommissioning, early in its 26th year; the others are still going strong, with the longest-lived current system now in its 20th year, no upgrade having been necessary.)

-- 
********************************************************************
Paul E. Bennett ....................<email://peb@a...>
Forth based HIDECS Consultancy .....<http://www.amleth.demon.co.uk/>
Mob: +44 (0)7811-639972 .........NOW AVAILABLE:- HIDECS COURSE......
Tel: +44 (0)1235-811095 .... see http://www.feabhas.com for details.
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
Guy Macon <http://www.guymacon.com/> wrote:

> ...for the same reasons that Object Oriented Programs are easier to
> fully understand, easier to test for compliance and easier to maintain.
> Alas, just like OOP, it is possible for a sufficiently clueless
> engineer/programmer to make a bad product using good tools. That's no
> reason not to have good tools, though.
Absolutely!!

-- 
Paul E. Bennett
Ian Bell wrote:
> Michael N. Moran wrote:
>
>> If all we are talking about here is the RTOS, then the understanding
>> of tasks/threads, semaphores, mutexes, and perhaps some other
>> inter-thread communications mechanisms is all that is required.
>
> If only it were that simple.
It is *that* "simple."

-- 
Michael N. Moran           (h) 770 516 7918
5009 Old Field Ct.         (c) 678 521 5460
Kennesaw, GA, USA 30144    http://mnmoran.org

"So often times it happens, that we live our lives in chains
 and we never even know we have the key."
The Eagles, "Already Gone"

The Beatles were wrong: 1 & 1 & 1 is 1
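The list of primitives Michael gives really is short. A minimal mutex example, using POSIX threads here as a stand-in for RTOS task/mutex APIs (the concepts -- task, mutex-guarded shared data -- map one-to-one onto most RTOSes):

```c
#include <pthread.h>

/* Shared data guarded by a mutex; two tasks increment it concurrently. */
pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;
long counter;

void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_lock);
        counter++;                      /* critical section */
        pthread_mutex_unlock(&counter_lock);
    }
    return NULL;
}
```

Without the lock the two tasks would race on `counter`; with it, the result is deterministic. That, plus queues/semaphores for signalling, covers most of what an RTOS user needs to understand.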
Paul E. Bennett wrote:

(Concerning multiple processor designs) 

> I expect that you could also find a considerable difference in costs
> between the two approaches.
Like everything else, it depends. :) When I was working on toys that shipped 100,000 units per day, a penny per unit was huge. When I worked on a multi-million-dollar DVD-RAM production line, the cost of the computers was two orders of magnitude lower than the cost of the programming.
Guy Macon wrote:

What's your point after this totally useless post?
Paul E. Bennett wrote:
> Overall system resilience can be achieved much more easily despite the
> number of physical connections. I also indicated that the system would
> have been (at minimum) a dual-processor system in any case: one system
> for control, the other running a permit to operate. Even so, with the
> calculations performed on the system integrity, I favoured the many
> processors design because it was easier to prove that it met the
> integrity requirements. This was due to a lack of common-mode issues
> between the various tasks that would have plagued the development
> within a single (or just two) processor(s).
I think the right solution (if one can even say there is a "right" one) is highly system-dependent. I used to favour single-processor designs because of (hardware) simplicity. It was a "let the software guys solve it" attitude, even though I also did software. After some (several :-) ) years undergoing the pains of such an approach, and after attending some of Jack Ganssle's lectures, I was struck by how foolish I had been in choosing it (because it's quite obvious when one gives it some thought).

Just to cite an example: I have a design in which a rotary encoder is handled by the main processor. Putting a small microcontroller there would make things much easier and more reliable. The main processor also handles (proprietary) serial communication with some acquisition modules; another low-end microcontroller would do the magic and release the main processor for more appropriate duties, not to mention make the software design much easier. These are simple examples.

I have seen this multiprocessor approach proposed more often lately. I am not sure if that is because I am paying more attention to the subject or because of a paradigm change. I am in the process of designing a new architecture for new products, and that is the reason why I posted these questions. The posts so far have been very interesting and enlightening. One thing at least is clear to me right now: there is no "one size fits all" solution to the problem.

Regards,
Elder.
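Elder's rotary-encoder example is a good candidate for offloading precisely because quadrature decode is a tiny, timing-sensitive state machine. A standard table-driven decoder of the sort a small dedicated micro would run (illustrative only, not from Elder's actual design):

```c
/* Classic quadrature-encoder transition table: index with
 * (previous AB << 2) | current AB; entries are -1, 0 or +1 counts,
 * with 0 for illegal (skipped-state) transitions. Stepping through the
 * Gray sequence 00 -> 01 -> 11 -> 10 counts one way; the reverse
 * sequence counts the other. Polling this fast enough is exactly the
 * kind of chore worth giving to a dedicated small microcontroller. */
static const int quad_table[16] = {
     0, -1, +1,  0,
    +1,  0,  0, -1,
    -1,  0,  0, +1,
     0, +1, -1,  0
};

/* Return the count delta for one sampled transition of the AB inputs. */
int quad_step(int prev_ab, int curr_ab)
{
    return quad_table[((prev_ab & 3) << 2) | (curr_ab & 3)];
}
```

On the dedicated micro this runs in a pin-change interrupt or a tight poll, and the main processor just reads an accumulated position count over the serial link.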