I don't use an RTOS because...

Started by Unknown January 12, 2005
Michael N. Moran wrote:

> Ian Bell wrote:
>> 42Bastian Schick wrote:
>>> So, do you write your C libraries yourself too?
>>
>> 99.999% of C library functions are irrelevant for small embedded systems,
>> so what's to write?
>
> As usual, the arguments come down to the "embedded system"
> qualifiers. Here the qualifier is small. However, there are
> *many* of us who work with embedded systems that are moderate
> to quite large.
>
> If the complexity is large, then an RTOS helps to
> contain and manage that complexity, as does the
> careful selection of libraries.
As the one who used the qualifier (deliberately), I would be inclined to use one here too, and say 'an RTOS *can* help contain and manage complexity, as *can* the careful selection of libraries'.

Ian (Mr. Qualifier ;-)

--
Ian Bell
Roberto Waltman wrote:

>>> I've used them extensively. They're great when you're working on
>>> projects that need more than one software engineer, and in an
>>> environment where you're using a lot of library code.
>>
>> That is an interesting comment. I have never heard of an RTOS having
>> properties that make multi-person development easier. It is not clear
>> from the post, but did you mean an RTOS in general or a pre-emptive one
>> in particular?
>>
>> Ian
>
> I agree with Tim - any OS, not necessarily an RTOS. For the same
> reasons that another poster recommended using multiple processors:
> divide and conquer.
>
> Splitting a large application into separate processes that communicate
> with each other using a limited and well-specified set of functions
> (I do not mean "functions" as in C functions) makes it simpler to
> develop them, to assign them to separate teams, to test and verify
> them in isolation, etc.
>
> Of course you can do the same without an OS. But having one generally
> provides most (or at least some) of the common layer as OS calls, IPC
> mechanisms, etc. Many projects that need more functionality than what
> is provided by the bare OS would add it in a way that still looks like
> common OS functions (like a message-passing library for IPC, etc.)
>
> As others have pointed out, there are embedded systems and there are
> embedded systems. It probably would not make any sense to use an RTOS
> in, let's say, a TV remote controller. But the kind of systems I am
> currently working on (32-bit processors, 128-256 Mbyte RAM - all used
> up - multiple hardware interfaces to the external world and other
> system components, support for multiple communication protocols, 20+
> person development teams, etc.) would be much more difficult to
> develop without the foundations provided by an OS.
>
> Roberto Waltman.
>
> [ Please reply to the group,
> return address is invalid ]
What he said :).

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
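As a concrete illustration of the "limited and well-specified set of functions" Roberto describes, here is a minimal sketch of such an IPC boundary in C. The message types and function names are invented for the example, not taken from any particular OS:

/* Hedged sketch of a narrow IPC boundary between subsystems, of the
   kind Roberto describes. Message types and names are hypothetical. */
#include <stdint.h>

/* The entire contract between teams: a closed set of message types... */
enum msg_type {
    MSG_SENSOR_READING = 1,
    MSG_SET_OUTPUT     = 2,
    MSG_FAULT_REPORT   = 3
};

/* ...and one wire format. Each team codes against this header, not
   against the other team's internals, so modules can be built and
   tested in isolation. */
struct msg {
    uint16_t type;        /* one of enum msg_type          */
    uint16_t length;      /* bytes of payload that follow  */
    uint8_t  payload[32];
};

/* The full surface area of the IPC layer, however it is implemented
   underneath (OS queue, shared memory, or a serial link in a test rig). */
int ipc_send(int dest_task, const struct msg *m);
int ipc_receive(int my_task, struct msg *m, uint32_t timeout_ms);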
Michael N. Moran wrote:

> Tim Wescott wrote:
>
>> The preemptive multitasker is certainly harder to set up and will
>> floor you with more really subtle bugs,
>
> Like so many things in software, this depends entirely upon
> the team's familiarity with RTOS concepts. Saying that
> you *will* encounter subtle bugs just because you use an
> RTOS is quite misleading.
Weeellll, perhaps. But when I've had problems with RTOS bugs (or, much more usually, bugs in the way we've used the RTOS) they've been real head-scratchers, and were not at all obvious to anyone until well after we found them. Having been at this game for nearly 15 years now, I've gotten pretty good at avoiding them, however.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
Michael N. Moran wrote:

> Ian Bell wrote:
>
>> But it is worth noting that there is a significant lowering of
>> reliability as soon as you move to a pre-emptive system.
>
> What a bunch of FUD.
> I assure you that for an appropriate application
> I can write a more maintainable and highly reliable
> application with a pre-emptive scheduler.
>
> Do you have any facts to back up that claim?
I don't -- but all the SQA folks and software developers I know who've ever worked on really life-critical stuff (fly-by-wire and medical) tend to view RTOSes with deep suspicion bordering on paranoia. Basically it is _much_ easier to fully analyze the timing and inter-element interactions of a system based on a task loop than one based on an RTOS.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
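For readers who haven't built one, here is a minimal sketch of the kind of task loop Tim means. The task names and tick source are hypothetical; the point is that the timing analysis reduces to adding up run-to-completion times:

/* Hedged sketch of a cooperative task loop; names are illustrative. */
#include <stdbool.h>

static volatile bool tick_flag;  /* set true by a periodic timer ISR (not shown) */

static void read_sensors(void)   { /* poll inputs */ }
static void run_control(void)    { /* compute outputs */ }
static void update_outputs(void) { /* drive hardware */ }

int main(void)
{
    for (;;) {
        while (!tick_flag)
            ;                    /* wait for the next tick */
        tick_flag = false;

        /* Every task runs to completion in a fixed order, so the
           worst-case loop time is simply the sum of the worst-case
           task times: nothing preempts, and there is no lock or
           priority interaction to analyze. */
        read_sensors();
        run_control();
        update_outputs();
    }
}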
joep wrote:

> Distributing the problem over more processors simplifies each
> controller, but you still have to make the total system work. It seems
> you're just moving the complexity somewhere else (interface
> specification) and not really making the total system any less complex.
You obviously haven't considered the same system from two different design strategies the way I have. I can assure you that, in a system whose requirements specification is very complex, there is a great deal of benefit in the multiple-processor approach.

When I was developing a very large robotic system I explored the "what-if" of using a multitude of processors instead of a dual-processor system. I used a Fault Tree Analysis software package that computed the probabilistic failure rate for the total system. The dual-processor approach took a very long time to calculate due to the common-mode calculations that were required. With the greater number of processors, distributed amongst specific functions and using a decent interface technique, calculating the probable failure rate took much less time (1 day instead of 5 days).

Knowing this from that work, I have looked at the whole-system architecture aspect of multiple-processor systems. A processor per actuator, with a group controller and a central controller in a multi-layer system, always seems to simplify the whole system and, although the functionality of the whole still seems complicated, the understandability of the total system is very much eased. By using multiple simple processors, the overall system functionality is factored into much simpler sub-functions that are easier to fully understand, easier to test for compliance and easier to maintain.

--
********************************************************************
Paul E. Bennett ....................<email://peb@a...>
Forth based HIDECS Consultancy .....<http://www.amleth.demon.co.uk/>
Mob: +44 (0)7811-639972 .........NOW AVAILABLE:- HIDECS COURSE......
Tel: +44 (0)1235-811095 .... see http://www.feabhas.com for details.
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
"Michael N. Moran" <mnmoran@bellsouth.net> wrote in message
news:VVeGd.9321$Zv5.4432@bignews1.bellsouth.net...
> Robert Scott wrote:
>> The difference between C libraries and an RTOS is that most C library
>> functions can be simply documented and understood. But giving
>> overall control to an RTOS requires a more thorough understanding of
>> the interface. It is usually more than a series of simple function
>> calls. It is more like completely understanding all the ins and
>> outs of the Overlapped I/O operations in Win32 programming.
>
> If all we are talking about here is the RTOS, then the
> understanding of tasks/threads, semaphores, mutexes, and
> perhaps some other inter-thread communication mechanisms
> is all that is required.
>
> Yeah, it's a learning curve, but it's not *that* complex.
> A worthy grok for any software practitioner.
See a thread I started last year re the priority inversion problems that affected the Mars landers.

Steve
http://www.fivetrees.com
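The classic failure mode Steve alludes to is a low-priority task holding a lock needed by a high-priority task while medium-priority work runs. A minimal sketch of the usual mitigation, priority inheritance, assuming a POSIX-threads target that supports _POSIX_THREAD_PRIO_INHERIT:

/* Hedged sketch: a priority-inheritance mutex under POSIX threads. */
#include <pthread.h>

static pthread_mutex_t shared_lock;

int init_shared_lock(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);

    /* With a plain mutex, a low-priority task holding the lock can be
       preempted by medium-priority work while a high-priority task
       blocks on the lock -- the Pathfinder failure mode. Priority
       inheritance boosts the holder until it releases the lock. */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);

    return pthread_mutex_init(&shared_lock, &attr);
}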
Michael N. Moran wrote:

> Robert Scott wrote:
>> The difference between C libraries and an RTOS is that most C library
>> functions can be simply documented and understood. But giving
>> overall control to an RTOS requires a more thorough understanding of
>> the interface. It is usually more than a series of simple function
>> calls. It is more like completely understanding all the ins and
>> outs of the Overlapped I/O operations in Win32 programming.
>
> If all we are talking about here is the RTOS, then the
> understanding of tasks/threads, semaphores, mutexes, and
> perhaps some other inter-thread communication mechanisms
> is all that is required.
If only it were that simple.

Ian

--
Ian Bell
"Tim Wescott" <tim@wescottnospamdesign.com> wrote in message
news:10uj6m9iudtchd7@corp.supernews.com...
> Michael N. Moran wrote:
>
>> Ian Bell wrote:
>>
>>> But it is worth noting that there is a significant lowering of
>>> reliability as soon as you move to a pre-emptive system.
>>
>> What a bunch of FUD.
>> I assure you that for an appropriate application
>> I can write a more maintainable and highly reliable
>> application with a pre-emptive scheduler.
>>
>> Do you have any facts to back up that claim?
>
> I don't -- but all the SQA folks and software developers I know
> who've ever worked on really life-critical stuff (fly-by-wire and
> medical) tend to view RTOSes with deep suspicion bordering on paranoia.
> Basically it is _much_ easier to fully analyze the timing and
> inter-element interactions of a system based on a task loop than one
> based on an RTOS.
Amen. Determinism and synchronism.

Re use of an RTOS as an aid to teamwork - yeah, I've seen that. It always strikes me as an attempt to use an off-the-shelf tool (one that increases complexity and probable bug count) as an alternative to enforcing team discipline. I'm also a guitar player; I've seen the same thing in music - throw money at the problem when, really, what is needed is chops, i.e. skill and discipline. Ain't no shortcuts.

Steve
http://www.fivetrees.com
On Sat, 15 Jan 2005 14:35:57 -0800, Tim Wescott <tim@wescottnospamdesign.com>
wrote:

> I don't -- but all the SQA folks and software developers I know
> who've ever worked on really life-critical stuff (fly-by-wire and
> medical) tend to view RTOSes with deep suspicion bordering on paranoia.
One has to be that way. Operating systems are usually targeted at a broad audience for obvious marketing reasons -- more sales potential. But this also means that the base of code, the number of run-time conditional blocks of code that may or may not expose themselves readily, and the unused or hidden features in the O/S present their own unique challenges (and extra costs) for a critical application.

In some cases, the requirement to test and validate every possible branch of code that *may* be executed becomes "more difficult" when you don't have the source code for the operating system and therefore cannot easily determine exactly what different sections of code there are which may need validation.

Also, an operating system almost always exposes vastly more interfaces than are actually needed by an application, and their very presence in the code base presents unnecessary extra risk to be managed. That all this is made more vague and abstract in appearance by buying an "operating system" that is assumed "correct" doesn't help things -- it just makes it more likely that someone isn't going to expose on top of the table something that should have been examined more closely. Diffusion behind a proprietary software barrier doesn't make things "safer."

In cases where the source code is fully available for the operating system, it usually means a tremendous number of possible sources of risk that need to be analyzed, assessed and then tested.

This is one reason why I developed my own, which is compile-time customizable to include only the minimum code and data space required for the specific features used. The resulting emitted code carries the least possible impact, given the desired features, and allows both a very close fit to needs and a low impact on risk assessment, validation and then documentation/monitoring. The minimum necessary impact.

Like floating-point libraries, operating systems are powerful tools that raise the required bar of mastery for the engineers involved, and they can present unseen dangers to those with less experience and background. Careless use of either, without a very clear and hard-won understanding of the detailed areas where things need to be more carefully weighed, is just asking for trouble. On the other hand, thread semantics and pre-emption semantics, for example, can also be a very powerful tool for simplifying much of the application design and helping it to be more robust, understandable and maintainable.
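To illustrate the compile-time customization Jon describes, here is a hedged sketch with invented OS_CFG_* macros (this is not Jon's actual kernel); only the features a given application enables contribute any code or data to the image:

/* Hedged sketch of compile-time kernel configuration; all names are
   hypothetical illustrations. */

#define OS_CFG_MUTEX      1   /* this build needs mutexes...      */
#define OS_CFG_MSG_QUEUE  0   /* ...but pays nothing for queues   */

#if OS_CFG_MUTEX
typedef struct { volatile unsigned char locked; } os_mutex_t;
void os_mutex_lock(os_mutex_t *m);
void os_mutex_unlock(os_mutex_t *m);
#endif

#if OS_CFG_MSG_QUEUE
/* When disabled, the queue code is simply absent from the binary, so
   there is less to validate and nothing unused hiding in the image. */
typedef struct os_queue os_queue_t;
int os_queue_send(os_queue_t *q, const void *msg);
#endif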
> Basically it is _much_ easier to fully analyze the timing and
> inter-element interactions of a system based on a task loop than one
> based on an RTOS.
There is that. But it's sometimes hard to do (without an O/S of some kind) for complex systems where similar code is salted all about to handle the "real time" aspects in the middle of other functions, or where subroutines are periodically called to "proceed for a short time" and need to save and restore state and go back inside the nested if-conditions so that they continue where they left off... analyzing that, and making sure you test all the nested areas of code, can be as bad as having the O/S around.

Like many things, experience helps in judging when, and when not, to use an O/S and what form it should take. It depends.

Jon
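A minimal sketch of the "proceed for a short time" pattern Jon describes, with a hypothetical transfer job and stubbed hardware hooks; each call does one slice of work and keeps its place in an explicit state variable rather than in nested if-conditions:

/* Hedged sketch of a resumable step function; the job is invented. */
#include <stdbool.h>

enum xfer_state { XFER_IDLE, XFER_SENDING, XFER_WAITING };
static enum xfer_state state = XFER_IDLE;

/* Hypothetical hardware hooks, stubbed for the sketch. */
static void start_transmit(void) {}
static bool tx_done(void)        { return true; }
static bool ack_received(void)   { return true; }

/* Called periodically from the main loop: does a little work, then
   returns so everything else gets its turn. */
void xfer_poll(void)
{
    switch (state) {
    case XFER_IDLE:    start_transmit();   state = XFER_SENDING; break;
    case XFER_SENDING: if (tx_done())      state = XFER_WAITING; break;
    case XFER_WAITING: if (ack_received()) state = XFER_IDLE;    break;
    }
}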
Paul E. Bennett wrote:
> joep wrote:
>
>> Distributing the problem over more processors simplifies each
>> controller, but you still have to make the total system work. It seems
>> you're just moving the complexity somewhere else (interface
>> specification) and not really making the total system any less complex.
>
> You obviously haven't considered the same system from two different
> design strategies the way I have. I can assure you that, in a system
> whose requirements specification is very complex, there is a great deal
> of benefit in the multiple-processor approach.
>
> When I was developing a very large robotic system I explored the
> "what-if" of using a multitude of processors instead of a dual-processor
> system. I used a Fault Tree Analysis software package that computed the
> probabilistic failure rate for the total system. The dual-processor
> approach took a very long time to calculate due to the common-mode
> calculations that were required. With the greater number of processors,
> distributed amongst specific functions and using a decent interface
> technique, calculating the probable failure rate took much less time
> (1 day instead of 5 days).
>
> Knowing this from that work, I have looked at the whole-system
> architecture aspect of multiple-processor systems. A processor per
> actuator, with a group controller and a central controller in a
> multi-layer system, always seems to simplify the whole system and,
> although the functionality of the whole still seems complicated, the
> understandability of the total system is very much eased. By using
> multiple simple processors, the overall system functionality is
> factored into much simpler sub-functions that are easier to fully
> understand, easier to test for compliance and easier to maintain.
>
> --
> ********************************************************************
> Paul E. Bennett ....................<email://peb@a...>
> Forth based HIDECS Consultancy .....<http://www.amleth.demon.co.uk/>
> Mob: +44 (0)7811-639972 .........NOW AVAILABLE:- HIDECS COURSE......
> Tel: +44 (0)1235-811095 .... see http://www.feabhas.com for details.
> Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
> ********************************************************************
I guess I don't understand why you couldn't create "virtual processors" (tasks) within a single processor, using the same protocol/interface as your multiple-processor system. The virtual processors would have the same functionality as your multiple simple processors, and you would have removed the high-failure-rate physical connections.

Also, the practical matters of a system upgrade (multiple downloads, tracking software version compatibility between processors), the large development toolsets required (if using different types of processors), obsolescence headaches, multiple environmental issues and complex test simulations are all big negatives.

Now, I have worked with and designed distributed systems, but they were distributed because a single processor couldn't handle the throughput, or because a chunk of the problem was already solved by someone else and I could buy it off the shelf. If a single processor can do it all, it's a no-brainer for me which architecture to choose. However, like you said, if you do go with a distributed system, it helps to make one processor very smart and all the others very, very dumb (dictatorship is a good model to follow).
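A minimal sketch of this "virtual processor" idea, with hypothetical names: each task owns a private inbox and talks only through the same message traffic the physical processors would have exchanged, so the partitioning survives the move onto one chip:

/* Hedged sketch: tasks as "virtual processors" on a single CPU.
   All names are invented for the example. */
#include <stddef.h>

typedef struct { int id; int payload; } msg_t;

#define INBOX_DEPTH 8
typedef struct {
    msg_t q[INBOX_DEPTH];
    size_t head, tail;
} inbox_t;

static int inbox_put(inbox_t *in, msg_t m)
{
    size_t next = (in->tail + 1) % INBOX_DEPTH;
    if (next == in->head)
        return -1;              /* full: the same back-pressure a
                                   real link would exhibit */
    in->q[in->tail] = m;
    in->tail = next;
    return 0;
}

static int inbox_get(inbox_t *in, msg_t *m)
{
    if (in->head == in->tail)
        return -1;              /* empty */
    *m = in->q[in->head];
    in->head = (in->head + 1) % INBOX_DEPTH;
    return 0;
}

/* One "virtual processor" per actuator, dispatched from a single loop. */
static inbox_t actuator_inbox;

static void actuator_task(void)
{
    msg_t m;
    while (inbox_get(&actuator_inbox, &m) == 0) {
        (void)m;                /* drive_actuator(m.payload); -- hypothetical */
    }
}

int main(void)
{
    for (;;) {
        actuator_task();
        /* group_controller_task(); central_controller_task(); ... */
    }
}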