Reply by Niklas Holsti March 1, 2012
On 12-03-01 02:28 , Les Cargill wrote:
> Hi Niklas. Are you doing a lot of multicore stuff?
No... I have to admit that these days I don't do much application development at all, I mostly work on timing analysis tools. But recently I have been involved peripherally with a couple of applications (satellite on-board SW), one with code generated from event-driven state-charts, the other using a minor/major-frame static, non-preemptive schedule. The latter system has lots of artificial splitting of large jobs into small pieces, which gives exactly the kind of complex code I have been warning about in this discussion.
> I haven't had the pleasure yet, and that might be why we're missing each other. Multicore is certainly different.
Maybe. On the other hand, it is commonly held that multi-threaded, pre-emptive SW can run on (symmetric) multi-core machines with no changes, and make good use of the cores, since the SW is already prepared for threads to run concurrently, and is thus prepared for the real parallelism of a multi-core system.
> The systems I've worked on also used a minimal number of threads - usually having an additional thread meant we had a different interrupting device to manage.
If you use run-to-completion event-handling, you can run several state-machines concurrently within one thread, by interleaving their transitions and using a single, shared queue for incoming events to all these state-machines. For example, Rhapsody-in-C translates state-charts to this kind of C code. If that works with sufficient real-time performance, you can get by with a small number of threads.
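As a rough illustration of that arrangement, here is a minimal sketch in C (the names and types are invented for this example, and are not the code that Rhapsody or any other tool generates): several state machines share one event queue, and a single thread takes one event at a time and runs the target machine's transition to completion before looking at the next event.

    #include <stddef.h>

    /* One event, addressed to a particular state machine. */
    typedef struct {
        int target;   /* index of the destination state machine */
        int signal;   /* application-defined event code */
    } event_t;

    /* A state machine: current state plus a dispatch function that
     * runs one transition to completion for each delivered event. */
    typedef struct sm {
        int state;
        void (*dispatch)(struct sm *self, int signal);
    } sm_t;

    #define QUEUE_LEN 32
    #define NUM_SM     4

    static event_t queue[QUEUE_LEN];
    static volatile size_t head, tail;   /* single producer / single consumer */
    static sm_t machines[NUM_SM];        /* dispatch functions filled in by the application */

    /* Post an event, e.g. from an interrupt handler; returns 0 on success.
     * Locking and memory barriers are omitted for brevity. */
    int post_event(int target, int signal)
    {
        size_t next = (tail + 1) % QUEUE_LEN;
        if (next == head)
            return -1;                   /* queue full: count or drop the event */
        queue[tail].target = target;
        queue[tail].signal = signal;
        tail = next;
        return 0;
    }

    /* The single event loop: interleaves the transitions of all machines. */
    void event_loop(void)
    {
        for (;;) {
            while (head == tail)
                ;                        /* idle: wait (or sleep) until an event arrives */
            event_t e = queue[head];
            head = (head + 1) % QUEUE_LEN;
            sm_t *m = &machines[e.target];
            m->dispatch(m, e.signal);    /* one transition, run to completion */
        }
    }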
Niklas:

>> You say that each thread has to be "circumspect" in CPU usage. That is rather vague.
Les:
> It has to get in, do a small task, then get out at each point in its state. "Circumspect" means "parsimonious" or "cheap" in this case - it must use the least CPU necessary to execute that state transition, and get back to a blocking call as quickly as it can.
What you say is all qualitative and not quantitative. Goals like "as quickly as it can" are typical of soft real-time systems (and non-real-time, throughput-oriented systems). Hard real-time systems must consider execution time quantitatively in the design. Suppose the system has an event that causes a transition that takes 500 ms to execute, even when coded to be as quick as it can -- for example, some sensor-fusion or image-processing stuff that wrestles with large floating-point matrices. Is this fast enough? That depends on how much the system can afford to delay its response to *other* events. If a delay of 500 ms is tolerable, this design works, even without preemption. If some event requires a response time less than 500 ms, you must either let this event preempt the 500 ms transition, or split the 500 ms transition into smaller pieces, which I consider artificial.
>> If the system has real-time deadlines, but is not preemptive, it can work only if "circumspect" means that the thread execution times (between reschedulings) are smaller than the smallest required response time. Do you agree with this?
>
> Not universally; no. One realtime deadline may require many time quanta - a given thread may execute many times within a single deadline time period.
But that strengthens my argument: the time between reschedulings must then be less than the corresponding fraction of the smallest deadline. For example, if thread A has a deadline of 10 ms, and needs to execute 5 times in that time, you need to have at least 5 reschedulings in 10 ms, so no thread can use more than 2 ms between reschedulings, or thereabouts.
>> (In reality the times must often be a lot smaller, if incoming events are sporadic without a fixed phasing.)
>>
>> This means that the smallest required response time constrains the design of all threads, and therefore a reduction in the smallest required response time can force changes in the code of all threads, in a non-preemptive system. Do you agree?
>
> I think you are assuming a one-to-one map between *all* responses and that time quantum. So no. A single response may require multiple time quanta, still.
Which makes the situation worse -- see above -- there must be even more frequent reschedulings, and even stronger constraints on the maximum execution time between reschedulings.

[snip]
> My use of the paradigm precedes any of the object gurus, IMO. OO hadn't quite propagated to realtime in the '80s in a serious way. I have since used things like ObjecTime, Rose and Rhapsody, but we'd done things like this with nothing but a 'C' compiler on bare metal before.
Yep, it is about the only kind of concurrency you can do with just 'C' and no RTOS. The second application that I mentioned at the start of this post is built like that.
> Some of those things had hundreds of states ( which may be what you are saying is the horror of it ) but I did not see that as a curse. We were able to log events and state for testing and never had a defect that wasn't 100% reproducible because of it...
Lots of states are OK if they are implied by the requirements. But if you must split a single state into 10 states, just because the transition to this state would otherwise take too long (in a non-preemptive system), these 10 states are artificial and I don't like them.
>> If your system can be designed in a natural way without preemption, do so.
>
> I, unfortunately, don't really know what that means.
It means that if you implement the state machines in their natural form, as implied by the application requirements, the state transitions are nevertheless fast enough and do not make the (non-preemptive) system too slow to respond.
>> But if you can avoid preemption only by artificially slicing the longer jobs into small pieces, you introduce similar risks (the order of execution of the pieces, and their interactions, may be hard to foresee) and much unnecessary complexity of code.
>
> If for any case, any of that is true, then yes :) There's no crime in using whatever works.
>
> In summary, though, my statement stands:
>
> I do not see how having the system timer tick swap out a running thread improves the reliability
It can let other threads meet their deadlines, even if the running thread exceeds its designed execution time, for some reason (unusual state, coding bug, bad luck with the cache, whatever).
> or determinacy of a system,
At least in priority-based systems, it can reduce the jitter in the activation times of high-priority threads.
> nor how it makes the design of a system easier.
Without it, you may have to split long jobs (long state transitions) into smaller pieces, artificially, just to get frequent reschedulings. But I have said all that before, so it is time to stop.

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi . @ .
Reply by Les Cargill February 29, 2012
Hi Niklas. Are you doing a lot of multicore stuff? I haven't
had the pleasure yet, and that might be why we're missing each other.
Multicore is certainly different. The systems I've worked on
also used a minimal number of threads - usually having an additional
thread meant we had a different interrupting device to manage.

I certainly appreciate your very well presented thoughts. "Highly 
granular" processing has been something of a deep assumption for
a long time, and those are always good to challenge.

And it could be that I was simply corrupted by FPGA designers :)

Niklas Holsti wrote:
> On 12-02-28 15:06 , Les Cargill wrote: >> Niklas Holsti wrote: >> <snip> >>> >>> Les: >>> >>>> Ah! I see our disconnect. >>>> >>>> I am referring to preemptive multitasking vs. "cooperative" >>>> multitasking. >>>> >>>> Preemptive simply reruns the ready queue when the system clock >>>> timer ticks. Whoever is running when the system clock ticks gets put >>>> back on the ready queue and waits. >>> >>> That describes preemptive time-sliced round-robin scheduling without >>> priorities. I believe that tends to be used in soft real-time systems, >>> not so much in hard real-time systems. >>> >> >> No; the queue can be a priority queue. > > You said that "whoever is running ... gets put back in the ready queue > and waits", which is not priority scheduling. >
I should have left off "and waits".
> In a priority-driven system, there is no need to mess with the ready queue at every clock tick, only when an event makes some waiting task ready to execute. (The better systems don't even waste time on handling periodic clock ticks, but program a HW timer to interrupt when the next timed event comes up, whenever that is.)
Sure! I'm mainly thinking of how things are described in most academic literature in order to be as general as possible. Lots of ways to skin that cat.
>> Soft vs. hard realtime is a rhetorical swamp ":)
>
> The distinction is fuzzy, but real.
Somewhat.
>>>> Cooperative does not. It is the responsibility of each thread of execution to be circumspect in its use of the CPU.
>>>
>>> Which adds to the design constraints and makes the design of each thread more complex. Why should the code of thread A have to change, just because the period or deadline of thread B has changed?
>>
>> Erm.... it doesn't. That's rather the point....
>
> We do not understand each other.
I think that is true. I'm not sure what to do about that, either :)
> You say that each thread has to be "circumspect" in CPU usage. That is rather vague.
It has to get in, do a small task, then get out at each point in its state. "Circumspect" means "parsimonious" or "cheap" in this case - it must use the least CPU necessary to execute that state transition, and get back to a blocking call as quickly as it can.
> If the system has real-time deadlines, but is not preemptive, it can work only if "circumspect" means that the thread execution times (between reschedulings) are smaller than the smallest required response time. Do you agree with this?
Not universally; no. One realtime deadline may require many time quanta - a given thread may execute many times within a single deadline time period.
> (In reality the times must often be a lot smaller, if incoming events are sporadic without a fixed phasing.)
>
> This means that the smallest required response time constrains the design of all threads, and therefore a reduction in the smallest required response time can force changes in the code of all threads, in a non-preemptive system. Do you agree?
I think you are assuming a one-to-one map between *all* responses and that time quantum. So no. A single response may require multiple time quanta, still.
>>>>> If you are lucky enough to have a system in which the events, periods, >>>>> deadlines, and processing algorithms are such that you can process any >>>>> event to completion, before reacting to other events, and still meet >>>>> all >>>>> deadlines, you don't need preemption. In any other case, avoiding >>>>> preemption is asking for trouble, IMO still. >>>>> >>>> >>>> >>>> yes, I very much prefer run-to-completion for any kind of processing, >>>> but especially for realtime. >>>> >>>> In thirty years, I've never seen a case where run to completion was >>>> more difficult than other paradigms. That does not mean >>>> other events were locked out; it simply means that the data for them >>>> was >>>> queued. >>> >>> Ok, you have been lucky. In more heavily stressed real-time systems, >> >> What is even odder is: heavily stressed systems I've seen were the ones >> *mainly* that *used* run to completion. > > Makes me suspect that they were not well designed, or were soft > real-time systems. >
They varied. I don't know of a good working distinction between soft and hard realtime, so I can't speak to the last thing.
>> You'd allow some events, less important ones, to be dropped. Or go to a task loop architecture. In either case, having good instrumentation to count dropped events is important.
>
> Sounds more and more like soft real-time. If a hard-real-time system drops events, it is entering its abnormal, fault-tolerance mode. But dropping events can be normal for a soft-real-time system.
It's possible that what I mean lines up with that. Only a few had enforced time budgets. The reason I brought that up was that the failure modes were gentler - you got slow degradation of response rather than falling off the cliff. That, of course, depends on what's desired of the system to start with...
>>> run-to-completion is a strait-jacket that forces the designer to chop large jobs into artificial, small ones, until the small ones can be said to "run to completion", although they really are just small steps in a larger job.
>>
>> Possibly. Although the approach makes it possible to control how the overall system fails. That's mainly what's good about it.
>
> Here I can agree: when you have split the large jobs into several small pieces, and use some kind of scheduler to dispatch the pieces, it is easy to add some code that gets executed between pieces and can reorganize the sequence of pieces, for example aborting some long job after its current piece.
That's the general idea. The overall idea is really to make a "loop" into a series (or cycle) of state transitions rather than use control constructs.
> If you need to abort long jobs that have not been split into small pieces (because the system is preemptive), you either have to poll an "abort" flag frequently within the long job, or use kernel primitives to abort the whole thread, which can be messy.
Ick. What I mean really doesn't hurt that bad :) These systems didn't use a large number of threads.
> (I can't resist noting here that Ada has a nice mechanism for aborting computations, called "asynchronous transfer of control".)

>> If you'll look at Bruce Powell Douglass' book, I believe it stresses that run to completion is a virtue in high reliability systems. Not pushing that but it's simply one book I know about.
>
> The object-oriented gurus love run-to-completion because it makes it look as if the object-method-statechart structure is natural for real-time systems and lets one avoid the "difficulties" of preemption and critical sections. But in practice, in such designs it is often necessary to run different objects/statecharts in different threads, at different priorities, to get preemption and responsiveness.
My use of the paradigm precedes any of the object gurus, IMO. OO hadn't quite propagated to realtime in the '80s in a serious way. I have since used things like ObjecTime, Rose and Rhapsody, but we'd done things like this with nothing but a 'C' compiler on bare metal before. Some of those things had hundreds of states ( which may be what you are saying is the horror of it ) but I did not see that as a curse. We were able to log events and state for testing and never had a defect that wasn't 100% reproducible because of it...
> Preemption brings some risks, since the programmers can mess up the inter-thread data sharing and synchronization. If your system can be designed in a natural way without preemption, do so.
I, unfortunately, don't really know what that means.
> But if you can avoid preemption only by artificially slicing the longer jobs into small pieces, you introduce similar risks (the order of execution of the pieces, and their interactions, may be hard to foresee) and much unnecessary complexity of code.
If for any case, any of that is true, then yes :) There's no crime in using whatever works.

In summary, though, my statement stands:

I do not see how having the system timer tick swap out a running thread improves the reliability or determinacy of a system, nor how it makes the design of a system easier.

--
Les Cargill
Reply by Niklas Holsti February 29, 2012
On 12-02-28 15:06 , Les Cargill wrote:
> Niklas Holsti wrote:
> <snip>
>>
>> Les:
>>
>>> Ah! I see our disconnect.
>>>
>>> I am referring to preemptive multitasking vs. "cooperative" multitasking.
>>>
>>> Preemptive simply reruns the ready queue when the system clock timer ticks. Whoever is running when the system clock ticks gets put back on the ready queue and waits.
>>
>> That describes preemptive time-sliced round-robin scheduling without priorities. I believe that tends to be used in soft real-time systems, not so much in hard real-time systems.
>
> No; the queue can be a priority queue.
You said that "whoever is running ... gets put back in the ready queue and waits", which is not priority scheduling.

In a priority-driven system, there is no need to mess with the ready queue at every clock tick, only when an event makes some waiting task ready to execute. (The better systems don't even waste time on handling periodic clock ticks, but program a HW timer to interrupt when the next timed event comes up, whenever that is.)
> Soft vs. hard realtime is a rhetorical swamp ":)
The distinction is fuzzy, but real.
>>> Cooperative does not. It is the responsibility of each thread of execution to be circumspect in its use of the CPU.
>>
>> Which adds to the design constraints and makes the design of each thread more complex. Why should the code of thread A have to change, just because the period or deadline of thread B has changed?
>
> Erm.... it doesn't. That's rather the point....
We do not understand each other.

You say that each thread has to be "circumspect" in CPU usage. That is rather vague.

If the system has real-time deadlines, but is not preemptive, it can work only if "circumspect" means that the thread execution times (between reschedulings) are smaller than the smallest required response time. Do you agree with this? (In reality the times must often be a lot smaller, if incoming events are sporadic without a fixed phasing.)

This means that the smallest required response time constrains the design of all threads, and therefore a reduction in the smallest required response time can force changes in the code of all threads, in a non-preemptive system. Do you agree?
>>>> If you are lucky enough to have a system in which the events, periods, >>>> deadlines, and processing algorithms are such that you can process any >>>> event to completion, before reacting to other events, and still meet >>>> all >>>> deadlines, you don't need preemption. In any other case, avoiding >>>> preemption is asking for trouble, IMO still. >>>> >>> >>> >>> yes, I very much prefer run-to-completion for any kind of processing, >>> but especially for realtime. >>> >>> In thirty years, I've never seen a case where run to completion was >>> more difficult than other paradigms. That does not mean >>> other events were locked out; it simply means that the data for them was >>> queued. >> >> Ok, you have been lucky. In more heavily stressed real-time systems, > > What is even odder is: heavily stressed systems I've seen were the ones > *mainly* that *used* run to completion.
Makes me suspect that they were not well designed, or were soft real-time systems.
> You'd allow some events, less important ones, to be dropped. Or go to a task loop architecture. In either case, having good instrumentation to count dropped events is important.
Sounds more and more like soft real-time. If a hard-real-time system drops events, it is entering its abnormal, fault-tolerance mode. But dropping events can be normal for a soft-real-time system.
>> run-to-completion is a strait-jacket that forces the designer to chop large jobs into artificial, small ones, until the small ones can be said to "run to completion", although they really are just small steps in a larger job.
>
> Possibly. Although the approach makes it possible to control how the overall system fails. That's mainly what's good about it.
Here I can agree: when you have split the large jobs into several small pieces, and use some kind of scheduler to dispatch the pieces, it is easy to add some code that gets executed between pieces and can reorganize the sequence of pieces, for example aborting some long job after its current piece.

If you need to abort long jobs that have not been split into small pieces (because the system is preemptive), you either have to poll an "abort" flag frequently within the long job, or use kernel primitives to abort the whole thread, which can be messy. (I can't resist noting here that Ada has a nice mechanism for aborting computations, called "asynchronous transfer of control".)
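A minimal sketch of the polling alternative, in C (the flag name and the work inside the loop are invented for the example): a long job checks a volatile abort flag once per outer loop iteration, so the abort latency is bounded by the execution time of one iteration.

    #include <stdbool.h>
    #include <stddef.h>

    /* Set from an interrupt handler or another task to request an abort. */
    static volatile bool abort_requested = false;

    /* A long job (stand-in for, say, a large matrix operation) that polls
     * the abort flag once per outer iteration. Returns the number of rows
     * processed, which is less than 'rows' if the job was aborted early. */
    size_t long_job(double *data, size_t rows, size_t cols)
    {
        size_t r;
        for (r = 0; r < rows; r++) {
            if (abort_requested)             /* poll: at most one row of latency */
                break;
            for (size_t c = 0; c < cols; c++)
                data[r * cols + c] *= 2.0;   /* stand-in for the real work */
        }
        return r;
    }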
> If you'll look at Bruce Powell Douglass' book, I believe it stresses that run to completion is a virtue in high reliability systems. Not pushing that but it's simply one book I know about.
The object-oriented gurus love run-to-completion because it makes it look as if the object-method-statechart structure is natural for real-time systems and lets one avoid the "difficulties" of preemption and critical sections. But in practice, in such designs it is often necessary to run different objects/statecharts in different threads, at different priorities, to get preemption and responsiveness.

Preemption brings some risks, since the programmers can mess up the inter-thread data sharing and synchronization. If your system can be designed in a natural way without preemption, do so. But if you can avoid preemption only by artificially slicing the longer jobs into small pieces, you introduce similar risks (the order of execution of the pieces, and their interactions, may be hard to foresee) and much unnecessary complexity of code.

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi . @ .
Reply by Sink0 February 29, 2012
On Feb 24, 9:29 pm, Paul <p...@pcserviceselectronics.co.uk> wrote:
> In article <ji7s46$p1...@dont-email.me>, noem...@given.com says...
>>
>> On 22/02/2012 03:26, Sink0 wrote:
>>> I was thinking with myself today and i remembered an old software architecture for embedded systems division:
>>>
>>> 1. Round-robin
>>> 2. Round-robin with interrupts
>>> 3. Function-queue-scheduling
>>> 4. Real-time Operating System
>
> The thing I see that was missed in the above is other operating system architectures like pre-emptive, fault-tolerant, failover, then the categories such as single/multi-processor/multi-core. Then we could consider Real Time versions of them.
It is not that I missed it; it is a more global division, much more specific to RT embedded systems. I believe that division is mentioned in the "An Embedded Software Primer" book, but I might be wrong. Still, a Round-Robin with interrupts is preemptive, and can be fault tolerant or not, as an example. The system can be multi-core, but I am disregarding multi-core-specific architectures as they have a much more limited range of applications.
> Whether it is realtime or not depends on the application e.g. a temperature monitor using a loop of 3 jumped-to tasks can be deemed real time if it ALWAYS responds within its SPECIFIED response time.
I know, but in theory the choice should be made according to the application. But that is not totally true and linear. Several applications can be implemented with any of the three choices (again considering 1 and 2 as the same), and probably the final choice is much related to the developer's previous experience. And that's the sort of comment and suggestion I am looking for. Still, the discussion here is very productive for me. Several developers have very different experiences, so that might help others (like me) to open their minds about embedded software development. As an example, I tend to avoid COTS RTOS and most of the time I go for a semi-OS approach, as Tim mentioned earlier. However, there are several arguments for why to use a COTS RTOS, and I want to hear others' experience with that. Thank you everyone for the comments and suggestions. Cya
Reply by Les Cargill February 28, 2012
Niklas Holsti wrote:
<snip>
> > Les: > >> Ah! I see our disconnect. >> >> I am referring to preemptive multitasking vs. "cooperative" multitasking. >> >> Preemptive simply reruns the ready queue when the system clock >> timer ticks. Whoever is running when the system clock ticks gets put >> back on the ready queue and waits. > > That describes preemptive time-sliced round-robin scheduling without > priorities. I believe that tends to be used in soft real-time systems, > not so much in hard real-time systems. >
No; the queue can be a priority queue. Soft vs. hard realtime is a rhetorical swamp ":)
> In priority-based preemptive scheduling, the running task keeps running until it suspends itself (waits for something), or until some task of higher priority becomes ready to run.
Right. In fact, the "swap()"* verb simply puts the last guy running back on ( in all cases I know about ) if he has the highest priority.

*defined as the code that exchanges register stacks between operating threads.
>> Cooperative does not. It is the responsibility of each thread of execution to be circumspect in its use of the CPU.
>
> Which adds to the design constraints and makes the design of each thread more complex. Why should the code of thread A have to change, just because the period or deadline of thread B has changed?
Erm.... it doesn't. That's rather the point....
>>> If you are lucky enough to have a system in which the events, periods, >>> deadlines, and processing algorithms are such that you can process any >>> event to completion, before reacting to other events, and still meet all >>> deadlines, you don't need preemption. In any other case, avoiding >>> preemption is asking for trouble, IMO still. >>> >> >> >> yes, I very much prefer run-to-completion for any kind of processing, >> but especially for realtime. >> >> In thirty years, I've never seen a case where run to completion was >> more difficult than other paradigms. That does not mean >> other events were locked out; it simply means that the data for them was >> queued. > > Ok, you have been lucky. In more heavily stressed real-time systems,
What is even odder is: heavily stressed systems I've seen were the ones *mainly* that *used* run to completion. You'd allow some events, less important ones, to be dropped. Or go to a task loop architecture. In either case, having good instrumentation to count dropped events is important.
> run-to-completion is a strait-jacket that forces the designer to chop large jobs into artificial, small ones, until the small ones can be said to "run to completion", although they really are just small steps in a larger job.
Possibly. Although the approach makes it possible to control how the overall system fails. That's mainly what's good about it.

If you'll look at Bruce Powell Douglass' book, I believe it stresses that run to completion is a virtue in high reliability systems. Not pushing that but it's simply one book I know about.

--
Les Cargill
Reply by Niklas Holsti February 28, 2012
On 12-02-28 02:16 , Les Cargill wrote:
> Niklas Holsti wrote: >> On 12-02-26 22:44 , Les Cargill wrote: >>> Niklas Holsti wrote: >>>> On 12-02-25 21:58 , Les Cargill wrote: >>>>> Tim Wescott wrote: >>>>>> On Fri, 24 Feb 2012 11:26:04 +0000, FreeRTOS info wrote: >>>>>> >>>>>>> On 22/02/2012 03:26, Sink0 wrote: >>>>>>>> I was thnking with myself today and i remembered an old software >>>>>>>> architecture for embedded systems division: >>>>>>>> >>>>>>>> 1. Round-robin >>>>>>>> 2. Round-robin with interrupts >>>>>>>> 3. Function-queue-scheduling >>>>>>>> 4. Real-time Operating Syste >> >> [snip]
Les Cargill:
>>>>> Meh. The system is still a hunk of garbage if it depends on "preemptive".
Niklas Holsti:
>>>> That is a very surprising opinion. If a SW designer cannot depend on preemption happening as designed, the benefits of preemption for simple design of real-time behaviour are lost, and the SW has to be designed in a much more complex way.
Les:
>>> I don't think it's particularly "more complex" myself - it's just closer to being deterministic.
[snip]
>>> For realtime especially, I think of things as being event driven.
Niklas:
>> Assume you have a simple system with two types of events. Event A occurs at most once per second, takes 0.5 s to process, with a deadline of 1 s. Event B happens at most once per 10 ms, takes 1 ms to process, with a deadline of 10 ms.
>>
>> How do you handle the B events in time, without preempting the processing of the A events? You can perhaps handle the B's in an interrupt handler, but interrupts are just a HW form of preemption.
Les:
> Ah! I see our disconnect.
>
> I am referring to preemptive multitasking vs. "cooperative" multitasking.
>
> Preemptive simply reruns the ready queue when the system clock timer ticks. Whoever is running when the system clock ticks gets put back on the ready queue and waits.
That describes preemptive time-sliced round-robin scheduling without priorities. I believe that tends to be used in soft real-time systems, not so much in hard real-time systems.

In priority-based preemptive scheduling, the running task keeps running until it suspends itself (waits for something), or until some task of higher priority becomes ready to run.
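For contrast, here is a minimal, invented sketch in C of the priority-based case (not any particular RTOS's API; the fixed-priority table and the stubbed context switch are illustrative only): the scheduler runs only when some task becomes ready or the running task blocks, and it switches only if the newly ready task outranks the one that is running.

    #include <stdbool.h>

    #define NUM_TASKS 4
    #define IDLE_TASK (NUM_TASKS - 1)       /* lowest priority, always ready */

    typedef struct {
        int  priority;                      /* smaller number = higher priority */
        bool ready;
    } tcb_t;

    static tcb_t tasks[NUM_TASKS] = {
        { .priority = 1, .ready = false },
        { .priority = 2, .ready = false },
        { .priority = 3, .ready = false },
        { .priority = 9, .ready = true  },  /* the idle task */
    };

    static int current = IDLE_TASK;

    /* Stand-in for the real context switch, which would save and restore
     * register state; here it only records which task is "running". */
    static void context_switch(int next)
    {
        current = next;
    }

    static int highest_ready(void)
    {
        int best = IDLE_TASK;               /* idle is always ready */
        for (int i = 0; i < NUM_TASKS; i++)
            if (tasks[i].ready && tasks[i].priority < tasks[best].priority)
                best = i;
        return best;
    }

    /* Called when an event (interrupt, message, timeout) makes task t ready.
     * The running task is preempted only if t has higher priority; there is
     * no reshuffling of the ready queue on every clock tick. */
    void make_ready(int t)
    {
        tasks[t].ready = true;
        int next = highest_ready();
        if (next != current)
            context_switch(next);
    }

    /* Called when the running task suspends itself to wait for something. */
    void block_current(void)
    {
        tasks[current].ready = false;
        context_switch(highest_ready());
    }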
> Cooperative does not. It is the responsibility of each thread of execution to be circumspect in its use of the CPU.
Which adds to the design constraints and makes the design of each thread more complex. Why should the code of thread A have to change, just because the period or deadline of thread B has changed?
>> If you are lucky enough to have a system in which the events, periods, deadlines, and processing algorithms are such that you can process any event to completion, before reacting to other events, and still meet all deadlines, you don't need preemption. In any other case, avoiding preemption is asking for trouble, IMO still.
>
> yes, I very much prefer run-to-completion for any kind of processing, but especially for realtime.
>
> In thirty years, I've never seen a case where run to completion was more difficult than other paradigms. That does not mean other events were locked out; it simply means that the data for them was queued.
Ok, you have been lucky. In more heavily stressed real-time systems, run-to-completion is a strait-jacket that forces the designer to chop large jobs into artificial, small ones, until the small ones can be said to "run to completion", although they really are just small steps in a larger job.

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi . @ .
Reply by Les Cargill February 27, 2012
Niklas Holsti wrote:
> On 12-02-26 22:44 , Les Cargill wrote: >> Niklas Holsti wrote: >>> On 12-02-25 21:58 , Les Cargill wrote: >>>> Tim Wescott wrote: >>>>> On Fri, 24 Feb 2012 11:26:04 +0000, FreeRTOS info wrote: >>>>> >>>>>> On 22/02/2012 03:26, Sink0 wrote: >>>>>>> I was thnking with myself today and i remembered an old software >>>>>>> architecture for embedded systems division: >>>>>>> >>>>>>> 1. Round-robin >>>>>>> 2. Round-robin with interrupts >>>>>>> 3. Function-queue-scheduling >>>>>>> 4. Real-time Operating Syste > > [snip] > >>>> Meh. The system is still a hunk of garbage if it depends on >>>> "preemptive". >>> >>> That is a very surprising opinion. If a SW designer cannot depend on >>> preemption happening as designed, the benefits of preemption for simple >>> design of real-time behaviour are lost, and the SW has to be designed in >>> a much more complex way. >>> >> >> I don't think it's particularly "more complex" myself - it's just >> closer to being deterministic. >> >>> IMO it is absolutely OK for the real-time correctness of a preemptive >>> design to depend on preemption. >>> >> >> I don't believe that is the case. Maybe that's just me; dunno. > > [snip] > >> Perhaps "garbage" was too strong a word. How's "untrustworthy"? > > Still has to be motivated by an argument. > >> >> Suit yourself; I believe that depending on preemption is a recipe for >> latent defects. But it might be good enough for the domain, and it >> might otherwise work out fine. >> >> If I may... you seem to think that depending on preemption is somehow >> easier. That is an opinion I've seen before, but it doesn't seem to >> make much sense (to me). Even when I'm on a Linux system ( embedded or >> desktop ) , I tend to write things to behave as if there was no >> preemption. >> >> That means they hard block on an object like a sempahore/queue/spinlock/ >> timer and quickly determine that conditions to execute based on that >> object are true. > > Yes, that is how one suspends a task. And when a task wakes up, it > usually has to check the current state to decide what to do. But what > has that do with preemption? > >> For realtime especially, I think of things as being event driven. Events >> may be calculated from task loops and timer driven, but there's some >> sort of "regulator" ( think of an escapement on a pendulum clock ) >> and some fairly constant-time action regulated by that. > > Assume you have a simple system with two types of events. Event A occurs > at most once per second, takes 0.5 s to process, with a deadline of 1 s. > Event B happens at most once per 10 ms, takes 1 ms to process, with a > deadline of 10 ms. > > How do you handle the B events in time, without preempting the > processing of the A events? You can perhaps handle the B's in an > interrupt handler, but interrupts are just a HW form of preemption. >
Ah! I see our disconnect.

I am referring to preemptive multitasking vs. "cooperative" multitasking.

Preemptive simply reruns the ready queue when the system clock timer ticks. Whoever is running when the system clock ticks gets put back on the ready queue and waits.

Cooperative does not. It is the responsibility of each thread of execution to be circumspect in its use of the CPU.

<snip>
>> All this trouble is in the service of determinacy and correctness.
>
> If you are lucky enough to have a system in which the events, periods, deadlines, and processing algorithms are such that you can process any event to completion, before reacting to other events, and still meet all deadlines, you don't need preemption. In any other case, avoiding preemption is asking for trouble, IMO still.
yes, I very much prefer run-to-completion for any kind of processing, but especially for realtime.

In thirty years, I've never seen a case where run to completion was more difficult than other paradigms. That does not mean other events were locked out; it simply means that the data for them was queued.

In a handful of cases, I was replacing unstable code that *wasn't* run to completion with code that was. Yeah, it took a bit more design but it was rock solid and stable after the change.

--
Les Cargill
Reply by Niklas Holsti February 27, 2012
On 12-02-26 22:44 , Les Cargill wrote:
> Niklas Holsti wrote: >> On 12-02-25 21:58 , Les Cargill wrote: >>> Tim Wescott wrote: >>>> On Fri, 24 Feb 2012 11:26:04 +0000, FreeRTOS info wrote: >>>> >>>>> On 22/02/2012 03:26, Sink0 wrote: >>>>>> I was thnking with myself today and i remembered an old software >>>>>> architecture for embedded systems division: >>>>>> >>>>>> 1. Round-robin >>>>>> 2. Round-robin with interrupts >>>>>> 3. Function-queue-scheduling >>>>>> 4. Real-time Operating Syste
[snip]
>>> Meh. The system is still a hunk of garbage if it depends on "preemptive".
>>
>> That is a very surprising opinion. If a SW designer cannot depend on preemption happening as designed, the benefits of preemption for simple design of real-time behaviour are lost, and the SW has to be designed in a much more complex way.
>
> I don't think it's particularly "more complex" myself - it's just closer to being deterministic.

>> IMO it is absolutely OK for the real-time correctness of a preemptive design to depend on preemption.
>
> I don't believe that is the case. Maybe that's just me; dunno.
[snip]
> Perhaps "garbage" was too strong a word. How's "untrustworthy"?
Still has to be motivated by an argument.
> Suit yourself; I believe that depending on preemption is a recipe for latent defects. But it might be good enough for the domain, and it might otherwise work out fine.
>
> If I may... you seem to think that depending on preemption is somehow easier. That is an opinion I've seen before, but it doesn't seem to make much sense (to me). Even when I'm on a Linux system ( embedded or desktop ), I tend to write things to behave as if there was no preemption.
>
> That means they hard block on an object like a semaphore/queue/spinlock/timer and quickly determine that conditions to execute based on that object are true.
Yes, that is how one suspends a task. And when a task wakes up, it usually has to check the current state to decide what to do. But what has that to do with preemption?
> For realtime especially, I think of things as being event driven. Events may be calculated from task loops and timer driven, but there's some sort of "regulator" ( think of an escapement on a pendulum clock ) and some fairly constant-time action regulated by that.
Assume you have a simple system with two types of events. Event A occurs at most once per second, takes 0.5 s to process, with a deadline of 1 s. Event B happens at most once per 10 ms, takes 1 ms to process, with a deadline of 10 ms.

How do you handle the B events in time, without preempting the processing of the A events? You can perhaps handle the B's in an interrupt handler, but interrupts are just a HW form of preemption.

The only two methods I can think of are (1) to insert lots of polls for event B in the code that handles event A, making sure that no interval between polls is more than 9 ms, or (2) to split the processing of event A into many small sub-functions ("sub-events" if you like), each taking at most 9 ms to execute, and to have a main loop that calls each sub-function in turn and checks for events in between sub-functions.

Both methods complexify the code that processes A events. Method (1) becomes a horror when there are more than two events with different periods and deadlines. Method (2) becomes a horror when the processing algorithm for A involves much temporary data and control state that must be passed between the sub-functions. For one thing, the limit on the execution time of the sub-functions can force one to divide long loops into parts, for example a 1000-iteration loop into 10 x 100 iterations.

Then consider what happens if the specs change so that event B occurs at 5 ms intervals, with a 5 ms deadline. In method (1) the number of polls must be doubled. In method (2) many sub-functions may have to be split into smaller sub-functions. With preemption, nothing in the code that processes event A has to be changed.

Both non-preemptive solutions introduce jitter in the processing of the B events: in method (1) because the interval between polls is hard to make constant, in method (2) because the execution time of the sub-functions is hard to make constant. With preemption, it is easier to compute the preemption latency and jitter from the execution time of the critical sections, which are typically few and typically simple.
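To make method (2) concrete, here is a minimal sketch in C (the function names, the number of pieces, and the ~9 ms budget are invented for the illustration): the processing of event A is chopped into sub-functions that each fit the budget, the intermediate state is carried in a struct passed between them, and the main loop checks for B events between pieces.

    #include <stdbool.h>

    /* Hypothetical event sources and the short (~1 ms) B handler, stubbed
     * so the sketch is self-contained. */
    static bool a_event_pending(void) { return false; }
    static bool b_event_pending(void) { return false; }
    static void handle_b_event(void)  { /* ~1 ms of work */ }

    /* State that must now be carried between the pieces of job A. */
    typedef struct {
        int step;   /* which sub-function runs next; -1 = no A job active */
        int row;    /* loop progress, e.g. 10 x 100 rows instead of 1000  */
        /* ... plus whatever temporaries the algorithm needs ...          */
    } a_job_t;

    static a_job_t a_job = { .step = -1 };

    /* Each piece must execute in at most ~9 ms, so that a B event arriving
     * just after a piece starts still meets its 10 ms deadline. */
    static void a_piece_0(a_job_t *j) { /* first ~9 ms of A's work */ j->step = 1; }
    static void a_piece_1(a_job_t *j) { /* next ~9 ms of A's work  */ j->step = 2; }
    static void a_piece_2(a_job_t *j) { /* last piece of A's work  */ j->step = -1; }

    static void (*const a_pieces[])(a_job_t *) = { a_piece_0, a_piece_1, a_piece_2 };

    void main_loop(void)
    {
        for (;;) {
            if (b_event_pending())
                handle_b_event();             /* short, runs to completion */

            if (a_job.step < 0 && a_event_pending())
                a_job.step = 0;               /* start a new A job */

            if (a_job.step >= 0)
                a_pieces[a_job.step](&a_job); /* run exactly one piece, then
                                                 go back and check for B    */
        }
    }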
> All this trouble is in the service of determinacy and correctness.
If you are lucky enough to have a system in which the events, periods, deadlines, and processing algorithms are such that you can process any event to completion, before reacting to other events, and still meet all deadlines, you don't need preemption. In any other case, avoiding preemption is asking for trouble, IMO still.

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi . @ .
Reply by Andrew Smallshaw February 26, 2012
On 2012-02-25, Paul <paul@pcserviceselectronics.co.uk> wrote:
> In article <fb7hk7hmdjaimq7jp5fm5pstjcldfscbdd@4ax.com>, upsidedown@downunder.com says...
>>
>> When building embedded systems, you usually have full control what is running on the hardware, so I do not understand why bother e.g. with round robin. The situation might be different, if the end user can launch unspecified programs at their own will, in which case round robin might make sense.
>
> Well in the simplest of implementations nearly all scheduling schemes are round robin in the respect that they have a list of tasks (or functions or routines) to perform, the list is gone through and then scanned from the beginning again.
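As a minimal illustration of that "list of tasks" scheme (task names invented for the example), a sketch in C: the main loop calls each task function in turn, each task does a short bounded piece of work and returns, and the loop then starts over from the beginning.

    /* Each task does a short, bounded piece of work and returns; none of
     * them blocks, so the loop itself is the whole scheduler. */
    static void poll_sensors(void)   { /* read inputs                 */ }
    static void run_control(void)    { /* update outputs if new data  */ }
    static void update_display(void) { /* push a few chars to the LCD */ }

    typedef void (*task_fn)(void);

    static const task_fn task_list[] = { poll_sensors, run_control, update_display };
    #define NUM_TASKS (sizeof task_list / sizeof task_list[0])

    int main(void)
    {
        for (;;) {                          /* "scanned from the beginning again" */
            for (unsigned i = 0; i < NUM_TASKS; i++)
                task_list[i]();             /* every pass visits every task */
        }
    }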
That was my thought exactly: it's the simplest solution, so much so that you can adopt it without even thinking about it. If instead of thinking "round robin" you think "main loop" it instantly becomes a whole lot more familiar.

Not that there's anything wrong with it if it gets the job done: I've never seen the point of adding several layers of complexity just to satisfy someone's notion of what the "right" way is. Doing so here instantly moves you up from the most basic devices. For example I don't see how you would do pre-emptive scheduling on the smaller PICs: sure, you can arrange the clock interrupt easily enough but you can't diddle the function call stack once you are handling it.

--
Andrew Smallshaw
andrews@sdf.lonestar.org
Reply by Les Cargill February 26, 2012
Niklas Holsti wrote:
> On 12-02-25 21:58 , Les Cargill wrote: >> Tim Wescott wrote: >>> On Fri, 24 Feb 2012 11:26:04 +0000, FreeRTOS info wrote: >>> >>>> On 22/02/2012 03:26, Sink0 wrote: >>>>> I was thnking with myself today and i remembered an old software >>>>> architecture for embedded systems division: >>>>> >>>>> 1. Round-robin >>>>> 2. Round-robin with interrupts >>>>> 3. Function-queue-scheduling >>>>> 4. Real-time Operating Syste >>>>> > > [snip] > >>> >>> I've found that a simple half-RTOS (i.e., a non-preemptive 'function >>> caller') works well for the size of applications that I have been >>> writing >>> here. These are all one-man applications, so they only have a handful of >>> separate 'tasks' to perform, and there's only one guy to assign >>> priorities, and that guy understands that the slowest task can bog down >>> the fastest (because it's non-preemptive). >>> >>> I've also found that an RTOS makes a huge (beneficial) difference in >>> development and maintenance effort when you've got an application big >>> enough that you need more than one developer > > [snip] > >>> I've also stood by watching in horror (or had to adopt software and fix >>> it) as an application was written by software leads that completely >>> strangled the "real-timliness" of an RTOS, by putting in >>> seemingly-clever >>> mechanisms that allowed slow tasks to block high priority tasks. >>> > > [snip] > >>> But anything that's not preemptive >> >> Meh. The system is still a hunk of garbage if it depends on >> "preemptive". > > That is a very surprising opinion. If a SW designer cannot depend on > preemption happening as designed, the benefits of preemption for simple > design of real-time behaviour are lost, and the SW has to be designed in > a much more complex way. >
I don't think it's particularly "more complex" myself - it's just closer to being deterministic.
> IMO it is absolutely OK for the real-time correctness of a preemptive design to depend on preemption.
I don't believe that is the case. Maybe that's just me; dunno.
> Perhaps you meant that the logical correctness (eg. mutual exclusions) should not depend on preemption? I can agree with that, but I would still accept the use of ceiling priorities to implement mutual exclusion (that is, to depend on non-preemption of a task of higher priority by one of lower priority).
No, I mean that the design itself should behave as if the environment is not preemptive.
>> I've had systems where you could *configure* "preemptiveness"*. It's an eye-opener.
>
> Sure there are kernels that can be configured like that. But if an application is designed to use preemption, it is unfair to expect it to have the same real-time behaviour when preemption is disabled, and to call it "garbage" if it fails when preemption is disabled.
Perhaps "garbage" was too strong a word. How's "untrustworthy"?

Suit yourself; I believe that depending on preemption is a recipe for latent defects. But it might be good enough for the domain, and it might otherwise work out fine.

If I may... you seem to think that depending on preemption is somehow easier. That is an opinion I've seen before, but it doesn't seem to make much sense (to me). Even when I'm on a Linux system ( embedded or desktop ), I tend to write things to behave as if there was no preemption.

That means they hard block on an object like a semaphore/queue/spinlock/timer and quickly determine that conditions to execute based on that object are true.

For realtime especially, I think of things as being event driven. Events may be calculated from task loops and timer driven, but there's some sort of "regulator" ( think of an escapement on a pendulum clock ) and some fairly constant-time action regulated by that.

All this trouble is in the service of determinacy and correctness. And it seems to have paid off.

--
Les Cargill