
cooperative multitasking scheme

Started by Marco Bleekrode September 19, 2004
On Mon, 27 Sep 2004 06:37:28 GMT, Jonathan Kirwan
<jkirwan@easystreet.com> wrote:

>On Mon, 27 Sep 2004 09:04:19 +0300, Paul Keinanen <keinanen@sci.fi> wrote:
>I think you are looking at this more as a consumer, not a producer, of
>operating system software. I can accept your term for talking from that
>point of view, but in writing the core code (as I do), I use my meaning
>and not yours when I consider and lay out the work ahead.
Yes, I have been mainly an OS service consumer for the last 20 years, but before that I also used and maintained small pre-emptive kernels for 8-bitters, so I know what is reasonable to expect and what is not.
>An interrupt will NOT switch processes in a system that switches only
>cooperatively. It may *move* processes from one state or queue to
>another, but the running process does not change until it makes a call
>to the O/S, in some form or another (for example, to send a message,
>change a priority of another process, switch() away, etc.)
No disagreement about this, but I did not comment on cooperative systems.
>In a preemptive system, yes.
Yes. <a few chapters deleted, in which we apparently agree on the content but disagree on the naming of various things>
>>In a pre-emptive system it is stupid to use static variables for
>>holding floating point values etc.
>
>Of course it is. But that doesn't stop it from occurring. If you have
>had much experience in the 1980s (which it sounds as though you have
>NOT) in writing an O/S for the Intel x86 with the various compilers
>that were available
On 8-bitters such as the 8080/8085/Z80/680x this was a real issue if you only had the object code for the library. Disassembling and changing the addressing mode to be relative to some base register was not an option if the processor did not have a suitable addressing mode, or if all the registers capable of being used as base registers were already needed for something else. In those situations you really had to copy the library's local floating point registers to the stack. However, with the x86 family with segmentation enabled, you should be able to use separate data and stack segments for each task and thus keep individual FP registers for each task in memory.
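One way a kernel writer could cope when the library keeps its floating point state in static variables and the CPU offers no per-task segments is to make that state part of the task context and copy it at every switch. A minimal C sketch with purely illustrative names (fp_workarea, struct tcb and FP_WORKAREA_SIZE are assumptions, not from any real library):

#include <string.h>

#define FP_WORKAREA_SIZE 32   /* size of the FP library's static scratch area */

/* Stand-in for the statics that a typical software-FP library kept at a
   fixed address; in a real system this lives inside the library itself.  */
unsigned char fp_workarea[FP_WORKAREA_SIZE];

struct tcb {
    void *sp;                                 /* saved stack pointer       */
    unsigned char fp_save[FP_WORKAREA_SIZE];  /* per-task copy of the area */
};

/* Called by the kernel around every task switch: save the outgoing task's
   FP state and restore the incoming task's, so the shared statics behave
   as if each task owned a private set of "FP registers".                  */
void fp_context_switch(struct tcb *from, struct tcb *to)
{
    memcpy(from->fp_save, fp_workarea, FP_WORKAREA_SIZE);
    memcpy(fp_workarea, to->fp_save, FP_WORKAREA_SIZE);
}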
>and for the various incarnations of real PCs which may or may not even
>have an FP CPU on board, then you'd know about the stupidities that had
>to be contended with for the existing FP techniques used then.
This is an issue if you emulate each opcode with an interrupt (trap) service routine every time an FP opcode is encountered on a non-FP CPU. With subroutine calls, the library code can instead select either the real FP opcode when a coprocessor is available, or an emulation that works in the local stack or local data segment when no FP support is present. Many compilers could be forced to issue subroutine calls for floating point operations.

Paul
Spehro Pefhany wrote:
> <radsett@junk.aeolusdevelopment.cm> wrote:
>
... snip ...
>> Pre-emption requires a timer?  I understand time-slicing would
>> but surely you can do preemption w/o a timer.  Useful yes.
>> Necessary?
>
> I don't see how. However, it need not consume the resource
> entirely- other stuff could be chained off the same timer
> interrupt.
How little ingenuity shows up these days. Back when many systems had neither timers nor interrupts, the thing to do was to intercept a common system call, such as checking the status of an input keyboard, and count these calls to generate a timer tick.

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>  USE worldnet address!
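A minimal C sketch of that trick (kbd_status, bios_kbd_status and the calibration constant are illustrative names): every task calls the wrapper instead of the raw status routine, and the call count stands in for a hardware tick.

#define POLLS_PER_TICK 100u    /* calibration constant, purely illustrative */

static unsigned long poll_count;
static volatile unsigned long ticks;   /* crude system time, in pseudo-ticks */

/* The real keyboard-status routine in the monitor/BIOS (assumed to exist). */
extern int bios_kbd_status(void);

/* Tasks call this wrapper instead of the raw system call.  Every call is
   counted, and after POLLS_PER_TICK calls one "tick" of time is credited,
   which can drive timeouts or a scheduling pass.                          */
int kbd_status(void)
{
    if (++poll_count >= POLLS_PER_TICK) {
        poll_count = 0;
        ticks++;                  /* the synthesised timer tick */
    }
    return bios_kbd_status();
}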
On Mon, 27 Sep 2004 09:04:17 +0300, Paul Keinanen <keinanen@sci.fi>
wrote:

>It is generally a bad idea to have two or more tasks on the same
>priority. If you can not decide any precedence between these two
>tasks, there is usually something wrong with your design.
OK, here's the project I'm working on right now: It is a three-station test stand that tests transmission solenoid valves in a factory production environment. The three stations run the same test sequence, but they run independently. This is naturally implemented as three equal-priority tasks. Is this a bad design? How would you do it differently?

-Robert Scott
Ypsilanti, Michigan
(Reply through this forum, not by direct e-mail to me, as automatic reply address is fake.)
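For what it's worth, a rough C sketch of that structure (task_create and the station fields are made-up names, not any particular RTOS API), with one copy of the test-sequence code instantiated three times at the same priority:

#define NUM_STATIONS  3
#define TEST_PRIORITY 5        /* same priority for all three, by design */

struct station {
    int id;
    /* per-station I/O addresses, limits, results, ... */
};

static struct station stations[NUM_STATIONS] = { { 0 }, { 1 }, { 2 } };

/* Hypothetical kernel call: create a task running fn(arg) at priority prio. */
extern void task_create(void (*fn)(void *), void *arg, int prio);

/* One copy of the test sequence, parameterised by the station it serves. */
static void test_sequence(void *arg)
{
    struct station *st = arg;
    for (;;) {
        /* wait_for_part(st); run_test_steps(st); report_result(st); */
        (void)st;
    }
}

void start_stations(void)
{
    for (int i = 0; i < NUM_STATIONS; i++)
        task_create(test_sequence, &stations[i], TEST_PRIORITY);
}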
On Mon, 27 Sep 2004 09:41:30 GMT, the renowned CBFalconer
<cbfalconer@yahoo.com> wrote:

>Spehro Pefhany wrote:
>> <radsett@junk.aeolusdevelopment.cm> wrote:
>>
>... snip ...
>>
>>> Pre-emption requires a timer?  I understand time-slicing would
>>> but surely you can do preemption w/o a timer.  Useful yes.
>>> Necessary?
>>
>> I don't see how. However, it need not consume the resource
>> entirely- other stuff could be chained off the same timer
>> interrupt.
>
>How little ingenuity show up these days. Back when many systems
>had neither timers nor interrupts, the thing to do was intercept a
>common system call, such as checking the status of an input
>keyboard, and count these to generate a timer tick.
That only shows up because it's generated by a hardware timer interrupt (probably one out of many interrupts). Don't let all the layers of software obscure the true inner workings.

Best regards,
Spehro Pefhany

--
"it's the network..."                          "The Journey is the reward"
speff@interlog.com             Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog  Info for designers:  http://www.speff.com
> Why should the OS need to get regular momentary control of the CPU in
> a preempt system ? The only reason for a reschedule of the tasks is
> when a higher priority task has changed state from blocked (waiting
> for something) to runnable. There are mainly two situations this can
> happen, an other task activates a signal or sends a message that the
> blocked task is waiting for or an interrupt service routine satisfies
> a wait condition.
The main purpose of the timer interrupt is the representation of time and the release of periodic tasks (which are very common, e.g. control loops). A periodic task gives up execution after its work is done and blocks until the next period.

One common implementation is a clock tick. The interrupt occurs at a regular interval (e.g. 10 ms) and a decision has to be taken whether a task has to be released. This approach is simple to implement, but there are two major drawbacks: the resolution of timed events is bound by the resolution of the clock tick, and clock ticks without a task switch are a waste of execution time.

A better approach is to generate timer interrupts at the release times of the tasks. The scheduler is then responsible for reprogramming the timer after each occurrence of a timer interrupt. The list of sleeping threads has to be searched to find the nearest future release time of a thread with higher priority than the one being released now; this time is used for the next timer interrupt.

Martin
----------------------------------------------
JOP - a Java Processor core for FPGAs:
http://www.jopdesign.com/
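A minimal C sketch of that single-shot approach (simplified so that it just programs the nearest future release of any sleeping thread; struct thread, now() and hw_timer_set() are illustrative names, not from JOP or any particular kernel):

#include <stdint.h>

struct thread {
    struct thread *next;      /* linked into the sleep list            */
    uint32_t release_time;    /* absolute time of the next release     */
    int priority;
    int ready;
};

static struct thread *sleep_list;        /* threads waiting for their period */

extern uint32_t now(void);               /* current time (hypothetical)       */
extern void hw_timer_set(uint32_t t);    /* one-shot compare match (hypothetical) */

/* Timer interrupt handler: release every thread that is due, then program
   the timer for the nearest release time that is still in the future.      */
void timer_irq(void)
{
    uint32_t t = now();
    uint32_t next = UINT32_MAX;

    for (struct thread *th = sleep_list; th != NULL; th = th->next) {
        if (!th->ready && th->release_time <= t)
            th->ready = 1;                       /* becomes runnable           */
        else if (!th->ready && th->release_time < next)
            next = th->release_time;             /* candidate for next wake-up */
    }

    if (next != UINT32_MAX)
        hw_timer_set(next);                      /* no periodic tick needed    */

    /* schedule();  -- pick the highest-priority ready thread, not shown */
}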
Martin Schoeberl wrote:

>>Why should the OS need to get regular momentary control of the CPU in
>>a preempt system ? The only reason for a reschedule of the tasks is
>>when a higher priority task has changed state from blocked (waiting
>>for something) to runnable. There are mainly two situations this can
>>happen, an other task activates a signal or sends a message that the
>>blocked task is waiting for or an interrupt service routine satisfies
>>a wait condition.
>
> The main purpose of the timer interrupt is representation of time and
> release of periodic tasks (that are very common e.g. control loops). The
> periodic tasks gives up execution after the work is done and blocks until
> the next period.
>
> One common implementation is a clock tick. The interrupt occurs at a
> regular interval (e.g. 10 ms) and a decision has to be taken whether a
> task has to be released. This approach is simple to implement, but there
> are two major drawbacks: The resolution of timed events is bound by the
> resolution of the clock tick and clock ticks without a task switch are a
> waste of execution time.
>
> A better approach is to generate timer interrupts at the release times of
> the tasks. The scheduler is now responsible to reprogram the timer after
> each occurrence of a timer interrupt. The list of sleeping threads has to
> be searched to find the nearest release time in the future of a higher
> priority thread than the one that will be released now. This time is used
> for the next timer interrupt.
How does any scheme that relies on priority queues avoid starvation of low-priority tasks? This seems to be an inherent disadvantage versus the round-robin varying-time schemes.
On Wed, 29 Sep 2004 19:01:04 -0400, Elko Tchernev <etchernevnono@acm.org> wrote:

>How does any scheme that relies on priority queues avoid starvation
>of low-priority tasks?
In general? This cannot be answered. But in specific circumstances, it's rather easy to imagine useful cases. Usually, the higher priority processes will wait for messages or semaphores and are related to low-level hardware interrupt routines which may put something in a buffer to be serviced by the high priority routine (which the low-level code awakens when it captures something to be processed.) In this case, the reason they are high priority is so that they can pay attention to incoming data in a timely way and not so they can just hog the CPU. Until data arrives, they aren't in the ready queue.
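A small C sketch of that pattern (the semaphore and UART calls are a made-up kernel API, purely illustrative): the interrupt handler only captures the byte and signals, and the high-priority task is blocked on the semaphore, off the ready queue, whenever there is nothing to process.

#define BUF_SIZE 64

static volatile unsigned char rx_buf[BUF_SIZE];
static volatile unsigned head, tail;

extern void sem_post_from_isr(int sem);      /* hypothetical kernel calls */
extern void sem_wait(int sem);
extern unsigned char uart_read_data(void);   /* hypothetical UART access  */

#define RX_SEM 1

void uart_rx_isr(void)                       /* low-level interrupt code  */
{
    rx_buf[head] = uart_read_data();
    head = (head + 1) % BUF_SIZE;
    sem_post_from_isr(RX_SEM);               /* wake the waiting task     */
}

void rx_task(void *arg)                      /* high priority, but blocked
                                                until data actually arrives */
{
    (void)arg;
    for (;;) {
        sem_wait(RX_SEM);                    /* not in the ready queue here */
        while (tail != head) {
            unsigned char c = rx_buf[tail];
            tail = (tail + 1) % BUF_SIZE;
            /* process_byte(c); */
            (void)c;
        }
    }
}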
>This seems to be an inherent disadvantage versus
>the round-robin varying-time schemes.
Round robin is usually just time-based preemption when a quantum expires. Round robin can operate, even in cases where different priorities are permitted for processes, when more than one of the highest priority, ready-to-run processes have the same priority. Admittedly, round robin does depend on at least two processes having the same priority (or no priorities, at all) but round robin and priority support aren't inherently mutually exclusive, unless all processes always have different priorities assigned to them. Jon
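A sketch of how the two combine in a scheduler's pick-next routine, assuming one circular ready list per priority level (structure and names are illustrative):

#define NUM_PRIORITIES 8

struct task {
    struct task *next;      /* circular list of ready tasks at this level */
    /* registers, stack pointer, ... */
};

/* One circular ready list per priority level; index 0 is the highest.
   An empty level holds NULL.                                            */
static struct task *ready[NUM_PRIORITIES];

/* Called when the running task blocks, yields, or its quantum expires:
   take the highest occupied priority level and rotate within it, which
   gives round robin among equal-priority tasks and strict priority
   between levels.                                                        */
struct task *pick_next(void)
{
    for (int p = 0; p < NUM_PRIORITIES; p++) {
        if (ready[p] != NULL) {
            ready[p] = ready[p]->next;   /* advance around the circle */
            return ready[p];
        }
    }
    return NULL;                         /* nothing ready: run the idle task */
}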
On Wed, 29 Sep 2004 19:01:04 -0400, Elko Tchernev
<etchernevnono@acm.org> wrote:

> How does any scheme that relies on priority queues avoid starvation
> of low-priority tasks?
Why should it even attempt to do that ?

It is up to the designer of the system to ensure that there is enough computing power available to handle all incoming events at the maximum possible rate. For instance, the maximum rate of UART interrupts depends on the serial line bit rate, the number of Ethernet end-of-frame interrupts depends on the speed of the link and the minimum message size, and the number of timer interrupts depends on the crystal frequency and what division ratios are programmed into the counters.

If sufficient computing power cannot always be provided, the system designer has to decide _in_advance_ what functionality _is_ sacrificed during congestion. Typical examples of functionality to be sacrificed are various "nice to have" status displays etc., which do not affect the main purpose of the system.

Thus, such "nice to have" features should be put at the lowest priority, so that if there is a high demand for processing power, these functions are automatically sacrificed by being starved of resources, and no extra application logic is required for that.

Paul
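As a worked example with purely illustrative numbers: a UART running at 9600 bit/s with 10 bits per character can deliver at most 960 characters, and therefore at most 960 receive interrupts, per second. If each receive interrupt costs 50 microseconds, the interrupt load is bounded by 960 x 50 us, about 4.8 % of the CPU, no matter what else the system is doing, and the designer can budget the remaining time among the tasks in priority order.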
Paul Keinanen wrote:

> On Wed, 29 Sep 2004 19:01:04 -0400, Elko Tchernev
> <etchernevnono@acm.org> wrote:
>
>> How does any scheme that relies on priority queues avoid starvation
>> of low-priority tasks?
>
> Why should it even attempt to do that ?
>
> It is up to the designer of the system to ensure that there are enough
> computing power available to handle all incoming events at the maximum
> rate possible. For instance the maximum rate for UART interrupts
> depend on the serial line bit rate, the number of Ethernet end of
> frame interrupts depends on the speed of the link and the minimum
> message size and the number of timer interrupts depends on the crystal
> frequency and what division ratios are programmed into the counters.
>
> If sufficient computing power can not always be provided, the system
> designer has to decide _in_advance_ what functionality _is_ sacrificed
> during congestion. Typical examples for functionality to be sacrificed
> are various "nice to have" status displays etc., which does not
> affect the main purpose of the system.
>
> Thus, such "nice to have" features should be put on the lowest
> priority, so if there is a high demand for processing power, these
> functionalities are automatically sacrificed by starving of resources
> and no extra application logic is required for that.
The logical conclusion from what you say is that there are essentially two priority levels - the highest, which is the "must have" level, and all others - which become "nice to have" and _can_ be starved by the highest.

All you say about computing power is true; however, sometimes you can't anticipate the actual computing requirements in the field, or maybe you _lack_ the computing power to handle the theoretical maximum throughput and are only able to handle the average (with buffering, let's say). In such cases, it seems to me that the priority queue approach will not degrade gracefully, but can shut down the low-priority tasks for long periods of time. The round-robin with priority (where priority determines how long a task can run before being pre-empted or switching on its own) offers more graceful degradation in high-load circumstances. It has the flexibility of distributing the CPU time between low-priority, continuously-running background tasks, and is much simpler to implement.
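A small C sketch of the scheme Elko describes (names illustrative): every runnable task sits in one circular list and always gets a turn, with priority only stretching the length of the turn, so low-priority work is slowed under load rather than stopped.

struct task {
    struct task *next;       /* circular list of all runnable tasks      */
    unsigned priority;       /* here: larger value means longer slice    */
};

static struct task *current;        /* points into the circular list;
                                       must be set before the first tick */
static unsigned slice_left;         /* ticks remaining for current task  */

/* Called from the clock tick interrupt.  Every task always gets a turn,
   so nothing is starved; priority only decides how long the turn lasts. */
void round_robin_tick(void)
{
    if (current == NULL)
        return;                            /* nothing runnable yet */

    if (slice_left > 1) {
        slice_left--;                      /* current task keeps the CPU */
        return;
    }

    current = current->next;               /* next task in the circle    */
    slice_left = current->priority;        /* bigger priority, bigger slice */
    /* switch_to(current);  -- context switch, not shown */
}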
On Thu, 30 Sep 2004 17:21:56 -0400, Elko Tchernev
<etchernevnono@acm.org> wrote:

>The logical conclusion from what you say is, that there's
>essentially two priority levels - the highest, which is the "must have"
>level, and all others - which become "nice to have", and _can_ be
>starved by the highest.
I am talking about realtime systems, which are the place where you typically use strictly priority based scheduling. Quite a few embedded systems are realtime, although not all realtime systems are embedded.

In a realtime system the program is faulty if the results do not arrive in time when they are needed, no matter how correct the calculations themselves are. When there are not enough resources, do you use some "fair" scheduling to avoid starving the low priority task, with the consequence that the high priority task misses its deadline ? Such a system would be useless for the intended purpose.

In a typical realtime system the lowest priority null (idle) task could consume more than 50 % of the CPU on average over a long period. At periods of high demand the null task will be starved. You could replace the null task with some other low priority "nice to have" work that can be starved in the same way.
>All you say about computing power is true,
>however, sometimes you can't anticipate the actual computing
>requirements in the field, or maybe you _lack_ computing power to handle
>the theoretically maximum throughput, and are only able to handle the
>average (with buffering, let's say).
Buffering in a real time system is acceptable when the length of a load burst as well as the frequency of occurrence of such bursts are well defined. For instance, it is OK in a half duplex serial protocol to receive bytes at a higher rate than the higher level can handle, since the maximum length of the received frame is known, so a sufficiently large buffer can be allocated in the receiving routine to avoid overwriting, and since in a typical half duplex protocol the next request will not arrive before a response has been sent to the previous request. In some situations it may even be acceptable to delay transmitting the response as a flow control method, but the routines that handle the reception of individual bytes must still be fast enough to receive bytes at the rate the line speed allows.
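A small C sketch of that argument (the UART access names are illustrative): the buffer is sized for the largest frame the protocol allows, the interrupt routine only has to keep up with the line rate, and because the response, sent only after the frame has been handled, is what permits the next request, the buffer cannot be overrun even if the protocol task runs at low priority.

#define MAX_FRAME 256                 /* fixed by the protocol, by assumption */

static unsigned char frame[MAX_FRAME];
static volatile unsigned frame_len;
static volatile int frame_ready;

extern unsigned char uart_read_data(void);   /* hypothetical UART access   */
extern int uart_frame_end(void);             /* end-of-frame condition seen */

/* Must only keep up with the line rate; never blocks, never overruns. */
void uart_rx_isr(void)
{
    if (!frame_ready && frame_len < MAX_FRAME)
        frame[frame_len++] = uart_read_data();
    if (uart_frame_end())
        frame_ready = 1;     /* hand the complete frame to the protocol task */
}

/* The (possibly low-priority) protocol task polls or is woken for this. */
void protocol_poll(void)
{
    if (frame_ready) {
        /* handle_request(frame, frame_len); send_response(); */
        frame_len = 0;
        frame_ready = 0;     /* only after the response can the next request arrive */
    }
}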
>In such cases, it seems to me that
>the priority queue approach will not degrade gracefully, but can shut
>down the low-priority tasks for long time periods. The round-robin with
>priority (where priority determines how long a task can run before being
>pre-empted, or switches on its own) offers more graceful degradation in
>high-load circumstances. It has the flexibility of distributing the CPU
>time between low-priority, continuously-running background tasks, and is
>much simpler to implement.
Such systems are usable only in situations in which you have _no_ firm deadlines.

Paul