
Common name for a "Task Loop"

Started by Tim Wescott June 24, 2016
On Fri, 8 Jul 2016 23:54:15 +0300, Dimiter_Popoff <dp@tgi-sci.com>
wrote:

>On 08.7.2016 г. 23:35, Niklas Holsti wrote:
>> On 16-07-08 19:06 , Tim Wescott wrote:
>>> I have never had a problem doing this separation myself, although I've
>>> inherited code from other people that fails at this, and rather badly.
>>> The most egregious example of this was code that put the most important
>>> two jobs into one superloop inside of one task -- and did so in such a
>>> way that bollixed up the whole system if incoming commands exceeded a
>>> rather moderate rate.
>>>
>>> I'm not entirely sure, but I think there's a possibility that if your
>>> code _does_ have such an interaction between tasks, then it means that
>>> there's something fundamentally wrong with your software design, or
>>> possibly your system design as a whole.
>>
>> I agree that some troublesome task-to-task interactions are design flaws
>> and can be eliminated by design changes, sometimes by splitting some
>> task into one hard-real-time task and another less urgent task. But I
>> have not always succeeded at that.
>>
>> One example of interactions I find difficult is a shared I/O channel or
>> bus that must be used by various tasks, for various purposes, with most
>> transmissions being sporadic and such that the sending task must wait
>> for and check a response transmission. I believe that there are analysis
>> methods which can find bounds on the response times for these tasks,
>> including queueing delays and communication latencies, but I haven't
>> studied or tried them yet. I may have to do so in my current project,
>> however, because the system has three inter-communicating computers,
>> with each computer connected by one SpaceWire link to a central router.
>
>Well sending (tcp over ethernet, let's be just practical) can be
>controlled by task priorities quite well. Reception is another beast,
>the quick fix which usually works is to throw more buffer RAM at it;
>then if it works you may not have to seek the right fix :-).
I consider TCP/IP directly harmful for hard real-time systems :-) due to the congestion handling. In hard real-time systems, a message with correct contents but arriving after deadline is useless and possibly even harmful.

Anyway, I would use a high priority Ethernet task that handles hardware access and for received packets forward them as raw MAC or UDP frames to other appropriate hard real-time tasks (according to port number for UDP). If messages are lost during the forwarding, this is just the same as if lost already on the wire.

TCP packets would go to a lower priority TCP task that would handle the TCP/IP connections and then transfer received data to appropriate soft real-time task.

Alternatively, run TCP part of the TCP/IP stack in each soft real-time task using TCP/IP connections. The Ethernet task would forward packets to the correct SRT tasks using the port number. Those SRT tasks could run at different low priorities and hence the TCP/IP connections would have similar priority relationship. Even if some of the SRT tasks could not immediately handle new data, adjusting the TCP/IP window size would implement the flow control.
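To make the dispatch scheme concrete, here is a minimal sketch assuming a hypothetical RTOS API; queue_post(), hrt_queue_for_udp_port() and tcp_task_queue are placeholder names, not any real stack's functions, and the high-priority Ethernet task is assumed to call eth_dispatch() for each received frame:

    #include <stdint.h>
    #include <stddef.h>

    struct queue;                                      /* opaque RTOS message queue        */
    extern int queue_post(struct queue *q,
                          const void *frame, size_t len);    /* non-blocking; may drop     */
    extern struct queue *hrt_queue_for_udp_port(uint16_t port); /* NULL if no HRT consumer */
    extern struct queue *tcp_task_queue;               /* lower-priority TCP/IP task       */

    /* Called from the high-priority Ethernet task for each received frame. */
    void eth_dispatch(const uint8_t *frame, size_t len)
    {
        if (len < 14 + 20)
            return;                                    /* too short for Ethernet + IPv4    */
        if (frame[12] != 0x08 || frame[13] != 0x00)
            return;                                    /* not an IPv4 frame                */

        const uint8_t *ip  = frame + 14;
        size_t         ihl = (size_t)(ip[0] & 0x0F) * 4;     /* IPv4 header length         */
        uint8_t      proto = ip[9];
        const uint8_t *l4  = ip + ihl;

        if (proto == 17 && len >= 14 + ihl + 8) {      /* UDP: route by destination port   */
            uint16_t dport = (uint16_t)((l4[2] << 8) | l4[3]);
            struct queue *q = hrt_queue_for_udp_port(dport);
            if (q != NULL)
                (void)queue_post(q, frame, len);       /* if the queue is full the frame   */
                                                       /* is dropped, same as on the wire  */
        } else if (proto == 6) {                       /* TCP: hand off to the SRT side    */
            (void)queue_post(tcp_task_queue, frame, len);
        }
    }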
On 09.7.2016 г. 09:34, upsidedown@downunder.com wrote:
> On Fri, 8 Jul 2016 23:54:15 +0300, Dimiter_Popoff <dp@tgi-sci.com>
> wrote:
>
>> On 08.7.2016 г. 23:35, Niklas Holsti wrote:
>>> On 16-07-08 19:06 , Tim Wescott wrote:
>
[snip]
>
>> Well sending (tcp over ethernet, let's be just practical) can be
>> controlled by task priorities quite well. Reception is another beast,
>> the quick fix which usually works is to throw more buffer RAM at it;
>> then if it works you may not have to seek the right fix :-).
>
> I consider TCP/IP directly harmful for hard real-time systems :-) due
> to the congestion handling. In hard real-time systems, a message with
> correct contents but arriving after deadline is useless and possibly
> even harmful.
Well, real time and tcp may be intersecting at some point, but even if they do your "harmful" logic still applies. The thing is, in tcp the timing does not depend only on your system but on its peer(s) and proxies.

However, this does not preclude real time systems from using tcp - just not for their real time part. It just has to be designed such that tcp's timing unpredictability does not interfere with the real time jobs the system has. For example, a netMCA uses tcp for viewing over VNC (RFB), system backup over ftp, http access to data etc.; however, the real time part - several megasamples/s of ADC data being processed without missing a sample or an event in the stream - just does not know about tcp.

All this under DPS of course; I have yet to witness someone manage all that under another OS.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
On 08/07/2016 23:28, upsidedown@downunder.com wrote:

<snip>

> I would never even dream of letting multiple tasks directly accessing
> a serial line.
Agreed.
> Just put a high priority task on every serial line and let it do the
> arbitration between the application tasks. You can put much more
> intelligence than into the ISR.
I would have a queue for each "multiple" task, and a serial task, where each queue could be prioritised accordingly and data placed in a single serial task queue for the interrupt.

The last thing I would do is place more intelligence in the ISR. It needs to be as quick as possible, taking data off the serial task write queue and putting data on the serial task read queue, which the serial task can then process.

--
Mike Perkins
Video Solutions Ltd
www.videosolutions.ltd.uk
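A rough sketch of that structure, assuming hypothetical queue, ring-buffer and UART helper functions (none of these names come from a real RTOS or HAL): per-client queues drained in priority order by one serial task, with the ISR doing nothing but moving bytes.

    #include <stdint.h>
    #include <stdbool.h>

    #define N_CLIENTS 4   /* client 0 has the highest priority */

    extern bool client_queue_pop(int client, uint8_t *byte); /* per-client write queues */
    extern bool tx_ring_put(uint8_t byte);                   /* ISR-safe ring buffers   */
    extern bool tx_ring_get(uint8_t *byte);
    extern bool rx_ring_put(uint8_t byte);
    extern bool uart_tx_ready(void);
    extern bool uart_rx_ready(void);
    extern void uart_write_data(uint8_t byte);
    extern uint8_t uart_read_data(void);

    /* Serial task (high priority): arbitrate between the client queues and keep the
       transmit ring topped up; all the intelligence lives here, not in the ISR.
       (A real version would peek before popping so no byte is lost when the ring
       fills up; omitted here for brevity.) */
    void serial_task_poll(void)
    {
        uint8_t byte;
        for (int c = 0; c < N_CLIENTS; c++) {          /* drain in priority order       */
            while (client_queue_pop(c, &byte)) {
                if (!tx_ring_put(byte))
                    return;                            /* TX ring full; try again later */
            }
        }
    }

    /* ISR: as quick as possible, just move bytes between hardware and the rings. */
    void uart_isr(void)
    {
        uint8_t byte;
        if (uart_tx_ready() && tx_ring_get(&byte))
            uart_write_data(byte);
        if (uart_rx_ready())
            (void)rx_ring_put(uart_read_data());
    }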
On 16-07-09 00:56 , Les Cargill wrote:
> Niklas Holsti wrote:
>> [snips]
>> One example of interactions I find difficult is a shared I/O channel or
>> bus that must be used by various tasks, for various purposes, with most
>> transmissions being sporadic and such that the sending task must wait
>> for and check a response transmission.
>
> So the sender sends asynchronously then blocks on a receive
> ( presumably with a timeout ), with other tasks/ISRs handling
> the details. You may even have separate send and receive loops,
> with state indicating the timeout for each receive.
I have no problem *implementing* such things, my problem is *analysing* the timing to compute worst-case task response times under various load scenarios. This computation must also consider the possible latencies of response-generation at the remote end of the channel.
> If you do this right, very few tasks are marked "ready" at any given
> instant and you'll get more deterministic operation.
Whenever a task is "not ready", it is blocked waiting for something. These blockings, and in particular blockings caused by dynamic task interactions that may or may not occur at run time, create problems for the timing *analysis*.
>> I believe that there are analysis
>> methods which can find bounds on the response times for these tasks,
>> including queueing delays and communication latencies, but I haven't
>> studied or tried them yet.
>
> It might be worth adding references to a high-res free-running timer
> and accumulating data. If you then wish to construct a latency model,
> you can then have data to see if your model makes any sense.
I don't want to rely on such measurements, which always depend on the scenarios and test cases used. I want to have a method for computing response times from the design, and proving by analysis that deadlines are met in all possible scenarios.

I know how to do that for systems where tasks interact in simple ways, but I don't yet know how to do it for cases like the above shared channel. There are tools that should be able to do it, but I have yet to try them out.

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi . @ .
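For the simple cases, the analysis being referred to is usually the classical fixed-priority response-time recurrence (Joseph and Pandya; Audsley et al.), valid when tasks are independent apart from bounded blocking on shared resources:

    R_i^{(k+1)} = C_i + B_i + \sum_{j \in hp(i)} \left\lceil \frac{R_i^{(k)}}{T_j} \right\rceil C_j , \qquad R_i^{(0)} = C_i

where C_j is worst-case execution time, T_j the period or minimum inter-arrival time, B_i the longest blocking by lower-priority tasks, and hp(i) the set of higher-priority tasks; the iteration is run to a fixed point and each task is schedulable if the converged R_i does not exceed its deadline D_i. The shared-channel queueing and remote response latencies discussed above are exactly what this basic form does not model, which is why the more elaborate methods or tools are needed.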
On 16-07-09 00:48 , Les Cargill wrote:
> Niklas Holsti wrote:
>> On 16-07-08 20:50 , Les Cargill wrote:
>>> Tim Wescott wrote:
[snips]
>>>> I'm not entirely sure, but I think there's a possibility that if your
>>>> code _does_ have such an interaction between tasks, then it means that
>>>> there's something fundamentally wrong with your software design, or
>>>> possibly your system design as a whole.
>>>
>>> That's kind of what I mean by "dependence on priority is Bad."
>>
>> That is not how I have understood your dislike of priorities. My current
>> project has about 20 tasks. One of these tasks is cyclic with a 2 ms
>> period; another is the background task which scrubs RAM with a deadline
>> measured in several minutes. These two tasks do not interact at all,
>> apart from sharing the same processor. The application depends on the 2
>> ms task having a higher priority than the background task, and I find it
>> hard to imagine how that dependency can be called "bad".
>>
>
> That's a case where it doesn't matter. I've seen cases where
> mis-assigning the task priorities caused different behavior.
>
> That's bad.
Mis-assigning priorities is IMO the same kind of programming error as mis-defining the termination condition of a loop. But it would be nonsensical to say that it is bad if an application depends on the termination condition of a loop.

I believe I do understand your concern, that changing the priorities should not change the "result" of the program, for example because priorities have been used to control the sharing of data between tasks, such that changing the priorities causes (different) data-access races and thus changes the data-flow in the computation.

Mutual exclusion and race-prevention *can* be implemented correctly by priorities (for example using the Immediate Priority Ceiling protocol, as in my application), but of course it can also be done incorrectly.

However, in a real-time application the behaviour always depends on the timing of the SW actions, even if there are no data races in the SW. For example, if the SW is supposed to sample an ADC at 100 Hz, but the new priorities mean that it is sampled only at 50 Hz, or with a large jitter, inevitably the results will be different, and perhaps useless. (You can say that in this case there is a data-race between the SW and the HW, and I would agree, but that is typical for real-time systems.)
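For illustration of the immediate-ceiling idea only (the application above uses the Ada runtime's ceiling locking, not this code), the same protocol exists in POSIX as PTHREAD_PRIO_PROTECT; a minimal sketch with hypothetical function and variable names:

    #include <pthread.h>

    static pthread_mutex_t shared_lock;
    static int shared_sample;                 /* data shared by a fast and a slow task */

    /* ceiling must be >= the priority of every task that takes this lock */
    int init_shared_lock(int ceiling)
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        /* Immediate priority ceiling: the locking task runs at 'ceiling' for the
           duration of the critical section, so no other user of the lock can
           preempt it inside the section and then block on the same lock. */
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
        pthread_mutexattr_setprioceiling(&attr, ceiling);
        return pthread_mutex_init(&shared_lock, &attr);
    }

    void write_sample_from_fast_task(int v)   /* hypothetical high-rate writer */
    {
        pthread_mutex_lock(&shared_lock);
        shared_sample = v;
        pthread_mutex_unlock(&shared_lock);
    }

    int read_sample_from_background(void)     /* hypothetical low-priority reader */
    {
        pthread_mutex_lock(&shared_lock);
        int v = shared_sample;
        pthread_mutex_unlock(&shared_lock);
        return v;
    }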
> Your situation is simple enough that it's okay. With exactly two
> tasks, you will have highly deterministic operation - assuming the
> memory scrubber doesn't cause any untoward starvation.
I have something like 20 tasks, at multiple priorities and with some complex interactions. I use priorities to gain determinism, not to lose it.
> With, say a 100msec time tick that runs the task queue,
The kernel in this application (the AdaCore RTS for the LEON2 processor) is tickless. The clock and timer resolution is sub-microsecond. Task switch times are about 50 us on this 86 MHz processor.
> But I would also say that having the memory scrubber coded such that
> it would behave the same whether the kernel is preemptive or not
> would be a better design and implementation.
I totally disagree. A non-preemptive system would force me to insert a large number of "yield" calls in all tasks, carefully placed so that the execution time between yields never exceeds the maximum latency I allow for the 2 ms task. Yuck, yuck, triple yuck.
> Hopefully, you have a good handle on the jitter for the 2ms task.
I believe I do. With the preemptive and tickless kernel, it is limited by the maximum duration of actions at higher priorities. As this task is the highest-priority task, only interrupt handling and other interrupt-level operations (start/stop I/O) can cause jitter.
> Ironically, one way I've estimated CPU utilization is to write a "do
> nothing" loop at very low priority that only increments a counter, then
> something periodically dumps a hi-res clock and the counter.
The ever-ready background memory-scrubbing task plays that role in my application; the actual scrubbing rate is an estimate of the CPU load posed by the other, higher-priority tasks. This is a very common design in this domain (satellite on-board SW).

If I want to have more detailed measurements, I can use the kernel's task-specific and interrupt-specific CPU-time accounting.

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi . @ .
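The idle-counter load estimate described above, as a stand-alone sketch; hires_clock_ns() and counts_per_second are hypothetical names, and counts_per_second is assumed to have been calibrated once with the system otherwise idle:

    #include <stdint.h>

    extern uint64_t hires_clock_ns(void);      /* free-running high-resolution timer */

    static volatile uint32_t idle_counter;     /* incremented only by the idle task  */
    static uint32_t counts_per_second;         /* calibrated once on an idle system  */

    /* Lowest-priority task: runs only when no other task wants the CPU. */
    void idle_counter_task(void)
    {
        for (;;)
            idle_counter++;
    }

    /* Called periodically (say once per second) from a monitoring task. */
    double cpu_load_estimate(void)
    {
        static uint32_t last_count;
        static uint64_t last_time_ns;

        uint32_t count = idle_counter;
        uint64_t now   = hires_clock_ns();

        double elapsed_s = (double)(now - last_time_ns) / 1e9;
        double idle_frac = (double)(uint32_t)(count - last_count) /
                           ((double)counts_per_second * elapsed_s);

        last_count   = count;
        last_time_ns = now;

        return 1.0 - idle_frac;                /* fraction of CPU used by real work  */
    }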
On 16-07-09 01:28 , upsidedown@downunder.com wrote:
> On Fri, 8 Jul 2016 23:35:22 +0300, Niklas Holsti
> <niklas.holsti@tidorum.invalid> wrote:
[snip]
>> One example of interactions I find difficult is a shared I/O channel or
>> bus that must be used by various tasks, for various purposes, with most
>> transmissions being sporadic and such that the sending task must wait
>> for and check a response transmission. I believe that there are analysis
>> methods which can find bounds on the response times for these tasks,
>> including queueing delays and communication latencies, but I haven't
>> studied or tried them yet. I may have to do so in my current project,
>> however, because the system has three inter-communicating computers,
>> with each computer connected by one SpaceWire link to a central router.
>
> I would never even dream of letting multiple tasks directly accessing
> a serial line.
>
> Just put a high priority task on every serial line and let it do the
> arbitration between the application tasks. You can put much more
> intelligence than into the ISR.
I was not speaking of an *implementation* difficulty, but of the difficulty of *analysing* the resulting client-task timing and response times. Of course the implementation uses one or more transmit queues, a dedicated channel-server task, interrupt handling, and possibly a separate channel-receiver task (for full-duplex channels).

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi . @ .
On 16-07-09 09:25 , Dimiter_Popoff wrote:
> On 09.7.2016 г. 01:28, upsidedown@downunder.com wrote:
>> On Fri, 8 Jul 2016 23:35:22 +0300, Niklas Holsti
>> <niklas.holsti@tidorum.invalid> wrote:
>>
>>> On 16-07-08 19:06 , Tim Wescott wrote:
>>>> On Thu, 07 Jul 2016 23:09:08 +0300, Niklas Holsti wrote:
>>>>> Don Y wrote:
>>>>> [snips]
>>>>>> Or, is it just "some suggestive feeling"
>>>>>> that led to the small integers chosen?
>>>>>
>>>>> No. From the requirements, I derive a set of tasks and deadlines, and
>>>>> then assign priorities in deadline-monotonic order. Then I analyse or
>>>>> measure WCETs and crank the response-time algorithm to check
>>>>> schedulability. The main problem is that the task interactions are often
>>>>> more complex than assumed by the simpler schedulability analysis
>>>>> methods. Fortunately, in my projects it is usually possible to separate
>>>>> hard-real-time tasks from soft-real-time tasks, and the interactions of
>>>>> the hard-real-time tasks tend to be rather simple.
>>>>
[snip]
>>
>> I would never even dream of letting multiple tasks directly accessing
>> a serial line.
>
> Of course, it is insane to have multiple tasks access the same port
> at the same time, I suppose he does not mean that.
Indeed not, see my reply to upsidedown.
> The way I do this - say for ethernet output or PPP output - is have
> an I/O task which processes an output queue of packets. The tasks are
> queuing their packets (pointers to scattered data really) on their
> own and will succeed better based on priority; the I/O task simply
> pushes the queue (FIFO) out.
My approach is similar. Sometimes there are several output queues with different priorities.
> The input part is more complex - the incoming packets are buffered
> in a number of buffers, then each packet is passed to its processing
> object (based on protocol etc), then say IP packets go into another
> queue (just pointers of course, no copying) which is processed by
> another task which eventually discards the packets and frees the
> buffer when all packets are discarded.
I do similar things here, too, when the channel is full-duplex. For half-duplex channels, the I/O task just waits for the response, then forwards it to the original client task, if that task is waiting for it.

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi . @ .
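The output side of that scheme, as a minimal sketch: client tasks enqueue packet descriptors (pointers, no copying) and a single dedicated task drains the FIFO. All the queue and driver names here are hypothetical placeholders for whatever RTOS and MAC driver are in use.

    #include <stddef.h>
    #include <stdint.h>

    struct packet {
        const uint8_t *data;                   /* real code would use an iovec-style */
        size_t         len;                    /* list for scattered data            */
        void         (*done)(struct packet *); /* lets the owning task free buffers  */
    };

    extern struct packet *pkt_fifo_get_blocking(void);    /* FIFO shared by clients  */
    extern int            pkt_fifo_put(struct packet *p); /* called by client tasks  */
    extern void           mac_transmit(const uint8_t *data, size_t len);

    /* Dedicated output task: simply pushes the queue out in FIFO order. Clients
       at higher task priority naturally get their packets enqueued sooner. */
    void eth_output_task(void)
    {
        for (;;) {
            struct packet *p = pkt_fifo_get_blocking();
            mac_transmit(p->data, p->len);
            if (p->done)
                p->done(p);                    /* release the buffer to its owner    */
        }
    }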
Niklas Holsti wrote:
> On 16-07-09 00:48 , Les Cargill wrote:
>> Niklas Holsti wrote:
>>> On 16-07-08 20:50 , Les Cargill wrote:
>>>> Tim Wescott wrote:
> [snips]
>
[snip]
>
> Mis-assigning priorities is IMO the same kind of programming error as
> mis-defining the termination condition of a loop. But it would be
> nonsensical to say that it is bad if an application depends on the
> termination condition of a loop.
>
I don't agree with that as a metaphor.

I'm very familiar with the moon lander and the original use of priority. The story of the 1202 alarm. It's a critical technology. But really, that story is a story about two basic priorities.

But... if I may restate: Any design that does not depend on task priority will be more robust than one that does.

Please also understand that we're just talking here. It's eminently possible to do exactly what you describe and have a nice, robust system - obviously; people do it all the time.
> I believe I do understand your concern, that changing the priorities
> should not change the "result" of the program, for example because
> priorities have been used to control the sharing of data between tasks,
> such that changing the priorities causes (different) data-access races
> and thus changes the data-flow in the computation.
>
Something like that, yes.
> Mutual exclusion and race-prevention *can* be implemented correctly by
> priorities (for example using the Immediate Priority Ceiling
> protocol, as in my application), but of course it can also be done
> incorrectly.
>
Yes.
> However, in a real-time application the behaviour always depends on the
> timing of the SW actions, even if there are no data races in the SW. For
> example, if the SW is supposed to sample an ADC at 100 Hz, but the new
> priorities mean that it is sampled only at 50 Hz, or with a large
> jitter, inevitably the results will be different, and perhaps useless.
> (You can say that in this case there is a data-race between the SW and
> the HW, and I would agree, but that is typical for real-time systems.)
>
Understood. I'm not saying you're wrong. Bear with me :)
>> Your situation is simple enough that it's okay. With exactly two
>> tasks, you will have highly deterministic operation - assuming the
>> memory scrubber doesn't cause any untoward starvation.
>
> I have something like 20 tasks, at multiple priorities and with some
> complex interactions. I use priorities to gain determinism, not to lose it.
>
The critical thing is - I never trust that as a solution.
>> With, say a 100msec time tick that runs the task queue,
>
> The kernel in this application (the AdaCore RTS for the LEON2 processor)
> is tickless. The clock and timer resolution is sub-microsecond. Task
> switch times are about 50 us on this 86 MHz processor.
>
>> But I would also say that having the memory scrubber coded such that
>> it would behave the same whether the kernel is preemptive or not
>> would be a better design and implementation.
>
> I totally disagree. A non-preemptive system would force me to insert a
> large number of "yield" calls in all tasks, carefully placed so that the
> execution time between yields never exceeds the maximum latency I allow
> for the 2 ms task. Yuck, yuck, triple yuck.
>
Is the problem the context switch time or an aesthetic (or other cost) objection to the way this code would have to be structured?

It could be that the instrumentation necessary for the memory scrubber to figure out it needs to swap, costs too much. And I understand that.

And I'm not talking about randomly inserting "sleep()" calls as if you were balancing a wheel :) - I mean constructing a "memory scrubber" object that understands its CPU budget. Running the memory scrubber on constraints based on estimated CPU utilization.

And in this case, it may just be totally unnecessary. It could easily be good enough (and probably is).
>> Hopefully, you have a good handle on the jitter for the 2ms task.
>
> I believe I do. With the preemptive and tickless kernel, it is limited
> by the maximum duration of actions at higher priorities. As this task is
> the highest-priority task, only interrupt handling and other
> interrupt-level operations (start/stop I/O) can cause jitter.
>
It sounds like it.
>> Ironically, one way I've estimated CPU utilization is to write a "do
>> nothing" loop at very low priority that only increments a counter, then
>> something periodically dumps a hi-res clock and the counter.
>
> The ever-ready background memory-scrubbing task plays that role in my
> application; the actual scrubbing rate is an estimate of the CPU load
> posed by the other, higher-priority tasks. This is a very common design
> in this domain (satellite on-board SW).
>
Oh, understood. This is a fine point, a very detailed thing.

I'm more oriented to safety-critical systems, and that may account for much of our differences here. Obviously this approach can work quite well.
> If I want to have more detailed measurements, I can use the kernel's
> task-specific and interrupt-specific CPU-time accounting.
>
This too.

--
Les Cargill
upsidedown@downunder.com wrote:
> On Fri, 8 Jul 2016 23:54:15 +0300, Dimiter_Popoff <dp@tgi-sci.com>
> wrote:
>
>> On 08.7.2016 г. 23:35, Niklas Holsti wrote:
>>> On 16-07-08 19:06 , Tim Wescott wrote:
>
[snip]
>
>> Well sending (tcp over ethernet, let's be just practical) can be
>> controlled by task priorities quite well. Reception is another beast,
>> the quick fix which usually works is to throw more buffer RAM at it;
>> then if it works you may not have to seek the right fix :-).
>
> I consider TCP/IP directly harmful for hard real-time systems :-) due
> to the congestion handling. In hard real-time systems, a message with
> correct contents but arriving after deadline is useless and possibly
> even harmful.
>
What I think you'll find is that limiting the number of sockets used and exploiting ethernet switching, plus under-utilizing the physical layer, will lead to much more deterministic operation. You can use Ethernet as if it were SPI/I2C, with better deployment ability.

These days, there's nothing wrong with having a "soft" realtime processor (as in, say, a Beaglebone) as the only connection to a hard-realtime processor. The only advantage of this over SPI is the flexibility in deployment. It's darned nice to have the number of hard realtime boards connected to the soft realtime gateway be somewhat variable. And there are very small Ethernet switches these days.

If it's sufficiently hard realtime that Ethernet bit times are a consideration, then this is more challenging or just not worth it.
> Anyway, I would use a high priority Ethernet task that handles
> hardware access and for received packets forward them as raw MAC or
> UDP frames to other appropriate hard real-time tasks (according to
> port number for UDP). If messages are lost during the forwarding, this
> is just the same as if lost already on the wire.
>
> TCP packets would go to a lower priority TCP task that would handle
> the TCP/IP connections and then transfer received data to appropriate
> soft real-time task.
>
SFAIK, most small micros offer low-latency lightweight TCP/UDP/IP stacks.
> Alternatively, run TCP part of the TCP/IP stack in each soft
> real-time task using TCP/IP connections. The Ethernet task would
> forward packets to the correct SRT tasks using the port number. Those
> SRT tasks could run at different low priorities and hence the TCP/IP
> connections would have similar priority relationship. Even if some of
> the SRT tasks could not immediately handle new data, adjusting the
> TCP/IP window size would implement the flow control.
>
--
Les Cargill
On 16-07-09 22:26 , Les Cargill wrote:
> Niklas Holsti wrote:
>> On 16-07-09 00:48 , Les Cargill wrote:
>>> Niklas Holsti wrote:
>>>> On 16-07-08 20:50 , Les Cargill wrote:
>>>>> Tim Wescott wrote:
>> [snips]
>
[snip]
>
>> Mis-assigning priorities is IMO the same kind of programming error as
>> mis-defining the termination condition of a loop. But it would be
>> nonsensical to say that it is bad if an application depends on the
>> termination condition of a loop.
>>
>
> I don't agree with that as a metaphor.
>
> I'm very familiar with the moon lander and the original use of
> priority. The story of the 1202 alarm. It's a critical technology. But
> really, that story is a story about two basic priorities.
I don't understand your point. Quoting from Wikipedia (https://en.wikipedia.org/wiki/Apollo_Guidance_Computer): "Luckily for Apollo 11, the AGC software had been designed with priority scheduling. Just as it had been designed to do, the software automatically recovered, deleting lower priority tasks including the 1668 display task, to complete its critical guidance and control tasks."
> But... if I may restate:
>
> Any design that does not depend on task priority will be more robust
> than one that does.
Robust against what? Requirements changes? Programming errors? Temporary overloads? I don't see it, whatever robustness measure you use.
>> I have something like 20 tasks, at multiple priorities and with some
>> complex interactions. I use priorities to gain determinism, not to
>> lose it.
>>
>
> The critical thing is - I never trust that as a solution.
And I don't understand why you don't. I don't think you have shown any reasonable grounds for this distrust.
>>> But I would also say that having the memory scrubber coded such that
>>> it would behave the same whether the kernel is preemptive or not
>>> would be a better design and implementation.
>>
>> I totally disagree. A non-preemptive system would force me to insert a
>> large number of "yield" calls in all tasks, carefully placed so that the
>> execution time between yields never exceeds the maximum latency I allow
>> for the 2 ms task. Yuck, yuck, triple yuck.
>
> Is the problem the context switch time or an aesthetic (or other cost)
> objection to the way this code would have to be structured?
It is several things:

- extra work to design the scrubber SW, since it now depends on the 2 ms requirement

- extra work to verify that the design is correct and the 2 ms requirement is correctly implemented in the scrubber SW

- extra maintenance work if either the 2 ms period changes, or the scrubbing requirements change

- in general, it adds an unnecessary dependency between the 2 ms task and the scrubbing task, both at design time and at run-time, and so violates design modularity.

If the whole design were based on this principle, the same extra stuff would enter the other 18 or so tasks, too. Moreover, some part of the code would still have to decide that the 2 ms task should be run more often than other tasks, and that application code would in effect either reimplement priority-based scheduling, or EDF scheduling, or some similar system, or would be based on a statically predetermined "minor/major cycle" type of scheduler, which can be a severe burden on SW maintenance.
> It could be that the instrumentation necessary for the memory scrubber
> to figure out it needs to swap, costs too much. And I understand that.
>
> And I'm not talking about randomly inserting "sleep()" calls as if
> you were balancing a wheel :) - I mean constructing a "memory
> scrubber" object that understands its CPU budget. Running the
> memory scrubber on constraints based on estimated CPU utilization.
My current memory scrubber object understands its CPU budget: it is allowed (and even required) to use all the CPU time available at its priority level. What I don't want to do is to force the scrubber task to understand the CPU budgets of the *other*, higher-priority tasks. And the scrubber task is beautifully simple, just a couple of nested loops, the outermost being eternal. Completely robust against any conceivable change in any other part of the application, and against any overload.

But I admit that the scrubber task is a special case. At higher priorities, to be robust against overloads the tasks may have to implement load-shedding or load-refusing logic, such as limiting the rate at which the task is triggered by sporadic inputs. However, IMO even such things are easier in a preemptive system.

Can you explain how you would design a memory scrubber object, in a non-preemptive system, to support not only the 2 ms task, but tasks at 1 Hz, 10 Hz, and five (say) sporadic, I/O-triggered tasks at rates of up to around 50 Hz?

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi . @ .
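A guess at the general shape of such a lowest-priority scrubber (the base address, size and EDAC details are hypothetical, not the actual flight software): a couple of nested loops, the outermost eternal, relying on preemption rather than explicit yields.

    #include <stdint.h>

    /* Hypothetical memory map; the real base address and size depend on the board. */
    #define RAM_START  ((volatile const uint32_t *)0x40000000u)
    #define RAM_WORDS  ((4u * 1024u * 1024u) / 4u)

    /* Lowest-priority task. Reading every word forces the EDAC hardware to check
       it, so latent single-bit errors are detected (and corrected, for example by
       a write-back in the correctable-error trap handler) before they accumulate. */
    void memory_scrubber_task(void)
    {
        for (;;) {                                     /* eternal outer loop        */
            for (uint32_t i = 0; i < RAM_WORDS; i++)
                (void)RAM_START[i];                    /* read triggers EDAC check  */
            /* No yield calls needed: higher-priority tasks simply preempt, and
               whatever CPU time is left over becomes the scrubbing rate, which in
               turn doubles as the CPU-load estimate mentioned earlier. */
        }
    }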