EmbeddedRelated.com
Forums
The 2024 Embedded Online Conference

Common name for a "Task Loop"

Started by Tim Wescott June 24, 2016
On Fri, 24 Jun 2016 13:21:10 -0700, Don Y wrote:

> On 6/24/2016 11:37 AM, Tim Wescott wrote:
>> So, this is the third time in a month or so that I've needed to tell
>> someone "use a task loop" -- but I'm not sure if I can say "just Google
>> it".
>>
>> So: When I say "task loop" I mean that I'm _not_ using an RTOS, but
>> rather that I'm doing some small thing in a small processor, and
>> somewhere in my code there's a loop that goes:
>>
>> for (;;)
>> {
>>     if (task_1_ready)
>>     {
>>         task_1_update();
>>     }
>>     else if (task_2_ready)
>>     {
>>         task_2_update();
>>     }
>>     else if (task_3_ready)
>>     // et cetera
>> }
>>
>> The "task_n_ready" variables are set offstage (in an ISR, or by one of
>> the task_n_update functions) and reset within the tasks.
>>
>> So -- is there a common Google-able term for this?
>
> It's a variant of foreground-background.  There's no explicit scheduling
> (other than the ISR's coming along "whenever they wish").  Whether the
> background is a big loop or a straight shot of code is up to the
> "application".
>
> I've used (*really* slim!) multitasking executives that used the loop as
> a general framework for "scheduling" tasks -- but allowed voluntary
> rescheduling directives to be invoked by the individual "function calls"
> invoked from that loop:
>
> main() {
>     while (FOREVER) {
>         task1();
>         task2();
>         task3();
>     }
> }
>
> Or, more cleverly:
>
> main() {
>     while (FOREVER) {
>         low_priority_tasks();
>         medium_priority_tasks();
>         medium_priority_tasks();
>         high_priority_tasks();
>         high_priority_tasks();
>         high_priority_tasks();
>         high_priority_tasks();
>     }
> }
>
> low_priority_tasks() {
>     task1();
>     task9();
>     task3();
> }
>
> ...
>
> high_priority_tasks() {
>     task5();
>     task2();
>     task7();
> }
>
> I.e., moving a task() from one wrapping function (low/medium/high) to
> another effectively alters its quantum.
> This can be salted:
>
> main() {
>     while (FOREVER) {
>         low_priority_tasks();
>         low_latency_tasks();
>         medium_priority_tasks();
>         low_latency_tasks();
>         medium_priority_tasks();
>         low_latency_tasks();
>         high_priority_tasks();
>         low_latency_tasks();
>         high_priority_tasks();
>         low_latency_tasks();
>         high_priority_tasks();
>         low_latency_tasks();
>         high_priority_tasks();
>         low_latency_tasks();
>     }
> }
>
> so tasks that need to be serviced "in short order" can gain a share of
> the processor "more frequently".
I don't think your "prioritization" scheme really works, though -- if a
high-priority task happens to miss its cue right before the loop
iterates, then all the low- and mid-priority tasks get executed before
the high-priority stuff.  That's why my "if (flag)" folderol.

You could also do "if (task_n()) {continue;}", I suppose.

--
Tim Wescott
Control systems, embedded software and circuit design
I'm looking for work!  See my website if you're interested
http://www.wescottdesign.com
On 6/24/2016 4:31 PM, Tim Wescott wrote:
> I don't think your "prioritization" scheme really works, though -- if a
> high-priority task happens to miss its cue right before the loop
> iterates then all the low- and mid-priority tasks get executed before
> the high-priority stuff.
Priority is a misnomer.  It's a "relatively short word" that suggests
"larger quantum".  The "low_latency" task (list) tries to address those
tasks that need to "wake up more often".

As all of this relies on active tasks voluntarily relinquishing control
of the processor "quickly", it's a tough system to tune if performance
is an issue.  But, the overhead of the "executive" can be almost
negligible: a couple of microseconds (literally) -- because it isn't
really *doing* anything!
> That's why my "if (flag)" folderol.
>
> You could also do "if (task_n()) {continue;}", I suppose.
The problem comes when all task_n are "ready" (eligible to run).  If you
*don't* want to add some overhead in a scheduler, then the individual
tasks have to assume responsibility for this fairness.  When I use this
sort of approach, my code looks like:

    ...       // do something
    ...       // do something else
    ...       // do still more
    yield()
    ...       // do some other stuff
    ...       // do some more other stuff
    ...       // do still more other stuff
    yield()
    ...

For someone unaccustomed to it, it looks like you're spending a helluva
lot of time (relatively speaking) "yield()-ing" and rather little "doing
something".  But, when you look at the implementation of yield() and see
how little it costs, you think nothing about injecting even more
yield()'s into the code.

[This works best when coding in ASM, as you have far more control over
how much machine state you preserve -- which can be as little as the
program counter!]
On Fri, 24 Jun 2016 17:03:07 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

> On 6/24/2016 4:31 PM, Tim Wescott wrote:
>> I don't think your "prioritization" scheme really works, though -- if a
>> high-priority task happens to miss it's cue right before the loop
>> iterates then all the low- and mid-priority tasks get executed before the
>> high-priority stuff.
>
> Priority is a misnomer.  It's a "relatively short word" that
> suggests "larger quantum".  The "low_latency" task (list) tries to
> address those tasks that need to "wake up more often".
What does quantum have to do with priority in a real-time environment?
A quantum is something to do with time-sharing systems.
> As all of this relies on active tasks voluntarily relinquishing control
> of the processor "quickly", it's a tough system to tune if performance
> is an issue.
When designing a real-time system (with an OS or just a loop), tasks are
partitioned in such a way that each task has a known maximum execution
time.  The shorter the execution time and the more time-critical a task
is, the higher the priority it should receive.  You can't have critical
tasks with long execution times; you must divide them into smaller tasks
with short execution times and then determine the priority order of each
new and existing task.  You _design_ the execution time into each task.
> But, the overhead of the "executive" can be almost negligible: a couple
> of microseconds (literally) -- because it isn't really *doing* anything!
>
>> That's why my "if (flag)" folderol.
>>
>> You could also do "if (task_n()) {continue;}", I suppose.
If a task does some actual work this cycle, then it should do a
"continue" to the beginning of the loop, to test whether any higher
priority task has become runnable.  When there are no high-priority
tasks ready, control falls through to the lower-priority tasks, and so
on, until it falls through to the null task -- which preferably does
some low-power sleep or waits for an interrupt to reactivate the loop.
> The problem comes when all task_n are "ready" (eligible to run).
> If you *don't* want to add some overhead in a scheduler, then
> the individual tasks have to assume responsibility for this
> fairness.
Of course, it is your responsibility as an RT system designer to make
sure that no high-priority task consumes an excessive amount of time!
On 6/25/2016 12:52 AM, upsidedown@downunder.com wrote:
> On Fri, 24 Jun 2016 17:03:07 -0700, Don Y
> <blockedofcourse@foo.invalid> wrote:
>
>> On 6/24/2016 4:31 PM, Tim Wescott wrote:
>>> I don't think your "prioritization" scheme really works, though -- if a
>>> high-priority task happens to miss it's cue right before the loop
>>> iterates then all the low- and mid-priority tasks get executed before the
>>> high-priority stuff.
>>
>> Priority is a misnomer.  It's a "relatively short word" that
>> suggests "larger quantum".  The "low_latency" task (list) tries to
>> address those tasks that need to "wake up more often".
>
> What does quantum has to do with priority in a real time environment ?
Who mentioned "real-time"? "... I'm _not_ using an RTOS..."
On Sat, 25 Jun 2016 01:07:41 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

> On 6/25/2016 12:52 AM, upsidedown@downunder.com wrote:
>> On Fri, 24 Jun 2016 17:03:07 -0700, Don Y
>> <blockedofcourse@foo.invalid> wrote:
>>
>>> On 6/24/2016 4:31 PM, Tim Wescott wrote:
>>>> I don't think your "prioritization" scheme really works, though -- if a
>>>> high-priority task happens to miss it's cue right before the loop
>>>> iterates then all the low- and mid-priority tasks get executed before the
>>>> high-priority stuff.
>>>
>>> Priority is a misnomer.  It's a "relatively short word" that
>>> suggests "larger quantum".  The "low_latency" task (list) tries to
>>> address those tasks that need to "wake up more often".
>>
>> What does quantum has to do with priority in a real time environment ?
>
> Who mentioned "real-time"?
>    "... I'm _not_ using an RTOS..."
You started with priorities, which usually implies some (soft) real-time
functionality.  A real-time application doesn't necessarily need an
RTOS.  A task loop like

for (;;)
{
    if (task_1_ready)
    {
        task_1_update();
        task_1_ready = 0;
        continue;
    }
    if (task_2_ready)
    {
        task_2_update();
        task_2_ready = 0;
        continue;
    }
    if (task_3_ready)
    {
        task_3_update();
        task_3_ready = 0;
        continue;
    }
    // et cetera
    null_task();
}

works exactly like a non-preemptive RTOS.  A pre-emptive RTOS is
slightly different: assuming task_2_update is interrupted, the context
is saved and an implicit "continue" is executed to the beginning of the
for loop (the scheduler rescans).  If task 1 has become ready, it is
executed first; and since task_2_ready is still set, task_2_update is
then re-entered, its context is restored, and it hopefully runs to
completion and clears task_2_ready.

A task loop and an RTOS are not really that much different.  The task
loop and a non-preemptive RTOS can store the context in local static
variables, while a pre-emptive RTOS stores the context on task-specific
stacks.
On 2016-06-24, Paul Rubin <no.email@nospam.invalid> wrote:
> Tim Wescott <seemywebsite@myfooter.really> writes:
>> So, this is the third time in a month or so that I've needed to tell
>> someone "use a task loop" -- but I'm not sure if I can say "just Google
>> it".
>
> I'd call that a polling loop, since it polls each task to see if it's
> ready.  But I don't know if that's a standard term.
Until I read your reply, this was _exactly_ what I was going to say.

It _is_ a polling loop (or polling architecture, if you want to
distinguish it from an interrupt architecture.)

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
On 6/25/2016 4:00 AM, upsidedown@downunder.com wrote:
> On Sat, 25 Jun 2016 01:07:41 -0700, Don Y
> <blockedofcourse@foo.invalid> wrote:
>
>> On 6/25/2016 12:52 AM, upsidedown@downunder.com wrote:
>>> On Fri, 24 Jun 2016 17:03:07 -0700, Don Y
>>> <blockedofcourse@foo.invalid> wrote:
>>>
>>>> On 6/24/2016 4:31 PM, Tim Wescott wrote:
>>>>> I don't think your "prioritization" scheme really works, though -- if a
>>>>> high-priority task happens to miss it's cue right before the loop
>>>>> iterates then all the low- and mid-priority tasks get executed before the
>>>>> high-priority stuff.
>>>>
>>>> Priority is a misnomer.  It's a "relatively short word" that
>>>> suggests "larger quantum".  The "low_latency" task (list) tries to
>>>> address those tasks that need to "wake up more often".
>>>
>>> What does quantum has to do with priority in a real time environment ?
>>
>> Who mentioned "real-time"?
>>    "... I'm _not_ using an RTOS..."
>
> You started by priorities, which usually implies some (soft) real time
> functionality.
   "I've used (*really* slim!) multitasking executives that used the
----------------------------------^^^^^^^^^^^^^^^^^^^^^^^
    loop as a general framework for "scheduling" tasks -- but allowed
    voluntary rescheduling directives to be invoked by the individual
    "function calls" invoked from that loop:"

Multitasking does not imply Real-Time.  Likewise, Real-Time does not
imply Multitasking.

"Priorities" just says some things are more important than others
(acknowledging the fact that there are only so many resources available
in a given implementation).

Or, are you deciding that a "time sharing" system is a "real-time"
system?  (UN*X has "priorities" in its scheduler; is *it* "real-time"?)
> A real time application doesn't necessary need an RTOS.  A task loop
> like
And, a real-time application need not consist of more than one task! I.e., real-time does not imply multitasking. MTOS != RTOS.
> A task loop and an RTOS are not really that much different.  The task
> loop and non-preemptive RTOS can store the context in local static
> variables, while the pre-emptive RTOS stores the context into
> process-specific stacks.
A task loop and an *MTOS* are functionally equivalent. An RTOS is a different beast, entirely.
On 6/25/2016 8:27 AM, Simon Clubley wrote:
> On 2016-06-24, Paul Rubin <no.email@nospam.invalid> wrote:
>> Tim Wescott <seemywebsite@myfooter.really> writes:
>>> So, this is the third time in a month or so that I've needed to tell
>>> someone "use a task loop" -- but I'm not sure if I can say "just Google
>>> it".
>>
>> I'd call that a polling loop, since it polls each task to see if it's
>> ready.  But I don't know if that's a standard term.
>
> Until I read your reply, this was _exactly_ what I was going to say.
>
> It _is_ a polling loop (or polling architecture if you want to
> distinguish it from an interrupt architecture.)
No.  Why does any "polling" have to occur?  Why can't every task be
treated as active/ready?  Why can't it coexist with an active foreground?

How do you address task_N's whose ready conditions can't be simply
externalized as "task_N_ready"?  (Or, are you ONLY addressing
implementations where this is the case -- ignoring any other sort of
implementation that can ALSO exploit a "big loop"?)

// The Big Loop
while (FOREVER) {
    verify_all_windows_closed();
    verify_all_doors_locked();
    verify_no_water_leaks();
    handle_annunciator();
}

verify_all_windows_closed() {
    while (FOREVER) {
        windowID = get_head(window_list);
        ASSERT(windowID != 0)

        open = get_window_state(windowID);    // reschedule
        window_cracked = open ? windowID : 0;
        append(window_list, windowID);
    }
}

verify_all_doors_locked() {
    while (FOREVER) {
        doorID = get_head(door_list);
        ASSERT(doorID != 0)

        open = get_door_state(doorID);        // reschedule
        door_ajar = open ? doorID : 0;
        append(door_list, doorID);
    }
}

handle_annunciator() {
    // simple state machine
    while (FOREVER) {
off:
        while !(window_cracked || door_ajar ... ) {
            yield();          // all is well
        }

        annunciator(ON);      // bad things!

on:
        while (window_cracked || door_ajar ... ) {
            yield();          // problem persists
        }

        annunciator(OFF);     // all is well

        //off:
        yield();
    }
}

Note that every task is always "ready".  And, the get_XXXX_state()
functions can typically interface to a communication system that
is running in the foreground (so that it doesn't drop packets).

[Note that these can implicitly reschedule/yield while performing
their intended functions -- esp if "getting the state" is time
consuming]

There's no mention of "time" in any of this code -- it runs as fast
as the resources available permit.  If the window_list grows,
then the time between "checks" of a particular window will similarly
lengthen.  If this gets to be too long, allocate a larger quantum
to the "verify_all_windows_closed()" task (put it in the loop TWICE
so you don't have to rewrite the task's implementation!)

If you later decide to add "check_all_smoke_detectors()" to the main
loop, then all tasks will slow down a bit (in their relative frequency).
If you decide the smoke detectors must be checked "more promptly",
then inject the "check_all_smoke_detectors()" task between every
other task listed in the big loop (i.e., do_something, check_smoke,
do_something_else, check_smoke, do_yet_another_thing, check_smoke)
On 2016-06-25, Don Y <blockedofcourse@foo.invalid> wrote:
> On 6/25/2016 8:27 AM, Simon Clubley wrote:
>> On 2016-06-24, Paul Rubin <no.email@nospam.invalid> wrote:
>>> Tim Wescott <seemywebsite@myfooter.really> writes:
>>>> So, this is the third time in a month or so that I've needed to tell
>>>> someone "use a task loop" -- but I'm not sure if I can say "just Google
>>>> it".
>>>
>>> I'd call that a polling loop, since it polls each task to see if it's
>>> ready.  But I don't know if that's a standard term.
>>
>> Until I read your reply, this was _exactly_ what I was going to say.
>>
>> It _is_ a polling loop (or polling architecture if you want to
>> distinguish it from an interrupt architecture.)
>
> No.  Why does any "polling" have to occur?  Why can't every task be
> treated as active/ready?  Why can't it coexist with an active foreground?
Hello Don,

I define a polling architecture, in the general sense, as one or more
loops actively going out and asking {X}, {Y} or {Z} on an ongoing basis
whether they have anything to tell you, or whether they need you to give
them some processing time.

I define an interrupt architecture as one where no code runs until some
state change occurs -- maybe in an external door sensor, or maybe in an
I/O device which completes an operation and has some results to deliver.
In either case, there's an interrupt, and only then are CPU cycles
consumed by the CPU deciding what needs to run next.

{X}, {Y} and {Z} can be anything: either some software construct such as
a task, or some physical construct such as an external sensor.  What
matters is whether you are asking {X}, {Y} and {Z} for their current
state (polling architecture) or waiting for them to tell you (interrupt
architecture).

In the example you post below, I therefore consider that to be a classic
polling architecture, because the code is going out and asking the
sensors on a continuous basis whether something has happened.

If your code were to set up a range of interrupts from external sensors
and then get on with something else, or just stick around in some Wait
For Interrupt state, then I would instead consider that to be an
interrupt architecture.

Simon.

PS: The reason I've left your message fully intact below is so others
can see your full example without having to go to your original message.
> How do you address task_N's whose ready conditions can't be simply
> externalized as "task_N_ready"?  (Or, are you ONLY addressing
> implementations where this is the case -- ignoring any other sort of
> implementation that can ALSO exploit a "big loop"?)
>
> // The Big Loop
> while (FOREVER) {
>     verify_all_windows_closed();
>     verify_all_doors_locked();
>     verify_no_water_leaks();
>     handle_annunciator();
> }
>
> verify_all_windows_closed() {
>     while (FOREVER) {
>         windowID = get_head(window_list);
>         ASSERT(windowID != 0)
>
>         open = get_window_state(windowID);    // reschedule
>         window_cracked = open ? windowID : 0;
>         append(window_list, windowID);
>     }
> }
>
> verify_all_doors_locked() {
>     while (FOREVER) {
>         doorID = get_head(door_list);
>         ASSERT(doorID != 0)
>
>         open = get_door_state(doorID);        // reschedule
>         door_ajar = open ? doorID : 0;
>         append(door_list, doorID);
>     }
> }
>
> handle_annunciator() {
>     // simple state machine
>     while (FOREVER) {
> off:
>         while !(window_cracked || door_ajar ... ) {
>             yield();          // all is well
>         }
>
>         annunciator(ON);      // bad things!
>
> on:
>         while (window_cracked || door_ajar ... ) {
>             yield();          // problem persists
>         }
>
>         annunciator(OFF);     // all is well
>
>         //off:
>         yield();
>     }
> }
>
> Note that every task is always "ready".  And, the get_XXXX_state()
> functions can typically interface to a communication system that
> is running in the foreground (so that it doesn't drop packets).
>
> [Note that these can implicitly reschedule/yield while performing
> their intended functions -- esp if "getting the state" is time
> consuming]
>
> There's no mention of "time" in any of this code -- it runs as fast
> as the resources available permit.  If the window_list grows,
> then the time between "checks" of a particular window will similarly
> lengthen.  If this gets to be too long, allocate a larger quantum
> to the "verify_all_windows_closed()" task (put it in the loop TWICE
> so you don't have to rewrite the task's implementation!)
>
> If you later decide to add "check_all_smoke_detectors()" to the main
> loop, then all tasks will slow down a bit (in their relative frequency).
> If you decide the smoke detectors must be checked "more promptly",
> then inject the "check_all_smoke_detectors()" task between every
> other task listed in the big loop (i.e., do_something, check_smoke,
> do_something_else, check_smoke, do_yet_another_thing, check_smoke)
-- Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP Microsoft: Bringing you 1980s technology to a 21st century world
On 25.6.2016 г. 19:41, Don Y wrote:
> On 6/25/2016 8:27 AM, Simon Clubley wrote:
>> On 2016-06-24, Paul Rubin <no.email@nospam.invalid> wrote:
>>> Tim Wescott <seemywebsite@myfooter.really> writes:
>>>> So, this is the third time in a month or so that I've needed to tell
>>>> someone "use a task loop" -- but I'm not sure if I can say "just Google
>>>> it".
>>>
>>> I'd call that a polling loop, since it polls each task to see if it's
>>> ready.  But I don't know if that's a standard term.
>>
>> Until I read your reply, this was _exactly_ what I was going to say.
>>
>> It _is_ a polling loop (or polling architecture if you want to
>> distinguish it from an interrupt architecture.)
>
> No.  Why does any "polling" have to occur?  Why can't every task be
> treated as active/ready?  Why can't it coexist with an active foreground?
Hi Don,

[no, no icicles here any longer -- 30+C actually, very nice, for a
change...]

Maybe "calling" rather than polling :-).

I did it just once for a project, out of curiosity -- to see how much
effort it would save me by not having a decent scheduler.  It saved
nothing at best; it probably cost me more work.

On another post of yours, re the overhead of a "calling loop" vs. a true
scheduler: I would say this is not a good deal at all.  Even a scheduler
as complex as the one DPS has -- it handles priorities, provides
fairness (so no one can hog the system), allows cooperative task exits
(done all the time, actually) while forcing tasks out after some
predefined time, gives interrupts a path to kick the current task out
and strongly suggest the one the interrupt wants executed now, etc. --
is responsible for < 1% of the total overhead.  (Under system load, that
is; if none of the tasks has anything to do and each keeps quitting,
this is what will happen all the time, obviously.)

And having a good scheduler underneath makes you just forget about
scheduling; I can't remember when I last had to think about it.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI             http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
