EmbeddedRelated.com
The 2024 Embedded Online Conference

Common name for a "Task Loop"

Started by Tim Wescott June 24, 2016
On Sat, 25 Jun 2016 09:41:46 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

>On 6/25/2016 8:27 AM, Simon Clubley wrote:
>> On 2016-06-24, Paul Rubin <no.email@nospam.invalid> wrote:
>>> Tim Wescott <seemywebsite@myfooter.really> writes:
>>>> So, this is the third time in a month or so that I've needed to tell
>>>> someone "use a task loop" -- but I'm not sure if I can say "just Google
>>>> it".
>>>
>>> I'd call that a polling loop, since it polls each task to see if it's
>>> ready.  But I don't know if that's a standard term.
>>
>> Until I read your reply, this was _exactly_ what I was going to say.
>>
>> It _is_ a polling loop (or polling architecture if you want to
>> distinguish it from an interrupt architecture.)
>
>No.  Why does any "polling" have to occur?  Why can't every task be
>treated as active/ready?  Why can't it coexist with an active foreground?
>
>How do you address task_N's whose ready conditions can't be simply
>externalized as "task_N_ready"?  (Or, are you ONLY addressing implementations
>where this is the case -- ignoring any other sort of implementation that
>can ALSO exploit a "big loop"?)
In the 1970s this was implemented with the event flag register concept. On a 16-bit machine a global 16-bit event flag register was used and each task had a wait-for-event flag mask. The scheduler performed an AND operation between the global event flag register and the task wait mask. If the result was non-zero, the task executed. Very lightweight scheduling if the event flags fit into one machine word, and not horribly inefficient even if you had to test a few machine words :-) This concept could be used in a task loop as well.
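[A minimal C sketch of that event-flag scheme; the flag assignments, task count, and names here are invented for illustration, not from any particular 1970s system:]

```c
#include <stdint.h>

#define NUM_TASKS 3

/* Global event flag register; ISRs (or other tasks) set bits in it. */
uint16_t event_flags;

/* Per-task wait masks -- which flags each task is waiting on. */
const uint16_t task_wait_mask[NUM_TASKS] = {
    0x0001,   /* task 0: e.g. timer tick          */
    0x0006,   /* task 1: e.g. UART rx OR tx done  */
    0x0008,   /* task 2: e.g. ADC complete        */
};

/* An ISR posts an event by setting a flag bit. */
void post_event(uint16_t flag) { event_flags |= flag; }

/* The scheduler's test: AND the global register with the task's mask. */
int ready(int t) { return (event_flags & task_wait_mask[t]) != 0; }

/* One scheduling pass: run each task whose mask matched, consuming
   the flags it was waiting on. */
void schedule_once(void (*task[NUM_TASKS])(void)) {
    for (int t = 0; t < NUM_TASKS; t++) {
        if (ready(t)) {
            event_flags &= (uint16_t)~task_wait_mask[t];
            task[t]();
        }
    }
}
```

[The whole scheduling decision is one AND per task per pass; with more than 16 events you just test a few machine words instead of one.]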
On Fri, 24 Jun 2016 17:03:07 -0700, Don Y wrote:

> On 6/24/2016 4:31 PM, Tim Wescott wrote:
>> I don't think your "prioritization" scheme really works, though -- if a
>> high-priority task happens to miss its cue right before the loop
>> iterates then all the low- and mid-priority tasks get executed before
>> the high-priority stuff.
>
> Priority is a misnomer.  It's a "relatively short word" that suggests
> "larger quantum".  The "low_latency" task (list) tries to address those
> tasks that need to "wake up more often".
>
> As all of this relies on active tasks voluntarily relinquishing control
> of the processor "quickly", it's a tough system to tune if performance
> is an issue.
>
> But, the overhead of the "executive" can be almost negligible: a couple
> of microseconds (literally) -- because it isn't really *doing* anything!
>
>> That's why my "if (flag)" folderol.
>>
>> You could also do "if (task_n()) {continue;}", I suppose.
>
> The problem comes when all task_n are "ready" (eligible to run).
> If you *don't* want to add some overhead in a scheduler, then the
> individual tasks have to assume responsibility for this fairness.
Well, that's exactly what I'm trying to say about _your_ approach. In your scheme, if everything comes ready all at once and at the wrong time, then the execution has to plow through all of the low-priority tasks and all of the mid-priority tasks before it gets to execute a high-priority task. In mine, it just needs to finish up whatever task it's on and then the highest-priority task automatically gets executed. This still depends on the low-priority stuff being broken into bits short enough that they never get in the way of the high-priority stuff. To me, this situation is the big advantage of an RTOS -- if you have stuff to do on very dissimilar time scales, you can just write the damned slow stuff without having to think hard about breaking it into bite-sized chunks.
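[The "if (task_n()) {continue;}" scheme being debated might look like the following sketch; the task names and ready flags are hypothetical. Each task reports whether it ran, and the restart-from-the-top scan is what lets the highest-priority ready task go next as soon as the current task finishes:]

```c
#include <stdbool.h>

/* Hypothetical per-task ready flags, set elsewhere (e.g. by ISRs). */
bool hi_ready, mid_ready, lo_ready;

/* Each task runs only if it's ready, and reports whether it did. */
bool task_hi(void)  { if (!hi_ready)  return false; hi_ready  = false; return true; }
bool task_mid(void) { if (!mid_ready) return false; mid_ready = false; return true; }
bool task_lo(void)  { if (!lo_ready)  return false; lo_ready  = false; return true; }

/* One scan in priority order; returns the index of the task that ran,
   or -1 if nothing was ready.  In the real loop this sits inside
   for (;;) { if (task_hi()) continue; ... } so that after ANY task
   finishes, the scan restarts from the top and the highest-priority
   ready task always runs next. */
int scan_once(void) {
    if (task_hi())  return 0;
    if (task_mid()) return 1;
    if (task_lo())  return 2;
    return -1;   /* idle -- a careful design could sleep here */
}
```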
> When I use this sort of approach, my code looks like:
>
>     ...        // do something
>     ...        // do something else
>     ...        // do still more
>     yield()
>     ...        // do some other stuff
>     ...        // do some more other stuff
>     ...        // do still more other stuff
>     yield()
>     ...
>
When you have no RTOS you have a yield() function call??? How?
> For someone unaccustomed to it, it looks like you're spending a helluva
> lot of time (relatively speaking) "yield()-ing" and rather little "doing
> something".
>
> But, when you look at the implementation of yield and see how little it
> costs, you think nothing about injecting even more yield()'s in the
> code.
>
> [This works best when coding in ASM as you have far more control over
> how much machine state you preserve -- which can be as little as the
> program counter!]
-- 
Tim Wescott
Control systems, embedded software and circuit design
I'm looking for work!  See my website if you're interested
http://www.wescottdesign.com
On Sat, 25 Jun 2016 01:07:41 -0700, Don Y wrote:

> On 6/25/2016 12:52 AM, upsidedown@downunder.com wrote:
>> On Fri, 24 Jun 2016 17:03:07 -0700, Don Y <blockedofcourse@foo.invalid>
>> wrote:
>>
>>> On 6/24/2016 4:31 PM, Tim Wescott wrote:
>>>> I don't think your "prioritization" scheme really works, though -- if
>>>> a high-priority task happens to miss its cue right before the loop
>>>> iterates then all the low- and mid-priority tasks get executed before
>>>> the high-priority stuff.
>>>
>>> Priority is a misnomer.  It's a "relatively short word" that suggests
>>> "larger quantum".  The "low_latency" task (list) tries to address
>>> those tasks that need to "wake up more often".
>>
>> What does quantum have to do with priority in a real time environment?
>
> Who mentioned "real-time"?
> "... I'm _not_ using an RTOS..."
Who ever said that using an RTOS makes a system real-time (or at least
correctly real-time)?  Who ever said that not using an RTOS kept a system
from being real-time?

An RTOS is a tool that makes certain real-time systems easier to code --
but it by no means guarantees that real-time constraints will
automatically be met.

-- 
Tim Wescott
Control systems, embedded software and circuit design
I'm looking for work!  See my website if you're interested
http://www.wescottdesign.com
On Sat, 25 Jun 2016 08:27:54 -0700, Don Y wrote:

> On 6/25/2016 4:00 AM, upsidedown@downunder.com wrote:
>> On Sat, 25 Jun 2016 01:07:41 -0700, Don Y <blockedofcourse@foo.invalid>
>> wrote:
>>
>>> On 6/25/2016 12:52 AM, upsidedown@downunder.com wrote:
>>>> On Fri, 24 Jun 2016 17:03:07 -0700, Don Y
>>>> <blockedofcourse@foo.invalid> wrote:
>>>>
>>>>> On 6/24/2016 4:31 PM, Tim Wescott wrote:
>>>>>> I don't think your "prioritization" scheme really works, though --
>>>>>> if a high-priority task happens to miss its cue right before the
>>>>>> loop iterates then all the low- and mid-priority tasks get executed
>>>>>> before the high-priority stuff.
>>>>>
>>>>> Priority is a misnomer.  It's a "relatively short word" that
>>>>> suggests "larger quantum".  The "low_latency" task (list) tries to
>>>>> address those tasks that need to "wake up more often".
>>>>
>>>> What does quantum have to do with priority in a real time environment?
>>>
>>> Who mentioned "real-time"?
>>> "... I'm _not_ using an RTOS..."
>>
>> You started by priorities, which usually implies some (soft) real time
>> functionality.
>
> "I've used (*really* slim!) multitasking executives that used the
> ----------------------------------^^^^^^^^^^^^^^^^^^^^^^^
> loop as a general framework for "scheduling" tasks -- but allowed
> voluntary rescheduling directives to be invoked by the individual
> "function calls" invoked from that loop:
>
> Multitasking does not imply Real-Time.  Likewise, Real-Time does not
> imply Multitasking.
>
> "Priorities" just says some things are more important than others
> (acknowledging the fact that there are only so many resources available
> in a given implementation).  Or, are you deciding that a "time sharing"
> system is a "real-time" system?
>
> (UN*X has "priorities" in its scheduler; is *it* "real-time"?)
>
>> A real time application doesn't necessarily need an RTOS.  A task loop
>> like
>
> And, a real-time application need not consist of more than one task!
> I.e., real-time does not imply multitasking.  MTOS != RTOS.
>> A task loop and an RTOS are not really that much different.  The task
>> loop and non-preemptive RTOS can store the context in local static
>> variables, while the pre-emptive RTOS stores the context into
>> process-specific stacks.
>
> A task loop and an *MTOS* are functionally equivalent.  An RTOS is a
> different beast, entirely.
I think you are confused in your terminology.  A real-time operating
system is a handy way to implement multitasking in a way that can meet
real-time constraints.  So while not all multitasking operating systems
are real-time, all real-time operating systems are multitasking.

And, a task loop can be made real-time reasonably easily, as long as
you're willing to live with writing every bit of code to the requirements
of the shortest real-time deadline.

-- 
Tim Wescott
Control systems, embedded software and circuit design
I'm looking for work!  See my website if you're interested
http://www.wescottdesign.com
On Sat, 25 Jun 2016 17:59:15 +0000, Simon Clubley wrote:

> On 2016-06-25, Don Y <blockedofcourse@foo.invalid> wrote:
>> On 6/25/2016 8:27 AM, Simon Clubley wrote:
>>> On 2016-06-24, Paul Rubin <no.email@nospam.invalid> wrote:
>>>> Tim Wescott <seemywebsite@myfooter.really> writes:
>>>>> So, this is the third time in a month or so that I've needed to tell
>>>>> someone "use a task loop" -- but I'm not sure if I can say "just
>>>>> Google it".
>>>>
>>>> I'd call that a polling loop, since it polls each task to see if it's
>>>> ready.  But I don't know if that's a standard term.
>>>
>>> Until I read your reply, this was _exactly_ what I was going to say.
>>>
>>> It _is_ a polling loop (or polling architecture if you want to
>>> distinguish it from an interrupt architecture.)
>>
>> No.  Why does any "polling" have to occur?  Why can't every task be
>> treated as active/ready?  Why can't it coexist with an active
>> foreground?
>
> Hello Don,
>
> I define a polling architecture in the general sense as one or more
> loops actively going out and asking {X}, {Y} or {Z} on an ongoing basis
> if they have anything to tell you or if they need you to give them some
> processing time.
>
> I define an interrupt architecture as something where no code runs until
> some state change occurs, maybe in an external door sensor, or maybe in
> an I/O device which completes an operation and has some results to
> deliver.  In either case, there's an interrupt and only then are CPU
> cycles consumed by the CPU deciding what needs to run next.
>
> {X}, {Y} and {Z} can be anything; either some software construct such as
> a task or some physical construct such as an external sensor.  What
> matters is whether you are asking {X}, {Y} and {Z} for their current
> state (polling architecture) or if you are waiting for them to tell you
> (interrupt architecture).
> In the example you post below, I therefore consider that to be a classic
> polling architecture because the code is going out and asking the
> sensors on a continuous basis if something has happened.
>
> If your code were to set up a range of interrupts from external sensors
> and then got on with something else or just stuck around in some Wait
> For Interrupt state, then I would instead consider that to be an
> interrupt architecture.
>
> Simon.
Except that -- for my code at least -- if you add a command at the end to
sleep if no tasks are ready, then it's no longer a polling architecture,
by your definition.  (Yes, "sleep if no tasks are ready" is a bit
challenging, but it can certainly be done if you're careful.)

-- 
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
On 6/25/2016 1:43 PM, Tim Wescott wrote:
> On Fri, 24 Jun 2016 17:03:07 -0700, Don Y wrote:
>
>> On 6/24/2016 4:31 PM, Tim Wescott wrote:
>>> I don't think your "prioritization" scheme really works, though -- if a
>>> high-priority task happens to miss its cue right before the loop
>>> iterates then all the low- and mid-priority tasks get executed before
>>> the high-priority stuff.
>>
>> Priority is a misnomer.  It's a "relatively short word" that suggests
>> "larger quantum".  The "low_latency" task (list) tries to address those
>> tasks that need to "wake up more often".
>>
>> As all of this relies on active tasks voluntarily relinquishing control
>> of the processor "quickly", it's a tough system to tune if performance
>> is an issue.
>>
>> But, the overhead of the "executive" can be almost negligible: a couple
>> of microseconds (literally) -- because it isn't really *doing* anything!
>>
>>> That's why my "if (flag)" folderol.
>>>
>>> You could also do "if (task_n()) {continue;}", I suppose.
>>
>> The problem comes when all task_n are "ready" (eligible to run).
>> If you *don't* want to add some overhead in a scheduler, then the
>> individual tasks have to assume responsibility for this fairness.
>
> Well, that's exactly what I'm trying to say about _your_ approach.  In
> your scheme, if everything comes ready all at once and at the wrong time,
> then the execution has to plow through all of the low-priority tasks and
> all of the mid-priority tasks before it gets to execute a high-priority
It's a fundamentally different approach.  No one task is more "urgent"
than another (in terms of latency).  You assign "importance" by deciding
on how much of the processor's resources (execution time) you want to
allocate to each task -- in a very loose sense.

You can't count on a preemptive scheduler yanking the processor back from
you if you get greedy.  So, you have to be acutely aware that every
opcode you fetch comes at the expense of some other task.  It's time that
the other task(s) don't have AND time that is imposed BEFORE the other
tasks can execute.

So, you "yield often".  And, since all the other tasks are equally
considerate, you don't worry about being starved out.

OTOH, if the developer decides that you need a proportionately larger
piece of the pie, you don't have to selectively elide individual
yield()'s -- your "job" just runs twice for each time the other jobs are
run!  Your "cpu" is effectively twice as fast.
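[The "your job just runs twice" point can be sketched directly; the job names are invented, and call counters stand in for real work. Weighting a task means nothing more than listing it more than once in the loop:]

```c
/* Call counters stand in for real work done by each job. */
int calls_a, calls_b, calls_c;

void job_a(void) { calls_a++; }   /* the "more important" job */
void job_b(void) { calls_b++; }
void job_c(void) { calls_c++; }

/* One pass of the big loop: job_a is simply listed twice, so over
   time it receives roughly twice the processor share of the others --
   its "cpu" is effectively twice as fast.  No scheduler, no
   priorities, just the order and multiplicity of the calls. */
void loop_once(void) {
    job_a();
    job_b();
    job_a();
    job_c();
}
```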
> task.  In mine, it just needs to finish up whatever task it's on and then
> the highest-priority task automatically gets executed.
And it gets *exclusive* use of the processor while it is executing!  So,
anything of lower priority starves.

Your scheduling algorithm is "highest priority first".  Few systems are
this cut and dried.  In practice, you want to give more resources to
"more important" tasks -- and still let other tasks continue to run.

You don't want a big loop of "try this, then try that" but, rather, to be
able to code "this" and "that" in isolation and then tie them back
together (because they all must work in a single environment).

In a foreground-background approach, the interrupt system takes priority
over the background.  And, within the interrupt system, the interrupt
having the highest priority can preempt those of lower priorities --
allowing them to resume when it has completed.  Extending those
priorities to the background in any meaningful way distributes the
scheduling mechanisms among the tasks.

I.e., if highest_priority gets half way through and then needs to *wait*
for something else, it needs to mark itself as "not ready" and hope that
someone will mark it as ready when that something else happens.  And,
that whichever executing task happens to finish up (along with any
"higher priority" tasks) so that the if-tree will transfer control to it.

How do you deal with two tasks of equal priority?  One has to come first
in your scheme as they can't share a priority level (because one "if"
precedes the other).

If I have to flash the display and "blink" a noise maker, do I have to
put those activities in the same task?  Why can't they coexist alongside
each other at equal "priorities" so BOTH can happen without one starving
the other?

The scheme I illustrated shows everything as "the same priority" but with
differing degrees of "importance".  Or, alternatively, with differing
resource requirements to achieve similar levels of performance.

Should servicing UART0 have priority over UART1 (note I'm not talking
about ISRs but, rather, the handlers that the ISRs feed and are fed by)?
I.e., if I allow two clients to connect to a device using two different
UARTs on the device, should I give preference to traffic from one over
the other?  Is one, somehow, more "important" than the other?

Where do you put the task that manages system timers?  (or, do you
require each individual task to examine a hardware counter-timer and do
their own timekeeping?)

Where does the keyboard fit into your priority scheme?  If too low, then
keys get dropped.  If too high, then the operator's actions can
jeopardize some other aspect of the application (maybe the UARTs drop
characters because you're busy debouncing keys?)

And, of course, if power fails, THAT should be noticed ASAP.  But, will
the power fail task have to assume responsibility for doing all the
housekeeping as the system goes down?  The priorities that would apply
may be exactly the opposite of what they were when the system was running
normally (and the system doesn't even know, yet, that it will no longer
be running normally!).

When you pick (somewhat arbitrary) priorities, you invariably end up
having to juggle them.  Then, refactor because the assignment doesn't
work in all operating conditions (e.g., power fail).  In practical
systems, this results in a flattening of priorities -- which means
SHARING the processor concurrently.
> This still depends on the low-priority stuff being broken into bits short
> enough that they never get in the way of the high-priority stuff.  To me,
> this situation is the big advantage of an RTOS -- if you have stuff to do
> on very dissimilar time scales, you can just write the damned slow stuff
> without having to think hard about breaking it into bite-sized chunks.
But then you incur the costs of a formal scheduler -- not an ad hoc loop of code. How will task_lowest() ever have any guarantees of meeting its requirements? You've cast the scheduling decisions in concrete, at compile time.
>> When I use this sort of approach, my code looks like:
>>
>>     ...        // do something
>>     ...        // do something else
>>     ...        // do still more
>>     yield()
>>     ...        // do some other stuff
>>     ...        // do some more other stuff
>>     ...        // do still more other stuff
>>     yield()
>>     ...
>
> When you have no RTOS you have a yield() function call???  How?
All yield() has to do is save the state of the current task and advance
to the next task in the "queue".  In the big loop example, the "queue"
is fixed.

    task() {
        ...
        ...        // do something
        ...        // do something else
        ...        // do still more
        yield()
        ...        // do some other stuff
        ...        // do some more other stuff
        ...        // do still more other stuff
        yield()
        ...
    }

Imagine if yield() was:

    save current state (whatever that may be and however you want to save it)
    discard top stack frame (i.e., the frame that invoked yield())
    ret (i.e., return FROM task() to whatever invoked it!)

The magic you're failing to see is you can fudge the preamble to task()
(and all tasks) to effectively be:

    restore state (whatever it was and however it was saved)
    jump thru saved PC to the location after most recent yield()

This is what a simple scheduler would do in one "reschedule()" routine.
But, a scheduler would have to examine a queue of "ready" tasks to select
the next to execute.  Here, the loop has made it clear which will be the
next task to execute.  Which will *always* be the next to execute!

[In an ASM environment, yield can be a couple of opcodes!]
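[One well-known way to get this effect in plain C -- not Don's ASM implementation, just a sketch of the same idea -- is the protothread / Duff's-device trick: the "saved PC" is a line number, yield() is a return, and the "preamble" is a switch that jumps back to just past the last yield:]

```c
/* Protothread-style yield: the task's "program counter" is just a
   stored __LINE__ value, and YIELD() returns to the big loop.  The
   switch at TASK_BEGIN jumps back to the matching case label on the
   next call, i.e. "jump thru saved PC to the location after the most
   recent yield()". */
#define TASK_BEGIN(pc)  switch (*(pc)) { case 0:
#define YIELD(pc)       do { *(pc) = __LINE__; return; case __LINE__:; } while (0)
#define TASK_END(pc)    } *(pc) = 0

int step;       /* observable side effect, standing in for real work */
int task_pc;    /* this task's "saved program counter" */

void task(void) {
    TASK_BEGIN(&task_pc);
    step = 1;                /* do something        */
    YIELD(&task_pc);         /* back to the big loop */
    step = 2;                /* do some other stuff */
    YIELD(&task_pc);
    step = 3;                /* do still more       */
    TASK_END(&task_pc);      /* wraps around to the top next time */
}
```

[Each call to task() from the big loop resumes just past the last YIELD(). The cost of the trick is that locals must be static or global to survive across yields -- which is why saving real machine state, as in the ASM version, is more general.]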
On 6/25/2016 1:46 PM, Tim Wescott wrote:
> On Sat, 25 Jun 2016 01:07:41 -0700, Don Y wrote:
>
>> On 6/25/2016 12:52 AM, upsidedown@downunder.com wrote:
>>> On Fri, 24 Jun 2016 17:03:07 -0700, Don Y <blockedofcourse@foo.invalid>
>>> wrote:
>>>
>>>> On 6/24/2016 4:31 PM, Tim Wescott wrote:
>>>>> I don't think your "prioritization" scheme really works, though -- if
>>>>> a high-priority task happens to miss its cue right before the loop
>>>>> iterates then all the low- and mid-priority tasks get executed before
>>>>> the high-priority stuff.
>>>>
>>>> Priority is a misnomer.  It's a "relatively short word" that suggests
>>>> "larger quantum".  The "low_latency" task (list) tries to address
>>>> those tasks that need to "wake up more often".
>>>
>>> What does quantum have to do with priority in a real time environment?
>>
>> Who mentioned "real-time"?
---^^^^^^^^^^^^^^^^^^^^^^^^^^
>> "... I'm _not_ using an RTOS..."
>
> Who ever said that using an RTOS makes a system real-time (or at least
> correctly real-time)?  Who ever said that not using an RTOS kept a system
> from being real-time?
Reread your original post.  Where did you mention a "real-time
environment"?

    "I'm doing some small thing in a small processor"

For all I know, task_1 may be "compute first Fibonacci number" and task_2
"compute second Fibonacci number".  Presumably, the first being a
prerequisite for the second...
> An RTOS is a tool that makes certain real-time systems easier to code --
> but it by no means guarantees that real-time constraints will
> automatically be met.
Of course not! But, it is a measurable entity that can be characterized like any other component. So the *designer* can state with authority that particular SPECIFIED constraints of the real-time application can or can't be met.
On 6/25/2016 1:50 PM, Tim Wescott wrote:
> On Sat, 25 Jun 2016 08:27:54 -0700, Don Y wrote:
>
>> On 6/25/2016 4:00 AM, upsidedown@downunder.com wrote:
>>> On Sat, 25 Jun 2016 01:07:41 -0700, Don Y <blockedofcourse@foo.invalid>
>>> wrote:
>>>
>>>> On 6/25/2016 12:52 AM, upsidedown@downunder.com wrote:
>>>>> On Fri, 24 Jun 2016 17:03:07 -0700, Don Y
>>>>> <blockedofcourse@foo.invalid> wrote:
>>>>>
>>>>>> On 6/24/2016 4:31 PM, Tim Wescott wrote:
>>>>>>> I don't think your "prioritization" scheme really works, though --
>>>>>>> if a high-priority task happens to miss its cue right before the
>>>>>>> loop iterates then all the low- and mid-priority tasks get executed
>>>>>>> before the high-priority stuff.
>>>>>>
>>>>>> Priority is a misnomer.  It's a "relatively short word" that
>>>>>> suggests "larger quantum".  The "low_latency" task (list) tries to
>>>>>> address those tasks that need to "wake up more often".
>>>>>
>>>>> What does quantum have to do with priority in a real time environment?
>>>>
>>>> Who mentioned "real-time"?
>>>> "... I'm _not_ using an RTOS..."
>>>
>>> You started by priorities, which usually implies some (soft) real time
>>> functionality.
>>
>> "I've used (*really* slim!) multitasking executives that used the
>> ----------------------------------^^^^^^^^^^^^^^^^^^^^^^^
>> loop as a general framework for "scheduling" tasks -- but allowed
>> voluntary rescheduling directives to be invoked by the individual
>> "function calls" invoked from that loop:
>>
>> Multitasking does not imply Real-Time.  Likewise, Real-Time does not
>> imply Multitasking.
>>
>> "Priorities" just says some things are more important than others
>> (acknowledging the fact that there are only so many resources available
>> in a given implementation).  Or, are you deciding that a "time sharing"
>> system is a "real-time" system?
>>
>> (UN*X has "priorities" in its scheduler; is *it* "real-time"?)
>>
>>> A real time application doesn't necessarily need an RTOS.  A task loop
>>> like
>>
>> And, a real-time application need not consist of more than one task!
>> I.e., real-time does not imply multitasking.  MTOS != RTOS.
>>
>>> A task loop and an RTOS are not really that much different.  The task
>>> loop and non-preemptive RTOS can store the context in local static
>>> variables, while the pre-emptive RTOS stores the context into
>>> process-specific stacks.
>>
>> A task loop and an *MTOS* are functionally equivalent.  An RTOS is a
>> different beast, entirely.
>
> I think you are confused in your terminology.  A real-time operating
> system is a handy way to implement multitasking in a way that can meet
> real-time constraints.  So while not all multitasking operating systems
> are real-time, all real-time operating systems are multitasking.
No. A real-time operating system does not imply multitasking. And, multitasking does not imply real-time. I can make a single-threaded application that draws on services offered by an RTOS to achieve particular timeliness goals. I can make a multithreaded application that draws on services offered by an MTOS to achieve the illusion of parallelism.
> And, a task loop can be made real-time reasonably easily, as long as
> you're willing to live with writing every bit of code to the requirements
> of the shortest real-time deadline.
Then it's not an RTOS (or an MTOS), is it?  If it's an *OS*, what
services is it providing for you?

Welcome to the days of "program loaders" (predating OSs).
Hi Simon,

On 6/25/2016 10:59 AM, Simon Clubley wrote:

> I define a polling architecture in the general sense as one or more
> loops actively going out and asking {X}, {Y} or {Z} on an ongoing basis
> if they have anything to tell you or if they need you to give them some
> processing time.
But, that is a very specific type of application.  What would you call:

    while (FOREVER) {
        V = readADC();
        writeDAC(V);
    }

Imagine I had written it as:

    value_t V;

    while (FOREVER) {
        task1();
        task2(V);
    }

I.e., I'm sure there's a piece of code in my car that is reading the
accelerator pedal position and using that to control ignition.  Over and
over again.  It doesn't "ask" to see if my foot is on the pedal and read
the position only when that's found to be true.

When does "polling" stop being polling?  I.e., when all tasks are ALWAYS
"ready", what are you "asking"/polling?
> I define an interrupt architecture as something where no code runs until
> some state change occurs, maybe in an external door sensor, or maybe in
> an I/O device which completes an operation and has some results to
> deliver.  In either case, there's an interrupt and only then are CPU
> cycles consumed by the CPU deciding what needs to run next.
Again, another very specific class of applications.  Few systems sit (in
HALT) waiting for an interrupt -- and then returning to HALT.

In practice, most systems are AT LEAST "foreground-background" -- some
set of interrupts happening "at will" that feed some set of operations
that are happening "when not handling an interrupt".

This is the "poor man's" way of addressing timeliness constraints: move
anything with timing constraints into an ISR (foreground) and deal with
everything else in the background.  This ensures that you don't "miss"
anything by being too slow to "look for it" (polling interval).

And, inevitably, you clutter up the foreground with all those "important"
things: handling UART interrupts, NIC interrupts, timer/jiffy IRQ's,
display refresh IRQ, keyboard scanning, sound synthesis, etc.

And, if you're lucky, the hardware priorities associated with the
interrupts align with your "intrinsic" priority assessment for each of
these things (is a UART overrun IRQ as important as a UART receive IRQ?
After all, the character has already been OVERRUN, why rush to address
that fact??!).  But, often, they don't.  Or, need to change (but the
hardware might not allow reassigning priorities dynamically -- or, may
impose some arcane restrictions on these choices).

And, as more things compete for the foreground (which is a limited
resource), you find the background starving.  So, you are tempted to move
more code into ISR's to "ensure" they get handled.  Which, of course,
makes the foreground even more bloated... etc.

[This is why you encounter desktop systems that "lose time" when heavily
loaded.  Or, drop network packets, etc.  They often have to deal with a
variety of hardware configurations -- interrupt priorities! -- that they
couldn't address when the system was built.  So, you find remedies like
"Increase the clock speed" when the problem might be correctible with a
different hardware configuration]
> {X}, {Y} and {Z} can be anything; either some software construct such as
> a task or some physical construct such as an external sensor.  What
> matters is whether you are asking {X}, {Y} and {Z} for their current
> state (polling architecture) or if you are waiting for them to tell you
> (interrupt architecture).
>
> In the example you post below, I therefore consider that to be a classic
> polling architecture because the code is going out and asking the
> sensors on a continuous basis if something has happened.
So, generating Fibonacci numbers would be...?  Processing paychecks?
(or, would you see "asking for hours worked and wage rate" as "polling"
for current state?)  Generating a shipping manifest from the barcoded
labels on items passing a conveyor?
> If your code were to set up a range of interrupts from external sensors
> and then got on with something else or just stuck around in some Wait
> For Interrupt state, then I would instead consider that to be an
> interrupt architecture.
Without changing the example, if I claim that get_window_state() takes the ID argument provided, looks it up to determine the CAN address of a node that can read the state of that window sensor, crafts and sends a message to that node inquiring as to the current state, then waits for the reply (all of those comms actions involving some amount of interrupt activity), then what do I have? [Note that the code can continue to execute while these things are happening] This is why I lump all of these in "foreground-background".
> Simon.
>
> PS: The reason I've left your message fully intact below is so others can
> see your full example without them having to go to your original message.
>> // The Big Loop
>> while (FOREVER) {
>>     verify_all_windows_closed();
>>     verify_all_doors_locked();
>>     verify_no_water_leaks();
>>     handle_annunciator();
>> }
>>
>> verify_all_windows_closed() {
>>     while (FOREVER) {
>>         windowID = get_head(window_list);
>>         open = get_window_state(windowID);   // reschedule
>>         window_cracked = open ? windowID : 0;
>>         append(window_list, windowID);
>>     }
>> }
>>
>> verify_all_doors_locked() {
>>     while (FOREVER) {
>>         doorID = get_head(door_list);
>>         open = get_door_state(doorID);       // reschedule
>>         door_ajar = open ? doorID : 0;
>>         append(door_list, doorID);
>>     }
>> }
>>
>> handle_annunciator() {
>>     // simple state machine
>>     while (FOREVER) {
>> off:
>>         while !(window_cracked || door_ajar ... ) {
>>             yield();                         // all is well
>>         }
>>         annunciator(ON);                     // bad things!
>>
>> on:
>>         while (window_cracked || door_ajar ... ) {
>>             yield();                         // problem persists
>>         }
>>         annunciator(OFF);                    // all is well
>>
>>         yield();
>>     }
>> }
Hi Dimiter,

On 6/25/2016 11:08 AM, Dimiter_Popoff wrote:

> [no, no icicles here any longer 30+C actually, very nice > - for a change... ]
Well, to be honest, I kinda was figuring they'd be melted, by now! :>
> may be "calling" rather than polling :-).
>
> I did it just once for a project out of curiosity - to see how much
> effort it would save me by not having a decent scheduler.
> It saved nothing at best; probably it cost me more work.
Removing the scheduler, IME, only makes sense on really small designs.
But, the point is, you don't have to give up multitasking just because
you've done away with the scheduler!

IMO, the biggest technological productivity enhancement is the
decomposition that multitasking affords.  It's easier to get the code
*right* as well as getting it *done*.
> On another post of yours re overhead of a "calling loop" vs. a true
> scheduler, I would say this is not a good deal at all.
See above. I can hack together a really "rich" development environment on a really *tiny* MCU by resorting to this sort of trickery. When you're dealing with a few *hundred* bytes of RAM, more "formal" methods just aren't practical. But, resorting to ad hoc spaghetti coding for anything of substance (little RAM doesn't mean little ROM!) is a Rx for disaster!
> Even a scheduler as complex as the one DPS has - it handles priorities,
> provides fairness (so no one can hog the system), allows cooperative
> task exits (is done all the time actually) while forcing tasks out
> after some predefined time, gives a path to interrupts to kick the
> current task out and strongly suggest the one the interrupt wants
> to execute now etc. etc. - is responsible for < 1% of the total
> overhead.  (Under system load that is; if none of the tasks has
> anything to do and keeps quitting this is what will happen
> all the time, obviously).
>
> And having a good scheduler underneath makes you just forget about
> scheduling, I can't remember when I last had to think about it.
My schedulers have become increasingly complex, with experience.  I want
the "system" to do A LOT for me -- so I can concentrate on solving a
problem at hand (instead of managing processor resources).  I want
per-process protection domains, kernel-mediated IPC, seamless RPC, the
ability to detect when a deadline has been missed and "handle" that
event in a timely fashion, decide which tasks can perform which actions
on which resources, etc.  Let the machine deal with all this bookkeeping
so I can concentrate on addressing the problem...

Currently, I have a "per core" scheduler that handles scheduling
resources for a "single virtual CPU".  As I'm now seeing that I can buy
multicore processors for what I've budgeted for single cores in the
past, I'm now adding scheduling for the multiple cores on a particular
physical processor.

And, beyond that, a "workload scheduler" that decides which processors
(nodes) will host which jobs (I allow processes to physically migrate at
run time to exploit available "remote" hardware as the *system* load
increases).

Of course, there's a lot less "science" behind these last two issues as
it's only recently been practical to distribute workloads over multiple
closely (and loosely) coupled processors.  Especially when there are
nontrivial communication costs involved in "leaving the local node".

OTOH, your algorithms don't have to be "ideal" when MIPS are so cheap!

[Off to bury a friend...  :< ]
