EmbeddedRelated.com

Common name for a "Task Loop"

Started by Tim Wescott June 24, 2016
Don Y wrote:

>>> It means <something> has to actively detect the various conditions
>>> that might "make ready" a particular task. And, to potentially do
>>> it every time that set of conditions proves to be true.
>>
>> The process that generates an event will set the appropriate event
>> flag register bit. No problem.
>
> It means the signaling needs of one task have to be implemented by
> another. If the signaled task decides it wants to respond to a different
> set of conditions, then the signaling task has to be modified to
> detect those, instead of the original set.
What has a signaling task to do with that? It just sets the defined bits in
the register. It is up to the signaled task to watch for the bits it is
interested in. If during processing it is interested in other events, it just
watches those. The task that creates the event does not even have to know
this.

-- Reinhardt
On 6/26/2016 5:35 AM, upsidedown@downunder.com wrote:
> On Sun, 26 Jun 2016 05:11:32 -0700, Don Y
> <blockedofcourse@foo.invalid> wrote:
>
>>> On RSX-11 you posted an read/write QIO request and defined an event
>>> flag (typically #5 for terminal I/O), the OS then set this event flag
>>> when something happened on the serial line, such as received a
>>> character, had a framing error etc. The task waiting for the event
>>> flag then determined what was the cause of the recent flag setting.
>>
>> So, the signaled task has to wake up to decide if it really
>> *wants* to be awakened.
>
> You must be trolling.
I am trying to explain an aspect of the "big loop" implementation that I presented up-thread. Namely, that the tasks can examine <whatever> they consider to be the criteria that would cause them to "wake up" instead of relying on something else to *notify* them (via an event, IPC, etc.) when that "something else" detects the condition that is of interest.
> If I post a read/write channel, the expected result would be byte
> received or byte sent. However, if the result is different, such as
> timeout, framing error, parity error etc. it is as important to
> receive a notification of such events. It is much more convenient to
> have a channel specific event flag than having to search for the
> reason from a multiline serial card status registers.
You're talking about a hardware interface driving event generation. But events (i.e., any "readiness criteria") may not be encoded in a "status register". How do you signal "garage door *almost* open" vs. "garage door almost *closed*"? Or, "missile almost at target" vs. "missile recently fired"?

How do you know which criteria are of interest to which tasks? E.g., a task might want to be signaled at each of these times to update its control of an exhaust fan running in the garage (it wants to turn on as the door is opening, but turn off before the door is fully opened, as it expects <something> to be transiting the garage threshold once the door is FULLY opened; similarly, turn off before the door is fully closed, as to do otherwise would result in back pressure in the garage and against the door). Something else might want to be notified when the door HAS closed or opened, *completely*.

[Events are similar to signals. And, typically signals are used sparingly because they tend to be crude.]

The *advantage* to events (and IPC's) is that they can make "communication paths" more explicit. Managing communication interconnects is a big part of managing system complexity. And, a valuable diagnostic and profiling tool!

E.g., in my OS, you can't talk to <something> unless you have a "capability" to do so (the kernel simply refuses to let you issue an IPC/RPC by NOT letting you even KNOW the handle for the other party!). And, getting a "capability" is predicated on having a "credential" that allows you to perform certain actions on certain (local or remote) objects. I.e., you can't talk to the "file system" unless you have been given a capability anchored in the "file system object" and a credential that *it* recognizes as allowing you to initiate actions on it (open file, etc.).

At the same time, it (or, an agency acting on its behalf) can't BOTHER YOU unless you've previously set up a capability by which it can do so!
A pleasant side effect of this is that if it starts "pestering you" (behaving badly), you can simply delete the associated capability and NEVER waste an opcode fetch responding to its unwanted interactions! (I.e., it can't effectively mount a denial of service attack -- forcing you to examine each of its incoming messages only to DISCARD them, burning *your* CPU cycles in the process -- because its kernel simply won't accept messages from it destined for you; it burns *its* CPU cycles to no avail!)

[This is NOT possible in the "big loop" example I illustrated; you never know who might go poking around your "state" or when!]

So, at any time, the kernel (*your* local kernel) knows who you *might* be talking with. And, the types of things you will be allowed to "say"!

[Imagine knowing who *might* want to call a particular function at any particular time in your application's execution. And, knowing that anyone NOT on that (dynamically updated) list CAN NOT invoke the function!]

You (the kernel) can leverage this information at run-time. E.g., if none of the tasks that can generate any of the events upon which you are waiting are "ready" (eligible to run), then surely *you* won't be ready (now or anytime soon!).

In my case, if all of the "capabilities" (communication endpoints) that the kernel has currently registered for you exist on the *local* node, then all of your communications will be to other tasks *on* the local node (i.e., they will all be IPC's -- no RPC's!). Or, if "many" of them reside on some *other* particular node, then there might be an advantage to "physically" moving your task *to* that other node (so more of your communication paths are local IPC's instead of their current RPC's).

[My workload scheduler relies on this to figure out where things might want to reside and which relocation targets *could* prove to be "costly" choices.]

You can't make these sorts of optimizations at compile time.
By contrast, you can do a static analysis of a module to determine which functions it will invoke -- and which *those* will invoke, etc. Yet you can't determine that a priori at run-time (you don't know what code will execute until it has executed -- unless you force all function invocations to be done through a "vector table").
On 6/26/2016 6:18 AM, Reinhardt Behm wrote:
> Don Y wrote:
>
>>>> It means <something> has to actively detect the various conditions
>>>> that might "make ready" a particular task. And, to potentially do
>>>> it every time that set of conditions proves to be true.
>>>
>>> The process that generates an event will set the appropriate event
>>> flag register bit. No problem.
>>
>> It means the signaling needs of one task have to be implemented by
>> another. If the signaled task decides it wants to respond to a different
>> set of conditions, then the signaling task has to be modified to
>> detect those, instead of the original set.
>
> What has a signaling task to with that. It just sets the defined bits in
> the register.
> It is up to the signaled task to watch for the bits it is interested in.
> If during processing it is interested in other events it just watches
> these. The task that creates the event does not even have to know this.
Someone codes the signaling task. He/she writes code that sets specific bits. When/why? What criteria does it use to determine which bits should be set/raised and when?

Would it be wise to set an event flag when an ESCape character is received? Why would the signaling task even consider such a silly notion?!

OTOH, perhaps reception of an ESCape character is *exactly* what the signaled task is interested in! If, instead, the signaling task raises an event for "character received", then the signaled task wakes up, sees that it is NOT an ESCape character and <yawns>: "Not interested. Wake me when something INTERESTING happens..."

Silly? Consider receipt of a BREAK, XON or XOFF. Three "special" characters often encountered in *generic* serial comms. Each signals some exceptional processing (stop transmission, restart transmission, data link escape, etc.) likely to be handled by some "exception" code.

What if ESCape was the equivalent of an STX in the protocol that the *signaled* task implements? I.e., *until* an ESCape (STX) is detected, it has no interest in any of the incoming data.

You've migrated requirements for one task into another's implementation. To NOT do so is equivalent to my saying "let each task decide what is important for it to 'become ready'" (letting it block until a character is received is just an optimization).
On Sun, 26 Jun 2016 07:07:34 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

> On 6/26/2016 6:18 AM, Reinhardt Behm wrote:
>> Don Y wrote:
>>
>>>>> It means <something> has to actively detect the various conditions
>>>>> that might "make ready" a particular task. And, to potentially do
>>>>> it every time that set of conditions proves to be true.
>>>>
>>>> The process that generates an event will set the appropriate event
>>>> flag register bit. No problem.
>>>
>>> It means the signaling needs of one task have to be implemented by
>>> another. If the signaled task decides it wants to respond to a different
>>> set of conditions, then the signaling task has to be modified to
>>> detect those, instead of the original set.
>>
>> What has a signaling task to with that. It just sets the defined bits in
>> the register.
>> It is up to the signaled task to watch for the bits it is interested in.
>> If during processing it is interested in other events it just watches
>> these. The task that creates the event does not even have to know this.
>
> Someone codes the signaling task. He/she writes code that sets specific
> bits. When/why? What criteria does it use to determine which bits
> should be set/raised and when?
The person (the system architect) who divides the system into individual tasks is also responsible for the messages between the tasks -- or, in the event-flag case, for which event flags are used.

It is that simple. Period.
Don Y wrote:

> On 6/26/2016 6:18 AM, Reinhardt Behm wrote:
>> Don Y wrote:
>>
>>>>> It means <something> has to actively detect the various conditions
>>>>> that might "make ready" a particular task. And, to potentially do
>>>>> it every time that set of conditions proves to be true.
>>>>
>>>> The process that generates an event will set the appropriate event
>>>> flag register bit. No problem.
>>>
>>> It means the signaling needs of one task have to be implemented by
>>> another. If the signaled task decides it wants to respond to a different
>>> set of conditions, then the signaling task has to be modified to
>>> detect those, instead of the original set.
>>
>> What has a signaling task to with that. It just sets the defined bits in
>> the register.
>> It is up to the signaled task to watch for the bits it is interested in.
>> If during processing it is interested in other events it just watches
>> these. The task that creates the event does not even have to know this.
>
> Someone codes the signaling task. He/she writes code that sets specific
> bits. When/why? What criteria does it use to determine which bits
> should be set/raised and when?
>
> Would it be wise to set an event flag when an ESCape character is
> received? Why would the signaling task even consider such a silly notion?!
It does not and was never intended to do this.
> OTOH, perhaps reception of an ESCape character is *exactly* what the
> signaled tasks is interested in! If, instead, the signaling task
> raises an event for "character received", then the signaled task
> wakes up, sees that it is NOT an ESCape character and <yawns>: "Not
> interested. Wake me when something INTERESTING happens..."
>
> Silly? Consider receipt of a BREAK, XON or XOFF. Three "special"
> characters often encountered in *generic* serial comms. Each
> signals some exceptional processing (stop transmission, restart
> transmission, data link escape, etc.) likely to be handled by
> some "exception" code.
>
> What if ESCape was the equivalent to an STX in the protocol that the
> *signaled* task implements? I.e., *until* an ESCape (STX) is detected,
> it has no interest in any of the incoming data.
>
> You've migrated requirements for one task into another's implementation.
No, you invented this interpretation.
> To NOT do so is equivalent to my saying "let each task decide what
> is important for it to 'become ready'" (letting it block until a character
> is received is just an optimization)
That was the original idea of this event register. The signaling task does not care what specifics about, e.g., a serial input have happened. It just signals that _something_ has happened on the serial channel. The signaled task wakes up and decides if it is really interested in this, because it and only it knows what is interesting to it.

You invent highly complicated stuff just to show that your "great OS" does this better. You move all this into a task where it just does not belong. When I implement a protocol, the protocol handling is done in that task, not somewhere else.

For example, in my avionics system the GPS receiver task does not care if a waypoint has just been reached. That is just not its job. Its job is to receive, decode and inform others of the present position. Some task higher up will decide what to do with this information. For this it might need data from other tasks that the GPS task does not even have, should not have, and should not be bothered with.

If I had designed the system your way I would have never got it through certification by the authorities, because the design intertwines everything with everything. I design to the good old Unix philosophy: one task does one job and does it well. Your ideas remind me of Redmond.

-- Reinhardt
On 6/26/2016 7:35 AM, upsidedown@downunder.com wrote:
> On Sun, 26 Jun 2016 07:07:34 -0700, Don Y
> <blockedofcourse@foo.invalid> wrote:
>
>> On 6/26/2016 6:18 AM, Reinhardt Behm wrote:
>>> Don Y wrote:
>>>
>>>>>> It means <something> has to actively detect the various conditions
>>>>>> that might "make ready" a particular task. And, to potentially do
>>>>>> it every time that set of conditions proves to be true.
>>>>>
>>>>> The process that generates an event will set the appropriate event
>>>>> flag register bit. No problem.
>>>>
>>>> It means the signaling needs of one task have to be implemented by
>>>> another. If the signaled task decides it wants to respond to a different
>>>> set of conditions, then the signaling task has to be modified to
>>>> detect those, instead of the original set.
>>>
>>> What has a signaling task to with that. It just sets the defined bits in the
>>> register.
>>> It is up to the signaled task to watch for the bits it is interested in. If
>>> during processing it is interested in other events it just watches these.
>>> The task that creates the event does not even have to know this.
>>
>> Someone codes the signaling task. He/she writes code that sets specific
>> bits. When/why? What criteria does it use to determine which bits
>> should be set/raised and when?
>
> The person (the system architect) who divides the system into
> individual tasks is also responsible for the messages between the
> tasks or in the event flag case which event flags are used.
>
> It is that simple. Period.
You're busy chasing a rabbit -- and ignoring the fact that *pork* is on the menu today!

Read back up-thread. I am indicating a capability of a *particular* implementation. I am not suggesting Tim use it for *any* of his projects -- he can opt to implement an ANALOG controller if it suits *his* needs.

OTOH, in a resource constrained environment, every OS functionality takes up resources (bytes of ROM and bytes of RAM, CPU cycles, etc.). The multitasking EXECUTIVE (note I didn't even call it an "OS"!) described earlier is extremely capable of providing a rich set of features without "stilting" the implementation (see the example app I mentioned upthread).

In my current environment, it's expensive for a thread to "see if it should wake up" -- everything that it might be interested in lies on the far side of a protection barrier. I.e., it would need to invoke an IPC/RPC just to *check* on <whatever>. As a result, tasks *block* on communication channels, waiting for a "message of interest" to wake them up.

But, I've ensured that ONLY that "message of interest" can be delivered over that particular channel. E.g., if a task is waiting for a "power fail" (or "power restored") message on a channel, it will NEVER encounter a "character received" message on that channel. Furthermore, the "power" message will only be accepted from a task previously authorized to *send* such a message.

Any "unauthorized" task attempting to send to that channel is blocked at *its* kernel interface. Any "unacceptable" message from the task authorized to use that channel is handled by the receiving kernel -- and generates an exception which, by default, kills the task (it's the equivalent of trying to access a protected resource -- you're either acting in a rogue manner or are faulty; in either case, you should be terminated).

As such, my current implementation gives me the ability to decide what *a* particular task is interested in dynamically, AT RUN TIME.
Because the system is not static; applications can be loaded and unloaded and new applications added while the system is running. How does your "system architect" handle that? Ship one with every system to reengineer the system each time a user's needs change?
On 6/26/2016 7:37 AM, Reinhardt Behm wrote:

>>>>>> It means <something> has to actively detect the various conditions
>>>>>> that might "make ready" a particular task. And, to potentially do
>>>>>> it every time that set of conditions proves to be true.
>>>>>
>>>>> The process that generates an event will set the appropriate event
>>>>> flag register bit. No problem.
>>>>
>>>> It means the signaling needs of one task have to be implemented by
>>>> another. If the signaled task decides it wants to respond to a different
>>>> set of conditions, then the signaling task has to be modified to
>>>> detect those, instead of the original set.
>>>
>>> What has a signaling task to with that. It just sets the defined bits in
>>> the register.
>>> It is up to the signaled task to watch for the bits it is interested in.
>>> If during processing it is interested in other events it just watches
>>> these. The task that creates the event does not even have to know this.
>>
>> Someone codes the signaling task. He/she writes code that sets specific
>> bits. When/why? What criteria does it use to determine which bits
>> should be set/raised and when?
>>
>> Would it be wise to set an event flag when an ESCape character is
>> received? Why would the signaling task even consider such a silly notion?!
>
> It does not and was never intended to do this.
Why *shouldn't* it? Why is "any character received" the criterion?

We're NOT designing a general purpose computer on which general purpose applications execute! We *know* what is "of interest" at any given point in the execution.
>> OTOH, perhaps reception of an ESCape character is *exactly* what the
>> signaled tasks is interested in! If, instead, the signaling task
>> raises an event for "character received", then the signaled task
>> wakes up, sees that it is NOT an ESCape character and <yawns>: "Not
>> interested. Wake me when something INTERESTING happens..."
>>
>> Silly? Consider receipt of a BREAK, XON or XOFF. Three "special"
>> characters often encountered in *generic* serial comms. Each
>> signals some exceptional processing (stop transmission, restart
>> transmission, data link escape, etc.) likely to be handled by
>> some "exception" code.
>>
>> What if ESCape was the equivalent to an STX in the protocol that the
>> *signaled* task implements? I.e., *until* an ESCape (STX) is detected,
>> it has no interest in any of the incoming data.
>>
>> You've migrated requirements for one task into another's implementation.
>
> No, you invented this interpretation.
Why signal *anything*? You are doing this FOR THE BENEFIT OF some other task. But you are arbitrarily deciding on some *particular* notion of "important action/event" -- regardless of the needs of the particular application embodied in the EMBEDDED SYSTEM.
>> To NOT do so is equivalent to my saying "let each task decide what
>> is important for it to 'become ready'" (letting it block until a character
>> is received is just an optimization)
>
> That was the original idea of this event register. The signaling task does
> not care what specifics about e.g a serial input has happened. It just
> signals that _something_ has happened on the serial channel. The signaled
> task wakes up and decides if it is really interested in this because it and
> only it knows what is interesting to it.
"because it and only it knows what is interesting to it"

Isn't this EXACTLY what I was saying about the example multitasking executive I presented?

"OTOH, if a task can sit and watch for whatever it considers important (assuming you aren't using an OS), then it can implement whatever tests *it* deems appropriate -- now, and a potentially different set, later."

My point is that the *task* (task_N in Tim's original form) decides when task_N_ready is true -- instead of relying on something *else* to set it "offstage" (in an ISR, or by one of the task_n_update functions): "because it and only it knows what is interesting to it" [Gee, imagine that!]

If a task has to be reinvoked (woken up) just to check to see if it *wanted* to wake up, then the cost of "waking up" becomes a factor in overall system performance. Every "active" task takes some amount of resources from the tasks that really *need* to be running.

OTOH, in the executive I presented, this cost is negligible (*9* opcode fetches for a context switch). So, for the task to run a line or two of code *checking* to see if conditions are right for it to "wake up" is peanuts. E.g., I can "block" on a timer by just checking to see if its current value is "0", or not, and either continuing its execution (when it reaches "0") or "ret"-ing to the next task in the "big loop".
> You invent highly complicated stuff just to show that your "great OS" does > this better. You move all this into a task where it just does not belong.
That's your opinion. Of course, you don't have any idea of the nature of my application -- nor its requirements -- so pretty presumptive of you to think it *doesn't* "do it better".
> When I implement a protocol the protocol handling is done in that task not > somewhere else.
So, the UART ISR handles the protocol?
> For example in my avionics system the GPS receiver task does not care if a > waypoint has just been reached. That is just not its job. Its job is to > receive, decode and inform others of the present position. > Some task higher up ill decide what to do with this information. For this it > might need data from other tasks that the GPS task does not even have and > should not have and bothered with. > If I had designed the system your way I would have never got it through > certification by the authorities because the design intertwines everything > with everything. > I design to the good old Unix philosophy, on task does one job does this > good. Your ideas remind me of the Redmond.
You've missed the point of my implementation entirely! It is *exactly* the "do one thing well" approach.

The power monitor cares not what use is made of its information. The barcode reader doesn't know what a barcode *means* -- or even if it should be accepted at this point in the user interface protocol. It just guarantees that the barcode is valid before signaling the Operator Interface task of its availability.

[Hint: the exemplar device is a medical instrument; FDA approvals required!]

If the Operator Interface task is not interested in the state of the power subsystem -- or the availability of a barcode -- it doesn't respond to those events/conditions. The power subsystem doesn't decide how a power fail is handled -- it just provides the notification mechanisms. The system can choose to ignore these alerts (at its own peril) or defer them -- without *losing* them.

UART0 has no idea what UART1 is doing -- even though the ISR's are two instances of the same, exact code operating on different hardware resources, and the upper layers implement two instances of the exact same protocol on the two devices simultaneously.

Each task thinks it is the only thing running in the system -- other than the things on which it relies. E.g., I can excise any task and know exactly what the effect on the system will be. Exactly which capabilities/functionalities will disappear.
On Sat, 25 Jun 2016 22:00:06 -0700, Dave Nadler wrote:

> On Friday, June 24, 2016 at 12:37:16 PM UTC-6, Tim Wescott wrote:
>> ... So -- is there a common Google-able term for this?
>
> Kludge?
>
> OK, OK, I've done it too...
I don't think it's a kludge in a small enough application. It has its downsides, true, but it works just fine if you're careful.

-- 
Tim Wescott
Control systems, embedded software and circuit design
I'm looking for work! See my website if you're interested
http://www.wescottdesign.com
On 2016-06-25, Tim Wescott <seemywebsite@myfooter.really> wrote:
> Except that -- for my code at least -- if you add a command at the end to
> sleep if no tasks are ready, then it's no longer a polling by your
> definition architecture.
Yes, I would agree with that; it's no longer a purely polling architecture in that case.

If you only come out of sleep as a result of, say, an external interrupt causing a task to become ready, then I would call that more of an interrupt architecture.

Simon.

-- 
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
On 2016-06-25, Don Y <blockedofcourse@foo.invalid> wrote:
> Hi Simon,
>
> On 6/25/2016 10:59 AM, Simon Clubley wrote:
>
>> I define a polling architecture in the general sense as one or more loops
>> actively going out and asking {X}, {Y} or {Z} on an ongoing basis if they
>> have anything to tell you or if they need you to give them some processing
>> time.
>
> But, that is a very specific type of application. What would you call:
>
>     while (FOREVER) {
>         V = readADC();
>         writeDAC(V);
>     }
Polling.
> Imagine I had written it as:
>
>     value_t V;
>
>     while (FOREVER) {
>         task1();
>         task2(V);
>     }
Polling unless task1() consumes no CPU cycles until an external event wakes it up to process some data.
> I.e., I'm sure there's a piece of code in my car that is reading
> the accelerator pedal position and using that to control ignition.
> Over and over again. It doesn't "ask" to see if my foot is on the
> pedal and read the position only when that's found to be true.
>
> When does "polling" stop being polling? I.e., when all tasks
> are ALWAYS "ready", what are you "asking"/polling?
No, it stops being polling when you can insert a WFI opcode in your main loop and still have your application work.

If part of a larger system, it stops being polling when the scheduler doesn't have to schedule any time for your loop until some external event happens.
>> {X}, {Y} and {Z} can be anything; either some software construct such as
>> a task or some physical construct such as an external sensor. What matters
>> is whether you are asking {X}, {Y} and {Z} for their current state
>> (polling architecture) or if you are waiting for them to tell you
>> (interrupt architecture).
>>
>> In the example you post below, I therefore consider that to be a classic
>> polling architecture because the code is going out and asking the sensors
>> on a continuous basis if something has happened.
>
> So, generating fibonacci numbers would be...?
> Processing paychecks? (or, would you see "asking for hours worked
> and wage rate" as "polling" for current state?)
Are you trolling? :-)

Neither of those have anything to do with reading sensor data.
> Generating a shipping manifest from the barcoded labels on
> items passing a conveyor?
Interrupt if the scanner delivers an interrupt upon scanning a parcel's barcode or polling otherwise.
>> If your code were to setup a range of interrupts from external sensors
>> and then got on with something else or just stuck around in some Wait
>> For Interrupt state, then I would instead consider that to be an
>> interrupt architecture.
>
> Without changing the example, if I claim that get_window_state()
> takes the ID argument provided, looks it up to determine the CAN
> address of a node that can read the state of that window sensor,
> crafts and sends a message to that node inquiring as to the current
> state, then waits for the reply (all of those comms actions involving
> some amount of interrupt activity), then what do I have?
It's still inquiring for the current state, so it's a polling architecture.

Simon.

-- 
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
