EmbeddedRelated.com
Forums

Common name for a "Task Loop"

Started by Tim Wescott June 24, 2016
On 6/28/2016 1:54 PM, Don Y wrote:
> On 6/28/2016 9:49 AM, Rob Gaddi wrote:
>> Which is how I usually implement it, but the first thing in A is
>>
>>   switch (state) {
>>   case IDLE:
>>     if (!input_ready) break;
>>
>> The tasks usually wind up having to be state machines anyhow; why bother
>> pulling one transition out to a different place?
>
> Does each invocation of "A" have to begin at the switch?
> I.e., do you implement "substates" so you can resume at a different
> point *in* "IDLE" on the next invocation?
> I.e., how does "A" return to "resume" on the next invocation (do you have
> to split it out as an "intermediate sub-state"?) or to the block *inside*
> complain()?
Said another way, is "state" the equivalent of (and only means of representing) the "virtual program counter" for 'A'?
Don Y wrote:

> On 6/28/2016 1:54 PM, Don Y wrote:
>> On 6/28/2016 9:49 AM, Rob Gaddi wrote:
>>> Which is how I usually implement it, but the first thing in A is
>>>
>>>   switch (state) {
>>>   case IDLE:
>>>     if (!input_ready) break;
>
> [...]
>
> Said another way, is "state" the equivalent of (and only means of
> representing) the "virtual program counter" for 'A'?
Yup.  Each invocation of A runs through to completion without blocking;
if it's waiting for I/O then you're in

  case SPI_WAIT:
    if (!spi_finished) break;

--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order.  See above to fix.
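Rob's run-to-completion pattern can be sketched end to end.  This is a
minimal illustration, not his actual code: the state names, flags, and
the "kick off SPI" placeholder are all invented for the example.

```c
#include <stdbool.h>

/* Hypothetical task states and I/O flags -- invented for illustration. */
enum state { IDLE, SPI_WAIT, DONE };

static enum state state = IDLE;
static bool input_ready  = false;
static bool spi_finished = false;

/* One invocation of task A: runs to completion without ever blocking.
 * If the work isn't ready, it simply returns; the big loop will call
 * it again on the next pass, and the switch resumes where we left off. */
void task_A(void)
{
    switch (state) {
    case IDLE:
        if (!input_ready)
            break;              /* nothing to do yet; try again next pass */
        /* (kick off the hypothetical SPI transfer here...) */
        state = SPI_WAIT;
        break;

    case SPI_WAIT:
        if (!spi_finished)
            break;              /* transfer still in flight */
        state = DONE;
        break;

    case DONE:
        break;                  /* terminal state for this sketch */
    }
}
```

The big loop then just calls `task_A()` (and its siblings) forever; the
`state` variable is the only "program counter" the task keeps between
invocations.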
On 28.06.2016 21:02, Don Y wrote:

> I see "polling" as different from "event driven".  I *don't* see
> event driven as requiring an interrupt (foreground) system!
>
> I.e., why can't an "event" be totally synthetic:  10,000 iterations
> of the "big loop"?  (before folks *dedicated* timer hardware to
> implementations, you could still "write code"  :> )
IMHO: because that would do violence to the language.  Events are things
that just happen whenever they will, often without anyone having asked
for them.  "Polling" is when somebody goes around asking people
more-or-less silly questions.  That's what these things mean in
non-technical speech, and it's generally best to stay close to those
original meanings when we tech people appropriate them for our own use.

So the difference between polling and interrupt-/event-driven operation
is one of direction of activity.  It's the difference between pulling
and pushing data from one place to another.  One is where the doorbell
rings; the other is where you go look if there's mail in the box.
> Or, why can't an "event" be created by POLLING something?
It can be created by it, but it cannot _be_ it.
> E.g., the "line frequency clock" I described elsewhere just *looks*
> to see if a "zero crossing" has been detected on the AC mains
Actually no, that's not the event.  The event was when the mains _did_
cross the line, not when you got around to looking if it had.

Whether that's event-driven or polled depends on _how_ you acquire the
information that the line is being crossed.  If you just look if the
sign has changed, that's polling.  If you have hardware that fires some
interrupt the very instant the line is crossed, that's event-driven.

The distinction is much less global than one might think, though: once
the information has been handed over to software, the nature of its
handling can change at any time (and repeatedly):

* You get an interrupt and just set a flag (or increase some counter)?
  That's an event turned into input for later polling.

* You noticed some condition turned true and call a function?
  That's polled information being handled in an event-driven fashion.
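Both bullets above fit in a few lines of code.  A minimal sketch,
assuming a hypothetical zero-crossing interrupt; the function and
variable names are invented, and on real hardware `zero_cross_isr()`
would be wired to the comparator/capture interrupt rather than being a
plain function.

```c
#include <stdbool.h>

static volatile bool zero_cross_seen = false;  /* flag set by the "event" side */
static unsigned      line_ticks      = 0;      /* count maintained by the loop */

/* Event-driven side: hardware pushes the event at us.  The ISR does the
 * bare minimum -- it turns the event into state for later polling. */
void zero_cross_isr(void)
{
    zero_cross_seen = true;
}

/* Polled side: the big loop pulls that state whenever it gets around to
 * it -- polled information handled in an event-driven fashion. */
void line_clock_poll(void)
{
    if (zero_cross_seen) {
        zero_cross_seen = false;
        line_ticks++;          /* advance the "line frequency clock" */
    }
}
```

Note that the event (the actual zero crossing) and its handling (the
next pass of the loop) are decoupled in time; only the flag connects
them.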
On 28.6.2016 г. 17:40, Tim Wescott wrote:
> On Tue, 28 Jun 2016 06:20:43 +0300, Dimiter_Popoff wrote:
>
>> On 28.6.2016 г. 05:54, Simon Clubley wrote:
>>> I still think that's a polling loop because you are reaching out to the
>>> sensor and asking it for it's current value.
>>
>> "Polling" in programming means "as opposed to interrupt handling".
>> How is this opposed to "interrupt handling"?
>>
>> What you suggest to be polling is simply a loop of calls to subroutines.
>> A "call loop" is what describes it - although it does not matter a lot
>> what word is used as long as it is not an already widely accepted one
>> like "polling" which you want to redefine.  Nothing wrong with that of
>> course - as long as you don't have to communicate with other people
>> using your redefined term.
>>
>> So much talk about so little :-).  Although Tim's topic idea worked,
>> produced quite a discussion.
>
> Well, I was hoping for a name of a design pattern, and I'm still not
> happy about my choices.  So from that perspective it's a wash.
>
> "Super loop" seems closest, but it seems to more capture the notion of
> _always_ executing A, B, C, rather than executing only those bits that
> are ready.
I encounter the "super loop" term for the first time here, but why not.
Although I see nothing "super" about it, to me this is still an endless
loop of calls to subroutines, any of which may opt to do something or
to just return if it has nothing to do.

But like I said earlier I have almost never used this approach; it
smells of oversimplifying - thus costing more than a decent scheduler
does.  Implement loops, state machines etc. as you like within tasks,
but having a true - preemptive, allowing cooperative operation -
scheduler costs very little in both machine resources and effort to put
together, so I see no point in trying to avoid it.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI             http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
On 6/28/2016 2:09 PM, Rob Gaddi wrote:
>> Said another way, is "state" the equivalent of (and only means of
>> representing) the "virtual program counter" for 'A'?
>
> Yup.  Each invocation of A runs through to completion without blocking;
> if it's waiting for I/O then you're in
>
>   case SPI_WAIT:
>     if (!spi_finished) break;
So, if you were bit-banging the SPI interface, you'd have to keep the
'substate' (of "SPI_WAIT") in a "spi_clock_counter" (so you could issue
*a* clock, then relinquish the processor for "yet another big loop
iteration" while remaining in the SPI_WAIT state) *or* issue the
individual clocks "in-line" and ensure they actually make it onto the
I/O pins sequentially with the required time between them (determined
by the device).

In my case, this would be (*inline* with whatever was issuing the SPI):

  count = CLOCKS_REQUIRED;
  while (count > 0) {
    issue_clock(HIGH);
    load_timer(SPI_TIMER, SPI_HIGH_DURATION);
  pause_while_high:
    wait_timer(SPI_TIMER);
    issue_clock(LOW);
    load_timer(SPI_TIMER, SPI_LOW_DURATION);
  pause_while_low:
    wait_timer(SPI_TIMER);
    count--;
  }

i.e., the first load_timer() would effectively set the PC to
"pause_while_high" AFTER initializing the timer (though it would "ret"
to the caller after having done so).

Each wait_timer() would examine the specified timer and become a "ret"
iff the timer was not-expired.  Otherwise, it becomes a noop -- allowing
the code to advance to the next statement.

All of this sitting *within* your "IDLE" (or <whatever>) state.

[Other semantics are possible; this one has proven to be the most
expressive, IME]

So, you can do things like:

  count = CLOCKS_REQUIRED;
  load_timer(SPI_TIMEOUT, MAX_SPI_CYCLE_DURATION);
  while (count > 0) {
    issue_clock(HIGH);

    load_timer(SPI_TIMER, SPI_HIGH_DURATION);
  pause_while_high:
    if (check_timer(SPI_TIMEOUT))
      goto ABORT;
    wait_timer(SPI_TIMER);

    issue_clock(LOW);

    load_timer(SPI_TIMER, SPI_LOW_DURATION);
  pause_while_low:
    if (check_timer(SPI_TIMEOUT))
      goto ABORT;
    wait_timer(SPI_TIMER);

    count--;
  }

I.e., you can run code while waiting (instead of being BLOCKED in the
wait).

[Of course, this doesn't make sense for an SPI interface; OTOH, it can
be helpful for implementing "STUCK KEY" timers for keypads ALONGSIDE
the "DEBOUNCE" timer!  In the real world, this seems to happen quite a
lot: starting a motor and waiting for it to hit a limit switch -- yet
KNOWING that it can't run "forever" unless the switch or the mechanism
are broken!]
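For readers wondering what sits underneath load_timer()/check_timer():
here is one plausible sketch, assuming a free-running tick counter
bumped by a periodic interrupt (or every Nth pass of the big loop).
This is an assumption about one possible implementation, not Don's
actual code; in particular, the resume-at-a-label behaviour of
wait_timer() is *not* reproduced, but check_timer() alone is enough for
the STUCK-KEY-alongside-DEBOUNCE pattern described above.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_TIMERS 4            /* e.g. SPI_TIMER, SPI_TIMEOUT, ... */

static uint16_t now;                   /* free-running tick count       */
static uint16_t deadline[NUM_TIMERS];  /* per-timer expiry tick         */

/* Called from the tick interrupt (or once per big-loop pass). */
void tick(void)
{
    now++;
}

/* Arm timer t to expire 'dur' ticks from now. */
void load_timer(int t, uint16_t dur)
{
    deadline[t] = (uint16_t)(now + dur);
}

/* True once timer t has expired.  The signed-difference test keeps
 * this correct across counter wraparound, for durations < 32768. */
bool check_timer(int t)
{
    return (int16_t)(now - deadline[t]) >= 0;
}
```

The whole mechanism costs two bytes of RAM per timer plus the shared
tick counter, which is in keeping with the resource budget discussed
later in the thread.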
On 29.06.2016 00:00, Dimiter_Popoff wrote:
> On 28.6.2016 г. 17:40, Tim Wescott wrote:
>> "Super loop" seems closest, but it seems to more capture the notion of
>> _always_ executing A, B, C, rather than executing only those bits that
>> are ready.
> I encounter the "super loop" term for the first time here, but why not.
Which may be the foundation for another name for this which I remember having seen (possibly here, but can't remember when or from whom): "OBL", as in "One Big Loop". Which is right on the money, I think.
> but having a true - preemptive, allowing cooperative operation -
> scheduler costs very little in both machine resources and effort to
> put together so I see no point in trying to avoid it.
Well, it does increase resource usage in at least one aspect, which can quickly become significant: memory, to store the state of currently preempted / waiting tasks. And because you'll have to swap between task states, including call stack and CPU register contents, that usually means the scheduler itself can't be written in the high-level language of choice.
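For concreteness, the "One Big Loop" being named above is nothing more
than this skeleton (task names are placeholders; on real hardware the
loop would be `for (;;)` rather than a counted number of passes):

```c
/* "OBL" / super-loop skeleton: an endless round-robin of
 * run-to-completion task functions, each of which returns immediately
 * when it has nothing to do. */

static int work_done;                 /* just so the sketch is testable */

static void task_a(void) { work_done++; }     /* placeholder tasks ...  */
static void task_b(void) { /* returns at once if idle */ }
static void task_c(void) { /* ditto */ }

/* One pass of the big loop.  On target this would simply be:
 *     for (;;) one_pass();
 */
void one_pass(void)
{
    task_a();
    task_b();
    task_c();
}
```

No scheduler, no per-task stacks: the only state each task carries
between passes is whatever static variables it keeps for itself.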
Dimiter_Popoff wrote:

> On 28.6.2016 г. 17:40, Tim Wescott wrote:
>> On Tue, 28 Jun 2016 06:20:43 +0300, Dimiter_Popoff wrote:
>
> [...]
>
> I encounter the "super loop" term for the first time here, but why not.
> Although I see nothing "super" about it, to me this is still an endless
> loop of calls to subroutines either of which may opt to do something
> or to just return if it has nothing to do.
> But like I said earlier I have almost never used this approach, it
> smells of oversimplifying - thus costing more than a decent scheduler
> does.  Implement loops, state machines etc. as you like within tasks
> but having a true - preemptive, allowing cooperative operation -
> scheduler costs very little in both machine resources and effort to
> put together so I see no point in trying to avoid it.
I think you're vastly underestimating the advantages of non-preemption.
You get rid entirely of the need for mutexes and resource locking; all
your accesses become atomic because you can't be interrupted.

Now, once you start adding interrupts in it can get hairier, because
you've just reintroduced preemption.  But for projects where you have
serious constraints on code, RAM, or power, something that doesn't need
a scheduler is a real win.

--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order.  See above to fix.
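The atomicity point above can be shown in a few lines.  A sketch with
invented names: two cooperative tasks share a counter with no lock,
which is safe precisely because each runs to completion before the
other can touch the variable.

```c
#include <stdint.h>

/* Shared between producer and consumer.  In a preemptive system this
 * read-modify-write would need a mutex or an atomic type; here each
 * task runs to completion, so nothing can interleave with it. */
static uint16_t samples_queued;

void producer_task(void)
{
    samples_queued++;            /* atomic by construction */
}

void consumer_task(void)
{
    if (samples_queued > 0)
        samples_queued--;        /* likewise safe without locking */
}
```

As Rob notes, this guarantee evaporates the moment an ISR also writes
`samples_queued`; at that point the variable needs `volatile` and some
form of interrupt masking or atomic access around the shared updates.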
Don Y wrote:

> On 6/28/2016 2:09 PM, Rob Gaddi wrote:
>>> Said another way, is "state" the equivalent of (and only means of
>>> representing) the "virtual program counter" for 'A'?
>>
>> Yup.  Each invocation of A runs through to completion without blocking;
>> if it's waiting for I/O then you're in
>>
>>   case SPI_WAIT:
>>     if (!spi_finished) break;
>
> So, if you were bit-banging the SPI interface, you'd have to
> keep the 'substate' (of "SPI_WAIT") in a "spi_clock_counter"
> (so you could issue *a* clock, then relinquish the processor for
> "yet another big loop iteration" while remaining in the SPI_WAIT
> state) *or* issue the individual clocks "in-line" and ensure they
> actually make it onto the I/O pins sequentially with the
> required time between them (determined by the device).
On a processor small enough that I wouldn't consider even a cooperative
OS, you generally bit-bang SPI by flipping the port pins as fast as you
can and finding that it's still not all that fast.  No reason to
multitask in the middle of it.
> In my case, this would be (*inline* with whatever was issuing the SPI):
>
> [code snipped]
>
> I.e., you can run code while waiting (instead of being BLOCKED in
> the wait).
>
> [Of course, this doesn't make sense for an SPI interface; OTOH, it can
> be helpful for implementing "STUCK KEY" timers for keypads ALONGSIDE
> the "DEBOUNCE" timer!  In the real world, this seems to happen
> quite a lot: starting a motor and waiting for it to hit a limit
> switch -- yet KNOWING that it can't run "forever" unless the switch
> or the mechanism are broken!]
And at that point you've got a non-preemptive OS where you explicitly
yield the processor in each task.  It's fine as far as it goes, but
you've just introduced all the issues of managing multiple stacks,
having a scheduler that keeps track of "tasks", etc.  If you've got the
RAM for it, great, but I wouldn't want to try it on 1KB.

--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order.  See above to fix.
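The "flip the pins as fast as you can" style Rob describes earlier is
worth seeing in full.  A sketch of bit-banged SPI mode 0, MSB first;
`set_sck`/`set_mosi`/`get_miso` are hypothetical pin accessors (here
they talk to plain variables so the logic can be exercised off-target,
but on hardware they would poke GPIO registers).

```c
#include <stdint.h>

/* Stand-ins for GPIO pins -- invented for the example. */
static int sck, mosi, miso;

static void set_sck(int v)  { sck  = v; }
static void set_mosi(int v) { mosi = v; }
static int  get_miso(void)  { return miso; }

/* Shift one byte out (and one byte in), as fast as the loop runs.
 * No yielding, no timers: the transfer completes in one call. */
uint8_t spi_xfer(uint8_t out)
{
    uint8_t in = 0;

    for (int bit = 7; bit >= 0; bit--) {
        set_mosi((out >> bit) & 1);   /* present the next data bit    */
        set_sck(1);                   /* rising edge: slave samples   */
        in = (uint8_t)((in << 1) | (get_miso() & 1));
        set_sck(0);                   /* falling edge: slave shifts   */
    }
    return in;
}
```

On a small micro the loop overhead alone usually keeps SCK well within
any slave's timing limits, which is why no explicit delays appear
between the edges.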
On Wed, 29 Jun 2016 01:00:02 +0300, Dimiter_Popoff wrote:

> On 28.6.2016 г. 17:40, Tim Wescott wrote:
>> On Tue, 28 Jun 2016 06:20:43 +0300, Dimiter_Popoff wrote:
>
> [...]
>
> I encounter the "super loop" term for the first time here, but why not.
> Although I see nothing "super" about it, to me this is still an endless
> loop of calls to subroutines either of which may opt to do something or
> to just return if it has nothing to do.
> But like I said earlier I have almost never used this approach, it
> smells of oversimplifying - thus costing more than a decent scheduler
> does.  Implement loops, state machines etc. as you like within tasks but
> having a true - preemptive, allowing cooperative operation - scheduler
> costs very little in both machine resources and effort to put together
> so I see no point in trying to avoid it.
The "scheduler" overhead is about 10x lower with this style of
switching.  If you've got a bunch of similar-run-length tasks this
method works dandy.

Of course, if you've got a bunch of fast tasks and even one slow one
(reciting the Gettysburg address to a human, for instance) then a
preemptive scheduler gets very attractive.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
Hi Dimiter,

On 6/28/2016 3:00 PM, Dimiter_Popoff wrote:
> I encounter the "super loop" term for the first time here, but why not.
>
> [...]
>
> but having a true - preemptive, allowing cooperative operation -
> scheduler costs very little in both machine resources and effort to
> put together so I see no point in trying to avoid it.
I don't think you're understanding the value of "lean" approaches when
you are operating in resource-starved implementations.  Imagine having
a few HUNDRED bytes of RAM, *total*, in your system.  How much of this
do you "divert" to implementing a formal scheduler?  How much do you
devote to preserving the state(s) of independent tasks?

One of my earliest products was a LORAN-C position plotter.  It
received LORAN coordinates (time-difference pairs) from an external
LORAN (radio) receiver.  These are ~6 (decimal) digit values that
represent the differences, in time, between radio waves being received
from three geographically-fixed transmitters (a master and a pair of
slaves, the slaves synchronized to the master).

<https://en.wikipedia.org/wiki/LORAN#Operation>

not a very good explanation -- but close enough.

<https://en.wikipedia.org/wiki/Loran-C#Principle>

As they are *differences*, they present a hyperbolic coordinate system
(back to Conic Sections 101  :> ).  Based on knowledge of where these
transmitters are located on the globe (latitude and longitude!), you
project the hyperbolic coordinate system onto the spherical coordinate
system of lat-lon (as used in navigation).  Then, correct for the fact
that the Earth is an OBLATE sphere (not really *round*).  Finally, map
these onto a Mercator map projection (the precisely scaled sheet of
paper -- aka MAP -- in your plotter on which your pen will be drawing!)
and drive the pen to the "current position" from its previous position.

While the displays, keypad, stepping motor interfaces, etc. can all use
nice little integers, all of this navigational math has to be done
using floats.  Each "coordinate" (whether in the hyperbolic,
spherical/oblate, or Mercator coordinate system) is thus a PAIR of
floats.

With 256 bytes of RAM to play with (remember the pushdown stack and
EVERY RAM consumer), you really don't have spare *bits*, let alone
*bytes*, to devote to any sort of formal scheduling framework.
E.g., the only time I bother with something as crude as (my version of)
the "big loop" is when I am operating in these severely constrained
environments.  My implementation costs me EXACTLY *two* bytes per task.
Yet, it allows me to still write code AS IF I had the services and
framework of a "real" scheduler.

I.e., I don't force tasks to watch a hardware counter/timer to do their
*own* timekeeping; I don't force a task to rely on exclusive access to
a physical resource (e.g., N tasks can all "use the UART" instead of
restricting access to just one set aside for that specific purpose); I
don't prevent tasks from blocking WHEREVER IT IS CONVENIENT FOR THEM;
etc.  So, each task can truly focus on its own needs without having to
address the concerns of other tasks (except for any data/events that it
must "source" to those tasks).

I think your current product offerings are REALLY "flush" with
resources, by comparison.  E.g., you (and I) wouldn't think twice about
using an "int" (32b!) as a "bool".  By contrast, the plotter mentioned
above packed *8* bools into a byte -- and, at some times, might use
that very same byte as a *counter*, etc., depending on what was
happening in the product at that time!
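The eight-bools-in-a-byte trick is standard bit-masking.  A minimal
sketch; the flag names are invented for illustration (the plotter's
actual flags are not described in the post).

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical flag positions -- up to 8 fit in the one byte. */
enum {
    FLAG_PEN_DOWN  = 0,
    FLAG_FIX_VALID = 1,
    FLAG_LOW_BATT  = 2
    /* ... bits 3..7 free */
};

static uint8_t flags;   /* eight bools in one byte of precious RAM */

static void set_flag(int f)   { flags |=  (uint8_t)(1u << f); }
static void clear_flag(int f) { flags &= (uint8_t)~(1u << f); }
static bool test_flag(int f)  { return (flags >> f) & 1u; }
```

Reusing the same byte as a counter in another mode, as the post
describes, is then just a matter of the code agreeing on which meaning
the byte carries at any given time -- cheap on RAM, but something the
programmer, not the compiler, has to keep straight.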