Reply by Tim Wescott July 26, 2015
On Sun, 26 Jul 2015 09:27:27 -0500, Les Cargill wrote:

> Tim Wescott wrote: >> On Fri, 24 Jul 2015 20:52:51 -0500, Les Cargill wrote: >> >>> Tim Wescott wrote: >>>> On Fri, 24 Jul 2015 04:12:23 -0700, Robert Willy wrote: >>>> >>>>> Hi, >>>>> >>>>> I read an online tutorial on RTOS, see below dot line please. >>>>> I am not clear about what is for the word 'queue' in the first line. >>>>> Does it mean a task queue? >>>>> >>>>> >>>>> Thanks in advance. >>>>> .................... >>>>> The use of a queue allows the medium priority task to block until an >>>>> event causes data to be available - and then immediately jump to the >>>>> relevant >>>>> function to handle the event. This prevents wasted processor >>>>> cycles - >>>>> in contrast to the infinite loop implementation whereby an event >>>>> will only be processed once the loop cycles to the appropriate >>>>> handler. >>>> >>>> I'm not sure what you mean by "task queue". >>>> >>>> Generally, RTOS implementations don't bind a queue to any specific >>>> task. Rather, the developer does so: anything can put messages into a >>>> queue, anything can take messages off of a queue, and anything can >>>> pend on a queue. If the developer is wise, only one task takes >>>> things off the queue, and there is a well-defined, small number (one >>>> is best) of sources that put things on the queue. >>>> >>>> You arrange things so that the task that depends on the queue needs >>>> to run if and only if there's a message on the queue, and you have >>>> the task pend on the queue having a message available. >>>> >>>> >>> One common method of referring to task/thread eligibility to run is to >>> have a "ready queue" or a "waiting queue". This may not actually be a >>> queue; it can be nothing more than a state in the set of task control >>> blocks. 
>>> http://www.qnx.com/developers/docs/660/index.jsp?topic=%2Fcom.qnx.doc.neutrino.prog%2Ftopic%2Foverview_Ready_queue.html
>>
>> That's not what the OP is talking about,
>
> Then I have no hope of finding what he's talking about.
>
> 1) "The use of a queue allows the medium priority task to block until an
> event causes data to be available" - mention of blocking, which is very
> much a "waiting queue"/"ready queue" sort of thing, although you have to
> wonder why medium priority matters.
>
> The big ambiguity is whether or not the queue is a data source/buffer or
> simply a wait/block structure.
>
> 2) The compare/contrast with The Big Loop.
>
> wait/ready queuing is as close as I can get with that mess.
The OP refers to it in another sub-thread: it's the FreeRTOS "queue"
entity, which is a typical RTOS queue that you stuff messages into from a
source, and block on pending message availability in some task.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
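Tim's description can be illustrated with a minimal, self-contained sketch of
such a message FIFO in plain C. To be clear, this is *not* the FreeRTOS
implementation: the msgq_* names are invented for illustration, and a real
xQueueReceive() would suspend the calling task on an empty queue, where this
sketch merely reports failure so it stays self-contained.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical single-producer/single-consumer message FIFO.
 * In a real RTOS the "receive" call would block (pend) the task;
 * here an empty queue just reports failure. */
#define MSGQ_DEPTH 8

typedef struct {
    int buf[MSGQ_DEPTH];
    unsigned head;   /* next slot to read */
    unsigned tail;   /* next slot to write */
    unsigned count;  /* messages currently queued */
} msgq_t;

static void msgq_init(msgq_t *q) { q->head = q->tail = q->count = 0; }

static bool msgq_send(msgq_t *q, int msg)
{
    if (q->count == MSGQ_DEPTH)
        return false;                     /* queue full: sender would block here */
    q->buf[q->tail] = msg;
    q->tail = (q->tail + 1) % MSGQ_DEPTH;
    q->count++;
    return true;
}

static bool msgq_receive(msgq_t *q, int *msg)
{
    if (q->count == 0)
        return false;                     /* queue empty: receiver would pend here */
    *msg = q->buf[q->head];
    q->head = (q->head + 1) % MSGQ_DEPTH;
    q->count--;
    return true;
}
```

Following Tim's advice, one task would call msgq_receive() and a small,
well-defined set of sources (ideally one) would call msgq_send().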
Reply by Don Y July 26, 2015
On 7/26/2015 7:27 AM, Les Cargill wrote:
> Tim Wescott wrote: >> On Fri, 24 Jul 2015 20:52:51 -0500, Les Cargill wrote: >> >>> Tim Wescott wrote: >>>> On Fri, 24 Jul 2015 04:12:23 -0700, Robert Willy wrote: >>>> >>>>> Hi, >>>>> >>>>> I read an online tutorial on RTOS, see below dot line please. >>>>> I am not clear about what is for the word 'queue' in the first line. >>>>> Does it mean a task queue? >>>>> >>>>> >>>>> Thanks in advance. >>>>> .................... >>>>> The use of a queue allows the medium priority task to block until an >>>>> event causes data to be available - and then immediately jump to the >>>>> relevant >>>>> function to handle the event. This prevents wasted processor cycles >>>>> - >>>>> in contrast to the infinite loop implementation whereby an event >>>>> will only be processed once the loop cycles to the appropriate >>>>> handler. >>>> >>>> I'm not sure what you mean by "task queue". >>>> >>>> Generally, RTOS implementations don't bind a queue to any specific >>>> task. Rather, the developer does so: anything can put messages into a >>>> queue, anything can take messages off of a queue, and anything can pend >>>> on a queue. If the developer is wise, only one task takes things off >>>> the queue, and there is a well-defined, small number (one is best) of >>>> sources that put things on the queue. >>>> >>>> You arrange things so that the task that depends on the queue needs to >>>> run if and only if there's a message on the queue, and you have the >>>> task pend on the queue having a message available. >>>> >>>> >>> One common method of referring to task/thread eligibility to run is to >>> have a "ready queue" or a "waiting queue". This may not actually be a >>> queue; it can be nothing more than a state in the set of task control >>> blocks. >>> >>> Vhttp://www.qnx.com/developers/docs/660/index.jsp?topic=% >> 2Fcom.qnx.doc.neutrino.prog%2Ftopic%2Foverview_Ready_queue.html >> >> That's not what the OP is talking about, > > Then I have no hope of finding what he's talking about.
The OP mistakenly called it a "task queue".  It's actually a "data FIFO"
supported as a first-class object by the OS.  As such, a *task* can pend
on it "efficiently" (more so than spinning on it!)
> 1) "The use of a queue allows the medium priority task to block until > an event causes data to be available" - mention of blocking, which is > very much a "waiting queue"/"ready queue" sort of thing, although > you have to wonder why medium priority matters.
Read the cited example.
> The big ambiguity is whether or not the queue is a data source/buffer > or simply a wait/block structure. > > 2) The compare/contrast with The Big Loop. > > wait/ready queuing is as close as I can get with that mess.
Read the examples *preceding* the cited example. :>
>> but yes, I had forgotten that
>> terminology (please, please do not ask me why).
>
> :)
Reply by Les Cargill July 26, 2015
Tim Wescott wrote:
> On Fri, 24 Jul 2015 20:52:51 -0500, Les Cargill wrote: > >> Tim Wescott wrote: >>> On Fri, 24 Jul 2015 04:12:23 -0700, Robert Willy wrote: >>> >>>> Hi, >>>> >>>> I read an online tutorial on RTOS, see below dot line please. >>>> I am not clear about what is for the word 'queue' in the first line. >>>> Does it mean a task queue? >>>> >>>> >>>> Thanks in advance. >>>> .................... >>>> The use of a queue allows the medium priority task to block until an >>>> event causes data to be available - and then immediately jump to the >>>> relevant >>>> function to handle the event. This prevents wasted processor cycles >>>> - >>>> in contrast to the infinite loop implementation whereby an event >>>> will only be processed once the loop cycles to the appropriate >>>> handler. >>> >>> I'm not sure what you mean by "task queue". >>> >>> Generally, RTOS implementations don't bind a queue to any specific >>> task. Rather, the developer does so: anything can put messages into a >>> queue, anything can take messages off of a queue, and anything can pend >>> on a queue. If the developer is wise, only one task takes things off >>> the queue, and there is a well-defined, small number (one is best) of >>> sources that put things on the queue. >>> >>> You arrange things so that the task that depends on the queue needs to >>> run if and only if there's a message on the queue, and you have the >>> task pend on the queue having a message available. >>> >>> >> One common method of referring to task/thread eligibility to run is to >> have a "ready queue" or a "waiting queue". This may not actually be a >> queue; it can be nothing more than a state in the set of task control >> blocks. >> >> Vhttp://www.qnx.com/developers/docs/660/index.jsp?topic=% > 2Fcom.qnx.doc.neutrino.prog%2Ftopic%2Foverview_Ready_queue.html > > That's not what the OP is talking about,
Then I have no hope of finding what he's talking about.

1) "The use of a queue allows the medium priority task to block until an
event causes data to be available" - mention of blocking, which is very
much a "waiting queue"/"ready queue" sort of thing, although you have to
wonder why medium priority matters.

The big ambiguity is whether or not the queue is a data source/buffer or
simply a wait/block structure.

2) The compare/contrast with The Big Loop.

wait/ready queuing is as close as I can get with that mess.

> but yes, I had forgotten that
> terminology (please, please do not ask me why).
:)

--
Les Cargill
Reply by July 25, 2015
On Fri, 24 Jul 2015 13:59:07 -0500, Tim Wescott
<seemywebsite@myfooter.really> wrote:

>On Fri, 24 Jul 2015 04:12:23 -0700, Robert Willy wrote: > >> Hi, >> >> I read an online tutorial on RTOS, see below dot line please. >> I am not clear about what is for the word 'queue' in the first line. >> Does it mean a task queue? >> >> >> Thanks in advance. >> .................... >> The use of a queue allows the medium priority task to block until an >> event causes data to be available - and then immediately jump to the >> relevant >> function to handle the event. This prevents wasted processor cycles - >> in contrast to the infinite loop implementation whereby an event will >> only be processed once the loop cycles to the appropriate handler. > >I'm not sure what you mean by "task queue". > >Generally, RTOS implementations don't bind a queue to any specific task. >Rather, the developer does so: anything can put messages into a queue, >anything can take messages off of a queue, and anything can pend on a >queue. If the developer is wise, only one task takes things off the >queue, and there is a well-defined, small number (one is best) of sources >that put things on the queue. > >You arrange things so that the task that depends on the queue needs to >run if and only if there's a message on the queue, and you have the task >pend on the queue having a message available.
In a typical RT system, most (and sometimes all) tasks are in a
Wait_For_XX state. When a significant event occurs, the task scheduler
is restarted, scanning the task list in priority order to find the
first (highest priority) task that has become Runnable due to the
significant event; it saves the context of the old task and starts to
run that new task.

A significant event could be e.g.

* Completion of a clock interrupt
* Completion of some other interrupt, e.g. serial line
* Writing data to some queue
* Setting some event flag (single bit messages)
* The currently Running task goes to sleep

Of course the last three actions must go through some OS routines to do
the actual operation and then kick the scheduler to search for a task
that might have become Runnable due to the event.

If all tasks are in some Wait_For_.. state, the scheduler falls through
to the NULL task loop, which should preferably be implemented with some
low power consumption WaitForInterrupt instruction.
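The scan-in-priority-order step described above can be sketched as follows.
This is a toy model: the field names are invented, and real schedulers keep
ready tasks in a sorted structure rather than rescanning a flat list.

```c
#include <assert.h>
#include <stddef.h>
#include <stdbool.h>

/* Toy task control block: lower 'priority' value = more urgent.
 * 'runnable' becomes true once the event the task waits for occurs. */
typedef struct {
    const char *name;
    int         priority;
    bool        runnable;
} tcb_t;

/* Scan the task list; return the highest-priority Runnable task, or
 * NULL, meaning: fall through to the NULL task (ideally a low-power
 * wait-for-interrupt loop). */
static const tcb_t *schedule(const tcb_t *tasks, size_t n)
{
    const tcb_t *best = NULL;
    for (size_t i = 0; i < n; i++) {
        if (tasks[i].runnable &&
            (best == NULL || tasks[i].priority < best->priority))
            best = &tasks[i];
    }
    return best;
}
```

A significant event (an interrupt completing, a queue write, an event flag
being set) would mark one or more tasks runnable and then re-run schedule().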
Reply by Tim Wescott July 25, 2015
On Fri, 24 Jul 2015 20:52:51 -0500, Les Cargill wrote:

> Tim Wescott wrote: >> On Fri, 24 Jul 2015 04:12:23 -0700, Robert Willy wrote: >> >>> Hi, >>> >>> I read an online tutorial on RTOS, see below dot line please. >>> I am not clear about what is for the word 'queue' in the first line. >>> Does it mean a task queue? >>> >>> >>> Thanks in advance. >>> .................... >>> The use of a queue allows the medium priority task to block until an >>> event causes data to be available - and then immediately jump to the >>> relevant >>> function to handle the event. This prevents wasted processor cycles >>> - >>> in contrast to the infinite loop implementation whereby an event >>> will only be processed once the loop cycles to the appropriate >>> handler. >> >> I'm not sure what you mean by "task queue". >> >> Generally, RTOS implementations don't bind a queue to any specific >> task. Rather, the developer does so: anything can put messages into a >> queue, anything can take messages off of a queue, and anything can pend >> on a queue. If the developer is wise, only one task takes things off >> the queue, and there is a well-defined, small number (one is best) of >> sources that put things on the queue. >> >> You arrange things so that the task that depends on the queue needs to >> run if and only if there's a message on the queue, and you have the >> task pend on the queue having a message available. >> >> > One common method of referring to task/thread eligibility to run is to > have a "ready queue" or a "waiting queue". This may not actually be a > queue; it can be nothing more than a state in the set of task control > blocks. > > Vhttp://www.qnx.com/developers/docs/660/index.jsp?topic=%
2Fcom.qnx.doc.neutrino.prog%2Ftopic%2Foverview_Ready_queue.html

That's not what the OP is talking about, but yes, I had forgotten that
terminology (please, please do not ask me why).

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
Reply by Les Cargill July 24, 2015
Tim Wescott wrote:
> On Fri, 24 Jul 2015 04:12:23 -0700, Robert Willy wrote: > >> Hi, >> >> I read an online tutorial on RTOS, see below dot line please. >> I am not clear about what is for the word 'queue' in the first line. >> Does it mean a task queue? >> >> >> Thanks in advance. >> .................... >> The use of a queue allows the medium priority task to block until an >> event causes data to be available - and then immediately jump to the >> relevant >> function to handle the event. This prevents wasted processor cycles - >> in contrast to the infinite loop implementation whereby an event will >> only be processed once the loop cycles to the appropriate handler. > > I'm not sure what you mean by "task queue". > > Generally, RTOS implementations don't bind a queue to any specific task. > Rather, the developer does so: anything can put messages into a queue, > anything can take messages off of a queue, and anything can pend on a > queue. If the developer is wise, only one task takes things off the > queue, and there is a well-defined, small number (one is best) of sources > that put things on the queue. > > You arrange things so that the task that depends on the queue needs to > run if and only if there's a message on the queue, and you have the task > pend on the queue having a message available. >
One common method of referring to task/thread eligibility to run is to
have a "ready queue" or a "waiting queue". This may not actually be a
queue; it can be nothing more than a state in the set of task control
blocks.

http://www.qnx.com/developers/docs/660/index.jsp?topic=%2Fcom.qnx.doc.neutrino.prog%2Ftopic%2Foverview_Ready_queue.html

--
Les Cargill
Reply by Don Y July 24, 2015
On 7/24/2015 2:03 PM, Robert Willy wrote:
> On Friday, July 24, 2015 at 12:34:29 PM UTC-7, Don Y wrote: >> On 7/24/2015 4:12 AM, Robert Willy wrote:
>>> The use of a queue allows the medium priority task to block until an event >>> causes data to be available - and then immediately jump to the relevant >>> function to handle the event. This prevents wasted processor cycles - in >>> contrast to the infinite loop implementation whereby an event will only be >>> processed once the loop cycles to the appropriate handler. >> >> So, the queue allows <something> to wait -- in an ordered fashion >> (i.e., "I got here first! The rest of you will have to ") -- until >> <whatever> ("data" in this case) is available. >> >> [The "infinite loop implementation" has to refer to the archaic >> approach of "one big loop" that repeatedly tries to check for >> everything and anything that might be able to "proceed" in >> its computation.] >> >> It appears this P is trying to espouse the advantages of queuing >> on an event/resource over that of repeatedly *polling* for that >> resource/event.
>> Does this make sense in the context of your INTENDED question? > > Excuse me not giving full information about my question. The link for > the except tutorial is from: > > http://www.freertos.org/tutorial/solution3.html > > below title: Concept of Operation > > I have thought about it, but no answer is satisfying to me. > Thank all of you for the explanation.
From the cited link (comments from my previous post interspersed):

-----8<-----8<-----
The medium Priority Task

The medium priority task can be represented by the following pseudo code.

#define DELAY_PERIOD 4
#define FLASH_RATE 1000

void MediumPriorityTask( void *pvParameters )
{
xQueueItem Data;
TickType_t FlashTime;

    InitialiseQueue();
    FlashTime = xTaskGetTickCount();

    for( ;; )
    {
        do
        {
            // A
            if( xQueueReceive( xCommsQueue, &Data, DELAY_PERIOD ) )
            {

-- Wanna bet xQueueReceive() is defined as taking a pointer to a queue
-- on which to *pend* (block) awaiting messages, a pointer to a place to copy
-- any received data EXTRACTED from that queue (when it eventually arrives)
-- and a *timeout* so the function doesn't block indefinitely??  As I said,
-- previously:

>> So, as an example, instead of checking a UART (directly *or* a FIFO/buffer
>> that the UART ISR maintains) for "available received data" which you can
>> then "process", a more elegant/efficient approach is to tell the OS
>> that you are waiting for data to be available.
>>
>> The OS then suspends your task (marks it as not ready to run so it
>> no longer consumes CPU cycles... that would be wasted repeatedly
>> checking for data that is NOT YET AVAILABLE) *at* the point where you
>> invoked "wait_for_data/event".  I.e., the subroutine/function "doesn't
>> RETURN" until the condition is satisfied!
>>
>> To the programmer, this makes life easy: the OS does the "checking for
>> available data" ON BEHALF OF the task that requires it.

-- I think what you are missing is that xQueueReceive() can "hang" indefinitely
-- waiting for the arrival of data in that queue (or, at least until the
-- timeout expires, FORCING it to return).  During the time while it is
-- "hung", other tasks are using the processor.  *This* task isn't
-- wasting any CPU cycles doing something silly like:
--     if (!data_available) {
--         reschedule();   // i.e., yield CPU to other tasks
--     } else {
--         get_data();
-- which is what it would do in the "big loop" approach.

                ProcessRS232Characters( Data.Value );

-- Having successfully returned from xQueueReceive() (an UNsuccessful return
-- would be one where some parameter was in error or the timeout expired before
-- data was available in the queue), the code now processes the data extracted
-- from the queue (i.e., made available by xQueueReceive() in the
-- buffer/variable referenced in the xQueueReceive() invocation ("&Data")

            }
            // B
        } while ( uxQueueMessagesWaiting( xCommsQueue ) );

        // C
        if( ScanKeypad() )
        {
            UpdateLCD();
        }

        // D
        if( ( xTaskGetTickCount() - FlashTime ) >= FLASH_RATE )
        {
            FlashTime = xTaskGetTickCount();
            UpdateLED();
        }
    }

    return 0;
}

Referring to the labels within the code fragment above:

A: The task first blocks waiting for a communications event.  The block
time is relatively short.

B: The do-while loop executes until no data remains in the queue.  This
implementation would have to be modified if data arrives too quickly for
the queue to ever be completely empty.

C: Either the queue has been emptied of all data, or no data arrived
within the specified blocking period.  The maximum time that can be
spent blocked waiting for data is short enough to ensure the keypad is
scanned frequently enough to meet the specified timing constraints.

D: Check to see if it is time to flash the LED.  There will be some
jitter in the frequency at which this line executes, but the LED timing
requirements are flexible enough to be met by this implementation.
-----8<-----8<-----

[IMO, flashing the LED here is a sloppy way of doing it.  Instead,
signal some other task whose sole job is to flash the LED.]
Reply by Robert Willy July 24, 2015
On Friday, July 24, 2015 at 12:34:29 PM UTC-7, Don Y wrote:
> On 7/24/2015 4:12 AM, Robert Willy wrote: > > Hi, > > > > I read an online tutorial on RTOS, see below dot line please. > > I am not clear about what is for the word 'queue' in the first line. > > Does it mean a task queue? > > (sigh) How NOT to ask questions! :( You've taken this entirely > out of context and expect folks to GUESS (or, invest effort to > try to guess) that surrounding context. > > "Online tutorial"? Hmm.... perhaps you could have provided a > pointer (URL) to that tutorial so folks would be able to *see* > the context that you've omitted?? > > <frown> > > Be that as it may... > > > The use of a queue allows the medium priority task to block until an event > > causes data to be available - and then immediately jump to the relevant > > function to handle the event. This prevents wasted processor cycles - in > > contrast to the infinite loop implementation whereby an event will only be > > processed once the loop cycles to the appropriate handler. > > So, the queue allows <something> to wait -- in an ordered fashion > (i.e., "I got here first! The rest of you will have to ") -- until > <whatever> ("data" in this case) is available. > > [The "infinite loop implementation" has to refer to the archaic > approach of "one big loop" that repeatedly tries to check for > everything and anything that might be able to "proceed" in > its computation.] > > It appears this P is trying to espouse the advantages of queuing > on an event/resource over that of repeatedly *polling* for that > resource/event. > > So, as an example, instead of checking a UART (directly *or* a FIFO/buffer > that the UART ISR maintains) for "available received data" which you can > then "process", a more elegant/efficient approach is to tell the OS > that you are waiting for data to be available. > > The OS then suspends your task (marks it as not ready to run so it > no longer consumes CPU cycles... 
that would be wasted repeatedly > checking for data that is NOT YET AVAILABLE) *at* the point where you > invoked "wait_for_data/event". I.e., the subroutine/function "doesn't > RETURN" until the condition is satisfied! > > To the programmer, this makes life easy: the OS does the "checking for > available data" ON BEHALF OF the task that requires it. > > It also allows a definite ordering of "consumers" to be imposed. > It may be something as simple as "first come, first served" for > THAT particular resource. Or, "most important goes first" for > some other type of resource/event. > > E.g., a task that monitors the charging of a battery probably is > concerned with knowing the status of primary power -- the > charger only works when power *is* available! So, a power > fail event would be of interest to that charger task -- at > the very least, it would be able to update its estimate of > when charging will be COMPLETE to reflect "never"! :> > > OTOH, another task that is responsible for copying key configuration > parameters from (volatile!) RAM into FLASH/NVRAM would probably be > MORE concerned with that event! It would want to be able to > ensure that activity is performed regardless of how long the > device can "stay up" after the event is signaled. > > A FIFO ordering might give the battery charger task first > crack at the event -- depending on the order of execution of > those two tasks (charger & NVRAM) -- even though it is > far less "important" to the operation of the device! > > Trying to do this prioritization in a "big loop" approach > means everything needs to know about everything else! I.e., > the battery charger can check for the power fail event... > but, if it happens to see it first, it needs to check to see > if the NVRAM task needs to respond to that instead or first! 
> > This leads to clumsy and brittle implementations -- because you > have to distribute and replicate operational decisions in > many places (information hiding being a win in most cases!) > > Does this make sense in the context of your INTENDED question?
Excuse me for not giving full information about my question. The link
for the excerpted tutorial is:

http://www.freertos.org/tutorial/solution3.html

below the title: Concept of Operation

I have thought about it, but no answer is satisfying to me.
Thank all of you for the explanation.
Reply by Don Y July 24, 2015
On 7/24/2015 4:12 AM, Robert Willy wrote:
> Hi, > > I read an online tutorial on RTOS, see below dot line please. > I am not clear about what is for the word 'queue' in the first line. > Does it mean a task queue?
(sigh)  How NOT to ask questions!  :(  You've taken this entirely
out of context and expect folks to GUESS (or, invest effort to
try to guess) that surrounding context.

"Online tutorial"?  Hmm.... perhaps you could have provided a
pointer (URL) to that tutorial so folks would be able to *see*
the context that you've omitted??

<frown>

Be that as it may...
> The use of a queue allows the medium priority task to block until an event > causes data to be available - and then immediately jump to the relevant > function to handle the event. This prevents wasted processor cycles - in > contrast to the infinite loop implementation whereby an event will only be > processed once the loop cycles to the appropriate handler.
So, the queue allows <something> to wait -- in an ordered fashion
(i.e., "I got here first! The rest of you will have to wait!") -- until
<whatever> ("data" in this case) is available.

[The "infinite loop implementation" has to refer to the archaic
approach of "one big loop" that repeatedly tries to check for
everything and anything that might be able to "proceed" in
its computation.]

It appears this P is trying to espouse the advantages of queuing
on an event/resource over that of repeatedly *polling* for that
resource/event.

So, as an example, instead of checking a UART (directly *or* a FIFO/buffer
that the UART ISR maintains) for "available received data" which you can
then "process", a more elegant/efficient approach is to tell the OS
that you are waiting for data to be available.

The OS then suspends your task (marks it as not ready to run so it
no longer consumes CPU cycles... that would be wasted repeatedly
checking for data that is NOT YET AVAILABLE) *at* the point where you
invoked "wait_for_data/event".  I.e., the subroutine/function "doesn't
RETURN" until the condition is satisfied!

To the programmer, this makes life easy: the OS does the "checking for
available data" ON BEHALF OF the task that requires it.

It also allows a definite ordering of "consumers" to be imposed.
It may be something as simple as "first come, first served" for
THAT particular resource.  Or, "most important goes first" for
some other type of resource/event.

E.g., a task that monitors the charging of a battery probably is
concerned with knowing the status of primary power -- the
charger only works when power *is* available!  So, a power
fail event would be of interest to that charger task -- at
the very least, it would be able to update its estimate of
when charging will be COMPLETE to reflect "never"!  :>

OTOH, another task that is responsible for copying key configuration
parameters from (volatile!) RAM into FLASH/NVRAM would probably be
MORE concerned with that event!  It would want to be able to
ensure that activity is performed regardless of how long the
device can "stay up" after the event is signaled.

A FIFO ordering might give the battery charger task first
crack at the event -- depending on the order of execution of
those two tasks (charger & NVRAM) -- even though it is
far less "important" to the operation of the device!

Trying to do this prioritization in a "big loop" approach
means everything needs to know about everything else!  I.e.,
the battery charger can check for the power fail event...
but, if it happens to see it first, it needs to check to see
if the NVRAM task needs to respond to that instead or first!

This leads to clumsy and brittle implementations -- because you
have to distribute and replicate operational decisions in
many places (information hiding being a win in most cases!)

Does this make sense in the context of your INTENDED question?
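Don Y's poll-versus-pend contrast can be made concrete with a toy count of
how many times each approach has to touch the CPU before the data is
consumed. All names here are invented; a real "pend" is a kernel call that
suspends the task, not a function returning a count, so treat this purely
as a back-of-the-envelope illustration.

```c
#include <assert.h>

/* Toy comparison of "big loop" polling vs. pending on an event.
 * 'data_ready_at' is the tick at which data arrives; each routine
 * returns how many times the task had to *check* before consuming it. */

static int poll_big_loop(int data_ready_at, int max_ticks)
{
    int checks = 0;
    for (int t = 0; t < max_ticks; t++) {
        checks++;                 /* poll on every trip around the loop */
        if (t >= data_ready_at)
            break;                /* data finally there: consume it */
    }
    return checks;
}

static int pend_on_event(int data_ready_at)
{
    /* A blocking wait_for_data() returns exactly once, when the OS has
     * already determined the data is available: a single "check".
     * Meanwhile the CPU ran other tasks (or slept). */
    (void)data_ready_at;
    return 1;
}
```

The wasted checks in the polling version are exactly the "wasted processor
cycles" the tutorial excerpt refers to.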
Reply by Grant Edwards July 24, 2015
On 2015-07-24, Tim Wescott <seemywebsite@myfooter.really> wrote:
> >> I read an online tutorial on RTOS, see below dot line please. >> I am not clear about what is for the word 'queue' in the first line. >> Does it mean a task queue? > >> Thanks in advance. >> ....................
>> The use of a queue allows the medium priority task to block until an >> event causes data to be available - and then immediately jump to the >> relevant function to handle the event. This prevents wasted processor >> cycles - in contrast to the infinite loop implementation whereby an >> event will only be processed once the loop cycles to the appropriate >> handler. > > I'm not sure what you mean by "task queue".
I think that in this case your explanation below is correct. In other contexts, "task queue" might refer to a data structure internal to an RTOS scheduler that defines an ordered set of the runnable tasks (the order could be determined strictly by task priority or by some other equitable-round-robin type scheme). It's a queue that contains tasks as its data elements.
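Grant's second sense of "task queue" -- a queue whose data elements are
tasks -- can be sketched as a small priority-ordered array. The identifiers
below are invented for illustration; real kernels more often keep one ready
list per priority level than a single sorted array.

```c
#include <assert.h>
#include <stddef.h>

/* Toy ready queue: elements are tasks, kept sorted by priority so the
 * scheduler just pops the head.  Lower 'priority' value = runs first. */
#define MAX_READY 8

typedef struct {
    int task_id;
    int priority;
} ready_entry_t;

typedef struct {
    ready_entry_t e[MAX_READY];
    size_t n;
} ready_queue_t;

/* Insert keeping the array sorted by priority.  The strict comparison
 * preserves FIFO order among equal priorities, giving a simple
 * round-robin among tasks at the same level. */
static void ready_push(ready_queue_t *q, int task_id, int priority)
{
    assert(q->n < MAX_READY);
    size_t i = q->n;
    while (i > 0 && q->e[i - 1].priority > priority) {
        q->e[i] = q->e[i - 1];
        i--;
    }
    q->e[i].task_id = task_id;
    q->e[i].priority = priority;
    q->n++;
}

/* Pop the highest-priority (head) task; returns its id. */
static int ready_pop(ready_queue_t *q)
{
    assert(q->n > 0);
    int id = q->e[0].task_id;
    for (size_t i = 1; i < q->n; i++)
        q->e[i - 1] = q->e[i];
    q->n--;
    return id;
}
```

This is exactly the structure the QNX "ready queue" documentation cited
earlier in the thread describes, and is distinct from the message queue
the OP's tutorial is talking about.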
> Generally, RTOS implementations don't bind a queue to any specific task. > Rather, the developer does so: anything can put messages into a queue, > anything can take messages off of a queue, and anything can pend on a > queue. If the developer is wise, only one task takes things off the > queue, and there is a well-defined, small number (one is best) of sources > that put things on the queue. > > You arrange things so that the task that depends on the queue needs to > run if and only if there's a message on the queue, and you have the task > pend on the queue having a message available.
--
Grant Edwards               grant.b.edwards        Yow! Pardon me, but do you
                                  at               know what it means to be
                               gmail.com           TRULY ONE with your BOOTH!