EmbeddedRelated.com
Forums

From cooperative to preemptive scheduler: a real example

Started by pozz January 6, 2020
I noticed my previous post about preemptive OSes involved many people and
started many discussions, most of them theoretical.

Someone wrote that synchronizing tasks under a preemptive scheduler is not
so difficult, once you understand a few things. Others suggested abandoning
the preemptive scheduler altogether, considering its pitfalls.

Because I know my limits, I don't think I can produce a well-written
preemptive system. However, I'd like to understand a little more about
them, starting from an example.

Suppose my system is a display where a message is written. The message
can be customized over a serial line. With a cooperative approach, I would
write something like:

--- main.c ---
...
while(1) {
   task_display();
   task_serial();
}
--- end of main.c ---

--- display.c ---
static char msg[32];
void display_set_message(const char *new_msg) {
   strncpy(msg, new_msg, sizeof(msg) - 1);
   msg[sizeof(msg) - 1] = '\0';  /* strncpy may not terminate */
}
void task_display(void) {
   if (refresh_is_needed()) {
     display_printat(0, 0, msg);
   }
}
--- end of display.c ---

--- serial.c ---
static unsigned char rxbuf[64];
static size_t rxlen;
void task_serial(void)
{
   int b = serial_rx();  /* int, so EOF (-1) is distinguishable */
   if (b != EOF) {
     if (rxlen < sizeof(rxbuf))
       rxbuf[rxlen++] = (unsigned char)b;
     if (frame_is_complete(rxbuf, rxlen)) {
       char new_msg[32];
       /* decode new message from the frame received on the serial line */
       display_set_message(new_msg);
       rxlen = 0;
     }
   }
}
--- end of serial.c ---

The display needs to be refreshed. display_printat() is blocking: when it
returns, the whole display has been refreshed. So the display always shows
the entire message: there's no risk that the display shows part of the
previous message and part of the new one.

How would I convert these two tasks to a preemptive scheduler? Which
priority should I assign to each?

The simplest approach is...

--- display.c ---
static char msg[32];
void display_set_message(const char *new_msg) {
   strncpy(msg, new_msg, sizeof(msg) - 1);
   msg[sizeof(msg) - 1] = '\0';
}
void task_display(void) {
   while(1) {
     if (refresh_is_needed()) {
       display_printat(0, 0, msg);
     }
   }
}
--- end of display.c ---

--- serial.c ---
static unsigned char rxbuf[64];
static size_t rxlen;
void task_serial(void)
{
   while(1) {
     int b = serial_rx();
     if (b != EOF) {
       if (rxlen < sizeof(rxbuf))
         rxbuf[rxlen++] = (unsigned char)b;
       if (frame_is_complete(rxbuf, rxlen)) {
         char new_msg[32];
         /* decode new message from the frame received on the serial line */
         display_set_message(new_msg);
         rxlen = 0;
       }
     }
   }
}
--- end of serial.c ---

This code works most of the time, but the display can sometimes show a mix
of old and new messages. This happens if the display task is preempted
during a refresh by the serial task, which calls display_set_message(). Or
when display_set_message() is preempted by the display task and a refresh
occurs.

If I assigned a higher priority to the display task, the problem would
remain. Indeed, display_printat() couldn't be preempted, but
display_set_message() still could be.

Here the solution is to take a binary semaphore (used as a mutex) before
using the shared resource, and give the semaphore back after the job is
done.

void display_set_message(const char *new_msg) {
   semaphore_take_forever();
   strncpy(msg, new_msg, sizeof(msg));
   semaphore_give();
}

...
       if (frame_is_complete(rxbuf, rxlen)) {
         char new_msg[32];
         /* decode new message from received frame from serial line */
         semaphore_take_forever();
         display_set_message(new_msg);
         semaphore_give();
         rxlen = 0;
       }
...


My impression is that very simple code gets cluttered with synchronization
machinery that decreases readability and maintainability and increases
complexity. Why? Just to use preemption?

Again my impression is that preemption is NOT GOOD and must be avoided 
if it isn't required.

So the question is: when is a preemptive scheduler needed? Could you give a
real example?

From what I have understood, preemption can help meet real-time
requirements.

Suppose display_printat() takes too much time to finish. This increases the
worst-case superloop iteration time and could delay some system reaction.
For example, if display_printat() takes 1 second to finish, the system
could react 1 second after an event (the press of a button, for example).

If this isn't acceptable, preemption could help. Is that correct?
On 1/6/2020 6:08 PM, pozz wrote:

[ 8< ]

> My impression is that a very simple code is cluttered with synchronization
> things that decrease readability and maintainability and increase
> complexity. Why? Just to use preemption?
The "clutter" is introduced because your "problem" inherently involves conflict; you're allowing two competing uses for a single resource. The use of the synchronization primitive OVERTLY acknowledges this issue/possibility -- lest (subsequent) another developer fail to recognize that the possibility exists (i.e., "latent bug").
> Again my impression is that preemption is NOT GOOD and must be avoided
> if it isn't required.
"Multiplication is NOT GOOD and must be avoided if it isn't required (i.e., if you can use repeated ADDITIONs, instead)"
> So the question is: when a preemption scheduler is needed? Could you
> give a real example?
The "scheduler" is present in any multitasking system -- cooperative or preemptive. SOMETHING has to decide who to give the processor to when the currently executing task gives up control (or, has it removed from it) In your cooperative examples, the "while()" is used to implement the scheduler: when one task() "returns", the one listed on the next line (of the while loop) is given control... "scheduled" to run. Don't conflate "big loop" with "cooperative".
> From what I have understood, preemption could solve real-time requirement.
Preemption, like any capability, brings with it assets and liabilities.

Imagine you were tasked with building a box that blinked lights (XMAS
lights!) at different/varying rates.  The box has a dozen solid state
switches that control the individual lights (or, "light strands").

It would be really easy -- and intuitive -- to write:

   void lights1() {
      while(FOREVER) {
         light(1,ON);  sleep(500ms);
         light(1,OFF); sleep(279ms);
      }
   }

   void lights2() {
      while(FOREVER) {
         light(2,ON);  sleep(100ms);
         light(2,OFF); sleep(50ms);
      }
   }

   void lights3() {
      while(FOREVER) {
         ontime = (10.0 * rand() ) / RAND_MAX;
         light(3,ON);  sleep(ontime);
         light(3,OFF); sleep(10.0 - ontime);
      }
   }

etc.  No silly "yields" to get in the way.  No need for synchronization
primitives, either, because nothing is SHARED!

[Contrived example but you'll find that there are many cases of tasks
co-executing that are NOT sharing anything (other than the processor)]

There are other classes of problems where the problem lends itself,
naturally, to "peaceful" sharing -- where you're not in conflict with
another.  And, other techniques to hide the sharing mitigation in other
mechanisms.

Preemption lets you code AS IF you were the sole owner of the processor...
EXCEPT when you need to share something (which would imply that you are NOT
the sole owner -- at least at THAT time!  :> )

The downside to cooperative multitasking is that *it* clutters your code --
with all those yield()s -- and requires you to keep track of how "long"
you've hogged the CPU in the time since your last yield (because that time
gets reflected to all subsequent task runnings).

When I write code in a cooperative environment, I *litter* the code with
yield()s to keep *reaction* times (of other tasks) short.  This then means
yield() has to run like greased lightning lest it impact overall
performance (because it is pure overhead!)
On 2020-01-07 3:08, pozz wrote:
> I noticed my previous post about preemptive OS involved many people and
> started many discussions, most of them theoric.
>
> Someone wrote the synchronization of tasks in preemptive scheduler is
> not so difficult, after understanding some things.
I made some such statement.
> Others suggested to abandon at all preemptive scheduler, considering its
> pitfalls.
>
> Because I know my limits, I don't think I can produce a well-written
> preemption system. However I'd like to understand a little more about
> them. Starting from an example.
>
> Suppose my system is a display where a message is written. The message
> can be customized by a serial line.
So, this system consists of a display and a serial input line and has
requirements as follows:

1. The display shall at all times show a message, of at most 31 characters.

   - To be defined: what the initial message should be at system reset.

2. The SW shall receive characters from the serial line, buffering them in
   a "frame buffer" in memory, which can hold up to 64 characters.

3. After each received (and buffered) serial-line character, the SW shall
   check if the buffered characters form a complete "frame".

   - To be defined: what to do if the frame buffer is full but does not
     form a complete frame. (This may of course be impossible by design of
     the "frame_is_complete" function.)

4. When the buffered characters form a complete frame, the SW shall convert
   (decode) the contents of the frame into a message, of at most 31
   characters, display that message until another, new frame is received,
   and erase the frame buffer in preparation for the next frame.

The real-time aspects are undefined, except that each message is displayed
until the next frame is received.
> In cooperative approach, I would write something:
>
> [ 8< ]
>
> The display needs to be refreshed. display_printat() is blocking: when
> it returns, all the display was refreshed. So the display always shows
> the entire message: there's no risk the display shows a part of the
> previous message and a part of the new message.
>
> How to convert these two tasks in a preemptive scheduler? Which priority
> to assign to them?
Before that conversion one must think about the real-time requirements:
deadlines, response times. This is difficult for this example, because you
have not stated any requirements. Let's assume these requirements and
properties of the environment:

A. The function "serial_rx" polls the one-character reception buffer of the
   serial line once, and returns the received character, if any, and EOF
   otherwise. It must be called at least as often as characters arrive
   (that is, depending on baud rate) to avoid overrun and loss of some
   characters.

B. A pause in the serial-line character arrival cannot be assumed after the
   completion of a frame. The first character of the next frame can arrive
   as quickly as the baud rate allows.

C. The functions "frame_is_complete" and "display_set_message" take,
   together, so much less time than the serial-line character period that
   the whole "task_serial" function also takes less time than the character
   period.

D. The function "display_printat" can take longer than the serial-line
   character period.

Under these assumptions, the cooperative solution does not work, because
when a frame is completed, "display_printat" is called, which may mean too
much delay before the next "serial_rx" call and cause loss of input
characters.
> The simplest approach is...
>
> [ 8< ]
>
> This code works most of the time, but the display sometime can show a
> mix of old/new messages.
Because the "msg" variable in display.c is accessed from both tasks, as an unprotected shared variable. In addition to that problem, you have written both tasks to use polling, with no delay, which wastes processor resources, especially for "task_display". The "task_serial" task does need to poll "serial_rx" (per the assumptions above), but it could certainly do so at some non-zero period, computed from the baud rate and the execution times to ensure that "serial_rx" calls are frequent enough to avoid loss of input data. Of course, a serious design would use serial-line interrupts and trigger the "task_serial" only when a character has been received. For "task_display", you could replace the "refresh is needed" flag with another semaphore, which is initially zero, is "given" in "task_serial" when a new message is to be displayed, and is "taken" by "task_display" before it displays the new message. Then "task_display" consumes no processing resources until it actually has to.
> Here the solution is to take a binary semaphore before using the shared
> resource (and give the semaphore after the job is done).
>
> void display_set_message(const char *new_msg) {
>    semaphore_take_forever();
>    strncpy(msg, new_msg, sizeof(msg));
Here you need some code to set "refresh is needed" to true. That flag is also a shared variable.
>    semaphore_give();
If you have semaphore calls here, in "display_set_message", ...
> }
>
> ...
>        if (frame_is_complete(rxbuf, rxlen)) {
>          char new_msg[32];
>          /* decode new message from received frame from serial line */
>          semaphore_take_forever();
>          display_set_message(new_msg);
>          semaphore_give();
... then you do not need them (and should not have them) here, around the
call of "display_set_message".
>          rxlen = 0;
>        }
> ...
You also need to use the mutex semaphore from "task_display", for example
as follows:

   void task_display(void) {
      while(1) {
         if (refresh_is_needed()) {
            char new_msg[32];
            semaphore_take_forever();
            strncpy(new_msg, msg, sizeof(new_msg));
            // Here something to set "refresh is needed" to false.
            semaphore_give();
            display_printat(0, 0, new_msg);
         }
      }
   }

Otherwise "task_serial" could still overwrite the message with a new one
during the call of "display_printat".

To assign priorities, you look at the deadlines of the tasks:

- task_serial: deadline = serial-line character period (actually one-half
  of it)
- task_display: no deadline defined: infinite deadline.

Then you assign priorities in order of deadlines: higher priorities for
shorter deadlines, hence "task_serial" will have higher priority than
"task_display". The numerical values of the priorities do not matter, only
their ordering.

With "task_serial" having a higher priority, it can preempt the slow
"display_printat" function whenever it needs to, and thus call "serial_rx"
often enough.
> My impression is that a very simple code is cluttered with
> synchronization things that decrease readability and maintainability and
> increase complexity. Why? Just to use preemption?
No -- to make the SW work, where the cooperative design did not work.
Maintenance is eased because the pre-emptive design continues to work even
if the execution time of "display_printat" was initially short, but then
increased to become longer than the serial-line character period. In larger
programs, preemption has the further important advantage of helping
decouple modules from each other.
> From what I have understood, preemption could solve real-time
> requirement.
>
> Suppose display_printat() takes too much time to finish. This increases
> the worst-case superloop duration and could delay some system reaction.
> For example, if display_printat() takes 1 second to finish, the system
> could react after 1 second from an event (the press of a button, for
> example).
Or it could lose serial input data (under my assumptions).
> If this isn't acceptable, preemption could help. Is it correct?
Yes.

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi . @ .
Il 07/01/2020 03:37, Don Y ha scritto:
> On 1/6/2020 6:08 PM, pozz wrote:
>
> [ 8< ]
>
>> My impression is that a very simple code is cluttered with
>> synchronization things that decrease readability and maintainability
>> and increase complexity. Why? Just to use preemption?
>
> The "clutter" is introduced because your "problem" inherently involves
> conflict; you're allowing two competing uses for a single resource.
However, the shared-resource complexity is present only when preemption is
used.
> The use of the synchronization primitive OVERTLY acknowledges this
> issue/possibility -- lest another (subsequent) developer fail to
> recognize that the possibility exists (i.e., "latent bug").
>
>> Again my impression is that preemption is NOT GOOD and must be avoided
>> if it isn't required.
>
> "Multiplication is NOT GOOD and must be avoided if it isn't required
> (i.e., if you can use repeated ADDITIONs, instead)"
I know all approaches have pros and cons. What I meant is that preemption
is used too often, even when it isn't really required. With FreeRTOS, the
preemptive scheduler is often enabled. It seems to me that many times
preemption is used only for show.
>> So the question is: when a preemption scheduler is needed? Could you
>> give a real example?
>
> The "scheduler" is present in any multitasking system -- cooperative or
> preemptive.  SOMETHING has to decide who to give the processor to when
> the currently executing task gives up control (or has it removed from
> it).
>
> In your cooperative examples, the "while()" is used to implement the
> scheduler: when one task() "returns", the one listed on the next line
> (of the while loop) is given control... "scheduled" to run.
>
> Don't conflate "big loop" with "cooperative".
Yes, my superloop is an example of a *very simple* cooperative scheduler,
but a cooperative scheduler can be implemented in other ways (as in
FreeRTOS).
>> From what I have understood, preemption could solve real-time
>> requirement.
>
> Preemption, like any capability, brings with it assets and liabilities.
>
> [ 8< ]
>
> etc.  No silly "yields" to get in the way.  No need for synchronization
> primitives, either, because nothing is SHARED!
In the superloop cooperative approach:

   void lights1() {
      if (state_ON && timer_expired()) {
         light(1, OFF);
         timer_arm(279ms);
         state_ON = false;
      } else if (!state_ON && timer_expired()) {
         light(1, ON);
         timer_arm(500ms);
         state_ON = true;
      }
   }

This is a state machine, and I admit it's harder to write than in a
preemptive scheduler.
> [Contrived example but you'll find that there are many cases of
> tasks co-executing that are NOT sharing anything (other than the
> processor)]
>
> There are other classes of problems where the problem lends itself,
> naturally, to "peaceful" sharing -- where you're not in conflict with
> another.  And, other techniques to hide the sharing mitigation in other
> mechanisms.
>
> Preemption lets you code AS IF you were the sole owner of the
> processor... EXCEPT when you need to share something (which would imply
> that you are NOT the sole owner -- at least at THAT time!  :> )
I suspect many real applications need this synchronization mess (and its
risks, if you don't know the pitfalls of multitasking very well). And in
those cases I'm not sure if it's simpler to code in
preemption/blocking/synchronization style or in
cooperative/non-blocking/state-machine style.
> The downside to cooperative multitasking is that *it* clutters your
> code -- with all those yield()s -- and requires you to keep track of
> how "long" you've hogged the CPU in the time since your last yield
> (because that time gets reflected to all subsequent task runnings).
>
> When I write code in a cooperative environment, I *litter* the code
> with yield()s to keep *reaction* times (of other tasks) short.  This
> then means yield() has to run like greased lightning lest it impact
> overall performance (because it is pure overhead!)
If you use non-blocking state machines, there aren't any downsides to
cooperative multitasking. There aren't real yield()s; they are hidden at
the point where the task function exits.
On 1/7/2020 2:11 AM, pozz wrote:
> [ 8< ]
>
> However the shared resource complexity is present only when preemption
> is used.
Because it doesn't work right in the nonpreempt case! :>
> [ 8< ]
>
> I know all approaches have pros and cons. What I was meaning is that
> preemption is used too often, even when it isn't really required.
Much of this has to do with coding styles.

E.g., I can't recall the last time I wrote a single-threaded application.
My mind just doesn't see things like that, anymore. I *always* see
parallelism in problems.

You develop a "taste" for a particular type of coding. E.g., I have a
buddy who doesn't think twice about spawning new tasks -- only to kill them
off a short time later, having decided that they've served their purpose.
He might have HUNDREDS in a little tiny system, at any given time! OTOH, I
tend to spawn tasks that "run forever" and "do more".

I have a colleague who approaches all projects top-down. I find it
unnerving to watch him "think" during his design process. By contrast, I
*assess* the problem from the top, down -- and then *implement* it from the
bottom up with a clear view of where I'm headed!

Similarly, I now deal almost exclusively with "featureful" RTOSs -- memory
protection, multiple cores, multiple processors, network interfaces, high
resolution timing, etc. I'm tired of counting bytes and packing 8 booleans
into a byte. Processors are cheap -- my time isn't!

The biggest headache in preemptive designs is worrying about which
operations MUST be atomic -- and being sure to protect that aspect of their
design. But, this is related to sharing. If you don't share stuff, then
you don't have to worry about this problem! And, /ad-hoc/ sharing TENDS to
be "A Bad Thing".

You want to strive to isolate "things" as much as possible. Information
hiding, etc. If there's no *compelling* reason for A to know about B, then
why expose one to the other? And, if they *do* need to be aware of each
other, make their interactions very visible and restricted to a small set
of operations.

So, if you're already working to minimize sharing, then you're already
working to facilitate the preemptive approach.
Finally, it's easier to relate to and tune a preemptive system because the
"interrupts" (preemptions) are visible -- much more so than all those
"yield()s" scattered through your codebase!
> With FreeRTOS preemption scheduler is often enabled. It seems to me many
> times preemption is used only to show how nice I am.
Designing BIG systems with a cooperative approach can become a headache.
How do you ever know what sort of latency a particular task may encounter
at a particular time? You have to be aware of what all the other tasks are
doing (and how willingly they are CURRENTLY relinquishing the processor) in
order to guesstimate the time between activations of YOUR task.

What's BIG? BIG == COMPLEX. What's COMPLEX? COMPLEX is anything that
doesn't COMPLETELY fit in your head. <grin> If you can't remember all of
the pertinent details to be able to make a decision/assessment (e.g., the
above scenario), then your system is COMPLEX.
>> [ 8< ]
>
> In the superloop cooperative approach:
>
> [ 8< ]
>
> This is a state-machine and I admit it's harder to write than in
> preemptive scheduler.
Yes. In the preemptive approach I described, the "state" is automatically
saved for you -- it manifests as the PRESERVED value of the Program Counter
at the time the task was preempted.

Note that this need not be done by a time-slicer. Rather, each sleep()
effectively relinquishes the processor... and resumes execution when the
time period elapses.

Note that you can create a cooperative solution that similarly "tracks" the
"state" -- by having yield() capture the program counter and restore it on
the next activation.
>> [ 8< ]
>
> I suspect many real applications need synchronization mess (and risks if
> you don't know very well what the pitfalls of multitasking).
> And in those cases I'm not sure if it's simpler to code in
> preemption/blocking/synchronization or in
> cooperative/non-blocking/state-machine.
You can actually mix them in the same design. I, for example, often
implement a cooperative multitasking system *in* an ISR (so the ISR's
function can evolve from iteration to iteration). There, the yield()
causes another "responsibility" of the ISR to begin execution while still
actively IN the original interrupt. A separate mechanism is used to return
from the interrupt in the last "interrupt task" executed.
>> The downside to cooperative multitasking is that *it* clutters your
>> code -- with all those yield()s -- and requires you to keep track of
>> how "long" you've hogged the CPU in the time since your last yield
>> (because that time gets reflected to all subsequent task runnings).
>>
>> When I write code in a cooperative environment, I *litter* the code
>> with yield()s to keep *reaction* times (of other tasks) short.  This
>> then means yield() has to run like greased lightning lest it impact
>> overall performance (because it is pure overhead!)
>
> If you use non-blocking state-machines, there aren't any downside to
> cooperative multitasking. There aren't real yield()s, they are hidden
> when the task function exits.
Find a coding style that you're comfortable with and with which you can
be reasonably expected to become proficient.  Then, hone your skills
on that approach -- peeking into other approaches, periodically, to see
when/if they might offer you a better solution.  The "right" approach is
the one that works for you.  *Reliably*.  (if you find yourself writing
buggy code -- or, coding efficiency drops -- then look at where YOUR
problems lie and see if any of them are related to the environment
in which you've chosen to code)

The only real worry is to avoid things that are too "exotic" as they
can make it hard for others to understand/adopt that approach.

E.g., I like to use "finite state executives" to encode user interface
operations and communication protocols.  This lets me condense the rules
for the interface into small tables that (*I* think) tersely encapsulate
the essence of the operator's actions at any point in the interface.

For example, to collect a numeric value from the user, I might have
this FSM fragment:

    STATE   ENTRY
    On  '0' THRU '9'   goto ENTRY    executing  AccumulateDigit()
    On  'BACKSPACE'    goto ENTRY    executing  ElideDigit()
    On  'CLEAR'        goto ENTRY    executing  ClearAccumulator()
    On  'ENTER'        goto VERIFY   executing  CheckValue()

    STATE   VERIFY
    On  'VALID'        goto DONE     executing  AcceptValue()
    On  'INVALID'      goto ERROR    executing  SignalError()

    STATE   ERROR
    On  'TIMEOUT'      goto ENTRY    executing  ClearErrorMessage()
    LinkTo             ENTRY

Everything above this sentence can compile into as few as *30* bytes,
despite its verbosity!  (LinkTo effectively treats the referenced
state's rules as if they were also present in the table.
Here, it allows
the operator to type any of the "normal" data entry keystrokes enumerated
in the referenced state -- ENTRY -- and have them terminate the error
message -- without having to wait for the TIMEOUT)

    AccumulateDigit() {
        accumulator = 10*accumulator + (input-'0')
        DisplayAccumulator()
    }

    ElideDigit() {
        accumulator /= 10
        DisplayAccumulator()
    }

    CheckValue() {
        signal( ((accumulator >= MIN) && (accumlator <= MAX)) ?
                'VALID' :
                'INVALID' )
    }

    SignalError() {
        beep()
        DisplayString("Value out of bounds!")
        if (Timer)
            kill(Timer)
        spawn(Timer)
    }

    Timer() {     // nominally runs concurrently with ERROR state
        sleep(2 sec)
        signal(TIMEOUT)
    }

In *my* opinion, this is relatively self-explanatory.  However, it
requires additional compile-time and run-time tools to implement.
And, needs to be integrated with the particular O/S.

So, it's unattractive to certain clients (who want a cleaner toolchain
or have their own O/S requirements).

[Imagine developing an entire subsystem like this -- and then discovering
a client resists your including it (REUSING it as an ALREADY DEBUGGED
component!) in your solution to their problem!]

Good luck!
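Don Y's table syntax above is compiled by his own (unshown) tools; as a rough
illustration of how such a "finite state executive" can be table-driven in
plain C, here is a minimal sketch of the ENTRY state only. All names here
(Rule, fsm_step, the '\b'/'C'/'\n' event codes, the MIN/MAX bounds) are
invented for the sketch, not his actual implementation:

```c
#include <stddef.h>

/* States of the value-entry fragment. */
enum { ENTRY, VERIFY, DONE, ERROR };

#define MIN_VAL 1
#define MAX_VAL 999

static long accumulator;
static int  state = ENTRY;

/* Transition actions; each receives the triggering event. */
static void accumulate(char c) { accumulator = 10 * accumulator + (c - '0'); }
static void elide(char c)      { (void)c; accumulator /= 10; }
static void clear(char c)      { (void)c; accumulator = 0; }
static void check(char c);

/* One rule: an event range, a next state, and an action to execute. */
typedef struct {
    char lo, hi;            /* event range, e.g. '0'..'9' */
    int  next;              /* state to transition to     */
    void (*action)(char);   /* executed on the transition */
} Rule;

static const Rule entry_rules[] = {
    { '0',  '9',  ENTRY,  accumulate },   /* On '0' THRU '9'  */
    { '\b', '\b', ENTRY,  elide      },   /* On 'BACKSPACE'   */
    { 'C',  'C',  ENTRY,  clear      },   /* On 'CLEAR'       */
    { '\n', '\n', VERIFY, check      },   /* On 'ENTER'       */
    { 0, 0, 0, NULL }                     /* end of table     */
};

/* CheckValue(): the VERIFY state resolves immediately in this sketch. */
static void check(char c)
{
    (void)c;
    state = (accumulator >= MIN_VAL && accumulator <= MAX_VAL) ? DONE : ERROR;
}

/* Feed one event through the current state's rule table. */
void fsm_step(char ev)
{
    const Rule *r;
    if (state != ENTRY)       /* only ENTRY is tabled in this sketch */
        return;
    for (r = entry_rules; r->action; r++) {
        if (ev >= r->lo && ev <= r->hi) {
            state = r->next;
            r->action(ev);
            return;
        }
    }
}
```

Because the rules are plain data, a table like this can indeed be very small;
the dispatch loop is shared by every state.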
On 07/01/2020 15:51, Don Y wrote:
> On 1/7/2020 2:11 AM, pozz wrote:
>> On 07/01/2020 03:37, Don Y wrote:
>>> On 1/6/2020 6:08 PM, pozz wrote:
>>>
>>> [ 8< ]
>>>
>>>> My impression is that a very simple code is cluttered with
>>>> synchronization things that decrease readability and maintainability
>>>> and increase complexity. Why? Just to use preemption?
>>>
>>> The "clutter" is introduced because your "problem" inherently involves
>>> conflict; you're allowing two competing uses for a single resource.
>>
>> However the shared resource complexity is present only when
>> preemption is used.
>
> Because it doesn't work right in the nonpreempt case!  :>
Why do you say this? This application can work flawlessly even with cooperative multitasking.
>>> The use of the synchronization primitive OVERTLY acknowledges this
>>> issue/possibility -- lest (subsequent) another developer fail to
>>> recognize that the possibility exists (i.e., "latent bug").
>>>
>>>> Again my impression is that preemption is NOT GOOD and must be
>>>> avoided if it isn't required.
>>>
>>> "Multiplication is NOT GOOD and must be avoided if it isn't required
>>> (i.e., if you can use repeated ADDITIONs, instead)"
>>
>> I know all approaches have pros and cons. What I was meaning is that
>> preemption is used too often, even when it isn't really required.
>
> Much of this has to do with coding styles.  E.g., I can't recall the
> last time I wrote a single-threaded application.  My mind just doesn't
> see things like that, anymore.  I *always* see parallelism in problems.
>
> You develop a "taste" for a particular type of coding.  E.g., I have a
> buddy who doesn't think twice about spawning new tasks -- only to kill
> them off a short time later, having decided that they've served their
> purpose.  He might have HUNDREDS in a little tiny system, at any given
> time!  OTOH, I tend to spawn tasks that "run forever" and "do more".
>
> I have a colleague who approaches all projects top-down.  I find it
> unnerving to watch him "think" during his design process.  By contrast,
> I *assess* the problem from the top, down -- and then *implement* it
> from the bottom up with a clear view of where I'm headed!
>
> Similarly, I now deal almost exclusively with "featureful" RTOSs -- memory
> protection, multiple cores, multiple processors, network interfaces, high
> resolution timing, etc.  I'm tired of counting bytes and packing 8 booleans
> into a byte.  Processors are cheap -- my time isn't!
Yes, you're right. It's a matter of coding style. I don't have any experience with multi-tasking systems, so I am worried about them. There's a learning curve for coding tasks in a preemptive environment, which appears to me a waste of time if I'm able to reach the same goal with a completely different approach that is much more friendly to me.

Anyway, I'd like to learn a little of the other approach. This is the reason for my posts.
> The biggest headache in preemptive designs is worrying about which
> operations MUST be atomic -- and being sure to protect that aspect of
> their design.  But, this is related to sharing.  If you don't share
> stuff, then you don't have to worry about this problem!
>
> And, /ad-hoc/ sharing TENDS to be "A Bad Thing".  You want to strive to
> isolate "things" as much as possible.  Information hiding.  etc.  If
> there's no *compelling* reason for A to know about B, then why expose
> one to the other?
>
> And, if they *do* need to be aware of each other, make their interactions
> very visible and restricted to a small set of operations.
In my very simple application (display showing a message) there is a shared resource that can't be avoided (at least by me). Imagine if many variables could be set through the serial line: a semaphore every time both tasks need to access those variables!
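One way to keep that locking from spreading through the code is to group all
the serially-settable variables into a single struct and guard whole-struct
copies with one critical section, so each task holds the lock once per
snapshot rather than once per variable. A minimal host-runnable sketch: the
Settings layout is invented, and enter_critical()/exit_critical() stand in
for the platform's real interrupt-masking or mutex primitives:

```c
#include <stdint.h>

/* All variables that the serial protocol can set, gathered in one place. */
typedef struct {
    char     msg[32];
    uint32_t brightness;
    uint32_t timeout_ms;
} Settings;

static Settings shared;                 /* written by the serial task */

/* Placeholders: on a target these would be e.g. __disable_irq() /
 * __enable_irq(), or a mutex take/give under an RTOS. */
static void enter_critical(void) { }
static void exit_critical(void)  { }

/* Writer: one critical section per whole update, not per variable. */
void settings_write(const Settings *s)
{
    enter_critical();
    shared = *s;                        /* struct copy is the whole update */
    exit_critical();
}

/* Reader: take a private snapshot, then work on it at leisure
 * (e.g. while the slow display refresh runs). */
void settings_read(Settings *out)
{
    enter_critical();
    *out = shared;
    exit_critical();
}
```

The reader never sees a half-updated mix of old and new values, which is
exactly the "hybrid message" hazard discussed for the display.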
> So, if you're already working to minimize sharing, then you're already
> working to facilitate the preemptive approach.
>
> Finally, it's easier to relate to and tune a preemptive system because
> the "interrupts" (preemptions) are visible -- much moreso than all those
> "yield()s" scattered through your codebase!
Again, I don't use explicit yield()s, so the worst-case superloop duration is the sum of the worst-case durations of each task, plus the worst-case duration of interrupts. If tasks are coded as non-blocking state machines, this worst-case duration can be very small and real-time requirements can be met.
>> With FreeRTOS preemption scheduler is often enabled. It seems to me
>> many times preemption is used only to show how nice I am.
>
> Designing BIG systems with cooperative approach can become a headache.
> How do you ever know what sort of latency a particular task may
> encounter at a particular time?  You have to be aware of what all the
> other tasks are doing (and how willingly they are CURRENTLY
> relinquishing the processor) in order to guesstimate the time between
> activations of YOUR task.
Again, in my approach every task is *non-blocking*, so each takes at most 100us-1ms per loop. If I have 10 tasks, the superloop duration can be estimated at 10ms maximum. If the most critical real-time requirement is 100ms or more, cooperative multitasking is ok.

Of course, we need to take interrupts into account, but they are much shorter than tasks, so they can normally be ignored. Anyway, they must be considered in a preemptive scheduler too.
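For concreteness, the lights1 state-machine fragment quoted earlier in the
thread can be completed into a compilable non-blocking task of this kind.
This is only a sketch: now_ms and light() are host-side stand-ins (on a real
target the tick would come from a timer interrupt), and the ON/OFF periods
follow Don Y's original spec (500 ms on, 279 ms off):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical millisecond tick, driven manually here so the sketch can
 * run on a host; on a target it would be maintained by a timer ISR. */
static uint32_t now_ms;

static uint32_t deadline_ms;
static bool     led_on;     /* state of the state machine        */
static bool     led_pin;    /* stands in for the real output pin */

static void light(int id, bool on) { (void)id; led_pin = on; }

/* Signed-difference compare survives tick-counter wraparound. */
static bool timer_expired(void)
{
    return (int32_t)(now_ms - deadline_ms) >= 0;
}

static void timer_arm(uint32_t ms) { deadline_ms = now_ms + ms; }

/* One non-blocking pass of the blinker task: ON for 500 ms, OFF for
 * 279 ms.  Returns immediately when there is nothing to do, which is
 * what keeps the superloop iteration short. */
void lights1(void)
{
    if (!timer_expired())
        return;
    if (led_on) {
        light(1, false);
        timer_arm(279);     /* OFF period */
        led_on = false;
    } else {
        light(1, true);
        timer_arm(500);     /* ON period  */
        led_on = true;
    }
}
```

Each call does a bounded, tiny amount of work, so ten such tasks in a
superloop stay well under the 10 ms estimate above.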
> What's BIG?  BIG == COMPLEX.
>
> What's COMPLEX?  COMPLEX is anything that doesn't COMPLETELY fit in your
> head.  <grin>  If you can't remember all of the pertinent details to be
> able to make a decision/assessment (e.g., the above scenario), then your
> system is COMPLEX.
>
>>>> From what I have understood, preemption could solve real-time
>>>> requirement.
>>>
>>> Preemption, like any capability, brings with it assets and liabilities.
>>> Imagine you were tasked with building a box that blinked lights (XMAS
>>> lights!) at different/varying rates.  The box has a dozen solid state
>>> switches that control the individual lights (or, "light strands").
>>>
>>> It would be really easy -- and intuitive -- to write:
>>>
>>>     void lights1() {
>>>         while(FOREVER) {
>>>             light(1,ON);  sleep(500ms);
>>>             light(1,OFF); sleep(279ms);
>>>         }
>>>     }
>>>
>>>     void lights2() {
>>>         while(FOREVER) {
>>>             light(2,ON);  sleep(100ms);
>>>             light(2,OFF); sleep(50ms);
>>>         }
>>>     }
>>>
>>>     void lights3() {
>>>         while(FOREVER) {
>>>             ontime = (10.0 * rand() ) / RAND_MAX;
>>>             light(3,ON);  sleep(ontime);
>>>             light(3,OFF); sleep(10.0 - ontime);
>>>         }
>>>     }
>>>
>>> etc.  No silly "yields" to get in the way.  No need for synchronization
>>> primitives, either, because nothing is SHARED!
>>
>> In the superloop cooperative approach:
>>
>>     void lights1() {
>>         if (state_ON && timer_expired()) {
>>             light(1, OFF);
>>             timer_arm(500ms);
>>             state_ON = false;
>>         } else if (!state_ON && timer_expired()) {
>>             light(1, ON);
>>             timer_arm(279ms);
>>             state_ON = true;
>>         }
>>
>> This is a state-machine and I admit it's harder to write then in
>> preemptive scheduler.
>
> Yes.  In the preemptive approach I described, the "state" is automatically
> saved for you -- it manifests as the PRESERVED value of the Program Counter
> at the time the task was preempted.  Note that this need not be done by
> a time-slicer.  Rather, each sleep() effectively relinquishes the
> processor... and resumes execution when the time period elapses.
>
> Note that you can create a cooperative solution that similarly
> "tracks" the "state" -- by having yield() capture the program counter
> and restore it on the next activation.
>
>>> [Contrived example but you'll find that there are many cases of
>>> tasks co-executing that are NOT sharing anything (other than the
>>> processor)]
>>>
>>> There are other classes of problems where the problem lends itself,
>>> naturally, to "peaceful" sharing -- where you're not in conflict with
>>> another.  And, other techniques to hide the sharing mitigation in
>>> other mechanisms.
>>>
>>> Preemption lets you code AS IF you were the sole owner of the
>>> processor... EXCEPT when you need to share something (which would
>>> imply that you are NOT the sole owner -- at least at THAT time!  :> )
>>
>> I suspect many real applications need synchronization mess (and risks
>> if you don't know very well what the pitfalls of multitasking).
>> And in those cases I'm not sure if it's simpler to code in
>> preemption/blocking/synchronization or in
>> cooperative/non-blocking/state-machine.
>
> You can actually mix them in the same design.
>
> I, for example, often implement a cooperative multitasking system *in*
> an ISR (so the ISRs function can evolve from iteration to iteration).
> There, the yield() causes another "responsibility" of the ISR to
> begin execution while still actively IN the original interrupt.
> A separate mechanism is used to return from the interrupt in the
> last "interrupt task" executed.
>
>>> The downside to cooperative multitasking is that *it* clutters your
>>> code -- with all those yield()s -- and requires you to keep track of
>>> how "long" you've hogged the CPU in the time since your last yield
>>> (because that time gets reflected to all subsequent task runnings).
>>>
>>> When I write code in a cooperative environment, I *litter* the code
>>> with yield()s to keep *reaction* times (of other tasks) short.  This
>>> then means yield() has to run like greased lightning lest it impact
>>> overall performance (because it is pure overhead!)
>>
>> If you use non-blocking state-machines, there aren't any downside to
>> cooperative multitasking. There aren't real yield()s, they are hidden
>> when the task function exits.
>
> Find a coding style that you're comfortable with and with which you can
> be reasonably expected to become proficient.  Then, hone your skills
> on that approach -- peeking into other approaches, periodically, to see
> when/if they might offer you a better solution.  The "right" approach is
> the one that works for you.  *Reliably*.  (if you find yourself writing
> buggy code -- or, coding efficiency drops -- then look at where YOUR
> problems lie and see if any of them are related to the environment
> in which you've chosen to code)
>
> The only real worry is to avoid things that are too "exotic" as they
> can make it hard for others to understand/adopt that approach.
>
> E.g., I like to use "finite state executives" to encode user interface
> operations and communication protocols.  This lets me condense the rules
> for the interface into small tables that (*I* think) tersely encapsulate
> the essence of the operator's actions at any point in the interface.
>
> For example, to collect a numeric value from the user, I might have
> this FSM fragment:
>
>     STATE   ENTRY
>     On  '0' THRU '9'   goto ENTRY    executing  AccumulateDigit()
>     On  'BACKSPACE'    goto ENTRY    executing  ElideDigit()
>     On  'CLEAR'        goto ENTRY    executing  ClearAccumulator()
>     On  'ENTER'        goto VERIFY   executing  CheckValue()
>
>     STATE   VERIFY
>     On  'VALID'        goto DONE     executing  AcceptValue()
>     On  'INVALID'      goto ERROR    executing  SignalError()
>
>     STATE   ERROR
>     On  'TIMEOUT'      goto ENTRY    executing  ClearErrorMessage()
>     LinkTo             ENTRY
>
> Everything above this sentence can compile into as few as *30* bytes,
> despite its verbosity!  (LinkTo effectively treats the referenced
> state's rules as if they were also present in the table.  Here, it allows
> the operator to type any of the "normal" data entry keystrokes enumerated
> in the referenced state -- ENTRY -- and have them terminate the error
> message -- without having to wait for the TIMEOUT)
>
>     AccumulateDigit() {
>         accumulator = 10*accumulator + (input-'0')
>         DisplayAccumulator()
>     }
>
>     ElideDigit() {
>         accumulator /= 10
>         DisplayAccumulator()
>     }
>
>     CheckValue() {
>         signal( ((accumulator >= MIN) && (accumlator <= MAX)) ?
>                 'VALID' :
>                 'INVALID' )
>     }
>
>     SignalError() {
>         beep()
>         DisplayString("Value out of bounds!")
>         if (Timer)
>             kill(Timer)
>         spawn(Timer)
>     }
>
>     Timer() {     // nominally runs concurrently with ERROR state
>         sleep(2 sec)
>         signal(TIMEOUT)
>     }
>
> In *my* opinion, this is relatively self-explanatory.  However, it
> requires additional compile-time and run-time tools to implement.
> And, needs to be integrated with the particular O/S.
>
> So, it's unattractive to certain clients (who want a cleaner toolchain
> or have their own O/S requirements).
>
> [Imagine developing an entire subsystem like this -- and then discovering
> a client resists your including it (REUSING it as an ALREADY DEBUGGED
> component!) in your solution to their problem!]
>
> Good luck!
On 07/01/2020 08:38, Niklas Holsti wrote:
> On 2020-01-07 3:08, pozz wrote: >> I noticed my previous post about preemptive OS involved many people >> and started many discussions, most of them theoric. >> >> Someone wrote the synchronization of tasks in preemptive scheduler is >> not so difficult, after understanding some things. > > I made some such statement. > >> Others suggested to abandon at all preemptive scheduler, considering >> its pitfalls. >> >> Because I know my limits, I don't think I can produce a well-written >> preemption system. However I'd like to understand a little more about >> them. Starting from an example. >> >> Suppose my system is a display where a message is written. The message >> can be customized by a serial line. > > So, this system consists of a display and a serial input line and has > requirements as follows: > > 1. The display shall at all times show a message, of at most 31 characters. > > - To be defined: what the initial message should be at system reset. > > 2. The SW shall receive characters from the serial line, buffering them > in a "frame buffer" in memory, which can hold up to 64 characters. > > 3. After each received (and buffered) serial-line character, the SW > shall check if the buffered characters form a complete "frame". > > - To be defined: what to do if the frame buffer is full but does not > form a complete frame. (This may of course be impossible by design of > the "frame_is_complete" function.) > > 4. When the buffered characters form a complete frame, the SW shall > convert (decode) the contents of the frame into a message, of at most 31 > characters, display that message until another, new frame is received, > and erase the frame-buffer in preparation for the next frame. > > The real-time aspects are undefined, except that each message is > displayed until the next frame is received.
The only real-time requirement is that a new message sent through the serial line appears on the display in a reasonable time: 100ms? 1s? Something like that.

The second requirement is that the display must never show a hybrid message composed of parts of two successive messages.
>> In cooperative approach, I would write something:
>>
>> --- main.c ---
>> ...
>> while(1) {
>>    task_display();
>>    task_serial();
>> }
>> --- end of main.c ---
>>
>> --- display.c ---
>> static const char msg[32];
>> void display_set_message(const char *new_msg) {
>>    strncpy(msg, new_msg, sizeof(msg));
>> }
>> void task_display(void) {
>>    if (refresh_is_needed()) {
>>      display_printat(0, 0, msg);
>>    }
>> }
>> --- end of display.c ---
>>
>> --- serial.c ---
>> static unsigned char rxbuf[64];
>> static size_t rxlen;
>> void task_serial(void)
>> {
>>    unsigned char b = serial_rx();
>>    if (b != EOF) {
>>      rxbuf[rxlen++] = b;
>>      if (frame_is_complete(rxbuf, rxlen)) {
>>        char new_msg[32];
>>        /* decode new message from received frame from serial line */
>>        display_set_message(new_msg);
>>        rxlen = 0;
>>      }
>>    }
>> }
>> --- end of serial.c ---
>>
>> The display needs to be refreshed. display_printat() is blocking: when
>> it returns, all the display was refreshed. So the display always shows
>> the entire message: there's no risk the display shows a part of the
>> previous message and a part of the new message.
>>
>> How to convert these two tasks in a preemptive scheduler? Which
>> priority to assign to them?
> > Before that conversion one must think about the real-time requirements: > deadlines, response-times. This is difficult for this example, because > you have not stated any requirements. > > Let's assume these requirements and properties of the environment: > > A. The function "serial_rx" polls the one-character reception buffer of > the serial line once, and returns the received character, if any, and > EOF otherwise. It must be called at least as often as characters arrive > (that is, depending on baud rate) to avoid overrun and loss of some > characters.
No, the serial driver works in interrupt mode and already uses a FIFO buffer, sufficiently big. serial_rx() pops a single element from the FIFO, if any.
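Such an interrupt-mode driver can be sketched as a ring buffer that the UART
receive ISR pushes into and serial_rx() pops from. The names and the
EOF-style -1 return are assumptions to match the thread's usage, and a real
driver would also document how the indices stay coherent against the ISR
(here each index has a single writer, which is the usual lock-free trick):

```c
#include <stdint.h>

#define RXFIFO_SIZE 128u        /* power of two keeps the index masking cheap */

static volatile uint8_t  fifo[RXFIFO_SIZE];
static volatile unsigned head;  /* written only by the ISR  */
static volatile unsigned tail;  /* written only by the task */

/* Called from the UART receive interrupt with each incoming byte. */
void uart_rx_isr(uint8_t b)
{
    unsigned next = (head + 1u) & (RXFIFO_SIZE - 1u);
    if (next != tail) {         /* drop the byte if the FIFO is full */
        fifo[head] = b;
        head = next;
    }
}

/* Non-blocking pop used by task_serial(); -1 means "no data yet". */
int serial_rx(void)
{
    int b;
    if (tail == head)
        return -1;
    b = fifo[tail];
    tail = (tail + 1u) & (RXFIFO_SIZE - 1u);
    return b;
}
```

With a FIFO "sufficiently big" relative to the baud rate, the superloop can
stall in display_printat() for a while without losing characters, which is
the point being made here.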
> B. A pause in the serial-line character arrival cannot be assumed after > the completion of a frame. The first character of the next frame can > arrive as quickly as the baud rate allows.
Why? I think the user can change the message only when he wants. Normally no activity is present on the serial line.
> C. The functions "frame_is_complete" and "display_set_message" take, > together, so much less time than the serial-line character period that > the whole "task_serial" function also takes less time than the character > period. > > D. The function "display_printat" can take longer than the serial-line > character period. > > Under these assumptions, the cooperative solution does not work, because > when a frame is completed, "display_printat" is called which may mean > too much delay for the next "serial_rx" call and cause loss of input > characters.
The serial driver's interrupts guarantee no loss of input during display_printat() or other functions.
>> The simplest approach is...
>>
>> --- display.c ---
>> static const char msg[32];
>> void display_set_message(const char *new_msg) {
>>    strncpy(msg, new_msg, sizeof(msg));
>> }
>> void task_display(void) {
>>    while(1) {
>>      if (refresh_is_needed()) {
>>        display_printat(0, 0, msg);
>>      }
>>    }
>> }
>> --- end of display.c ---
>>
>> --- serial.c ---
>> static unsigned char rxbuf[32];
>> static size_t rxlen;
>> void task_serial(void)
>> {
>>    while(1) {
>>      unsigned char b = serial_rx();
>>      if (b != EOF) {
>>        rxbuf[rxlen++] = b;
>>        if (frame_is_complete(rxbuf, rxlen)) {
>>          char new_msg[32];
>>          /* decode new message from received frame from serial line */
>>          display_set_message(new_msg);
>>          rxlen = 0;
>>        }
>>      }
>>    }
>> }
>> --- end of serial.c ---
>>
>> This code works most of the time, but the display sometime can show a
>> mix of old/new messages.
> > Because the "msg" variable in display.c is accessed from both tasks, as > an unprotected shared variable. > > In addition to that problem, you have written both tasks to use polling, > with no delay, which wastes processor resources, especially for > "task_display". The "task_serial" task does need to poll "serial_rx" > (per the assumptions above), but it could certainly do so at some > non-zero period, computed from the baud rate and the execution times to > ensure that "serial_rx" calls are frequent enough to avoid loss of input > data. > > Of course, a serious design would use serial-line interrupts and trigger > the "task_serial" only when a character has been received. > > For "task_display", you could replace the "refresh is needed" flag with > another semaphore, which is initially zero, is "given" in "task_serial" > when a new message is to be displayed, and is "taken" by "task_display" > before it displays the new message. Then "task_display" consumes no > processing resources until it actually has to.
I was thinking of a refresh made at regular intervals, such as every 100ms.
>> Here the solution is to take a binary semaphore before using the
>> shared resource (and give the semaphore after the job is done).
>>
>> void display_set_message(const char *new_msg) {
>>    semaphore_take_forever();
>>    strncpy(msg, new_msg, sizeof(msg));
>
> Here you need some code to set "refresh is needed" to true. That flag is
> also a shared variable.
>
>>    semaphore_give();
>
> If you have semaphore calls here, in "display_set_message", ...
>
>> }
>>
>> ...
>>        if (frame_is_complete(rxbuf)) {
>>          char new_msg[32];
>>          /* decode new message from received frame from serial line */
>>          semaphore_take_forever();
>>          display_set_message(new_msg);
>>          semaphore_give();
>
> ... then you do not need them (and should not have them) here, around
> the call of "display_set_message".
>
>>          rxlen = 0;
>>        }
>> ...
> You also need to use the mutex semaphore from "task_display", for
> example as follows:
>
>    void task_display(void) {
>      while(1) {
>        if (refresh_is_needed()) {
>          char new_msg[32];
>          semaphore_take_forever();
>          strncpy (new_msg, msg, sizeof (new_msg));
>          // Here something to set "refresh is needed" to false.
>          semaphore_give();
>          display_printat(0, 0, new_msg);
>        }
>      }
>    }
>
> Otherwise the "task_serial" could still overwrite the message with a new
> one, during the call of "display_printat".
>
> To assign priorities, you look at the deadlines of the tasks:
>
> - task_serial: deadline = serial-line character period (actually
>   one-half of it)
>
> - task_display: no deadline defined: infinite deadline.
>
> Then you assign priorities in order of deadlines: higher priorities for
> shorter deadlines, hence "task_serial" will have higher priority than
> "task_display". The numerical values of the priorities do not matter,
> only their ordering.
>
> With "task_serial" having a higher priority, it can pre-empt the slow
> "display_printat" function whenever it needs to, and thus call
> "serial_rx" often enough.
>
>> My impression is that a very simple code is cluttered with
>> synchronization things that decrease readability and maintainability
>> and increase complexity. Why? Just to use preemption?
>
> No -- to make the SW work, where the cooperative design did not work.
>
> Maintenance is eased because the pre-emptive design continues to work
> even if the execution time of "display_printat" was initially short, but
> then increased to become longer than the serial-line character period.
>
> In larger programs there are important advantages of preemption in
> helping decouple modules from each other.
>
>> From what I have understood, preemption could solve real-time
>> requirement.
>>
>> Suppose display_printat() takes too much time to finish. This
>> increases the worst-case superloop duration and could delay some
>> system reaction.
>> For example, if display_printat() takes 1 second to finish, the system
>> could react after 1 second from an event (the press of a button, for
>> example).
>
> Or it could lose serial input data (under my assumptions).
>
>> If this isn't acceptable, preemption could help. Is it correct?
>
> Yes.
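Niklas's take/copy/give pattern above can be sketched in portable C. This is a minimal sketch assuming a POSIX-threads host: a pthread mutex stands in for the RTOS binary semaphore, and recording the string in `last_drawn` stands in for the slow `display_printat()` call.

```c
/* Sketch of the mutex-protected shared message from the thread above.
 * Assumptions: POSIX threads available; last_drawn[] stands in for the
 * physical display; display_task_step() is one iteration of task_display. */
#include <pthread.h>
#include <string.h>

static char msg[32];
static int refresh_needed;
static char last_drawn[32];   /* what display_printat() would have drawn */
static pthread_mutex_t msg_lock = PTHREAD_MUTEX_INITIALIZER;

void display_set_message(const char *new_msg)
{
    pthread_mutex_lock(&msg_lock);          /* semaphore_take_forever() */
    strncpy(msg, new_msg, sizeof msg - 1);
    msg[sizeof msg - 1] = '\0';
    refresh_needed = 1;                     /* flag is shared too, so set it
                                               inside the critical section */
    pthread_mutex_unlock(&msg_lock);        /* semaphore_give() */
}

/* Copy under the lock, "draw" outside it; returns 1 if a refresh happened. */
int display_task_step(void)
{
    char local[32];
    int drew = 0;

    pthread_mutex_lock(&msg_lock);
    if (refresh_needed) {
        strncpy(local, msg, sizeof local);
        refresh_needed = 0;
        drew = 1;
    }
    pthread_mutex_unlock(&msg_lock);

    if (drew)
        strcpy(last_drawn, local);          /* stands in for display_printat() */
    return drew;
}
```

Copying the message under the lock and drawing outside it keeps the critical section short, so task_serial is never blocked for the full duration of the slow display refresh.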
On 2020-01-08 1:02, pozz wrote:
> On 07/01/2020 08:38, Niklas Holsti wrote:
>> On 2020-01-07 3:08, pozz wrote:
>>> I noticed my previous post about preemptive OS involved many people
>>> and started many discussions, most of them theoric.
>>>
>>> Someone wrote the synchronization of tasks in preemptive scheduler is
>>> not so difficult, after understanding some things.
>>
>> I made some such statement.
>>
>>> Others suggested to abandon at all preemptive scheduler, considering
>>> its pitfalls.
>>>
>>> Because I know my limits, I don't think I can produce a well-written
>>> preemption system. However I'd like to understand a little more about
>>> them. Starting from an example.
>>>
>>> Suppose my system is a display where a message is written. The
>>> message can be customized by a serial line.
>>
>> So, this system consists of a display and a serial input line and has
>> requirements as follows:
>>
>> 1. The display shall at all times show a message, of at most 31
>> characters.
>>
>> - To be defined: what the initial message should be at system reset.
>>
>> 2. The SW shall receive characters from the serial line, buffering
>> them in a "frame buffer" in memory, which can hold up to 64 characters.
>>
>> 3. After each received (and buffered) serial-line character, the SW
>> shall check if the buffered characters form a complete "frame".
>>
>> - To be defined: what to do if the frame buffer is full but does not
>> form a complete frame. (This may of course be impossible by design of
>> the "frame_is_complete" function.)
>>
>> 4. When the buffered characters form a complete frame, the SW shall
>> convert (decode) the contents of the frame into a message, of at most
>> 31 characters, display that message until another, new frame is
>> received, and erase the frame-buffer in preparation for the next frame.
>>
>> The real-time aspects are undefined, except that each message is
>> displayed until the next frame is received.
>
> The only real-time requirement is that a new message sent through the
> serial line appears on the display in a reasonable time: 100 ms? 1 s?
> Something like that.
>
> The second requirement is that the display mustn't show a hybrid message
> composed of parts of two successive messages.
>
>>> In cooperative approach, I would write something:
[snip code]
>>> How to convert these two tasks in a preemptive scheduler? Which
>>> priority to assign to them?
>>
>> Before that conversion one must think about the real-time
>> requirements: deadlines, response-times. This is difficult for this
>> example, because you have not stated any requirements.
>>
>> Let's assume these requirements and properties of the environment:
>>
>> A. The function "serial_rx" polls the one-character reception buffer
>> of the serial line once, and returns the received character, if any,
>> and EOF otherwise. It must be called at least as often as characters
>> arrive (that is, depending on baud rate) to avoid overrun and loss of
>> some characters.
You asked about possible advantages of pre-emption; I made my assumptions, above, such that the (incomplete) example you gave shows this advantage, under these assumptions (which could be true for other, otherwise similar example applications).
> No, the serial driver works in interrupt mode and already uses a FIFO
> buffer, sufficiently big. serial_rx() pops a single element from the
> FIFO, if any.
Ah, then your *system* is intrinsically pre-emptive (the interrupts pre-empt the tasks), even if the *code you showed* does not show this pre-emption. I won't reply to your other comments on my assumptions, as they are irrelevant to the point of where and when pre-emption can be good for you.
> The serial driver's interrupts guarantee no loss of input during
> display_printat() or other functions.
Right, because it is pre-emptive. So there you see the advantage.
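The interrupt-driven FIFO pozz describes is the classic single-producer/single-consumer ring buffer. Here is a minimal sketch with the ISR simulated as an ordinary function call; `uart_isr` and the buffer details are assumptions for illustration, not pozz's actual driver. With one producer (the ISR) and one consumer (the task), and each index written by only one side, no further locking is needed on targets where the index type is read atomically.

```c
/* Sketch of an interrupt-driven receive FIFO: the UART ISR pushes bytes,
 * serial_rx() pops one byte or returns EOF. Simulated here on a host;
 * on a real MCU uart_isr() would be the interrupt handler. */
#include <stdio.h>   /* for EOF */

#define RX_FIFO_SIZE 64u   /* power of two, so the wrap is a cheap mask */

static volatile unsigned char rx_fifo[RX_FIFO_SIZE];
static volatile unsigned rx_head;   /* written by the ISR only  */
static volatile unsigned rx_tail;   /* written by the task only */

/* Called from the UART receive interrupt. Drops the byte when full. */
void uart_isr(unsigned char byte)
{
    unsigned next = (rx_head + 1u) & (RX_FIFO_SIZE - 1u);
    if (next != rx_tail) {          /* not full */
        rx_fifo[rx_head] = byte;
        rx_head = next;
    }
}

/* Called from the cooperative task: pop one byte, or EOF if empty. */
int serial_rx(void)
{
    if (rx_tail == rx_head)
        return EOF;
    unsigned char b = rx_fifo[rx_tail];
    rx_tail = (rx_tail + 1u) & (RX_FIFO_SIZE - 1u);
    return b;
}
```

This is exactly the pre-emption Niklas points at: the ISR may fire in the middle of `serial_rx()`, and the code stays correct only because of the one-writer-per-index discipline.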
>> For "task_display", you could replace the "refresh is needed" flag
>> with another semaphore, which is initially zero, is "given" in
>> "task_serial" when a new message is to be displayed, and is "taken" by
>> "task_display" before it displays the new message. Then "task_display"
>> consumes no processing resources until it actually has to.
>
> I was thinking of a refresh made at regular intervals, such as every
> 100 ms.
In some systems that could result in annoying flickering of the display, which could even be dangerous (seizure-inducing) to some users.

-- 
Niklas Holsti
Tidorum Ltd

niklas holsti tidorum fi
      .      @       .
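Niklas's event-semaphore suggestion (a semaphore that starts at zero, "given" by task_serial and "taken" by task_display) can be sketched with a POSIX counting semaphore standing in for the RTOS primitive. Function names here are invented for illustration; in the real task_display the probe would be a blocking `sem_wait`, not the non-blocking `sem_trywait` used so the sketch can be exercised without threads.

```c
/* Sketch of the "refresh" event semaphore: initially zero, so
 * task_display sleeps until task_serial signals new work. */
#include <semaphore.h>

static sem_t refresh_sem;

void refresh_init(void)            /* done once at startup */
{
    sem_init(&refresh_sem, 0, 0);  /* initial count 0: nothing to do */
}

void signal_refresh(void)          /* called by task_serial ("give") */
{
    sem_post(&refresh_sem);
}

/* Non-blocking probe ("take"); a real task_display would use sem_wait()
 * here and simply block until signalled, consuming no CPU meanwhile. */
int refresh_pending(void)
{
    return sem_trywait(&refresh_sem) == 0;
}
```

Because the semaphore counts, a refresh signalled while the display task is busy drawing is not lost; it is consumed on the next "take".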
On 2020-01-08 0:51, pozz wrote:
> On 07/01/2020 15:51, Don Y wrote:
>> On 1/7/2020 2:11 AM, pozz wrote:
>>> On 07/01/2020 03:37, Don Y wrote:
>>>> On 1/6/2020 6:08 PM, pozz wrote:
>>>>
>>>> [ 8< ]
>>>>
>>>>> My impression is that a very simple code is cluttered with
>>>>> synchronization things that decrease readability and
>>>>> maintainability and increase complexity. Why? Just to use preemption?
>>>>
>>>> The "clutter" is introduced because your "problem" inherently involves
>>>> conflict; you're allowing two competing uses for a single resource.
>>>
>>> However the shared-resource complexity is present only when
>>> preemption is used.
>>
>> Because it doesn't work right in the nonpreempt case!  :>
>
> Why do you say this? This application can work flawlessly even with
> cooperative multitasking.
Only if you have a pre-empting serial-line interrupt handler and a "serial_rx" function that interacts properly with that interrupt handler, although the two share data (the input queue) and the former can pre-empt the latter when interrupts are enabled. The design of that interrupt handler and the "serial_rx" function will exhibit some of the "clutter" you are complaining about.

[snip]
> It's a matter of coding style. I don't have any experience in
> multi-tasking systems, so I am worried about them. There's a learning
> curve for coding tasks in a preemptive environment that appears to me
> a waste of time if I'm able to reach the same goal with a completely
> different approach that is much more friendly to me.
If the system didn't supply that interrupt handler and the associated "serial_rx" function, the "friendly" approach would not reach the same goal as the pre-emptive approach.
> Anyway I'd like to learn a little of the other approach. This is the > reason of my posts.
In addition to the CSP approach -- which is a bit theoretical, as few programming languages support it directly -- you could look at how Ada does multi-tasking, by looking at the slides for the book "Real-Time Systems and Programming Languages (Fourth Edition) Ada 2005, Real-Time Java and C/Real-Time POSIX", at https://www.cs.york.ac.uk/rts/books/RTSBookFourthEdition.html (the book itself is not free, unfortunately).
>> And, if they *do* need to be aware of each other, make their interactions
>> very visible and restricted to a small set of operations.
>
> In my very simple application (display showing a message) there is a
> shared resource that can't be avoided (at least by me). Imagine if many
> variables could be set through the serial line: a semaphore every time
> both tasks need to access those variables!
What is your concern with that? You only need one semaphore to provide mutual exclusion between two tasks, not a separate semaphore for each shared variable. Are you worried about the processor time for the semaphore operations? Or the code clutter?

If you have many variables, shared in that way, you probably have some way of identifying a particular variable by a run-time value, such as an enumeration or a string name. Then you can write a single function that accesses any one variable when given the identifier of that variable as a parameter, and you can encapsulate the take/give of the semaphore within that function.

In such cases, you should also consider carefully /when/ a task should accept a change in a variable. It is often the case that failures or bad behaviour can result if a task uses a variable, X say, in two places, but the value of X changes unexpectedly between the first use and the second use, because there is a "yield" or pre-emption between the uses. Then it is better for the task to take a local copy of X, at a suitable point in its execution, and use that local copy subsequently, until it is time to refresh the local copy. Using the local copy of course does not need mutex protection.
> So the worst-case superloop duration is the sum of worst-case > durations of each task, plus worst-case duration of interrupts.
> If tasks are coded as non-blocking (state-machine), this worst-case
> duration could be very small and real-time requirements can be
> respected.

You might try coding an FFT or Quicksort or other complex algorithm as a state machine, with a variable overall length of the input and output arrays, and then compare the "clutter" of those state machines with the clutter of pre-emptive coding.
> Again, in my approach every task is *non-blocking*,
(Just a note that this use of the term "blocking" does not conform with its normal use in task scheduling, where a task "blocks" when it suspends itself to wait for some event that has not yet happened, or when it cannot execute because a higher-priority task is executing. Such "blocked" tasks are not running and are not using processor time. A task that just runs and computes for say 60 seconds is not "blocking" in the normal sense of the word.)
> so they take > 100us-1ms maximum at each loop. If I have 10 tasks, superloop duration > could be estimated in 10ms maximum. If the most critical real-time > requirement is 100ms or more, cooperative multitasking is ok.
Yes, everything depends on the execution times and the required response times. If cooperative works, without excessively cluttered state machines, and you are not worried about significant long-term evolution of the SW, it may be a defensible approach.

-- 
Niklas Holsti
Tidorum Ltd

niklas holsti tidorum fi
      .      @       .
On 1/7/2020 3:51 PM, pozz wrote:
>>> I know all approaches have pros and cons. What I was meaning is that
>>> preemption is used too often, even when it isn't really required.
>>
>> Much of this has to do with coding styles. E.g., I can't recall the last time
>> I wrote a single-threaded application. My mind just doesn't see things like
>> that, anymore. I *always* see parallelism in problems.
>>
>> You develop a "taste" for a particular type of coding. E.g., I have a buddy
>> who doesn't think twice about spawning new tasks -- only to kill them off
>> a short time later, having decided that they've served their purpose. He
>> might have HUNDREDS in a little tiny system, at any given time! OTOH, I
>> tend to spawn tasks that "run forever" and "do more".
>>
>> I have a colleague who approaches all projects top-down. I find it unnerving
>> to watch him "think" during his design process. By contrast, I *assess* the
>> problem from the top, down -- and then *implement* it from the bottom up with
>> a clear view of where I'm headed!
>>
>> Similarly, I now deal almost exclusively with "featureful" RTOSs -- memory
>> protection, multiple cores, multiple processors, network interfaces, high
>> resolution timing, etc. I'm tired of counting bytes and packing 8 booleans
>> into a byte. Processors are cheap -- my time isn't!
>
> Yes, you're right. It's a matter of coding style.
Yes, but "style" can go a long way towards making or breaking a particular design. Which do you prefer:

   while (foo) {
      ...
   }

   do {
      ...
   } while (foo)

   while (FOREVER) {
      ...
      if (foo)
         break;
   }

   while (FOREVER) {
      if (foo)
         break;
      ...
   }

etc. There are subtle differences to each. While you can write code to compensate for those differences, the resulting code can look clumsy and be more error prone -- depending on the "more natural" fit of a particular idiom.

Would you opt for an iterative solution over a recursive one? They're conceptually equivalent (/cf/ "duality"). But, in some cases, one may be considerably cleaner than the other. Or, use fewer resources.

Your goal should always be to come up with an approach (which includes style, not just algorithm) that allows you to create CORRECT solutions that are unambiguous (when read) and easy to maintain/modify -- perhaps by yourself (at a later date when you've forgotten most of the finer details/gotchas).
> I don't have any experience in multi-tasking systems, so I am worried
> about them. There's a learning curve for coding tasks in a preemptive
> environment that appears to me a waste of time if I'm able to reach the
> same goal with a completely different approach that is much more
> friendly to me.
What will you do when tasked with maintaining someone else's design -- that HAPPENS to have been implemented with multitasking? Do you expect to learn easier/faster/better while facing a production deadline?
> Anyway I'd like to learn a little of the other approach. This is the
> reason for my posts.
>
>> The biggest headache in preemptive designs is worrying about which operations
>> MUST be atomic -- and being sure to protect that aspect of their design.
>> But, this is related to sharing. If you don't share stuff, then you don't have
>> to worry about this problem!
>>
>> And, /ad-hoc/ sharing TENDS to be "A Bad Thing". You want to strive to isolate
>> "things" as much as possible. Information hiding. etc. If there's no
>> *compelling* reason for A to know about B, then why expose one to the other?
>>
>> And, if they *do* need to be aware of each other, make their interactions
>> very visible and restricted to a small set of operations.
>
> In my very simple application (display showing a message) there is a
> shared resource that can't be avoided (at least by me). Imagine if many
> variables could be set through the serial line: a semaphore every time
> both tasks need to access those variables!
But, you might choose to share differently. E.g., instead of copying one message into another "buffer", just display one buffer -- or the other. So, you can be filling one buffer while displaying the other. Now, you have more "slack" to play with because you don't have to do that copy (just pass a pointer) AND can use the other BIG buffer to accumulate the new message WHILE you're using the old buffer to display the previous message.

In my current design, any information that a task wants to access has to be *requested* from the "owner" of that information. This adds overhead -- but only when information is requested! If this proves to be high, then you start asking yourself if the design has been factored correctly: why is A always wanting B's data?? Perhaps A should be *part* of B? Or, maybe B is the wrong entity to be maintaining that data and it fits better in A's domain.

I have an application, presently, where 60+ processes are trying to asynchronously update a single display. If I implement a single lock on the "display device", then 59+ processes will typically be blocked waiting on that lock.

If, instead, I design the interface to the display so that an "unlimited" number of processes can access it concurrently... AND, ensure that no two processes ever want to access the same PART of the display at the same time... ...then there's no need for the lock. No one waits.

A different way of looking at the problem produces a much better solution.
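Don's "fill one buffer while displaying the other" idea can be sketched with a pointer swap; the buffer names are invented for illustration. Note the hedges in the comments: under real pre-emption the swap must be atomic (a single aligned pointer store on most MCUs), and the writer must not start refilling a buffer that the display task is still reading.

```c
/* Sketch of double-buffered message publishing: the serial task decodes
 * into fill_buf, then swaps roles with display_buf instead of copying. */
#include <string.h>

static char buf_a[32], buf_b[32];
static char *display_buf = buf_a;   /* buffer the display task reads   */
static char *fill_buf    = buf_b;   /* buffer the serial task fills    */

/* Serial task: decode into fill_buf, then publish it with one swap.
 * Assumption: the pointer store is atomic, and the display task is
 * done with the old buffer before the *next* frame completes. */
void publish_message(const char *decoded)
{
    strncpy(fill_buf, decoded, 31);
    fill_buf[31] = '\0';
    char *tmp = display_buf;        /* swap buffer roles */
    display_buf = fill_buf;
    fill_buf = tmp;
}
```

The display task simply reads whatever `display_buf` points at; no string copy happens on the display side, which is the "slack" Don refers to.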
>> So, if you're already working to minimize sharing, then you're already working
>> to facilitate the preemptive approach.
>>
>> Finally, it's easier to relate to and tune a preemptive system because the
>> "interrupts" (preemptions) are visible -- much moreso than all those
>> "yield()s" scattered through your codebase!
>
> Again I don't use explicit yield()s. So the worst-case superloop duration
> is the sum of worst-case durations of each task, plus worst-case duration
> of interrupts.
> If tasks are coded as non-blocking (state-machine), this worst-case
> duration could be very small and real-time requirements can be respected.
But, as I pointed out in my other post, it's hard to KNOW what this time would be! You've got to look through MOST of your code (including the stuff that you didn't post) in order to gauge how long it is LIKELY to be. And, if your code calls on library or OS functions, then you have to know what their performance is like.
>>> With FreeRTOS preemption scheduler is often enabled. It seems to me many
>>> times preemption is used only to show how nice I am.
>>
>> Designing BIG systems with cooperative approach can become a headache. How
>> do you ever know what sort of latency a particular task may encounter at
>> a particular time? You have to be aware of what all the other tasks are
>> doing (and how willingly they are CURRENTLY relinquishing the processor)
>> in order to guesstimate the time between activations of YOUR task.
>
> Again, in my approach every task is *non-blocking*, so they take 100us-1ms
> maximum at each loop.
How do you KNOW this? How does the next developer tasked with "adding a time-of-day display" to each message know this? What happens when your hardware changes -- different CPU, XTAL, etc.? What happens when you use a different compiler?
> If I have 10 tasks, superloop duration could be estimated in 10ms
> maximum. If the most critical real-time requirement is 100ms or more,
> cooperative multitasking is ok.
>
> Of course, we need to take into account interrupts, that are much
> shorter than tasks, so they can be normally ignored. Anyway they must
> be considered in preemptive scheduler too.
When I write ("wrote" as I no longer work "in the miniscule") cooperative systems, my code was LITTERED with yield()s. I'd do an incredibly tiny bit of work and then release the processor. This increased run-time overhead but decreased latency for ALL tasks that needed "quick responses".

In the case of servicing a UART (especially capturing receive data), I could then restructure my code along the lines of:

main() {
   ...
   while (FOREVER) {
      service_UART()
      service_rest_of_machine()
   }
}

service_UART() {
   if (data_available) {     // read status register in UART
                             // or poll flag from IRQ
      data = readUART()      // clears status or flag as side-effect
      *fifo++ = data
      if (fifo > fifo_end)
         fifo = fifo_start
   }
}

service_rest_of_machine() {
   taskA()
   taskB()
   taskC()
}

taskA() {
   do_something_short()
   yield()
   do_another_small_thing()
   yield()
   do_yet_another()
   ...
}

taskB() {
   for (digit = 0; digit < DIGITS; digit++) {
      display_digit(value[digit])
      yield()
   }
   ...
}

[Note that transmitting data on a UART can often be done by polling with no real impact on system's functionality (though it CAN impact performance). Unless the receiving end wants/needs data arriving at a high *character* rate, you can let the UART handle the strict timing requirements of the bit rate and let the application spoon feed characters to it as it finds the time to do so!]

There are myriad combinations of such code structures that you can employ to trade off between latency and work progress. E.g., if you want to make taskB run a bit quicker (i.e., get finished faster), you could make a simple change to the *top* level code -- without having to edit taskB's code:

service_rest_of_machine() {
   taskA()
   taskB(); taskB(); taskB()
   taskC()
}

This works for me because I know that I yield WAY too frequently so I never have to worry that there may be a point in time when the period between yields might end up being "long".

[Of course, this is predicated on yield() being incredibly fast!]
For example, I'll refresh a multiplexed LED 7-segment display using something like:

display() {
   while (FOREVER) {
      for (digit = 0; digit < MAX_DIGITS; digit++) {
         // ensure no drive while switching between LED digits
         // otherwise, visible artifacts
         cathodes = OFF;

         // segments to drive for the digit selected in the value displayed
         anodes = seven_segment[value[digit]];

         cathode = digit;     // select LED digit

         // let decoder settle
         cathode |= ON;       // turn on drive for that digit

         // time expressed in tenths of milliseconds
         load_timer(LED_TIMER, (10000 / MAX_DIGITS) / 60);
         do {
            yield();
         } while (read_timer(LED_TIMER) > 0);
      }
   }
}

And, I know that load_timer() just stores a value into a specific "timer" (i.e., timer[timer_identifier] = time_value) -- just a trivial macro that helps clarify what you're doing. Likewise, read_timer() is just an accessor for that "timer[]" -- another trivial macro!

[Note that the expression in the load_timer() is a compile-time operation and incurs no run-time cost]

Note that value[] is almost certainly a shared datum -- SOMEONE has to decide what value you want to display! But, if your refresh rate is high enough, the visual artifacts that manifest from updating part of the display with an "old" value[] and another part with a *new* value[] are usually not significant (unless you're updating frequently). So, there's no practical need for a synchronization primitive to ensure value[] gets updated in one atomic operation (and *that* synchronized with the display refresh).

If you look at the code, there's scant LESS that it can do.
However, I could get sleazy and rewrite it as:

display() {
   while (FOREVER) {
      for (digit = 0; digit < MAX_DIGITS; digit++) {
         yield();

         // ensure no drive while switching between LED digits
         // otherwise, visible artifacts
         cathodes = OFF;
         yield();

         // segments to drive for the digit selected in the value displayed
         anodes = seven_segment[value[digit]];
         yield();

         cathode = digit;     // select LED digit
         yield();

         // let decoder settle
         cathode |= ON;       // turn on drive for that digit
         yield();

         // time expressed in tenths of milliseconds
         load_timer(LED_TIMER, (10000 / MAX_DIGITS) / 60);
         yield();
         do {
            yield();
         } while (read_timer(LED_TIMER) > 0);
      }
   }
}

And, if this starts to appear sluggish (display flicker), I can use the trick outlined above to increase the processor time allotted to this task on each loop iteration.
