From cooperative to preemptive scheduler: a real example

Started by pozz January 6, 2020
I noticed my previous post about a preemptive OS drew in many people and 
started many discussions, most of them theoretical.

Someone wrote that synchronizing tasks under a preemptive scheduler is 
not so difficult once you understand a few things. Others suggested 
abandoning the preemptive scheduler altogether, considering its pitfalls.

Because I know my limits, I don't think I can produce a well-written 
preemptive system. However, I'd like to understand a little more about 
such systems, starting from an example.

Suppose my system is a display that shows a message. The message 
can be changed over a serial line. With a cooperative approach, I would 
write something like:

--- main.c ---
...
while(1) {
   task_display();
   task_serial();
}
--- end of main.c ---

--- display.c ---
static char msg[32];
void display_set_message(const char *new_msg) {
   strncpy(msg, new_msg, sizeof(msg) - 1);
   msg[sizeof(msg) - 1] = '\0';  /* strncpy alone does not guarantee termination */
}
void task_display(void) {
   if (refresh_is_needed()) {
     display_printat(0, 0, msg);
   }
}
--- end of display.c ---

--- serial.c ---
static unsigned char rxbuf[64];
static size_t rxlen;
void task_serial(void)
{
   int b = serial_rx();  /* int, not unsigned char, so the EOF check works */
   if (b != EOF) {
     rxbuf[rxlen++] = (unsigned char)b;
     if (frame_is_complete(rxbuf, rxlen)) {
       char new_msg[32];
       /* decode new message from received frame from serial line */
       display_set_message(new_msg);
       rxlen = 0;
     }
   }
}
--- end of serial.c ---

The display needs to be refreshed. display_printat() is blocking: when 
it returns, the whole display has been refreshed. So the display always 
shows the entire message: there's no risk of showing part of the previous 
message together with part of the new one.

How would these two tasks be converted to a preemptive scheduler? Which 
priorities should be assigned to them?

The simplest approach is...

--- display.c ---
static char msg[32];
void display_set_message(const char *new_msg) {
   strncpy(msg, new_msg, sizeof(msg) - 1);
   msg[sizeof(msg) - 1] = '\0';  /* strncpy alone does not guarantee termination */
}
void task_display(void) {
   while(1) {
     if (refresh_is_needed()) {
       display_printat(0, 0, msg);
     }
   }
}
--- end of display.c ---

--- serial.c ---
static unsigned char rxbuf[64];
static size_t rxlen;
void task_serial(void)
{
   while(1) {
     int b = serial_rx();  /* int, not unsigned char, so the EOF check works */
     if (b != EOF) {
       rxbuf[rxlen++] = (unsigned char)b;
       if (frame_is_complete(rxbuf, rxlen)) {
         char new_msg[32];
         /* decode new message from received frame from serial line */
         display_set_message(new_msg);
         rxlen = 0;
       }
     }
   }
}
--- end of serial.c ---

This code works most of the time, but the display can sometimes show a 
mix of the old and new messages. This happens if the display task is 
preempted during a refresh by the serial task, which then calls 
display_set_message(). Or when display_set_message() is preempted by the 
display task and a refresh occurs.

If I assigned a higher priority to the display task, the problem would 
remain. display_printat() could no longer be preempted, but 
display_set_message() still could be.

Here the solution is to take a binary semaphore before using the shared 
resource (and give the semaphore after the job is done).

void display_set_message(const char *new_msg) {
   semaphore_take_forever();
   strncpy(msg, new_msg, sizeof(msg));
   semaphore_give();
}

...
       if (frame_is_complete(rxbuf, rxlen)) {
         char new_msg[32];
         /* decode new message from received frame from serial line */
         semaphore_take_forever();
         display_set_message(new_msg);
         semaphore_give();
         rxlen = 0;
       }
...
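The snippets above leave semaphore_take_forever() and semaphore_give()
undefined. As a minimal sketch of what could back them -- assuming
FreeRTOS; only the two wrapper names come from the snippets, the rest is
illustrative -- a mutex works like this:

    #include "FreeRTOS.h"
    #include "semphr.h"

    static SemaphoreHandle_t msg_mutex;   /* protects msg[] in display.c */

    void msg_mutex_init(void)             /* call once, before starting the scheduler */
    {
       msg_mutex = xSemaphoreCreateMutex();
    }

    void semaphore_take_forever(void)
    {
       xSemaphoreTake(msg_mutex, portMAX_DELAY);  /* block until the mutex is free */
    }

    void semaphore_give(void)
    {
       xSemaphoreGive(msg_mutex);
    }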


My impression is that very simple code gets cluttered with 
synchronization details that decrease readability and maintainability and 
increase complexity. Why? Just to use preemption?

Again my impression is that preemption is NOT GOOD and must be avoided 
if it isn't required.

So the question is: when is a preemptive scheduler needed? Could you 
give a real example?

From what I have understood, preemption could help meet real-time requirements.

Suppose display_printat() takes too much time to finish. This increases 
the worst-case superloop duration and could delay some system reaction.
For example, if display_printat() takes 1 second to finish, the system 
could react to an event (the press of a button, for example) only after 
1 second.

If this isn't acceptable, preemption could help. Is that correct?

On 1/6/2020 6:08 PM, pozz wrote:

[ 8< ]

> My impression is that a very simple code is cluttered with synchronization
> things that decrease readability and maintainability and increase complexity.
> Why? Just to use preemption?
The "clutter" is introduced because your "problem" inherently involves conflict; you're allowing two competing uses for a single resource. The use of the synchronization primitive OVERTLY acknowledges this issue/possibility -- lest (subsequent) another developer fail to recognize that the possibility exists (i.e., "latent bug").
> Again my impression is that preemption is NOT GOOD and must be avoided if it
> isn't required.
"Multiplication is NOT GOOD and must be avoided if it isn't required (i.e., if you can use repeated ADDITIONs, instead)"
> So the question is: when a preemption scheduler is needed? Could you give a
> real example?
The "scheduler" is present in any multitasking system -- cooperative or preemptive. SOMETHING has to decide who to give the processor to when the currently executing task gives up control (or, has it removed from it) In your cooperative examples, the "while()" is used to implement the scheduler: when one task() "returns", the one listed on the next line (of the while loop) is given control... "scheduled" to run. Don't conflate "big loop" with "cooperative".
> From what I have understood, preemption could solve real-time requirement.
Preemption, like any capability, brings with it assets and liabilities.

Imagine you were tasked with building a box that blinked lights (XMAS
lights!) at different/varying rates.  The box has a dozen solid state
switches that control the individual lights (or, "light strands").

It would be really easy -- and intuitive -- to write:

    void lights1() {
        while(FOREVER) {
            light(1,ON);  sleep(500ms);
            light(1,OFF); sleep(279ms);
        }
    }

    void lights2() {
        while(FOREVER) {
            light(2,ON);  sleep(100ms);
            light(2,OFF); sleep(50ms);
        }
    }

    void lights3() {
        while(FOREVER) {
            ontime = (10.0 * rand()) / RAND_MAX;
            light(3,ON);  sleep(ontime);
            light(3,OFF); sleep(10.0 - ontime);
        }
    }

etc.  No silly "yields" to get in the way.  No need for synchronization
primitives, either, because nothing is SHARED!

[Contrived example but you'll find that there are many cases of tasks
co-executing that are NOT sharing anything (other than the processor)]

There are other classes of problems where the problem lends itself,
naturally, to "peaceful" sharing -- where you're not in conflict with
another.  And, other techniques to hide the sharing mitigation in other
mechanisms.

Preemption lets you code AS IF you were the sole owner of the processor...
EXCEPT when you need to share something (which would imply that you are
NOT the sole owner -- at least at THAT time!  :> )

The downside to cooperative multitasking is that *it* clutters your
code -- with all those yield()s -- and requires you to keep track of
how "long" you've hogged the CPU in the time since your last yield
(because that time gets reflected to all subsequent task runnings).

When I write code in a cooperative environment, I *litter* the code
with yield()s to keep *reaction* times (of other tasks) short.  This
then means yield() has to run like greased lightning lest it impact
overall performance (because it is pure overhead!)
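Under a preemptive RTOS -- assuming FreeRTOS here, the post names no
particular OS; light(), ON and OFF stand in for the actual switch driver --
two of those tasks could be spelled out roughly like this:

    #include "FreeRTOS.h"
    #include "task.h"

    extern void light(int channel, int on);   /* placeholder switch driver */
    #define ON  1
    #define OFF 0

    static void lights1(void *arg)
    {
       (void)arg;
       for (;;) {
          light(1, ON);  vTaskDelay(pdMS_TO_TICKS(500));
          light(1, OFF); vTaskDelay(pdMS_TO_TICKS(279));
       }
    }

    static void lights2(void *arg)
    {
       (void)arg;
       for (;;) {
          light(2, ON);  vTaskDelay(pdMS_TO_TICKS(100));
          light(2, OFF); vTaskDelay(pdMS_TO_TICKS(50));
       }
    }

    int main(void)
    {
       /* equal priorities: the tasks share nothing but the CPU */
       xTaskCreate(lights1, "lights1", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
       xTaskCreate(lights2, "lights2", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
       vTaskStartScheduler();
       for (;;) { }   /* never reached */
    }
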
On 2020-01-07 3:08, pozz wrote:
> I noticed my previous post about preemptive OS involved many people and
> started many discussions, most of them theoric.
>
> Someone wrote the synchronization of tasks in preemptive scheduler is
> not so difficult, after understanding some things.
I made some such statement.
> Others suggested to abandon at all preemptive scheduler, considering its
> pitfalls.
>
> Because I know my limits, I don't think I can produce a well-written
> preemption system. However I'd like to understand a little more about
> them. Starting from an example.
>
> Suppose my system is a display where a message is written. The message
> can be customized by a serial line.
So, this system consists of a display and a serial input line and has
requirements as follows:

1. The display shall at all times show a message, of at most 31 characters.

   - To be defined: what the initial message should be at system reset.

2. The SW shall receive characters from the serial line, buffering them in
   a "frame buffer" in memory, which can hold up to 64 characters.

3. After each received (and buffered) serial-line character, the SW shall
   check if the buffered characters form a complete "frame".

   - To be defined: what to do if the frame buffer is full but does not
     form a complete frame. (This may of course be impossible by design of
     the "frame_is_complete" function.)

4. When the buffered characters form a complete frame, the SW shall convert
   (decode) the contents of the frame into a message, of at most 31
   characters, display that message until another, new frame is received,
   and erase the frame buffer in preparation for the next frame.

The real-time aspects are undefined, except that each message is displayed
until the next frame is received.
> In cooperative approach, I would write something:
>
> [ 8< ]
>
> The display needs to be refreshed. display_printat() is blocking: when
> it returns, all the display was refreshed. So the display always shows
> the entire message: there's no risk the display shows a part of the
> previous message and a part of the new message.
>
> How to convert these two tasks in a preemptive scheduler? Which priority
> to assign to them?
Before that conversion one must think about the real-time requirements:
deadlines, response-times. This is difficult for this example, because you
have not stated any requirements.

Let's assume these requirements and properties of the environment:

A. The function "serial_rx" polls the one-character reception buffer of the
serial line once, and returns the received character, if any, and EOF
otherwise. It must be called at least as often as characters arrive (that
is, depending on baud rate) to avoid overrun and loss of some characters.

B. A pause in the serial-line character arrival cannot be assumed after the
completion of a frame. The first character of the next frame can arrive as
quickly as the baud rate allows.

C. The functions "frame_is_complete" and "display_set_message" take,
together, so much less time than the serial-line character period that the
whole "task_serial" function also takes less time than the character
period.

D. The function "display_printat" can take longer than the serial-line
character period.

Under these assumptions, the cooperative solution does not work, because
when a frame is completed, "display_printat" is called which may mean too
much delay for the next "serial_rx" call and cause loss of input
characters.
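To put rough numbers on assumptions C and D (the thread never states a baud
rate, so assume, say, 9600 baud with 8N1 framing): each character occupies
10 bit times, about 1.04 ms, so "task_serial" must complete well within a
millisecond per character, while a display refresh taking tens of
milliseconds -- let alone the 1 second mentioned later in the thread -- is
one to three orders of magnitude longer than the character period.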
> The simplest approach is...
>
> [ 8< ]
>
> This code works most of the time, but the display sometime can show a
> mix of old/new messages.
Because the "msg" variable in display.c is accessed from both tasks, as an unprotected shared variable. In addition to that problem, you have written both tasks to use polling, with no delay, which wastes processor resources, especially for "task_display". The "task_serial" task does need to poll "serial_rx" (per the assumptions above), but it could certainly do so at some non-zero period, computed from the baud rate and the execution times to ensure that "serial_rx" calls are frequent enough to avoid loss of input data. Of course, a serious design would use serial-line interrupts and trigger the "task_serial" only when a character has been received. For "task_display", you could replace the "refresh is needed" flag with another semaphore, which is initially zero, is "given" in "task_serial" when a new message is to be displayed, and is "taken" by "task_display" before it displays the new message. Then "task_display" consumes no processing resources until it actually has to.
> Here the solution is to take a binary semaphore before using the shared
> resource (and give the semaphore after the job is done).
>
> void display_set_message(const char *new_msg) {
>    semaphore_take_forever();
>    strncpy(msg, new_msg, sizeof(msg));
Here you need some code to set "refresh is needed" to true. That flag is also a shared variable.
>    semaphore_give();
If you have semaphore calls here, in "display_set_message", ...
> }
>
> ...
>        if (frame_is_complete(rxbuf, rxlen)) {
>          char new_msg[32];
>          /* decode new message from received frame from serial line */
>          semaphore_take_forever();
>          display_set_message(new_msg);
>          semaphore_give();
... then you do not need them (and should not have them) here, around the
call of "display_set_message".
>          rxlen = 0;
>        }
> ...
You also need to use the mutex semaphore from "task_display", for example
as follows:

void task_display(void) {
   while(1) {
     if (refresh_is_needed()) {
       char new_msg[32];
       semaphore_take_forever();
       strncpy(new_msg, msg, sizeof(new_msg));
       // Here something to set "refresh is needed" to false.
       semaphore_give();
       display_printat(0, 0, new_msg);
     }
   }
}

Otherwise the "task_serial" could still overwrite the message with a new
one, during the call of "display_printat".

To assign priorities, you look at the deadlines of the tasks:

- task_serial: deadline = serial-line character period (actually one-half
  of it)
- task_display: no deadline defined: infinite deadline.

Then you assign priorities in order of deadlines: higher priorities for
shorter deadlines, hence "task_serial" will have higher priority than
"task_display". The numerical values of the priorities do not matter, only
their ordering.

With "task_serial" having a higher priority, it can pre-empt the slow
"display_printat" function whenever it needs to, and thus call "serial_rx"
often enough.
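With FreeRTOS-style creation calls (an assumption -- any fixed-priority
RTOS works the same way; note that FreeRTOS task functions take a void *
parameter), that priority assignment could look like:

    #include "FreeRTOS.h"
    #include "task.h"

    void task_serial(void *arg);    /* shorter deadline  -> higher priority */
    void task_display(void *arg);   /* infinite deadline -> lower priority  */

    int main(void)
    {
       xTaskCreate(task_serial,  "serial",  configMINIMAL_STACK_SIZE, NULL, 2, NULL);
       xTaskCreate(task_display, "display", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
       vTaskStartScheduler();
       for (;;) { }   /* never reached */
    }
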
> My impression is that a very simple code is cluttered with
> synchronization things that decrease readability and maintainability and
> increase complexity. Why? Just to use preemption?
No -- to make the SW work, where the cooperative design did not work. Maintenance is eased because the pre-emptive design continues to work even if the execution time of "display_printat" was initially short, but then increased to become longer than the serial-line character period. In larger programs there are important advantages of preemption in helping decouple modules from each other.
> From what I have understood, preemption could solve real-time requirement.
>
> Suppose display_printat() takes too much time to finish. This increases
> the worst-case superloop duration and could delay some system reaction.
> For example, if display_printat() takes 1 second to finish, the system
> could react after 1 second from an event (the press of a button, for
> example).
Or it could lose serial input data (under my assumptions).
> If this isn't acceptable, preemption could help. Is it correct?
Yes.

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi . @ .
On 07/01/2020 03:37, Don Y wrote:
> On 1/6/2020 6:08 PM, pozz wrote:
>
> [ 8< ]
>
>> My impression is that a very simple code is cluttered with
>> synchronization things that decrease readability and maintainability
>> and increase complexity. Why? Just to use preemption?
>
> The "clutter" is introduced because your "problem" inherently involves
> conflict; you're allowing two competing uses for a single resource.
However, the shared-resource complexity is present only when preemption is used.
> The use of the synchronization primitive OVERTLY acknowledges this
> issue/possibility -- lest another (subsequent) developer fail to
> recognize that the possibility exists (i.e., "latent bug").
>
>> Again my impression is that preemption is NOT GOOD and must be avoided
>> if it isn't required.
>
> "Multiplication is NOT GOOD and must be avoided if it isn't required (i.e.,
> if you can use repeated ADDITIONs, instead)"
I know all approaches have pros and cons. What I meant is that preemption is used too often, even when it isn't really required. With FreeRTOS, the preemptive scheduler is often enabled. It seems to me that many times preemption is used just to show off.
>> So the question is: when a preemption scheduler is needed? Could you
>> give a real example?
>
> The "scheduler" is present in any multitasking system -- cooperative or
> preemptive.  SOMETHING has to decide who to give the processor to when
> the currently executing task gives up control (or has it removed from it).
>
> In your cooperative examples, the "while()" is used to implement the
> scheduler: when one task() "returns", the one listed on the next line
> (of the while loop) is given control... "scheduled" to run.
>
> Don't conflate "big loop" with "cooperative".
Yes, my superloop is an example of a *very simple* cooperative scheduler, but a cooperative scheduler can be implemented in different ways (as in FreeRTOS).
>> From what I have understood, preemption could solve real-time
>> requirement.
>
> Preemption, like any capability, brings with it assets and liabilities.
> Imagine you were tasked with building a box that blinked lights (XMAS
> lights!) at different/varying rates.  The box has a dozen solid state
> switches that control the individual lights (or, "light strands").
>
> It would be really easy -- and intuitive -- to write:
>
> [ 8< ]
>
> etc.  No silly "yields" to get in the way.  No need for synchronization
> primitives, either, because nothing is SHARED!
In the superloop cooperative approach:

void lights1() {
   if (state_ON && timer_expired()) {
     light(1, OFF);
     timer_arm(279ms);      /* OFF period */
     state_ON = false;
   } else if (!state_ON && timer_expired()) {
     light(1, ON);
     timer_arm(500ms);      /* ON period */
     state_ON = true;
   }
}

This is a state machine and I admit it's harder to write than in a preemptive scheduler.
> [Contrived example but you'll find that there are many cases of
> tasks co-executing that are NOT sharing anything (other than the
> processor)]
>
> There are other classes of problems where the problem lends itself,
> naturally, to "peaceful" sharing -- where you're not in conflict with
> another.  And, other techniques to hide the sharing mitigation in other
> mechanisms.
>
> Preemption lets you code AS IF you were the sole owner of the processor...
> EXCEPT when you need to share something (which would imply that you are
> NOT the sole owner -- at least at THAT time!  :> )
I suspect many real applications need this synchronization mess (and its risks, if you don't know the pitfalls of multitasking very well). And in those cases I'm not sure whether it's simpler to code in a preemptive/blocking/synchronization style or in a cooperative/non-blocking/state-machine style.
> The downside to cooperative multitasking is that *it* clutters your
> code -- with all those yield()s -- and requires you to keep track of
> how "long" you've hogged the CPU in the time since your last yield
> (because that time gets reflected to all subsequent task runnings).
>
> When I write code in a cooperative environment, I *litter* the code
> with yield()s to keep *reaction* times (of other tasks) short.  This
> then means yield() has to run like greased lightning lest it impact
> overall performance (because it is pure overhead!)
If you use non-blocking state machines, there isn't any downside to cooperative multitasking. There are no real yield()s; they are implicit when the task function returns.
On 1/7/2020 2:11 AM, pozz wrote:
> On 07/01/2020 03:37, Don Y wrote:
>> On 1/6/2020 6:08 PM, pozz wrote:
>>
>> [ 8< ]
>>
>>> My impression is that a very simple code is cluttered with synchronization
>>> things that decrease readability and maintainability and increase
>>> complexity. Why? Just to use preemption?
>>
>> The "clutter" is introduced because your "problem" inherently involves
>> conflict; you're allowing two competing uses for a single resource.
>
> However the shared resource complexity is present only when preemption is used.
Because it doesn't work right in the nonpreempt case! :>
>> The use of the synchronization primitive OVERTLY acknowledges this
>> issue/possibility -- lest another (subsequent) developer fail to
>> recognize that the possibility exists (i.e., "latent bug").
>>
>>> Again my impression is that preemption is NOT GOOD and must be avoided if it
>>> isn't required.
>>
>> "Multiplication is NOT GOOD and must be avoided if it isn't required (i.e.,
>> if you can use repeated ADDITIONs, instead)"
>
> I know all approaches have pros and cons. What I was meaning is that preemption
> is used too often, even when it isn't really required.
Much of this has to do with coding styles.  E.g., I can't recall the last
time I wrote a single-threaded application.  My mind just doesn't see
things like that, anymore.  I *always* see parallelism in problems.

You develop a "taste" for a particular type of coding.  E.g., I have a
buddy who doesn't think twice about spawning new tasks -- only to kill them
off a short time later, having decided that they've served their purpose.
He might have HUNDREDS in a little tiny system, at any given time!  OTOH, I
tend to spawn tasks that "run forever" and "do more".

I have a colleague who approaches all projects top-down.  I find it
unnerving to watch him "think" during his design process.  By contrast, I
*assess* the problem from the top, down -- and then *implement* it from the
bottom up with a clear view of where I'm headed!

Similarly, I now deal almost exclusively with "featureful" RTOSs -- memory
protection, multiple cores, multiple processors, network interfaces, high
resolution timing, etc.  I'm tired of counting bytes and packing 8 booleans
into a byte.  Processors are cheap -- my time isn't!

The biggest headache in preemptive designs is worrying about which
operations MUST be atomic -- and being sure to protect that aspect of their
design.  But, this is related to sharing.  If you don't share stuff, then
you don't have to worry about this problem!

And, /ad-hoc/ sharing TENDS to be "A Bad Thing".  You want to strive to
isolate "things" as much as possible.  Information hiding.  etc.  If
there's no *compelling* reason for A to know about B, then why expose one
to the other?

And, if they *do* need to be aware of each other, make their interactions
very visible and restricted to a small set of operations.

So, if you're already working to minimize sharing, then you're already
working to facilitate the preemptive approach.

Finally, it's easier to relate to and tune a preemptive system because the
"interrupts" (preemptions) are visible -- much moreso than all those
"yield()s" scattered through your codebase!
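A small illustration of the "which operations MUST be atomic" point -- a
sketch assuming FreeRTOS critical-section macros; the counter and its ISR
are invented:

    #include <stdint.h>
    #include "FreeRTOS.h"
    #include "task.h"

    static volatile uint32_t tick_count;   /* incremented from an ISR (not shown) */

    uint32_t read_tick_count(void)
    {
       uint32_t snapshot;

       /* On an 8- or 16-bit CPU a 32-bit load takes several instructions;
          without this critical section the ISR could update tick_count
          between them and the caller would see a torn value. */
       taskENTER_CRITICAL();
       snapshot = tick_count;
       taskEXIT_CRITICAL();

       return snapshot;
    }
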
> With FreeRTOS preemption scheduler is often enabled. It seems to me many times
> preemption is used only to show how nice I am.
Designing BIG systems with the cooperative approach can become a headache.
How do you ever know what sort of latency a particular task may encounter
at a particular time?  You have to be aware of what all the other tasks are
doing (and how willingly they are CURRENTLY relinquishing the processor) in
order to guesstimate the time between activations of YOUR task.

What's BIG?  BIG == COMPLEX.

What's COMPLEX?  COMPLEX is anything that doesn't COMPLETELY fit in your
head.  <grin>  If you can't remember all of the pertinent details to be
able to make a decision/assessment (e.g., the above scenario), then your
system is COMPLEX.
>>> From what I have understood, preemption could solve real-time requirement.
>>
>> [ 8< ]
>
> In the superloop cooperative approach:
>
> [ 8< ]
>
> This is a state machine and I admit it's harder to write than in a
> preemptive scheduler.
Yes. In the preemptive approach I described, the "state" is automatically saved for you -- it manifests as the PRESERVED value of the Program Counter at the time the task was preempted. Note that this need not be done by a time-slicer. Rather, each sleep() effectively relinquishes the processor... and resumes execution when the time period elapses. Note that you can create a cooperative solution that similarly "tracks" the "state" -- by having yield() capture the program counter and restore it on the next activation.
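One well-known way to make a cooperative yield() "capture the program
counter" in plain C is the protothreads-style trick of switching on a saved
line number; a sketch (light() is a placeholder, and the usual caveat
applies: locals do not survive across a yield, so persistent state must
live in the struct):

    /* The "program counter" is just the saved __LINE__ value: PT_YIELD()
       stores it and returns, and the next call resumes at that case label. */
    #define PT_BEGIN(pt)  switch ((pt)->line) { case 0:
    #define PT_YIELD(pt)  do { (pt)->line = __LINE__; return; case __LINE__: ; } while (0)
    #define PT_END(pt)    } (pt)->line = 0

    struct pt { int line; };

    extern void light(int channel, int on);   /* placeholder driver */

    void task_blink(struct pt *pt)
    {
       PT_BEGIN(pt);
       for (;;) {
          light(1, 1);
          PT_YIELD(pt);     /* give the CPU back to the superloop */
          light(1, 0);
          PT_YIELD(pt);
       }
       PT_END(pt);
    }

    /* superloop usage: a zero-initialized struct starts the task at case 0 */
    static struct pt blink_pt;
    void superloop(void)
    {
       for (;;) {
          task_blink(&blink_pt);
          /* ...other tasks... */
       }
    }
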
>> Preemption lets you code AS IF you were the sole owner of the processor...
>> EXCEPT when you need to share something (which would imply that you are
>> NOT the sole owner -- at least at THAT time!  :> )
>
> I suspect many real applications need synchronization mess (and risks if you
> don't know very well what the pitfalls of multitasking).
> And in those cases I'm not sure if it's simpler to code in
> preemption/blocking/synchronization or in cooperative/non-blocking/state-machine.
You can actually mix them in the same design.

I, for example, often implement a cooperative multitasking system *in* an ISR (so the ISR's function can evolve from iteration to iteration). There, the yield() causes another "responsibility" of the ISR to begin execution while still actively IN the original interrupt. A separate mechanism is used to return from the interrupt in the last "interrupt task" executed.
>> The downside to cooperative multitasking is that *it* clutters your
>> code -- with all those yield()s -- and requires you to keep track of
>> how "long" you've hogged the CPU in the time since your last yield
>> (because that time gets reflected to all subsequent task runnings).
>>
>> When I write code in a cooperative environment, I *litter* the code
>> with yield()s to keep *reaction* times (of other tasks) short.  This
>> then means yield() has to run like greased lightning lest it impact
>> overall performance (because it is pure overhead!)
>
> If you use non-blocking state-machines, there aren't any downside to
> cooperative multitasking. There aren't real yield()s, they are hidden
> when the task function exits.
Find a coding style that you're comfortable with and with which you can be
reasonably expected to become proficient.  Then, hone your skills on that
approach -- peeking into other approaches, periodically, to see when/if
they might offer you a better solution.  The "right" approach is the one
that works for you.  *Reliably*.  (If you find yourself writing buggy
code -- or coding efficiency drops -- then look at where YOUR problems lie
and see if any of them are related to the environment in which you've
chosen to code.)

The only real worry is to avoid things that are too "exotic" as they can
make it hard for others to understand/adopt that approach.

E.g., I like to use "finite state executives" to encode user interface
operations and communication protocols.  This lets me condense the rules
for the interface into small tables that (*I* think) tersely encapsulate
the essence of the operator's actions at any point in the interface.

For example, to collect a numeric value from the user, I might have this
FSM fragment:

    STATE  ENTRY
    On  '0' THRU '9'   goto ENTRY    executing  AccumulateDigit()
    On  'BACKSPACE'    goto ENTRY    executing  ElideDigit()
    On  'CLEAR'        goto ENTRY    executing  ClearAccumulator()
    On  'ENTER'        goto VERIFY   executing  CheckValue()

    STATE  VERIFY
    On  'VALID'        goto DONE     executing  AcceptValue()
    On  'INVALID'      goto ERROR    executing  SignalError()

    STATE  ERROR
    On  'TIMEOUT'      goto ENTRY    executing  ClearErrorMessage()
    LinkTo                  ENTRY

Everything above this sentence can compile into as few as *30* bytes,
despite its verbosity!  (LinkTo effectively treats the referenced state's
rules as if they were also present in the table.  Here, it allows the
operator to type any of the "normal" data entry keystrokes enumerated in
the referenced state -- ENTRY -- and have them terminate the error
message -- without having to wait for the TIMEOUT.)

    AccumulateDigit() {
        accumulator = 10*accumulator + (input-'0')
        DisplayAccumulator()
    }

    ElideDigit() {
        accumulator /= 10
        DisplayAccumulator()
    }

    CheckValue() {
        signal( ((accumulator >= MIN) && (accumulator <= MAX)) ?
                'VALID' : 'INVALID' )
    }

    SignalError() {
        beep()
        DisplayString("Value out of bounds!")
        if (Timer)
            kill(Timer)
        spawn(Timer)
    }

    Timer() {    // nominally runs concurrently with ERROR state
        sleep(2 sec)
        signal(TIMEOUT)
    }

In *my* opinion, this is relatively self-explanatory.  However, it requires
additional compile-time and run-time tools to implement.  And, it needs to
be integrated with the particular O/S.

So, it's unattractive to certain clients (who want a cleaner toolchain or
have their own O/S requirements).

[Imagine developing an entire subsystem like this -- and then discovering a
client resists your including it (REUSING it as an ALREADY DEBUGGED
component!) in your solution to their problem!]

Good luck!
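The table-driven idea can be approximated in plain C without any special
tools; a rough sketch mirroring part of the fragment above (the state,
event and action names are invented, and CheckValue()'s signalling is
stubbed out):

    #include <stddef.h>
    #include <stdio.h>

    enum state { ST_ENTRY, ST_VERIFY, ST_DONE };
    enum event { EV_DIGIT, EV_ENTER, EV_VALID };

    static int accumulator;
    static int input;    /* last key received, '0'..'9' */

    static void AccumulateDigit(void) { accumulator = 10 * accumulator + (input - '0'); }
    static void CheckValue(void)      { /* would signal EV_VALID or an error event here */ }
    static void AcceptValue(void)     { printf("accepted %d\n", accumulator); }

    struct transition {
       enum state  from;
       enum event  on;
       enum state  to;
       void      (*action)(void);
    };

    static const struct transition table[] = {
       { ST_ENTRY,  EV_DIGIT, ST_ENTRY,  AccumulateDigit },
       { ST_ENTRY,  EV_ENTER, ST_VERIFY, CheckValue      },
       { ST_VERIFY, EV_VALID, ST_DONE,   AcceptValue     },
    };

    static enum state current = ST_ENTRY;

    void fsm_dispatch(enum event ev)
    {
       size_t i;
       for (i = 0; i < sizeof table / sizeof table[0]; i++) {
          if (table[i].from == current && table[i].on == ev) {
             current = table[i].to;   /* move to the new state, then run the action */
             table[i].action();
             return;
          }
       }
       /* event not valid in the current state: ignore it */
    }

Each row of the table is one rule of the interface, which is where the
compactness Don describes comes from: adding behaviour means adding rows,
not restructuring code.
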
On 07/01/2020 15:51, Don Y wrote:
> On 1/7/2020 2:11 AM, pozz wrote:
>>
>> [ 8< ]
>>
>> However the shared resource complexity is present only when
>> preemption is used.
>
> Because it doesn't work right in the nonpreempt case!  :>
Why do you say this? This application can work flawlessly even with cooperative multitasking.
>> I know all approaches have pros and cons. What I was meaning is that
>> preemption is used too often, even when it isn't really required.
>
> Much of this has to do with coding styles.
>
> [ 8< ]
>
> Similarly, I now deal almost exclusively with "featureful" RTOSs -- memory
> protection, multiple cores, multiple processors, network interfaces, high
> resolution timing, etc.  I'm tired of counting bytes and packing 8 booleans
> into a byte.  Processors are cheap -- my time isn't!
Yes, you're right. It's a matter of coding style. I don't have any experience with multitasking systems, so I am wary of them. There's a learning curve for coding tasks in a preemptive environment, and it looks like a waste of time if I can reach the same goal with a completely different approach that is much more familiar to me. Anyway, I'd like to learn a little of the other approach. That is the reason for my posts.
> The biggest headache in preemptive designs is worrying about which operations
> MUST be atomic -- and being sure to protect that aspect of their design.
> But, this is related to sharing.  If you don't share stuff, then you don't
> have to worry about this problem!
>
> And, /ad-hoc/ sharing TENDS to be "A Bad Thing".  You want to strive to
> isolate "things" as much as possible.  Information hiding.  etc.  If there's
> no *compelling* reason for A to know about B, then why expose one to the
> other?
>
> And, if they *do* need to be aware of each other, make their interactions
> very visible and restricted to a small set of operations.
In my very simple application (a display showing a message) there is a shared resource that can't be avoided (at least by me). Imagine if many variables were set through the serial line: a semaphore every time both tasks need to access those variables!
> So, if you're already working to minimize sharing, then you're already
> working to facilitate the preemptive approach.
>
> Finally, it's easier to relate to and tune a preemptive system because the
> "interrupts" (preemptions) are visible -- much moreso than all those
> "yield()s" scattered through your codebase!
Again, I don't use explicit yield()s. So the worst-case superloop duration is the sum of the worst-case durations of each task, plus the worst-case duration of the interrupts. If the tasks are coded as non-blocking (state machines), this worst-case duration can be very small and real-time requirements can be met.
>> With FreeRTOS preemption scheduler is often enabled. It seems to me
>> many times preemption is used only to show how nice I am.
>
> Designing BIG systems with cooperative approach can become a headache.  How
> do you ever know what sort of latency a particular task may encounter at
> a particular time?  You have to be aware of what all the other tasks are
> doing (and how willingly they are CURRENTLY relinquishing the processor)
> in order to guesstimate the time between activations of YOUR task.
Again, in my approach every task is *non-blocking*, so each takes 100 us - 1 ms maximum per loop. If I have 10 tasks, the superloop duration can be estimated at 10 ms maximum. If the most critical real-time requirement is 100 ms or more, cooperative multitasking is fine. Of course, we need to take interrupts into account, but they are much shorter than tasks, so they can normally be ignored. Anyway, they must be considered with a preemptive scheduler too.
> [ 8< ]
On 07/01/2020 08:38, Niklas Holsti wrote:
> On 2020-01-07 3:08, pozz wrote:
>>
>> [ 8< ]
>
> So, this system consists of a display and a serial input line and has
> requirements as follows:
>
> [ 8< ]
>
> The real-time aspects are undefined, except that each message is
> displayed until the next frame is received.
The only real-time requirement is that a new message sent over the serial line appears on the display within a reasonable time: 100 ms? 1 s? Something like that. The second requirement is that the display must never show a hybrid message composed of parts of two successive messages.
>> How to convert these two tasks in a preemptive scheduler? Which
>> priority to assign to them?
>
> Before that conversion one must think about the real-time requirements:
> deadlines, response-times. This is difficult for this example, because
> you have not stated any requirements.
>
> Let's assume these requirements and properties of the environment:
>
> A. The function "serial_rx" polls the one-character reception buffer of
> the serial line once, and returns the received character, if any, and
> EOF otherwise. It must be called at least as often as characters arrive
> (that is, depending on baud rate) to avoid overrun and loss of some
> characters.
No, the serial driver works in interrupt mode and already uses a FIFO
buffer that is sufficiently big. serial_rx() pops a single element from
the FIFO, if any.
> B. A pause in the serial-line character arrival cannot be assumed after
> the completion of a frame. The first character of the next frame can
> arrive as quickly as the baud rate allows.
Why? I think the user changes the message only when he wants to.
Normally there is no activity on the serial line.
> C. The functions "frame_is_complete" and "display_set_message" take,
> together, so much less time than the serial-line character period that
> the whole "task_serial" function also takes less time than the
> character period.
>
> D. The function "display_printat" can take longer than the serial-line
> character period.
>
> Under these assumptions, the cooperative solution does not work, because
> when a frame is completed, "display_printat" is called which may mean
> too much delay for the next "serial_rx" call and cause loss of input
> characters.
The serial driver's interrupts guarantee no loss of input during
display_printat() or other functions.
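A minimal sketch of the kind of interrupt-driven receive path described
here, assuming a UART receive interrupt feeding a simple ring buffer;
uart_rx_isr(), uart_read_data_register() and RX_FIFO_SIZE are
illustrative names, not taken from the posts:

#include <stddef.h>   /* size_t */
#include <stdio.h>    /* EOF    */

#define RX_FIFO_SIZE 256   /* "sufficiently big" for the expected bursts */

extern unsigned char uart_read_data_register(void);  /* hypothetical HW access */

static volatile unsigned char rx_fifo[RX_FIFO_SIZE];
static volatile size_t rx_head;   /* written only by the ISR     */
static volatile size_t rx_tail;   /* written only by serial_rx() */

/* UART receive interrupt: push one byte, drop it if the FIFO is full. */
void uart_rx_isr(void)
{
    size_t next = (rx_head + 1) % RX_FIFO_SIZE;
    if (next != rx_tail) {
        rx_fifo[rx_head] = uart_read_data_register();
        rx_head = next;
    }
    /* else: overflow; a real driver would count or flag this */
}

/* Task context: pop one byte, or return EOF if the FIFO is empty.
   Returning int (not unsigned char) is what makes the EOF comparison in
   the caller meaningful.  With one producer (the ISR) and one consumer
   (the task) on a single-core MCU, each index written on one side only,
   this usually needs no further locking. */
int serial_rx(void)
{
    if (rx_tail == rx_head)
        return EOF;
    int b = rx_fifo[rx_tail];
    rx_tail = (rx_tail + 1) % RX_FIFO_SIZE;
    return b;
}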
>> The simplest approach is...
>>
>> --- display.c ---
>> static const char msg[32];
>> void display_set_message(const char *new_msg) {
>>    strncpy(msg, new_msg, sizeof(msg));
>> }
>> void task_display(void) {
>>    while(1) {
>>      if (refresh_is_needed()) {
>>        display_printat(0, 0, msg);
>>      }
>>    }
>> }
>> --- end of display.c ---
>>
>> --- serial.c ---
>> static unsigned char rxbuf[32];
>> static size_t rxlen;
>> void task_serial(void)
>> {
>>    while(1) {
>>      unsigned char b = serial_rx();
>>      if (b != EOF) {
>>        rxbuf[rxlen++] = b;
>>        if (frame_is_complete(rxbuf, rxlen)) {
>>          char new_msg[32];
>>          /* decode new message from received frame from serial line */
>>          display_set_message(new_msg);
>>          rxlen = 0;
>>        }
>>      }
>>    }
>> }
>> --- end of serial.c ---
>>
>> This code works most of the time, but the display sometime can show a
>> mix of old/new messages.
>
> Because the "msg" variable in display.c is accessed from both tasks, as
> an unprotected shared variable.
>
> In addition to that problem, you have written both tasks to use polling,
> with no delay, which wastes processor resources, especially for
> "task_display". The "task_serial" task does need to poll "serial_rx"
> (per the assumptions above), but it could certainly do so at some
> non-zero period, computed from the baud rate and the execution times to
> ensure that "serial_rx" calls are frequent enough to avoid loss of input
> data.
>
> Of course, a serious design would use serial-line interrupts and trigger
> the "task_serial" only when a character has been received.
>
> For "task_display", you could replace the "refresh is needed" flag with
> another semaphore, which is initially zero, is "given" in "task_serial"
> when a new message is to be displayed, and is "taken" by "task_display"
> before it displays the new message. Then "task_display" consumes no
> processing resources until it actually has to.
I was thinking of a refresh made at regular intervals, such as every
100 ms.
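A sketch of what such a periodic refresh could look like under a
preemptive kernel; task_sleep_ms() stands in for whatever blocking delay
call the RTOS actually provides, so the task consumes no CPU while
waiting:

/* Display task refreshed on a fixed 100 ms period.
   task_sleep_ms() is a placeholder for the RTOS delay primitive. */
void task_display(void)
{
    while (1) {
        task_sleep_ms(100);               /* block; no busy polling      */
        if (refresh_is_needed()) {
            display_printat(0, 0, msg);   /* msg still needs the mutex,  */
        }                                 /* as discussed just below     */
    }
}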
>> Here the solution is to take a binary semaphore before using the
>> shared resource (and give the semaphore after the job is done).
>>
>> void display_set_message(const char *new_msg) {
>>    semaphore_take_forever();
>>    strncpy(msg, new_msg, sizeof(msg));
>
> Here you need some code to set "refresh is needed" to true. That flag is
> also a shared variable.
>
>>    semaphore_give();
>
> If you have semaphore calls here, in "display_set_message", ...
>
>> }
>>
>> ...
>>        if (frame_is_complete(rxbuf)) {
>>          char new_msg[32];
>>          /* decode new message from received frame from serial line */
>>          semaphore_take_forever();
>>          display_set_message(new_msg);
>>          semaphore_give();
>
> ... then you do not need them (and should not have them) here, around
> the call of "display_set_message".
>
>>          rxlen = 0;
>>        }
>> ...
>
> You also need to use the mutex semaphore from "task_display", for
> example as follows:
>
>    void task_display(void) {
>      while(1) {
>        if (refresh_is_needed()) {
>          char new_msg[32];
>          semaphore_take_forever();
>          strncpy (new_msg, msg, sizeof (new_msg));
>          // Here something to set "refresh is needed" to false.
>          semaphore_give();
>          display_printat(0, 0, new_msg);
>        }
>      }
>    }
>
> Otherwise the "task_serial" could still overwrite the message with a new
> one, during the call of "display_printat".
>
> To assign priorities, you look at the deadlines of the tasks:
>
> - task_serial: deadline = serial-line character period (actually
> one-half of it)
>
> - task_display: no deadline defined: infinite deadline.
>
> Then you assign priorities in order of deadlines: higher priorities for
> shorter deadlines, hence "task_serial" will have higher priority than
> "task_display". The numerical values of the priorities do not matter,
> only their ordering.
>
> With "task_serial" having a higher priority, it can pre-empt the slow
> "display_printat" function whenever it needs to, and thus call
> "serial_rx" often enough.
>
>> My impression is that a very simple code is cluttered with
>> synchronization things that decrease readability and maintainability
>> and increase complexity. Why? Just to use preemption?
>
> No -- to make the SW work, where the cooperative design did not work.
>
> Maintenance is eased because the pre-emptive design continues to work
> even if the execution time of "display_printat" was initially short, but
> then increased to become longer than the serial-line character period.
>
> In larger programs there are important advantages of preemption in
> helping decouple modules from each other.
>
>> From what I have understood, preemption could solve real-time
>> requirement.
>>
>> Suppose display_printat() takes too much time to finish. This
>> increases the worst-case superloop duration and could delay some
>> system reaction.
>> For example, if display_printat() takes 1 second to finish, the system
>> could react after 1 second from an event (the press of a button, for
>> example).
>
> Or it could lose serial input data (under my assumptions).
>
>> If this isn't acceptable, preemption could help. Is it correct?
>
> Yes.
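Pulling those corrections together, the display side of a preemptive
version might end up roughly as sketched below. semaphore_take_forever()
and semaphore_give() are the thread's own placeholder names;
set_refresh_needed() and clear_refresh_needed() are assumed helpers for
the shared flag (assumed safe to call from both tasks); note that msg
can no longer be declared const, since it is written.

--- display.c ---
static char msg[32];                     /* shared; guarded by the mutex */

void display_set_message(const char *new_msg)
{
    semaphore_take_forever();
    strncpy(msg, new_msg, sizeof(msg));
    msg[sizeof(msg) - 1] = '\0';         /* strncpy may not terminate    */
    set_refresh_needed();                /* assumed helper: mark dirty   */
    semaphore_give();
}

void task_display(void)                  /* lower priority: no deadline  */
{
    while (1) {
        if (refresh_is_needed()) {
            char local_msg[32];
            semaphore_take_forever();
            strncpy(local_msg, msg, sizeof(local_msg));
            clear_refresh_needed();      /* assumed helper               */
            semaphore_give();
            display_printat(0, 0, local_msg);  /* slow; outside the mutex */
        }
    }
}
--- end of display.c ---

task_serial stays as in the earlier preemptive version, except that it
no longer wraps the display_set_message() call in its own take/give, and
it is created at a higher priority so it can preempt display_printat().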
On 2020-01-08 1:02, pozz wrote:
> On 07/01/2020 08:38, Niklas Holsti wrote:
>> On 2020-01-07 3:08, pozz wrote:
>>> I noticed my previous post about preemptive OS involved many people
>>> and started many discussions, most of them theoric.
>>>
>>> Someone wrote the synchronization of tasks in preemptive scheduler is
>>> not so difficult, after understanding some things.
>>
>> I made some such statement.
>>
>>> Others suggested to abandon at all preemptive scheduler, considering
>>> its pitfalls.
>>>
>>> Because I know my limits, I don't think I can produce a well-written
>>> preemption system. However I'd like to understand a little more about
>>> them. Starting from an example.
>>>
>>> Suppose my system is a display where a message is written. The
>>> message can be customized by a serial line.
>>
>> So, this system consists of a display and a serial input line and has
>> requirements as follows:
>>
>> 1. The display shall at all times show a message, of at most 31
>> characters.
>>
>> - To be defined: what the initial message should be at system reset.
>>
>> 2. The SW shall receive characters from the serial line, buffering
>> them in a "frame buffer" in memory, which can hold up to 64 characters.
>>
>> 3. After each received (and buffered) serial-line character, the SW
>> shall check if the buffered characters form a complete "frame".
>>
>> - To be defined: what to do if the frame buffer is full but does not
>> form a complete frame. (This may of course be impossible by design of
>> the "frame_is_complete" function.)
>>
>> 4. When the buffered characters form a complete frame, the SW shall
>> convert (decode) the contents of the frame into a message, of at most
>> 31 characters, display that message until another, new frame is
>> received, and erase the frame-buffer in preparation for the next frame.
>>
>> The real-time aspects are undefined, except that each message is
>> displayed until the next frame is received.
>
> The only real-time requirement is that a new message sent through the
> serial line appears on the display within a reasonable time: 100 ms?
> 1 s? Something like that.
>
> The second requirement is that the display must never show a hybrid
> message composed of parts of two successive messages.
>
>
>>> In cooperative approach, I would write something:
[snip code]
>>> How to convert these two tasks in a preemptive scheduler? Which
>>> priority to assign to them?
>>
>> Before that conversion one must think about the real-time
>> requirements: deadlines, response-times. This is difficult for this
>> example, because you have not stated any requirements.
>>
>> Let's assume these requirements and properties of the environment:
>>
>> A. The function "serial_rx" polls the one-character reception buffer
>> of the serial line once, and returns the received character, if any,
>> and EOF otherwise. It must be called at least as often as characters
>> arrive (that is, depending on baud rate) to avoid overrun and loss of
>> some characters.
You asked about possible advantages of pre-emption; I made my
assumptions, above, such that the (incomplete) example you gave shows
this advantage. Those assumptions could well be true for other,
otherwise similar applications.
> No, the serial driver works in interrupt mode and already uses a FIFO
> buffer that is sufficiently big. serial_rx() pops a single element from
> the FIFO, if any.
Ah, then your *system* is intrinsically pre-emptive (the interrupts pre-empt the tasks), even if the *code you showed* does not show this pre-emption. I won't reply to your other comments on my assumptions, as they are irrelevant to the point of where and when pre-emption can be good for you.
> The serial driver's interrupts guarantee no loss of input during
> display_printat() or other functions.
Right, because it is pre-emptive. So there you see the advantage.
>> For "task_display", you could replace the "refresh is needed" flag
>> with another semaphore, which is initially zero, is "given" in
>> "task_serial" when a new message is to be displayed, and is "taken" by
>> "task_display" before it displays the new message. Then "task_display"
>> consumes no processing resources until it actually has to.
>
> I was thinking of a refresh made at regular intervals, such as every
> 100 ms.
In some systems that could result in annoying flickering of the display,
which could even be dangerous (seizure-inducing) to some users.

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi . @ .
On 2020-01-08 0:51, pozz wrote:
> On 07/01/2020 15:51, Don Y wrote:
>> On 1/7/2020 2:11 AM, pozz wrote:
>>> On 07/01/2020 03:37, Don Y wrote:
>>>> On 1/6/2020 6:08 PM, pozz wrote:
>>>>
>>>> [ 8< ]
>>>>
>>>>> My impression is that a very simple code is cluttered with
>>>>> synchronization things that decrease readability and
>>>>> maintainability and increase complexity. Why? Just to use preemption?
>>>>
>>>> The "clutter" is introduced because your "problem" inherently involves
>>>> conflict; you're allowing two competing uses for a single resource.
>>>
>>> However the shared resource complexity is present only when
>>> preemption is used.
>>
>> Because it doesn't work right in the nonpreempt case! :>
>
> Why do you say this? This application can work flawlessly even with
> cooperative multitasking.
Only if you have a pre-empting serial-line interrupt handler and a "serial_rx" function that interacts properly with that interrupt handler, although the two share data (the input queue) and the latter can pre-empt the former when interrupts are enabled. The design of that interrupt handler and the "serial_rx" function will exhibit some of the "clutter" you are complaining about. [snip]
> It's a matter of coding style. I don't have any experience in
> multi-tasking systems, so I am worried about them. There's a learning
> curve for coding tasks in a preemptive environment, which appears to me
> a waste of time if I can reach the same goal with a completely
> different approach that is much more friendly to me.
If the system didn't supply that interrupt handler and the associated "serial_rx" function, the "friendly" approach would not reach the same goal as the pre-emptive approach.
> Anyway I'd like to learn a little about the other approach. This is the
> reason for my posts.
In addition to the CSP approach -- which is a bit theoretical, as few programming languages support it directly -- you could look at how Ada does multi-tasking, by looking at the slides for the book "Real-Time Systems and Programming Languages (Fourth Edition) Ada 2005, Real-Time Java and C/Real-Time POSIX", at https://www.cs.york.ac.uk/rts/books/RTSBookFourthEdition.html (the book itself is not free, unfortunately).
>> And, if they *do* need to be aware of each other, make their interactions
>> very visible and restricted to a small set of operations.
>
> In my very simple application (display showing a message) there is a
> shared resource that can't be avoided (at least by me). Imagine if many
> variables were set through the serial line: a semaphore every time both
> tasks need to access those variables!
What is your concern with that? You only need one semaphore to provide
mutual exclusion between two tasks, not a separate semaphore for each
shared variable. Are you worried about the processor time for the
semaphore operations? or the code clutter?

If you have many variables, shared in that way, you probably have some
way of identifying a particular variable by a run-time value, such as an
enumeration or a string name, and then you can write a single function
that accesses any one variable when given the identifier of that
variable as a parameter, and you can encapsulate the take/give of the
semaphore within that function.

In such cases, you should also consider carefully /when/ a task should
accept a change in a variable. It is often the case that failures or bad
behaviour can result if a task uses a variable, X say, in two places,
but the value of X changes unexpectedly between the first use and the
second use, because there is a "yield" or pre-emption between the uses.
Then it is better for the task to take a local copy of X, at a suitable
point in its execution, and use that local copy subsequently, until it
is time to refresh the local copy. Using the local copy of course does
not need mutex protection.
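A sketch of that "one accessor, one semaphore" idea, reusing the
thread's semaphore_take_forever()/semaphore_give() placeholders; the
enum, the params[] table and param_get()/param_set() are illustrative
names, not from the posts:

/* One mutex protects the whole table of shared variables; callers never
   touch the semaphore directly. */
typedef enum { PAR_BRIGHTNESS, PAR_CONTRAST, PAR_TIMEOUT, PAR_COUNT } par_id_t;

static int params[PAR_COUNT];            /* the shared variables */

int param_get(par_id_t id)
{
    semaphore_take_forever();
    int value = params[id];
    semaphore_give();
    return value;
}

void param_set(par_id_t id, int value)
{
    semaphore_take_forever();
    params[id] = value;
    semaphore_give();
}

/* Per the "local copy" advice: read once, then use the snapshot, so the
   value cannot change between two uses within the same pass. */
void some_task_step(void)
{
    int timeout = param_get(PAR_TIMEOUT);
    /* ... use 'timeout' consistently here, without re-reading it ... */
    (void)timeout;
}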
> So the worst-case superloop duration is the sum of the worst-case
> durations of each task, plus the worst-case duration of interrupts.
> If tasks are coded as non-blocking (state-machine), this worst-case
> duration could be very small and real-time requirements can be
> respected.

You might try coding an FFT or Quicksort or other complex algorithm as a
state machine, with a variable overall length of the input and output
arrays, and then compare the "clutter" of those state machines with the
clutter of pre-emptive coding.
> Again, in my approach every task is *non-blocking*,
(Just a note that this use of the term "blocking" does not conform with its normal use in task scheduling, where a task "blocks" when it suspends itself to wait for some event that has not yet happened, or when it cannot execute because a higher-priority task is executing. Such "blocked" tasks are not running and are not using processor time. A task that just runs and computes for say 60 seconds is not "blocking" in the normal sense of the word.)
> so they take 100us-1ms maximum at each loop. If I have 10 tasks, the
> superloop duration could be estimated at 10ms maximum. If the most
> critical real-time requirement is 100ms or more, cooperative
> multitasking is ok.
Yes, everything depends on the execution times and the required response
times. If cooperative works, without excessively cluttered state
machines, and you are not worried about significant long-term evolution
of the SW, it may be a defensible approach.

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi . @ .
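For reference, the short, non-blocking state-machine style of task
discussed in the exchange above usually looks something like the sketch
below; the ADC example and every name in it (adc_start_requested(),
adc_start_conversion(), adc_conversion_done(), adc_read_result(),
report_value()) are purely illustrative.

/* A cooperative task written as a state machine: each call does one
   short step (well under 1 ms) and returns, so the superloop keeps
   spinning and the other tasks keep getting their turn. */
typedef enum { ST_IDLE, ST_MEASURE, ST_REPORT } adc_state_t;

void task_adc(void)
{
    static adc_state_t state = ST_IDLE;

    switch (state) {
    case ST_IDLE:
        if (adc_start_requested()) {     /* hypothetical trigger check  */
            adc_start_conversion();      /* hypothetical driver call    */
            state = ST_MEASURE;
        }
        break;
    case ST_MEASURE:
        if (adc_conversion_done()) {     /* poll once; never busy-wait  */
            state = ST_REPORT;
        }
        break;
    case ST_REPORT:
        report_value(adc_read_result());
        state = ST_IDLE;
        break;
    }
}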
[I'll tackle this in multiple posts to avoid conflating the issues.
But, I *really* want to extricate myself from this discussion...]

[ 8< ]

>>>>> My impression is that a very simple code is cluttered with synchronization
>>>>> things that decrease readability and maintainability and increase
>>>>> complexity. Why? Just to use preemption?
>>>>
>>>> The "clutter" is introduced because your "problem" inherently involves
>>>> conflict; you're allowing two competing uses for a single resource.
>>>
>>> However the shared resource complexity is present only when preemption is
>>> used.
>>
>> Because it doesn't work right in the nonpreempt case! :>
>
> Why do you say this? This application can work flawlessly even with
> cooperative multitasking.
I think the following pieces together the various bits of your example,
in its "finished" form (apologies if my cut-n-paste got sloppy):

First, your cooperative implementation:

--- main.c ---
...
while(1) {
   task_display();
   task_serial();
}
--- end of main.c ---

--- display.c ---
static const char msg[32];
void display_set_message(const char *new_msg) {
   strncpy(msg, new_msg, sizeof(msg));
}
void task_display(void) {
   if (refresh_is_needed()) {
     display_printat(0, 0, msg);
   }
}
--- end of display.c ---

--- serial.c ---
static unsigned char rxbuf[64];
static size_t rxlen;
void task_serial(void)
{
   unsigned char b = serial_rx();
   if (b != EOF) {
     rxbuf[rxlen++] = b;
     if (frame_is_complete(rxbuf, rxlen)) {
       char new_msg[32];
       /* decode new message from received frame from serial line */
       display_set_message(new_msg);
       rxlen = 0;
     }
   }
}
--- end of serial.c ---

Note that you leave a lot undefined -- including "requirements"! (so, I
guess that makes it easier to meet a target! :> ) So, I'll try to walk a
line between being overly pedantic and "too forgiving"...

The most glaring problem is that you're using "run to completion" in
each task implementation -- but it's not apparent how long each of those
"efforts" will take, in actual elapsed time!

E.g., how long -- worst case -- between successive invocations of

   unsigned char b = serial_rx();

This must be less than a single character time (at the current line
rate) -- unless the UART is multiply buffered AND serial_rx() is capable
of completely emptying that buffer, when invoked. And, this has to be
satisfied regardless of which "execution paths" each task chooses to
take (i.e., conditionals).

Some (fixed?) time after the serial data has arrived at the device, it
becomes "available". Some VARIABLE time after that, an invocation of
serial_rx() will detect it. Worst case, ONE invocation of serial_rx()
will have *just* finished checking for it and, thus, MISS seeing it
become available some epsilon later.

[I am assuming that EOF is returned when serial_rx() determines "no data
available"]

So, the trailing end of serial_rx() executes (any code after the
determination of the presence of data is made -- including the "return"
mechanism). Then, the test for EOF (in task_serial). And, *its* return --
to main(). The crank is turned on the while() loop and task_display()
invoked.

No idea how "refresh is needed" is determined. Nor how long it takes to
make that determination. As it undoubtedly *is* needed, eventually,
display_printat() can be invoked. Which, does "something" and takes
"some amount of time". Eventually, it returns (to task_display) and
task_display(), itself, returns to main. Which allows task_serial() to
be reinvoked. Which, in turn, invokes serial_rx() and SOME TIME LATER,
data availability is again assessed.

Can another character have arrived and been MISSED in this time?
<shrug> Maybe. Maybe not. You have no way of knowing -- without looking
through all of this code and making an assessment as to how long each
statement in the above sequence will take to execute.

Note that if serial_rx() returns a real character (i.e., not EOF), then
even more code is inserted into this period between successive "UART
checks". And, if the worst-case time is longer than tolerable (at the
fastest data rate), there's no easy way for you to tweak your code to
compensate.

Imagine if display_printat() examines the characters in the string
presented and maps each to a bitmap representation of the character -- a
"font", so to speak.
Then, takes the bitmap for that character and paints it into a *graphic*
display, overlaying the "dots" that were already there. Then, advancing
to the next character in the string and performing the same action after
having advanced the "display cursor" to take into account the dots that
it has painted for the previous character.

This could take a fair bit of time -- especially as the display hardware
might be via some I2C bus, etc. so each dot painted has a high temporal
overhead!

Now, imagine a Marketeer comes along and wants to display the current
time alongside each message. Or, worse -- display and UPDATE that time
WHILE the message remains in place! Or, have the message stay there for
at most 2 seconds and then "autoblank" -- unless another message has
been received in the interim. This just adds to the code (time!) that
has to be executed between serial_rx() invocations.

Or, the sales guy claims that he "can sell a million of them... *IF* you
could just update the maximum 'baudrate' to 115200!" (Ooops! Suddenly
you only have 100us to do all that work without losing data!)
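To put a number on that last remark: with the usual 8N1 framing there
are 10 bits on the wire per character, so at 115200 baud the budget
between serial_rx() checks is about 87 us. A throwaway check of that
arithmetic (plain C, nothing from the posts):

#include <stdio.h>

int main(void)
{
    const double baud = 115200.0;
    const double bits_per_char = 10.0;    /* start + 8 data + stop (8N1) */
    double char_period_us = 1e6 * bits_per_char / baud;
    printf("character period at %.0f baud: %.1f us\n", baud, char_period_us);
    return 0;   /* prints about 86.8 us, the "100us" mentioned above */
}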