Richard Damon wrote:> On 7/3/16 9:57 PM, Les Cargill wrote: >> Richard Damon wrote: >>> On 7/3/16 10:39 AM, Les Cargill wrote: >>>> Richard Damon wrote: >>>>> On 7/3/16 1:07 AM, Les Cargill wrote: >>>>>> Richard Damon wrote: >>>>>>> On 6/29/16 10:53 AM, Les Cargill wrote: >>>>>>>> >>>>>>>> >>>>>>>> The signal difference between a preemptive and cooperative >>>>>>>> multitasker >>>>>>>> is the addition of a ( potential ) context switch in a timer ISR. >>>>>>>> >>>>>>>> Please note that you can have fully functional timers in a >>>>>>>> cooperative >>>>>>>> multitasker. >>>>>>>> >>>>>>> >>>>>>> >>>>>>> The difference between a preemptive and a cooperative system is in a >>>>>>> cooperative system, a context switch to a different task only >>>>>>> occurs a >>>>>>> defined points in each task (typically a blocking call or an >>>>>>> explicit >>>>>>> yield). A preemptive system has the possibility of an interrupt >>>>>>> event >>>>>>> causing a context switch at almost any point in the execution of a >>>>>>> task >>>>>>> (there may be small regions, critical sections, which guard against >>>>>>> such >>>>>>> possibilities). It isn't just timer ISRs that can cause the context >>>>>>> switch. >>>>>> >>>>>> Couldn't agree more - by "signal difference" I meant something >>>>>> akin to >>>>>> "primary difference". Not the only difference. >>>>>> >>>>> >>>>> I would describe the primary difference as 'Preemption', that a task >>>>> can >>>>> be switched at arbitrary points, not defined points. This is typically >>>>> done with task context switch from ISRs. >>>> >>>> Yes. >>>> >>>>> While the timer is a common >>>>> event to cause a switch, it is by far not the most important. It is >>>>> possible to make a (low performance) preemptive system using only >>>>> timer >>>>> context switches, but most systems with significant real time >>>>> requirements need switches on other interrupts. 
>>>> >>>> Even in a cooperative multitasker, an ISR may hit a semaphore or other >>>> object to make a thread ready. Because it's an ISR, it's quite >>>> painful to make this perform an actual context switch - this generally >>>> just marks a task or tasks as ready, the ISR finishes and the kernel >>>> then at some point does a context switch. >>> >>> In a cooperative system, an ISR can not cause a context switch, because >>> then it isn't a cooperative system. The hallmark of a cooperative system >>> is that context switches only occur at defined points. >> >> "Make ready" != "context switch". >> >>>> >>>> A raw ISR has no "furniture" to do any context switching on. >>>> >>> Actually, in my experience, in most preemptive systems the context switch >>> IS normally done in an interrupt context, >> >> ... which is not the same thing as an ISR... An actual context switch is >> a context switch - similar to things available in <ucontext.h> in >> Linux. >> > > In every machine I have used, unless you are writing in very low level > assembly code, the entry into an ISR begins with a 'context switch', > where the context of the currently executing program is saved, and an > interrupt context is started. It has to, as the ISR must not change the > context of the running program. (Context is the environment that > something is executing in). Often part of this context switch is built > into the interrupt hardware, and some is in the preamble of the ISR. So we're down to "what happens when a (hardware) interrupt fires?" and that is very machine-dependent. It's also related to ( possibly ) "what happens when a soft interrupt is invoked?" I would not classify an ISR as a context switch because implementations exist where it is not a context switch; it is a separate thing. It has a lot of footprint in common with a context switch ( and may be very close to the same thing ) but there is enough difference to hold them as separate conceptual things. 
Since I now know that you mean FreeRTOS, then yes - a context switch and an interrupt for those are very close to the same thing.> An OS may define some additional information that is part of its context > for a user program, (your <ucontext.h>). > >>> and a task initiated context >>> switch either uses a software interrupt or the code emulates an >>> interrupt. The ISR effectively changes which task's stack is used to >>> return to. >>> >> >> Those happen too. >> >>>> Some O/S offerings offer a "kernel thread" so that the kernel can >>>> maintain a context in which to do things. An ISR may return to the >>>> kernel thread, which can then do what you say. But even then, I'd >>>> call that potentially a cooperative multitasker. I'm pretty sure >>>> VxWorks works in exactly this way - when configured as "cooperative", >>>> you still have a kernel thread that enforces these disciplines. >>> >>> Many O/Ses will have a kernel thread for certain operations that don't >>> need a full independent task. Every Preemptive Real Time system I know >>> has the ability for an ISR to cause an immediate context switch. >>> >> >> I've avoided this so far - but there may be serious mechanical problems >> with having an actual interrupt cause a context switch - outside of when >> a "software" interrupt might do such a thing. >> >> Interrupt handlers may have a different preamble or postamble ( or both >> ) than a blocking context switch or return from a blocking context >> switch. There may be a different stack signature. In addition, it may >> be necessary to allocate a stack just for a certain ISR. >> >> Of course there are hundreds of O/S out there on a great number of >> processors, and they're all quite different. >> >> I'll leave it at that. If you'd identify which O/S you're referring to, >> I'll look into it. >> > > One OS I use a lot is FreeRTOS, a multi-platform real-time micro-kernel. > This is the sort of OS I would expect in an embedded system. 
(In my > mind, a system using an OS as big as linux is on the fringe of what is > 'embedded'). >Yes, I believe you are correct about FreeRTOS. Thanks for helping me understand what you mean. I've seen that on a PIC and there's not much difference between a context switch and an interrupt. On heavier O/S offerings, there may well be more difference.>>> The key difference between a cooperative and a preemptive system is that >>> in a cooperative system, user tasks don't need to worry as much about >>> data sharing as other user tasks can only get in at defined points to >>> change/access the data, on the other hand, every task needs to think >>> about the needs of other tasks and makes sure that every task offers >>> opportunity for other tasks to run often enough. >> >> Both are good disciplines to have anyway :) >> >>> In preemptive systems, >>> you do need to worry about data sharing, as another task (of higher >>> priority) might get in at any time, but tasks only need to worry about >>> interfering with lower priority tasks (and you should establish >>> priorities so this isn't normally an issue). >>> >>>> >>>> And note well that this may account for any bias I show. Linux is, >>>> I believe, quite different but I've never had cause to fully understand >>>> it at that level. Linux depends on having A Lot Of CPU and there are a >>>> thousand heresies on what "high performance" means there. >>>> >>>> Language gets in the way, but an ISR will essentially request a context >>>> switch, not execute one. Er, at least in what I have seen espoused as >>>> best practices. Maybe some kernel does exactly what you say, but I'd >>>> have to do some measuring with a kernel like that before I was >>>> comfortable with it. >>> >>> What I am familiar with for small Real Time OSes, the task context is >>> saved as if the task was interrupted by an interrupt, and for an ISR to >>> perform a context switch, it changes what stack to use to return. 
>>> >> >> I have to admit - I haven't seen that variant except for one case - a >> custom job. And it took tracking whether the current context was >> interrupt or not, and it branched in rather ugly ways. > > Sounds like you are working on 'big' embedded systems,It varies.> and either it > doesn't support anything like real-time abilityThat's a spectrum, not an either-or. > or more likely this> operation is hidden from the user behind system calls the ISR does. (Do > you even write ISRs?)I've written many, many ISRs. For going on 32 years now. Just understand that there are domain dependencies that you may be generalizing away.>> >>>> >>>> Given that, the principal difference between preemptive and cooperative >>>> continues to be the timer interrupt being allowed to preempt between >>>> tasks. >>>> >>> Absolutely WRONG. If the timer tasks can cause a context switch at the >>> task level then you are preemptive, but the timer is no different than >>> any other interrupt in this respect. >>> >> >> Ah, you missed my point. Anyway... > > You miss my point. >> >>>> There's a lot of tradeoff between latency, jitter and (very roughly) >>>> determinism. >>>> >>>> I don't care for designs where events cause lots of other events. >>>> I prefer designs that keep counters for missed transitions and >>>> degrade gracefully, rather than treating the CPU as a zero sum >>>> resource. I also find nothing noble about trying to squeeze >>>> the last full measure of ... effort from a CPU. >>> >>> If you can assume infinite CPU, then yes, you can be very sloppy with >>> allocations. In most systems I deal with, while you do still design to >>> fail gracefully, or as graceful as possible, 'missed' events tend to be >>> a sign that your system is failing and not something to just idly try to >>> gloss over. Sometimes it is a sign you need to attempt to shut down to a >>> safe mode, or you report an error and back off on operation. 
>> >> If you can't assume *sufficient* CPU then you have other problems. >> Again, I'm constitutionally predisposed away from trying to squeeze >> every last cycle out of a processor in this day and age. YMMV. >> > > Yes, you need sufficient CPU, but CPU time IS a zero-sum game, if you > spend CPU time on one operation, it isn't available for another (as it > is a limited resource). If you have an operation that needs to respond > within 100usecs, you only have 100usecs of CPU available to spend to get > there. You need to either manage what you do during that time period so > that you meet that deadline, or you need a much faster processor than > you would otherwise need (which consumes excess power, which can be a > bigger problem). It's always a trade. On the projects I've done recently which used, say, a 4 MHz PIC32, utilization (and therefore jitter) was quite low.>>>> >>>> Finally, please understand that I've had to explain to people that >>>> assigning task priority by "well, that's more important" doesn't work, >>>> and that raising task priority doesn't make your task any faster :) >>>> >>>> This has pushed me to claim that "If you have to set the vector of task >>>> priorities a certain way for your system to work, it's broken." >>>> >>>> >>> >>> Yes, higher priority won't make a task execute any faster, it just says >>> less tasks can get in your way. If a task is slow because it gets >>> blocked by other tasks, raising its priority can help (but you need to >>> watch out that raising the priority can cause it to interfere with other >>> tasks). >> >> But in that case, it may be worth thinking hard about the design >> choices. >> > > The setting of priorities is not something to do lightly, but is a > significant part of design. I prefer designs that don't depend on them. -- Les Cargill
Common name for a "Task Loop"
Started by ●June 24, 2016
Reply by ●July 4, 2016
Reply by ●July 5, 2016
On 04.7.2016 г. 20:08, Richard Damon wrote:> ..... > > In every machine I have used, unless you are writing in very low level > assembly code, the entry into an ISR begins with a 'context switch', > where the context of the currently executing program is saved, and an > interrupt context is started. It has to, as the ISR must not change the > context of the running program. (Context is the environment that > something is executing in). Often part of this context switch is built > into the interrupt hardware, and some is in the preamble of the ISR. This is a huge generalization, and has been since the early '80s. An ISR does not have to save the context, only as much of it as it needs. As processors got more registers this became the norm. If you use an OS which saves the entire context to switch to the task where the interrupt will be handled you add a lot of latency to your system - never mind that the OS calls itself "real time"; it can be that of course, but such an OS does ADD latency. What really matters for an OS to guarantee low latencies is the longest time it will keep the interrupts masked. Dimiter ------------------------------------------------------ Dimiter Popoff, TGI http://www.tgi-sci.com ------------------------------------------------------ http://www.flickr.com/photos/didi_tgi/
Reply by ●July 5, 2016
On Tue, 05 Jul 2016 06:04:51 +0300, Dimiter_Popoff wrote:> On 04.7.2016 г. 20:08, Richard Damon wrote: >> ..... >> >> In every machine I have used, unless you are writing in very low level >> assembly code, the entry into an ISR begins with a 'context switch', >> where the context of the currently executing program is saved, and an >> interrupt context is started. It has to, as the ISR must not change the >> context of the running program. (Context is the environment that >> something is executing in). Often part of this context switch is built >> into the interrupt hardware, and some is in the preamble of the ISR. > > This is a huge generalization, has been since the early '80s. An ISR > does not have to save the context, just as much of it as it needs. > As processors got more registers this became the norm.+1. Yuppers. Etc. Truly minimal RISC machines only save the PC into a register (ARM and, I think, PowerPC, call it the "link register"), and it's up to the programmer to save that value to the stack (which may be software defined) before doing anything else -- and the first part of "anything else" is saving off any registers that are going to get stomped on.> If you use an OS which saves the entire context to switch to the task > where the interrupt will be handled you add a lot of latency to your > system - never mind the OS calls itself "real time", it can be that of > course but such an OS does ADD latency. > > What really matters for an OS to guarantee low latencies is the longest > time it will keep the interrupts masked.There was a slogan going around the Embedded community about a decade or two ago: "Real time doesn't mean real fast". It's a concise way of expressing your comment on latency and RTOSes. As long as the maximum latency of a context switch is known, then an OS can claim to be "real time". 
Many, many bonus points go to the RTOS that has a context switch of constant latency, because that makes it easier to test that things function correctly (mostly by enforcing a failure when the code that uses the RTOS isn't up to snuff). -- Tim Wescott Wescott Design Services http://www.wescottdesign.com I'm looking for work -- see my website!
Reply by ●July 5, 2016
On Tue, 5 Jul 2016 06:04:51 +0300, Dimiter_Popoff <dp@tgi-sci.com> wrote:>On 04.7.2016 г. 20:08, Richard Damon wrote: >> ..... >> >> In every machine I have used, unless you are writing in very low level >> assembly code, the entry into an ISR begins with a 'context switch', >> where the context of the currently executing program is saved, and an >> interrupt context is started. It has to, as the ISR must not change the >> context of the running program. (Context is the environment that >> something is executing in). Often part of this context switch is built >> into the interrupt hardware, and some is in the preamble of the ISR. > >This is a huge generalization, has been since the early '80s. An ISR >does not have to save the context, just as much of it as it needs.The hardware really needs only to save the PC and processor status word (Zero, Negative, Carry, Overflow etc. flags). The 8080 just saved the minimum; the 6800 saved practically everything. The tradeoff is that if the HW saves all registers at once, it can utilize full memory bandwidth. However, if the ISR needs to save additional registers, it usually has to do individual push/pop instructions, so in addition to the actual data transfers, instruction fetches are also needed.
Reply by ●July 5, 2016
On 7/3/2016 4:39 PM, Richard Damon wrote:> On 7/3/16 10:39 AM, Les Cargill wrote:>> This has pushed me to claim that "If you have to set the vector of task >> priorities a certain way for your system to work, it's broken." > > Yes, higher priority won't make a task execute any faster, it just says less > tasks can get in your way. If a task is slow because it gets blocked by other > tasks, raising its priority can help (but you need to watch out that raising > the priority can cause it to interfere with other tasks).The concept of "priorities" in an OS usually means you've not designed something properly (and will turn to tweaking priorities to make things perform acceptably). In a real-time OS, only deadlines and time-value criteria make ANY sense as scheduling criteria (otherwise, your "system" falls into the "just get a faster PC" category). [If you think you can test your way to validation, I invite you to take a trip to Mars...]
Reply by ●July 5, 2016
On 7/5/2016 11:57 AM, upsidedown@downunder.com wrote:> On Tue, 5 Jul 2016 06:04:51 +0300, Dimiter_Popoff <dp@tgi-sci.com> > wrote: > >> On 04.7.2016 г. 20:08, Richard Damon wrote: >>> ..... >>> >>> In every machine I have used, unless you are writing in very low level >>> assembly code, the entry into an ISR begins with a 'context switch', >>> where the context of the currently executing program is saved, and an >>> interrupt context is started. It has to, as the ISR must not change the >>> context of the running program. (Context is the environment that >>> something is executing in). Often part of this context switch is built >>> into the interrupt hardware, and some is in the preamble of the ISR. >> >> This is a huge generalization, has been since the early '80s. An ISR >> does not have to save the context, just as much of it as it needs. > > The hardware really needs only to save the PC and processor status > word (Zero, Negative, Carry, Overflow etc. flags). > > The 8080 just saved the minimum; the 6800 saved practically everything. > > The tradeoff is that if the HW saves all registers at once, it can > utilize full memory bandwidth. However, if the ISR needs to save > additional registers, it usually has to do individual push/pop > instructions, so in addition to the actual data transfers, > instruction fetches are also needed.Different processors have, historically, treated "context" in different ways (and their idea of context also varied). 8051s had "register sets", the 99K had a workspace (pointer), the Z80 had the alternate register set (amusing concept -- WHICH is the "alternate"?), some processors provide a different place to save foreground state, some provide different ISRs that preserve *less* state (e.g. the SA's firq), some have instructions that let you move lots of context in a single opcode (PULL/PUSH), some devices have very little inherent state to preserve! 
But, there's no free lunch; the more state -- regardless of whether it is implemented inside the CPU (e.g., registers) or in memory (e.g., workspaces) -- you want to avail yourself of, the more costly it is to preserve and restore. And, coding in a HLL (or in a richer OS environment) usually means you have less control over what "goodies" are being leveraged in the processor's architecture. The difference between a context switch and an ISR is that the context switch MUST preserve everything that might possibly be used by a task without placing constraints on where it can be interrupted (in a preemptive OS); an ISR only has to preserve (and later restore) the state that it will be "dirtying". [A context switch looks like a LONG ISR -- that may NEVER end! And, chances are every aspect of the process's state WILL be "dirtied" before the "interrupted" task is resumed] Note that there are hacks that can be used (on some processors) to defer preserving some portion of the task state as it may not be "dirtied" by other tasks -- or, may be "active" *through* the context switch (e.g., some historical floating point coprocessors could still be processing FP opcodes AFTER you've switched to a new task -- the contents of the FPU technically now part of some OTHER task beside the one that is currently executing). For systems without FP hardware, the state of the "floating point emulator" can similarly be withheld from the context switch -- by ensuring that any future user of the floating point emulation can be identified and the FP context switch implemented at that later time (the thought being that if the new task never runs any FP operations, then the cost of preserving these structures can be omitted -- as unnecessary!) 
[Some MTOS's actually allow you to specify if a particular task will NOT be using the FPU/FPE so any task switches into/out-of that task can safely leave the FP context (hardware or software) intact] You don't truly appreciate the costs of a context switch until you move into bigger processors (e.g., think TLB's, cache flushes). And, as a result, don't see how making the *right* context switches can make a HUGE difference in overall performance! Hence, the importance of scheduling algorithms beyond just "some criteria to decide who runs next".
Reply by ●July 5, 2016
On Tue, 05 Jul 2016 11:59:55 -0700, Don Y <blockedofcourse@foo.invalid> wrote:>On 7/3/2016 4:39 PM, Richard Damon wrote: >> On 7/3/16 10:39 AM, Les Cargill wrote: > >>> This has pushed me to claim that "If you have to set the vector of task >>> priorities a certain way for your system to work, it's broken."No problem as long as you can identify tasks of low criticality, such as (l)user interfaces, which can be dropped to 100-1000 ms response times, allowing more critical tasks to be served within a few milliseconds.>> Yes, higher priority won't make a task execute any faster, it just says less >> tasks can get in your way.The worst thing is that you have two tasks at the same priority and the OS does some round-robin "fairness" between them. This will cause a huge number of costly context switches between tasks, instead of letting the more critical task run to completion at once.>>If a task is slow because it gets blocked by other >> tasks, raising its priority can help (but you need to watch out that raising >> the priority can cause it to interfere with other tasks).Never raise the priority of a task !!!! Just lower it ! Of course, if all tasks are at zero priority, you may have to think about the big picture :-)>The concept of "priorities" in an OS usually means you've not designed >something properly (and will turn to tweaking priorities to make things >perform acceptably).Dropping priorities is safe. If you want some kind of predictable real time performance, you need to keep the average CPU load less than 40 - 60 %. Even some older Windows versions were surprisingly predictable at 50 % CPU load.
Reply by ●July 5, 2016
On 7/5/2016 12:51 PM, upsidedown@downunder.com wrote:> On Tue, 05 Jul 2016 11:59:55 -0700, Don Y > <blockedofcourse@foo.invalid> wrote: > >> On 7/3/2016 4:39 PM, Richard Damon wrote: >>> On 7/3/16 10:39 AM, Les Cargill wrote: >> >>>> This has pushed me to claim that "If you have to set the vector of task >>>> priorities a certain way for your system to work, it's broken." > > No problem as long as you can identify tasks of low criticality, such > as (l)user interfaces, which can be dropped to 100-1000 ms response > times, allowing more critical tasks to be served within a few > milliseconds.Nearness of "deadline" doesn't imply *importance* of deadline. I have a 9-track tape driver that I wrote many years ago. It has deadlines every 6 microseconds (the time between "bytes" passing the read head on the transport). If it misses a deadline, the byte is gone. OTOH, I could always rewind the tape and try again! The Juno mission has had this most recent "burn" deadline pending for ~5 years. There are NO Mulligans allowed in its timeline! [OTOH, if it misses its deadline, it's not "The End of The World" -- just some really disappointed scientists (and taxpayers) as they try to salvage what they can as the spacecraft misses its intended target]>>> Yes, higher priority won't make a task execute any faster, it just says less >>> tasks can get in your way. > > The worst thing is that you have two tasks at the same priority and > the OS does some round-robin "fairness" between them. This will cause > a huge number of costly context switches between tasks, instead of > letting the more critical task run to completion at once.If the tasks have the same *importance* (avoiding the term "priority") then this is exactly what *should* be done! Why should one be given preferential treatment as to finishing -- if that might mean the other gets arbitrarily delayed? Should I print YOUR paycheck this month instead of Tom's? 
Or, do Payroll instead of Receivables?>>> If a task is slow because it gets blocked by other >>> tasks, raising its priority can help (but you need to watch out that raising >>> the priority can cause it to interfere with other tasks). > > Never raise the priority of a task !!!!Then you end up with deadlock as, sooner or later, something will block and risk causing others of lower priority to block indefinitely. And, some of those lower priority tasks may be holding resources needed by higher priority tasks, etc. If a task is "slow", then give it more resources, not more "priority" (jiggering the quantum in a FIFO scheduler is usually easier and more predictable than arbitrarily playing with "priorities"). "Let's make everything louder than everything else..." If a task is not "responsive", then you need preemption and a means of indicating the "importance" of the task (the term "priority" suggests you can reduce this to a "small integer" that can be ranked relative to the other "priorities" of tasks). If timeliness is an issue, then you need to express importance in some sort of criteria that *conveys* timeliness ("priority" does not). "This task is priority 238!" tells you absolutely nothing about its timeliness constraints.> Just lower it ! > > Of course, if all tasks are at zero priority, you may have to think > about the big picture :-) > >> The concept of "priorities" in an OS usually means you've not designed >> something properly (and will turn to tweaking priorities to make things >> perform acceptably). > > Dropping priorities is safe.How SPECIFICALLY did you *set* the priorities, in the first place? What criteria did you use in assigning them (and then, changing them)? *You* are relying on yourself to keep track of all the "current priorities" in your system -- in your meatware. 
This means you will never be able to design a "complex" system (complex is anything that won't fit in a single braincase)> If you want some kind of predictable real time performance, you need > to keep the average CPU load less than 40 - 60 %. Even some older > Windows versions were surprisingly predictable at 50 % CPU load.Loading has nothing to do with expected performance. You can achieve stated timeliness goals with 100% utilization. But, you have to pick scheduling algorithms that are designed for that sort of utilization (arbitrary "priorities" give you *no* guarantees).
Reply by ●July 5, 2016
Tim Wescott wrote:> There was a slogan going around the Embedded community about a decade or > two ago "Real time doesn't mean real fast". It's a concise way of > expressing your comment on latency and RTOS's.The German term for "real time" is "Echtzeit". Some vary this to "Rechtzeit" which is best translated as "in time" and describes it much better. -- Reinhardt
Reply by ●July 6, 2016
On Tue, 05 Jul 2016 13:16:01 -0700, Don Y <blockedofcourse@foo.invalid> wrote:> >>>> If a task is slow because it gets blocked by other >>>> tasks, raising its priority can help (but you need to watch out that raising >>>> the priority can cause it to interfere with other tasks). >> >> Never raise the priority of a task !!!! > >Then you end up with deadlock as, sooner or later, something will block >and risk causing others of lower priority to block indefinitely. And, >some of those lower priority tasks may be holding resources needed >by higher priority tasks, etc.So you ended up in a classical priority inversion. Low priority tasks should not use locks, in fact locks should be avoided as much as possible. Instead use atomic updates or use high priority tasks to "own" shared resources with get/put access.>If a task is "slow", then give it more resources, not more "priority" >(jiggering the quantum in a FIFO scheduler is usually easier and >more predictable than arbitrarily playing with "priorities"). > > "Let's make everything louder than everything else..." > >If a task is not "responsive", then you need preemption and a means >of indicating the "importance" of the task (the term "priority" >suggests you can reduce this to a "small integer" that can be >ranked relative to the other "priorities" of tasks). > >If timeliness is an issue, then you need to express importance in >some sort of criteria that *conveys* timeliness ("priority" does not). > >"This task is priority 238!" tells you absolutely nothing about >its timeliness constraints. > >> Just lower it ! >> >> Of course, if all tasks are at zero priority, you may have to think >> about the big picture :-) >> >>> The concept of "priorities" in an OS usually means you've not designed >>> something properly (and will turn to tweaking priorities to make things >>> perform acceptably). >> >> Dropping priorities is safe. > >How SPECIFICALLY did you *set* the priorities, in the first place? 
>What criteria did you use in assigning them (and then, changing them)?It is a question of division of labor into tasks. Assign the lowest priority to the task that takes longest to run and so on. The worst case latency for task X can be calculated by summing the execution time of the ISR and the execution times of the tasks ahead of task X in priority order. This assumes that all tasks ahead of X become ready during the ISR and thus each task is executed in priority order before task X. Often in a system some hard real time responses are needed but in addition to that, some other things are completely happy with soft real time responses.>*You* are relying on yourself to keep track of all the "current priorities" >in your system -- in your meatware. This means you will never be able to >design a "complex" system (complex is anything that won't fit in a single >braincase)Done it for nearly 40 years. I try to limit the number of tasks to ten or less, so I can count them with my fingers. I might be able to increase it to 20 by using also my toes, but then I would have to take my socks off and my colleagues might not appreciate that :-)>> If you want some kind of predictable real time performance, you need >> to keep the average CPU load less than 40 - 60 %. Even some older >> Windows versions were surprisingly predictable at 50 % CPU load. > >Loading has nothing to do with expected performance. You can >achieve stated timeliness goals with 100% utilization. But, you >have to pick scheduling algorithms that are designed for that sort of >utilization (arbitrary "priorities" give you *no* guarantees).With the priority list and the execution times for each task, you can calculate the latency for each task and check if it is OK. If not, you may have to change the division of labor, e.g. split a task into two and move the other part to a lower priority. I very rarely have had to do this.







