Tim Wescott wrote:> On Thu, 07 Jul 2016 23:09:08 +0300, Niklas Holsti wrote: > >> On 16-07-07 22:06 , Don Y wrote: >>> Sorry, but I don't have time to continue this discussion; >> >> I'm not in a hurry; come back when you have time. >> >>> I'll leave you (and everyone reading) with one *simple* way to >>> "validate" the assertion that priority-based schedulers are not toys >>> (below): >> >> [snips] >> >> Don Y wrote: >>>>> Exactly. "Priority" (in the sense of "set_task_priority()") is just >>>>> an expedient to make coding a scheduler easy. It tells you *nothing* >>>>> about the task's "importance", "timeliness constraints", etc. >> >> Niklas Holsti wrote: >>>> That is not so, if you follow a systematic and proven >>>> priority-assignment rule. >>>> For example, if you use deadline-monotonic priorities then the order >>>> of task priorities is equivalent to the order of task deadlines >>>> (expressed as the maximum duration between task activation and >>>> completion). So then the task priorities certainly tell you a lot >>>> about the (relative) timeliness constraints. >>>> >>>> If you just assign priorities based on some subjective "task >>>> importance" feeling, your statement is valid. >> >> Don Y wrote: >>> Then, EVERY product that employs a priority-based scheduler should have >>> a FORMAL document in part of its deliverables that clearly states these >>> priorities and the method by which they were derived (as you later call >>> them "the basis for one important set of the mathematical methods for >>> verifying real-time performance: schedulability analysis"). So, the >>> next bloke to look at the code knows exactly why they were chosen and >>> which assumptions he/she must *continue* to operate under for those >>> priority assignments to remain valid. 
>> >> Ideally yes, just as every mechanical engineering project should >> document its design assumptions, stress and strength calculations, etc., >> and every SW project should deliver full design documentation, complete >> user manuals, maintenance manuals, etc. >> >> But seriously, documentation requirements are a different dimension from >> the choice of design and implementation methods. Using a systematic >> design method does not oblige you to document the design, although it >> makes it much easier to document it. >> >>> Ask yourself how many of those documents you've seen? Authored? >> >> Several, for both questions. All the day-job projects I work on have >> such, and I am usually the author. >> >>> Or, is it just "some suggestive feeling" >>> that led to the small integers chosen? >> >> No. From the requirements, I derive a set of tasks and deadlines, and >> then assign priorities in deadline-monotonic order. Then I analyse or >> measure WCETs and crank the response-time algorithm to check >> schedulability. The main problem is that the task interactions are often >> more complex than assumed by the simpler schedulability analysis >> methods. Fortunately, in my projects it is usually possible to separate >> hard-real-time tasks from soft-real-time tasks, and the interactions of >> the hard-real-time tasks tend to be rather simple. > > I have never had a problem doing this separation myself, although I've > inherited code from other people that fails at this, and rather badly. > The most egregious example of this was code that put the most important > two jobs into one superloop inside of one task -- and did so in such a > way that bollixed up the whole system if incoming commands exceeded a > rather moderate rate. >I have found that command processing is usually the lowest priority thing in a system. 
But you have to buffer things properly.

> I'm not entirely sure, but I think there's a possibility that if your
> code _does_ have such an interaction between tasks, then it means that
> there's something fundamentally wrong with your software design, or
> possibly your system design as a whole.

That's kind of what I mean by "dependence on priority is Bad."

-- Les Cargill
Common name for a "Task Loop"
Started by ●June 24, 2016
Reply by ●July 8, 2016
Reply by ●July 8, 2016
On Fri, 8 Jul 2016 12:50:49 -0500, Les Cargill <lcargill99@comcast.com> wrote:

>> I'm not entirely sure, but I think there's a possibility that if your
>> code _does_ have such an interaction between tasks, then it means that
>> there's something fundamentally wrong with your software design, or
>> possibly your system design as a whole.
>
> That's kind of what I mean by "dependence on priority is Bad."

If you end up in such a situation, the division of functionality into tasks might not be optimal. It would make sense to check whether the functionality could be put into a single task. Alternatively, try dividing the same functionality into three (or more) tasks, e.g. so that a new high-priority server task with a short execution time serves the two original tasks, or move the slow functionality into a new low-priority task. After this, you can assign stable priorities to each of the three tasks.
Reply by ●July 8, 2016
On 16-07-08 19:06 , Tim Wescott wrote:

> On Thu, 07 Jul 2016 23:09:08 +0300, Niklas Holsti wrote:
>> [snip: deadline-monotonic priority assignment and schedulability
>> analysis, quoted in full up-thread]
>
> I have never had a problem doing this separation myself, although I've
> inherited code from other people that fails at this, and rather badly.
> The most egregious example of this was code that put the most important
> two jobs into one superloop inside of one task -- and did so in such a
> way that bollixed up the whole system if incoming commands exceeded a
> rather moderate rate.
>
> I'm not entirely sure, but I think there's a possibility that if your
> code _does_ have such an interaction between tasks, then it means that
> there's something fundamentally wrong with your software design, or
> possibly your system design as a whole.

I agree that some troublesome task-to-task interactions are design flaws and can be eliminated by design changes, sometimes by splitting some task into one hard-real-time task and another less urgent task. But I have not always succeeded at that.
I believe that there are analysis methods which can find bounds on the response times for these tasks, including queueing delays and communication latencies, but I haven't studied or tried them yet. I may have to do so in my current project, however, because the system has three inter-communicating computers, with each computer connected by one SpaceWire link to a central router. -- Niklas Holsti Tidorum Ltd niklas holsti tidorum fi . @ .
Reply by ●July 8, 2016
On 16-07-08 20:50 , Les Cargill wrote:

> Tim Wescott wrote:
>> [snip: earlier exchange on priority assignment, documentation, and
>> schedulability analysis, quoted in full up-thread]
>>
>> I have never had a problem doing this separation myself, although I've
>> inherited code from other people that fails at this, and rather badly.
>> The most egregious example of this was code that put the most important
>> two jobs into one superloop inside of one task -- and did so in such a
>> way that bollixed up the whole system if incoming commands exceeded a
>> rather moderate rate.
>
> I have found that command processing is usually the lowest priority
> thing in a system.
> But you have to buffer things properly.

I agree that command processing is often one of the less urgent activities, but sometimes there are also urgent commands, and then it is useful to separate the incoming commands according to urgency, and send urgent commands to a high-priority task and non-urgent commands to a lower-priority task.

>> I'm not entirely sure, but I think there's a possibility that if your
>> code _does_ have such an interaction between tasks, then it means that
>> there's something fundamentally wrong with your software design, or
>> possibly your system design as a whole.
>
> That's kind of what I mean by "dependence on priority is Bad."

That is not how I have understood your dislike of priorities. My current project has about 20 tasks. One of these tasks is cyclic with a 2 ms period; another is the background task which scrubs RAM with a deadline measured in several minutes. These two tasks do not interact at all, apart from sharing the same processor. The application depends on the 2 ms task having a higher priority than the background task, and I find it hard to imagine how that dependency can be called "bad".

-- Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi . @ .
Reply by ●July 8, 2016
On 08.7.2016 г. 23:35, Niklas Holsti wrote:

> [snip: quoted exchange with Tim Wescott, given in full up-thread]
>
> One example of interactions I find difficult is a shared I/O channel or
> bus that must be used by various tasks, for various purposes, with most
> transmissions being sporadic and such that the sending task must wait
> for and check a response transmission. I believe that there are analysis
> methods which can find bounds on the response times for these tasks,
> including queueing delays and communication latencies, but I haven't
> studied or tried them yet. I may have to do so in my current project,
> however, because the system has three inter-communicating computers,
> with each computer connected by one SpaceWire link to a central router.

Well, sending (TCP over Ethernet, let's be just practical) can be controlled by task priorities quite well. Reception is another beast; the quick fix which usually works is to throw more buffer RAM at it, and then, if it works, you may not have to seek the right fix :-).

Dimiter
Reply by ●July 8, 2016
upsidedown@downunder.com wrote:

> On Fri, 8 Jul 2016 12:50:49 -0500, Les Cargill
> <lcargill99@comcast.com> wrote:
>
>> That's kind of what I mean by "dependence on priority is Bad."
>
> If you end up in such a situation, the division of functionality into
> tasks might not be optimal.

Yeah, that's not what I mean. Look - the relationship between consumers and producers in realtime systems is analogous to the relationship between the escapement mechanism and the gearing in a regulator clock. So this forms a directed graph of dependencies. When an event fires, think of an arrow flying very accurately to hit a target that causes other arrows to fly. This roughly maps to "use an event-driven approach", and that means you can do analysis using message sequence charts (even if "messages" are just function calls). If you have a complete description of all subsets of message sequences, then you can completely implement the system and "prove" conformance to those message sequences. I put "prove" in quotes because it's not the same as a C.A.R. Hoare proof of correctness.

This does not mean that having poll loops is inappropriate - far from it. It depends. The goal *should* be determinism. If the goal is not determinism, then you have more trouble on your hands. Leaving dependencies implicit leads to "well, it worked when I tested it" problems. Even if it's more trouble, it's worth making everything explicit. Worst case, you find either "priority inversion" or Heisenbugs[1] - if you find them at all.

[1] assuming you're careful enough to avoid the usual math-or-memory-overwrite type Heisenbugs.
Ideally, you'd have detailed requirements specs and detailed design specs derived to support the design decisions being made, but nobody wants to pay for that - they'd rather arbitrage a shorter process and "insure" errors through other means. I'm slightly disillusioned with our industry at this point - the reports on the later F35 software seem to indicate that they have to reboot computers at times. https://www.rt.com/usa/335318-f35-radar-reboot-required/

> It would make sense to check if the functionality could be put into a
> single task.

Possibly. It depends on whether that's worse than using the RTOS furniture.

> Alternatively, try dividing the same functionality into three (or
> more) tasks, e.g. so that a new high-priority server task with a short
> execution time serves the two original tasks, or move the slow
> functionality into a new low-priority task. After this, you can assign
> stable priorities to each of the three tasks.

Dependency on priority is fine if you're prepared to do all the heavy lifting to truly understand the effect of all that in the kernel you use.

-- Les Cargill
Reply by ●July 8, 2016
Niklas Holsti wrote:

> On 16-07-08 20:50 , Les Cargill wrote:
>> [snip: quoted exchange given in full up-thread]
>>
>> I have found that command processing is usually the lowest priority
>> thing in a system. But you have to buffer things properly.
> I agree that command processing is often one of the less urgent
> activities, but sometimes there are also urgent commands, and then it is
> useful to separate the incoming commands according to urgency, and send
> urgent commands to a high-priority task and non-urgent commands to a
> lower priority task.

Indeed. But there are ways around that.

>>> I'm not entirely sure, but I think there's a possibility that if your
>>> code _does_ have such an interaction between tasks, then it means that
>>> there's something fundamentally wrong with your software design, or
>>> possibly your system design as a whole.
>>
>> That's kind of what I mean by "dependence on priority is Bad."
>
> That is not how I have understood your dislike of priorities. My current
> project has about 20 tasks. One of these tasks is cyclic with a 2 ms
> period; another is the background task which scrubs RAM with a deadline
> measured in several minutes. These two tasks do not interact at all,
> apart from sharing the same processor. The application depends on the 2
> ms task having a higher priority than the background task, and I find it
> hard to imagine how that dependency can be called "bad".

That's a case where it doesn't matter. I've seen cases where mis-assigning the task priorities caused different behavior. That's bad. Your situation is simple enough that it's okay. With exactly two tasks, you will have highly deterministic operation - assuming the memory scrubber doesn't cause any untoward starvation. With, say, a 100 msec time tick that runs the task queue, that's 20:1 for 2 msec - not bad at all. But I would also say that having the memory scrubber coded such that it would behave the same whether the kernel is preemptive or not would be a better design and implementation. Hopefully, you have a good handle on the jitter for the 2 ms task.
Ironically, one way I've estimated CPU utilization is to write a "do nothing" loop at very low priority that only increments a counter; then something periodically dumps a hi-res clock and the counter. I hope this is clearer - for some reason, this seems to be hard to explain.

-- Les Cargill
Reply by ●July 8, 2016
Niklas Holsti wrote:

> On 16-07-08 19:06 , Tim Wescott wrote:
>> [snip: quoted exchange given in full up-thread]
>
> I agree that some troublesome task-to-task interactions are design flaws
> and can be eliminated by design changes, sometimes by splitting some
> task into one hard-real-time task and another less urgent task. But I
> have not always succeeded at that.
> One example of interactions I find difficult is a shared I/O channel or
> bus that must be used by various tasks, for various purposes, with most
> transmissions being sporadic and such that the sending task must wait
> for and check a response transmission.

So the sender sends asynchronously, then blocks on a receive (presumably with a timeout), with other tasks/ISRs handling the details. You may even have separate send and receive loops, with state indicating the timeout for each receive. If you do this right, very few tasks are marked "ready" at any given instant and you'll get more deterministic operation.

> I believe that there are analysis
> methods which can find bounds on the response times for these tasks,
> including queueing delays and communication latencies, but I haven't
> studied or tried them yet.

It might be worth adding references to a high-res free-running timer and accumulating data. If you then wish to construct a latency model, you have data to see whether your model makes any sense. I've even been able to make the cycle count for the high-res timer instrumentation come out even with measurement before :) Plus, a "soak loop" like that can help clarify stress behavior.

> I may have to do so in my current project,
> however, because the system has three inter-communicating computers,
> with each computer connected by one SpaceWire link to a central router.

-- Les Cargill
Reply by ●July 8, 2016
On Fri, 8 Jul 2016 23:35:22 +0300, Niklas Holsti
<niklas.holsti@tidorum.invalid> wrote:

>[snip: quoted exchange with Tim Wescott, given in full up-thread]
>
>I agree that some troublesome task-to-task interactions are design flaws
>and can be eliminated by design changes, sometimes by splitting some
>task into one hard-real-time task and another less urgent task. But I
>have not always succeeded at that.
>One example of interactions I find difficult is a shared I/O channel or
>bus that must be used by various tasks, for various purposes, with most
>transmissions being sporadic and such that the sending task must wait
>for and check a response transmission. I believe that there are analysis
>methods which can find bounds on the response times for these tasks,
>including queueing delays and communication latencies, but I haven't
>studied or tried them yet. I may have to do so in my current project,
>however, because the system has three inter-communicating computers,
>with each computer connected by one SpaceWire link to a central router.

I would never even dream of letting multiple tasks directly access a serial line.

Just put a high-priority task on every serial line and let it do the arbitration between the application tasks. You can put much more intelligence into it than into an ISR.
Reply by ●July 9, 2016
On 09.7.2016 г. 01:28, upsidedown@downunder.com wrote:

> On Fri, 8 Jul 2016 23:35:22 +0300, Niklas Holsti
> <niklas.holsti@tidorum.invalid> wrote:
>
>> [snip: quoted exchange given in full up-thread]
>> One example of interactions I find difficult is a shared I/O channel or
>> bus that must be used by various tasks, for various purposes, with most
>> transmissions being sporadic and such that the sending task must wait
>> for and check a response transmission. [...]
>
> I would never even dream of letting multiple tasks directly access
> a serial line.

Of course, it is insane to have multiple tasks access the same port at the same time; I suppose he does not mean that.

[ROFL, I did exactly *that* by forgetting something obvious *this year*. I had a customer who got himself a new auxiliary HV source for his netMCA; to make things easier for him I offered to install drivers etc. remotely. I did, and it did not work (the HV attaches to an otherwise unused serial port). I started to chase the problem and looked at the HV via the DPS terminal program. The output from the HV was garbled... It could not be the baud rate etc., and I was sleepy and did not notice that the output was garbled in that characters were missing but none were damaged. You have no idea through what I put the poor guy - I asked him to borrow another aux HV in his country from a nearby institute which had one: same thing. I asked him to do various cable inspections - no result. Took all day. He was as understanding and cooperative as one could dream of, better than that really; I am still in disbelief at how patient he was.
Eventually I discovered my fault: I had forgotten that I used to start a shell early in the boot process through the serial port, and this was done not with the "uart" driver the HV and everything else should use but with an early ersatz of it... So both were going for the same port. You have no idea what an idiot I felt; I apologized for being one, and the guy was just relieved we got it right eventually.]

> Just put a high priority task on every serial line and let it do the
> arbitration between the application tasks. You can put much more
> intelligence than into the ISR.

The way I do this - say for ethernet output or PPP output - is to have an I/O task which processes an output queue of packets. The tasks queue their packets (pointers to scattered data, really) on their own and will succeed better based on priority; the I/O task simply pushes the queue (FIFO) out.

The input part is more complex: the incoming packets are buffered in a number of buffers, then each packet is passed to its processing object (based on protocol etc.), then say IP packets go into another queue (just pointers of course, no copying) which is processed by another task, which eventually discards the packets; the buffer is freed when all its packets are discarded (the I/O task periodically clearing buffers of packets unprocessed after a time limit). Task priority for input would step in if there is a lot of traffic and a low-priority task just does not get enough time to process its incoming data, which will eventually be silently discarded. Not that I see much if any of that in normal life, of course.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/