
Task priorities in non-strictly real-time systems

Started by pozz January 3, 2020
On Mon, 06 Jan 2020 13:29:46 -0500, George Neuner
<gneuner2@comcast.net> wrote:

>On Mon, 06 Jan 2020 15:39:01 +0200, upsidedown@downunder.com wrote:
>
>>Smells like a time sharing system, in which the quantum defines how
>>much CPU time is given to each time share user before switching to
>>next user.
>
>A "jiffy" typically is a single clock increment, and a timeslice
>"quantum" is some (maybe large) number of jiffies.
>
>E.g., a modern system can have a 1 microsecond clock increment, but a
>typical Linux timeslice quantum is 10 milliseconds.
There is no way that the interrupt frequency would be 1 MHz. The old
Linux default interrupt rate (HZ) was 100 and IIRC it is now 1000 Hz.
That microsecond is just the time unit used in the time accumulator.
With HZ=100 (10 ms), 10000 was added to the time accumulator on each
clock interrupt. By using addends >= 10001 the clock runs faster, and
with <= 9999 it runs slower. This is useful if the interrupt rate is
not exactly as specified, or when you want an NTP client to slowly
catch up to the NTP server time without causing time jumps. 100 ns
time units in the time accumulator have been used in VMS and WinNT for
decades.
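
A minimal sketch in C of such an adjustable time accumulator (the
names and the ISR hook are hypothetical, not from any particular
kernel):

    #include <stdint.h>

    /* Nominal tick increment for HZ = 100 (10 ms per interrupt). */
    #define NOMINAL_ADDEND_US 10000

    static volatile uint64_t time_us;                 /* accumulated time, us */
    static volatile uint32_t addend_us = NOMINAL_ADDEND_US;

    /* Periodic clock interrupt: advance time by the current addend. */
    void clock_isr(void)
    {
        time_us += addend_us;
    }

    /* Slew the clock by ppm parts per million (e.g. from an NTP client)
     * instead of stepping it; addend 10001 runs fast, 9999 runs slow. */
    void clock_adjust_ppm(int32_t ppm)
    {
        addend_us = (uint32_t)(NOMINAL_ADDEND_US
                    + (int64_t)NOMINAL_ADDEND_US * ppm / 1000000);
    }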
On Mon, 6 Jan 2020 16:10:03 -0700, Don Y <blockedofcourse@foo.invalid>
wrote:

>[attrs elided]
>
>On 1/6/2020 6:39 AM, upsidedown@downunder.com wrote:
>>> [There are certain contexts where it takes on very specific -- and
>>> numerical -- meanings]
>>>
>>> There's nothing magical about the jiffy.  But, it's typically
>>> wired to an IRQ (directly or indirectly).  With all the right
>>> mechanisms in place for a preemptive system, the jiffy can just
>>> raise an event (JIFFY_OCCURRED) and a very high priority "task"
>>> *in* the kernel will be dispatched which will run the scheduler.
>>
>> Unless you want to update a time of day time clock, what is the use of
>> regular timer interrupts ?
>
>It gives you a quantifiable way of accounting for the passage of time.
>
>How do you blink an indicator at a nominal rate without some notion of
>when it's next time to turn it on/off?
>
>How do you detect a long SPACE on a serial port line?
>
>A "regular" timer IRQ typically drives your timing system.  Apps
>then use *that* for their notion of elapsed time (both to measure and
>to wait/delay)
>
>> In some systems an I/O may arm a one shot clock interrupt waiting for
>> the I/O or until a timeout occurs. If the I/O is completed in time,
>> the receiver routine disables the timer request and the timer
>> interrupt never occurs.
>
>Wonderful if you have a HARDWARE timer to dedicate to each such use.
If you do not need regular timer interrupts, you have at least one
free timer. You just need a clock queue. When the first entry is
entered into the queue, also arm the timer for that expiration time.
When the timer expires, check the clock queue for the next expiration
time and arm the timer accordingly. If there are two entries in the
queue with nearly the same expiration time, combine them into one
entry. Special care is needed when entering new entries into, or
canceling entries from, the queue.

The problem with a regular timer interrupt is that the interrupt rate
must be kept quite low (say one interrupt every 10 ms) to avoid
interrupt overhead, so if you sometimes need 100 us timing resolution,
how do you do it? With a clock queue you can have a 100 us timing, and
perhaps the next timing event happens only after 70 ms: just two timer
interrupts in 70.1 ms.
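
A sketch of that clock-queue scheme, assuming a hypothetical one-shot
timer HAL (hw_timer_arm, hw_timer_now) and that callers mask
interrupts around queue manipulation; draining every entry already due
when the interrupt fires also handles nearly simultaneous expirations:

    #include <stdint.h>
    #include <stddef.h>

    struct clk_entry {
        uint64_t          expires_us;        /* absolute expiration time */
        void            (*callback)(void *);
        void             *arg;
        struct clk_entry *next;
    };

    static struct clk_entry *clk_head;       /* sorted by expires_us */

    /* Hypothetical HAL, assumed called with interrupts masked. */
    extern void     hw_timer_arm(uint64_t expires_us);
    extern uint64_t hw_timer_now(void);

    /* Insert an entry; re-arm the timer if it becomes the new head. */
    void clk_insert(struct clk_entry *e)
    {
        struct clk_entry **p = &clk_head;

        while (*p && (*p)->expires_us <= e->expires_us)
            p = &(*p)->next;
        e->next = *p;
        *p = e;
        if (clk_head == e)
            hw_timer_arm(e->expires_us);
    }

    /* One-shot timer ISR: run everything due, then arm for the next. */
    void hw_timer_isr(void)
    {
        uint64_t now = hw_timer_now();

        while (clk_head && clk_head->expires_us <= now) {
            struct clk_entry *e = clk_head;
            clk_head = e->next;
            e->callback(e->arg);
        }
        if (clk_head)
            hw_timer_arm(clk_head->expires_us);
    }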
Richard Damon wrote:
> Just a notice that the generally accepted definition of a "Cooperative"
> scheduling system, vs a "Preemptive" one is that a Cooperative system
> will only invoke the scheduler at defined points in the program, that
> generally includes most 'system calls', as opposed to preemptive, where
> the scheduler can be invoked at almost any point (except for limited
> critical sections).
>
Right. And for path-dependent reasons, that *usually* means the timer
tick. It, of course, doesn't have to.
> A system that only runs the scheduler at explicit calls to it isn't the
> normal definition of a cooperative system; I would call that more of a
> manually scheduled system.
>
> The advantage of a cooperative system is that since the scheduler
> happens at discrete points, most of the asynchronous interaction issues
> (races) go away (as it is very unlikely that the code will call the
> system functions in the middle of such an operation).
>
It's all fun and games until you have interrupt service... :)
> The advantage of a preemptive system is that, while you need to be more
> careful of race conditions, low priority code doesn't need to worry
> about the scheduling requirements of the high priority code.
-- Les Cargill
upsidedown@downunder.com wrote:
> On Mon, 6 Jan 2020 16:10:03 -0700, Don Y <blockedofcourse@foo.invalid>
> wrote:
>
> <exchange on regular timer interrupts vs. one-shot timers, quoted in
> full above, skipped>
>
> If you do not need regular timer interrupts, you have at least one
> free timer. You just need a clock queue. When the first entry is
> entered into the queue also arm the timer for that expiration time.
>
Clock queues work quite well.
> When the timer expires, check the clock queue for the next time and
> arm the timer accordingly. If there are two entries in the queue with
> nearly the same expiration time, combine these to one entry. Special
> care is needed when entering new or canceling entries from the queue.
>
> The problem with regular timer interrupt is that the interrupt rate
> must be kept quite low (say 10 ms) to avoid interrupt overhead, so if
> you sometimes need some 100 us timing resolution, how do you do it ?
>
You need timing services other than the jiffy clock. That may mean
extra hardware timers, or it may mean an expansion of the jiffy
service. We'd reprogram the timer on DOS machines to a faster rate,
then count that down in software to synthesize the "regular" jiffy
timer, mainly for TSRs.
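
A sketch of that DOS-era trick: run the hardware timer N times faster
for fine-grained timing, and keep the nominal jiffy rate alive in
software (chain_old_handler stands in for calling the saved original
interrupt vector; all names here are hypothetical):

    #include <stdint.h>

    #define SPEEDUP 64                /* fast tick = nominal tick / 64 */

    static volatile uint32_t fast_ticks;
    static uint32_t prescale = SPEEDUP;

    extern void chain_old_handler(void);  /* original jiffy work */

    void fast_timer_isr(void)
    {
        fast_ticks++;                 /* fine-resolution time base */

        if (--prescale == 0) {        /* every SPEEDUP-th interrupt... */
            prescale = SPEEDUP;
            chain_old_handler();      /* ...deliver the nominal jiffy */
        }
    }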
> With a clock queue you can have a 100 us timing and perhaps the next
> timing event might happen after 70 ms. Only two timer interrupts in
> 70.1 ms
>
Yeah; they're nice. You might even be able to synthesize a timer queue
out of a thread in, say, Linux, using usleep(). Great way to leverage
hardware timers as well.

-- Les Cargill
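
A sketch of that usleep()-based timer-queue thread, assuming
hypothetical helpers next_expiry_us() and fire_due_timers() that
manage the queue under their own lock:

    #include <pthread.h>
    #include <unistd.h>
    #include <stdint.h>
    #include <stddef.h>

    extern int64_t next_expiry_us(void);  /* us to next timer, <0 if none */
    extern void    fire_due_timers(void); /* run callbacks now due */

    static void *timer_thread(void *unused)
    {
        (void)unused;
        for (;;) {
            int64_t wait_us = next_expiry_us();
            if (wait_us < 0 || wait_us > 100000)
                wait_us = 100000;      /* cap sleep; re-check every 100 ms */
            if (wait_us > 0)
                usleep((useconds_t)wait_us);
            fire_due_timers();
        }
        return NULL;
    }

    /* Usage: pthread_t tid; pthread_create(&tid, NULL, timer_thread, NULL); */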
On Mon, 6 Jan 2020 16:10:03 -0700, Don Y <blockedofcourse@foo.invalid>
wrote:

>> In a typical RT system nearly all (and most of the time all) task are
>> in some sort of wait state waiting for a hardware (or software)
>
>Misconception.  That depends on the system's utilization factor
>which, in turn, depends on the pairing of hardware resources to application
>needs.
In my experience, keep the long-term average CPU load below 40-60 %
and the RT system behaves quite nicely.
On Mon, 6 Jan 2020 16:10:03 -0700, Don Y <blockedofcourse@foo.invalid>
wrote:

<description of simple RT kernel for 6809 with typically 6 to 12 tasks
skipped>

>>>> The scheduler checked the task state byte of each created task
>>>> (requiring just 3 instructions/task). If no new higher priority task
>>>> had become runnable, the scheduler performed a simple return from
>>>> interrupt restoring the registers from the local stack and the
>>>> suspended task was resumed.
>>>
>>> Deciding and picking can often be far more complicated actions.
>>> And, the criteria used to make that selection are often not simple
>>> (static?) priorities.
Assume a task can be WAITING (not ready), runnable (READY), or
actually RUNNING. The scheduler simply scans the priority list in
priority order and, when it finds the first task in the READY state
(or already RUNNING), executes that task. The scan skips all tasks in
the WAITING state. Only if a very large number (dozens) of tasks are
expected to be WAITING before the first READY task is found might it
make sense to keep a separate queue of READY or RUNNING tasks only.
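
A sketch of that scan in C, with hypothetical task_state[] and
dispatch(); tasks are indexed in descending priority order:

    #include <stdint.h>

    enum { TASK_WAITING, TASK_READY, TASK_RUNNING };

    #define NTASKS 12

    /* One state byte per task, highest priority first, as in the
     * small-kernel scheme described above. */
    static volatile uint8_t task_state[NTASKS];

    extern void dispatch(int task);   /* restore context and run */

    /* Run the highest-priority task that is READY or already RUNNING. */
    void schedule(void)
    {
        for (int t = 0; t < NTASKS; t++) {
            if (task_state[t] != TASK_WAITING)
                dispatch(t);          /* does not return here */
        }
        /* Falls through only if every task waits: idle until the
         * next interrupt. */
    }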
>> Why complicate things with dynamically changing priorities ? For
>> avoiding priority inversion or improve user responsiveness ? But as I
>
>I find the whole notion of "priorities" to be abhorrent.  It suggests
>that you KNOW what the priorities SHOULD be -- as part of the DESIGN
>of the system.
Of course I know :-).
>In practice, they tend to be twiddled after-the-fact
>when things don't quite work as you'd HOPED (i.e., you screwed up
>the design and are now using a rejiggering of priorities to try to
>coax it into behaving the way you'd HOPED)
I may sometimes have to split a task that consumes more CPU time than
expected for its priority level into two tasks, moving the
time-consuming work into a separate task that runs at a lower
priority.
>When you assign arbitrary numeric values to impose an ordering on
>tasks ("relative importance"), do you prepare a formal justification
>for those choices?  A *rationale* that defends your choices?
Of course I check the deadline requirement of each task before
assigning priorities. Tasks without a formal deadline requirement can
often be executed in the null task at the lowest priority. This of
course requires that the average CPU load is well below 100 %, so that
the null task gets executed from time to time.
On 06/01/2020 04:26, Les Cargill wrote:
> Don Y wrote:
>> On 1/5/2020 12:32 PM, Les Cargill wrote:
>>> pozz wrote:
>>>> On 03/01/2020 15:19, David Brown wrote:
>>> <snop>
>>>>
>>>> You're right, cooperative scheduling is better if I want to reuse
>>>> the functions used in superloop architecture (that is a cooperative
>>>> scheduler).
>>>
>>> Preemptive scheduling probably causes more problems than it solves,
>>> over some problem domains. SFAIK, cooperative multitasking can be
>>> very close to fully deterministic, with interrupts being the part
>>> that's not quite deterministic.
>>
>> Preemptive frameworks can be implemented in a variety of ways.
>> It need NOT mean that the processor can be pulled out from under
>> your feet at any "random" time.
>>
>> Preemption happens whenever the scheduler is invoked.  In a system
>> with a time-driven scheduler, then the possibility of the processor
>> being rescheduled at any time exists -- whenever the jiffy dictates.
>>
>
> That seems to me to be incorrect. "Preemptive" means "the scheduler runs
> on the timer tick." I'd say "inherently".
I agree that Don is wrong here - but you are wrong too!

"Pre-emptive" means that tasks can, in general, be pre-empted. The
processor /can/ be pulled out from under them at any time. Thus your
threads must be written in a way that the code works correctly even if
something else steals the processor time.

But pre-emptive does not require a timer tick, or any other time-based
scheduling. The pre-emption can be triggered by other means, such as
non-timer interrupts. (To be a "real time operating system", you need
a timing mechanism in control.)
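
A sketch of pre-emption triggered by a non-timer interrupt, with
hypothetical kernel primitives make_ready() and reschedule_from_isr():

    /* A device ISR marks a higher-priority task ready and requests a
     * reschedule on interrupt exit; no timer tick is involved. */
    extern void make_ready(int task);
    extern void reschedule_from_isr(void);  /* context-switch on ISR exit */

    #define UART_RX_TASK 3

    void uart_rx_isr(void)
    {
        /* ... drain the UART FIFO into a buffer ... */
        make_ready(UART_RX_TASK);     /* wake the handler task         */
        reschedule_from_isr();        /* pre-empt whatever was running */
    }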
On 06/01/2020 19:29, George Neuner wrote:
> On Mon, 06 Jan 2020 15:39:01 +0200, upsidedown@downunder.com wrote:
>
>> Smells like a time sharing system, in which the quantum defines how
>> much CPU time is given to each time share user before switching to
>> next user.
>
> A "jiffy" typically is a single clock increment, and a timeslice
> "quantum" is some (maybe large) number of jiffies.
>
> E.g., a modern system can have a 1 microsecond clock increment, but a
> typical Linux timeslice quantum is 10 milliseconds.
>
I don't believe there is any kind of "official" definition for a "jiffy". On Linux at least, a "jiffy" /is/ the timeslice quantum - that is what it means. And it is generally 1 millisecond, but can sometimes be slower on server-optimised kernels. (It used to be 10 milliseconds on desktops too, but that was a long time ago.)
On 1/7/20 2:38 AM, Les Cargill wrote:
> Richard Damon wrote:
>> Just a notice that the generally accepted definition of a
>> "Cooperative" scheduling system, vs a "Preemptive" one is that a
>> Cooperative system will only invoke the scheduler at defined points in
>> the program, that generally includes most 'system calls', as opposed
>> to preemptive, where the scheduler can be invoked at almost any point
>> (except for limited critical sections).
>>
>
> Right. And for path-dependent reasons, that *usually* means the timer
> tick ... thing. It, of course, doesn't have to.
It can be ANY of the various interrupts that the machine has: the
timer, a serial port, or any other device. I find most of my scheduler
invocations are the result of a device-driver interrupt, and only a
lesser number come from the system timer. If you REALLY are doing most
of your rescheduling on timer ticks, then in my experience you likely
don't really need real-time performance.
>
>> A system that only runs the scheduler at explicit calls to it isn't
>> the normal definition of a cooperative system, I would call that more
>> of a manually scheduled system.
>>
>> The advantage of a cooperative system is that since the scheduler
>> happens at discrete points, most of the asynchronous interaction
>> issues (races) go away (as it is very unlikely that the code will call
>> the system functions in the middle of such an operation).
>>
>
> It's all fun and games until you have interrupt service... :)
ISRs should have a very limited focus in what they manipulate, so most
of the code shouldn't be touching anything that the ISR is going to
touch. In my opinion, if you are trying to 'peek' at the progress of
an interrupt-based operation, you're probably doing it wrong.
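
A sketch of such a limited-focus ISR: the only shared state is a
single-producer/single-consumer ring buffer (uart_read_data() is a
hypothetical device access; memory barriers are omitted, assuming a
simple single-core MCU):

    #include <stdint.h>
    #include <stdbool.h>

    #define RING_SIZE 64u                 /* power of two */

    static volatile uint8_t  ring[RING_SIZE];
    static volatile uint32_t head;        /* written only by the ISR  */
    static volatile uint32_t tail;        /* written only by the task */

    extern uint8_t uart_read_data(void);  /* hypothetical register access */

    /* Producer: the ISR touches nothing but the ring and its index. */
    void rx_isr(void)
    {
        ring[head % RING_SIZE] = uart_read_data();
        head++;
    }

    /* Consumer, task context: never peeks at the ISR's internals. */
    bool rx_get(uint8_t *out)
    {
        if (tail == head)
            return false;                 /* nothing pending */
        *out = ring[tail % RING_SIZE];
        tail++;
        return true;
    }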
>
>> The advantage of a preemptive system is that, while you need to be
>> more careful of race conditions, low priority code doesn't need to
>> worry about the scheduling requirements of the high priority code.
>
On 1/7/20 4:11 AM, David Brown wrote:
> But pre-emptive does not require a timer tick, or any other time-based
> scheduling. The pre-emption can be triggered by other means, such as
> non-timer interrupts. (To be a "real time operating system", you need a
> timing mechanism in control.)
Real-Time does NOT need a timing mechanism in control. Real-Time means
that operations have a reasonably strong definition of a deadline by
which they need to get done, but many systems can be designed to meet
that without needing a clock/timer.

For example, a given device needs to have its request serviced within
a specified time. I can design the system so that requirement is met
by the known workload and the priorities given to the various tasks.
Often the timer is only needed to detect that something has gone
wrong, and that I need to sacrifice one deadline to meet another, or
abort a failed operation.

A system with static priorities and run-the-highest-priority-ready-task
scheduling can be simpler in design (IF it can meet the requirements).
