EmbeddedRelated.com

Task priorities in non strictly real-time systems

Started by pozz January 3, 2020
David Brown wrote:
> On 06/01/2020 04:26, Les Cargill wrote: >> Don Y wrote: >>> On 1/5/2020 12:32 PM, Les Cargill wrote: >>>> pozz wrote: >>>>> Il 03/01/2020 15:19, David Brown ha scritto: >>>> <snop> >>>>> >>>>> You're right, cooperative scheduling is better if I want to reuse >>>>> the functions used in superloop architecture (that is a cooperative >>>>> scheduler). >>>> >>>> Preemptive scheduling probably causes more problems than it solves, >>>> over some problem domains. SFAIK, cooperative multitasking can be >>>> very close to fully deterministic, with interrupts being the part >>>> that's not quite deterministic. >>> >>> Preemptive frameworks can be implemented in a variety of ways. >>> It need NOT mean that the processor can be pulled out from under >>> your feet at any "random" time. >>> >>> Preemption happens whenever the scheduler is invoked.&#4294967295; In a system >>> with a time-driven scheduler, then the possibility of the processor >>> being rescheduled at any time exists -- whenever the jiffy dictates. >>> >> >> >> That seems to me to be incorrect. "Preemptive" means "the scheduler runs >> on the timer tick." I'd say "inherently". > > I agree that Don is wrong here - but you are wrong too! >
:)
> "Pre-emptive" means that tasks can, in general, be pre-empted. The
> processor /can/ be pulled out from under them at any time. Thus your
> threads must be written in a way that the code works correctly even if
> something else steals the processor time.
>
> But pre-emptive does not require a timer tick, or any other time-based
> scheduling. The pre-emption can be triggered by other means, such as
> non-timer interrupts. (To be a "real time operating system", you need a
> timing mechanism in control.)
I'd really think that in practice, the timer tick would be "first among
equals". I do have to admit that I have never really seen a case where
there was preemption and no timer tick.

--
Les Cargill
On 1/7/20 8:42 AM, Don Y wrote:
> On 1/7/2020 1:21 AM, upsidedown@downunder.com wrote:
>> On Mon, 6 Jan 2020 16:10:03 -0700, Don Y <blockedofcourse@foo.invalid>
>> wrote:
>>
>>>> In a typical RT system nearly all (and most of the time all) tasks are
>>>> in some sort of wait state waiting for a hardware (or software)
>>>
>>> Misconception.  That depends on the system's utilization factor
>>> which, in turn, depends on the pairing of hardware resources to
>>> application needs.
>>
>> In my experience, keep the long time average CPU load below 40-60 %
>> and the RT system behaves quite nicely.
>
> The devices I've designed typically operate at, near or *BEYOND* 100%.
> Time spent doing nothing means you've got resources that you don't need.
> Resources = product cost.  (Most of my past work was done pinching
> pennies; the idea of *adding* hardware to a design was anathema!)
>
> [I can recall recoding FINISHED code to replace 3-byte instructions
> with 2-byte instructions; or eliding an opcode by rearranging the
> order of register assignments; or noticing if the CY flag was set so
> I could eliminate an increment following an add by replacing it with
> an add-with-carry... all to see if I could trim a few hundred bytes
> out of the running/tested binary to remove a ROM device from the
> bill of materials!]
>
> If you design systems to gracefully degrade and recognize that almost
> all RT problems are "soft" -- or can be made so -- and know how to design
> in that arena, then there's no real downside to designing for no margin.
>
> The system with the barcode reader that I described elsewhere frequently
> ran into overload.  Things would get "sluggish", occasionally.  But, never
> broke.  (The barcode reader would typically tie up much of the machine's
> "real time" for a tenth of a second)
>
> The barcode reader, itself, became sort of a game; folks would rub
> labels back and forth across the detector as fast as they could
> in an attempt to crash the device.  You'd watch the display start
> to get sluggish.  UARTs would stop transmitting.  Keypresses
> would take longer to be recognized.  Etc.
>
> But, it wouldn't take long before "arms got tired" and the system
> rebounded.  (Lots of colorful/obscene comments when you'd watch someone
> doing this sort of thing!  :> )
Different fields, different requirements and guidelines. For most of my
work 50% utilization is a bit high. There might be a spec that (at least
at the product's first release) processor utilization will be below 50%,
and I have had cases with even lower limits (I think they knew that they
were going to up-spec the work needed, so they wanted some headroom). In
what I do the processor is cheap, but lack of performance is critical
(literally, someone could get killed in some worst cases). I am often
putting dollars of processor into a unit that costs thousands, so
pinching pennies on the processor isn't worth it.

If the scanner falling behind means the user needs to stop and wait,
that is one thing, and is likely tolerable. If it means you lost track
of a package, it might be something very different. If it means you have
gotten out of sync between packages and their labels and start
misdirecting them, you have a real big problem. That is the distinction
between a system that is 'Real Time' and a system that just needs to be
fast enough.
On 1/7/2020 23:43, Clifford Heath wrote:
> On 8/1/20 5:53 am, Dimiter_Popoff wrote:
>> On 1/7/2020 20:09, George Neuner wrote:
>>> .....
>>>
>>> You do have to modify the kernel to get the constants right ... but
>>> the clock interrupt handler is extremely simple: it's in kernel mode,
>>> and all it does is increment a variable.  You easily _can_ run a 10KHz
>>> clock on most Intel/AMD systems of the last 15 years.
>>
>> Didn't they get around to introducing an architecture-level timebase
>> register yet? Like the one POWER has had since its origin (the '80s,
>> I think)? Or anything like the decrementer register?
>>
>> I used to check Intel every 5 years or so to see if they had become
>> usable to me; gave up on that maybe 15 years ago.
>
> You didn't look closely enough. The TSC register has been in the Intel
> architecture since the first Pentium.
>
> CH
No, I clearly did not look closely enough. Or did, and have forgotten...
(less likely). I suppose seeing their register model has been enough to
put me off.

Dimiter

======================================================
Dimiter Popoff, TGI             http://www.tgi-sci.com
======================================================
http://www.flickr.com/photos/didi_tgi/
On Tue, 7 Jan 2020 22:23:31 -0500, Richard Damon
<Richard@Damon-Family.org> wrote:

> On 1/7/20 8:42 AM, Don Y wrote:
>> On 1/7/2020 1:21 AM, upsidedown@downunder.com wrote:
>
> [snip utilization discussion and barcode-reader anecdote, quoted in full upthread]
>
> Different fields, different requirements and guidelines. For most of my
> work 50% utilization is a bit high. There might be a SPEC that (at least
> at product first release) the processor utilization will be below 50%,
> and I have had cases with even lower specs (I think they knew that they
> were going to up spec the work needed so wanted some head room). But
> then what I do the processor is cheap, but lack of performance is
> critical (literally someone could get killed in some worse cases). I
> often am putting in dollars of processor into a unit that costs
> thousands, so pinching pennies on the processor isn't worth it.
Of course these utilization figures are just somewhat usable rules of
thumb, and it depends on how many RT tasks really need to be hard-RT and
how many can be soft-RT.

The hard-RT environment is quite demanding, since the worst-case sum of
interrupt latencies, added to the sum of the execution times of any
higher-priority RT tasks, must be shorter than the deadline. In practice
not all interrupts want to run just ahead of your task, and neither do
all higher-priority tasks want to run just before yours, so many soft-RT
tasks might be happy meeting a soft real-time deadline in 99 % or even
95 % of cases.
