Reply by January 9, 2020
On Tue, 7 Jan 2020 22:23:31 -0500, Richard Damon
<Richard@Damon-Family.org> wrote:

>On 1/7/20 8:42 AM, Don Y wrote:
>> On 1/7/2020 1:21 AM, upsidedown@downunder.com wrote:
>>> On Mon, 6 Jan 2020 16:10:03 -0700, Don Y <blockedofcourse@foo.invalid>
>>> wrote:
>>>
>>>>> In a typical RT system nearly all (and most of the time all) task are
>>>>> in some sort of wait state waiting for a hardware (or software)
>>>>
>>>> Misconception.  That depends on the system's utilization factor
>>>> which, in turn, depends on the pairing of hardware resources to
>>>> application
>>>> needs.
>>>
>>> In my experience, keep the long time average CPU load below 40-60 %
>>> and the RT system behaves quite nicely.
>>
>> The devices I've designed typically operate at, near or *BEYOND* 100%.
>> Time spent doing nothing means you've got resources that you don't need.
>> Resources = product cost.  (most of my past work was done pinching
>> pennies; the idea of *adding* hardware to a design was anathema!)
>>
>> [I can recall recoding FINISHED code to replace 3-byte instructions
>> with 2-byte instructions; or eliding an opcode by rearranging the
>> order of register assignments; or noticing if the CY was set so
>> I could eliminate an increment following an add by replacing it with
>> an add-with carry.... all to see if I could trim a few hundred bytes
>> out of the running/tested binary to remove a ROM device from the
>> bill of materials!]
>>
>> If you design systems to gracefully degrade and recognize that almost
>> all RT problems are "soft" -- or can be made so -- and know how to design
>> in that arena, then there's no real downside to designing for no margin.
>>
>> The system with the barcode reader that I described elsewhere frequently
>> ran into overload.  Things would get "sluggish", occasionally.  But, never
>> broke.  (The barcode reader would typically tie up much of the machine's
>> "real time" for a tenth of a second)
>>
>> The barcode reader, itself, became sort of a game; folks would rub
>> labels back and forth across the detector as fast as they could
>> in an attempt to crash the device.  You'd watch the display start
>> to get sluggish.  UARTS would stop transmitting.  Keypresses
>> would take longer to be recognized.  etc.
>>
>> But, it wouldn't take long before "arms got tired" and the system
>> rebounded.  (lots of colorful/obscene comments when you'd watch someone
>> doing this sort of thing!  :> )
>
>Different fields, different requirements and guidelines. For most of my
>work 50% utilization is a bit high. There might be a SPEC that (at least
>at product first release) the processor utilization will be below 50%,
>and I have had cases with even lower specs (I think they knew that they
>were going to up spec the work needed so wanted some head room. But
>then what I do the processor is cheap, but lack of performance is
>critical (literally someone could get killed in some worse cases). I
>often am putting in dollars of processor into a unit that costs
>thousands, so pinching pennies on the processor isn't worth it.
Of course these utilization figures are only rough rules of thumb, and much depends on how many RT tasks really need to be hard-RT and how many can be soft-RT. The hard-RT environment is quite demanding, since the worst-case sum of interrupt latencies, added to the sum of the execution times of any higher-priority RT tasks, must be shorter than the deadline. In practice, not every interrupt will fire just ahead of your task, nor will every higher-priority task run just before it, so a soft-RT task may be perfectly happy meeting its deadline in 99 % or even 95 % of cases.
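As a sketch of that worst-case check, here is the classic response-time recurrence in C (all task parameters below are made up for illustration; times in microseconds):

    /* Worst-case response time of a task, per the classic recurrence
     * R = C + sum over higher-priority tasks j of ceil(R/Tj)*Cj.
     * The task meets its deadline if the iteration converges with R <= D.
     */
    #include <stdio.h>

    typedef struct { unsigned C, T; } Task;   /* exec time, period */

    /* hp[] holds the tasks (and ISRs) with higher priority */
    unsigned response_time(unsigned C, unsigned D, const Task *hp, int n)
    {
        unsigned R = C, prev = 0;
        while (R != prev && R <= D) {
            prev = R;
            R = C;
            for (int j = 0; j < n; j++)                       /* interference */
                R += ((prev + hp[j].T - 1) / hp[j].T) * hp[j].C;   /* ceil */
        }
        return R;   /* R > D means the deadline can be missed */
    }

    int main(void)
    {
        Task hp[] = { { 200, 1000 }, { 300, 2000 } };  /* illustrative */
        unsigned R = response_time(500, 3000, hp, 2);
        printf("worst-case response: %u us (%s)\n",
               R, R <= 3000 ? "meets deadline" : "can miss");
    }

The soft-RT observation above corresponds to the fact that this bound assumes the "critical instant" where everything is released at once -- which rarely happens in practice.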
Reply by Dimiter_Popoff January 8, 2020
On 1/7/2020 23:43, Clifford Heath wrote:
> On 8/1/20 5:53 am, Dimiter_Popoff wrote:
>> On 1/7/2020 20:09, George Neuner wrote:
>>> .....
>>>
>>> You do have to modify the kernel to get the constants right ... but
>>> the clock interrupt handler is extremely simple: it's in kernel mode,
>>> and all it does is increment a variable.  You easily _can_ run a 10KHz
>>> clock on most Intel/AMD systems of the last 15 years.
>>
>> Didn't they get around to introducing an architecture level timebase
>> register yet? Like the one power has since its origin (80-s I think)?
>> Or anything like the decrementer register?
>>
>> I used to check Intel every 5 years or so to see if they had become
>> usable to me, gave up on that about maybe 15 years ago.
>
> You didn't look closely enough. The TSC register has been in the Intel
> architecture since the first Pentium.
>
> CH
No, I clearly did not look closely enough. Or did, and have forgotten... (less likely). I suppose seeing their register model was enough to put me off.

Dimiter

======================================================
Dimiter Popoff, TGI             http://www.tgi-sci.com
======================================================
http://www.flickr.com/photos/didi_tgi/
Reply by Richard Damon January 7, 2020
On 1/7/20 8:42 AM, Don Y wrote:
> On 1/7/2020 1:21 AM, upsidedown@downunder.com wrote:
>> On Mon, 6 Jan 2020 16:10:03 -0700, Don Y <blockedofcourse@foo.invalid>
>> wrote:
>>
>>>> In a typical RT system nearly all (and most of the time all) task are
>>>> in some sort of wait state waiting for a hardware (or software)
>>>
>>> Misconception.  That depends on the system's utilization factor
>>> which, in turn, depends on the pairing of hardware resources to
>>> application
>>> needs.
>>
>> In my experience, keep the long time average CPU load below 40-60 %
>> and the RT system behaves quite nicely.
>
> The devices I've designed typically operate at, near or *BEYOND* 100%.
> Time spent doing nothing means you've got resources that you don't need.
> Resources = product cost.  (most of my past work was done pinching
> pennies; the idea of *adding* hardware to a design was anathema!)
>
> [I can recall recoding FINISHED code to replace 3-byte instructions
> with 2-byte instructions; or eliding an opcode by rearranging the
> order of register assignments; or noticing if the CY was set so
> I could eliminate an increment following an add by replacing it with
> an add-with carry.... all to see if I could trim a few hundred bytes
> out of the running/tested binary to remove a ROM device from the
> bill of materials!]
>
> If you design systems to gracefully degrade and recognize that almost
> all RT problems are "soft" -- or can be made so -- and know how to design
> in that arena, then there's no real downside to designing for no margin.
>
> The system with the barcode reader that I described elsewhere frequently
> ran into overload.  Things would get "sluggish", occasionally.  But, never
> broke.  (The barcode reader would typically tie up much of the machine's
> "real time" for a tenth of a second)
>
> The barcode reader, itself, became sort of a game; folks would rub
> labels back and forth across the detector as fast as they could
> in an attempt to crash the device.  You'd watch the display start
> to get sluggish.  UARTS would stop transmitting.  Keypresses
> would take longer to be recognized.  etc.
>
> But, it wouldn't take long before "arms got tired" and the system
> rebounded.  (lots of colorful/obscene comments when you'd watch someone
> doing this sort of thing!  :> )
Different fields, different requirements and guidelines. For most of my work 50% utilization is a bit high. There might be a SPEC that (at least at product first release) the processor utilization will be below 50%, and I have had cases with even lower specs (I think they knew that they were going to up-spec the work needed, so they wanted some headroom). In what I do the processor is cheap, but lack of performance is critical (literally, someone could get killed in some worst cases). I am often putting dollars of processor into a unit that costs thousands, so pinching pennies on the processor isn't worth it.

If the scanner falling behind means the user needs to stop and wait, that is one thing, and is likely tolerable. If it means you lost track of a package, that might be something very different. If it means you have gotten out of sync between packages and their labels and start misdirecting them, you have a real big problem. That is the distinction between a system that is 'Real Time' and one that just sort of needs to be fast enough.
Reply by Les Cargill January 7, 2020
David Brown wrote:
> On 06/01/2020 04:26, Les Cargill wrote:
>> Don Y wrote:
>>> On 1/5/2020 12:32 PM, Les Cargill wrote:
>>>> pozz wrote:
>>>>> Il 03/01/2020 15:19, David Brown ha scritto:
>>>> <snop>
>>>>>
>>>>> You're right, cooperative scheduling is better if I want to reuse
>>>>> the functions used in superloop architecture (that is a cooperative
>>>>> scheduler).
>>>>
>>>> Preemptive scheduling probably causes more problems than it solves,
>>>> over some problem domains. SFAIK, cooperative multitasking can be
>>>> very close to fully deterministic, with interrupts being the part
>>>> that's not quite deterministic.
>>>
>>> Preemptive frameworks can be implemented in a variety of ways.
>>> It need NOT mean that the processor can be pulled out from under
>>> your feet at any "random" time.
>>>
>>> Preemption happens whenever the scheduler is invoked.  In a system
>>> with a time-driven scheduler, then the possibility of the processor
>>> being rescheduled at any time exists -- whenever the jiffy dictates.
>>
>> That seems to me to be incorrect. "Preemptive" means "the scheduler runs
>> on the timer tick." I'd say "inherently".
>
> I agree that Don is wrong here - but you are wrong too!
:)
> "Pre-emptive" means that tasks can, in general, be pre-empted. The > processor /can/ be pulled out from under them at any time. Thus your > threads must be written in a way that the code works correctly even if > something else steals the processor time. > > But pre-emptive does not require a timer tick, or any other time-based > scheduling. The pre-emption can be triggered by other means, such as > non-timer interrupts. (To be a "real time operating system", you need a > timing mechanism in control.) >
I'd really think that in practice, the timer tick would be "first among equals". I do have to admit that I have never really seen a case where there was preemption and no timer tick.

--
Les Cargill
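For what it's worth, here is a minimal sketch of what tick-less pre-emption looks like in practice, using the FreeRTOS API (the UART interrupt and task names are hypothetical; only the notify/yield calls are real FreeRTOS):

    /* Pre-emption driven purely by a device interrupt: the ISR wakes a
     * handler task, and if that task outranks whatever was running, the
     * context switch happens on ISR exit -- no timer tick involved.
     */
    #include "FreeRTOS.h"
    #include "task.h"

    static TaskHandle_t uart_task;     /* created elsewhere, high priority */

    void UART_IRQHandler(void)         /* vector name depends on the part */
    {
        BaseType_t woken = pdFALSE;

        /* ... read the data register, clear the interrupt flag ... */

        vTaskNotifyGiveFromISR(uart_task, &woken);
        portYIELD_FROM_ISR(woken);     /* pre-empt the running task if needed */
    }

    static void uart_task_fn(void *arg)
    {
        (void)arg;
        for (;;) {
            ulTaskNotifyTake(pdTRUE, portMAX_DELAY);  /* block until ISR fires */
            /* ... process the received byte(s) ... */
        }
    }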
Reply by Les Cargill January 7, 2020
Don Y wrote:
> Hi Les,
>
> [much elided as this thread is consuming more time than I'd
> prepared to spend on it...]
Fair enough - thanks for your thoughts.

<snip>

--
Les Cargill
Reply by January 7, 2020
On Wed, 8 Jan 2020 08:43:56 +1100, Clifford Heath <no.spam@please.net>
wrote:

>On 8/1/20 5:53 am, Dimiter_Popoff wrote:
>> On 1/7/2020 20:09, George Neuner wrote:
>>> .....
>>>
>>> You do have to modify the kernel to get the constants right ... but
>>> the clock interrupt handler is extremely simple: it's in kernel mode,
>>> and all it does is increment a variable.  You easily _can_ run a 10KHz
>>> clock on most Intel/AMD systems of the last 15 years.
>>
>> Didn't they get around to introducing an architecture level timebase
>> register yet? Like the one power has since its origin (80-s I think)?
>> Or anything like the decrementer register?
>>
>> I used to check Intel every 5 years or so to see if they had become
>> usable to me, gave up on that about maybe 15 years ago.
>
>You didn't look closely enough. The TSC register has been in the Intel
>architecture since the first Pentium.
The CPU clock, which drives the TSC, has horrible temperature stability. There are also issues with various power saving modes as well as with multicore and multiprocessors (set affinity to one processor only).
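A sketch of the usual mitigation for the multicore/multiprocessor issue on Linux (sched_setaffinity() and GCC's __rdtsc() are real interfaces; the rest is illustrative):

    /* Pin the calling thread to CPU 0 so successive TSC reads come from
     * the same core's counter.  This addresses only the cross-core
     * problem; temperature drift and power-saving states remain.
     */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <x86intrin.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);
        if (sched_setaffinity(0, sizeof set, &set) != 0) {  /* 0 = this thread */
            perror("sched_setaffinity");
            return 1;
        }

        unsigned long long t0 = __rdtsc();
        /* ... work to be timed ... */
        unsigned long long t1 = __rdtsc();
        printf("elapsed: %llu cycles\n", t1 - t0);
    }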
Reply by January 7, 2020
On Tue, 07 Jan 2020 13:09:29 -0500, George Neuner
<gneuner2@comcast.net> wrote:

>On Tue, 07 Jan 2020 07:05:58 +0200, upsidedown@downunder.com wrote:
>
>>On Mon, 06 Jan 2020 13:29:46 -0500, George Neuner
>><gneuner2@comcast.net> wrote:
>>
>>>On Mon, 06 Jan 2020 15:39:01 +0200, upsidedown@downunder.com wrote:
>>>
>>>>Smells like a time sharing system, in which the quantum defines how
>>>>much CPU time is given to each time share user before switching to
>>>>next user.
>>>
>>>A "jiffy" typically is a single clock increment, and a timeslice
>>>"quantum" is some (maybe large) number of jiffies.
>>>
>>>E.g., a modern system can have a 1 microsecond clock increment, but a
>>>typical Linux timeslice quantum is 10 milliseconds.
>>
>>There is no way that the interrupt frequency would be 1 MHz. The old
>>Linux default interrupt rate (HZ) was 100 and IIRC it is now 1000 Hz.
>>That microsecond is just the time unit used in the time accumulator.
>>With HZ=100 (10 ms) 10000 was added to the time accumulator in each
>>clock interrupt. By using addends >= 10001 the clock will run faster
>>and <= 9999 slower. This is useful if the interrupt rate is not
>>exactly as specified or when you want a NTP client to slowly catch up
>>to the NTP server time, without causing time jumps.
>
>Note that I said "can" rather than "does".
>
>You do have to modify the kernel to get the constants right ... but
>the clock interrupt handler is extremely simple: it's in kernel mode,
>and all it does is increment a variable.  You easily _can_ run a 10KHz
>clock on most Intel/AMD systems of the last 15 years.
In WinNT you can use the user-mode function SetSystemTimeAdjustment() to control how many 100 ns units are added to the time accumulator during each clock interrupt.
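A sketch of what such a call looks like (Win32; the caller needs the SeSystemtimePrivilege, and the one-unit slowdown chosen here is purely illustrative):

    /* Read the current tick increment and slow the software clock by one
     * 100 ns unit per clock interrupt (about 6.4 ppm at a 64 Hz tick).
     */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        DWORD adj, incr;
        BOOL disabled;

        if (!GetSystemTimeAdjustment(&adj, &incr, &disabled))
            return 1;
        printf("adjustment %lu, increment %lu per tick\n", adj, incr);

        /* add one unit less per interrupt -> the clock runs slightly slow */
        if (!SetSystemTimeAdjustment(incr - 1, FALSE))
            printf("SetSystemTimeAdjustment failed: %lu\n", GetLastError());
    }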
>More modern multicore chips (e.g, Haswell onward) can handle much
>higher interrupt rates - it's just a question of whether your clock
>can provide them ... and while commodity system board clocks don't,
>there certainly there are addons available that will.
It doesn't seem to be very sensible to use an extremely high interrupt rate (from any source) in highly pipelined processors, since each interrupt will flush the pipeline.
>>100 ns time units in the time accumulator have been used in VMS and
>>WinNT for decades.
>
>Yes. But that really was an illusion because the clock interrupt rate
>in NT was 1Khz or less (again depending on hardware).
When I experimented with SetSystemTimeAdjustment() in Win2000, the clock interrupt frequency was 100 Hz (10 ms) on single-processor machines and 64 Hz (15.625 ms) on dual-processor machines; thus on a dual-processor machine 156250 is added to the 64-bit time accumulator during each clock interrupt. Divide the time accumulator by 10 million and you get the number of seconds since a start date.

By adding a large constant (instead of 1) to the time accumulator, no matter what exotic interrupt frequency (say 123.456789 Hz) is used, the rounding errors are much smaller. Conceptually the time accumulator is similar to the phase accumulator used in direct digital RF synthesis (DDS).
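In code, the scheme is just this (a toy sketch using the 64 Hz / 156250 numbers from above):

    /* Time accumulator in 100 ns units: each simulated clock interrupt
     * adds a large constant, so odd interrupt rates round off far better
     * than adding 1 per tick would.  Nudging the addend up or down speeds
     * or slows the clock, exactly like the phase increment in a DDS.
     */
    #include <stdio.h>
    #include <stdint.h>

    #define UNITS_PER_SEC 10000000ULL      /* 100 ns units per second */

    int main(void)
    {
        uint64_t acc = 0;
        uint64_t addend = 156250;          /* 15.625 ms at a 64 Hz tick */

        for (int tick = 0; tick < 64; tick++)
            acc += addend;                 /* what the clock ISR does */

        printf("after 64 ticks: %llu units = %llu s\n",
               (unsigned long long)acc,
               (unsigned long long)(acc / UNITS_PER_SEC));
    }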
Reply by Clifford Heath January 7, 2020
On 8/1/20 5:53 am, Dimiter_Popoff wrote:
> On 1/7/2020 20:09, George Neuner wrote:
>> .....
>>
>> You do have to modify the kernel to get the constants right ... but
>> the clock interrupt handler is extremely simple: it's in kernel mode,
>> and all it does is increment a variable.  You easily _can_ run a 10KHz
>> clock on most Intel/AMD systems of the last 15 years.
>
> Didn't they get around to introducing an architecture level timebase
> register yet? Like the one power has since its origin (80-s I think)?
> Or anything like the decrementer register?
>
> I used to check Intel every 5 years or so to see if they had become
> usable to me, gave up on that about maybe 15 years ago.
You didn't look closely enough. The TSC register has been in the Intel
architecture since the first Pentium.

CH
Reply by Dimiter_Popoff January 7, 2020
On 1/7/2020 20:09, George Neuner wrote:
> .....
>
> You do have to modify the kernel to get the constants right ... but
> the clock interrupt handler is extremely simple: it's in kernel mode,
> and all it does is increment a variable.  You easily _can_ run a 10KHz
> clock on most Intel/AMD systems of the last 15 years.
Didn't they get around to introducing an architecture-level timebase register yet? Like the one POWER has had since its origin (the 80s, I think)? Or anything like the decrementer register?

I used to check Intel every 5 years or so to see if they had become usable to me; I gave up on that maybe 15 years ago.

Dimiter

======================================================
Dimiter Popoff, TGI             http://www.tgi-sci.com
======================================================
http://www.flickr.com/photos/didi_tgi/
Reply by George Neuner January 7, 2020
On Mon, 6 Jan 2020 19:50:23 -0700, Don Y <blockedofcourse@foo.invalid>
wrote:

>Hi George,
>
>On 1/6/2020 1:05 PM, George Neuner wrote:
>> On Sun, 5 Jan 2020 01:58:04 +0100, pozz <pozzugno@gmail.com> wrote:
>>
>>> Could you point on a good simple material to study (online or book)?
>>
>> Tony Hoare's book "Communicating Sequential Processes" is online at:
>> http://www.usingcsp.com/cspbook.pdf
>>
>> It will teach you everything you need to know about how to use message
>> passing to solve synchronization and coordination problems.  It won't
>> teach you about your specific operating system's messaging facilities.
>
>It's not a panacea.  It introduces design and run-time overhead, as well
>as adding another opportunity to "get things wrong".  <frown>
Yes, but not the point. The question was study material, and Hoare's book shows how to do it right.
>And, can complicate the cause-effect understanding of what's happening
>in your system (esp if you support asynchronous messages... note how
>many folks have trouble handling "signal()"!)
Yeah, but signal is a lossy channel. MPC over a network also might be lossy (though not necessarily), but within a single host it is reliable. And the single-host case is (mostly) the subject of this discussion.
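That lossiness is easy to demonstrate (a POSIX sketch: three standard signals sent while blocked coalesce into a single delivery, while three bytes down a pipe all arrive):

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t hits;
    static void on_usr1(int sig) { (void)sig; hits++; }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_usr1;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGUSR1, &sa, NULL);

        sigset_t blk, old;
        sigemptyset(&blk);
        sigaddset(&blk, SIGUSR1);
        sigprocmask(SIG_BLOCK, &blk, &old);
        raise(SIGUSR1); raise(SIGUSR1); raise(SIGUSR1);
        sigprocmask(SIG_SETMASK, &old, NULL);    /* pending signals coalesce */
        printf("signals delivered: %d of 3\n", (int)hits);   /* prints 1 */

        int fd[2];
        char c = 'x', buf[8];
        if (pipe(fd) != 0) return 1;
        write(fd[1], &c, 1); write(fd[1], &c, 1); write(fd[1], &c, 1);
        printf("pipe bytes read: %zd of 3\n", read(fd[0], buf, sizeof buf));
        return 0;
    }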
>*But*, the UNDERstated appeal is that it exposes the interactions between
>entities (that can be largely *isolated*, otherwise).  If you engineer
>those entities well, this "goodness" will be apparent in the nature of
>the messages and their frequency.
>
>E.g., in my world, looking at the IDL goes a LONG way to telling you
>what CAN happen -- and WHERE!
George