On 06/08/14 22:31, Randy Yates wrote:
> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>
>> On 06/08/14 20:56, Jack wrote:
>>> Paul Rubin <no.email@nospam.invalid> wrote:
>>>
>>>> Rob Gaddi <rgaddi@technologyhighland.invalid> writes:
>>>>> How do you guarantee microsecond level response from Python (and I
>>>>> assume Linux)?
>>>>
>>>> Linux has a realtime scheduler but guaranteeing microsecond response is
>>>> not realistic because of nondeterministic cache misses and that sort of
>>>> thing. For soft realtime maybe it's feasible. Milliseconds are easier
>>>> than microseconds of course.
>>>
>>> or you use something like Linux RTAI that gives you hard real time.
>>
>> ... providing, of course, the processor has neither instruction nor
>> data caches. If either is present then the ratio of mean:max
>> latency rapidly becomes very significant.
>>
>> Even a 486 with its tiny caches showed a 10:1 spread in interrupt
>> latency depending on what was/wasn't in the caches. (IIRC that was
>> measured with a tiny kernel, certainly nothing like the size/complexity
>> of a Linux kernel.)
>
> Aren't interrupt routines in some permanently-cached portion of the MMU?

No, and once an MMU is involved all the paging information
might or might not be cached. Double whammy.
Linux question -- how to tell if serial port in /dev is for real?
Started by ●August 4, 2014
Reply by ●August 6, 2014
Reply by ●August 7, 2014
Tom Gardner <spamjunk@blueyonder.co.uk> writes:

> On 06/08/14 22:31, Randy Yates wrote:
>> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
<snip>
>> Aren't interrupt routines in some permanently-cached portion of the MMU?
>
> No, and once an MMU is involved all the paging information
> might or might not be cached. Double whammy.

So you're telling me that Intel made a processor that, by design, could
not service interrupts in a deterministic fashion? Hard to believe.

Is that also the case for present-day Intel architectures?
--
Randy Yates
Digital Signal Labs
http://www.digitalsignallabs.com
Reply by ●August 7, 2014
Randy Yates <yates@digitalsignallabs.com> writes:

> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
<snip>
>> No, and once an MMU is involved all the paging information
>> might or might not be cached. Double whammy.
>
> So you're telling me that Intel made a processor that, by design, could
> not service interrupts in a deterministic fashion? Hard to believe.
>
> Is that also the case for present-day Intel architectures?

I should add that real-time operation is therefore not possible on such
processors, regardless of what operating system is used. This just
doesn't sound right to me...
--
Randy Yates
Digital Signal Labs
http://www.digitalsignallabs.com
Reply by ●August 7, 2014
Randy Yates wrote:

> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
<snip>
>> No, and once an MMU is involved all the paging information
>> might or might not be cached. Double whammy.
>
> So you're telling me that Intel made a processor that, by design, could
> not service interrupts in a deterministic fashion? Hard to believe.

I believe that is the case. They got down to millisecond-ish
resolution.

I was using PPC boards at the time, so I didn't try it. ARM
may be better, or not.

When people need fast and deterministic, the answer has generally
been to use an FPGA or a high-speed PIC.

> Is that also the case for present-day Intel architectures?

Yes. Everything is highly buffered, although with Windows there are
some services in the multimedia sphere that may be better*. I've
never seen an ASIO audio driver that gets much below 1 ms, but
that may be partly to limit turnarounds when exchanging data
with the card/bus device.

*may be true of Linux; dunno.

You should see some of the things online gamers have to deal with
related to latency.
--
Les Cargill
Reply by ●August 7, 2014
upsidedown@downunder.com wrote:

> On Wed, 06 Aug 2014 13:08:32 +0800, Reinhardt Behm
<snip>
> The only reason I like the Linux approach is when a large number of
> serial lines (more than a few dozen) are needed, or there is a need to
> wait for a mix of real serial ports and serial ports connected through
> ethernet/serial converters (using TCP/UDP sockets); this can all be
> done in a single thread with a single select statement.

The MinGW GCC port I use for 'C' programming on Windows has a
"winsock.h" and a "winsock2.h" that both offer select(). I presume
it's available for Microsoft toolchains but have no way of finding
out for sure.
--
Les Cargill
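For readers unfamiliar with the pattern, the single-thread select() loop
described above looks roughly like the following minimal POSIX sketch.
The device path, converter address, and port number are hypothetical,
and error handling is pared to the bone.

/* Minimal sketch: one select() loop watching a real serial port and a
   TCP socket to an ethernet/serial converter. Names are made up. */
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int tty = open("/dev/ttyS0", O_RDWR | O_NOCTTY | O_NONBLOCK);
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = { .sin_family = AF_INET,
                              .sin_port = htons(4001) };  /* hypothetical */
    inet_pton(AF_INET, "192.168.1.50", &sa.sin_addr);
    connect(sock, (struct sockaddr *)&sa, sizeof sa);

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(tty, &rfds);
        FD_SET(sock, &rfds);
        int maxfd = (tty > sock ? tty : sock) + 1;

        /* Block until either descriptor has data; a few dozen more
           ports just means a few dozen more FD_SETs, no extra threads. */
        if (select(maxfd, &rfds, NULL, NULL, NULL) < 0)
            break;

        char buf[256];
        if (FD_ISSET(tty, &rfds))
            read(tty, buf, sizeof buf);   /* handle serial data here */
        if (FD_ISSET(sock, &rfds))
            read(sock, buf, sizeof buf);  /* handle converter data here */
    }
    return 0;
}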
Reply by ●August 7, 2014
Les Cargill wrote:

> Randy Yates wrote:
>> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
<snip>
>> So you're telling me that Intel made a processor that, by design, could
>> not service interrupts in a deterministic fashion? Hard to believe.
>
> I believe that is the case. They got down to millisecond-ish
> resolution.
>
> I was using PPC boards at the time, so I didn't try it. ARM
> may be better, or not.
>
> When people need fast and deterministic, the answer has generally
> been to use an FPGA or a high-speed PIC.
>
>> Is that also the case for present-day Intel architectures?
>
> Yes. Everything is highly buffered, although with Windows there are
> some services in the multimedia sphere that may be better*. I've
> never seen an ASIO audio driver that gets much below 1 ms, but
> that may be partly to limit turnarounds when exchanging data
> with the card/bus device.
>
> *may be true of Linux; dunno.
>
> You should see some of the things online gamers have to deal with
> related to latency.

It was even worse with CPUs like the Geode. The video system on the
chip "stole" the CPU at every HSYNC for some microseconds. Really
nice for real-time operation.
--
Reinhardt
Reply by ●August 7, 2014
On 07/08/14 05:36, Randy Yates wrote:

> Randy Yates <yates@digitalsignallabs.com> writes:
<snip>
>> So you're telling me that Intel made a processor that, by design, could
>> not service interrupts in a deterministic fashion? Hard to believe.
>>
>> Is that also the case for present-day Intel architectures?
>
> I should add that real-time operation is therefore not possible on such
> processors, regardless of what operating system is used. This just
> doesn't sound right to me...

Yes, that is correct. There is always a tradeoff between deterministic
real-time behaviour and high throughput. You can't optimise for both,
either in a CPU or in an OS.

So processors like desktop x86 devices have long latencies and slow
reactions to interrupts, which is countered by using buffers, DMA, etc.
Processors like Cortex-M devices have short reaction times, but less
throughput per clock. And the same applies to OSes.

You can change the compromises to some extent. Linux has options in the
kernel to control the balance, such as by controlling the preemption of
kernel calls (disabled preemption means smoother flow for greater
throughput on servers; enabled preemption means faster reactions to user
input on a desktop), and the OS and CPU can work together by locking
interrupts or processes to particular cores in order to avoid cache
flushes, as the sketch below illustrates.
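A minimal sketch of that last point, using the standard Linux calls:
pin the thread to one core, give it a real-time scheduling class, and
lock its pages so the paging "double whammy" discussed above can't
stall it on a page fault. The core number and priority are arbitrary
choices, and SCHED_FIFO needs root or CAP_SYS_NICE.

/* Illustrative sketch only, not code from the thread. */
#define _GNU_SOURCE
#include <sched.h>
#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(1, &set);                       /* core 1: arbitrary choice */
    if (sched_setaffinity(0, sizeof set, &set) != 0)
        perror("sched_setaffinity");

    struct sched_param sp = { .sched_priority = 80 };  /* arbitrary */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");       /* needs root/CAP_SYS_NICE */

    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        perror("mlockall");                 /* keep all pages in RAM */

    /* ... time-critical loop would go here ... */
    return 0;
}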
Reply by ●August 7, 2014
On 06/08/2014 04:38, Tim Wescott wrote:

> On Tue, 05 Aug 2014 16:15:44 -0700, Paul Rubin wrote:
>
>> Tim Wescott <tim@seemywebsite.really> writes:
>>> All of the desktop serial-port stuff I've done in the last decade has
>>> been in support of embedded work, ...
>>> So I'm constrained to C or C++.
>>
>> If this is about embedded Linux, Python works great for that.
>
> Will Python run on an ARM Cortex M0 with 64k of ROM and 8K of RAM?
>
> With room left over for actual application code?

Linux on an ARM Cortex M0? Fantastic... could you give us more details
about the board? Is it a custom board? Are you able to run a full Linux
(not uClinux) on an M0-based board?
Reply by ●August 7, 2014
On 07/08/14 04:32, Randy Yates wrote:

> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
<snip>
>> No, and once an MMU is involved all the paging information
>> might or might not be cached. Double whammy.
>
> So you're telling me that Intel made a processor that, by design, could
> not service interrupts in a deterministic fashion? Hard to believe.

If we ignore bugs and errata (which we shouldn't), then the end result
is deterministic, but the time delay depends /significantly/ on the
current state of the processor, MMU and memory system. And all of those
are effectively unpredictable.

> Is that also the case for present-day Intel architectures?

Yes, in spades. The last Intel processor that made a nod to *hard*
realtime was the i860, which had a small instruction cache and an
instruction to lock down whatever was in the cache.
Reply by ●August 7, 2014
On 07/08/14 04:36, Randy Yates wrote:

> Randy Yates <yates@digitalsignallabs.com> writes:
<snip>
>> So you're telling me that Intel made a processor that, by design, could
>> not service interrupts in a deterministic fashion? Hard to believe.
>>
>> Is that also the case for present-day Intel architectures?
>
> I should add that real-time operation is therefore not possible on such
> processors, regardless of what operating system is used. This just
> doesn't sound right to me...

That depends on your requirements. Soft realtime certainly is possible.

For hard realtime you will have to determine the mean:max latency and
"derate" the processor appropriately. As I noted, you needed 10:1 for
the i486, and I have no idea whatsoever what you need for a current
Intel processor.

The problem is not confined to Intel; it *must* occur wherever there
are caches. After all, the whole point of caches is to speed things up
*on average*, so by definition there must be some sequences that perform
worse than average. Your job, for hard realtime systems, is to determine
the pessimal sequence :) (Optimal sequence be damned!)
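A rough way to put numbers on that mean:max ratio is a probe along
these lines, assuming a Linux box with CLOCK_MONOTONIC. The period and
iteration count are arbitrary, and cyclictest from the rt-tests suite
does the same job far more carefully.

/* Rough mean vs. worst-case wakeup-latency probe (illustrative only):
   ask for a fixed 100 us sleep many times and record how far past the
   deadline each wakeup actually lands. */
#include <time.h>
#include <stdio.h>
#include <stdint.h>

static int64_t ns_of(struct timespec t) {
    return (int64_t)t.tv_sec * 1000000000LL + t.tv_nsec;
}

int main(void)
{
    const long period_ns = 100000;          /* 100 us, arbitrary */
    const int iters = 100000;
    int64_t sum = 0, worst = 0;

    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int i = 0; i < iters; i++) {
        next.tv_nsec += period_ns;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);

        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        int64_t late = ns_of(now) - ns_of(next);   /* wakeup lateness */
        sum += late;
        if (late > worst)
            worst = late;
    }
    printf("mean lateness %lld ns, worst %lld ns\n",
           (long long)(sum / iters), (long long)worst);
    return 0;
}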







