
Linux question -- how to tell if serial port in /dev is for real?

Started by Tim Wescott August 4, 2014
On 07/08/14 14:42, upsidedown@downunder.com wrote:
> On Thu, 07 Aug 2014 12:53:24 +0200, David Brown
> <david.brown@hesbynett.no> wrote:
>
>> Another thing to remember in all this is that you do not have to prove
>> that your deadlines will be reached in 100% of cases.
>
> If it is not 100 %, then it is not hard real time.
By that definition, there is no such thing as "hard real time". If I ask you to build me a blinking LED with an absolute 100% guarantee that it will blink at least once every second, you cannot do so. You could give me MTBF estimates suggesting that the LED and the microcontroller /should/ have an expected lifetime of 10 years - but you can't guarantee it. And you can't guarantee that the device will not suffer from a single-event upset, or a hit from a cosmic ray, that will cause malfunction.

You might be able to prove that part of the system - the software code - is 100% good enough. But that only applies on the assumption that everything else, including the hardware and the development tools, is perfect.

Your job as a software engineer working on a "hard real time" system is to ensure that the contribution made to the expected failure rate as a result of the software is minor in comparison to other expected failure causes. When you reach the point that you can say "when the system fails to meet the deadlines, it is highly unlikely to be the fault of the software", then your software is "hard real time".

(Ideally, of course, the software for critical parts should be simple enough that you /are/ sure it will be within deadlines 100% of the time - given that everything else works according to specification. But that's just the ideal case.)
>> Perhaps 99.999% is good enough, or perhaps you need 7 nines. But your
>> task is never to aim for "perfect" - it is to be "good enough".
>
> That is by definition soft real time.
>
>> If you can provide statistical evidence that it is more likely for the
>> user to be killed by a meteorite than for a deadline to be missed, then
>> that is often good enough for the job. Of course you must be careful
>> doing this sort of thing - but there is always a balance to be struck
>> between the reliability of a system and the cost.
>
> If you really need HRT, you need to keep the system as simple as
> possible, in order to do meaningful worst case calculations.
David Brown <david.brown@hesbynett.no> writes:

> On 07/08/14 14:42, upsidedown@downunder.com wrote:
>> On Thu, 07 Aug 2014 12:53:24 +0200, David Brown
>> <david.brown@hesbynett.no> wrote:
>>
>>> Another thing to remember in all this is that you do not have to prove
>>> that your deadlines will be reached in 100% of cases.
>>
>> If it is not 100 %, then it is not hard real time.
>
> By that definition, there is no such thing as "hard real time". If I
> ask you to build me a blinking LED with an absolute 100% guarantee
> that it will blink at least once every second, you cannot do so. You
> could give me MTBF estimates suggesting that the LED and the
> microcontroller /should/ have an expected lifetime of 10 years - but
> you can't guarantee it. And you can't guarantee that the device will
> not suffer from a single-event upset, or a hit from a cosmic ray, that
> will cause malfunction.
>
> You might be able to prove that part of the system - the software code -
> is 100% good enough. But that only applies on the assumption that
> everything else, including the hardware and the development tools, is
> perfect.
>
> Your job as a software engineer working on a "hard real time" system
> is to ensure that the contribution made to the expected failure rate
> as a result of the software is minor in comparison to other expected
> failure causes. When you reach the point that you can say "when the
> system fails to meet the deadlines, it is highly unlikely to be the
> fault of the software", then your software is "hard real time".
>
> (Ideally, of course, the software for critical parts should be simple
> enough that you /are/ sure it will be within deadlines 100% of the
> time - given that everything else works according to specification.
> But that's just the ideal case.)
What you're arguing here are definitions. Sure, you are correct that you can never guarantee anything - a power outage totally hoses any hope of real-time. But I think most folks do not include such things in the hard real time definition.

--
Randy Yates
Digital Signal Labs
http://www.digitalsignallabs.com
On 07/08/14 14:01, David Brown wrote:
> On 07/08/14 14:42, upsidedown@downunder.com wrote:
>> On Thu, 07 Aug 2014 12:53:24 +0200, David Brown
>> <david.brown@hesbynett.no> wrote:
>>
>>> Another thing to remember in all this is that you do not have to prove
>>> that your deadlines will be reached in 100% of cases.
>>
>> If it is not 100 %, then it is not hard real time.
>
> By that definition, there is no such thing as "hard real time". If I ask
> you to build me a blinking LED with an absolute 100% guarantee that it
> will blink at least once every second, you cannot do so. You could give
> me MTBF estimates suggesting that the LED and the microcontroller
> /should/ have an expected lifetime of 10 years - but you can't guarantee
> it. And you can't guarantee that the device will not suffer from a
> single-event upset, or a hit from a cosmic ray, that will cause
> malfunction.
The blinking light will continue to blink providing something does not stop it operating normally. Caches, when operating /normally/, can cause a system to breach the guarantee. That's a fundamental difference, and one that lawyers would exploit if necessary!
On 07/08/14 16:41, Tom Gardner wrote:
> On 07/08/14 14:01, David Brown wrote:
>> On 07/08/14 14:42, upsidedown@downunder.com wrote:
>>> On Thu, 07 Aug 2014 12:53:24 +0200, David Brown
>>> <david.brown@hesbynett.no> wrote:
>>>
>>>> Another thing to remember in all this is that you do not have to prove
>>>> that your deadlines will be reached in 100% of cases.
>>>
>>> If it is not 100 %, then it is not hard real time.
>>
>> By that definition, there is no such thing as "hard real time". If I
>> ask you to build me a blinking LED with an absolute 100% guarantee
>> that it will blink at least once every second, you cannot do so. You
>> could give me MTBF estimates suggesting that the LED and the
>> microcontroller /should/ have an expected lifetime of 10 years - but
>> you can't guarantee it. And you can't guarantee that the device will
>> not suffer from a single-event upset, or a hit from a cosmic ray,
>> that will cause malfunction.
>
> The blinking light will continue to blink providing something does not
> stop it operating normally.
>
> Caches, when operating /normally/, can cause a system to breach the
> guarantee.
>
> That's a fundamental difference, and one that lawyers would exploit if
> necessary!
I agree that this is a fundamental difference - and you clearly have to take the cache's behaviour into account when determining whether you can be confident enough that your system will meet its deadlines.

In practice, people don't have trouble making real-time systems with microcontrollers with caches. It is an extra issue to consider and deal with, but it is perfectly possible. As with all real-time systems, you have to divide up tasks and figure out what is important, and how to be sure you can meet your deadlines, and perhaps move faster response tasks to hardware, dedicated microcontrollers, or whatever.

Other possible solutions include putting critical interrupt routines into uncached static RAM, or locking cache lines. Microcontrollers have such features precisely so that you can get the responses you need and still use cache. Desktop CPUs don't have such features - that is one of the reasons why they are unsuitable for hard real-time tasks.
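To make the first of those options concrete, here is a minimal sketch of placing an interrupt handler in uncached on-chip RAM with a GCC section attribute. The section name ".ramfunc", the register address and the handler name are placeholders: the real section name, the linker-script entry that maps it to SRAM/TCM, and the startup code that copies the section out of flash are all part- and toolchain-specific.

    /* Minimal sketch (GCC syntax).  Assumes the linker script provides a
     * ".ramfunc" output section placed in tightly-coupled / uncached SRAM
     * and that startup code copies it there - both are hypothetical here. */

    #include <stdint.h>

    /* Place the handler in the SRAM section instead of cached flash. */
    #define RAMFUNC __attribute__((section(".ramfunc"), noinline))

    /* Hypothetical timer status register address, for illustration only. */
    #define TIMER_STATUS (*(volatile uint32_t *)0x40001000u)

    volatile uint32_t tick_count;

    RAMFUNC void timer_irq_handler(void)
    {
        TIMER_STATUS = 1u;   /* acknowledge the interrupt (device-specific) */
        tick_count++;        /* short, bounded work - no calls into cached code */
    }

Cache-line or way locking, where a part supports it, is done through vendor-specific cache-controller registers rather than anything portable, so there is no equally generic snippet for that approach.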
On 14-08-07 10:37 , Tom Gardner wrote:
> On 07/08/14 04:36, Randy Yates wrote:
>> Randy Yates <yates@digitalsignallabs.com> writes:
>>
>>> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>>>
>>>> On 06/08/14 22:31, Randy Yates wrote:
>>>>> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>>>>>
>>>>>> On 06/08/14 20:56, Jack wrote:
>>>>>>> Paul Rubin <no.email@nospam.invalid> wrote:
>>>>>>>
>>>>>>>> Rob Gaddi <rgaddi@technologyhighland.invalid> writes:
>>>>>>>>> How do you guarantee microsecond level response from Python (and I
>>>>>>>>> assume Linux)?
>>>>>>>>
>>>>>>>> Linux has a realtime scheduler but guaranteeing microsecond response
>>>>>>>> is not realistic because of nondeterministic cache misses and that
>>>>>>>> sort of thing. For soft realtime maybe it's feasible. Milliseconds
>>>>>>>> are easier than microseconds of course.
>>>>>>>
>>>>>>> or you use something like Linux RTAI that gives you hard real time.
>>>>>>
>>>>>> .. providing, of course, the processor has neither instruction nor
>>>>>> data caches. If either are present then the ratio of mean:max
>>>>>> latency rapidly becomes very significant.
>>>>>>
>>>>>> Even a 486 with its tiny caches showed a 10:1 interrupt latency
>>>>>> depending on what was/wasn't in the caches. (IIRC that was measured
>>>>>> with a tiny kernel, certainly nothing like the size/complexity
>>>>>> of a linux kernel)
>>>>>
>>>>> Aren't interrupt routines in some permanently-cached portion of the
>>>>> MMU?
>>>>
>>>> No, and once an MMU is involved all the paging information
>>>> might or might not be cached. Double whammy.
>>>
>>> So you're telling me that Intel made a processor that, by design, could
>>> not service interrupts in a deterministic fashion? Hard to believe.
>>>
>>> Is that also the case for the present-day Intel architectures?
>>
>> I should add that real-time operation is therefore not possible on such
>> processors, regardless of what operating system is used. This just
>> doesn't sound right to me...
>
> That depends on your requirements. Soft realtime certainly is
> possible. For hard realtime then you will have to determine the
> mean:max latency and "derate" the processor appropriately.
>
> As I noted, you needed 10:1 for the i486, and I have
> no idea whatsoever what you need for a current Intel
> processor.
>
> The problem is not confined to Intel; it *must* occur wherever
> there are caches. After all, the whole point of caches is to
> speed up things *on average*, so by definition there must be
> some sequences that perform worse than average.
>
> Your job, for hard realtime systems, is to determine the
> pessimal sequence :) (Optimal sequence be damned!)
This is attempted by static WCET (Worst-Case Execution-Time) analysis tools such as aiT from AbsInt (www.absint.com). Works IMO pretty well for instruction caches, less so for data caches (that is, you get a considerable over-estimate in WCET), but much depends on the regularity and complexity of the program. Preemptive scheduling is also a bit of a problem.

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi . @ .
On 14-08-07 15:33 , upsidedown@downunder.com wrote:
> On Thu, 07 Aug 2014 11:35:48 +0100, Tom Gardner
> <spamjunk@blueyonder.co.uk> wrote:
>
>> On 07/08/14 10:18, upsidedown@downunder.com wrote:
>>> On Thu, 07 Aug 2014 08:37:26 +0100, Tom Gardner
>>> <spamjunk@blueyonder.co.uk> wrote:
[snip]
>>> The only interesting thing is that the worst case execution time is
>>> _below_ the deadline time.
>>
>> Of course. Now /prove/ the worst case timing when caches
>> are operating.
>
> Are you saying that there are braindead processors that are slower
> when caches are enabled compared to situations in which all caches are
> disabled ? I guess that must be quite pathological cases :-).
There are certainly processors in which a cache miss at a certain point in a program leads to an overall faster execution of the program than if a cache miss occurs at that point. The reason is often that the cache hit lets the processor execute more things speculatively, and if the speculation turns out not to be needed (for example, a branch prediction was wrong) then the speculation, and its effects on the caches etc., may cause more delay than the cache miss would have caused.

In the WCET analysis community, such cases are known as "timing anomalies" and they are the bane of static WCET analysis, because their presence means that the analysis cannot make worst-case assumptions at each point in the program, but must analyse many, many possible cases and combinations.

There are also programs (at least constructed examples) which have almost no cache hits. For some processors, enabling the cache (or including a cache in the HDL model) makes cache misses more expensive than cache-less main memory accesses, because one or a few cycles are used in the cache look-up before the miss is detected and a main memory access is started. Then, for programs which have few cache hits, execution with a cache can be slower than execution without a cache. But that is of course not true for the "average program", whatever that means.

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi . @ .
On 14-08-07 23:28 , Niklas Holsti wrote:
> On 14-08-07 15:33 , upsidedown@downunder.com wrote:
>> On Thu, 07 Aug 2014 11:35:48 +0100, Tom Gardner
>> <spamjunk@blueyonder.co.uk> wrote:
>>
>>> On 07/08/14 10:18, upsidedown@downunder.com wrote:
>>>> On Thu, 07 Aug 2014 08:37:26 +0100, Tom Gardner
>>>> <spamjunk@blueyonder.co.uk> wrote:
>
> [snip]
>
>>>> The only interesting thing is that the worst case execution time is
>>>> _below_ the deadline time.
>>>
>>> Of course. Now /prove/ the worst case timing when caches
>>> are operating.
>>
>> Are you saying that there are braindead processors that are slower
>> when caches are enabled compared to situations in which all caches are
>> disabled ? I guess that must be quite pathological cases :-).
>
> There are certainly processors in which a cache miss at a certain point
> in a program leads to an overall faster execution of the program than if
> a cache miss occurs at that point.
Whoops - in the last quoted line, "than if a cache miss occurs" should of course have read "than if a cache hit occurs". I intended to write "hit" there...

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi . @ .
On Wed, 06 Aug 2014 23:31:57 -0500, Les Cargill
<lcargill99@comcast.com> wrote:

> upsidedown@downunder.com wrote:
>> On Wed, 06 Aug 2014 13:08:32 +0800, Reinhardt Behm
> <snip>
>>
>> The only reason I like the Linux approach is when a large number of
>> serial lines (more than a few dozen) are needed or there is a need to
>> wait for a mix of real serial ports and serial ports connected through
>> ethernet/serial converters (using TCP/UDP sockets), this can all be
>> done in a single thread with a single select statement.
>>
>
> The MingW GCC port I use for 'C' programming on Windows has a
> "winsock.h" and a "winsock2.h" that both offer select().
>
> I presume it's available for Microsoft toolchains but have no way of
> finding out for sure.
Winsock is a DLL available to any Microsoft application.

However, Winsock's select() only works with sockets and not also with files, serial ports, etc. as it does in Unix/Linux.

The Windows equivalent of Unix's select() is WaitForMultipleObjects(), and to use it you have to use the asynchronous event APIs for all the "objects" involved. It isn't difficult really, but it is quite different from Unix where much of the complexity is hidden.

George
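For comparison, the single-threaded Linux pattern quoted above - one select() call waiting on both a local serial port and a TCP connection to an Ethernet/serial converter - looks roughly like this. A minimal sketch only: the device path, IP address and TCP port are placeholders, and termios configuration and error handling are omitted.

    /* Minimal sketch: wait on a local serial port and a TCP socket to a
     * (hypothetical) Ethernet/serial converter with one select() call.
     * /dev/ttyS0, 192.168.1.50 and port 4001 are placeholders. */

    #include <sys/types.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        int serial_fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);

        int sock_fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4001);
        inet_pton(AF_INET, "192.168.1.50", &addr.sin_addr);
        connect(sock_fd, (struct sockaddr *)&addr, sizeof addr);

        for (;;) {
            fd_set readfds;
            FD_ZERO(&readfds);
            FD_SET(serial_fd, &readfds);
            FD_SET(sock_fd, &readfds);
            int nfds = (serial_fd > sock_fd ? serial_fd : sock_fd) + 1;

            if (select(nfds, &readfds, NULL, NULL, NULL) < 0)
                break;                      /* error or signal */

            char buf[256];
            if (FD_ISSET(serial_fd, &readfds)) {
                ssize_t n = read(serial_fd, buf, sizeof buf);
                if (n > 0)
                    printf("serial port: %zd bytes\n", n);
            }
            if (FD_ISSET(sock_fd, &readfds)) {
                ssize_t n = read(sock_fd, buf, sizeof buf);
                if (n > 0)
                    printf("converter socket: %zd bytes\n", n);
            }
        }
        return 0;
    }

Scaling this to a few dozen ports is just a matter of keeping the descriptors in an array and rebuilding the fd_set on each pass (or using poll() to avoid FD_SETSIZE limits).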
On 07/08/14 20:07, David Brown wrote:
> Other possible solutions include putting critical interrupt routines
> into uncached static RAM, or locking cache lines. Microcontrollers have
> such features precisely so that you can get the responses you need and
> still use cache. Desktop CPUs don't have such features - that is one of
> the reasons why they are unsuitable for hard real-time tasks.
That's interesting and useful. I haven't come across any recently for the simple reason that I haven't needed to look for one. Which processors would you favour, in the absence of any other information?
George Neuner wrote:
> On Wed, 06 Aug 2014 23:31:57 -0500, Les Cargill
> <lcargill99@comcast.com> wrote:
>
>> upsidedown@downunder.com wrote:
>>> On Wed, 06 Aug 2014 13:08:32 +0800, Reinhardt Behm
>> <snip>
>>>
>>> The only reason I like the Linux approach is when a large number of
>>> serial lines (more than a few dozen) are needed or there is a need to
>>> wait for a mix of real serial ports and serial ports connected through
>>> ethernet/serial converters (using TCP/UDP sockets), this can all be
>>> done in a single thread with a single select statement.
>>>
>>
>> The MingW GCC port I use for 'C' programming on Windows has a
>> "winsock.h" and a "winsock2.h" that both offer select().
>>
>> I presume it's available for Microsoft toolchains but have no way of
>> finding out for sure.
>
> Winsock is a DLL available to any Microsoft application.
I figured as much.
> However, Winsock's select() only works with sockets and not also with
> files, serial ports, etc. as it does in Unix/Linux.
Ach. I was afraid of that. This being said, an enterprising person might write something to map a serial port to a socket using the programming language of their choice - sort of an internal terminal server.
> The Windows equivalent of Unix's select() is WaitForMultipleObjects(),
> and to use it you have to use the asynchronous event APIs for all the
> "objects" involved. It isn't difficult really, but it is quite
> different from Unix where much of the complexity is hidden.
Yep.
> George
-- Les Cargill
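The "internal terminal server" idea above is easy to sketch. The following is a rough, assumption-laden outline for MinGW/Winsock: it bridges a COM port (COM3, a placeholder) to a single loopback TCP connection on port 15000 (also a placeholder), so the rest of the program can put that socket into its Winsock select() set like any other socket. Error handling, COM-port configuration (baud rate etc.) and clean shutdown are omitted.

    /* Rough sketch: bridge \\.\COM3 to one loopback TCP connection so that
     * Winsock select() can wait on the serial data indirectly.
     * COM3 and port 15000 are placeholders; no error handling, no baud/DCB
     * setup, no shutdown logic.  Build with MinGW: gcc bridge.c -lws2_32 */

    #include <winsock2.h>
    #include <windows.h>

    static HANDLE com;
    static SOCKET peer;

    static DWORD WINAPI com_to_sock(LPVOID arg)
    {
        char buf[256];
        DWORD n;
        (void)arg;
        while (ReadFile(com, buf, sizeof buf, &n, NULL) && n > 0)
            send(peer, buf, (int)n, 0);
        return 0;
    }

    static DWORD WINAPI sock_to_com(LPVOID arg)
    {
        char buf[256];
        DWORD written;
        int n;
        (void)arg;
        while ((n = recv(peer, buf, sizeof buf, 0)) > 0)
            WriteFile(com, buf, (DWORD)n, &written, NULL);
        return 0;
    }

    int main(void)
    {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        com = CreateFileA("\\\\.\\COM3", GENERIC_READ | GENERIC_WRITE,
                          0, NULL, OPEN_EXISTING, 0, NULL);

        /* Return from ReadFile with whatever has arrived once the line
           goes quiet for 50 ms, instead of waiting for a full buffer. */
        COMMTIMEOUTS to = {0};
        to.ReadIntervalTimeout = 50;
        SetCommTimeouts(com, &to);

        /* Accept exactly one loopback connection from the main program. */
        SOCKET lsn = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in a = {0};
        a.sin_family = AF_INET;
        a.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        a.sin_port = htons(15000);
        bind(lsn, (struct sockaddr *)&a, sizeof a);
        listen(lsn, 1);
        peer = accept(lsn, NULL, NULL);

        /* Pump bytes in both directions until either side closes. */
        HANDLE t1 = CreateThread(NULL, 0, com_to_sock, NULL, 0, NULL);
        HANDLE t2 = CreateThread(NULL, 0, sock_to_com, NULL, 0, NULL);
        WaitForSingleObject(t1, INFINITE);
        WaitForSingleObject(t2, INFINITE);
        return 0;
    }

The main program would then connect() to 127.0.0.1:15000 and treat that socket exactly like the Ethernet/serial-converter sockets in the earlier select() sketch.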