On 08/08/14 00:30, Tom Gardner wrote:
> On 07/08/14 20:07, David Brown wrote:
>> Other possible solutions include putting critical interrupt routines
>> into uncached static ram, or locking cache lines. Microcontrollers
>> have such features precisely so that you can get the responses you
>> need and still use cache. Desktop cpus don't have such features -
>> that is one of the reasons why they are unsuitable for hard
>> real-time tasks.
>
> That's interesting and useful.
>
> I haven't come across any recently for the simple reason that I haven't
> needed to look for one. Which processors would you favour, in the
> absence of any other information?

The only microcontroller I have used with a cache in which I wanted to
be particularly careful about fast responses was a Freescale MPC5674F.
This has two cpus (which can be run independently, or in lock-step),
each with their own cache. It could hardly be called easy to use, but it
is a very powerful device. Cache lines can be locked in different ways,
and you can do all sorts of fiddling with different rules for different
memory areas (such as making some parts uncached, some parts write-back,
and some parts write-through).

For more "normal" microcontrollers, such as fast M3/M4 cores with
caches, you won't usually get quite as many features like that. But you
will always have a solid chunk of static RAM (perhaps /all/ the onboard
ram) that can be accessed quickly without caching - you put your
critical routines and data there, and enable caching for everything else
(in flash, off-chip ram, etc.).

Higher-end micros such as those with Cortex-R or PPC cores (like the
MPC5674F) will also have some fast-access ram even if the main onboard
ram is slower than the cpu. Sometimes this will be combined with the
cache - you can configure all or some of the cache to be static ram.
(Actually, I believe that is possible on at least some x86 cpus - I
don't know details, but I have heard of them being used without any
external memory!). Faster ARM cores also often have "tightly coupled"
memories, that can be used for this sort of critical code and data.
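With a GCC-style toolchain, the usual mechanics for pinning a routine and
its data into such a fast uncached RAM block is a section attribute plus a
matching region in the linker script. A minimal sketch - the section
names, the peripheral register address and the startup copy step are all
assumptions that depend on the particular part and BSP:

#include <cstdint>

// Data the handler touches, kept in fast on-chip (uncached / tightly
// coupled) RAM.  ".tcm_data" is an assumed name from the linker script.
__attribute__((section(".tcm_data")))
static volatile uint32_t sample_buffer[64];

// The time-critical handler itself, linked into the fast RAM section so
// its fetch time does not depend on cache state.  Some toolchains also
// need startup code that copies this section from flash into RAM.
__attribute__((section(".tcm_code"), used))
void critical_timer_isr(void)
{
    static uint32_t index = 0;

    // Hypothetical memory-mapped peripheral register, for illustration.
    volatile uint32_t* const ADC_RESULT =
        reinterpret_cast<volatile uint32_t*>(0x40000000u);

    sample_buffer[index & 63u] = *ADC_RESULT;  // deterministic access time
    ++index;
}

Everything else in the image stays cached as normal; only the handler and
the buffer it touches are placed explicitly.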
Linux question -- how to tell if serial port in /dev is for real?
Started by ●August 4, 2014
Reply by ●August 8, 2014
Reply by ●August 8, 2014
On Thu, 07 Aug 2014 23:14:05 +0300, Niklas Holsti
<niklas.holsti@tidorum.invalid> wrote:

>On 14-08-07 10:37 , Tom Gardner wrote:
>> Your job, for hard realtime systems, is to determine the
>> pessimal sequence :) (Optimal sequence be damned!)
>
>This is attempted by static WCET (Worst-Case Execution-Time) analysis
>tools such as aiT from AbsInt (www.absint.com).
>
>Works IMO pretty well for instruction caches, less so for data caches
>(that is, you get a considerable over-estimate in WCET), but much
>depends on the regularity and complexity of the program. Preemptive
>scheduling is also a bit of a problem.

Is significant overestimation really that bad a thing?

It is of course a bad thing if you ship millions of units a year, but
assuming hundreds or a few thousand units a year, this is not so
significant. For instance, the possibility to use the same hardware as
used in non-HRT applications simplifies the logistics.
Reply by ●August 8, 2014
On 08/08/14 08:20, upsidedown@downunder.com wrote:
> On Thu, 07 Aug 2014 23:14:05 +0300, Niklas Holsti
> <niklas.holsti@tidorum.invalid> wrote:
>
>> On 14-08-07 10:37 , Tom Gardner wrote:
>
>>> Your job, for hard realtime systems, is to determine the
>>> pessimal sequence :) (Optimal sequence be damned!)
>>
>> This is attempted by static WCET (Worst-Case Execution-Time) analysis
>> tools such as aiT from AbsInt (www.absint.com).
>>
>> Works IMO pretty well for instruction caches, less so for data caches
>> (that is, you get a considerable over-estimate in WCET), but much
>> depends on the regularity and complexity of the program. Preemptive
>> scheduling is also a bit of a problem.
>
> Is significant overestimation really that bad a thing?

For hard real time, any estimation is A Bad Thing :)

> It is of course a bad thing if you ship millions of units a year, but
> assuming hundreds or a few thousand units a year, this is not so
> significant. For instance, the possibility to use the same hardware as
> used in non-HRT applications simplifies the logistics.

Agreed, subject to the above.
Reply by ●August 8, 2014
Niklas Holsti <niklas.holsti@tidorum.invalid> writes:
> This is attempted by static WCET (Worst-Case Execution-Time) analysis
> tools such as aiT from AbsInt (www.absint.com).
> Works IMO pretty well for instruction caches, less so for data caches

We're talking about Linux, which means there's not just caches, but also
an MMU, preemptive multitasking, etc. I think microsecond HRT in this
environment is simply not on the menu. The Beaglebone Black has a pair
of realtime coprocessors built into the main CPU chip because of that.
Reply by ●August 8, 2014
On Thu, 07 Aug 2014 23:28:03 +0300, Niklas Holsti
<niklas.holsti@tidorum.invalid> wrote:

>On 14-08-07 15:33 , upsidedown@downunder.com wrote:
>> On Thu, 07 Aug 2014 11:35:48 +0100, Tom Gardner
>> <spamjunk@blueyonder.co.uk> wrote:
>>
>>> On 07/08/14 10:18, upsidedown@downunder.com wrote:
>>>> On Thu, 07 Aug 2014 08:37:26 +0100, Tom Gardner
>>>> <spamjunk@blueyonder.co.uk> wrote:
>
> [snip]
>
>>>> The only interesting thing is that the worst case execution time is
>>>> _below_ the deadline time.
>>>
>>> Of course. Now /prove/ the worst case timing when caches
>>> are operating.
>>
>> Are you saying that there are braindead processors that are slower
>> when caches are enabled compared to situations in which all caches are
>> disabled ? I guess that must be quite pathological cases :-).
>
>There are certainly processors in which a cache miss at a certain point
>in a program leads to an overall faster execution of the program than if
>a cache miss occurs at that point.
>>       ^^^^
>>Whoops, I intended to write "hit" there...
>
>The reason is often that the cache hit lets the processor execute more
>things speculatively, and if the speculation turns out not to be needed
>(for example, a branch prediction was wrong) then the speculation, and
>its effects on the caches etc., may cause more delay than the cache miss
>would have caused.

Usually the main memory (or at least the memory interface bandwidth) is
very slow compared to cache and processor cycles. If dynamic RAM is used,
loading a cache line would typically mean 1 x RAS cycle + n x CAS cycles.
Depending on memory bus width and hence the size of "n", this will take
a while.

By pessimistically assuming that any memory byte access would cause a
full DRAM cycle, you should be on the safe side, compared to any
speculative execution issues. Of course, if instructions are always on a
word boundary and Word/DWord data accesses are properly aligned, you
could use a single full RAS/CAS sequence time for Word/DWord accesses.

>In the WCET analysis community, such cases are known as "timing
>anomalies" and they are the bane of static WCET analysis, because their
>presence means that the analysis cannot make worst-case assumptions at
>each point in the program, but must analyse many, many possible cases
>and combinations.

For any kind of WCET analysis, you really need some kind of analysis
program these days. I have done pre-emptive scheduler task switching
worst case performance analysis for the 6502/6800/6809 using manual (pen
and paper) methods. Anything more complex and you can't do that analysis
by hand :-).

>There are also programs (at least constructed examples) which have
>almost no cache hits. For some processors, enabling the cache (or
>including a cache in the HDL model) makes cache misses more expensive
>than cache-less main memory accesses because one or a few cycles are
>used in the cache look-up before the miss is detected and a main memory
>access is started. Then, for programs which have few cache hits,
>execution with a cache can be slower than execution without a cache.

Then use the cache lookup plus the RAS/CAS sequence time for each memory
access.

>But that is of course not true for the "average program", whatever that
>means.

One should remember that in a soft/hard RT environment, you really do
not want to load the CPU close to 100 %. For soft RT, I would consider
anything above 60 % short-time (1 s) average load as overloading.
If you have multiple RT tasks at different priorities, you can reliably
predict only the highest priority task latencies (based on interrupt and
kernel scheduler latencies). The latencies for the next highest task
depend not only on those latencies but also on the worst execution time
of the highest priority task. In practice, you can have only one HRT
task and multiple soft-RT tasks below it, unless you do a worst case
execution time analysis after each HRT task software update.
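That knock-on effect is what classical fixed-priority response-time
analysis captures: each task's worst-case response time folds in the
worst-case execution times of everything above it, R_i = C_i + sum over
higher-priority j of ceil(R_i / T_j) * C_j. A toy sketch of the usual
fixed-point iteration, with an invented three-task set standing in for
real WCET and period numbers:

#include <cmath>
#include <cstdio>
#include <vector>

struct Task { double C; double T; };  // WCET and period (ms), highest priority first

int main()
{
    std::vector<Task> tasks = {
        {0.2, 1.0},    // HRT task: 0.2 ms WCET every 1 ms
        {1.0, 5.0},    // next priority down
        {3.0, 20.0},   // background soft-RT task
    };

    for (std::size_t i = 0; i < tasks.size(); ++i) {
        double R = tasks[i].C, prev = 0.0;
        while (R != prev && R <= tasks[i].T) {      // fixed-point iteration
            prev = R;
            R = tasks[i].C;
            for (std::size_t j = 0; j < i; ++j)     // interference from higher priorities
                R += std::ceil(prev / tasks[j].T) * tasks[j].C;
        }
        std::printf("task %zu: worst-case response %.2f ms (deadline %.2f ms) %s\n",
                    i, R, tasks[i].T, R <= tasks[i].T ? "OK" : "MISS");
    }
    return 0;
}

The point above falls straight out of the loop: change the top task's C
and every response time below it moves, which is why each HRT software
update really does force the analysis to be redone.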
Reply by ●August 8, 2014
On Fri, 08 Aug 2014 01:09:06 -0700, Paul Rubin
<no.email@nospam.invalid> wrote:

>Niklas Holsti <niklas.holsti@tidorum.invalid> writes:
>> This is attempted by static WCET (Worst-Case Execution-Time) analysis
>> tools such as aiT from AbsInt (www.absint.com).
>> Works IMO pretty well for instruction caches, less so for data caches
>
>We're talking about Linux, which means there's not just caches, but also
>an MMU, preemptive multitasking, etc. I think microsecond HRT in this
>environment is simply not on the menu. The Beaglebone Black has a pair
>of realtime coprocessors built into the main CPU chip because of that.

Most RT extensions are actually true RT kernels and you can put Linux,
Windows etc. desktop operating systems into the NULL task to consume CPU
cycles not needed by RT tasks. Of course, this Linux/Windows NULL task
will schedule various applications based on their internal scheduling
algorithm, such as priority based or even time sharing scheduling
(nice). Of course, the RT kernel does not know anything about these
low-priority activities.
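The other route, staying inside Linux itself rather than under a
co-kernel, is to lock memory and run the time-critical thread under
SCHED_FIFO, accepting whatever latency the kernel (stock or PREEMPT_RT)
can deliver - soft rather than hard RT. A minimal sketch of a periodic
task done that way; the priority and the 1 ms period are arbitrary
example values:

#include <cstdio>
#include <sched.h>
#include <sys/mman.h>
#include <time.h>

int main()
{
    // Keep page faults out of the critical path.
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        perror("mlockall");

    // Real-time priority; needs root or CAP_SYS_NICE.
    sched_param sp{};
    sp.sched_priority = 80;
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");

    timespec next{};
    clock_gettime(CLOCK_MONOTONIC, &next);
    const long period_ns = 1000000L;          // 1 ms period (example value)

    for (int i = 0; i < 1000; ++i) {
        next.tv_nsec += period_ns;
        if (next.tv_nsec >= 1000000000L) {    // normalise the timespec
            next.tv_sec += 1;
            next.tv_nsec -= 1000000000L;
        }

        /* ... time-critical work goes here ... */

        // Sleep until the absolute deadline, so jitter does not accumulate.
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, nullptr);
    }
    return 0;
}

The catch is exactly the one raised below: the worst case still hinges on
the longest stretch for which the kernel keeps preemption or interrupts
disabled.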
Reply by ●August 8, 2014
upsidedown@downunder.com writes:

> On Fri, 08 Aug 2014 01:09:06 -0700, Paul Rubin
> <no.email@nospam.invalid> wrote:
>
>>Niklas Holsti <niklas.holsti@tidorum.invalid> writes:
>>> This is attempted by static WCET (Worst-Case Execution-Time) analysis
>>> tools such as aiT from AbsInt (www.absint.com).
>>> Works IMO pretty well for instruction caches, less so for data caches
>>
>>We're talking about Linux, which means there's not just caches, but also
>>an MMU, preemptive multitasking, etc. I think microsecond HRT in this
>>environment is simply not on the menu. The Beaglebone Black has a pair
>>of realtime coprocessors built into the main CPU chip because of that.
>
> Most RT extensions are actually true RT kernels and you can put Linux,
> Windows etc. desktop operating systems into the NULL task to consume
> CPU cycles not needed by RT tasks.

My first thought on this was, "Yeah! That's a cool way to crack this
nut." But what about the tasks in the NULL task (i.e., kernel tasks)
that disable interrupts? One of the requirements for hard real-time is
that there is an application-specific limit on the maximum time
interrupts can be disabled.
--
Randy Yates
Digital Signal Labs
http://www.digitalsignallabs.com
Reply by ●August 8, 2014
On Mon, 04 Aug 2014 19:39:12 -0500, Tim Wescott wrote:

> OK, wrong group. But you guys are smart, and mostly aren't snobs.
>
> So -- is there a way to tell if a device in /dev is for real? There's
> about a bazzilion /dev/tty<this and that> in my computer at the moment,
> but there's only one (/dev/ttyUSB0) that's actually connected to a
> working serial port.
>
> I would like to know how to know.
>
> Thanks.

FWIW, I've just started using the Qt Serial Port library, and it has
some magic that does this. It apparently uses the udev library to
interrogate for serial ports.

At this point I don't know or care about the details: as long as the
magic works, I'll be a happy, ignorant magic-user. (Linux just has too
many layers between me and the hardware, and I just don't CARE how it's
done, as long as it works).
--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com
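For anyone who does want to look behind the magic, the call doing the
work is QSerialPortInfo::availablePorts() in the QtSerialPort module
(Qt 5.1 and later; QT += serialport in the .pro file). A minimal sketch
that just lists what Qt thinks is really there:

#include <QCoreApplication>
#include <QDebug>
#include <QSerialPortInfo>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    // Enumerate the ports the platform backend (udev on Linux) reports.
    const auto ports = QSerialPortInfo::availablePorts();
    for (const QSerialPortInfo &info : ports) {
        qDebug() << info.portName()          // e.g. "ttyUSB0"
                 << info.systemLocation()    // e.g. "/dev/ttyUSB0"
                 << info.description()
                 << info.manufacturer()
                 << (info.hasVendorIdentifier()
                         ? QString::number(info.vendorIdentifier(), 16)
                         : QStringLiteral("no VID"));
    }
    return 0;   // enumerate and exit; no event loop needed
}

Without Qt, a rough equivalent heuristic on Linux is to check which
entries under /sys/class/tty/ have a device/ symlink (which is
essentially the information udev exposes), though built-in UARTs like
ttyS0 can still show up whether or not anything is actually wired to
them.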
Reply by ●August 8, 2014
Tim Wescott <tim@seemywebsite.please> writes:

> On Mon, 04 Aug 2014 19:39:12 -0500, Tim Wescott wrote:
>
>> OK, wrong group. But you guys are smart, and mostly aren't snobs.
>>
>> So -- is there a way to tell if a device in /dev is for real? There's
>> about a bazzilion /dev/tty<this and that> in my computer at the moment,
>> but there's only one (/dev/ttyUSB0) that's actually connected to a
>> working serial port.
>>
>> I would like to know how to know.
>>
>> Thanks.
>
> FWIW, I've just started using the Qt Serial Port library, and it has
> some magic that does this. It apparently uses the udev library to
> interrogate for serial ports.
>
> At this point I don't know or care about the details: as long as the
> magic works, I'll be a happy, ignorant magic-user. (Linux just has too
> many layers between me and the hardware, and I just don't CARE how it's
> done, as long as it works).

Tim,

I'm glad to see someone else here praising Qt. I've been using it for a
few months and find it absolutely wonderful (98 percent of the time...).

E.g., a while back I used it for its audio interface abstractions. They
worked on my desktop system and a Beagle Bone Black with a Sabre USB
connected.

I've also written a couple of database-centric utility apps for my wife
and me around the house. They work beautifully!

Overall it's a powerful way to generate user interfaces and its
abstractions save a lot of time.

/end{QtDrumBeating}
--
Randy Yates
Digital Signal Labs
http://www.digitalsignallabs.com
Reply by ●August 8, 2014
On Fri, 08 Aug 2014 12:28:22 -0400, Randy Yates wrote:

> Tim Wescott <tim@seemywebsite.please> writes:
>
>> On Mon, 04 Aug 2014 19:39:12 -0500, Tim Wescott wrote:
>>
>>> OK, wrong group. But you guys are smart, and mostly aren't snobs.
>>>
>>> So -- is there a way to tell if a device in /dev is for real? There's
>>> about a bazzilion /dev/tty<this and that> in my computer at the
>>> moment, but there's only one (/dev/ttyUSB0) that's actually connected
>>> to a working serial port.
>>>
>>> I would like to know how to know.
>>>
>>> Thanks.
>>
>> FWIW, I've just started using the Qt Serial Port library, and it has
>> some magic that does this. It apparently uses the udev library to
>> interrogate for serial ports.
>>
>> At this point I don't know or care about the details: as long as the
>> magic works, I'll be a happy, ignorant magic-user. (Linux just has too
>> many layers between me and the hardware, and I just don't CARE how it's
>> done, as long as it works).
>
> Tim,
>
> I'm glad to see someone else here praising Qt. I've been using it for a
> few months and find it absolutely wonderful (98 percent of the time...).
>
> E.g., a while back I used it for its audio interface abstractions. They
> worked on my desktop system and a Beagle Bone Black with a Sabre USB
> connected.
>
> I've also written a couple of database-centric utility apps for my wife
> and me around the house. They work beautifully!
>
> Overall it's a powerful way to generate user interfaces and its
> abstractions save a lot of time.
>
> /end{QtDrumBeating}

On the one hand it's bloatware.

On the other hand, if I'm careful about how I write things I can write
PC-side software using Qt for the GUI and a boatload of software in the
middle that also gets compiled into stuff that's embedded into customer
products and have not a whiff of Qt about them. So it works well for me.

I think that if I were _just_ writing for the PC I'd use Java or Python
or something like that -- but I'm not, so I don't.
--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com







