Tim Wescott <seemywebsite@myfooter.really> writes:
> [...]
> On the one hand it's bloatware.

How so? True, you're not going to get 2k executables, but in the days of 2TB drives, who gives a rat's behind?
--
Randy Yates
Digital Signal Labs
http://www.digitalsignallabs.com
Linux question -- how to tell if serial port in /dev is for real?
Started by ●August 4, 2014
Reply by ●August 8, 2014
Reply by ●August 8, 2014
On Fri, 08 Aug 2014 13:33:42 -0400, Randy Yates wrote:
> Tim Wescott <seemywebsite@myfooter.really> writes:
>> [...]
>> On the one hand it's bloatware.
>
> How so? True, you're not going to get 2k executables, but in the days of
> 2TB drives, who gives a rat's behind?

The size of the created file, and the thought of all the signals and pointers and whatnot going on behind the scenes just to say "Hello World", pains my aesthetic sensibilities.

Who _should_ give a rat's behind? Probably no one. But I never claimed to be rational when it comes to my aesthetic sensibilities.
--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
Reply by ●August 8, 2014
On 08/08/14 09:35, upsidedown@downunder.com wrote:
> On Fri, 08 Aug 2014 01:09:06 -0700, Paul Rubin
> <no.email@nospam.invalid> wrote:
>
>> Niklas Holsti <niklas.holsti@tidorum.invalid> writes:
>>> This is attempted by static WCET (Worst-Case Execution-Time) analysis
>>> tools such as aiT from AbsInt (www.absint.com).
>>> Works IMO pretty well for instruction caches, less so for data caches
>>
>> We're talking about Linux, which means there's not just caches, but also
>> an MMU, preemptive multitasking, etc. I think microsecond HRT in this
>> environment is simply not on the menu. The Beaglebone Black has a pair
>> of realtime coprocessors built into the main CPU chip because of that.
>
> Most RT extensions are actually true RT kernels and you can put Linux,
> Windows etc. desktop operating systems into the NULL task to consume
> CPU cycles not needed by RT tasks.
>
> Of course, this Linux/Windows NUL task will schedule various
> applications based of their internal scheduling algorithm, such as
> priority based or even time sharing scheduling (nice). Of course, the
> RT kernel does not know anything about this low priority activities.

Personally I'd prefer to have two processors such as the dual cores in a Xilinx Zynq running different operating systems, one a "real" RTOS and the other Linux. Communication between the two would be via trivial custom hardware in the FPGA fabric. Apparently that's possible, but I haven't done it (yet?). Have to check about memory contention, though.

That's presumably equivalent to the coprocessors in the Beaglebone Black.
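The kind of link described above needn't be elaborate: a block of on-chip or fabric-attached memory with a one-way mailbox in each direction is enough. Below is a minimal sketch of one direction; all names are invented, and where the shared block lives and how each side maps it (uncached or coherently) are assumptions the two sides would have to agree on.

/* Single-producer/single-consumer mailbox in shared memory (sketch).
 * On a Zynq-like part the struct could live in on-chip memory or a
 * fabric-attached BRAM that both the RTOS core and the Linux core map
 * at an agreed address.
 */
#include <stdint.h>
#include <stdatomic.h>

#define MBOX_SLOTS 16u                          /* power of two */

struct mbox {
    _Atomic uint32_t head;                      /* written only by producer */
    _Atomic uint32_t tail;                      /* written only by consumer */
    uint32_t slot[MBOX_SLOTS];
};

/* Producer side (say, the RTOS core).  Returns 0 if the mailbox is full. */
static int mbox_put(struct mbox *m, uint32_t msg)
{
    uint32_t head = atomic_load_explicit(&m->head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit(&m->tail, memory_order_acquire);
    if (head - tail == MBOX_SLOTS)
        return 0;                               /* full */
    m->slot[head % MBOX_SLOTS] = msg;
    atomic_store_explicit(&m->head, head + 1, memory_order_release);
    return 1;
}

/* Consumer side (say, the Linux core, via /dev/mem or a small driver). */
static int mbox_get(struct mbox *m, uint32_t *msg)
{
    uint32_t tail = atomic_load_explicit(&m->tail, memory_order_relaxed);
    uint32_t head = atomic_load_explicit(&m->head, memory_order_acquire);
    if (tail == head)
        return 0;                               /* empty */
    *msg = m->slot[tail % MBOX_SLOTS];
    atomic_store_explicit(&m->tail, tail + 1, memory_order_release);
    return 1;
}

Since each index is written by exactly one side, neither core ever needs a lock or an interrupt-masking critical section to talk to the other; an interrupt line from the fabric can signal "mailbox non-empty" if polling is unwanted.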
Reply by ●August 8, 2014
On Fri, 08 Aug 2014 11:04:56 -0400, Randy Yates <yates@digitalsignallabs.com> wrote:
> upsidedown@downunder.com writes:
>
>> On Fri, 08 Aug 2014 01:09:06 -0700, Paul Rubin
>> <no.email@nospam.invalid> wrote:
>>
>>> Niklas Holsti <niklas.holsti@tidorum.invalid> writes:
>>>> This is attempted by static WCET (Worst-Case Execution-Time) analysis
>>>> tools such as aiT from AbsInt (www.absint.com).
>>>> Works IMO pretty well for instruction caches, less so for data caches
>>>
>>> We're talking about Linux, which means there's not just caches, but also
>>> an MMU, preemptive multitasking, etc. I think microsecond HRT in this
>>> environment is simply not on the menu. The Beaglebone Black has a pair
>>> of realtime coprocessors built into the main CPU chip because of that.
>>
>> Most RT extensions are actually true RT kernels and you can put Linux,
>> Windows etc. desktop operating systems into the NULL task to consume
>> CPU cycles not needed by RT tasks.
>
> My first thought on this was, "Yeah! That's a cool way to crack this
> nut." But what about the tasks in the NULL task (i.e., kernel tasks)
> that disable interrupts? One of the requirements for hard real-time is
> that there is an application-specific limit on the maximum time
> interrupts can be disabled.

Good point. However, with any reasonable hardware (e.g. atomic test-and-modify instructions) there is very little need for the non-RT OS to disable interrupts to handle mutual exclusion etc.

More than a decade ago I evaluated some RTOS extensions for Windows and Linux, but the _soft_ realtime performance of standard Windows/Linux was adequate (+/- 1 ms), provided that strict hardware (headless systems etc.) and strict software selections were used.

It seems that the old RMX/86 kernel is still alive and kicking in the form of the INtime kernel, running Windows applications in the null task.
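The "test-and-modify" point usually comes down to a spinlock built on an atomic read-modify-write instruction, so mutual exclusion on the non-RT side needs no interrupt masking at all. A minimal C11 sketch, illustrative only (a production version would also worry about priority inversion and back-off):

/* Mutual exclusion via an atomic test-and-set instead of disabling
 * interrupts (sketch).
 */
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

static inline void spin_lock(void)
{
    /* atomic_flag_test_and_set compiles down to the CPU's atomic
     * read-modify-write (XCHG/LOCK BTS on x86, LDREX/STREX on ARM),
     * so the critical section never has to mask interrupts.
     */
    while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
        ;                                       /* busy-wait */
}

static inline void spin_unlock(void)
{
    atomic_flag_clear_explicit(&lock, memory_order_release);
}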
Reply by ●August 8, 2014
Op 07-Aug-14 12:35, Tom Gardner schreef:
> On 07/08/14 10:18, upsidedown@downunder.com wrote:
>> On Thu, 07 Aug 2014 08:37:26 +0100, Tom Gardner
>> <spamjunk@blueyonder.co.uk> wrote:
>>
>>> On 07/08/14 04:36, Randy Yates wrote:
>>>> Randy Yates <yates@digitalsignallabs.com> writes:
>>>>
>>>>> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>>>>>
>>>>>> On 06/08/14 22:31, Randy Yates wrote:
>>>>>>> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>>>>>>>
>>>>>>>> On 06/08/14 20:56, Jack wrote:
>>>>>>>>> Paul Rubin <no.email@nospam.invalid> wrote:
>>>>>>>>>
>>>>>>>>>> Rob Gaddi <rgaddi@technologyhighland.invalid> writes:
>>>>>>>>>>> How do you guarantee microsecond level response from Python
>>>>>>>>>>> (and I assume Linux)?
>>>>>>>>>>
>>>>>>>>>> Linux has a realtime scheduler but guaranteeing microsecond
>>>>>>>>>> response is not realistic because of nondeterministic cache
>>>>>>>>>> misses and that sort of thing. For soft realtime maybe it's
>>>>>>>>>> feasible. Milliseconds are easier than microseconds of course.
>>>>>>>>>
>>>>>>>>> or you use something like Linux RTAI that gives you hard real
>>>>>>>>> time.
>>>>>>>>
>>>>>>>> .. providing, of course, the processor neither instruction nor
>>>>>>>> data caches. If either are present then the ratio of mean:max
>>>>>>>> latency rapidly becomes very significant.
>>>>>>>>
>>>>>>>> Even a 486 with its tiny caches showed a 10:1 interrupt latency
>>>>>>>> depending on what was/wasn't in the caches. (IIRC that was measured
>>>>>>>> with a tiny kernel, certainly nothing like the size/complexity
>>>>>>>> of a linux kernel)
>>>>>>>
>>>>>>> Aren't interrupt routines in some permanently-cached portion of
>>>>>>> the MMU?
>>>>>>
>>>>>> No, and once an MMU is involved all the paging information
>>>>>> might or might not be cached. Double whammy.

On Windows interrupt routines themselves must be located in non-paged memory, but don't have to be present in the cache.

>>>>> So you're telling me that Intel made a processor that, by design,
>>>>> could not service interrupts in a deterministic fashion? Hard to
>>>>> believe.
>>>>>
>>>>> Is that also the case for the present-day Intel architectures?
>>>>
>>>> I should add that real-time operation is therefore not possible on such
>>>> processors, regardless of what operating system is used. This just
>>>> doesn't sound right to me...
>>>
>>> That depends on your requirements. Soft realtime certainly is
>>> possible. For hard realtime then you will have to determine the
>>> mean:max latency and "derate" the processor appropriately.
>>>
>>> As I noted, you needed 10:1 for the i486, and I have
>>> no idea whatsoever what you need for a current Intel
>>> processor.
>>>
>>> The problem is not confined to Intel; it *must* occur wherever
>>> there are caches. After all, the whole point of caches is to
>>> speed up things *on average*, so by definition there must be
>>> some sequences that perform worse than average.
>>>
>>> Your job, for hard realtime systems, is to determine the
>>> pessimal sequence :) (Optimal sequence be damned!)
>>
>> In most systems, various caches (data, instruction, MMU TLB) can be
>> disabled or at least frequently invalidated, so you get the worst case
>> performance.
>
> Disabling resolves the problem, "frequent invalidation"
> merely allows you to falsely convince yourself that the
> problem is resolved.

Disabling caches only eliminates the non-deterministic behavior caused by caches. On modern PC's there are many other sources of non-deterministic timing behavior inside and outside the CPU; branch prediction may get it right most of the time but not always, access to main memory isn't constant time, interference of other devices on shared busses (DMA), chipsets that try to be clever at unexpected moments...etc.

There is no practical way to give a 100% guaranteed response time on a modern PC even when using a RTOS due to the complexity and the number of unknowns in a PC. Derating by the observed worst case timing plus a significant margin can be an option if meeting the deadlines 99.9999% of the time is acceptable.
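If the "observed worst case plus margin" route is taken, the measurement itself is simple; the sketch below is the usual cyclictest-style approach for a 1 ms periodic task. SCHED_FIFO rights and mlockall are assumed to be available, error checking is omitted, and in practice the run should be much longer and under representative load.

/* Measure wakeup jitter of a 1 ms periodic task to get an observed
 * worst case for derating (sketch, parameters invented).
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <sched.h>
#include <sys/mman.h>

#define PERIOD_NS 1000000L                      /* 1 ms */

int main(void)
{
    struct sched_param sp = { .sched_priority = 80 };
    sched_setscheduler(0, SCHED_FIFO, &sp);     /* real-time priority */
    mlockall(MCL_CURRENT | MCL_FUTURE);         /* avoid page faults */

    struct timespec next, now;
    long worst_ns = 0;

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (long i = 0; i < 600000; i++) {         /* ~10 minutes */
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        clock_gettime(CLOCK_MONOTONIC, &now);

        long late_ns = (now.tv_sec - next.tv_sec) * 1000000000L
                     + (now.tv_nsec - next.tv_nsec);
        if (late_ns > worst_ns)
            worst_ns = late_ns;
    }
    printf("observed worst-case wakeup latency: %ld ns\n", worst_ns);
    return 0;
}

Whatever number comes out is only an observation, of course; it says nothing about the pathological sequence that was never triggered during the run, which is exactly the objection raised above.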
Reply by ●August 8, 2014
On 14-08-08 10:20 , upsidedown@downunder.com wrote:
> On Thu, 07 Aug 2014 23:14:05 +0300, Niklas Holsti
> <niklas.holsti@tidorum.invalid> wrote:
>
>> On 14-08-07 10:37 , Tom Gardner wrote:
>
>>> Your job, for hard realtime systems, is to determine the
>>> pessimal sequence :) (Optimal sequence be damned!)
>>
>> This is attempted by static WCET (Worst-Case Execution-Time) analysis
>> tools such as aiT from AbsInt (www.absint.com).
>>
>> Works IMO pretty well for instruction caches, less so for data caches
>> (that is, you get a considerable over-estimate in WCET), but much
>> depends on the regularity and complexity of the program. Preemptive
>> scheduling is also a bit of a problem.
>
> Is significant overestimation really that bad thing ?

Yes, if it is "significant" :-)

Say you specify the amount of processing to be done and the processor to be used, aiming at 50% CPU load. If your WCET tool over-estimates by a factor of 2, you are in trouble -- or sooner, if you must satisfy margin requirements. If your tool over-estimates by a factor of 10, you must aim for at most 10% CPU load, and so on.

With the current high cost of cache misses, even a small increase in the fraction of memory accesses that the analysis tool cannot prove will be cache hits will increase the WCET bound considerably.

> It is of course a bad thing if you ship millions of units a year, but
> assuming hundreds or a few thousand units a year, this is not so
> significant. For instance, the possibility to use the same hardware as
> used in non HRT applications simplifies the logistics.

True, but the problem with choosing a very powerful processor is that it may be too complex for static WCET analysis, or there may be no such analysis tool available for this processor.

You can then fall back to a hybrid static/dynamic analyzer such as RapiTime (www.rapitasystems.com), but then you must have a good test suite, and you get an estimated WCET bound which has a certain (hopefully small) risk of being underestimated, unlike the static analysis case where (assuming the analysis tool has no bugs) the computed WCET bound is always safe.
--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
. @ .
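As a toy illustration of the arithmetic above (all numbers invented):

/* How a pessimistic WCET bound inflates the analysed CPU load (sketch). */
#include <stdio.h>

int main(void)
{
    double true_wcet_us = 500.0;   /* actual worst case of the task */
    double period_us    = 1000.0;  /* task period: true load is 50% */
    double pessimism    = 2.0;     /* WCET tool over-estimation factor */

    double analysed_load = (true_wcet_us * pessimism) / period_us;
    printf("analysed load = %.0f%%\n", analysed_load * 100.0);
    /* Prints 100%: the analysis can no longer show the task schedulable,
     * even though the real load is only 50%.
     */
    return 0;
}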
Reply by ●August 8, 2014
On 08/08/14 22:11, Dombo wrote:
> Op 07-Aug-14 12:35, Tom Gardner schreef:
> [snip]
> Disabling caches only eliminates the non-deterministic behavior caused
> by caches. On modern PC's there are many other sources of
> non-deterministic timing behavior inside and outside the CPU; branch
> prediction may get it right most of the time but not always, access to
> main memory isn't constant time, interference of other devices on
> shared busses (DMA), chipsets that try to be clever at
> unexpected moments...etc.

Quite right.

I've only pointed out that, in order to have hard realtime operation it is /necessary/ to avoid caches. I have never claimed that avoiding caches is /sufficient/ for hard real time operation.

> There is no practical way to give a 100% guaranteed response time on a
> modern PC even when using a RTOS due to the complexity and the number
> of unknowns in a PC. Derating by the observed worst case
> timing plus a significant margin can be an option if meeting the
> deadlines 99.9999% of the time is acceptable.

Quite right.

The only difficulty is in adequately demonstrating that your chosen derating factor is sufficient to satisfy your objectives.
Reply by ●August 8, 2014
On 14-08-08 11:15 , upsidedown@downunder.com wrote:
> On Thu, 07 Aug 2014 23:28:03 +0300, Niklas Holsti
> <niklas.holsti@tidorum.invalid> wrote:
>
>> On 14-08-07 15:33 , upsidedown@downunder.com wrote:
>>> On Thu, 07 Aug 2014 11:35:48 +0100, Tom Gardner
>>> <spamjunk@blueyonder.co.uk> wrote:
>>>
>>>> On 07/08/14 10:18, upsidedown@downunder.com wrote:
>>>>> On Thu, 07 Aug 2014 08:37:26 +0100, Tom Gardner
>>>>> <spamjunk@blueyonder.co.uk> wrote:
>>
>> [snip]
>>
>>>>> The only interesting thing is that the worst case execution time is
>>>>> _below_ the deadline time.
>>>>
>>>> Of course. Now /prove/ the worst case timing when caches
>>>> are operating.
>>>
>>> Are you saying that there are braindead processors that are slower
>>> when caches are enabled compared to situations in which all caches are
>>> disabled ? I guess that must be quite pathological cases :-).
>>
>> There are certainly processors in which a cache miss at a certain point
>> in a program leads to an overall faster execution of the program than if
>> a cache miss occurs at that point.
>
>>> ^^^^
>>> Whoops, I intended to write "hit" there...
>
>> The reason is often that the cache
>> hit lets the processor execute more things speculatively, and if the
>> speculation turns out not to be needed (for example, a branch prediction
>> was wrong) then the speculation, and its effects on the caches etc., may
>> cause more delay than the cache miss would have caused.
>
> Usually the main memory (or at least the memory interface bandwidth)
> is very slow compared to cache and processor cycles. If dynamic RAM is
> used, loading a cache line would typically mean
> 1 x RAS cycle + n x CAS cycles. Depending of memory bus width and
> hence the size of "n", this will take a while. By pessimistically
> assuming that any memory byte access would cause a full DRAM cycle,
> you should be on the safe side, compared to any speculative execution
> issues.

Ok, but then the analysis assumes 100% cache miss rate, which can give a hugely overestimated and probably useless WCET bound.

The point is that the speculation may evict stuff from the caches, leading to later cache misses when the program accesses this evicted stuff; these misses would not have occurred if the initial memory access had been a cache miss, which would have prevented the speculation from being done at all. So the relative slowness of main memory balances out and the timing anomaly remains.

There are several other forms of timing anomalies in many current processors. Designing fast processors without anomalies is not easy, but the HRT academics are trying.

Some processors even have "domino effects" in which the occurrence of one "timing accident", such as a cache miss, typically within a loop, causes further cache misses or other effects which delay *every* later iteration of this loop. In other words, the initial timing accident is never "forgotten"; the processor never regains its original "balance".

>> In the WCET analysis community, such cases are known as "timing
>> anomalies" and they are the bane of static WCET analysis, because their
>> presence means that the analysis cannot make worst-case assumptions at
>> each point in the program, but must analyse many, many possible cases
>> and combinations.
>
> For any kind of WCET analysis, you really need some kind of programs
> these days.

I fully agree, but even such programs have combinatorial-explosion problems when the target processor has timing anomalies.

>> There are also programs (at least constructed examples) which have
>> almost no cache hits. For some processors, enabling the cache (or
>> including a cache in the HDL model) makes cache misses more expensive
>> than cache-less main memory accesses because one or a few cycles are
>> used in the cache look-up before the miss is detected and a main memory
>> access is started. Then, for programs which have few cache hits,
>> execution with a cache can be slower than execution without a cache.
>
> Then use cache lookup plus RAS/CAS sequence time for each memory
> access.

I did not claim that these processors and programs cannot be analysed; the question was just if there exist processors which are slower with caches than without. And for certain programs that happens.

> If you have multiple RT tasks at different priorities, you can
> reliably predict only the highest priority task latencies (based on
> interrupt and kernel scheduler latencies).
>
> The latencies for the next highest task depend not only on those
> latencies but also on the worst execution time of the highest priority
> task. In practice, you can have only one HRT task and multiple soft-RT
> tasks below it, unless you do a worst case execution time analysis
> after each HRT task software update.

Standard schedulability analysis methods such as Response-Time Analysis work for any number of HRT tasks at different priorities, assuming that you have WCETs for each task in isolation (and a suitably constrained model of inter-task interactions). These methods account for the pre-emption of lower-priority tasks by higher-priority tasks.

A difficulty here is that caches add to the delay caused by pre-emption, because a task that has been pre-empted and is then resumed has probably lost much of its cached data, and will run slower for a while before it has reloaded its working set into the cache. This is called Cache-Related Preemption Delay (CRPD). A number of ways to avoid CRPD or include it in WCET and schedulability analysis have been proposed, and some seem to work not too badly.
--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
. @ .
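For reference, the recurrence behind Response-Time Analysis is R_i = C_i + sum over all higher-priority tasks j of ceil(R_i / T_j) * C_j, iterated from R_i = C_i to a fixed point. It is simple enough to compute with a few lines of code; the sketch below uses an invented task set and deliberately leaves out blocking terms and CRPD.

/* Classic fixed-point Response-Time Analysis for fixed-priority
 * pre-emptive scheduling (sketch).  C = WCET, T = period, D = deadline,
 * all in the same time unit; tasks listed highest priority first.
 */
#include <stdio.h>
#include <math.h>

struct task { const char *name; double C, T, D; };

static struct task tasks[] = {
    { "ctrl_loop", 0.2,  1.0,  1.0 },
    { "comms",     1.0,  5.0,  5.0 },
    { "logging",   3.0, 20.0, 20.0 },
};
#define NTASKS (sizeof tasks / sizeof tasks[0])

int main(void)
{
    for (unsigned i = 0; i < NTASKS; i++) {
        double R = tasks[i].C, prev;
        do {
            prev = R;
            R = tasks[i].C;
            for (unsigned j = 0; j < i; j++)        /* higher-priority tasks */
                R += ceil(prev / tasks[j].T) * tasks[j].C;
        } while (R != prev && R <= tasks[i].D);

        printf("%-10s worst-case response %5.2f  deadline %5.2f  %s\n",
               tasks[i].name, R, tasks[i].D,
               R <= tasks[i].D ? "OK" : "MISSED");
    }
    return 0;
}

For this invented set the iteration converges to 0.20, 1.40 and 5.00 respectively, all within their deadlines; CRPD would be added as an extra cost per pre-emption.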
Reply by ●August 8, 2014
On 14-08-09 00:50 , Tom Gardner wrote:
> On 08/08/14 22:11, Dombo wrote:
>> Op 07-Aug-14 12:35, Tom Gardner schreef:
> [snip]
>> Disabling caches only eliminates the non-deterministic behavior caused
>> by caches. On modern PC's there are many other sources of
>> non-deterministic timing behavior inside and outside the CPU; branch
>> prediction may get it right most of the time but not always, access to
>> main memory isn't constant time, interference of other devices on
>> shared busses (DMA), chipsets that try to be clever at
>> unexpected moments...etc.
>
> Quite right.
>
> I've only pointed out that, in order to have hard realtime
> operation it is /necessary/ to avoid caches.

No, static WCET analysis works well for several forms of caches. Airbus jets use cached processors in their flight control systems, with WCET analysis tools.

But as has been said repeatedly in this thread, modern PCs have many other sources of hard-to-predict execution time. Some may be amenable to static WCET analysis, but I don't know of any off-the-shelf tools for it.

>> There is no practical way to give a 100% guaranteed response time on a
>> modern PC even when using a RTOS due to the complexity and the number
>> of unknowns in a PC. Derating by the observed worst case
>> timing plus a significant margin can be an option if meeting the
>> deadlines 99.9999% of the time is acceptable.
>
> Quite right.
>
> The only difficulty is in adequately demonstrating that your
> chosen derating factor is sufficient to satisfy your objectives.

Some of the people working on "probabilistic WCET analysis" claim that the mathematical tools of "extreme-value statistics" can provide that demonstration. I am not convinced, but I may be wrong.
--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
. @ .
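For what it's worth, the simplest flavour of that claim fits a Gumbel distribution to block maxima of measured execution times and reads off a quantile at a very small exceedance probability. The sketch below uses method-of-moments fitting and invented measurements; whether the independence and extreme-value assumptions actually hold for execution times is precisely what is in dispute.

/* Toy measurement-based probabilistic WCET estimate (sketch):
 * Gumbel fit to block maxima by method of moments, then extrapolation
 * to an exceedance probability of 1e-9 per block.
 */
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Pretend these are per-block maxima of measured execution times (us). */
    double block_max[] = { 102, 98, 105, 110, 99, 104, 108, 101, 107, 103 };
    int n = sizeof block_max / sizeof block_max[0];

    double mean = 0.0, var = 0.0;
    for (int i = 0; i < n; i++) mean += block_max[i];
    mean /= n;
    for (int i = 0; i < n; i++) var += (block_max[i] - mean) * (block_max[i] - mean);
    var /= (n - 1);

    /* Gumbel parameters by method of moments. */
    const double pi    = 3.14159265358979;
    const double euler = 0.5772156649;              /* Euler-Mascheroni */
    double beta = sqrt(6.0 * var) / pi;
    double mu   = mean - euler * beta;

    /* Execution time exceeded with probability p per block. */
    double p = 1e-9;
    double pwcet = mu - beta * log(-log(1.0 - p));
    printf("estimated pWCET at exceedance %g per block: %.1f us\n", p, pwcet);
    return 0;
}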
Reply by ●August 8, 2014
On 08.8.2014 г. 20:49, Tim Wescott wrote:
> On Fri, 08 Aug 2014 13:33:42 -0400, Randy Yates wrote:
>
>> Tim Wescott <seemywebsite@myfooter.really> writes:
>>> [...]
>>> On the one hand it's bloatware.
>>
>> How so? True, you're not going to get 2k executables, but in the days of
>> 2TB drives, who gives a rat's behind?
>
> The size of the created file, and the thought of all the signals and
> pointers and whatnot going on behind the scenes just to say "Hello World"
> pains my aesthetic sensibilities.
>
> Who _should_ give a rat's behind? Probably no one. But I never claimed
> to be rational when it comes to my aesthetic sensibilities.

There is more than aesthetic sensibilities to it, though most people seem to have stopped noticing it. Bloated code runs slower - often much slower - simply because it takes more time to transfer to/from memory and wastes memory, thus causing swapping; and, last but not least, programmers who routinely write bloated code are simply incapable of writing good code.

Today's OS etc. code is bloated by more than one order of magnitude, I'd say more than two orders really, often 3 and above. Most people, having not seen much else, just accept it and get on with it, I suppose.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/sets/72157600228621276/