EmbeddedRelated.com

RTOS popularity

Started by Philipp Klaus Krause December 26, 2015
On 19/01/16 01:17, Les Cargill wrote:
> I do, generally, or I make it message passing, or
> interlocking state machines with message passing...
That's a very sound, scalable, and fault-resilient way of thinking, which has the benefit of being implementable by different companies in different countries using different implementation technologies. Existence proof: the largest and most complex machines that the human race has ever developed - the telecoms system.
> Here's the key - it's now possible to reason about these systems in a
> proof-like manner. You can enumerate all the states cross events and
> check lists of invariants at each transition.
Well, /proof/ in the mathematical sense rapidly becomes untenable with real FSMs due to state-space explosion. But your other points are spot-on. I'd add that it is trivial to add instrumentation that allows comprehensive performance measurement in live systems, and the ability to prove that your system is correct and the other company is at fault. Been there, done that :)

>> Randy Yates wrote:
>> I agree that this is not really related to managing multiple
>> programmers, however.
Yes, it can be. And managing multiple, ahem, "cooperating" companies.
Les Cargill <lcargill99@comcast.com> writes:

> Randy Yates wrote:
>> Les Cargill <lcargill99@comcast.com> writes:
>>> [...]
>>> Preemptive opens up a lot of disparate and ugly cans of worms. With
>>> "run to completion" you can get a lot closer to proving the system
>>> correct to within some epsilon (sometimes a rather large epsilon).
>>
>> That sounds bogus. Whether 100 lines of "preemptive" or 100 lines
>> of "run to completion," you can have a LOT of bugs.
>>
> Of course you can. But with preemption, you get Heisenbugs of a
> certain sort, thread-safety issues, reentrancy and race conditions
> of another certain sort.
>
> Of course, you gain habits in how to avoid these, but still...
>
> I just really prefer deterministic operation of systems. And, frankly,
> I don't understand other people not preferring that. I fully realize
> you can't always do that, but I bet you can get closer than you think
> you can.
>
>> I would much, much, MUCH (did I say much?) rather use a preemptive OS.
>> You can always make it "run to completion" by placing all tasks at the
>> same priority and ensuring nothing blocks in the body of each task
>> (except at the end, e.g., a yield()).
>
> I do, generally, or I make it message passing, or
> interlocking state machines with message passing...
Well, we may be in violent agreement after all. I've based a few of my threaded projects on the paradigm of doing everything through messages and blocking (for the most part) on a message being available. Works beautifully.
> Here's the key - it's now possible to reason about these systems in a
> proof-like manner. You can enumerate all the states cross events and
> check lists of invariants at each transition.
What do you mean by "invariants?" This paragraph is Greek to me.
> So what if it's big? Sure, you'll make mistakes but if
> you keep this as a regression check, I bet it's worth it.
> You can get a long way with < 100 states and < 12 events, well,
> that's 1200 things. Not impossible at all.
Don't get me wrong, I believe any testing is good; very good. But just testing across states and events doesn't give you a lot of coverage, does it? What about the order and timing of the events and inputs?

-- 
Randy Yates, DSP/Embedded Firmware Developer
Digital Signal Labs
http://www.digitalsignallabs.com
Randy Yates wrote:
> Les Cargill <lcargill99@comcast.com> writes:
> [...]
> Well we may be in violent agreement after all.
I am sure we are.
> I've based a few of my
> threaded projects on the paradigm of doing everything through messages
> and blocking (for the most part) on a message being available. Works
> beautifully.
Badda bing, badda boom.
>> Here's the key - it's now possible to reason about these systems in a
>> proof-like manner. You can enumerate all the states cross events and
>> check lists of invariants at each transition.
>
> What do you mean by "invariants?" This paragraph is Greek to me.
An invariant is something that is always true. For state machines, you can check sets of true/false tests based on state and learn an awful lot about whether the thing works or not.
>> So what if it's big? Sure, you'll make mistakes but if
>> you keep this as a regression check, I bet it's worth it.
>> You can get a long way with < 100 states and < 12 events, well,
>> that's 1200 things. Not impossible at all.
>
> Don't get me wrong, I believe any testing is good; very good. But just
> testing across states and events doesn't give you a lot of coverage,
> does it?
Sure it does. Done properly, it can reduce error rates dramatically. But this is somehow easier if things are, more or less, a state machine.

Again, I think of these things as a fancy regulator clock, with complicated escapement mechanisms.

If it's permutation oriented, we're programmers. We know how to automate generating permutations. If it's uncomfortable running this on the target, hoist it out to a PC program and run it against the test vectors that way.

You still have to do things like worst-case, stress and illegal-input tests. And you'll still miss stuff.

I'd rather slip a week doing this than spend months going through wreckage. And frankly, that approach feels more productive to me.
> What about the order and timing of the events and inputs?
You (nearly) always have to tolerate out-of-order events.

Timing should be manageable by buffering. And you do what you can to count lost events.

If you lose events, it might be worth considering polling in some cases (especially on small, high-speed micros).

-- 
Les Cargill
On 21/01/16 06:29, Les Cargill wrote:
> Randy Yates wrote:
> [...]
>> What do you mean by "invariants?" This paragraph is Greek to me.
>
> An invariant is something that is always true.
>
> For state machines, you can check sets of true/false tests based
> on state and learn an awful lot about whether the thing works or not.
There is a large difference between "proof" and "increased confidence".

While explicit FSMs are almost /necessary/ for analysis and *proof*, they aren't /sufficient/ for many real systems. The state-space explosion rapidly becomes intractable when all possible sequences of events are considered.

Nonetheless, FSMs are highly beneficial and the best-known technique, for the reasons you and I have previously agreed.
> [...]
> If you lose events, it might be worth considering polling, in
> cases ( especially on small, high-speed micros ).
Those arguments are missing the point.
Tom Gardner wrote:
> On 21/01/16 06:29, Les Cargill wrote:
> [...]
>> An invariant is something that is always true.
>>
>> For state machines, you can check sets of true/false tests based
>> on state and learn an awful lot about whether the thing works or not.
>
> There is a large difference between "proof" and "increased
> confidence".
>
> While explicit FSMs are almost /necessary/ for analysis
> and *proof*, they aren't /sufficient/ for many real systems.
>
> The state space explosion rapidly becomes intractable when
> all possible sequences of events is considered.
The point is to constrain that space. The actual problem is the driver for any such explosion; we're just managing it.

As a practical matter, I have not seen too many problems where state-space explosion was a practical limitation.

The "all possible sequences of events" thing is something of a red herring; I've not seen too many cases where it was impossible to control this.

Events tend to be pretty orthogonal. If they're not, make 'em orthogonal.
> Nonetheless FSMs are highly beneficial and the best known
> technique for the reasons you and I have previously agreed.
> [...]
> Those arguments are missing the point.
I just reject the general ... nihilism of most of the discourse on the subject.

-- 
Les Cargill
On 22/01/16 03:12, Les Cargill wrote:
> Tom Gardner wrote:
> [...]
>> The state space explosion rapidly becomes intractable when
>> all possible sequences of events is considered.
>
> The point is to constrain that space. The actual problem is the driver
> for any such explosion; we're just managing it.
> [...]
> Events tend to be pretty orthogonal. If they're not, make 'em
> orthogonal.
When the events and states are defined by standards bodies you don't have that option. Doubly so if the standards are rationalisations of what's found in existing manufacturers' equipment. That's the case for telecom and networking standards, and I'm sure many other examples.
> [...]
> I just reject the general ... nihilism of most of the discourse on the
> subject.
I reject panglossian optimism in favour of realistic objectives.
Tom Gardner wrote:
> On 22/01/16 03:12, Les Cargill wrote:
> [...]
>> Events tend to be pretty orthogonal. If they're not, make 'em
>> orthogonal.
>
> When the events and states are defined by standards bodies
> you don't have that option. Doubly so if the standards are
> rationalisations of what's found in existing manufacturers'
> equipment.
>
> That's the case for telecom and networking standards, and
> I'm sure many other examples.
You are 100% correct in that - although in the limited cases I'm aware of, the number of events and states is low.
> [...]
>> I just reject the general ... nihilism of most of the discourse on the
>> subject.
>
> I reject panglossian optimism in favour of realistic objectives.
I've done this very thing, so I'm hesitant to call it Panglossian...

Much depends on properly generating the suite of permutations.

-- 
Les Cargill
Tom Gardner wrote:
> On 22/01/16 03:12, Les Cargill wrote:
<snip>
>> I just reject the general ... nihilism of most of the discourse on the
>> subject.
>
> I reject panglossian optimism in favour of realistic objectives.
Let me rephrase, Tom. (How critical is that comma?)

I think, still, after 30 years, that we still have to try. You won't get 'em all. Doesn't matter.

The critical thing is that we continually and habitually overestimate the size of The Beast Within. If you can hold all the interruptions off for... half a day, half a week, half a month, you can get well into the belly of it.

It may matter; it may not matter. Here's to those times when it does.

-- 
Les Cargill
Les Cargill <lcargill99@comcast.com> writes:
> Much depends on properly generating the suite of permutations.
Sometimes you can prune the state space with formal or automated reasoning. https://github.com/tomahawkins/improve/wiki/ImProve might be of interest. It's a DSL which uses an SMT solver to check the invariants. A walkthrough of a simple example is here: https://github.com/tomahawkins/improve/wiki/Verification-Primer

The ImProve compiler turns the ImProve input into C, Ada, or Simulink. I haven't used it, but I've played with the related Atom DSL by the same guy. It's pretty cool.
On 23/01/16 04:41, Les Cargill wrote:
> Let me rephrase, Tom. ( how critical is that comma? )
> [...]
> I think (still) after 30 years, that we still have to try. You won't
> get 'em all. Doesn't matter.
> [...]
> it may matter; it may not matter. Here is to those times
> when it does.
And there we are in violent agreement.

In particular, we strongly believe the FSM specification and implementation techniques are extremely valuable and still offer the best mechanisms for producing reliable hardware and software. (Of course other techniques are beneficially applied in addition to those techniques.)

The only difference is the extent to which proof is practical - but that doesn't *reduce* the benefits, it merely puts an *upper limit* on them.

(I'm still mildly pleased that when I and another schoolboy implemented programs for converting from one 5-channel paper-tape format to another, my program worked first time and his never did. My program was ~80 words (160 instructions) long and was my first assembler program; his was enormous. I later found out I had reinvented a simple FSM with two states: figure-shift and letter-shift.)
