When exactly do you choose to use a RTOS (instead of a non-OS approach)?
Started by ●December 9, 2017
Reply by ●January 5, 2018
On 01/05/2018 04:12 AM, Tom Gardner wrote:
> On 04/01/18 19:31, rickman wrote:
>> rickman wrote on 1/4/2018 3:33 AM:
>>> Les Cargill wrote on 1/3/2018 6:59 PM:
>>>> rickman wrote:
>>>>> Les Cargill wrote on 1/3/2018 7:07 AM:
>>>>>> rickman wrote:
>>>>>>> Ed Prochak wrote on 12/19/2017 11:19 AM:
>>>>>>>> On Saturday, December 16, 2017 at 12:56:31 PM UTC-5, Les Cargill wrote:
>>>>>>>>> Mike Perkins wrote:
>>>>>>>>>> On 09/12/2017 16:05, pozz wrote:
>>>>>>>>>>
>>>>>>>>>> I use it where I have a number of 'tasks' which then interact with each other.
>>>>>>>>>>
>>>>>>>>>> If your system is a pure state machine then there is no need for an RTOS.
>>>>>>>>>>
>>>>>>>>> Indeed. Indeed.
>>>>>>>>
>>>>>>>> If it is just one state machine. Then yes, indeed.
>>>>>>>> ed
>>>>>>>
>>>>>>> It can always be one state machine. The only issue is how complex the combined state machine is.
>>>>>>
>>>>>> It's always a case of "how long should a man's legs be?" Lincoln is alleged to have said "long enough to touch the ground."
>>>>>>
>>>>>>> Actually the issue is not how many state machines you have. It is the timing requirements of the various state machines. If your timing is lax, sequential operation of multiple machines is easy. But this often becomes a huge discussion with everyone talking past each other.
>>>>>>
>>>>>> They should use state machines to keep track then :)
>>>>>>
>>>>>> If your system can be decomposed as roughly events cross state, you have a snowball's chance of "understanding" it.
>>>>>
>>>>> Sorry, I don't know what this means "roughly events cross state".
>>>>
>>>> You are in state A. Event 42 occurs. Lookup.... ah, here - if we get event 42 whilst in state A, we move to state B and send message m.
>>>
>>> So what makes that so hard to understand?
>>
>> To be more explicit, I think every FSM I've ever coded was along the lines of a case statement on the state with IF conditions on all the interesting inputs. I find this structure to be self documenting if the signal names are chosen well.
>>
>> Why is this structure hard to understand?
>
> I've seen a commercial case where that type of structure was mutated by people under pressure to:
> - have a single state machine for all different customers
> - make the minimum change
> - do it fast
> - where the original designers had left
> - a custom domain specific language, ugh
>
> The result was an unholy unmaintainable mess, with if-then-elses nested up to 10 (ten!) deep.
>
> Completely insane, of course.

Yup. The whole point of state machine design is to avoid that sort of logic-driven horror.

> My preference is for a dispatch table based on event+state, since that forces you to consider all possibilities. The dispatch table can be either a 2D array of function pointers, or inherent in an OOP inheritance hierarchy where the hierarchy is a direct mirror of the state hierarchy.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510
http://electrooptical.net
http://hobbs-eo.com
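[Editor's illustration: the switch-on-state / IF-on-input structure described above might look like this minimal C sketch. The door-controller states and events are invented for illustration, not taken from the thread.]

```c
#include <assert.h>

/* Hypothetical door controller: one case per state, IF conditions
   on the interesting inputs, as described in the post above. */
typedef enum { DOOR_CLOSED, DOOR_OPEN, DOOR_ALARM } state_t;
typedef enum { EV_OPEN, EV_CLOSE, EV_FIRE } event_t;

static state_t step(state_t s, event_t ev)
{
    switch (s) {
    case DOOR_CLOSED:
        if (ev == EV_OPEN) return DOOR_OPEN;
        if (ev == EV_FIRE) return DOOR_ALARM;
        break;
    case DOOR_OPEN:
        if (ev == EV_CLOSE) return DOOR_CLOSED;
        if (ev == EV_FIRE)  return DOOR_ALARM;
        break;
    case DOOR_ALARM:
        break;              /* latched until an external reset */
    }
    return s;               /* unhandled events leave the state unchanged */
}
```

With well-chosen names the case arms read as the state chart itself, which is the self-documenting property being claimed.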
Reply by ●January 5, 2018
Tom Gardner wrote:
> On 04/01/18 19:31, rickman wrote:
[...]
>>>> You are in state A. Event 42 occurs. Lookup.... ah, here - if we get event 42 whilst in state A, we move to state B and send message m.
>>>
>>> So what makes that so hard to understand?
>>
>> To be more explicit, I think every FSM I've ever coded was along the lines of a case statement on the state with IF conditions on all the interesting inputs. I find this structure to be self documenting if the signal names are chosen well.
>>
>> Why is this structure hard to understand?
>
> I've seen a commercial case where that type of structure was mutated by people under pressure to:
> - have a single state machine for all different customers
> - make the minimum change
> - do it fast
> - where the original designers had left
> - a custom domain specific language, ugh
>
> The result was an unholy unmaintainable mess, with if-then-elses nested up to 10 (ten!) deep.

Eh, doing it totally, totally wrong. With all due respect - you really need a configuration system managed well within the code itself, and customer specializations are but different settings.

On one product, the "model number" was nothing more than a name that pointed to a hard table of defaults for the configuration, and further config changes were possible. If the support people requested it, we'd grow a model number when they got too painful.

That being said - OO and configuration go together like chalk and cheese :)

> Completely insane, of course.

Well, yeah. You can make anything disgusting if you work hard enough :)

> My preference is for a dispatch table based on event+state, since that forces you to consider all possibilities.

Yup yup. And you can generate a test vector for it with combinators. Then testing essentially becomes compressing the output - tedious, but ... rewarding.

If the tested FSM is properly tested, then all remaining defects are either 1) tweak/tone of unrealistic expectations or 2) hardware bugs.

> The dispatch table can be either a 2D array of function pointers, or inherent in an OOP inheritance hierarchy where the hierarchy is a direct mirror of the state hierarchy.

There's something to be said for the 2d approach. It's more canonical ( and more normal ) for one.

--
Les Cargill
Reply by ●January 5, 2018
Paul Rubin wrote:
> StateMachineCOM <statemachineguru@gmail.com> writes:
>> However, this does not mean that the *concept* of active objects is
>> necessarily heavyweight. To the contrary, event-driven systems are
>> known to be lighter weight than traditional RTOS-based systems,
>
> Sure, they've been in Forth multitaskers for almost 50 years, as this article describes:
>
> http://www.complang.tuwien.ac.at/anton/euroforth/ef17/genproceedings/papers/haley-slides.pdf
>
> There's some oft-repeated wisdom that as the application gets more complicated, cooperative multitasking runs into more and more problems (tasks hanging or blocking crash the whole system etc) and preemption becomes important. So it's good to support both.

So in order to have true determinism, one needs run-to-completion. This forces the design to account for any indeterminate situations.

In some hard realtime domains, having the timer pop is a *fault*. It indicates that the task has overrun its time budget. So preemption seems at times to try to make that lemon into lemonade :)

Preemption is the "timesharing" thing; we may or may not need it for embedded. It depends on your appetite for indeterminacy; I say if you're leaning on the timer tick to operate, you have multiple latent bugs.

--
Les Cargill
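[Editor's illustration: the run-to-completion discipline described above can be sketched as a toy C event loop. All names are invented; a real system would post from ISRs and would need the queue indices protected against concurrent access.]

```c
#include <assert.h>
#include <stddef.h>

typedef void (*handler_t)(void);

#define QLEN 8
static handler_t queue[QLEN];
static size_t head, tail;

/* Post an event handler; callers would typically be ISRs or tasks.
   A full queue is reported as an error: in a hard-realtime design
   that is a fault, not something to paper over. */
static int post(handler_t h)
{
    size_t next = (tail + 1) % QLEN;
    if (next == QLEN || next == head) return -1;  /* queue full */
    queue[tail] = h;
    tail = next;
    return 0;
}

/* The background loop: each handler runs to completion before the
   next one starts, so no handler ever sees another half-finished. */
static void run_to_completion(void)
{
    while (head != tail) {
        handler_t h = queue[head];
        head = (head + 1) % QLEN;
        h();
    }
}

/* Demo payload for illustration. */
static int hits;
static void on_tick(void) { hits++; }
```

Because nothing is ever preempted mid-handler, determinism reduces to bounding each handler's worst-case execution time, which is exactly the budget whose overrun is treated as a fault above.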
Reply by ●January 5, 2018
Les Cargill <lcargill99@comcast.com> writes:
> It depends on your appetite for indeterminacy; I say if
> you're leaning on the timer tick to operate, you have multiple
> latent bugs.

A complex enough program will have latent bugs no matter what. If the application critically depends on having no bugs, that dependence is a bug in its own right. A reliable system has to be able to recover from faults, including software bugs.

Erlang/OTP is set up so if something goes wrong in an operation, the process handling it crashes and a supervision process restarts it so things are in a known state again. This is enough to recover from quite a lot of unexpected problems. You look at the crash log the next day, figure out what happened, and roll out a fix. You can even upgrade the software while it's still running.
Reply by ●January 6, 2018
On 06/01/18 03:29, Les Cargill wrote:
> Tom Gardner wrote:
>> On 04/01/18 19:31, rickman wrote:
[...]
>>>>> You are in state A. Event 42 occurs. Lookup.... ah, here - if we get event 42 whilst in state A, we move to state B and send message m.
>>>>
>>>> So what makes that so hard to understand?
>>>
>>> To be more explicit, I think every FSM I've ever coded was along the lines of a case statement on the state with IF conditions on all the interesting inputs. I find this structure to be self documenting if the signal names are chosen well.
>>>
>>> Why is this structure hard to understand?
>>
>> I've seen a commercial case where that type of structure was mutated by people under pressure to:
>> - have a single state machine for all different customers
>> - make the minimum change
>> - do it fast
>> - where the original designers had left
>> - a custom domain specific language, ugh
>>
>> The result was an unholy unmaintainable mess, with if-then-elses nested up to 10 (ten!) deep.
>
> Eh, doing it totally, totally wrong. With all due respect - you really need a configuration system managed well within the code itself, and customer specializations are but different settings.

I didn't have due respect; I had due disrespect.

It was all contained in a configuration system, that bloated wrong-headed pile of ordure IBM Rational ClearCase. I once, and only once, looked at the version trees. For a couple of years it was respectable and sane: a trunk with a few branches. Then it exploded over the screen and looked like a plant with a serious enzyme disorder: there were even backwards branching loops. I have no idea how they achieved that, nor why (other than "get it out the door tomorrow").

> On one product, the "model number" was nothing more than a name
> that pointed to a hard table of defaults for the configuration,
> and further config changes were possible. If the support
> people requested it, we'd grow a model number when they got too painful.
>
> That being said - OO and configuration go together like chalk and cheese :)
>
>> Completely insane, of course.
>
> Well, yeah. You can make anything disgusting if you work hard enough :)

And they did work hard enough; enhancements were positively sclerotic.

>> My preference is for a dispatch table based on event+state,
>> since that forces you to consider all possibilities.
>
> Yup yup. And you can generate a test vector for it with combinators. Then testing essentially becomes compressing the output - tedious, but ... rewarding.
>
> If the tested FSM is properly tested, then all remaining defects are either 1) tweak/tone of unrealistic expectations or 2) hardware bugs.
>
>> The dispatch table can be either a 2D array of function pointers,
>> or inherent in an OOP inheritance hierarchy where the
>> hierarchy is a direct mirror of the state hierarchy.
>
> There's something to be said for the 2d approach. It's more canonical ( and more normal ) for one.

Agreed, but it can be convenient to express the design in other terms. For example:
- the top level catches events that have not been dealt with elsewhere; I prefer this to be a "should not happen" which is logged
- next level down is divided into a small number of superstates, maybe "initialising", "working normally", "gross fault recovery"
- bottom level is the normal states, e.g. "door open", "door closed", "fire alarm", etc

Events are "delivered" to the bottom level and, if not handled, percolate up to the next level. Levels naturally correspond to an OOP class hierarchy.

That's nothing new, see Harel's StateCharts. Having said that, I'm not overly fond of some aspects of StateCharts.
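[Editor's illustration: the event+state dispatch table, with unhandled events percolating up to a logged "should not happen" handler, might be sketched like this in C. States, events, and actions are invented for illustration.]

```c
#include <assert.h>
#include <stddef.h>

enum { S_INIT, S_NORMAL, S_FAULT, N_STATES };
enum { E_GO, E_ERROR, E_RESET, N_EVENTS };

typedef int (*action_t)(void);        /* each action returns the next state */

static int to_normal(void) { return S_NORMAL; }
static int to_fault(void)  { return S_FAULT;  }
static int to_init(void)   { return S_INIT;   }

static int unexpected;                /* count of "should not happen" events */

/* 2D array of function pointers: every (state, event) cell exists,
   which is what forces you to consider all possibilities.  NULL cells
   fall through to the top-level handler. */
static action_t table[N_STATES][N_EVENTS] = {
    [S_INIT]   = { [E_GO]    = to_normal },
    [S_NORMAL] = { [E_ERROR] = to_fault  },
    [S_FAULT]  = { [E_RESET] = to_init   },
};

static int dispatch(int state, int event)
{
    action_t a = table[state][event];
    if (a == NULL) {                  /* percolate up: log, stay put */
        unexpected++;
        return state;
    }
    return a();
}
```

The table is also what makes the combinator-style testing mentioned above tractable: iterating over all N_STATES x N_EVENTS cells enumerates every transition exactly once.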
Reply by ●January 6, 2018
On 06/01/18 03:29, Les Cargill wrote:
> Tom Gardner wrote:
>> My preference is for a dispatch table based on event+state,
>> since that forces you to consider all possibilities.
>
> Yup yup. And you can generate a test vector for it with combinators. Then testing essentially becomes compressing the output - tedious, but ... rewarding.
>
> If the tested FSM is properly tested, then all remaining defects are 1) either tweak/tone of unrealistic expectations or 2) hardware bugs.

Unfortunately you can't test FSMs "properly", for my definition of "properly" :) That's based on the unfashionable and inconvenient concept that "you can't test quality into a product".

For the systems I've designed, there's a lot to be gained by keeping a compressed log of "state trajectory" around until not needed. That plus accurate timestamping of events to/from external systems has enabled me to quickly and unambiguously deflect blame away from my stuff and onto other companies. The lawyers were never even called :)
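[Editor's illustration: the "state trajectory" log described above can be sketched as a small ring buffer of timestamped records. The layout is hypothetical, not the poster's actual implementation.]

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* One record per transition: when it happened, which state the
   machine was in, and which event arrived. */
typedef struct {
    uint32_t t;          /* timestamp, e.g. from a free-running timer */
    uint8_t  state;
    uint8_t  event;
} trace_rec_t;

#define TRACE_LEN 64
static trace_rec_t trace[TRACE_LEN];
static size_t trace_next;    /* next slot to overwrite */
static size_t trace_count;   /* total records ever written */

/* Overwrite oldest-first, so the most recent history survives
   until it is dumped after a fault or a dispute. */
static void trace_log(uint32_t t, uint8_t state, uint8_t event)
{
    trace[trace_next] = (trace_rec_t){ t, state, event };
    trace_next = (trace_next + 1) % TRACE_LEN;
    trace_count++;
}

/* Read back the i-th most recent record (i == 0 is the newest). */
static trace_rec_t trace_recent(size_t i)
{
    return trace[(trace_next + TRACE_LEN - 1 - i) % TRACE_LEN];
}
```

Calling trace_log() from the dispatch point costs a few stores per event, and the resulting trajectory, paired with accurately timestamped external I/O, is what lets you reconstruct who did what when.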
Reply by ●January 6, 2018
On 06/01/18 00:35, Paul Rubin wrote:
> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>> Processors with up to 32 cores and 4000MIPS, and interrupt latencies
>> of 10ns.
>
> Why stop there? http://greenarraychips.com

1) I believe you can simply add more chips in parallel, and the comms simply works, albeit with increased latency

2) hardware is easy; the software is more difficult and more important. XMOS has very good software support (based on 40 years of theory and practical implementations), plus excellent integration with the hardware.

3) look at the investor lists for each company

IMNSHO, point 2 is the killer advantage/USP.
Reply by ●January 6, 2018
On Sat, 6 Jan 2018 09:45:31 +0000, Tom Gardner <spamjunk@blueyonder.co.uk> wrote:

> On 06/01/18 00:35, Paul Rubin wrote:
>> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>>> Processors with up to 32 cores and 4000MIPS, and interrupt latencies
>>> of 10ns.
>>
>> Why stop there? http://greenarraychips.com
>
> 1) I believe you can simply add more chips in parallel, and the comms simply works, albeit with increased latency
>
> 2) hardware is easy; the software is more difficult and more important. XMOS has very good software support (based on 40 years of theory and practical implementations), plus excellent integration with the hardware.

The xCore style architecture is nice for multichannel DSP applications, in which each channel is assigned a dedicated core and the single sampling clock is routed to all cores, starting the execution of all cores at each sample clock transition.

The xCore could also be used to implement PLCs (Programmable Logic Controllers), with each execution loop executed in a dedicated core, usually with different clocks for each loop. Quite a lot of problems can be solved in a PLC type environment, and the IEC 61131 programming environment is quite handy. IEC 61131 has multiple kinds of programming languages, e.g. ladder logic or ST (Structured Text, a Modula/Pascal style programming language).

However, I do _not_ think that the xCore would be very handy for ad hoc parallel processing, even with the XMOS programming environment (which is quite versatile).

> 3) look at the investor lists for each company
>
> IMNSHO, point 2 is the killer advantage/USP.
Reply by ●January 6, 2018
On 06/01/18 10:32, upsidedown@downunder.com wrote:
> On Sat, 6 Jan 2018 09:45:31 +0000, Tom Gardner
> <spamjunk@blueyonder.co.uk> wrote:
>
>> On 06/01/18 00:35, Paul Rubin wrote:
>>> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>>>> Processors with up to 32 cores and 4000MIPS, and interrupt latencies
>>>> of 10ns.
>>>
>>> Why stop there? http://greenarraychips.com
>>
>> 1) I believe you can simply add more chips in parallel, and the comms simply works, albeit with increased latency
>>
>> 2) hardware is easy; the software is more difficult and more important. XMOS has very good software support (based on 40 years of theory and practical implementations), plus excellent integration with the hardware.
>
> The xCore style architecture is nice for multichannel DSP applications, in which each channel is assigned a dedicated core and the single sampling clock is routed to all cores, starting the execution of all cores at each sample clock transition.

It is great for *much* more than that, e.g. a general purpose test instrument. My "kick the tyres" test was a reciprocal frequency counter with zero added hardware; software counted the transitions in a 62.5Mb/s input data stream:
- two cores for the two primary inputs; absolute timing guarantees were required. I could squeeze it so that each input "cycle" was suspended for ~100ns before the next input event arrived
- one core for front panel input
- one core for front panel output
- one controller core orchestrating all the other cores
- many cores for a USB interface

Having guaranteed i/o timing in the presence of random interaction with a user and a host computer is rather nice :) The IDE gave those guarantees; I don't know of any other processor that could do that. Interrupts: get thee behind me, Satan!

Overall I was /remarkably/ pleased at how simple and predictable everything turned out to be. The data sheets and programming tutorials are exemplary in their conciseness and correctness.
That's a benefit of having the same team design the hardware and software, without being unduly constrained by the PDP11 and 1970s compiler technology :)

> The xCore could also be used to implement PLCs (Programmable Logic
> Controller) with each execution loop executed in a dedicated core,
> usually with different clocks for each loop. Quite a lot of problems
> can be solved in a PLC type environment and the IEC 61131 programming
> environment is quite handy. IEC 61131 has multiple kinds of
> programming languages, e.g. ladder logic or ST (Structured Text, a
> Modula/Pascal style programming language).

I don't know anything about PLCs, but I'm suspicious about the clocking strategy you mention. Better, I would presume, to let each core have a 100MHz clock and to stall until it has something to do - i.e. an i/o event or message event from another core. Of course, such an i/o or message event could well be a "software" clock.

> However, I do _not_ think that the xCore would be very handy for ad
> hoc parallel processing, even with XMOS programming environment (which
> is quite versatile).

There are two points there.

Firstly, the xCORE processors are embedded devices, not general purpose devices. Anybody planning to use them as general purpose compute engines will be, um, disappointed :) Having said that, ISTR some having one core being an ARM processor (the other 7 being xCORE), and the IDE can generate code for ARMs. I have not investigated any of that.

Secondly, xC is strongly based on Hoare's Communicating Sequential Processes (CSP), as was Occam in the 80s. Several new languages (Go, Rust) also contain CSP syntax and semantics, thus indicating that people still think CSP is A Good Thing. In addition, it appears that message passing is still the most reliable general purpose way to structure large scale High Performance Computing (HPC) programs.

That raises the question as to whether CSP (and "hence" xC) is, on balance, better/worse than the alternatives. That is not clear to me. Mind you, I do like xC's "asynchronous" "interface" extensions to CSP; they appear to be necessary on limited core devices.

>> 3) look at the investor lists for each company
>>
>> IMNSHO, point 2 is the killer advantage/USP.