When exactly do you choose to use a RTOS (instead of a non-OS approach)?

Started by pozz December 9, 2017
rickman <gnuarm.deletethisbit@gmail.com> writes:
> I can't think of any program or correctly constructed digital logic
> design that would not produce a non-deterministic FSM.
NFSMs (more usually called NFAs, non-deterministic finite automata) are common in software. Each NFA state potentially has multiple outgoing edges labelled with the same symbol, and you "run" the NFA by exploring edges and backtracking when needed. Regexes are straightforward to convert to NFAs. Then you can convert the NFA to a DFA by making a separate DFA state for each subset of NFA states that the NFA could possibly be in. This is explained in the "dragon book" (https://en.wikipedia.org/wiki/Dragon_Book) and other places.
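To make that concrete, here is a minimal sketch in C of running an NFA without backtracking; the 3-state NFA (for the regex (a|b)*ab over the alphabet {a,b}) is made up for illustration. The set of states the NFA "could currently be in" is kept as a bitmask, which is exactly the subset construction performed on the fly:

#include <stdint.h>
#include <stdio.h>

#define NSTATES 3                    /* states 0,1,2; state 2 accepts */

/* delta[state][symbol]: bitmask of successor states (0='a', 1='b').
   NFA for (a|b)*ab: state 0 loops on a/b, 0 --a--> 1, 1 --b--> 2.   */
static const uint8_t delta[NSTATES][2] = {
    { 0x03, 0x01 },                  /* 0 --a--> {0,1}, 0 --b--> {0} */
    { 0x00, 0x04 },                  /* 1 --a--> {},    1 --b--> {2} */
    { 0x00, 0x00 },                  /* 2 has no outgoing edges      */
};

static int nfa_accepts(const char *input)   /* input over {a,b} only */
{
    uint8_t current = 0x01;          /* start in state 0 only        */
    for (; *input; input++) {
        uint8_t next = 0;
        for (int s = 0; s < NSTATES; s++)
            if (current & (1u << s))
                next |= delta[s][*input - 'a'];
        current = next;              /* one DFA-style step on the whole subset */
    }
    return (current & 0x04) != 0;    /* accept if state 2 is in the set */
}

int main(void)
{
    printf("%d\n", nfa_accepts("abab"));    /* 1: ends in "ab" */
    printf("%d\n", nfa_accepts("abba"));    /* 0               */
    return 0;
}

Precomputing a DFA state for each distinct bitmask that can arise is the subset construction proper; with NSTATES NFA states there are at most 2^NSTATES subsets, which is where the potential exponential blow-up discussed next comes from.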
> What is the point in such a distinction?
The NFA size is linear in the regex size, but it can potentially use exponential time because of the backtracking (it usually doesn't). The DFA is fast but the number of states can potentially be exponential in the regex size (it usually isn't). Also, compiling the NFA to a DFA takes time. So it's a trade-off. DFA conversion is more likely to be worthwhile if you're scanning for the pattern a lot of times.
rickman <gnuarm.deletethisbit@gmail.com> writes:
> In your example you only need to test state A for every possible input
> to verify it doesn't go to state B.
An input is a sequence of events e1,e2,e3,...e_n. For example, think of pressing buttons and flipping switches in the cockpit of the space shuttle. Each flip is an event that can represent any of the 100s of switches being flipped. Maybe a drunk astronaut flips switch 5, then switch 27, then switch 9, etc.

Question: is there some sequence of flips, maybe 1000 flips long, that blows up the space shuttle by accident? Your spec might require that no such sequence exists, but how do you make sure? You can't exhaustively search every sequence of 1000 flips to test that. And even if you somehow check all the 1000-flip sequences, you still have to worry about the possibility of a 2000-flip sequence, etc.

There are ways to deal with this problem (the link I gave is a useful approach) but it's not so simple, and lots of work is done on it.
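One standard trick, for a property like "no sequence of flips, of any length, reaches the blown-up state", is to enumerate states rather than sequences. A minimal sketch in C, with a made-up 4-state, 3-event transition table: a breadth-first search visits each state at most once, so it covers flip sequences of every length in O(states x events) steps:

#include <stdbool.h>
#include <stdio.h>

#define NSTATES 4
#define NEVENTS 3
#define BAD_STATE 3                  /* hypothetical "blown up" state */

/* next[state][event]: made-up transition table for illustration */
static const int next[NSTATES][NEVENTS] = {
    { 0, 1, 2 },
    { 1, 2, 0 },
    { 2, 2, 1 },
    { 3, 3, 3 },
};

static bool bad_state_reachable(int start)
{
    bool seen[NSTATES] = { false };
    int queue[NSTATES], head = 0, tail = 0;

    seen[start] = true;
    queue[tail++] = start;
    while (head < tail) {
        int s = queue[head++];
        if (s == BAD_STATE)
            return true;
        for (int e = 0; e < NEVENTS; e++)
            if (!seen[next[s][e]]) {
                seen[next[s][e]] = true;
                queue[tail++] = next[s][e];   /* each state enqueued at most once */
            }
    }
    return false;
}

int main(void)
{
    /* prints 0 here: no event leads from states 0,1,2 into state 3 */
    printf("reachable: %d\n", bad_state_reachable(0));
    return 0;
}

The catch, which comes up again later in the thread, is that a real system's state count (every counter and flag multiplied together) can be astronomical, which is why tools like the SAT-based ImProve mentioned here try to avoid enumerating states one by one.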
> You don't need to consider infinitely long sequences because it is a
> FINITE state machine.
Not infinitely long, but of arbitrarily large finite length. You don't have to worry about infinitely long sequences, but there are infinitely many of finite size, and you do have to worry about those. It's like saying there is no infinite integer (1,2,3... are all finite). But since they can be arbitrarily large, the collection of them is not finite.
On 07/01/18 08:39, Paul Rubin wrote:
> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>> Hence I'd like to see some independent assessment on some _existing_
>> real-world protocols/FSMs.
>
> I don't know where to find any of those. Got any suggestions?
The early 90s one was the FDDI token ring LAN. The various FSMs had to deal with lost token, duplicated token, broken links, split and reconnected rings, traffic priority _etc_. IMNSHO there's a reason why nice simple ethernet is common today and token rings aren't :)

The noughties ones were telecoms protocols such as ETSI Parlay. Those specific protocols were not much more than an attempt to provide a common interface that would simplify connecting one telco's equipment to another's. The problem was that the manufacturers all implemented slightly different versions of the underlying protocols, so the Parlay specs tended to be a lowest common _multiple_ (but where each manufacturer only implemented a subset!)

Quite frankly, you would have to be brave to spend time looking at such horrors!
>> Failing that, how do their techniques perform when developing _new_
>> protocols/FSMs? If they are generally a help, then that would be
>> great!
>
> I'd be interested to know what you think of ImProve. The link I gave
> earlier:
>
> https://github.com/tomahawkins/improve/wiki/ImProve
>
> I'd look at the "verification primer" section to get an idea of what it
> does. Don't worry about understanding the syntax etc, which takes some
> getting used to.
>
> The basic approach is to control the state space explosion using a SAT
> solver. Those have gotten very powerful in recent years. They might
> get even more powerful with reinforcement learning techniques soon.
I'll have a quick look at that; thanks. I wonder how much I will understand? :) Ideally there would be some simple pedagogical examples, _plus_ some statements of success with "real world" protocols.
On 07/01/18 03:40, rickman wrote:
> Tom Gardner wrote on 1/6/2018 8:17 PM:
>> On 06/01/18 23:57, rickman wrote:
>>> Tom Gardner wrote on 1/5/2018 4:12 AM:
>>>> On 04/01/18 19:31, rickman wrote:
>>>>> rickman wrote on 1/4/2018 3:33 AM:
>>>>>> Les Cargill wrote on 1/3/2018 6:59 PM:
>>>>>>> rickman wrote:
>>>>>>>> Les Cargill wrote on 1/3/2018 7:07 AM:
>>>>>>>>> rickman wrote:
>>>>>>>>>> Ed Prochak wrote on 12/19/2017 11:19 AM:
>>>>>>>>>>> On Saturday, December 16, 2017 at 12:56:31 PM UTC-5, Les
>>>>>>>>>>> Cargill wrote:
>>>>>>>>>>>> Mike Perkins wrote:
>>>>>>>>>>>>> On 09/12/2017 16:05, pozz wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> I use it where I have a number of 'tasks' which then
>>>>>>>>>>>>> interact with each other.
>>>>>>>>>>>>>
>>>>>>>>>>>>> If your system is a pure state machine then there is
>>>>>>>>>>>>> no need for an RTOS.
>>>>>>>>>>>>>
>>>>>>>>>>>> Indeed. Indeed.
>>>>>>>>>>>
>>>>>>>>>>> If it is just one state machine. Then yes, indeed. ed
>>>>>>>>>>
>>>>>>>>>> It can always be one state machine. The only issue is how
>>>>>>>>>> complex the combined state machine is.
>>>>>>>>>
>>>>>>>>> It's always a case of "how long should a man's legs be?"
>>>>>>>>> Lincoln is alleged to have said "long enough to touch the
>>>>>>>>> ground."
>>>>>>>>>
>>>>>>>>>> Actually the issue is not how many state machines you have.
>>>>>>>>>> It is the timing requirements of the various state
>>>>>>>>>> machines. If your timing is lax, sequential operation of
>>>>>>>>>> multiple machines is easy. But this often becomes a huge
>>>>>>>>>> discussion with everyone talking past each other.
>>>>>>>>>
>>>>>>>>> They should use state machines to keep track then :)
>>>>>>>>>
>>>>>>>>> If your system can be decomposed as roughly events cross
>>>>>>>>> state, you have a snowball's chance of "understanding" it.
>>>>>>>>
>>>>>>>> Sorry, I don't know what this means "roughly events cross
>>>>>>>> state".
>>>>>>>
>>>>>>> You are in state A. Event 42 occurs. Lookup.... ah, here - if we
>>>>>>> get event 42 whilst in state A, we move to state B and send
>>>>>>> message m.
>>>>>>
>>>>>> So what makes that so hard to understand?
>>>>>
>>>>> To be more explicit, I think every FSM I've ever coded was along the
>>>>> lines of a case statement on the state with IF conditions on all the
>>>>> interesting inputs. I find this structure to be self documenting if
>>>>> the signal names are chosen well.
>>>>>
>>>>> Why is this structure hard to understand?
>>>>
>>>> I've seen a commercial case where that type of structure was mutated by
>>>> people under pressure to:
>>>>  - have a single state machine for all different customers
>>>>  - make the minimum change
>>>>  - do it fast
>>>>  - where the original designers had left
>>>>  - a custom domain specific language, ugh
>>>>
>>>> The result was an unholy unmaintainable mess, with if-then-elses nested
>>>> up to 10 (ten!) deep.
>>>>
>>>> Completely insane, of course.
>>>>
>>>> My preference is for a dispatch table based on event+state, since that
>>>> forces you to consider all possibilities. The dispatch table can be
>>>> either a 2D array of function pointers, or inherent in an OOP
>>>> inheritance hierarchy where the hierarchy is a direct mirror of the
>>>> state hierarchy.
>>>
>>> Bad code can be written by any method.
>>
>> And some methods help writing good code, and some hinder writing good
>> code.
>>
>> What's your next motherhood and apple pie statement?
>
> What is your next pointless anecdote? There are many examples of work
> going well and work going poorly. What is the point of any one example?
> A method is only as good as those using it.
Thank you for a direct and simple answer to my question.
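For reference, the dispatch-table style advocated in the quoted exchange can be quite small. A minimal sketch in C with made-up states, events and handlers: a 2D array of function pointers indexed by (state, event), where every cell must be filled, so no combination can be silently overlooked:

#include <stdio.h>

typedef enum { STATE_A, STATE_B, NUM_STATES } state_t;
typedef enum { EV_41, EV_42, NUM_EVENTS } event_t;

typedef state_t (*handler_t)(void);

static state_t stay_a(void) { return STATE_A; }
static state_t stay_b(void) { return STATE_B; }
static state_t a_to_b(void) { puts("send message m"); return STATE_B; }

/* dispatch[state][event]: the whole behaviour in one table */
static const handler_t dispatch[NUM_STATES][NUM_EVENTS] = {
    /*          EV_41   EV_42  */
    /* A */  { stay_a, a_to_b },   /* event 42 in state A: go to B, send m */
    /* B */  { stay_b, stay_b },
};

int main(void)
{
    state_t s = STATE_A;
    const event_t trace[] = { EV_41, EV_42, EV_41 };  /* made-up event trace */

    for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++)
        s = dispatch[s][trace[i]]();                  /* one lookup per event */
    return 0;
}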
On Sat, 6 Jan 2018 11:16:36 +0000, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

> On 06/01/18 10:32, upsidedown@downunder.com wrote:
>> On Sat, 6 Jan 2018 09:45:31 +0000, Tom Gardner
>> <spamjunk@blueyonder.co.uk> wrote:
>>
>>> On 06/01/18 00:35, Paul Rubin wrote:
>>>> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>>>>> Processors with up to 32 cores and 4000MIPS, and interrupt latencies
>>>>> of 10ns.
>>>>
>>>> Why stop there? http://greenarraychips.com
>>>
>>> 1) I believe you can simply add more chips
>>> in parallel, and the comms simply works, albeit
>>> with increased latency
>>>
>>> 2) hardware is easy; the software is more difficult
>>> and more important. XMOS has very good software
>>> support (based on 40 years of theory and practical
>>> implementations), plus excellent integration with
>>> the hardware.
>>
>> The xCore style architecture is nice for multichannel DSP
>> applications, in which each channel is assigned a dedicated core and
>> the single sampling clock is routed to all cores, starting the
>> execution of all cores at each sample clock transition.
>
> It is great for *much* more than that, e.g. a general
> purpose test instrument.
>
> My "kick the tyres" test was a reciprocal frequency
> counter with zero added hardware; software counted
> the transitions in 62.5Mb/s input data streams
>  - two cores for the two primary inputs; absolute
>    timing guarantees were required. I could squeeze it
>    so that each input "cycle" was suspended for ~100ns
>    before the next input event arrived
>  - one core for front panel input
>  - one core for front panel output
>  - one controller core orchestrating all the other cores
>  - many cores for a USB interface
>
> Having guaranteed i/o timing in the presence of random
> interaction with a user and a host computer is rather
> nice :) The IDE gave those guarantees; I don't know of
> any other processor that could do that.
>
> Interrupts: get thee behind me Satan!
What's so bad about interrupts? They are nice for kicking around the state machine. In the simplest case non-interrupt code is required only at startup to set up the ISRs and then go to a low power wait-for-interrupt loop. The processor spends most of the time in the wait-for-interrupt instruction.

Things get a bit more complicated with multiple interrupts, when it would be preferable to prioritize the interrupts in hardware, in order to control the deadlines.

The nice thing about xCore is that each "interrupt" starts immediately in a dedicated core, so there are no deadline issues in a properly designed system.
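A minimal sketch of that structure in C, assuming a Cortex-M style part; the vector name and the setup call are placeholders, not any real device's:

#include <stdint.h>

static volatile enum { IDLE, ACTIVE } state = IDLE;

extern void configure_timer_interrupt(void);  /* placeholder: device-specific setup */

void TIMER_IRQHandler(void)       /* hypothetical interrupt vector name */
{
    /* (clear the peripheral's pending flag here, device-specific)      */
    switch (state) {              /* the ISR kicks the state machine    */
    case IDLE:   state = ACTIVE; break;
    case ACTIVE: state = IDLE;   break;
    }
}

int main(void)
{
    configure_timer_interrupt();  /* all non-interrupt work is setup     */
    for (;;)
        __asm volatile ("wfi");   /* low power: sleep until an interrupt */
}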
>
> Overall I was /remarkably/ pleased at how simple
> and predictable everything turned out to be. The
> data sheets and programming tutorials are exemplary
> in their conciseness and correctness. That's a benefit
> of having the same team design the hardware and
> software without being unduly constrained by the PDP11
> and 1970 compiler technology :)
>
>> The xCore could also be used to implement PLCs (Programmable Logic
>> Controller) with each execution loop executed in a dedicated core,
>> usually with different clocks for each loop. Quite a lot of problems
>> can be solved in a PLC type environment and the IEC 61131 programming
>> environment is quite handy. IEC 61131 has multiple kinds of
>> programming languages, e.g. ladder logic or ST (Structured Text, a
>> Modula/Pascal style programming language).
>
> I don't know anything about PLCs, but I'm
> suspicious about the clocking strategy you
> mention.
In a PLC you have at least one loop that is executed at regular intervals, typically controlled by a timer. Several steps are done in each iteration of the loop:

1. Wait for activation from a timer
2. Read (sample) all needed inputs
3. Execute the processing for all signals in this sample
4. Write processing results to output pins
5. Go to sleep waiting for the next timer activation.

Of course this sounds stupid if you have only a few signals to handle, but if the loop processes tens or hundreds of signals in each iteration, the overhead per signal of running the loop remains low.

When processing binary signals, the operation done on each signal can be quite simple, such as some AND and OR functions. If interrupts or separate threads were used for each signal, the overhead would be huge compared to the primitive actual work done. A loop for binary signals would typically be executed every 1 to 20 ms.

To implement flip-flops etc. some storage is needed between loop iterations to remember the state from the previous loop iteration, i.e. there can be as many state machines as there are internal or external signals.

For analog signals a 50 to 200 ms cycle time is often sufficient. Reading (sampling) the input would typically mean reading the ADC, sample processing e.g. running one sample through the PID controller, and output typically feeding a DAC. The PID needs some storage across the loop iterations to maintain its internal state.

A loop updating a display could typically be executed every 500 ms.

When implementing multiple loops with some RTOS, the binary loop would have the highest priority, followed by the analog loop and, at the bottom, the non-time-sensitive display loop. The human user would hardly notice a 50 ms timing jitter if the analog loop kicks in during a display update.

With xCore type systems, you can assign a dedicated core to each loop, without worrying about priorities. In practice, you could have 2, 5, 20, 50 and 200 ms binary loops and 40, 200, 400 and 2000 ms analog loops. With xCore, you could still assign a core of its own to each loop.
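A minimal sketch of such a scan loop in C; the I/O helpers are placeholders for whatever the hardware provides, and the PID integrator lives in static storage so it survives between iterations, as described above:

#include <stdint.h>

#define NUM_BINARY 64

extern void wait_for_timer_tick(void);          /* placeholder: sleep to next tick */
extern void read_inputs(uint8_t *in);           /* placeholder: sample all inputs  */
extern void write_outputs(const uint8_t *out);  /* placeholder: drive output pins  */

static uint8_t inputs[NUM_BINARY], outputs[NUM_BINARY];
static float pid_integral;                      /* state kept across iterations    */

float pid_step(float setpoint, float measured)  /* would run in the slower analog loop */
{
    const float kp = 1.0f, ki = 0.1f;           /* made-up gains */
    float error = setpoint - measured;
    pid_integral += error;
    return kp * error + ki * pid_integral;
}

void binary_scan_loop(void)                     /* e.g. a 1 to 20 ms cycle */
{
    for (;;) {
        wait_for_timer_tick();                  /* 1. wait for activation       */
        read_inputs(inputs);                    /* 2. sample all needed inputs  */
        for (int i = 0; i + 1 < NUM_BINARY; i += 2)
            outputs[i] = inputs[i] & inputs[i + 1];  /* 3. cheap per-signal work */
        write_outputs(outputs);                 /* 4. write the results         */
    }                                           /* 5. loop back to sleep        */
}

The analog and display loops would have the same shape, just with slower ticks and their own cross-iteration state.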
>
> Better, I would presume, to let each core
> have a 100MHz clock and to stall until it
> has something to do - i.e. an i/o event or
> message event from another core.
>
> Of course, such an i/o or message event
> could well be a "software" clock.
>
>> However, I do _not_ think that the xCore would be very handy for ad
>> hoc parallel processing, even with XMOS programming environment (which
>> is quite versatile).
>
> There are two points there.
>
> Firstly the xCORE processors are embedded devices
> not general purpose devices. Anybody planning to
> use them as general purpose compute engines will
> be, um, disappointed :)
>
> Having said that, ISTR some having one core being
> an ARM processor (other 7 being xCORE), and the
> IDE can generate code for ARMs. I have not
> investigated any of that.
>
> Secondly, xC is strongly based on Hoare's Communicating
> Sequential Processes (CSP), as was Occam in the 80s.
> Several new languages (Go, Rust) also contain CSP
> syntax and semantics, thus indicating that people still
> think CSP is A Good Thing. In addition, it appears
> that message passing is still the most reliable
> general purpose way to structure large scale High
> Performance Computing (HPC) programs.
>
> That raises the question as to whether CSP (and "hence"
> xC) is, on balance, better/worse than the alternatives.
> That is not clear to me.
>
> Mind you, I do like xC's "asynchronous" "interface"
> extensions to CSP; they appear to be necessary on
> limited core devices
>
>>> 3) look at the investor lists for each company
>>>
>>> IMNSHO, point 2 is the killer advantage/USP.
On 07/01/18 11:35, upsidedown@downunder.com wrote:
> On Sat, 6 Jan 2018 11:16:36 +0000, Tom Gardner
> <spamjunk@blueyonder.co.uk> wrote:
>
>> On 06/01/18 10:32, upsidedown@downunder.com wrote:
>>> On Sat, 6 Jan 2018 09:45:31 +0000, Tom Gardner
>>> <spamjunk@blueyonder.co.uk> wrote:
>>>
>>>> On 06/01/18 00:35, Paul Rubin wrote:
>>>>> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>>>>>> Processors with up to 32 cores and 4000MIPS, and interrupt latencies
>>>>>> of 10ns.
>>>>>
>>>>> Why stop there? http://greenarraychips.com
>>>>
>>>> 1) I believe you can simply add more chips
>>>> in parallel, and the comms simply works, albeit
>>>> with increased latency
>>>>
>>>> 2) hardware is easy; the software is more difficult
>>>> and more important. XMOS has very good software
>>>> support (based on 40 years of theory and practical
>>>> implementations), plus excellent integration with
>>>> the hardware.
>>>
>>> The xCore style architecture is nice for multichannel DSP
>>> applications, in which each channel is assigned a dedicated core and
>>> the single sampling clock is routed to all cores, starting the
>>> execution of all cores at each sample clock transition.
>>
>> It is great for *much* more than that, e.g. a general
>> purpose test instrument.
>>
>> My "kick the tyres" test was a reciprocal frequency
>> counter with zero added hardware; software counted
>> the transitions in 62.5Mb/s input data streams
>>  - two cores for the two primary inputs; absolute
>>    timing guarantees were required. I could squeeze it
>>    so that each input "cycle" was suspended for ~100ns
>>    before the next input event arrived
>>  - one core for front panel input
>>  - one core for front panel output
>>  - one controller core orchestrating all the other cores
>>  - many cores for a USB interface
>>
>> Having guaranteed i/o timing in the presence of random
>> interaction with a user and a host computer is rather
>> nice :) The IDE gave those guarantees; I don't know of
>> any other processor that could do that.
>>
>> Interrupts: get thee behind me Satan!
>
> What's so bad about interrupts? They are nice for kicking around the
> state machine. In the simplest case non-interrupt code is required
> only at startup to set up the ISRs and then go to a low power
> wait-for-interrupt loop. The processor spends most of the time in the
> wait-for-interrupt instruction.
As you note just below, they make it somewhat difficult to guarantee latency and throughput, except in "embarrassingly slow" applications. I will admit to using wry overstatements to shock youngsters out of the mistaken belief that interrupts are the only mechanism. I doubt that's necessary in your case :)
> Things get a bit more complicated with multiple interrupts, when it
> would be preferable to prioritize the interrupts in hardware, in
> order to control the deadlines.
>
> The nice thing about xCore is that each "interrupt" starts immediately
> in a dedicated core, so there are no deadline issues in a properly
> designed system.
>
>> Overall I was /remarkably/ pleased at how simple
>> and predictable everything turned out to be. The
>> data sheets and programming tutorials are exemplary
>> in their conciseness and correctness. That's a benefit
>> of having the same team design the hardware and
>> software without being unduly constrained by the PDP11
>> and 1970 compiler technology :)
>>
>>> The xCore could also be used to implement PLCs (Programmable Logic
>>> Controller) with each execution loop executed in a dedicated core,
>>> usually with different clocks for each loop. Quite a lot of problems
>>> can be solved in a PLC type environment and the IEC 61131 programming
>>> environment is quite handy. IEC 61131 has multiple kinds of
>>> programming languages, e.g. ladder logic or ST (Structured Text, a
>>> Modula/Pascal style programming language).
>>
>> I don't know anything about PLCs, but I'm
>> suspicious about the clocking strategy you
>> mention.
>
> In a PLC you have at least one loop that is executed at regular
> intervals, typically controlled by a timer. Several steps are done
> in each iteration of the loop:
>
> 1. Wait for activation from a timer
> 2. Read (sample) all needed inputs
> 3. Execute the processing for all signals in this sample
> 4. Write processing results to output pins
> 5. Go to sleep waiting for the next timer activation.
>
> Of course this sounds stupid if you have only a few signals to
> handle, but if the loop processes tens or hundreds of signals in each
> iteration, the overhead per signal of running the loop remains low.
>
> When processing binary signals, the operation done on each signal can
> be quite simple, such as some AND and OR functions. If interrupts or
> separate threads were used for each signal, the overhead would be
> huge compared to the primitive actual work done. A loop for binary
> signals would typically be executed every 1 to 20 ms.
>
> To implement flip-flops etc. some storage is needed between loop
> iterations to remember the state from the previous loop iteration,
> i.e. there can be as many state machines as there are internal or
> external signals.
>
> For analog signals a 50 to 200 ms cycle time is often sufficient.
> Reading (sampling) the input would typically mean reading the ADC,
> sample processing e.g. running one sample through the PID controller,
> and output typically feeding a DAC. The PID needs some storage across
> the loop iterations to maintain its internal state.
>
> A loop updating a display could typically be executed every 500 ms.
>
> When implementing multiple loops with some RTOS, the binary loop would
> have the highest priority, followed by the analog loop and, at the
> bottom, the non-time-sensitive display loop. The human user would
> hardly notice a 50 ms timing jitter if the analog loop kicks in
> during a display update.
That all makes sense, and I would class that as an "embarrassingly slow" application domain.
> With xCore type systems, you can assign a dedicated core to each loop,
> without worrying about priorities. In practice, you could have 2, 5,
> 20, 50 and 200 ms binary loops and 40, 200, 400 and 2000 ms analog
> loops. With xCore, you could still assign a core of its own to each loop.
I don't think xCORE would offer any killer advantages in such an implementation. OTOH, xCORE could simultaneously communicate over ethernet or USB, or control LCD segments directly (i.e. preserving zero DC bias), with _zero_ changes to the latency and throughput. That kind of consideration can make system design less brittle in the face of evolving requirements. Whether that is beneficial depends on circumstances, of course.
>> Better, I would presume, to let each core
>> have a 100MHz clock and to stall until it
>> has something to do - i.e. an i/o event or
>> message event from another core.
>>
>> Of course, such an i/o or message event
>> could well be a "software" clock.
>>
>>> However, I do _not_ think that the xCore would be very handy for ad
>>> hoc parallel processing, even with XMOS programming environment (which
>>> is quite versatile).
>>
>> There are two points there.
>>
>> Firstly the xCORE processors are embedded devices
>> not general purpose devices. Anybody planning to
>> use them as general purpose compute engines will
>> be, um, disappointed :)
>>
>> Having said that, ISTR some having one core being
>> an ARM processor (other 7 being xCORE), and the
>> IDE can generate code for ARMs. I have not
>> investigated any of that.
>>
>> Secondly, xC is strongly based on Hoare's Communicating
>> Sequential Processes (CSP), as was Occam in the 80s.
>> Several new languages (Go, Rust) also contain CSP
>> syntax and semantics, thus indicating that people still
>> think CSP is A Good Thing. In addition, it appears
>> that message passing is still the most reliable
>> general purpose way to structure large scale High
>> Performance Computing (HPC) programs.
>>
>> That raises the question as to whether CSP (and "hence"
>> xC) is, on balance, better/worse than the alternatives.
>> That is not clear to me.
>>
>> Mind you, I do like xC's "asynchronous" "interface"
>> extensions to CSP; they appear to be necessary on
>> limited core devices
>>
>>>> 3) look at the investor lists for each company
>>>>
>>>> IMNSHO, point 2 is the killer advantage/USP.
Paul Rubin wrote on 1/7/2018 3:58 AM:
> rickman <gnuarm.deletethisbit@gmail.com> writes:
>> In your example you only need to test state A for every possible input
>> to verify it doesn't go to state B.
>
> An input is a sequence of events e1,e2,e3,...e_n. For example, think of
> pressing buttons and flipping switches in the cockpit of the space
> shuttle. Each flip is an event that can represent any of the 100s of
> switches being flipped. Maybe a drunk astronaut flips switch 5, then
> switch 27, then switch 9, etc.
>
> Question: is there some sequence of flips, maybe 1000 flips long, that
> blows up the space shuttle by accident? Your spec might require that no
> such sequence exists, but how do you make sure? You can't exhaustively
> search every sequence of 1000 flips to test that. And even if you
> somehow check all the 1000-flip sequences, you still have to worry about
> the possibility of a 2000-flip sequence, etc.
>
> There are ways to deal with this problem (the link I gave is a useful
> approach) but it's not so simple, and lots of work is done on it.
>
>> You don't need to consider infinitely long sequences because it is a
>> FINITE state machine.
>
> Not infinitely long, but of arbitrarily large finite length. You don't
> have to worry about infinitely long sequences, but there are infinitely
> many of finite size, and you do have to worry about those.
There is *no* infinity in finite state machines. Not input sequence lengths or input sequence combinations. I can count all possible combinations of inputs and states, which is why it is a *finite* state machine.

Perhaps the part you are missing is that it is irrelevant how a state was reached. Once in a state, it is the same whether it was reached from states A through L or from states M through Z, and the input which caused the transition to the current state is also irrelevant. The only things that matter are the state you are in and the input at that point.

I've already explained how you could analyze your example of states A and B being separated by state C always. Making up arbitrarily complex scenarios is not useful.
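For what it's worth, a purely local property like "no single input takes state A straight to state B" really is checkable with one pass over every (state, input) pair. A minimal sketch in C with a made-up 4-state transition table:

#include <stdio.h>

#define NSTATES 4
#define NINPUTS 3
#define STATE_A 0
#define STATE_B 1

/* next[state][input]: made-up transition table for illustration */
static const int next[NSTATES][NINPUTS] = {
    { 0, 2, 3 },     /* state A never goes directly to B */
    { 1, 0, 2 },
    { 2, 1, 1 },     /* state C (2) may go to B          */
    { 3, 0, 2 },
};

int main(void)
{
    for (int in = 0; in < NINPUTS; in++)
        if (next[STATE_A][in] == STATE_B)
            printf("violation: A --%d--> B\n", in);
    puts("local check done: NSTATES x NINPUTS pairs examined");
    return 0;
}

Properties about whole sequences, rather than single transitions, are where the disagreement below comes in.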
> It's like saying there is no infinite integer (1,2,3... are all finite).
> But since they can be arbitrarily large, the collection of them is not
> finite.
Hmmm... clearly you don't understand the nature of the infinite. Integers *are* infinite.

--
Rick C

Viewed the eclipse at Wintercrest Farms, on the centerline of totality since 1998
Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>> I don't know where to find any of those. Got any suggestions?
> The early 90s one was the FDDI token ring LAN.
Well, I meant I hoped for a url that I could look at, with a not too verbose description of the FSMs.
> The various FSMs had to deal with lost token, duplicated token, broken
> links, split and reconnected rings, traffic priority _etc_.
Without knowing the requirements I can't tell why this has to be implemented with FSMs.
> I'll have a quick look at that; thanks. I wonder how much I will
> understand? :)
Basically that you write the requirements in what I hope is a comprehensible way, and also write the code in a fairly traditional way (except for the weird syntax). The prover then verifies that the code implements the requirements.
rickman <gnuarm.deletethisbit@gmail.com> writes:
> There is *no* infinity in finite state machines. Not input sequence
> lengths or input sequence combinations.
There are infinitely many sequences of inputs. Of course, since it's an FSM, you end up getting repeated states. For example, consider the event sequence [A,B]. Then consider [A,A,B]. Then [A,A,A,B]. Then [A,A,A,A,B], etc. There are infinitely many such sequences. Obviously after a while they will start repeating states, and you can analyze that example since every transition except the last one is on event A.

The problem happens when there are multiple transitions from each state, so you have to consider all the combinations [A,C], [B,C], [A,A,C], [A,B,C], [B,A,C], [B,B,C], etc. The number of such combinations is exponential in the number of states.

And the state space itself can be exponentially large. E.g. if a straightforward code implementation of something involves a 3-bit retry counter, you need 8 states to handle all the possible counts, which is maybe ok. But if it's a 64-bit packet size counter, the number of states required is ridiculous.
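To make the counter example concrete: flattening an extended-state variable into explicit FSM states multiplies the state count, because the explicit machine is the product of the control phases and the counter values. A minimal sketch in C with made-up names:

#include <stdint.h>
#include <stdio.h>

enum phase { SENDING, WAITING, FAILED };   /* 3 control phases           */

struct machine {
    enum phase phase;                      /* what the code "does" next  */
    uint8_t    retries;                    /* 3-bit counter, values 0..7 */
};

int main(void)
{
    /* explicit FSM states = phases x counter values = 3 x 2^3 = 24 */
    unsigned long long phases = 3, counter_values = 1ULL << 3;
    printf("explicit states: %llu\n", phases * counter_values);

    /* swap in a 64-bit packet-size counter and the explicit machine
       would need 3 * 2^64 states: hopeless to enumerate directly.   */
    return 0;
}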
> I can count all possible combinations of inputs and states which is
> why it is a *finite* state machine.
We're talking about what you can do in practice on a real computer, not whether something is mathematically finite. Can you count to 1000? Sure. To a million? Maybe. To a googol (1e100)? I don't think so, even though a googol is finite.
> Hmmm... clearly you don't understand the nature of the infinite.
> Integers *are* infinite.
I am always willing to learn. I'd be interested in seeing an example of an infinite integer.
On 07/01/18 18:43, Paul Rubin wrote:
> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>>> I don't know where to find any of those. Got any suggestions?
>> The early 90s one was the FDDI token ring LAN.
>
> Well, I meant I hoped for a url that I could look at, with a not too
> verbose description of the FSMs.
I haven't looked at it in 25 years; I have no idea whether there are any live FDDI rings!
>> The various FSMs had to deal with lost token, duplicated token, broken
>> links, split and reconnected rings, traffic priority _etc_.
>
> Without knowing the requirements I can't tell why this has to be
> implemented with FSMs.
It was defined/specified that way in the FDDI standards. I can't think of a better way to define/specify and implement such functionality.
>> I'll have a quick look at that; thanks. I wonder how much I will
>> understand? :)
>
> Basically that you write the requirements in what I hope is a
> comprehensible way, and also write the code in a fairly traditional way
> (except for the weird syntax). The prover then verifies that the code
> implements the requirements.
I had a quick skim of the "verification primer" and, quite reasonably, it is a small simple pedagogical example. It is of similar size to the examples I saw in the early 90s, and isn't representative of the FSMs I've had to deal with.

The kind of thing I'd like to see is
 - presuming a token ring is partitioned and then reconnected
 - define the maximum time that two tokens can be circulating
   - before detection, and
   - before one is deleted and normal operation is resumed
