EmbeddedRelated.com
Forums

Parallax Propeller

Started by Peter Jakacki January 13, 2013
On Sun, 20 Jan 2013 17:12:17 -0500, Walter Banks
<walter@bytecraft.com> wrote:

>> Look how many interrupts a modern PC or large embedded system has - they
>> outnumber the number of cores by 50 to 1 at least. Interrupts are not
>> going away.
>
> Interrupts are not going away anytime soon.
>
> There are event driven processors that are essentially all interrupts.
>
> Add run to completion (to eliminate preemption overhead) and multiple
> cores for interrupts to use the next available execution unit and a lot of
> processing overheads go away with comparable reduction in software
> complexity.
Of course you could design a core which restarts every time an external or internal interrupt occurs (such as a request packet sent by another core), runs to completion, and puts the core in a low-power halt state. This works for some problems, but sooner or later you end up with a hellish state machine which remembers where you were when the previous interrupt occurred.
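To illustrate the bookkeeping being described, here is a minimal C sketch of such a run-to-completion core; the part and all function names are hypothetical, not taken from any real device:

/* Illustrative only -- every event wakes the core, the handler runs to
 * completion, and the core halts again, so "where we were" has to live
 * in an explicit state variable rather than on a stack. */
#include <stdint.h>

enum xfer_state { IDLE, ADDR_SENT, DATA_PHASE };

static enum xfer_state state = IDLE;      /* survives across wakeups */

/* hypothetical hardware hooks */
extern void start_address_phase(void);
extern void start_data_phase(void);
extern void finish_transfer(void);
extern void halt_until_event(void);

void on_wakeup(void)                      /* entered on every event */
{
    switch (state) {                      /* resume where the last event left us */
    case IDLE:       start_address_phase(); state = ADDR_SENT;  break;
    case ADDR_SENT:  start_data_phase();    state = DATA_PHASE; break;
    case DATA_PHASE: finish_transfer();     state = IDLE;       break;
    }
    halt_until_event();                   /* run to completion, then sleep */
}

With two or three interacting peripherals the state variable quickly multiplies into the "hellish state machine" mentioned above.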
upsidedown@downunder.com wrote:

> Of course you could design a core which restarts every time an
> external or internal interrupt occurs (such as a request packet sent
> by another core), runs to completion, and puts the core in a low-power
> halt state.
>
> This works for some problems, but sooner or later you end
> up with a hellish state machine which remembers where you were when
> the previous interrupt occurred.
It does look as though the AVR TWI interface was designed to be controlled just that way.

Mel.
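The AVR TWI does indeed work roughly that way: each bus event raises one interrupt and leaves a status code in TWSR, and the handler is just a state machine keyed off that code. A rough sketch of a single-byte master write, with error handling and start-up code omitted (not a complete driver):

/* Rough sketch only: dispatch on the TWI status code each time the
 * hardware raises the interrupt.  Error paths are not handled. */
#include <avr/io.h>
#include <avr/interrupt.h>

static volatile uint8_t slave_addr, data_byte;   /* set up before the START */

ISR(TWI_vect)
{
    switch (TWSR & 0xF8) {                       /* prescaler bits masked off */
    case 0x08:                                   /* START transmitted */
        TWDR = slave_addr << 1;                  /* SLA+W */
        TWCR = _BV(TWINT) | _BV(TWEN) | _BV(TWIE);
        break;
    case 0x18:                                   /* SLA+W acked */
        TWDR = data_byte;
        TWCR = _BV(TWINT) | _BV(TWEN) | _BV(TWIE);
        break;
    case 0x28:                                   /* data acked: STOP, done */
        TWCR = _BV(TWINT) | _BV(TWEN) | _BV(TWSTO);
        break;
    default:                                     /* errors: just release the bus */
        TWCR = _BV(TWINT) | _BV(TWEN) | _BV(TWSTO);
        break;
    }
}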
On 1/19/2013 12:29 PM, George Neuner wrote:
> On Fri, 18 Jan 2013 21:07:58 -0500, Ben Bradley
> <ben_u_bradley@etcmail.com> wrote:
>
>> ... I first saw the Propeller mentioned years ago, the 8 32-bit cores
>> thing sounds nice, but no interrupts was a deal killer for me. A year
>> or two back (with maybe earlier mention of the P2) I looked on the
>> "official" support/discussion forums for the Propeller and saw this
>> longish thread on "why doesn't it have interrupts" and there were
>> posts there that covered every objection I've had or seen to a
>> microcontroller not having interrupts, even "why not add interrupts?
>> It would take very little silicon and you don't have to use 'em if you
>> don't want to." It's against that designer guru guy's religion or
>> something.
>
> There has been much discussion in comp.arch re: this very question.
> The consensus has been that interrupts are extremely difficult to
> implement properly (in the hardware and/or microcode), and most chips
> don't do it right, leading to the occasional unavoidable glitch even
> when handler code is written correctly per the CPU documentation.
How do you define "implement properly" for interrupts? Like most things, if interrupts are kept simple, they work. It's the multiple-priority, many-task processing that is hard to do right.
> There also has been much discussion of non-interrupting systems where
> cores can be devoted to device handling. The consensus there is that
> interrupts per se are not necessary, but such systems still require
> inter-processor signaling. There has been considerable debate about
> the form(s) such signaling should take.
There is always more than one way to skin a cat. I'm not convinced interrupts are not necessary; well, maybe I should say "not useful" instead, how's that?

Rick
On 1/19/2013 8:56 PM, bob@bob.com wrote:
> On Sat, 19 Jan 2013 10:42:36 -0800, Paul Rubin
> <no.email@nospam.invalid> wrote:
>
>> Ben Bradley <ben_u_bradley@etcmail.com> writes:
>>> As far as I know there's no other microcontroller that doesn't have
>>> interrupts, and I can't recall one that didn't.
>>
>> The GA144 has no interrupts since you just dedicate a processor to
>> the event you want to listen for. The processors have i/o ports
>> (to external pins or to adjacent processors on the chip) that block
>> on read, so the processor doesn't burn power while waiting for data.
>
> I use timer interrupts mainly to have many nice accurate timers.
>
> It makes it so easy. I could use polled timers in main() but really
> prefer the ease of an interrupt and if it doesn't require any context
> switching, what could be easier?
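The pattern Bob is describing is simple to sketch: one periodic tick interrupt decrements a set of software timers and the main loop only tests for zero. A minimal sketch, with the tick period and the hookup to a real hardware timer assumed rather than taken from any particular part:

/* One tick interrupt gives you as many software timers as you like. */
#include <stdint.h>

#define NUM_TIMERS 8
static volatile uint16_t timers[NUM_TIMERS];     /* remaining ticks */

void tick_isr(void)                              /* assumed to run every 1 ms */
{
    for (int i = 0; i < NUM_TIMERS; i++)
        if (timers[i])
            timers[i]--;
}

/* main-loop usage: no context switching, just start and test */
static inline void start_timer(int i, uint16_t ms) { timers[i] = ms; }
static inline int  timer_expired(int i)            { return timers[i] == 0; }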
In my opinion, the lack of timers is one of a number of significant shortcomings in the GA144. One of the claimed selling features of the device is its low power. But if you need to wait for a specific amount of time, not at all uncommon in the real-time systems that many embedded systems are, you have to put a processor into a spin loop to time it!

If you read Chuck Moore's blog, he spent some time trying to implement a video output without a clock. He only gave up that idea when he found the pixels jittered on the screen because of the vagaries of async processor timing loops. Had he implemented a simple timer on each processor driven by... yes, a global chip clock (Oh! The horror!!!) many timing events would be so much simpler and likely lower power.

5 mW per core in a spin loop. It doesn't take many of those to add up to significant power. In a recent design I considered for the GA144, I found the CPU core expended more than half its power in the spin loop timing the ADC converter, and that was at just a 6% duty cycle! This exceeded the power budget. With an on-chip clock the ADC could have been timed at very low power, possibly making the design practical.

Rick
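As a rough illustration using only the figures quoted above (not measured data): a core spinning at about 5 mW for 6% of the time still averages 5 mW * 0.06 = 0.3 mW spent purely on timing. If, as Rick says, that is more than half the core's total, the delay loop is the single largest consumer in the design, which is exactly the case a cheap hardware timer would eliminate.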
On Jan 23, 5:29 pm, rickman <gnu...@gmail.com> wrote:
> On 1/19/2013 12:29 PM, George Neuner wrote:
>
> > On Fri, 18 Jan 2013 21:07:58 -0500, Ben Bradley
> > <ben_u_brad...@etcmail.com> wrote:
>
> >> ... I first saw the Propeller mentioned years ago, the 8 32-bit cores
> >> thing sounds nice, but no interrupts was a deal killer for me. A year
> >> or two back (with maybe earlier mention of the P2) I looked on the
> >> "official" support/discussion forums for the Propeller and saw this
> >> longish thread on "why doesn't it have interrupts" and there were
> >> posts there that covered every objection I've had or seen to a
> >> microcontroller not having interrupts, even "why not add interrupts?
> >> It would take very little silicon and you don't have to use 'em if you
> >> don't want to." It's against that designer guru guy's religion or
> >> something.
>
> > There has been much discussion in comp.arch re: this very question.
> > The consensus has been that interrupts are extremely difficult to
> > implement properly (in the hardware and/or microcode), and most chips
> > don't do it right, leading to the occasional unavoidable glitch even
> > when handler code is written correctly per the CPU documentation.
>
> How do you define "implement properly" for interrupts? Like most
> things, if interrupts are kept simple, they work. It's the
> multiple-priority, many-task processing that is hard to do right.
>
> > There also has been much discussion of non-interrupting systems where
> > cores can be devoted to device handling. The consensus there is that
> > interrupts per se are not necessary, but such systems still require
> > inter-processor signaling. There has been considerable debate about
> > the form(s) such signaling should take.
>
> There is always more than one way to skin a cat. I'm not convinced
> interrupts are not necessary; well, maybe I should say "not useful"
> instead, how's that?
>
> Rick
I'm inclined to agree, though I've only had experience with 'classic' microprocessors in this regard, so maybe my thoughts on the issue are simply out of date.

I can see that if you have a lot of cores you can effectively make your own interrupt controller by dedicating a core or more to it (sketched below). That idea seems to make sense on a simple device like the GA devices, where each core is very primitive in its own right, so one can argue that the 'cost' of assigning a core to the task of interrupt detection is low.

However, the idea does not sit well with me when talking about complex devices such as the Propeller. Dedicating a cog to interrupt control sounds bonkers to me, especially when a cog has its own video controller - that's real overkill. I get the impression that the Propeller is somewhat dumbed-down for the hobbyist market. I cite its programming language and the lack of interrupts as two examples. Why couldn't they add a 9th super-simple core just for interrupts, that could pipe certain types of interrupts to certain cogs? Best of both worlds.

The TMS99xx family of processors (very old) has 16 prioritised cascading interrupts. Probably inherited from mini-computer architecture. Very, very powerful for its day. Since they were prioritised, a lower level interrupt would not interrupt a higher level interrupt until the higher level ISR terminated. Makes serving multiple interrupts an absolute doddle. Not bad for 1976.
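A rough sketch of the "dedicate a core to interrupt detection" arrangement mentioned above, written as generic C; the pin-wait call and the mailbox layout are hypothetical stand-ins for whatever a given chip actually provides:

/* One core blocks on the event pins and pipes each event to the worker
 * core that owns it, via a shared-memory mailbox per worker. */
#include <stdint.h>

#define NUM_WORKERS 8

extern uint32_t wait_for_any_pin(uint32_t mask);   /* blocks until a watched pin fires */
static volatile uint32_t mailbox[NUM_WORKERS];     /* one event word per worker core */

static const uint32_t pin_owner[NUM_WORKERS] = {   /* which pins each worker handles */
    0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80
};

void dispatcher_core(void)
{
    for (;;) {
        uint32_t pins = wait_for_any_pin(0xFF);    /* an event arrived */
        for (int w = 0; w < NUM_WORKERS; w++)
            if (pins & pin_owner[w])
                mailbox[w] |= pins & pin_owner[w]; /* forward it; worker clears it */
    }
}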
rickman <gnuarm@gmail.com> writes:
> If you read Chuck Moore's blog he spent some time trying to implement a
> video output without a clock. He only gave up that idea when he found
> the pixels jittered on the screen because of the vagaries of async
> processor timing loops.
I played with the PropTerm and found that the CPU-generated VGA bit stream (a CPU got dedicated to the task) resulted in displays which always had a little bit of fuzziness. It worked, and was quite readable, but the sharpness of a regular PC display really made me aware of the limits of a pure software approach to analog generation.

Andy
On Jan 24, 3:03 pm, Mark Wills <markrobertwi...@yahoo.co.uk> wrote:
> The TMS99xx family of processors (very old) has 16 prioritised
> cascading interrupts. Probably inherited from mini-computer
> architecture. Very very powerful for its day. Since they were
> prioritised, a lower level interrupt would not interrupt a higher
> level interrupt until the higher level ISR terminated. Makes serving
> multiple interrupts an absolute doddle. Not bad for 1976.
Doddle? I've never heard that word before. Is a doddle good or bad?
On Jan 25, 6:55 am, Hugh Aguilar <hughaguila...@yahoo.com> wrote:
> On Jan 24, 3:03 pm, Mark Wills <markrobertwi...@yahoo.co.uk> wrote:
>
> > The TMS99xx family of processors (very old) has 16 prioritised
> > cascading interrupts. Probably inherited from mini-computer
> > architecture. Very very powerful for its day. Since they were
> > prioritised, a lower level interrupt would not interrupt a higher
> > level interrupt until the higher level ISR terminated. Makes serving
> > multiple interrupts an absolute doddle. Not bad for 1976.
>
> Doddle? I've never heard that word before. Is a doddle good or bad?
doddle = extremely simple/easy

"Did you manage to fix that bug?"
"Yeah, it was a doddle!"

:-)
On 1/24/2013 6:26 PM, None wrote:
> rickman <gnuarm@gmail.com> writes:
>
>> If you read Chuck Moore's blog he spent some time trying to implement a
>> video output without a clock. He only gave up that idea when he found
>> the pixels jittered on the screen because of the vagaries of async
>> processor timing loops.
>
> I played with the PropTerm and found that the CPU-generated VGA bit stream (a
> CPU got dedicated to the task) resulted in displays which always had a little
> bit of fuzziness. It worked, and was quite readable, but the sharpness
> of a regular PC display really made me aware of the limits of a pure software
> approach to analog generation.
How do you know the display "fuzziness" was due to software timing? I would expect software timing on a clocked processor to be on par with other means of timing. There are other aspects of design that could cause fuzziness or timing ambiguities in the signal.

Rick

None wrote:

> rickman <gnuarm@gmail.com> writes:
>
> > If you read Chuck Moore's blog he spent some time trying to implement a
> > video output without a clock. He only gave up that idea when he found
> > the pixels jittered on the screen because of the vagaries of async
> > processor timing loops.
>
> I played with the PropTerm and found that the CPU-generated VGA bit stream (a
> CPU got dedicated to the task) resulted in displays which always had a little
> bit of fuzziness. It worked, and was quite readable, but the sharpness
> of a regular PC display really made me aware of the limits of a pure software
> approach to analog generation.
I have worked on a couple of event-driven ISA designs. Jitter is visible on displays, but it is equally problematic with control systems. The best solution that I have seen/used is to have the hardware transfer out a precomputed value, or latch an input, on the event interrupt trigger. Output values are almost always known in advance. This minor change has essentially little impact on the processor silicon complexity.

A second important performance issue is to have an easily accessed data area associated with each interrupt source. It means that a lot of common code (PWM, AC phase control...) can be a single executable. In some cases, preloading an index register with the start of the data for that interrupt in hardware gives significant performance improvements.

Walter Banks
Byte Craft Limited
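A minimal sketch of that per-interrupt-source data area in plain C, rather than the hardware-assisted form Walter describes (where the pointer would be preloaded into an index register before the handler runs); the PWM structure and all names are illustrative:

/* One shared handler body, parameterised by whichever per-source data
 * block the dispatch hands it.  Blocks are initialised at startup (not shown). */
#include <stdint.h>

struct pwm_channel {              /* one data area per interrupt source */
    volatile uint32_t *duty_reg;  /* hardware register for the output value */
    uint16_t duty;                /* next value, precomputed in advance */
    uint16_t period;
};

static struct pwm_channel channels[4];

/* single executable shared by every PWM interrupt source */
static void pwm_isr_common(struct pwm_channel *ch)
{
    *ch->duty_reg = ch->duty;     /* transfer out the precomputed value */
    /* compute the value for the *next* event while there is no hurry */
    ch->duty = (uint16_t)((ch->duty + 1) % ch->period);
}

void pwm0_isr(void) { pwm_isr_common(&channels[0]); }
void pwm1_isr(void) { pwm_isr_common(&channels[1]); }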