
ARM Interrupts

Started by Andrew Blackburn July 14, 2006
Guys

The application is real-time embedded. Surely it all depends on the
criticality of the interrupt to be serviced and the amount of latency one
can afford.

I've been looking at the Sharp ARM7s. Plenty of I/O but pants on the interrupt
vectors. High and low priority only, provided by the IPC.

The application, for those interested, is industrial metal detection. We're
talking plenty of signal processing, as well as handling rejection mechanisms,
half a dozen UARTs, and a QVGA display with touch screen.

Cheers for all your input.

Andrew


"Mr. C" <fakeemail@hotmail.com> wrote in message 
news:8k0vb2hc8t7394th9grqagie5d0l7lp8su@4ax.com...
>> The need for several nested interrupt levels usually
>> comes from an attempt to avoid a thread scheduler or
>> attempt to do wrong things in the interrupt service.
>
> No, it comes from trying to use a processor to do real-time things.
> Why are you saying he is trying to avoid using a scheduler? Maybe he
> is trying to do something useful in the real world like count pulses
> from a flow meter. Man, what has happened to "embedded" engineering?
> It is like handling interrupts is some kind of foreign thing.
>
> Lou
On Fri, 21 Jul 2006 20:01:49 GMT, Chris Quayle <devnull@zzzz.co.uk>
wrote:

> Pete Fenelon wrote:
>>
>> Very few designs actually *need* multi-level interrupts -- IMHO they
>> encourage the development of sloppy code, running things that should
>> really be task bodies at above user level. The job of an ISR is to
>> service the interrupt source and get back down to user level for real
>> work to be done ASAP.
>>
>> Unfortunately, baroque interrupt controllers feature on too many CPUs
>> these days, and "because it's there", fully exploiting them becomes a
>> tick-list feature for RTOSes.
>>
>> I've seen designs where the customer demanded a complex multi-level
>> nested interrupt scheme in the OS, then when it turned out to be slow
>> (duuuuh, surprise!) managed to design everything elegantly using only
>> single-level interrupts.
>>
>> A plethora of priorities isn't an excuse for abandoning good design.
>>
>
> If I'm reading this correctly, I'm not sure I agree with all of that.
> Coming from a time in micro design where you had only wired-OR IRQ and
> NMI, the problem is that you can execute a page of code just to get to
> the device that caused the interrupt, which is not very helpful.
>
> The good designs, IMHO, are the ones that have fully vectored interrupts
> and priority levels that can be assigned to the different on- and
> off-chip peripherals via a register bitmap, in a user-preferred order.
> This provides the best response time and allows high-priority devices
> like scheduling timers to be set to a higher priority than serial
> drivers etc. It's also good for code modularity...
I fully agree with this. Where interrupts are implemented as in the PC
architecture, one ends up with extreme difficulty in meeting some
real-time deadlines. Whoever decided to make the keyboard interrupt the
second highest priority on the PC should be shot. The fact that they used
a poorly designed interrupt controller does not help either. With a
properly designed interrupt controller as described above, one would have
had the option of changing the interrupt priorities to something sensible.

Regards
Anton Erasmus
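As a concrete (and entirely hypothetical) illustration of the "service the
source, defer the real work" approach Pete argues for above: the ISR only
drains the receive register into a small ring buffer, and a task or main
loop does the actual processing later. The register addresses, names, and
buffer layout below are placeholders for illustration, not taken from any
real part.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical memory-mapped UART registers -- addresses are placeholders. */
#define UART_STATUS   (*(volatile uint32_t *)0x40001000u)
#define UART_DATA     (*(volatile uint32_t *)0x40001004u)
#define UART_RX_READY (1u << 0)

/* Tiny single-producer/single-consumer byte queue: filled by the ISR,
 * drained by a task or the main loop. */
#define RX_QUEUE_LEN 64u
static volatile uint8_t  rx_queue[RX_QUEUE_LEN];
static volatile uint32_t rx_head, rx_tail;

/* ISR: grab the byte, stash it, get out.  No parsing, no printf, no waiting. */
void uart_rx_isr(void)
{
    while (UART_STATUS & UART_RX_READY) {
        uint32_t next = (rx_head + 1u) % RX_QUEUE_LEN;
        if (next != rx_tail) {
            rx_queue[rx_head] = (uint8_t)UART_DATA;
            rx_head = next;
        } else {
            (void)UART_DATA;        /* queue full: still clear the source */
        }
    }
}

/* Task/background context: the real protocol work happens here. */
bool uart_rx_poll(uint8_t *out)
{
    if (rx_tail == rx_head)
        return false;               /* nothing pending */
    *out = rx_queue[rx_tail];
    rx_tail = (rx_tail + 1u) % RX_QUEUE_LEN;
    return true;
}
```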
Anton Erasmus wrote:
>
> I fully agree with this. Where interrupts are implemented as in the PC
> architecture, one ends up with extreme difficulty in meeting some
> real-time deadlines. Whoever decided to make the keyboard interrupt the
> second highest priority on the PC should be shot. The fact that they
> used a poorly designed interrupt controller does not help either. With a
> properly designed interrupt controller as described above, one would
> have had the option of changing the interrupt priorities to something
> sensible.
>
> Regards
> Anton Erasmus
Even the very first PDP-11 from 1969 had fully vectored interrupts, so it's
not a new idea. Its spirit lives on in processors like the ColdFire and
MSP430, both of which have a RISC core. It's faster to have dedicated
hardware vector an interrupt than to poll registers, and it can aid data
hiding and code modularity, because you don't end up with header files from
several drivers included in the interrupt dispatch module. Of course, it
saves silicon if you push the functionality into software, and I'd suspect
that's the real reason for leaving it out - can't think of any other
justification...

Chris

--
Greenfield Designs Ltd
-----------------------------------------------------------
Embedded Systems & Electronics: Research Design Development
Oxford. England. (44) 1865 750 681
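For contrast, here is a rough C sketch of the software dispatch Chris is
describing on parts without vectored interrupts: a single shared IRQ entry
point that polls a combined pending register and walks a handler table. The
register name, bit layout, and table scheme are invented for illustration; a
fully vectored controller with programmable priorities does this selection
in hardware and delivers the per-source handler directly.

```c
#include <stdint.h>

/* Hypothetical combined interrupt-pending register -- one bit per source. */
#define IRQ_PENDING (*(volatile uint32_t *)0x40000000u)

typedef void (*irq_handler_t)(void);

/* One handler per source, registered by the drivers; index = bit number.
 * The scan order below is also the effective priority, bit 0 first. */
static irq_handler_t irq_table[32];

void irq_register(unsigned source, irq_handler_t fn)
{
    if (source < 32u)
        irq_table[source] = fn;
}

/* Shared IRQ entry: poll the pending bits and dispatch.  Every source pays
 * for this walk; a vectored controller would hand us the handler directly. */
void irq_dispatch(void)
{
    uint32_t pending = IRQ_PENDING;

    for (unsigned bit = 0; pending != 0u && bit < 32u; bit++) {
        uint32_t mask = 1u << bit;
        if ((pending & mask) && irq_table[bit]) {
            irq_table[bit]();       /* handler must clear its own source */
            pending &= ~mask;
        }
    }
}
```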
On Mon, 24 Jul 2006 14:03:15 GMT, Chris Quayle <devnull@zzzz.co.uk>
wrote:

> Anton Erasmus wrote:
>>
>> I fully agree with this. Where interrupts are implemented as in the PC
>> architecture, one ends up with extreme difficulty in meeting some
>> real-time deadlines. Whoever decided to make the keyboard interrupt the
>> second highest priority on the PC should be shot. The fact that they
>> used a poorly designed interrupt controller does not help either. With
>> a properly designed interrupt controller as described above, one would
>> have had the option of changing the interrupt priorities to something
>> sensible.
>>
>> Regards
>> Anton Erasmus
>
> Even the very first PDP-11 from 1969 had fully vectored interrupts, so
> it's not a new idea. Its spirit lives on in processors like the ColdFire
> and MSP430, both of which have a RISC core. It's faster to have
> dedicated hardware vector an interrupt than to poll registers, and it
> can aid data hiding and code modularity, because you don't end up with
> header files from several drivers included in the interrupt dispatch
> module. Of course, it saves silicon if you push the functionality into
> software, and I'd suspect that's the real reason for leaving it out -
> can't think of any other justification...
Inexperience on the part of the designers? Incompetence? I sometimes find
it disheartening that it seems the best technical designs are commercial
failures, and vice versa.

Regards
Anton Erasmus
Anton Erasmus wrote:
>
> Inexperience on the part of the designers? Incompetence? I sometimes
> find it disheartening that it seems the best technical designs are
> commercial failures, and vice versa.
>
> Regards
> Anton Erasmus
Difficult to say; perhaps it wasn't a priority because the target market
didn't need it, or they never got round to doing it right :-). Apparently,
the original designers of the ARM were inspired by 8-bit designs like the
6502, widely used in early home computers. 8-bit micros typically had only
a two-level interrupt structure, though ISTR the Z80 peripheral devices
were fully vectored. At the time, processor throughput and interrupt
response were good enough, software was not very demanding, interface data
rates were low, etc. Now we have streaming video and the rest of the
multimedia circus, which require much more throughput, memory bandwidth,
DMA, etc., but perhaps still don't need smart interrupt handling. From what
I can see, ARM's market is primarily multimedia - phones, PDAs, set-top
boxes, hard disk video recorders, etc. - for which it seems to do a great
job. Different requirements to traditional embedded real time, though. Even
Wind River offer a supported Linux - wonder if they were pushed into this
by the effect of open source on revenue, too much good and useful stuff to
ignore, or what?...

Chris

--
Greenfield Designs Ltd
-----------------------------------------------------------
Embedded Systems & Electronics: Research Design Development
Oxford. England. (44) 1865 750 681
Anton Erasmus wrote:
> I fully agree with this. Where interrupts are implemented as in the PC
> architecture, one ends up with extreme difficulty in meeting some
> real-time deadlines. Whoever decided to make the keyboard interrupt the
> second highest priority on the PC should be shot. The fact that they
> used a poorly designed interrupt controller does not help either. With a
> properly designed interrupt controller as described above, one would
> have had the option of changing the interrupt priorities to something
> sensible.
While the 8259 is pretty basic as far as interrupt controllers go, it's
trivially reprogrammable to put any of the eight interrupts at the top of
the priority stack (e.g. you can request that the priority sequence is
2,3,4,5,6,7,0,1), and it supports equal (rotating) priority mode as well as
a mode (special mask mode) in which the ISRs can specify which interrupts
are enabled with a couple of instructions, which lets you implement any
priority scheme you want. That gets complicated in cascaded mode (e.g. with
the AT scheme). That the keyboard ended up as the second highest priority
interrupt has more to do with the way the 8259 is set up by the BIOS. The
APIC is rather more flexible.
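A small sketch of the 8259 reprogramming described here, under the
assumption that an outb()-style port-write primitive is available in your
environment (inline asm, <sys/io.h> on Linux, a HAL call, ...). The "set
priority" form of OCW2 nominates which IRQ is treated as lowest priority;
the rest follow in rotation, so nominating IRQ1 as lowest yields the
2,3,4,5,6,7,0,1 ordering mentioned above.

```c
#include <stdint.h>

/* Port-output primitive: assumed to exist in your environment.
 * Shown here only as a prototype, not provided by this sketch. */
extern void outb(uint8_t value, uint16_t port);

#define PIC1_CMD 0x20u   /* master 8259 command port (OCW2/OCW3) */

/* OCW2 "set priority" command: R=1, SL=1, EOI=0, low 3 bits = the IRQ that
 * becomes the *lowest* priority.  The remaining levels follow in rotation,
 * so (irq+1) & 7 becomes the highest. */
static void pic_set_lowest_priority(uint8_t irq)
{
    outb((uint8_t)(0xC0u | (irq & 0x07u)), PIC1_CMD);
}

/* Example: declare IRQ1 (the keyboard) lowest, giving the 2,3,4,5,6,7,0,1
 * sequence from the post; declaring IRQ7 lowest restores the default
 * 0..7 ordering with the timer on top. */
void demote_keyboard(void)
{
    pic_set_lowest_priority(1);
}
```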
On 24 Jul 2006 17:40:48 -0700, "robertwessel2@yahoo.com"
<robertwessel2@yahoo.com> wrote:

>
> Anton Erasmus wrote:
>> I fully agree with this. Where interrupts are implemented as in the PC
>> architecture, one ends up with extreme difficulty in meeting some
>> real-time deadlines. Whoever decided to make the keyboard interrupt the
>> second highest priority on the PC should be shot. The fact that they
>> used a poorly designed interrupt controller does not help either. With
>> a properly designed interrupt controller as described above, one would
>> have had the option of changing the interrupt priorities to something
>> sensible.
>
> While the 8259 is pretty basic as far as interrupt controllers go, it's
> trivially reprogrammable to put any of the eight interrupts at the top
> of the priority stack (e.g. you can request that the priority sequence
> is 2,3,4,5,6,7,0,1), and it supports equal (rotating) priority mode as
> well as a mode (special mask mode) in which the ISRs can specify which
> interrupts are enabled with a couple of instructions, which lets you
> implement any priority scheme you want. That gets complicated in
> cascaded mode (e.g. with the AT scheme).
The problem if you set up the interrupts to 2,3,4,5,6,7,0,1 is that the
timer interrupt is then the second lowest. For many types of apps, one
would typically want a timer interrupt to be the highest priority, and then
maybe some I/O interrupts for disk drives, Ethernet, etc.
> That the keyboard ended up as the second highest priority interrupt has
> more to do with the way the 8259 is set up by the BIOS.
The fact remains that the 8259 is not fully configurable; there are still
hard limits, in that one can only rotate the priorities, not choose them
arbitrarily as on the 68000.
> The APIC is rather more flexible.
Haven't looked at this in detail, but I would expect an Advanced
Programmable Interrupt Controller to be a bit more flexible than the 8259.
Is it possible with the APIC to have the timer interrupt the highest, COM1
the second highest, the Ethernet interface third highest, etc.?

Regards
Anton Erasmus
Anton Erasmus wrote:
>> The APIC is rather more flexible.
>
> Haven't looked at this in detail, but I would expect an Advanced
> Programmable Interrupt Controller to be a bit more flexible than the
> 8259. Is it possible with the APIC to have the timer interrupt the
> highest, COM1 the second highest, the Ethernet interface third highest,
> etc.?
Yes, there is considerable flexibility in assigning interrupt sources to
targets (IOW, this interrupt pin to that interrupt vector on that CPU),
although using it requires that you be rather aware of the way interrupts
actually work in a modern system. The legacy IRQ0-15 stuff for the most
part doesn't exist in any real fashion; it only appears because the APIC is
in 8259 simulation mode. For example, each PCI slot has four interrupt pins
(A/B/C/D). Those may be shared in various ways between slots, or not at
all, and eventually the mapping between those and actual IRQs has to be
done by setting up the local and I/O APICs. Message-signaled interrupts add
a further wrinkle, as do multiple CPUs and multiple APICs.

OTOH, if you just want to mess with priorities, special mask mode on an
8259 can do a lot. Of course, hardware I/O interrupt priorities are rarely*
the correct solution - fixing the software is. If interrupts are disabled
in the CPU, you're going to have latency problems no matter what the
hardware does; OTOH, there's no real reason not to just handle an interrupt
and then reenable interrupts in general. Or use special mask mode: just
take the interrupt, disable the one you just got, and reenable all the
others.

*Excepting some hardware (usually broken) with really hard latency
requirements living with (arguably broken) system interrupt handler code
that actually reenables interrupts but won't EOI the controller.
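A sketch of the "take the interrupt, mask only your own source, reenable
everything else" pattern mentioned above, using the 8259's special mask
mode. The outb()/inb() and CPU interrupt enable/disable helpers are assumed
to exist in your environment, and IRQ3 is just an example line; this is an
illustration of the idea, not drop-in driver code.

```c
#include <stdint.h>

/* Assumed platform primitives (inline asm, <sys/io.h>, a HAL, ...). */
extern void    outb(uint8_t value, uint16_t port);
extern uint8_t inb(uint16_t port);
extern void    cpu_enable_irq(void);    /* e.g. sti */
extern void    cpu_disable_irq(void);   /* e.g. cli */

#define PIC1_CMD   0x20u  /* master 8259: OCW2/OCW3 */
#define PIC1_DATA  0x21u  /* master 8259: OCW1, the interrupt mask register */

#define OCW3_SET_SMM 0x68u               /* enter special mask mode */
#define OCW3_CLR_SMM 0x48u               /* leave special mask mode */
#define OCW2_SPECIFIC_EOI(irq) (uint8_t)(0x60u | ((irq) & 0x07u))

/* Long-running handler for a master-PIC line (IRQ3 here as an example):
 * mask only our own source, open the door for everyone else, do the slow
 * work with interrupts enabled, then restore and signal EOI. */
void slow_irq3_handler(void)
{
    const uint8_t bit = 1u << 3;

    outb((uint8_t)(inb(PIC1_DATA) | bit), PIC1_DATA);  /* mask IRQ3 only  */
    outb(OCW3_SET_SMM, PIC1_CMD);                      /* special mask on */
    cpu_enable_irq();                                  /* others may nest */

    /* ... lengthy processing goes here ... */

    cpu_disable_irq();
    outb(OCW3_CLR_SMM, PIC1_CMD);                      /* normal priority */
    outb((uint8_t)(inb(PIC1_DATA) & (uint8_t)~bit), PIC1_DATA); /* unmask */
    outb(OCW2_SPECIFIC_EOI(3), PIC1_CMD);              /* end of interrupt */
}
```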
