Reply by Elder Costa March 18, 2005
Jim Granville wrote:
>> Would you mind defining "too slow" in this context, in particular
>> interrupt latency? I am considering the Philips LPC213x family for some
>> applications and I may have missed something.
>
> It's mostly an issue when you work very close to the iron...
I guess that's not my case. I am going to use the Philips LPC213x as a replacement for PICs, AVRs and the 80C188EC, and a latency on the order of microseconds is more than adequate for all of my applications. We are going to purchase IAR tools; I hope their MakeApp generates decent code.

Regards,
Elder.
Reply by Joerg March 17, 2005
Hello Jim,

> It's mostly an issue when you work very close to the iron...
I like that phrase...
> ARMs tend to have better peripheral buffering, which helps tolerate a
> more elastic response time, so the areas to watch are where the
> better peripherals don't help, like SW DACs or SW current protection,
> or SW modulation.
Or when there is something in the hardware around the uC that absolutely has to have an interrupt handled within x clock cycles. There is stuff that can blow up if this doesn't happen. I remember a big wall-to-wall crack in a concrete floor that was the result of a phase synchronizer not being synchronized in time. No idea who dunnit, but it was expensive.

Before a dead-stick switch from one uC to another, I'd carefully look at all the hardware it supports, in all designs that are current.
> If you have special areas like that, their code is normally small,
> so I'd suggest a tiny 80C51 as a real-time co-processor/peripheral.
That is a great idea. But then you'd have to do what Lewin's company wants to avoid: maintain the 80C51 tools and local expertise. In that case they might as well leave such designs on the 51 architecture altogether.

Regards, Joerg
http://www.analogconsultants.com
Reply by Jim Granville March 17, 2005
Elder Costa wrote:

> Jim Granville wrote:
>
>> The newer single-cycle C51s (SiLabs, Atmel et al) have very nimble
>> interrupt handling; it would be a shame to do all the porting, only to
>> find the ARM is too slow...
>
> Would you mind defining "too slow" in this context, in particular
> interrupt latency? I am considering the Philips LPC213x family for some
> applications and I may have missed something.
It's mostly an issue when you work very close to the iron... The typical modern 80c51 allows 4 levels of interrupt priority, and has direct data and boolean opcodes - which means you can get deterministic, direct (if limited) action in interrupts, without any PUSH/POP. The jitter on the interrupt time is also relatively low.

ARMs tend to have better peripheral buffering, which helps tolerate a more elastic response time, so the areas to watch are where the better peripherals don't help, like SW DACs or SW current protection, or SW modulation.

Some 32-bit uP/uC have separate 'co-processors' for handling the timers and critical IO, so their main CPU response time (and, importantly, how that changes over time with SW revisions) is insulated from the real IO. IIRC the TI ARMs have this?

If you have special areas like that, their code is normally small, so I'd suggest a tiny 80C51 as a real-time co-processor/peripheral.

-jg
Reply by Ulf Samuelsson March 17, 2005
> The ARM architecture itself defines a single IRQ vector. Most ARM7 MCUs
> provide some sort of fast vectored interrupt processing unit so they
> can handle multiple sources, but it still means that it takes
> at least a couple of hops before it gets to your handler. Then of course
> you have to save the volatile registers, which of course is more
> expensive on the ARM (32-bit registers times X number to
> save/restore)...
An interrupt on the ARM7 will jump to address 0x18. The IRQ vector is immediately followed by the FIQ vector at 0x1c. Since you enter 32-bit ARM mode immediately, you have precisely one instruction to handle the interrupt, so it obviously has to be a jump/branch.

In the AT91SAM7S series you can do an indirect jump to the highest-priority interrupt, pc := (pc + displacement); the displacement allows you to select the interrupt vector from the AIC (Advanced Interrupt Controller). So you need two jumps to enter the interrupt routine. At 48 MHz, that should be pretty quick.

If you need even faster interrupts, you can connect a single interrupt in the AIC to the FIQ (Fast Interrupt). This gives you 5 banked registers for free, which do not need to be pushed or popped. A load/store-multiple instruction (LDM/STM on ARM) can also cut some of the push/pop cost. Your trouble starts if you want to support nested interrupts; that will cost some code.

--
Best Regards,
Ulf Samuelsson
ulf@a-t-m-e-l.com
This message is intended to be my own personal view and it may or may not be shared by my employer Atmel Nordic AB
Reply by Richard M. March 17, 2005
Elder Costa wrote:

> Jim Granville wrote:
>
>> The newer single-cycle C51s (SiLabs, Atmel et al) have very nimble
>> interrupt handling; it would be a shame to do all the porting, only
>> to find the ARM is too slow...
>
> Would you mind defining "too slow" in this context, in particular
> interrupt latency? I am considering the Philips LPC213x family for some
> applications and I may have missed something.
Traditional 8-bit MCUs like the 8051/AVR/6811 typically need minimal processing from the HW receiving an interrupt to your handler. Some micros provide vectored interrupts for the various IRQ sources, so it is very fast even if you have multiple types of interrupt sources.

The ARM architecture itself defines a single IRQ vector. Most ARM7 MCUs provide some sort of fast vectored interrupt processing unit so they can handle multiple sources, but it still means that it takes at least a couple of hops before it gets to your handler. Then of course you have to save the volatile registers, which is more expensive on the ARM (32-bit registers times X number to save/restore)...

--
// richard
http://www.imagecraft.com
Reply by Elder Costa March 17, 2005
Jim Granville wrote:
> The newer single-cycle C51s (SiLabs, Atmel et al) have very nimble
> interrupt handling; it would be a shame to do all the porting, only to
> find the ARM is too slow...
Would you mind defining "too slow" in this context, in particular interrupt latency? I am considering the Philips LPC213x family for some applications and I may have missed something.

TIA.
Elder.
Reply by March 16, 2005
> First, move the ASM to C, but stay on the C51.
> Then, you can compare the migration more easily, and give the
> bean counters the chip price delta.
Trouble with this is that the C migration would have to be done as a skunkworks project. It's very hard to justify the engineering cost to do it as an intermediate step: at a rough guess it would have a budgetary cost of three quarters of a million dollars just to develop the code, and perhaps a quarter of a million again to test and qualify it, assuming that engineering delivered a bug-free product the first time around.

And then it probably wouldn't fit in the '51 variants we have qualified, and then we have tens of thousands of dollars and six months' delay in regulatory paperwork to get the new designs approved... not something we can just do experimentally :)
> The newer single-cycle C51s (SiLabs, Atmel et al) have very nimble
> interrupt handling; it would be a shame to do all the porting, only to
> find the ARM is too slow...
The timing requirements are mild, though. I'm aware of latency issues; they have been discussed at the vendor pow-wows. FIQ is good enough for everything we do.
> Also, a wholesale move to ARM will be less than price-optimal -
> most designers I know are looking to support C51 _and_ ARM, and
You're right /now/, but the quotes from ARM vendors, in the volumes we use, are getting more attractive every week. Plus, once the success has been demo'ed on C51, we can eliminate about half a dozen radically different micros (some quasi-obsolete, some expensive, most very single-source) by porting those other projects.
Reply by Richard M. March 16, 2005
larwe@larwe.com wrote:

> A move from 8051 to ARM7 (with Thumb) is being contemplated. This move
> would also incorporate a move from asm to C. Has anyone published
> metrics for code space increases in such a migration? I'm imagining a
> 40% size increase (wider instructions vs. optimize-friendly
> architecture) but this is basically a guess.
If you ask this in two years, I bet you will get real-life figures (well, if people would actually share such info). Your 40% is as good as any. OTOH, the 8051 addresses up to 64K without bank switching, right? And the smallest ARM7 so far is the Atmel SAM7S32 with 32K, so any reasonable ARM7 part should handle any 8051 app with room to spare :-)

--
// richard
http://www.imagecraft.com
Reply by Jim Granville March 16, 2005
larwe@larwe.com wrote:
> There's not much XDATA moving around. I can't provide application
> details (sorry, it's a work thing) but the info you and An Schwob
> provided is very useful.
>
> By the way, the reason for the migration is standardization of tools
> and skills. The specific projects that are being migrated don't
> actually have any need to run on ARM, but the idea is to cut down the
> number of different micros, toolchains, emulators, etc. that are being
> used, and to make engineers more interchangeable amongst different
> projects, plus to pave the way to start building new, more
> complex/higher performance apps based on the old code.
>
> The end goal is to have everything written in C and running on one of a
> few different ARM variants. Currently it's a big mixture.
If that's the motivation, I'd do it in two steps. First, move the ASM to C, but stay on the C51. Then you can compare the migration more easily, and give the bean counters the chip price delta. You can also verify the ARM can actually replace the uC. The newer single-cycle C51s (SiLabs, Atmel et al) have very nimble interrupt handling; it would be a shame to do all the porting, only to find the ARM is too slow...

Also, a wholesale move to ARM will be less than price-optimal - most designers I know are looking to support C51 _and_ ARM, and the chip vendors themselves have drawn the line at ~32-64K code and >= 48 pins. i.e., it makes little sense to replace an 85-cent C51 variant with a much more expensive ARM, just to do the same task. C51s are also moving down in price and package size (~TSSOP14), so they can do many serial I/O and wdog-type tasks.

-jg
Reply by Tauno Voipio March 16, 2005
larwe@larwe.com wrote:
> A move from 8051 to ARM7 (with Thumb) is being contemplated. This move
> would also incorporate a move from asm to C. Has anyone published
> metrics for code space increases in such a migration? I'm imagining a
> 40% size increase (wider instructions vs. optimize-friendly
> architecture) but this is basically a guess.
>
> I'd prefer a documented case study rather than simply anecdotes, but
> I'll take anecdotes if that's all I can get :)
>
> Performance is not a concern for this application; code size is.
I don't know whether this is OT, but moving a project from the 80C186EB (Borland C) to ARM/Thumb (GCC) kept the code size roughly equal (40064 vs. 40032 bytes). The architectures of ARM and 8051 are so vastly different that the result depends very much on the code (e.g. 16- or 32-bit arithmetic, bit operations, etc.).

--
Tauno Voipio
tauno voipio (at) iki fi