
AVR or 8051

Started by Yukuan January 19, 2005
On 2005-01-20, An Schwob in USA <schwobus@aol.com> wrote:

>> Your scenario is a bit worse than most of the projects I've
>> worked on. But, I'd have to agree that unless the second part
>> is pin-for-pin, drop-in, same-part-number-on-the-BOM compatible,
>> changing to a different architecture just isn't much more work
>> than changing to a "similar" part with the same ISA but
>> different pinouts and peripheral interfaces.
>>
>> Once you've got to re-layout the board and re-write the
>> peripheral handling code, you can almost as easily switch to a
>> different architecture.
>
> what about buying and / or learning the new tools if switching
> architecture?
That usually takes a few days -- though thanks to GNU tools you can switch from AVR to ARM to H8/300 to 6812 to 68K to IA32 to PPC to SPARC and never have to learn a new toolchain. :)
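As a minimal sketch of what that looks like in practice (my own illustration, not from any particular project): the same C module builds unchanged for several targets just by swapping the GCC cross-compiler prefix. The prefixes below are the usual target-triplet names; adjust them to however your cross tools were actually configured.

    /*
     * toggle.c - hypothetical portable module.  The build step is the
     * same apart from the compiler prefix, e.g. (assumed invocations):
     *
     *   avr-gcc       -Os -c toggle.c      (AVR)
     *   arm-elf-gcc   -Os -c toggle.c      (ARM7)
     *   h8300-elf-gcc -Os -c toggle.c      (H8/300)
     *   m68k-elf-gcc  -Os -c toggle.c      (68K)
     */

    /* Toggle the given bit in a memory-mapped register.  Which address
     * you pass in is the only target-specific part; the C code and the
     * compiler options stay the same across architectures. */
    void toggle_bit(volatile unsigned char *reg, unsigned char mask)
    {
        *reg ^= mask;
    }

Only the startup code and linker command file really change from target to target, and those follow the same GNU syntax everywhere.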
> If you are working as a one man show that penalty might be
> minor, however for a team of designers the pain and loss in
> productivity is significant.
I guess I never found it to be that difficult to figure out the options for a new compiler or write a command file for a new linker. The ones I've worked with all worked fairly similarly.
> As an attempt to answer the original question why AVR, why
> 8051, if you are familiar with one and not the other, stick
> with the one as long as you can get the micro you are looking
> for.
That depends on volume. If the unfamiliar one will save you $1/unit on 10 million units, you better figure out how to get familiar.
> AVR look at the LPC900 family from Philips. If you are looking
> at MEGA AVR, stop that! Do not start with an 8-bit engine, go
> to the "8051 of the 32-bit world" and start developing with an
> ARM microcontroller. You get a 32-bit micro for less than $5
> that outperforms any AVR in features and performance while
> costing less.
There are some very cost effective ARM parts out there. Hitachi (now
Renesas) has some very good, cheap H8/300 uControllers as well.

--
Grant Edwards <grante at visi.com>
> Hi Grant,
>
> what about buying and / or learning the new tools if switching
> architecture? If you are working as a one man show that penalty might
> be minor, however for a team of designers the pain and loss in
> productivity is significant.
> As an attempt to answer the original question why AVR, why 8051, if you
> are familiar with one and not the other, stick with the one as long as
> you can get the micro you are looking for.
> If you do a new design you might find more 51s with suitable
> peripherals than AVRs, simply because there are more out there. Asking
> multiple vendors for competitive bids has always been helpful to get a
> good price ;-) AVR: one vendor, one more or less competitive bid.
> For the widest range of high quality peripherals based on the 51 core
> have a look at SiLabs; looking for the best value in 51-based small
> chips, comparable to Tiny AVR, look at the LPC900 family from Philips.
> If you are looking at MEGA AVR, stop that! Do not start with an 8-bit
> engine, go to the "8051 of the 32-bit world" and start developing with
> an ARM microcontroller. You get a 32-bit micro for less than $5 that
> outperforms any AVR in features and performance while costing less.
>
> An Schwob in USA
The new ARM7s are opening up new low-cost capability, but I think you
will find that the "lower" cost is not necessarily true. The mega series
goes down to the ATmega48 and you will find no ARM which will beat that
price.

When you compare chips like the ATmega256 and the equivalent
AT91SAM7S256, then the 0.18 µm process of the AT91SAM7S256 will
definitely be an advantage for die size, but the wafer cost and yield
are also of interest. There are other parameters which will influence
the decision. Do you need 5 V Vcc? Do you need low power? The AVR goes
down to 1.8 V, but the AT91 needs 3.3 V I/O. I think you will find that
the cheap ARMs are implemented in 0.18 µm processes which have much
higher leakage current than the 0.35 µm process used for the 5 V capable
AVRs. The AT91SAM7S256 has fewer peripherals than the ATmega256.

For some customers the AT91SAM7S256 will be better and for others the
ATmega256 will be better. If you need to work with multiple
applications, then the memory and peripheral range you need will give
hints for the best choice. If all the applications are 32 KB and higher,
maybe up to megabytes, then of course the ARM is the right way to go; if
the applications are mostly in the 8-32 kB range with the odd one
higher, then I am certain that the AVR is a better choice.

As for selecting C51s, many customers would like to reduce their vendors
and not mess with 10-20 different C51 manufacturers. As for competition,
it stops once the part is designed in unless there is a true second
source. Raising prices will work only for a short while, and for
low-volume accounts. The mobile phone vendors will typically redesign
once per year, and market price is the name of the game.

I am happy to be allowed to play with both!

--
Best Regards
Ulf at atmel dot com
These comments are intended to be my own opinion and they may, or may
not, be shared by my employer, Atmel Sweden.
> larwe@larwe.com wrote
>
>> Nicholas O. Lindan writ:
>>
>>> Second sourcing is still important in industrial products.
>>> A uP without a viable producing second source won't (shouldn't)
>>> get designed in.
>>
>> There are VERY few uCs with second sources.
My guess is those might be the ones that will still be around in 25 years.
>> Even for a popular core like 8051, every vendor adds their own
>> twists and niggles (at least) or massive add-on functionality
>> slices.
Harvard MBA's:

  o Product differentiation;
  o A belief America can't compete in commodity electronics.

Result:

  o So many incompatible products that each vendor only gets a small
    slice of the pie - right back where they would have been if they
    made commodity 12th-sourced uPs;
  o End Product Manufacturers faced with proprietary products with a
    short market life;
  o EPM's unable to take advantage of advanced product features if they
    want long product lifetimes.
>> At my [Fortune 50] employer we regard all uCs as single-source
>> products. We do not design in a uC unless we have a guaranteed
>> 5-year buy life on the part.
I have seen that fall flat on its face. If the manufacturer goes under
the contract may be hard to enforce.

I have a client who maintained a five year inventory in lieu of a 2nd
source. The IC maker promptly went belly up after the client took
delivery of the 5 years' worth. The client's product took off like a
rocket - the 5 year stockpile lasted less than a year. What a way to
make a flop.

OffT: Speaking of worthless contracts: I once had an engineer try to
sell me a patent:

  The firm where he once worked still owned the patent;
  But that was OK, he had an agreement with the President of the firm
  to privately sell the patent to non-competitors;
  No, the agreement wasn't in writing, it was a handshake contract;
  And it was really sad that said company President died last year;
  He then confessed that the new owners were really litigious, and of
  course didn't know about the agreement he had with 'The Old Man', it
  was "sort of secret".

A secret verbal contract with a dead man.

* * *

OnT: "Chris Hills" <chris@phaedsys.org> writes:
> However as there are about 600 odd 8051's out there, if the one you
> use disappears there will be a very similar one from someone else
> that will do the job.
This has happened to me: at one time vanilla P8732's weren't available
(or so the client's purchasing department said). A Philips with a timer
array worked just fine. No change in the code required. And the generics
are available again.

--
Nicholas O. Lindan, Cleveland, Ohio
Consulting Engineer: Electronics; Informatics; Photonics.
Remove spaces etc. to reply: n o lindan at net com dot com
psst.. want to buy an f-stop timer? nolindan.com/da/fstop/
Grant Edwards wrote:
>
... snip ...
> The last time I worked on a project with an honest
> second-sourced CPU it was 15 years ago using an 8086 in a DIP
> package. IIRC, the signal thresholds on one of the interrupt
> pins weren't _quite_ the same on the two parts, and some
> component value changes actually did have to be made when
> purchasing decided to buy the parts from the second source. :(
Back then nobody in their right mind would design in a single-sourced
component. The manufacturers were out looking for second sources to
license long before their own product hit the market. What happened?

--
"If you want to post a followup via groups.google.com, don't use the
broken "Reply" link at the bottom of the article. Click on "show
options" at the top of the article, then click on the "Reply" at the
bottom of the article headers." - Keith Thompson
On Thu, 20 Jan 2005 13:41:45 +1300, Jim Granville <no.spam@designtools.co.nz>
wrote:

> RISC made sense when memory was off chip, and large, but in an 8-bit
> uC you can have a working product in 64-256 bytes RAM, so it only
> makes sense to have opcodes that can directly access that? - see the
> Z8 for an example of a register-register uC design that understands
> that.
I cannot follow this, Jim. Most times, I think I find myself in near or
complete agreement with you. But not here.

When the MIPS R2000 was first released into the market (and I consider
it one of the first, if not precisely *the* first, commercially
available true-RISC CPUs), one of the truly mind-boggling problems was
keeping the darned thing fed from memory. I used 8kx8 RAM chips for the
caches that were single-source at the time (Performance Semi) and burned
one watt apiece! I haven't even mentioned the difficulties with the
connectors between the CPU board and the memory board! The required
bandwidth to memory was one of the PROBLEMS, not one of the benefits, as
you suggest above.

The reason was simple. To get a task done, more instructions were
required. They were very fast, but you needed some 40% more memory to
hold them. And that put pressure on memory channels. What I really
wished to have was such a CPU with the RAM built into it, so that
external chip-to-chip type drivers weren't needed and the speeds could
be more easily maintained.

Anyway, the "RISC made sense when memory was off chip" just slaps me in
the face, big time. I know different. From personal experience. It was
CISC that had the advantage in terms of memory, because the code was
denser. On the other hand, RISC was fantastic in the sense of the speed
you could get by converting all that silicon real-estate (which at the
time was at a premium, but is not nearly so these days) used for
microcode and microcode sequencing logic (which on the 68020, for
example, occupied some 70% of the total die space) and turning it into
more (added register space, multipliers, ALUs, etc.) and/or faster (less
sequential and more combinatorial) functional units. At the time, the
trade-off made tremendous sense -- especially if you weren't Motorola or
Intel and didn't have access to the very top-of-the-line fabs and had to
live in the cracks, so to speak, with fewer transistor equivalents and
still outperform the competition who had access to 5-10X more in their
expensive fabs.

Jon
Jonathan Kirwan wrote:

> On Thu, 20 Jan 2005 13:41:45 +1300, Jim Granville
> <no.spam@designtools.co.nz> wrote:
>
>> RISC made sense when memory was off chip, and large, but in an 8-bit
>> uC you can have a working product in 64-256 bytes RAM, so it only
>> makes sense to have opcodes that can directly access that? - see the
>> Z8 for an example of a register-register uC design that understands
>> that.
>
> <snip Jon's points on the MIPS R2000, memory bandwidth, and RISC vs.
> CISC code density>
It was brief, so I will elaborate. First, I meant RAM rather than CODE,
but that may be unclear....

With a register-register core, with no direct-memory opcodes, all RAM
access has a relatively high cost - but the reach of that RAM tends to
be larger. i.e. just loading a 16-bit pointer costs 4 bytes in a 16-bit
opcode core (but you can reach 64K bytes), then you need to
get/operate/putback that variable => high code cost on ALL RAM
variables. This is the idea of the opcode knee I have mentioned before.

In the 80C51, you can access 128/256 bytes with the variable-length
opcodes. [Some newer 80C51's have RAM frame pointers, to extend
direct-memory opcodes across all on-chip RAM, but not many.] Now, if you
want random access into 8K/32K of data, that's not much benefit, but in
the embedded microcontroller sector, on-chip RAM is commonly well under
1K. e.g. a DJNZ opcode in the 80C51 is 2 bytes on a reg, and 3 bytes on
any of 128 RAM locations - makes for very efficient loops. Atomic bit
access is a natural on an 80C51.

You also confirm that RISC was a microPROCESSOR solution, not a
MicroController solution. On-chip DATA memory was simply not on their
radar, but it was very much on the radar of the 80C51 and Z8 designers,
who were building a single-chip microcontroller. The Z8 is a good
example of register-register 'done right' for a uC, and the Intel 8096
was similar.

For larger memory systems, the ARM will become the natural next 8051,
followed closely by the Cortex respin by ARM, which takes a RISC that
was designed as a microprocessor more into the microcontroller space,
and improves memory usage, especially that of RAM - i.e. for a 256K-code
and 64K-RAM system, one would not choose an 80C51...

Once you bring RAM on chip (and onto the designer's radar), then a
register frame pointer makes a lot of sense - as seen in the Z8, the
166, and IIRC in the SPARC, where they have a nice scheme for partial
register overlays, so you use some registers to pass params and some for
local variables.

-jg
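To make the loop-cost point above concrete, here is a minimal C sketch
(an illustration only - the variable name is made up, and the
per-instruction notes in the comments assume a typical 80C51 compiler
that places the counter in directly addressable on-chip data RAM and
emits DJNZ, versus a load/store register-register core):

    /* Hypothetical tick counter kept in on-chip RAM. */
    static volatile unsigned char ticks;

    void wait_ticks(void)
    {
        ticks = 100;
        while (--ticks)   /* 80C51: typically one 3-byte DJNZ dir,rel
                             that decrements the RAM byte and branches
                             in a single opcode. */
            ;             /* Load/store register-register core: load the
                             byte from RAM, decrement, store it back,
                             then branch - several opcodes on every
                             pass, plus the bytes spent forming the
                             address if the variable can't live in a
                             register. */
    }

Multiply that difference across every counter, flag and small buffer in
a sub-1K-RAM application and the code-density knee described above shows
up quickly.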
larwe@larwe.com wrote:

> Chris Hills wrote:
>
>>> There are VERY few uCs with second sources. Even for a popular core
>>
>> However as there are about 600 odd 8051's out there if the one you
>> use disappears there will be a very similar one from someone else
>> that will
>
> In my field, as in many others I think, the porting effort (SW/HW
> engineering time) is trivial compared to the length and expense of the
> testing and qualification process that follows it. If we change a part
> number and firmware, we need to go through a minimum of three months QA
> functional testing (this usually takes > 4 months due to resource
> availability) and in most cases also FCC and UL re-cert. UL leadtimes
> are being quoted as 16 weeks now. All in all, it takes at least six to
> seven months to pull this "trivial" change through the pipeline, not
> counting production ramp-up and component leadtimes (some of our mask
> parts have 15-18 week leadtimes and that clock often cannot start to
> tick until regulatory approval is finished).
<snip more good version control/approval points>

In this formal environment, how do you handle die revisions by the
supplier? Strictly, that should see a re-cycle, but I am not sure that
actually gets done :)

Certainly a migration like AT90S2313 [EOL] to ATtiny2313 should see the
full re-approval cycle, correct?

-jg
"CBFalconer" <cbfalconer@yahoo.com> wrote

> [Second sourcing] What happened?
Greed and complacency.

--
Nicholas O. Lindan, Cleveland, Ohio
Consulting Engineer: Electronics; Informatics; Photonics.
Remove spaces etc. to reply: n o lindan at net com dot com
psst.. want to buy an f-stop timer? nolindan.com/da/fstop/
On 2005-01-20, Nicholas O. Lindan <see@sig.com> wrote:

>> [Second sourcing] What happened?
>
> Greed and complacency.
Lack of demand. From a silicon vendor's point of view, setting up second
sourcing takes time and money and cuts into profits. They're only going
to do it if enough customers demand it to make it worthwhile.

Given a choice between older, slower, more-expensive second-sourced
parts, and newer, faster, cheaper single-sourced parts, people picked
the latter in droves.

--
Grant Edwards <grante at visi.com>
Hi Jim,

>> testing and qualification process that follows it. If we change a
>> part number and firmware, we need to go through a minimum of three
>> months QA functional testing (this usually takes > 4 months due to
>> resource availability) and in most cases also FCC and UL re-cert.
>> UL leadtimes
>
> In this formal environment, how do you handle die revisions by the
> supplier? Strictly, that should see a re-cycle, but I am not sure
> that actually gets done :)
This is an interesting question indeed. I can tell you fersure that to
some extent we rely on purchasing, which relies in turn on the
manufacturers to disclose when they are changing things. If the
manufacturer doesn't tell us there's been a die-shrink then we won't
know until the ESD damage victims start to roll into field support. We
DO contractually require them to give us notice, of course, but in
theory they might not.
> Certainly a migration like AT90S2313 [EOL] to ATtiny2313 should see
> the full re-approval cycle, correct ?
A part _number_ change automatically triggers full recertification. The testing may well be abbreviated due to similarity, but that never seems to translate into a shorter time period somehow.