EmbeddedRelated.com
Forums

Any ARMs with hardware divide?

Started by Michael Noone May 1, 2005
Jim Granville wrote:
> paulg@at-cantab-dot.net wrote:
>> In comp.sys.arm CBFalconer <cbfalconer@yahoo.com> wrote:
>>
>>> Another example of fouling binary compatibility is the Rabbit,
>>> which could have easily preserved z80 binary compatibility.
>>> This immediately cost them access to a great wealth of
>>> pre-existing software.
>>
>> It's not the same situation. A processor could have been made
>> that preserved absolute binary backwards compatability and had
>> the new features. The problem being that it would take so many
>> gates to implement this that the resultant core would be far too
>> big and power hungry to sell into the deeply embedded market.
>
> Without specific hard numbers, this is hard to verify.
>
> A common problem with core-focused design, is it looses the
> bigger picture, and the fact that the +5%..+15% ( or whatever ) of
> gates, is a much lower % of total die, when you add RAM, FLASH,
> Peripherals (which may include a DSP ) - AND, you _can_ find that
> those extra gates give SMALLER CODE space, so the TOTAL die size
> can be smaller.
>
> The difference is often swallowed in a single die shrink, anyway.
>
> There are also other ways, in a system design, to preserve Binary
> Compat - eg some CPU vendors use SW traps on the deprecated
> opcodes, that call emulation routines in ROM (already there for
> Boot load and ISD), and so the CPU does not choke on the phase-out
> opcodes. The Crusoe processor is an extreme example of SW Opcode
> emulation.
There is a nebulous complexity border, below which it is not worthwhile to implement any such (or sufficiently complex) emulations. If one has a machine with 100 storage bytes and 1000 bytes of opcode storage, it is hardly worthwhile emulating another opcode at all. In addition, some things are just not emulatable - for example the XTHL instruction in the 8080 and Z80, which exchanges a register with the top of stack. This can be used to manipulate the stack with absolutely no loss of register information, and I found it to be very handy. It simply cannot be emulated, because any such emulation requires a temporary storage location, whose content is lost. This meant I could not safely port my 8080 assembly code to the 8086 family and use it in interrupt service, etc.

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
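[Editor's note: a minimal C sketch of the XTHL semantics discussed above, as a tiny emulator would implement them. The emulator state (`mem`, `hl`, `sp`) and the function are illustrative, not from any real emulator; the scratch variable `t` is exactly the temporary storage that a trap-based emulation on the real CPU would have to steal from somewhere.]

```c
#include <stdint.h>

/* Hypothetical 8080-style emulator state */
static uint8_t  mem[65536];
static uint16_t hl, sp;

/* XTHL: exchange HL with the 16-bit word at the top of stack
 * (little-endian: L <-> (SP), H <-> (SP+1)). */
static void xthl(void)
{
    /* The emulator can use a C local as scratch; on the target CPU
     * itself there is no spare register or interrupt-safe memory
     * cell for this temporary - which is the point made above. */
    uint16_t t = (uint16_t)(mem[sp] | (mem[sp + 1u] << 8));
    mem[sp]      = (uint8_t)(hl & 0xFF);
    mem[sp + 1u] = (uint8_t)(hl >> 8);
    hl = t;
}
```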
"Jim Granville" <no.spam@designtools.co.nz> wrote in message
news:427d48f1$1@clear.net.nz...
> paulg@at-cantab-dot.net wrote:
>> In comp.sys.arm CBFalconer <cbfalconer@yahoo.com> wrote:
>> It's not the same situation. A processor could have been made that preserved
>> absolute binary backwards compatability and had the new features. The
>> problem being that it would take so many gates to implement this that the
>> resultant core would be far too big and power hungry to sell into the deeply
>> embedded market.
>
> Without specific hard numbers, this is hard to verify.
>
> A common problem with core-focused design, is it looses the bigger
> picture, and the fact that the +5%..+15% ( or whatever ) of gates, is
> a much lower % of total die, when you add RAM, FLASH, Peripherals (which
> may include a DSP ) - AND, you _can_ find that those extra gates give
> SMALLER CODE space, so the TOTAL die size can be smaller.
Yes, the core is an ever-shrinking part of the total die size; however, the point is that a few thousand gates are equivalent to a few KBytes of flash/SRAM (which is much denser). On the M3 the gates have indeed been spent to improve code size and performance. Adding ARM doesn't make any sense: performance is worse, code size is worse, the programming model is more complex, etc. Just removing some of the exception registers saves thousands of transistors - registers, especially if multiported, are quite expensive.
> The difference is often swallowed in a single die shrink, anyway.
Yes, but MCUs are typically made on older processes. Even 0.18u would be quite modern.
> There are also other ways, in a system design, to preserve Binary
> Compat - eg some CPU vendors use SW traps on the deprecated opcodes,
> that call emulation routines in ROM (already there for Boot load and
> ISD), and so the CPU does not choke on the phase-out opcodes.
> The Crusoe processor is an extreme example of SW Opcode emulation.
No CPU I've ever heard of "chokes" - they will always trap so that you can emulate if needed. The Linux kernel traps unaligned memory accesses and emulates the instruction to get the desired behaviour (as if you were running it on a v6 core, which supports unaligned access). The VFP uses an emulator to get full IEEE support (the hardware traps on difficult operations). This is standard stuff on just about all modern architectures.

Wilco
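[Editor's note: a minimal C sketch of what a trap-and-emulate handler for an unaligned 32-bit load does, along the lines described above: reconstruct the little-endian word from byte accesses, which never fault on alignment. The function name is illustrative; the real Linux fixup lives in the kernel's alignment trap handler.]

```c
#include <stdint.h>

/* Emulate a little-endian 32-bit load from a possibly unaligned
 * address by assembling it from individual byte loads. */
static uint32_t emulate_unaligned_ldr(const uint8_t *addr)
{
    return  (uint32_t)addr[0]
         | ((uint32_t)addr[1] << 8)
         | ((uint32_t)addr[2] << 16)
         | ((uint32_t)addr[3] << 24);
}
```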
Wilco Dijkstra wrote:
> "Jim Granville" <no.spam@designtools.co.nz> wrote in message
> news:427d48f1$1@clear.net.nz...
>
>> paulg@at-cantab-dot.net wrote:
>>
>>> In comp.sys.arm CBFalconer <cbfalconer@yahoo.com> wrote:
>>
>>> It's not the same situation. A processor could have been made that preserved
>>> absolute binary backwards compatability and had the new features. The
>>> problem being that it would take so many gates to implement this that the
>>> resultant core would be far too big and power hungry to sell into the deeply
>>> embedded market.
>>
>> Without specific hard numbers, this is hard to verify.
>>
>> A common problem with core-focused design, is it looses the bigger
>> picture, and the fact that the +5%..+15% ( or whatever ) of gates, is
>> a much lower % of total die, when you add RAM, FLASH, Peripherals (which
>> may include a DSP ) - AND, you _can_ find that those extra gates give
>> SMALLER CODE space, so the TOTAL die size can be smaller.
>
> Yes, the core is an ever shrinking part of the total die size, however the
> point is that a few thousand gates are equivalent to a few KBytes of
> flash/SRAM (which is much denser). On the M3 the gates have indeed
> been spent to improve codesize and performance. Adding ARM doesn't
> make any sense: performance is worse, codesize is worse, the
> programming model is more complex etc. Just removing some of the
> exception registers saves thousands of transistors - registers, especially
> if multiported, are quite expensive.
It will be interesting to see what the chip vendors themselves consider makes more sense, as devices eventually appear for the merchant market using Cortex. E.g. if I were an Atmel/Philips/ST etc, imagine these pitches:

a) With this model, you get 10K (or whatever) more FLASH, but you lose binary compatibility, you will need new tools and full code requalification, and you will need to carefully separate your different ARM flows. But don't worry, everyone has complete source code control, and the new tools will be fully bug-aligned with your existing ones.... [Yeah, right]

b) With this model, using a 'better Cortex' core [A, R, new spin?], you get slightly less FLASH, but you can use existing tools, existing qualified 'non bios' library code is fine, and you can migrate key performance areas, and new designs, to better tools as you see fit. You can also very quickly test, evaluate and compare, using existing tools and code. If you want to take the performance gain purely from our new XYX process, that's fine by us too. It is rare to use 100% of the peripherals on these uC, so designers already understand the benefits of standardised/higher volume devices.

Small test: which pitch is AMD (and now Intel) using very successfully with their move from 32 bit to 64 bit cores?

What new spin could ARM apply? Well, they can re-define the -M3/deeply embedded as meaning for very high volume ROM ASICs (etc), excluding FLASH general purpose microcontrollers, and create a new -F3, as a Cortex optimised for embedded Flash uC and wide, portable software applications. Then the IC vendors, and their customers, can decide for themselves which feature trade-off is more important.

-jg
CBFalconer wrote:
<snip>
> There is a nebulous complexity border, below which it is not worth
> while to implement any such (or sufficiently complex) emulations.
> If one has a machine with 100 storage bytes and 1000 opcode
> storage, it is hardly worth while emulating another opcode at all.
> In addition, some things are just not emulatable - for example the
> XTHL instruction in the 8080 and Z80, which exchanges a register
> with the top of stack. This can be used to manipulate the stack
> with absolutely no loss of register information, and I found to be
> very handy.
Yup, the 80C51 has an XCH opcode too, very nice for that lowest-level stuff. Found in the better microcontrollers ... :)

-jg
> What new spin could ARM apply :
> Well, they can re-define the -M3/deeply embeeded as meaning for very high
> volume ROM ASICs (etc), and excluding FLASH General purpose
> Microcontrollers, and create a new -F3, as a Cortex optimised for Embedded
> Flash uC, and wide, portable software applications.
> Then the IC vendors, and their customers, can decide for themselves which
> feature trade off is more important ?
>
> -jg
Many customers select the ARM because it is a de-facto standard. Cortex isn't, so the value of the Cortex core is significantly lower than that of an ARM even if it is superior. There is no inherent benefit in selecting a non-standard core just because it is provided by ARM.

Cortex will only be valuable if major customers see a benefit in the extra Cortex functionality, but those customers would then probably consider everything else on the market as well.

Why would anyone want to put a Cortex core in a cellular phone/PDA if that means that the applications already developed risk breaking?

--
Best Regards,
Ulf Samuelsson
This is intended to be my personal opinion which may, or may not, be shared by my employer Atmel Nordic AB
Ulf Samuelsson wrote:
>> What new spin could ARM apply :
>> Well, they can re-define the -M3/deeply embeeded as meaning for very high
>> volume ROM ASICs (etc), and excluding FLASH General purpose
>> Microcontrollers, and create a new -F3, as a Cortex optimised for Embedded
>> Flash uC, and wide, portable software applications.
>> Then the IC vendors, and their customers, can decide for themselves which
>> feature trade off is more important ?
>>
>> -jg
>
> Many customers select the ARM because it is a de-facto standard.
> Cortex isn't, so the value of the Cortex core is significantly lower than
> that of an ARM even if it is superior.
> There is no inherent benefit in selecting a non-standard core, just because
> it is provided by ARM.
>
> Cortex will only be valuable if major customers sees a benefit in the extra
> Cortex functionality but those customers would then probably consider
> everything else on the market as well.
>
> Why would anyone want to put a Cortex core in a cellular phone/PDA if that
> means that the applications already developed risk to break.
Fully agree, but some Cortexes are compatible, and some are not.

So, imagine a hypothetical one, done properly by ARM, so that it has all the nice features of Thumb-2, BUT also operates (choke free) on any ARM opcodes that may arrive (at reduced speed is fine, especially if that saves silicon). [ie new design weighting applied, so as to be smaller than the -A, -R variants, but not as broken as the -M variant.]

Surely THAT would have to interest the ARM uC vendors like Atmel, Philips, ST, Analog Devices, etc? Once one of them had it, the others are rather forced to play catch-up. The value of the Cortex_uCFIX core then becomes higher, and the older ARM7s are the ones with significantly lower value....

We will see how this plays out over the next 18 months..

-jg
In comp.sys.arm Ulf Samuelsson <ulf@atmel.nospam.com> wrote:
> Why would anyone want to put a Cortex core in a cellular phone/PDA if that
> means that the applications already developed risk to break.
No one would put a Cortex-M3 in a phone or PDA. It is not the market it is aimed at. They would most probably use a Cortex-A series core.

-p
--
"What goes up must come down, ask any system administrator"
"Jim Granville" <no.spam@designtools.co.nz> wrote in message
news:427b5a17$1@clear.net.nz...

>>> I did note that ARMs 'benchmarks' to justify the Cortex,
>>> focus on
>>> narrow bus systems, but there ARE very small uC shipping,
>>> with wide busses...
Do you have a reference? The only benchmark source I can find on Cortex-M3 is the comparison with other MCUs (1MByte): http://www.arm.com/Multimedia/DevCon2004_presentation.pdf The reason I find this statement surprising is that in fact Thumb-2 works best on wider interfaces (>= 32-bits), as it uses 32-bit instructions. It is faster than ARM when using the same flash interface since it fetches less code.
>> In a wide bus uncached system the perfomance benifits are
>> going to be very slight over stright ARM code, but the code is
>> considerably smaller.
>
> You did read their numbers ?
Yes, it looks promising, but without further details it is difficult to figure out why those numbers look suspiciously good. It's obvious that a prefetch buffer can hide the fetch latency in straight-line code. However typical code branches a lot, and the latency of non-sequential accesses can only be hidden by a cache. Maybe that is what it does... Note a wide interface will not only speed up ARM, but also Thumb-2.
> <snip> Wilco Dijkstra wrote:
>
>>> The Cortex family is very similar: there will be multiple
>>> CPUs at different performance levels within each of the A,
>>> R and M strands, and these will be binary compatible (ie.
>>> no recompile needed).
>>
>> Err What ?! [Who is confused here ?]
>>
>> Earlier in this thread, you stated
>> " ... so existing compilers and objects will continue to
>> work (as long as they don't contain ARM code). "
I said that *within* families CPUs are 100% compatible. The paragraphs are consistent. To clarify with a detailed example:

Suppose we have 2 different Cortex-M cores: M3 and M4. These are fully binary compatible in that you should be able to run an M3 binary on the M4 and vice versa [1][2]. If newer versions provide the performance and features you want then you'll never want to move to another Cortex family (ie. binary compatibility is a non-issue).

However say we also have an R5 core. You should be able to run M3 and M4 binaries on the R5 [1]. However you will need to do some more porting and recompilation to get the best out of the new CPU [3]. The same is true today when you move from an ARM7 to an ARM11.

Alternatively you can also run R5 binaries that have been compiled with downwards compatibility in mind (ie. no ARM code, no R5-specific features etc) on the M3 and M4. Doing this requires a bit of care of course, but no more than you need today for code that is designed to run on many architectures (eg. C libraries).

So moving *within* a Cortex family is generally trivial - you'll get full binary compatibility. Moving *between* Cortex families may require some porting and care to get full binary compatibility. In all cases a recompilation is highly desirable as the compiler can then optimise for that particular CPU.

So... where do you want to migrate to today? (tm)

[1] Of course this level of compatibility only applies to the instruction set - most MCUs have lots of peripherals which cause another level of incompatibility. For example any code that runs on the AT91 series can't run on the LPC2000 series (or vice versa). Even with identical interfaces one chip may have 2 timers and another 8. So if you use a purist definition of "binary compatible" no 2 chips are compatible.
[2] Of course while your M3 code runs fine on newer versions, the pipeline may be a little different, and so your code doesn't run as fast as it could (unless you recompile it - you may not care, but your competitor might).

[3] You'll end up running with the caches disabled as the M3 doesn't have a cache and thus has no code to enable it. So you're not getting full use of the new features - and the difference between potential and actual performance is likely much larger than [2].
>>> So, exactly what DOES happen when a Cortex M3 encounters
>>> an ARM (not thumb) opcode ?
>>>
>>> If it chokes, it is not binary compatible. Very simple.
>>
>> Yes, it chokes.
>
> Good, so we have established it is NOT binary compatible.
We already knew that the M3 does not run ARM code natively; however, it does run existing Thumb and Thumb-2 code, so it is binary compatible with that. So you could port your OS to the M3 (which is something you would have to do even if the M3 supported ARM), then relink your existing Thumb objects/libraries. If you did have any ARM objects without source you could disassemble them and reassemble for Thumb-2 without too much effort. Not 100% compatible, but close enough.

Also a key goal of the M3 is to aid migration of non-ARM 8/16-bit MCUs to the ARM world. The ARM world is totally incompatible of course, but if the gain is worth more than the cost, people will move. The M3 tries to lower the entry barrier as much as possible by removing features that cause new users trouble (like ARM/Thumb interworking, the OS model), and introducing features that make things easier (Thumb-2, DIV, faster interrupts, more flash for a given die size).
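[Editor's note: for readers wondering what the M3's hardware DIV buys, here is a minimal C sketch of the classic 1-bit-per-iteration restoring divide that a core without hardware divide must run in software via a library routine. This is an illustrative sketch, not the actual `__aeabi_uidiv` implementation, and the function name is made up.]

```c
#include <stdint.h>

/* Shift/subtract restoring division: ~32 iterations per divide,
 * versus a handful of cycles for a hardware UDIV instruction.
 * Behaviour for den == 0 is not meaningful here. */
static uint32_t udiv32(uint32_t num, uint32_t den)
{
    uint32_t quot = 0, rem = 0;
    for (int i = 31; i >= 0; i--) {
        rem = (rem << 1) | ((num >> i) & 1u);  /* bring down next bit */
        if (rem >= den) {                      /* restoring step */
            rem -= den;
            quot |= 1u << i;
        }
    }
    return quot;
}
```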
> So far, history proves to be quite intolerant to
> not-binary-compatible options, that cause admin and version control
> grief, and force users to carefully check
> "Now, _which_ ARM did we use in that model ? -was it that Cortex-M?"
I'd expect tools to automatically detect incompatibilities: (a) when linking (automatically select compatible libs, error if incompatible) (b) when simulating/debugging an image (c) when burning an image into flash (d) when running on hardware (trap when executing an incompatible instruction) This is basic stuff. You could even emulate unsupported instructions if you absolutely needed it.
> Many ideas in Cortex are very good, and fix the shortfalls in the ARM
> for embedded control, but I fear ARM looks to be repeating the mistakes
> of history, by not learning from it....
>
> Will we find that the Cortex-M quietly gets 'de-emphasised' ?
Given that the M3 outperforms the good old ARM7TDMI by such a large margin on all aspects and Cortex has Thumb-2 written all over it, what do you think may quietly get "de-emphasised"? :-)

Wilco
Wilco Dijkstra wrote:
> "Jim Granville" <no.spam@designtools.co.nz> wrote in message
> news:427b5a17$1@clear.net.nz...
>
>>>> I did note that ARMs 'benchmarks' to justify the Cortex,
>>>> focus on
>>>> narrow bus systems, but there ARE very small uC shipping,
>>>> with wide busses...
>
> Do you have a reference? The only benchmark source I can find
> on Cortex-M3 is the comparison with other MCUs (1MByte):
> http://www.arm.com/Multimedia/DevCon2004_presentation.pdf
>
> The reason I find this statement surprising is that in fact Thumb-2 works
> best on wider interfaces (>= 32-bits), as it uses 32-bit instructions. It is
> faster than ARM when using the same flash interface since it fetches less
> code.
Yes, the ARM info is sparse and poorly detailed, but what they have published shows Thumb-2 to have LOWER performance than ARM, but better code density. Thumb-2 _does_ decrease the step effect between ARM and Thumb, and adds smarter embedded opcodes. They state it is a mix of 16 bit and 32 bit opcodes.
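[Editor's note: the 16/32-bit mix mentioned above is decided per instruction. Per the ARM Architecture Reference Manual, a halfword whose top five bits are 0b11101, 0b11110 or 0b11111 is the first half of a 32-bit Thumb-2 instruction; anything else is a complete 16-bit instruction. A decoder's length check reduces to this one-liner (sketch; function name is ours):]

```c
#include <stdbool.h>
#include <stdint.h>

/* Returns true if the given first halfword starts a 32-bit
 * Thumb-2 instruction, false if it is a 16-bit instruction. */
static bool thumb_insn_is_32bit(uint16_t first_halfword)
{
    /* bits[15:11] in {0b11101, 0b11110, 0b11111} => 32-bit */
    return (first_halfword >> 11) >= 0x1D;
}
```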
>>> In a wide bus uncached system the perfomance benifits are
>>> going to be very slight over stright ARM code, but the code is
>>> considerably smaller.
>>
>> You did read their numbers ?
>
> Yes, it looks promising, but without further details it is difficult
> to figure out why those numbers look suspiciously good. It's obvious
> that a prefetch buffer can hide the fetch latency in straight-line code.
> However typical code branches a lot and the latency of non-sequential
> accesses can only be hidden by a cache. Maybe that is what it does...
>
> Note a wide interface will not only speedup ARM, but also Thumb-2.
Yes, but the biggest effect is to remove the hit that a normal 32 bit opcode fetch encounters. It is an opcode-bandwidth issue: matching opcode bandwidth to memory bandwidth.
>> <snip> Wilco Dijkstra wrote:
>>
>>>>> The Cortex family is very similar: there will be multiple
>>>>> CPUs at different performance levels within each of the A,
>>>>> R and M strands, and these will be binary compatible (ie.
>>>>> no recompile needed).
>>>>
>>>> Err What ?! [Who is confused here ?]
>>>>
>>>> Earlier in this thread, you stated
>>>> " ... so existing compilers and objects will continue to
>>>> work (as long as they don't contain ARM code). "
>
> I said that *within* families CPUs are 100% compatible. The
> paragraphs are consistent. To clarify with a detailed example:
>
> Suppose we have 2 different Cortex-M cores: M3 and M4. These
> are fully binary compatible in that you should be able to run an M3
> binary on the M4 and visa versa [1][2]. If newer versions provide
> the performance and features you want then you'll never want to move
> to another Cortex family (ie. binary compatibility is a non-issue).
>
> However say we also have an R5 core. You should be able to
> run M3 and M4 binaries on the R5 [1]. However you will need
> to do some more porting and recompilation to get the best out of the
> new CPU [3]. The same is true today when you move from an ARM7
> to an ARM11.
>
> Alternatively you can also run R5 binaries that have been compiled
> with downwards compatibility in mind (ie. no ARM code, no
> R5-specific features etc) on the M3 and M4. Doing this requires
> a bit of care of course, but no more than you need today for code
> that is designed to run on many architectures (eg. C libraries).
>
> So moving *within* a Cortex family is generally trivial - you'll get
> full binary compatibility. Moving *between* Cortex families may
> require some porting and care to get full binary compatibility.
> In all cases a recompilation is highly desirable as the compiler can
> then optimise for that particular CPU.
These verbal gymnastics aptly demonstrate my point that calling the M3 something clearly different would have helped. When you have to underline the difference between 'within' and 'between', then perhaps a clearer naming scheme would have been smarter.
> So... where do you want to migrate to today? (tm)
>
> [1] Of course this level of compatibility only applies to the instruction
> set - most MCUs have lots of peripherals which cause another level of
> incompatibility. For example any code that runs on the AT91 series can't
> run on the LPC2000 series (or visa versa). Even with identical interfaces
> one chip may have 2 timers and another 8. So if you use a purist definition
> of "binary compatible" no 2 chips are compatible.
Binary compatible means what it does on the 80C51: NO opcode choking. Very simple. SFR and peripheral compatibility are easier to manage.
> [2] Of course while your M3 code runs fine on newer versions, the
> pipeline may be a little different, and so your code doesn't run as
> fast as it could (unless you recompile it - you may not care, but your
> competitor might).
>
> [3] You'll end up running with the caches disabled as the M3 doesn't
> have a cache and thus has no code to enable it. So you're not getting
> full use of the new features - and the difference between potential
> and actual performance is likely much larger than [2].
>
>>>> So, exactly what DOES happen when a Cortex M3 encounters
>>>> an ARM (not thumb) opcode ?
>>>>
>>>> If it chokes, it is not binary compatible. Very simple.
>>>
>>> Yes, it chokes.
>>
>> Good, so we have established it is NOT binary compatible.
>
> We already knew that the M3 does not run ARM code natively, however
> it does run existing Thumb and Thumb-2 code, so it is binary compatible
> with that. So you could port your OS to the M3 (which is something you
> would have to do even if the M3 supported ARM), then relink your existing
> Thumb objects/libraries. If you did have any ARM objects without source
> you could disassemble them and reassemble for Thumb-2 without too much
> effort. Not 100% compatible, but close enough.
'Close enough' for who ? ARM users will make that call, not ARM marketing.
> Also a key goal of the M3 is to aid migration of non-ARM 8/16-bit MCU to
> the ARM world. The ARM world is totally incompatible of course, but if the
> gain is worth more than the cost, people will move. The M3 tries to lower
> the entry barrier as much as possible by removing features that cause new
> users trouble (like ARM/Thumb interworking, the OS model), and introducing
> features that make things easier (Thumb-2, DIV, faster interrupts, more flash
> for a given die size).
And this seems to be the crux of the problem. ARM seem to think they can replace the 8051/8 bit sector with this new variant. Instead, they have lost focus on what attracts users to ARM (see Ulf's comments). Atmel, Philips et al _already_ have sub $3 offerings, so there is substantial overlap into the 8/16 bit arena now. And this with an ARM/Thumb offering.

Mostly, the uC selection decisions I see made hinge on peripherals & FLASH/RAM, NOT the core itself. As Ulf says, they choose ARMs _because_ they are binary [opcode] compatible.

Philips seem to have a HW solution that simply and effectively reduces the ARM/Thumb step effect. Thus any "new core" benchmarks that exclude this solution lack credibility. It is better to talk about the better embedded opcodes/features in Cortex - and the A and R variants _include_ ARM opcodes. After all, code size is steadily getting both larger and cheaper, with FLASH ARMs now well clear of 8/16 bit models in FLASH resource.
>> So far, history proves to be quite intolerant to
>> not-binary-compatible options, that cause admin and version control
>> grief, and force users to carefully check
>> "Now, _which_ ARM did we use in that model ? -was it that Cortex-M?"
>
> I'd expect tools to automatically detect incompatibilities:
>
> (a) when linking (automatically select compatible libs, error if incompatible)
> (b) when simulating/debugging an image
> (c) when burning an image into flash
> (d) when running on hardware (trap when executing an incompatible instruction)
>
> This is basic stuff. You could even emulate unsupported instructions if you
> absolutely needed it.
Key words here are 'expect' and 'could'. We are talking about existing, proven tools in use right now, not horizonware.
>> Many ideas in Cortex are very good, and fix the shortfalls in the ARM
>> for embedded control, but I fear ARM looks to be repeating the mistakes
>> of history, by not learning from it....
>>
>> Will we find that the Cortex-M quietly gets 'de-emphasised' ?
>
> Given that M3 outperforms the good old ARM7tdmi by such a large
> margin on all aspects and Cortex has Thumb-2 written all over it, what do
> you think may quietly get "de-emphasised"? :-)
That's easy: the lack of binary [opcode] compatibility. Will Ulf be pushing Atmel to release a -M3 microcontroller? I doubt it!

I simply don't see the 'such a large margin on all aspects' in ARM's published information at all. Those graphs show Thumb-2 as being LARGER than Thumb, and SLOWER than ARM [but also smaller than ARM, and faster than Thumb]. Their example claim of a system size saving of a (mere) 9% also avoids any comment on speed. Hmmmm... ?

To me, Thumb-2 is a sensible middle ground between ARM and Thumb (it fixes some of the older core's shortcomings), but the removal of ARM binary compatibility on the M3, and the apparent pitch into a space users are leaving void, is poorly researched. Time will show who is right :)

-jg
> and this seems to be the crux of problem. ARM seem to think they can
> replace the 8051/8 bit sector with this new variant. Instead, they have
> lost focus on what attracts users to ARM ( see Ulf's comments ).
> Atmel, Philips et al _already_ have sub $3 offerings, so there is
> substantial overlap into the 8/16 bit arena now. And this with an
> ARM/Thumb offering.
The cost is in the memory, but memory is getting cheaper and most customers would not care about 10% anyway.
> Mostly, the uC selection decisions I see made, hinge on Peripherals &
> FLASH/RAM, NOT the core itself. As Ulf says, they choose ARM's
> _because_ they are binary [opcode] compatible.
>
> Philips seem to have a HW solution that simply and effectively
> reduces the ARM/Thumb step effect. Thus any "new core" benchmarks that
> exlude this solution, lack credibility.
I just question the attitude that because ARM is successful with the ARM/Thumb core someone needs to bother about the Cortex core. If it is not binary compatible, then it should be compared on an equal basis. I think the Intel "Itanic" proves that the name of the manufacturer is not the issue.
> Will Ulf be pushing Atmel to release a -M3 microcontroller : I doubt it!
Better things to do :-) If it is needed, then it probably does not take long to get it. Most people will select the micro on the peripherals anyway, as Jim pointed out.

--
Best Regards,
Ulf Samuelsson
This is intended to be my personal opinion which may, or may not, be shared by my employer Atmel Nordic AB
