> ARM were the right people, at the right time, with the right deals. The
> original ARM processor was a very nice design - it was done by smart
> folk who looked at available processors, and thought of a better way to
> make a CPU. It was low-power and low-size from the beginning.

It probably helped that they had only a small team, hence they could only design in so many transistors. A few decades later a limited transistor count meant low power consumption, which made the ARM a good choice for the battery-powered hand-held thingies that evolved into today's telephones and tablets.

> They made an attempt with Microchip - but they were perhaps the
> worst possible partner. Microchip are very popular with small hobby
> users - but that gives them a reputation of /only/ being for small hobby
> users, and their other microcontrollers have a well-justified reputation
> for having weird and painful cpu architectures and often very poor
> tools. (Microchip have many good points too - I'm just talking about
> the cpu cores here.) So the PIC32 is assumed to be another in the line
> of bizarre Microchip-specific cores, and the decision to make the free
> tools limited in optimisation (rather than the more common space
> limitations) means that people trying them find the chips to be very slow.

When it came out I was very enthusiastic about the PIC32: a 32-bit chip available in various sleek DIP housings, at good prices! But the compiler situation made me look elsewhere. I definitely want independent and free (GCC and/or LLVM) support, including C++, full optimization, startup code, and register definition header files. This is all readily available for most ARM chips.

Wouter van Ooijen
x86 vs The World...
Started by ●June 4, 2014
Reply by ●June 5, 2014
Reply by ●June 5, 2014
On 05.6.2014 г. 17:03, Walter Banks wrote:

> Dimiter_Popoff wrote:
> ....
>> ....
>>
>> Power architecture is very much alive, just check what Freescale are
>> making.
>
> PowerPC in quite a few of the Freescale parts is designed to work
> in bad environments, both electrical and physical (automotive
> engine controllers, and bad-environment applications like process
> control).
>
> w..

Yes, these are their MCUs based on Power. Mostly automotive, ECU-targeted etc. But they also have the QorIQ series - they could have come up with a better name for it, but some of its products are already available and others look close enough. GHz-range multicore 32- and 64-bit Power architecture monsters - smallish to really large beasts. Then they have parts, not that new, which are still to be matched for smaller systems - e.g. the MPC5200B, yet to be beaten by any competitor part in its niche.

Whoever laid out the Power architecture in the 80s was quite a visionary; it does not leave much, if anything, to ask for after all these years. Except the awful assembly mnemonics, but whether one writes in my VPA (68k and further mnemonics) or in C, this is a non-issue.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/sets/72157600228621276/
Reply by ●June 5, 2014
Like in high-school debate teams, sometimes you have to argue even if you disagree with yourself. I am playing devil's advocate and assign myself to the x86 team.

1. Yes, segmented 16-bit x86 is painfully ugly. But 32-bit x86 is fine, especially when we don't have to look at it. We just tell the compiler what to do.

2. We just can't ignore the fact that it was the first popular mass-marketed chip and will continue to be around.

3. Most PCs/laptops are still x86. It is nice to be able to develop and test on the same system. Used laptops are cheaper than $100. They are great development and test machines.

4. Regarding power, another post on S.E.D:

> 13 July 2011, Inside Manchester's million ARM electronic brain:
> http://www.electronicsweekly.com/Articles/12/07/2011/51444/inside-manchesters-million-arm-electronic-brain.htm
> Quote: "...
> The 18 core IC is claimed to deliver the computing power of a PC and
> dissipate 1W, said the University.

Well, which PC? The dual-core Atom (x86) PC is twice the PC at 2W; so, it's about 1W per PC. The Cavium (ARM) chip is 48-core at 100W if fully utilized. The core does not really matter much. More than half of the heat comes from the 16 Mbytes of cache per core. OTOH, it might not make sense to have uniform cache size. Perhaps some with 32M, 64M and 128M, etc.
Reply by ●June 6, 2014
rickman wrote:

> Is there any compelling technical reason for the emergence of the ARM
> over other non-x86 processors?

One thing a lot of people forget is that ARM was actually the first commercial RISC processor. Yes, it was inspired by the work of the Berkeley-RISC and Stanford-MIPS teams, but their commercial results, namely SPARC and MIPS, came a little bit later.

Also, most of the other early RISC processors were designed for fast workstations, while Acorn was looking for a successor to the 6502 they used in their earlier computers, one with better latency than a 68000 or x86. Thus their design led to a pretty power-efficient CPU, because they weren't aiming for raw processing power.

A non-technical reason might have been the work of Robin Saxby, the first CEO of ARM. He basically set up office in a jet, flew around the world, and tried to sell ARM cores to anyone who was willing to listen to him. I guess it worked...
Reply by ●June 6, 2014
On 2014-06-06, Michael Koenig <mikenospam@email.de> wrote:

> Also, most of the other early RISC processors were designed for fast
> workstations, while Acorn was looking for a successor to the 6502 they used
> in their earlier computers, one with better latency than a 68000 or
> x86. Thus their design led to a pretty power-efficient CPU, because they
> weren't aiming for raw processing power.
>
> A non-technical reason might have been the work of Robin Saxby, the first CEO
> of ARM. He basically set up office in a jet, flew around the world, and tried
> to sell ARM cores to anyone who was willing to listen to him.
> I guess it worked...

I still think one of the reasons an architecture becomes popular is that people have the opportunity to be exposed to it before they have to start making recommendations within the workplace. The reason is the obvious one: people are far more likely to recommend something if they have prior positive experience of it.

That generally means having an infrastructure to get the architecture into the hands of people like students and hobbyists, and at a price those people can afford. Of the alternative architectures listed, only ARM meets those criteria, with MIPS a very poor and distant second.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Reply by ●June 13, 2014
On Thu, 05 Jun 2014 16:11:23 +0200, Wouter van Ooijen wrote:

> When it came out I was very enthusiastic about the PIC32: a 32-bit chip
> available in various sleek DIP housings, at good prices! But the
> compiler situation made me look elsewhere. I definitely want independent
> and free (GCC and/or LLVM) support, including C++, full optimization,
> startup code, and register definition header files. This is all readily
> available for most ARM chips.

But the PIC32 is MIPS, and GCC has had MIPS support forever - probably longer than ARM! After all, MIPS was the original RISC architecture from Stanford (Hennessy/Patterson). I think it was more the development tools (hardware and software debuggers) that kept it proprietary and alone.
Reply by ●June 13, 2014
On Fri, 06 Jun 2014 13:59:46 +0000, Michael Koenig wrote:

> One thing a lot of people forget is that ARM was actually the first
> commercial RISC processor. Yes, it was inspired by the work of the
> Berkeley-RISC and Stanford-MIPS teams, but their commercial results,
> namely SPARC and MIPS, came a little bit later.

Not quite: MIPS and RISC came out in 1983 and the ARM-1 in 1985. Berkeley RISC did not really have a commercial follow-up, and of course MIPS was the target of the Hennessy/Patterson book that implanted the RISC ideology in the industry.
Reply by ●June 13, 2014
Przemek Klosowski schreef op 13-Jun-14 5:14 AM:

> On Thu, 05 Jun 2014 16:11:23 +0200, Wouter van Ooijen wrote:
>
>> When it came out I was very enthusiastic about the PIC32: a 32-bit chip
>> available in various sleek DIP housings, at good prices! But the
>> compiler situation made me look elsewhere. I definitely want independent
>> and free (GCC and/or LLVM) support, including C++, full optimization,
>> startup code, and register definition header files. This is all readily
>> available for most ARM chips.
>
> But the PIC32 is MIPS, and GCC has had MIPS support forever - probably
> longer than ARM! After all, MIPS was the original RISC architecture from
> Stanford (Hennessy/Patterson). I think it was more the development tools
> (hardware and software debuggers) that kept it proprietary and alone.

Microchip also added some optimization pass to the compiler. And at the time there was no easily available linker script, startup code, etc. And the Microchip compiler was C only, no C++ or other GCC languages.

Wouter
Reply by ●June 13, 2014
On 13/06/14 08:44, Wouter van Ooijen wrote:

> Przemek Klosowski schreef op 13-Jun-14 5:14 AM:
>> On Thu, 05 Jun 2014 16:11:23 +0200, Wouter van Ooijen wrote:
>>
>>> When it came out I was very enthusiastic about the PIC32: a 32-bit chip
>>> available in various sleek DIP housings, at good prices! But the
>>> compiler situation made me look elsewhere. I definitely want independent
>>> and free (GCC and/or LLVM) support, including C++, full optimization,
>>> startup code, and register definition header files. This is all readily
>>> available for most ARM chips.
>>
>> But the PIC32 is MIPS, and GCC has had MIPS support forever - probably
>> longer than ARM! After all, MIPS was the original RISC architecture from
>> Stanford

Yes, gcc support for MIPS is much older than for ARM - it was one of the earliest targets supported after the original m68k.

>> (Hennessy/Patterson). I think it was more the development tools (hardware
>> and software debuggers) that kept it proprietary and alone.
>
> Microchip also added some optimization pass to the compiler. And at the
> time there was no easily available linker script, startup code, etc. And
> the Microchip compiler was C only, no C++ or other GCC languages.

I don't know that Microchip added much to gcc - at most just tweaks. What Microchip did was take the gcc source code, add in license protection, and bundle it together with a debugger, library and linker scripts. Because of the GPL, they /very/ reluctantly made the source code for their modified gcc available (including the licence protection, which other users then had to remove from the source before compiling). The GPL did not let them restrict users of gcc - so they added licensing clauses to their library (whose source was kept secret) to say that it may only be used along with Microchip-supplied binaries of gcc.
The license protection on their gcc only allowed -O0 (no optimisation), and C only, in the free version - you had to pay substantial amounts to be allowed to enable optimisation.

The shenanigans pulled by Microchip were technically legal and within the limitations of the GPL (other companies charge for gcc + extras bundles) - but morally they stole the compiler and sold licenses to their users, charging them significantly for something they could get for free, and spoiling their own market (which is not illegal, but is pretty stupid).

Some users went to the effort of compiling gcc themselves from Microchip's source, but then they had to find a library themselves (typically newlib) and put things together. Or they went to CodeSourcery, who provide working free MIPS gcc toolchains, or paid-for versions with support (from the people who wrote much of the compiler, unlike the Microchip folk). But mostly people felt it was not worth the effort, and bought ARM chips instead.
Reply by ●June 26, 2014
On Wed, 04 Jun 2014 18:50:39 -0700, Paul Rubin wrote:

> Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> writes:
>> forgotten how utterly crap the x86 architecture is (at bare metal level)
>> when compared to ARM.
>
> I've heard (maybe someone else knows specifics) that x86 is much better
> than ARM with regard to things like predictable ordering of memory
> operations in multicore systems.

x86 (and x86-64) does the least amount of memory re-ordering of the mainstream architectures. Alpha does the most, followed by ARM and IA-64 (Itanium):

http://en.wikipedia.org/wiki/Memory_ordering

This is largely an artifact of backward compatibility going all the way back to the 8086. More aggressive re-ordering would likely break a lot of existing code. New architectures don't have existing code to worry about.

The issue isn't so much that x86 is more predictable per se, but that achieving predictability requires fewer memory barrier instructions, which means fewer opportunities for the programmer to omit one.
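The barrier point can be made concrete with C11 atomics. A minimal sketch of the classic message-passing pattern: on x86 the release/acquire pair costs essentially nothing beyond a plain store and load, while on ARM it is what tells the compiler to emit the required barriers (dmb on ARMv7, or ldar/stlr on ARMv8). Writing the ordering explicitly makes the code correct on both, which is the point of the closing paragraph above.

```c
#include <stdatomic.h>

static int payload;              /* ordinary data, no atomics needed */
static atomic_int ready = 0;     /* synchronisation flag */

void producer(void) {
    payload = 42;                               /* (1) write the data   */
    atomic_store_explicit(&ready, 1,
                          memory_order_release); /* (2) publish the flag */
}

int consumer(void) {
    /* Spin until the flag is visible; the acquire load pairs with the
     * release store, so (1) is guaranteed visible after the loop. */
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;
    return payload;
}
```

Code that omits the ordering (plain non-atomic flag accesses) often appears to work on x86 precisely because of its strong ordering, then fails intermittently when ported to ARM - which is the "omitted barrier" failure mode being described.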