>> From the very start, the 68k had a 32-bit programming architecture.
>> For cost reasons, the implementation used a 16-bit ALU and datapath, but
>> all the registers were 32-bit, and all instructions supported 8-bit,
>> 16-bit and 32-bit widths (even though the 32-bit versions took twice as
>> many clock cycles). This meant that when 32-bit ALUs became
>> economically feasible, the 68k just got faster with the same software,
>> unlike the x86 architecture that got seriously ugly in the move to 32 bits.
>
> In a way, we have to thank the x86 marketer for beating the 68k.
> Otherwise, many programmers would have stayed with assembly and C would
> not be as popular. C masks the ugly x86 architecture.
>
People have used C on the 68k for about as long as there has been C (68k
cpus were a popular choice for early unix workstations such as Sun's
first machines, unlike the x86, which only gained serious *nix popularity
with Linux; the 68k was also the original target for gcc). Writing a C
compiler for the 68k is peanuts compared to writing one for the x86,
since the 68k has a wide set of mostly orthogonal registers, plenty of
address registers, and addressing modes ideal for C. Getting the best
out of an x86 device is a black art, and it was a long time before C
compilers could compete with professional x86 assembler programmers. So
I expect most serious x86 development was still being done in assembly
long after C (and other high level languages) were standard on the 68k
(the Mac OS being a notable exception, written mostly in assembly for
some reason).
The legacy of assembly on the x86 is one of the reasons why the
instruction set is so hideous - it has had to keep 100% binary
compatibility because you can't just recompile your assembly code for a
new architecture. The 68k architecture, on the other hand, has seen
many binary incompatible changes (such as the removal of rarer
addressing modes) to improve efficiency.
Reply by Paul Keinanen●September 7, 2008
>On Sep 7, 9:15 pm, linnix <m...@linnix.info-for.us> wrote:
>> ....
>> In a way, we have to thank the x86 marketer for beating the 68k.
>> Otherwise, many programmers would have stayed with assembly and C would
>> not be as popular. C masks the ugly x86 architecture.
>
>Indeed so. The world has to thank x86 for making C popular and
>thus totally messing up programming for decades - so far.
>This probably sounds weird to almost everyone; likely because
>C has always been the only thing "everyone" has ever been good
>at...
The strange thing is that C became so common and not PL/M-86.
A few years earlier, most 8-bitters were programmed in assembler, but
Intel marketed PL/M-80 to mask the ugly 8080 architecture. PL/M-80
was used quite a lot in those days.
Paul
Reply by Didi●September 7, 2008
On Sep 7, 9:15 pm, linnix <m...@linnix.info-for.us> wrote:
> ....
> In a way, we have to thank the x86 marketer for beating the 68k.
> Otherwise, many programmers would have stayed with assembly and C would
> not be as popular. C masks the ugly x86 architecture.
Indeed so. The world has to thank x86 for making C popular and
thus totally messing up programming for decades - so far.
This probably sounds weird to almost everyone; likely because
C has always been the only thing "everyone" has ever been good
at...
Didi
------------------------------------------------------
Dimiter Popoff Transgalactic Instruments
http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/sets/72157600228621276/
Original message: http://groups.google.com/group/comp.arch.embedded/msg/f724b65b3f8f4483?dmode=source
Reply by linnix●September 7, 2008
> From the very start, the 68k had a 32-bit programming architecture.
> For cost reasons, the implementation used a 16-bit ALU and datapath, but
> all the registers were 32-bit, and all instructions supported 8-bit,
> 16-bit and 32-bit widths (even though the 32-bit versions took twice as
> many clock cycles). This meant that when 32-bit ALUs became
> economically feasible, the 68k just got faster with the same software,
> unlike the x86 architecture that got seriously ugly in the move to 32 bits.
In a way, we have to thank the x86 marketer for beating the 68k.
Otherwise, many programmers would have stayed with assembly and C would
not be as popular. C masks the ugly x86 architecture.
Reply by David Brown●September 7, 2008
ChrisQ wrote:
> Didi wrote:
>
>> The tiny coldfires compete for the lowest power market segment, only
>> the 430 is in that category. And CF has that true 68K style IRQ
>> priority scheme, none of the rest have it (and very few people know
>> what to do with it, of course).
>>
>> Didi
>>
>
> Have been a long term fan of 68k and considering that it's been around
> since 1979'ish, has to be one of the longest lasting embedded 16 bit
>> architectures around. A clean, orthogonal design with (as you say) a
> fully vectored prioritised interrupt subsystem that's hard to better 25+
> years later. Renesas copied it almost verbatim in the M30870.. series
> and probably many others as well.
>
From the very start, the 68k had a 32-bit programming architecture.
For cost reasons, the implementation used a 16-bit ALU and datapath, but
all the registers were 32-bit, and all instructions supported 8-bit,
16-bit and 32-bit widths (even though the 32-bit versions took twice as
many clock cycles). This meant that when 32-bit ALUs became
economically feasible, the 68k just got faster with the same software,
unlike the x86 architecture that got seriously ugly in the move to 32 bits.
> Looked at coldfire and wanted an excuse to use it, but can't ignore the
> logic of arm, which has dozens of vendors. Parts are cheap and powerful
> as well and you just have to live with stuff like the idiosyncratic
> interrupt structure. Nothing like as clean as 68k, but I guess that's
> progress and still haven't really forgiven Freescale for eol'ing
> Dragonball 68k at pretty short notice...
>
There are lots of vendors of ARM devices, (almost) none of which are
compatible with each other in terms of peripherals, pin outs, or pretty
much anything except the core. For most developers, you don't choose to
use ARMs - you choose to use Atmel ARMs or NXP ARMs, or whatever. There
is not actually much of a difference between that and choosing the
Freescale ColdFires - you base the choice on things like peripheral
mixes for particular devices, cost, distributors, tools, etc.
The Dragonball was always a special case for Freescale - it was a
one-off device, not part of a family (though the core was), and aimed at
a few specific large customers (such as Palm). When these customers
moved on, 99% of the sales dried up. It's a different matter for
devices like the 68332, which has a much wider range of customers. That
device must be nearly 20 years old, and Freescale have still not managed
to EOL it despite trying for years to move customers to the MPC5xx line
(PPC based) and the later MCF523x (ColdFire based, and much more popular
among 68332 users).
Reply by David Brown●September 7, 2008
larwe wrote:
> On Sep 5, 10:41 am, David Brown <da...@westcontrol.removethisbit.com>
> wrote:
>
>> That may be the impression *you* have got of the ColdFire, but it is
>> totally at odds with reality. The ColdFire is very much a major 32-bit
>
> Mere availability of a wide range of devices is an orthogonal issue to
> the matter of obsolescence. ColdFire is used, yes, but (if you buy the
> reports) it's experiencing a shrinking number of design wins.
> Freescale as a whole isn't doing amazingly well these days, FTM.
>
Availability of *new* devices is a good indication that an architecture
is not obsolete (or going obsolete). I agree about *existing* devices,
whose availability is, as you say, orthogonal to obsolescence. But no
company will devote significant resources to making new products for a
dying range (look at Freescale's MCore range, for an example).
I don't have any numbers for where the ColdFire stands in either the
number of designs or the number of parts, but I have seen nothing to
indicate that it is not a major architecture, or that it will not
continue to be one.
>> processor architecture with devices ranging from tiny low-power with
>> integrated memories to superscalar devices at several hundred MHz.
>
> ... exactly like the popular cores, viz. ARM and MIPS. ColdFire
> occupies the same space for Freescale that AVR32 does for Atmel (and
> most of the other silicon vendors have their own proprietary 32-bit
> cores, too - NEC, ST, ...). They're generally available but not really
> what one would call mainstream.
>
Like the AVR32, and unlike the ARM, MIPS, PPC, and x86, the ColdFire is
single-source. If by "mainstream" you mean "available from multiple
sources", then the ColdFire is not mainstream. But if by "mainstream"
you mean popular, easily available in small and large quantities, well
supported by its manufacturer, distributors, and third-party tool
developers (open source, small commercial, and big commercial vendors,
for compilers, debuggers, OS's, software libraries, hardware development
kits, etc.), then I think the ColdFire is mainstream.
>> The ColdFire core bears no resemblance to the 8-bit Freescale cores -
>> perhaps you are thinking only of the ColdFire v1 cores that are
>> available in the same package and with the same peripherals as a range
>> of 68S08 devices (the idea being that you can easily move between
>
> Yes, that's what I meant. Wasn't implying any architectural similarity
> between the cores, I was talking about the migration path Freescale
> touts.
>
Fair enough.
Reply by Didi●September 6, 2008
On Sep 6, 6:05 pm, larwe <zwsdot...@gmail.com> wrote:
> ...
> My point is that there needs to be a compelling reason to choose a
> proprietary core.
I do not see how ARM is more mainstream than 68k, other than
in marketeer talk. What do you get if your MCU is ARM based
and not CF or whatever? It is still single sourced. There is
likely more 68k code around than there is ARM - it has been used
much longer and in much larger applications than ARM goes into.
There must be some compelling reason nowadays with all that
C programming mess to prefer a part based on its core anyway;
access to an efficient assembly language for someone who can
take advantage of it looks like one to me.
> > How much does the part you refer to consume running full power
> > at 50 MHz core clock? The MCF51QE128 @3.3V, 50 MHz core/25 MHz bus
>
> The part isn't characterized at that precise frequency (it is
> characterized from 8 to 72MHz in various steps), but at 48MHz core,
fbus=fcore (but 1WS, effectively 24MHz) it's 36.1mA with all
> peripherals enabled, 24.4 with all peripherals disabled (running from
> flash) or 31.5/20.5mA running from RAM. Definitely the same ballpark.
Same ballpark indeed; if the ARM based part also has the same low
power saving modes - slower internal clock etc. - this becomes even more so.
> > The prices I know of are for 1000+, are yours at 1000+ as well?
>
> We don't bother to get pricing for anything in quantities under
> 10000 :)
So prices are the same. Well, I am sure I could pack a lot more code
into a CF part than anyone could pack using C in either ARM or CF
(factor of 10+) and I would be faster for practically any project
larger than something saying "hello world", so I know what my choice
would be.
Didi
------------------------------------------------------
Dimiter Popoff Transgalactic Instruments
http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/sets/72157600228621276/
Original message: http://groups.google.com/group/comp.arch.embedded/msg/e1a59d95f4de6143?dmode=source
Reply by larwe●September 6, 2008
On Sep 6, 10:37 am, Didi <d...@tgi-sci.com> wrote:
> Perhaps someone familiar with both families could comment,
> I am not that familiar with ARM and you don't seem to be
> very familiar with CF. Actually I have yet to become very
My point is that there needs to be a compelling reason to choose a
proprietary core. I remember having the same argument in this NG about
AVR32. The Atmel guys have given up on trying to sell it to us, when
they come round to talk about products they just say "oh, and of
course there's AVR32 as well". Microchip keeps trying to sell us the
32-bit PICs... which reminds me they were going to give me an EVB to
play with at home. I must ping them about that.
> How much does the part you refer to consume running full power
> at 50 MHz core clock? The MCF51QE128 @3.3V, 50 MHz core/25 MHz bus
The part isn't characterized at that precise frequency (it is
characterized from 8 to 72MHz in various steps), but at 48MHz core,
fbus=fcore (but 1WS, effectively 24MHz) it's 36.1mA with all
peripherals enabled, 24.4 with all peripherals disabled (running from
flash) or 31.5/20.5mA running from RAM. Definitely the same ballpark.
That's why I got puzzled when you or someone else in this thread
started talking about low power consumption; CF simply isn't amazingly
slender in this day and age. Low-power apps that don't require much
horsepower go with an 8-bitter or an MSP430; apps that do require
number-crunching horsepower often use a low-power DSP these days.
> The prices I know of are for 1000+, are yours at 1000+ as well?
We don't bother to get pricing for anything in quantities under
10000 :)
Reply by Didi●September 6, 2008
On Sep 6, 4:40 pm, larwe <zwsdot...@gmail.com> wrote:
> On Sep 5, 10:05 pm, Didi <d...@tgi-sci.com> wrote:
>
> > But the tiny CF parts - which are all <$5, some are closer
> > to $2 - are really interesting and have what it takes to
>
> How can they compare to a sub-$1 Cortex-M3 part with 64K flash, 16K
> RAM, and loads of peripherals?
Perhaps someone familiar with both families could comment,
I am not that familiar with ARM and you don't seem to be
very familiar with CF. Actually I have yet to become very
familiar with it myself, but I have already done some work
in this direction.
How much does the part you refer to consume running full power
at 50 MHz core clock? The MCF51QE128 @3.3V, 50 MHz core/25 MHz bus
specifies 33.4 mA - everything on and running, obviously it
goes down through the uA to nA range using different power saving
modes.
The prices I know of are for 1000+, are yours at 1000+ as well?
Didi
------------------------------------------------------
Dimiter Popoff Transgalactic Instruments
http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/sets/72157600228621276/
Reply by larwe●September 6, 2008
On Sep 5, 10:05 pm, Didi <d...@tgi-sci.com> wrote:
> But the tiny CF parts - which are all <$5, some are closer
> to $2 - are really interesting and have what it takes to
How can they compare to a sub-$1 Cortex-M3 part with 64K flash, 16K
RAM, and loads of peripherals?