Reply by Grant Edwards ● February 16, 2009
On 2009-02-16, George Neuner <gneuner2@comcast.net> wrote:
>>I once used a processor which I and most other people would
>>call a 16-bit processor (16-bit registers, 16-bit address
>>space, 16-bit data paths). However, it was built out of a set
>>of AM2901 bit-slice processors. Since each of the AM2901 ALUs
>>was 4-bits wide, I guess George would say that the CPU in
>>question was a 4-bit CPU.
>>
>>There were some pretty famous CPUs built using the AM2901
>>family: DEC PDP-10, DG Nova, AN/UYK-44, and so on. All of them
>>4-bit CPUs, presumably.
>
> Ha Ha.
>
> Bit-slicing is an implementation detail
That's what we said about the ALU width(s) in the m68k family.
It was an implementation detail that was hidden from the
compiler.
> - what matters is how many bits are being computed in
> parallel.
No, that doesn't matter at all to somebody designing or
building a compiler.
> I didn't want to get deeper into this discussion, but here
> goes ...
>
> The problem is to specify the chip's capability and give some
> indication as to its performance (at least relative to other
> members of the same family).
No, that's not the problem. The problem we're discussing is
how to describe the "width" of a CPU in the context of compiler
design and implementation. I don't care if a VAX CPU is
running at 1Hz or 100GHz. I don't care if it does ALU
operations 128 bits at a time or 2 bits at a time. The VAX CPU
is a 32-bit CPU.
> The ISA defines the chip's programming API.
And that's what we're talking about in this thread.
> AFAICS, the ALU's bit width (total combined bit width if
> sliced) and its set of primitive operations are the only
> really objective measures of a chip's processing capability.
In this thread, we aren't concerned with "a chip's processing
capability" in any way other than the ISA as seen by the
compiler.
--
Grant Edwards grante Yow! Well, I'm INVISIBLE
at AGAIN ... I might as well
visi.com pay a visit to the LADIES
ROOM ...
Reply by George Neuner ● February 16, 2009
On Sat, 14 Feb 2009 23:42:47 -0600, Grant Edwards <grante@visi.com>
wrote:
>On 2009-02-15, Dennis <dennis@nowhere.net> wrote:
>> George Neuner wrote:
>>> On Fri, 13 Feb 2009 11:04:58 +0100, David Brown
>>> <david@westcontrol.removethisbit.com> wrote:
>>>
>>>> The 68K support is a different story. The 68K family has always been
>>>> 32-bit, not 16-bit. It has some 16-bit features - a 16-bit external
>>>> databus, and the original 68000 used a 16-bit wide ALU (running twice
>>>> for 32-bit operands). But these are minor implementation details,
>>>> trading speed against cost and chip size. The instruction set
>>>> architecture and basic register width are what counts - it was 32-bit
>>>> from its conception.
>>>
>>> IMO the ALU width defines the chip, but I won't debate that here.
>>
>> Hmm, the IBM System/360 Model 30 (a mid-1960s 32-bit mainframe) had an
>> 8-bit ALU. Of course that was back in the day of real microcode. And it
>> was much more than a single chip.
>
>I once used a processor which I and most other people would
>call a 16-bit processor (16-bit registers, 16-bit address
>space, 16-bit data paths). However, it was built out of a set
>of AM2901 bit-slice processors. Since each of the AM2901 ALUs
>was 4-bits wide, I guess George would say that the CPU in
>question was a 4-bit CPU.
>
>There were some pretty famous CPUs built using the AM2901
>family: DEC PDP-10, DG Nova, AN/UYK-44, and so on. All of them
>4-bit CPUs, presumably.
Ha Ha.
Bit-slicing is an implementation detail - what matters is how many
bits are being computed in parallel.
I didn't want to get deeper into this discussion, but here goes ...
The problem is to specify the chip's capability and give some
indication as to its performance (at least relative to other members
of the same family). The ISA defines the chip's programming API.
Registers generally coincide with the needs/wants of the ISA but there
are significant exceptions.
- Ex. CM-1|2, T-2|4|8. These have no programmer-visible registers.
The CMs have arbitrary-width integer instructions that take operand
and result widths as parameters.
- Ex. VAX-11 has 32-bit general registers, but has 64-bit integer ops
that use an adjacent pair of registers, and 64- and 128-bit FP ops
that use 2 or 4 adjacent registers (no dedicated FP registers).
- Ex. Am29050, like the VAX, has 32-bit general registers, but has
64-bit FP ops using an adjacent pair of registers.
Somebody ;) is now going to object that most of these architectures
are not relevant today. So what? The issue is how to describe chip
capabilities, and these examples and others show that there are issues
with using ISA and/or register width to do that.
- A number of modern chips have an N x N -> 2N bit multiply, or even
occasionally (N x N) + 2N -> 2N multiply/accumulate, producing
results bigger than a register. Some have a special 2N-bit
register for the result while others require a pair of registers
to catch the result.
AFAICS, the ALU's bit width (total combined bit width if sliced) and
its set of primitive operations are the only really objective measures
of a chip's processing capability. For microcoded and CISC-on-RISC
architectures (current i86(-64)) the ISA is a work of fiction far
removed from the machine's primitive capabilities. Visible registers
(if any) may be just convenient groupings of bits meant to coincide
with the ISA.
George
Reply by Grant Edwards ● February 16, 2009
On 2009-02-16, David Brown <david@westcontrol.removethisbit.com> wrote:
> I couldn't find any summary or overview on gcc's website - it
> seems to assume that every visitor knows what gcc is, and how
> it works. Do you know any useful links that could give a good
> summary for George (and others - he is not alone in being
> unfamiliar with gcc)? The best I could find is not exactly
> official:
>
><http://en.wikipedia.org/wiki/GNU_Compiler_Collection>
That's probably the best general overview there is. There are
a bunch of links at the bottom of the Wikipedia article that
provide more detailed bits of history.
--
Grant Edwards grante Yow! DIDI ... is that a
at MARTIAN name, or, are we
visi.com in ISRAEL?
Reply by David Brown ● February 16, 2009
Grant Edwards wrote:
> On 2009-02-14, George Neuner <gneuner2@comcast.net> wrote:
>
>>> You said that anything smaller than 32 bits had been removed
>>> from 4.0 and you gave the 68K as an example. The 68K is
>>> neither less than 32 bits nor has it been removed from 4.0.
>>>
>>>> That doesn't mean that you won't be able to find (or build)
>>>> GCC to work with your legacy chip - all the old code generator
>>>> modules are still available for download, they have just been
>>>> reclassified as "additional" and are no longer included in the
>>>> official release.
>>> What do you mean by "official release"? They're still in SVN
>>> trunk.
>> I mean that there are code generator modules available which are not
>> built into the binary distribution but which you can download and
>> compile yourself into your own customized GCC version.
>
> I've no idea what you mean.
>
> 1) There is no such thing as "the binary distribution". The
> FSF distributes source code, and the FSF sources support a
> dozen or so architectures (including one you say has been
> removed). There are some architectures that are maintained
> outside the FSF source tree (e.g. MSP430, NIOS2, M16C,
> etc.). AFAICT, all targets start out that way and once they
> get ironed out they get added to the FSF source tree. I
> remember when quite a few of the current targets that are
> in the FSF sources were external.
>
> 2) You can only build GCC for one target architecture. Gcc
> supports more than one architecture. Therefore, for any
> given build it's a tautology that "there are code generator
> modules available which are not built into the binary...
> but which you can download and compile yourself".
>
>> JeffR wrote:
>>>> CBFalconer wrote:
>>>>> George Neuner wrote:
>>>>> ... snip ...
>> This sort of argument turns up regularly - you can look up in the
>> archives if you want, rather than starting a new battle here. I'm going
>> to give a brief summary here of why the 68000 is 32-bit (and the 80188
>> is 16-bit), why your arguments here are completely wrong, and what other
>> nonsensical values have been used for the "bitness" of a processor.
>>
>> First off, "bitness" has nothing to do with performance. If you double
>> the clock frequency (all other things being equal), you double the
>> performance - that doesn't affect the "bitness". If you have a
>> bottleneck that slows a chip down, it does not affect its "bitness".
>
> "bitness" is a silly category for processors. Databus width certainly
"bitness" is often a useful categorisation of processors, but not
necessarily in the way people think (often people here say they want a
32-bit processor, when they really mean they want a fairly fast one).
> does affect performance. Does a processor with a maximum system clock rate
> of X, that requires doubling up the databus accesses because of
> "mini-bitness" sizes mean it performs the same as a different processor
> family at rate X that has the same databus "bitness" as its internal
> registers/ALU? Or even the exact same processor family; model Y of system
> clock rate X and databus of "half bitness" versus model Y2 of system clock
> rate X and databus of "full bitness". Of course not. But a better metric
> is a processor's Dhrystone results. Would you suggest that a processor
A processor's Dhrystone MIPS is a pretty poor way to measure its
performance - the performance is heavily dependent on what task you
want the processor to do, the toolset used for compilation, and
everything external (such as the memory connected to the databus). It
gives a rough indication of speed class, but nothing more.
> with a half-databus compared with its "bitness" per your definition, would
> chunk out the same Dhrystone as the exact same processor architecture,
> model 2, with the same bitness of databus? If you do, *that* is
> ridiculous. "bitness" is marketing fluff. So let us not take up any more
> comp.arch.embedded database space with this discussion thread; especially
> if the server supporting the database is running a "half bitness" processor
> ;)
Bitness is about *software* - it is about the width of data the
processor can deal with directly. The width of an external databus is a
compromise between speed and cost and physical size - it is not part of
the processor, it is not part of the processor architecture, the
instruction set architecture, or anything else in the core. It is no
more relevant to a description of the *processor* than the size of its
cache, or the device's support for SDRAM or DDR memory, or the number of
timers on the device. Sure, it affects the processing speed (although a
half-width databus device will typically run much faster than a
half-clock full-width device with the same core). But that's just
*speed* - bitness is about *functionality*.
Reply by Paul Keinanen ● February 15, 2009
On Sat, 14 Feb 2009 22:34:38 -0600, Dennis <dennis@nowhere.net> wrote:
>George Neuner wrote:
>> On Fri, 13 Feb 2009 11:04:58 +0100, David Brown
>> <david@westcontrol.removethisbit.com> wrote:
>>
>>> The 68K support is a different story. The 68K family has always been
>>> 32-bit, not 16-bit. It has some 16-bit features - a 16-bit external
>>> databus, and the original 68000 used a 16-bit wide ALU (running twice
>>> for 32-bit operands). But these are minor implementation details,
>>> trading speed against cost and chip size. The instruction set
>>> architecture and basic register width are what counts - it was 32-bit
>>> from its conception.
>>
>> IMO the ALU width defines the chip, but I won't debate that here.
>>
>
>Hmm, the IBM System/360 Model 30 (a mid-1960s 32-bit mainframe) had an
>8-bit ALU. Of course that was back in the day of real microcode. And it
>was much more than a single chip.
Also, how should the 68008 be classified, with an 8-bit external data
bus and a 32-bit instruction set? The 8088 had an 8-bit external data
bus but internally 16-bit addressing.
From the compiler code generator's point of view, these are the same as
their wider brothers; the number of external address lines might be
smaller, but this should not affect the code generator.
Paul
Reply by Grant Edwards ● February 15, 2009
On 2009-02-15, Dennis <dennis@nowhere.net> wrote:
> George Neuner wrote:
>> On Fri, 13 Feb 2009 11:04:58 +0100, David Brown
>> <david@westcontrol.removethisbit.com> wrote:
>>
>>> The 68K support is a different story. The 68K family has always been
>>> 32-bit, not 16-bit. It has some 16-bit features - a 16-bit external
>>> databus, and the original 68000 used a 16-bit wide ALU (running twice
>>> for 32-bit operands). But these are minor implementation details,
>>> trading speed against cost and chip size. The instruction set
>>> architecture and basic register width are what counts - it was 32-bit
>>> from its conception.
>>
>> IMO the ALU width defines the chip, but I won't debate that here.
>
> Hmm, the IBM System/360 Model 30 (a mid-1960s 32-bit mainframe) had an
> 8-bit ALU. Of course that was back in the day of real microcode. And it
> was much more than a single chip.
I once used a processor which I and most other people would
call a 16-bit processor (16-bit registers, 16-bit address
space, 16-bit data paths). However, it was built out of a set
of AM2901 bit-slice processors. Since each of the AM2901 ALUs
was 4-bits wide, I guess George would say that the CPU in
question was a 4-bit CPU.
There were some pretty famous CPUs built using the AM2901
family: DEC PDP-10, DG Nova, AN/UYK-44, and so on. All of them
4-bit CPUs, presumably.
--
Grant
Reply by Grant Edwards ● February 15, 2009
On 2009-02-15, Dennis <dennis@nowhere.net> wrote:
>>>> There are plenty of "less than 32-bit cpus" supported by gcc.
>>>
>>> There are legacy code generators and anyone can submit a new generator
>>> to the tool chain, but the GCC development team does not maintain
>>> unofficial targets. GCC has *never* officially supported any 8-bit
>>> device. The steering committee announced with v4.0 that no targets
>>> smaller than 32-bit will be officially supported.
>>>
>>
>> This would be news to the official gcc maintainers for the AVR (8-bit)
>> and m6811 (8-bit) and m6812 (16-bit) targets that have been part of the
>> main gcc tree for many years.
>>
> You mean for example...
> avr-gcc -v
> Using built-in specs.
> Target: avr
> Configured with: ../gcc-4.1.2/configure --prefix=/usr
> --mandir=/usr/share/man --infodir=/usr/share/info --target=avr
> --enable-languages=c,c++ --disable-nls --disable-libssp
> --with-system-zlib --enable-version-specific-runtime-libs
> Thread model: single
> gcc version 4.1.2 (Fedora 4.1.2-5.fc8)
I think George is a troll.
--
Grant
Reply by Grant Edwards ● February 15, 2009
On 2009-02-14, George Neuner <gneuner2@comcast.net> wrote:
>>You said that anything smaller than 32 bits had been removed
>>from 4.0 and you gave the 68K as an example. The 68K is
>>neither less than 32 bits nor has it been removed from 4.0.
>>
>>> That doesn't mean that you won't be able to find (or build)
>>> GCC to work with your legacy chip - all the old code generator
>>> modules are still available for download, they have just been
>>> reclassified as "additional" and are no longer included in the
>>> official release.
>>
>>What do you mean by "official release"? They're still in SVN
>>trunk.
>
> I mean that there are code generator modules available which are not
> built into the binary distribution but which you can download and
> compile yourself into your own customized GCC version.
I've no idea what you mean.
1) There is no such thing as "the binary distribution". The
FSF distributes source code, and the FSF sources support a
dozen or so architectures (including one you say has been
removed). There are some architectures that are maintained
outside the FSF source tree (e.g. MSP430, NIOS2, M16C,
etc.). AFAICT, all targets start out that way and once they
get ironed out they get added to the FSF source tree. I
remember when quite a few of the current targets that are
in the FSF sources were external.
2) You can only build GCC for one target architecture. Gcc
supports more than one architecture. Therefore, for any
given build it's a tautology that "there are code generator
modules available which are not built into the binary...
but which you can download and compile yourself".
--
Grant