EmbeddedRelated.com
Forums

problem using FILE pointer

Started by abc February 5, 2009
>JeffR wrote:
>>> CBFalconer wrote:
>>>> George Neuner wrote:
>>>> ... snip ...
>
>This sort of argument turns up regularly - you can look up in the
>archives if you want, rather than starting a new battle here. I'm going
>to give a brief summary here of why the 68000 is 32-bit (and the 80188
>is 16-bit), why your arguments here are completely wrong, and what other
>nonsensical values have been used for the "bitness" of a processor.
>
>First off, "bitness" has nothing to do with performance. If you double
>the clock frequency (all other things being equal), you double the
>performance - that doesn't affect the "bitness". If you have a
>bottleneck that slows a chip down, it does not affect its "bitness".

"bitness" is a silly category for processors. Databus width certainly
does affect performance. Does a processor with a maximum system clock
rate of X, one that must double up its databus accesses because of its
"mini-bitness", perform the same as a different processor family at rate
X whose databus "bitness" matches its internal registers/ALU? Or take
the exact same processor family: model Y at clock rate X with a
"half-bitness" databus versus model Y2 at clock rate X with a
"full-bitness" databus. Of course not.

A better metric is a processor's Dhrystone results. Would you suggest
that a processor whose databus is half its "bitness" (per your
definition) would chunk out the same Dhrystone result as the exact same
processor architecture, model 2, with a full-width databus? If you do,
*that* is ridiculous. "bitness" is marketing fluff.

So let us not take up any more comp.arch.embedded database space with
this discussion thread; especially if the server supporting the database
is running a "half-bitness" processor ;)

*********************************************
Jeff
http://www.e2atechnology.com
On Sat, 14 Feb 2009 14:26:42 +0100, David Brown
<david.brown@hesbynett.removethisbit.no> wrote:

>The 68000 had a 16-bit
>wide ALU - 32-bit operations passed through it twice (it did not, as
>another poster claimed, have 2 16-bit ALUs working in parallel).

This is only partly correct. In fact, the 68000 had 3 16-bit ALUs:
one unit for data operations, and *two* units used in parallel for
address generation. Since the 68000 could perform address and data
calculations simultaneously, certain instructions used all 3 units at
once.

I wanted to cite the Motorola architecture docs, but I wasn't able to
find them on the web. However, for a summary of the internals see
http://www.experiencefestival.com/a/Motorola_68000_-_Architecture/id/1779821

George
On Sat, 14 Feb 2009 07:50:19 -0600, Grant Edwards <grante@visi.com>
wrote:

>On 2009-02-14, George Neuner <gneuner2@comcast.net> wrote:
>> On Thu, 12 Feb 2009 23:04:19 -0600, Grant Edwards <grante@visi.com>
>> wrote:
>>
>>> On 2009-02-13, CBFalconer <cbfalconer@yahoo.com> wrote:
>>>> George Neuner wrote:
>>>>
>>> There are plenty of "less than 32-bit cpus" supported by gcc.
>>
>> There are legacy code generators and anyone can submit a new generator
>> to the tool chain, but the GCC development team does not maintain
>> unofficial targets. GCC has *never* officially supported any 8-bit
>> device. The steering committee announced with v4.0 that no targets
>> smaller than 32-bit will be officially supported.
>
>So, the target is in the official source tree and is being
>actively developed and supported, but it's still not
>"officially supported"?
>
>> They are slowly removing from the official release code
>> generators for chips which are no longer popular (you can
>> check this by comparing version manuals).
>
>You said that anything smaller than 32 bits had been removed
>from 4.0 and you gave the 68K as an example. The 68K is
>neither less than 32 bits nor has it been removed from 4.0.
>
>> That doesn't mean that you won't be able to find (or build)
>> GCC to work with your legacy chip - all the old code generator
>> modules are still available for download, they have just been
>> reclassified as "additional" and are no longer included in the
>> official release.
>
>What do you mean by "official release"? They're still in SVN
>trunk.

I mean that there are code generator modules available which are not
built into the binary distribution, but which you can download and
compile yourself into your own customized GCC version.

George
David Brown wrote:
> George Neuner wrote:
>> On Thu, 12 Feb 2009 23:04:19 -0600, Grant Edwards <grante@visi.com>
>> wrote:
>>
>>> On 2009-02-13, CBFalconer <cbfalconer@yahoo.com> wrote:
>>>> George Neuner wrote:
>>>> ... snip ...
>>>>> Even well known 16-bit chips like 8086 and 68K, which were
>>>>> supported by the official versions of GCC, have now been dropped
>>>>> from the 4.x releases. You have to use 3.x versions for them.
>>>
>>> Since when is the 68K a 16-bit chip? All the ones I've ever
>>> used had 32-bit registers.
>>
>> Depends on how you look at it. The 68K had 16-bit ALUs. It used 2
>> ALUs in parallel to work on 32-bit data. The 68020 was the first in
>> the family to have a 32-bit ALU.
>
> See my other post regarding the irrelevancy of the ALU width in
> discussing processor bitness. Also note that the 68000 had *one*
> 16-bit ALU, which was used twice for 32-bit operands - having 2 16-bit
> ALUs would be a silly idea, since a single 32-bit ALU would be far
> more efficient at almost identical cost.
>
>>> Huh? I just looked at gcc trunk at
>>> http://gcc.gnu.org/viewcvs/trunk/gcc/config/
>>>
>>> The following "non-32-bit" targets are still there:
>>>
>>> m68hc11
>>> avr (actually it's 8-bit)
>>> pdp11
>>> h8300 (some sub-types are 16-bit)
>>> stormy16
>>> picochip
>>> m68k (which _is_ a 32-bit architecture)
>>>
>>> I checked the stuff for the AVR (an 8-bit CPU), and it's got
>>> commits less than a week old.
>>>
>>>> I didn't realize that.
>>>
>>> I don't think it's true.
>>>
>>> There are plenty of "less than 32-bit cpus" supported by gcc.
>>
>> There are legacy code generators and anyone can submit a new generator
>> to the tool chain, but the GCC development team does not maintain
>> unofficial targets. GCC has *never* officially supported any 8-bit
>> device. The steering committee announced with v4.0 that no targets
>> smaller than 32-bit will be officially supported.
>
> This would be news to the official gcc maintainers for the AVR (8-bit)
> and m6811 (8-bit) and m6812 (16-bit) targets that have been part of the
> main gcc tree for many years.

You mean for example...

avr-gcc -v
Using built-in specs.
Target: avr
Configured with: ../gcc-4.1.2/configure --prefix=/usr
--mandir=/usr/share/man --infodir=/usr/share/info --target=avr
--enable-languages=c,c++ --disable-nls --disable-libssp
--with-system-zlib --enable-version-specific-runtime-libs
Thread model: single
gcc version 4.1.2 (Fedora 4.1.2-5.fc8)
George Neuner wrote:
> On Fri, 13 Feb 2009 11:04:58 +0100, David Brown
> <david@westcontrol.removethisbit.com> wrote:
>
>> The 68K support is a different story. The 68K family has always been
>> 32-bit, not 16-bit. It has some 16-bit features - a 16-bit external
>> databus, and the original 68000 used a 16-bit wide ALU (running twice
>> for 32-bit operands). But these are minor implementation details,
>> trading speed against cost and chip size. The instruction set
>> architecture and basic register width are what counts - it was 32-bit
>> from its conception.
>
> IMO the ALU width defines the chip, but I won't debate that here.
Hmm. The IBM System/360 Model 30 (a mid-1960s 32-bit mainframe) had an
8-bit ALU. Of course, that was back in the day of real microcode. And
it was much more than a single chip.
On 2009-02-14, George Neuner <gneuner2@comcast.net> wrote:

>>You said that anything smaller than 32 bits had been removed
>>from 4.0 and you gave the 68K as an example. The 68K is
>>neither less than 32 bits nor has it been removed from 4.0.
>>
>>> That doesn't mean that you won't be able to find (or build)
>>> GCC to work with your legacy chip - all the old code generator
>>> modules are still available for download, they have just been
>>> reclassified as "additional" and are no longer included in the
>>> official release.
>>
>>What do you mean by "official release"? They're still in SVN
>>trunk.
>
> I mean that there are code generator modules available which are not
> built into the binary distribution but which you can download and
> compile yourself into your own customized GCC version.

I've no idea what you mean.

1) There is no such thing as "the binary distribution". The FSF
distributes source code, and the FSF sources support a dozen or so
architectures (including one you say has been removed). There are some
architectures that are maintained outside the FSF source tree (e.g.
MSP430, NIOS2, M16C, etc.). AFAICT, all targets start out that way,
and once they get ironed out they get added to the FSF source tree. I
remember when quite a few of the current targets in the FSF sources
were external.

2) You can only build GCC for one target architecture. GCC supports
more than one architecture. Therefore, for any given build it's a
tautology that "there are code generator modules available which are
not built into the binary... but which you can download and compile
yourself".

-- 
Grant
On 2009-02-15, Dennis <dennis@nowhere.net> wrote:

>>>> There are plenty of "less than 32-bit cpus" supported by gcc.
>>>
>>> There are legacy code generators and anyone can submit a new generator
>>> to the tool chain, but the GCC development team does not maintain
>>> unofficial targets. GCC has *never* officially supported any 8-bit
>>> device. The steering committee announced with v4.0 that no targets
>>> smaller than 32-bit will be officially supported.
>>
>> This would be news to the official gcc maintainers for the AVR (8-bit)
>> and m6811 (8-bit) and m6812 (16-bit) targets that have been part of the
>> main gcc tree for many years.
>
> You mean for example...
>
> avr-gcc -v
> Using built-in specs.
> Target: avr
> Configured with: ../gcc-4.1.2/configure --prefix=/usr
> --mandir=/usr/share/man --infodir=/usr/share/info --target=avr
> --enable-languages=c,c++ --disable-nls --disable-libssp
> --with-system-zlib --enable-version-specific-runtime-libs
> Thread model: single
> gcc version 4.1.2 (Fedora 4.1.2-5.fc8)

I think George is a troll.

-- 
Grant
On 2009-02-15, Dennis <dennis@nowhere.net> wrote:
> George Neuner wrote:
>> On Fri, 13 Feb 2009 11:04:58 +0100, David Brown
>> <david@westcontrol.removethisbit.com> wrote:
>>
>>> The 68K support is a different story. The 68K family has always been
>>> 32-bit, not 16-bit. It has some 16-bit features - a 16-bit external
>>> databus, and the original 68000 used a 16-bit wide ALU (running twice
>>> for 32-bit operands). But these are minor implementation details,
>>> trading speed against cost and chip size. The instruction set
>>> architecture and basic register width are what counts - it was 32-bit
>>> from its conception.
>>
>> IMO the ALU width defines the chip, but I won't debate that here.
>
> Hmm The IBM System/360 Model 30 (mid 1960's 32 bit mainframe) had an 8
> bit ALU. Of course that was back in the day of real microcode. And it
> was much more than a single chip.

I once used a processor which I and most other people would call a
16-bit processor (16-bit registers, 16-bit address space, 16-bit data
paths). However, it was built out of a set of AM2901 bit-slice
processors. Since each of the AM2901 ALUs was 4 bits wide, I guess
George would say that the CPU in question was a 4-bit CPU.

There were some pretty famous CPUs built using the AM2901 family: the
DEC PDP-10, DG Nova, AN/UYK-44, and so on. All of them 4-bit CPUs,
presumably.

-- 
Grant
On Sat, 14 Feb 2009 22:34:38 -0600, Dennis <dennis@nowhere.net> wrote:

>George Neuner wrote:
>> On Fri, 13 Feb 2009 11:04:58 +0100, David Brown
>> <david@westcontrol.removethisbit.com> wrote:
>>
>>> The 68K support is a different story. The 68K family has always been
>>> 32-bit, not 16-bit. It has some 16-bit features - a 16-bit external
>>> databus, and the original 68000 used a 16-bit wide ALU (running twice
>>> for 32-bit operands). But these are minor implementation details,
>>> trading speed against cost and chip size. The instruction set
>>> architecture and basic register width are what counts - it was 32-bit
>>> from its conception.
>>
>> IMO the ALU width defines the chip, but I won't debate that here.
>
>Hmm The IBM System/360 Model 30 (mid 1960's 32 bit mainframe) had an 8
>bit ALU. Of course that was back in the day of real microcode. And it
>was much more than a single chip.
Also, how should the 68008 be classified, with its 8-bit external data
bus and 32-bit instruction set? The 8088 had an 8-bit external data bus
but was internally 16-bit.

From the compiler code generator's point of view, these are the same as
their wider brothers; the number of external address lines might be
smaller, but that should not affect the code generator.

Paul
On Sat, 14 Feb 2009 22:34:38 -0600, Dennis <dennis@nowhere.net> wrote:

>George Neuner wrote:
>> On Fri, 13 Feb 2009 11:04:58 +0100, David Brown
>> <david@westcontrol.removethisbit.com> wrote:
>>
>>> The 68K support is a different story. The 68K family has always been
>>> 32-bit, not 16-bit. It has some 16-bit features - a 16-bit external
>>> databus, and the original 68000 used a 16-bit wide ALU (running twice
>>> for 32-bit operands). But these are minor implementation details,
>>> trading speed against cost and chip size. The instruction set
>>> architecture and basic register width are what counts - it was 32-bit
>>> from its conception.
>>
>> IMO the ALU width defines the chip, but I won't debate that here.
>
>Hmm The IBM System/360 Model 30 (mid 1960's 32 bit mainframe) had an 8
>bit ALU. Of course that was back in the day of real microcode. And it
>was much more than a single chip.

And an 8-bit path to memory, IIRC.

-- 
ArarghMail902 at [drop the 'http://www.' from ->] http://www.arargh.com
BCET Basic Compiler Page: http://www.arargh.com/basic/index.html
To reply by email, remove the extra stuff from the reply address.