
problem using FILE pointer

Started by abc February 5, 2009
> JeffR wrote:
>> cbfalconer wrote:
>>> abc wrote:
>>> ... snip ...
>>>> FILE *fp;
>>>>
>>>> fp=fopen("C:\Documents and Settings\btp\Desktop\pertest\ecg.txt","r");
>>> ^___ a backslash is an escape char.  Use / or \\.
>>>>
>>>> while(fp!=EOF){
>> ^.... while( !feof(fp) )
>
> A bad suggestion.  feof(fp) only signals that an EOF has been
> detected.  Before that the following fread (or getc etc.) statement
> can read invalid data.  All file reading calls signal when they
> encounter EOF.
No, this is the correct suggestion, but the fscanf return value needs
to be checked.  The problem is that you cannot blindly call fscanf
without checking the return.  It should be:

if ( fscanf(fp,"%d",&ecg[j]) != EOF )
{
    ++j;  /* this only works for a byte array; %d will push ints into
             the array; if ints are desired it should be j += 2 */
}

Then the !feof check will cause the loop to terminate.  All is then
good ... well, sort of.

Another problem with this code is that the array is declared uint8,
yet the fscanf format specifier is "%d".  The %d will fetch enough
characters to fill the standard word size of the processor represented
by an int (e.g. 2 bytes), yet the declaration is a uint8.  This will
overflow the buffer when the last value is read and placed at the end
of the array on most processor architectures, and cause an exception
at runtime (if you're lucky!), unless the compiler is smart enough to
complain - but I doubt it, since it's a format specifier.  The
declaration should be (and the static initializer is a bad idea; if
this is an embedded system, all that needs to be done is to force BSS
to be initialized to zero, then remove the {0}):

unsigned int ecg[ecg_size];

and ecg_size set to this if bytes are desired:

#define ecg_size (1250 / sizeof(int))

The writer of this code really ought to spend some more time on it
before posting the code here.  There are many problems with it as it
was posted.
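For reference, here is a minimal sketch of the complete read loop with
those fixes folded in.  The file path and the 1250-value size are taken
from the original post; checking fscanf's return against 1 (the number
of items converted) rather than against EOF is my own tweak, since it
also catches malformed input, not just end of file:

#include <stdio.h>

#define ECG_SIZE 1250

int main(void)
{
    static int ecg[ECG_SIZE];   /* int matches the %d specifier */
    size_t j = 0;
    FILE *fp = fopen("C:/Documents and Settings/btp/Desktop/pertest/ecg.txt", "r");

    if (fp == NULL) {
        perror("fopen");        /* a failed fopen, not EOF, is the first thing to test */
        return 1;
    }

    /* fscanf returns the number of items converted; anything other
       than 1 means EOF or a scan failure, so stop either way. */
    while (j < ECG_SIZE && fscanf(fp, "%d", &ecg[j]) == 1)
        ++j;

    fclose(fp);
    printf("read %u values\n", (unsigned)j);
    return 0;
}

Note that feof() never needs to be called at all here - the return
value of fscanf tells you everything.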
JeffR wrote:
> ... snip ...
>
> and ecg_size set to this if bytes are desired:
>
> #define ecg_size (1250 / sizeof(int))
>
> ... snip ...
---> and ecg_size set to this if bytes are desired:

I meant: and ecg_size set to this if ints are desired:

(1250 * sizeof(int))

and the increment of the index j should be:

j += sizeof(int);
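If the byte-buffer interpretation is really what's wanted, here is a
rough sketch of how the corrected sizing and index step fit together.
The read_ints_into_bytes name and the memcpy are mine, purely for
illustration - memcpy avoids a misaligned store when an int is written
at an arbitrary byte offset:

#include <stdio.h>
#include <string.h>

#define N_VALUES 1250
#define ecg_size (N_VALUES * sizeof(int))   /* size in bytes, as corrected above */

static unsigned char ecg[ecg_size];

size_t read_ints_into_bytes(FILE *fp)
{
    size_t j = 0;
    int v;

    while (j + sizeof(int) <= ecg_size && fscanf(fp, "%d", &v) == 1) {
        memcpy(&ecg[j], &v, sizeof v);  /* copy the int into the byte buffer */
        j += sizeof(int);               /* step by the int width, as above */
    }
    return j;   /* number of bytes filled */
}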
> CBFalconer wrote:
>> George Neuner wrote:
>> ... snip ...
> The 68K support is a different story.  The 68K family has always been
> 32-bit, not 16-bit.  It has some 16-bit features - a 16-bit external
> databus, and the original 68000 used a 16-bit wide ALU (running twice
> for 32-bit operands).  But these are minor implementation details,
> trading speed against cost and chip size.  The instruction set
> architecture and basic register width are what counts - it was 32-bit
> from its conception.
No no ... 16-bit databus processors suffer in performance and are *not*
minor details.  The fact that internal registers were 32 bits and the
ALU was 32 bits doesn't mean the processor *is* 32 bits.  This is
marketing fluff.  Doubling up the databus accesses to move 32 bits of
data in 16-bit chunks is a big deal, especially if the processor has no
cache.  That's no different than saying the 80188 was a 16-bit
processor.  It certainly was not.  It had an 8-bit databus that was
double-cycled for 16-bit accesses.
On Thu, 12 Feb 2009 23:04:19 -0600, Grant Edwards <grante@visi.com>
wrote:

> On 2009-02-13, CBFalconer <cbfalconer@yahoo.com> wrote:
>> George Neuner wrote:
>>> ... snip ...
>>>
>>> Even well known 16-bit chips like 8086 and 68K, which were
>>> supported by the official versions of GCC, have now been dropped
>>> from the 4.x releases.  You have to use 3.x versions for them.
>
> Since when is the 68K a 16-bit chip?  All the ones I've ever
> used had 32-bit registers.
Depends on how you look at it. The 68K had 16-bit ALUs. It used 2 ALUs in parallel to work on 32-bit data. The 68020 was the first in the family to have a 32-bit ALU.
> Huh?  I just looked at gcc trunk at
> http://gcc.gnu.org/viewcvs/trunk/gcc/config/
>
> The following "non-32-bit" targets are still there:
>
>   m68hc11
>   avr       (actually it's 8-bit)
>   pdp11
>   h8300     (some sub-types are 16-bit)
>   stormy16
>   picochip
>   m68k      (which _is_ a 32-bit architecture)
>
> I checked the stuff for the AVR (an 8-bit CPU), and it's got
> commits less than a week old.
>
>> I didn't realize that.
>
> I don't think it's true.
>
> There are plenty of "less than 32-bit cpus" supported by gcc.
There are legacy code generators and anyone can submit a new generator
to the tool chain, but the GCC development team does not maintain
unofficial targets.  GCC has *never* officially supported any 8-bit
device.  The steering committee announced with v4.0 that no targets
smaller than 32-bit will be officially supported.

They are slowly removing from the official release code generators for
chips which are no longer popular (you can check this by comparing
version manuals).  That doesn't mean that you won't be able to find
(or build) GCC to work with your legacy chip - all the old code
generator modules are still available for download, they have just
been reclassified as "additional" and are no longer included in the
official release.

68K won't be dropped quickly (if ever) because it is a subset of chips
which are still officially supported.  But if your code doesn't work
on a 68K, there's no one to complain to.

George
On Fri, 13 Feb 2009 11:04:58 +0100, David Brown
<david@westcontrol.removethisbit.com> wrote:

> The 68K support is a different story.  The 68K family has always been
> 32-bit, not 16-bit.  It has some 16-bit features - a 16-bit external
> databus, and the original 68000 used a 16-bit wide ALU (running twice
> for 32-bit operands).  But these are minor implementation details,
> trading speed against cost and chip size.  The instruction set
> architecture and basic register width are what counts - it was 32-bit
> from its conception.
IMO the ALU width defines the chip, but I won't debate that here.
> The first target for the first version of gcc was the 68k, and the 68k
> family (now as ColdFires) is still a major target that is actively
> developed and improved.  Most of the improvements are through generic
> gcc changes or ColdFire-specific changes, but they still affect
> compilation for the 68k.
ColdFire is only partly 68K compatible - it has the same instruction
set, but not all instructions are implemented - in particular there are
fewer addressing modes - so there is a major impact on compilation.
ColdFires are more memory bound than real 68Ks and require higher clock
speeds and bigger caches to get equivalent performance where the 68K
could use more complex addressing modes.  And v4, v5 ColdFires have an
incompatible FPU, so you'll get different answers than you would if you
run the code on an '040 or '060, or on an earlier chip with a 68881/2
coprocessor.

George
JeffR wrote:
>> CBFalconer wrote:
>>> George Neuner wrote:
>>> ... snip ...
>> The 68K support is a different story.  The 68K family has always been
>> 32-bit, not 16-bit.  It has some 16-bit features - a 16-bit external
>> databus, and the original 68000 used a 16-bit wide ALU (running twice
>> for 32-bit operands).  But these are minor implementation details,
>> trading speed against cost and chip size.  The instruction set
>> architecture and basic register width are what counts - it was 32-bit
>> from its conception.
>
> No no ... 16-bit databus processors suffer in performance and are *not*
> minor details.  The fact that internal registers were 32 bits and the
> ALU was 32 bits doesn't mean the processor *is* 32 bits.  This is
> marketing fluff.  Doubling up the databus accesses to move 32 bits of
> data in 16-bit chunks is a big deal, especially if the processor has no
> cache.  That's no different than saying the 80188 was a 16-bit
> processor.  It certainly was not.  It had an 8-bit databus that was
> double-cycled for 16-bit accesses.
This sort of argument turns up regularly - you can look it up in the
archives if you want, rather than starting a new battle here.  I'm going
to give a brief summary here of why the 68000 is 32-bit (and the 80188
is 16-bit), why your arguments here are completely wrong, and what other
nonsensical values have been used for the "bitness" of a processor.

First off, "bitness" has nothing to do with performance.  If you double
the clock frequency (all other things being equal), you double the
performance - that doesn't affect the "bitness".  If you have a
bottleneck that slows a chip down, it does not affect its "bitness".
In particular, the width of a particular chip's external databus bears
no direct relationship to the "bitness" of the processor.  The processor
is *part* of a device, it is not the whole device - just as it is not
the whole system, but part of the system.  A 32-bit device can have a
16-bit databus (an example would be the 68332 - the core is virtually
identical to the core of the 68020, but the external databus is 16-bit).
A 32-bit processor with a 32-bit databus can be connected to an 8-bit
memory.  A 32-bit processor can have a 64-bit or 128-bit wide databus
(not uncommon on high-end processors).  A microcontroller with internal
flash may have no external databus at all.  Thus the databus width is
irrelevant when discussing the "bitness" of a processor.

ALUs are more integral to a processor core, so let's consider them.
There are plenty of processors (mostly older or specialised designs)
that have very narrow ALUs - the COP8, for example, has a 1-bit wide
serial ALU.  Yet the processor itself is 8-bit.  The 68000 had a 16-bit
wide ALU - 32-bit operations passed through it twice (it did not, as
another poster claimed, have 2 16-bit ALUs working in parallel).
High-end processors have multiple ALUs - that does not make their
"bitness" larger.  They also have extra-wide ALUs for SIMD or vector
instructions - again, this does not give them a higher "bitness".

Some people (in particular, Microchip's marketing folk) like to refer
to the width of the internal flash as the chip's "bitness".  This is,
of course, even less relevant than the width of the external databus,
since it is only concerned with code and not data.  Again, there are
plenty of examples of microcontrollers with 32-bit cores connected to
16-bit internal flash, 32-bit internal flash, and 64-bit internal flash.

There are only two features that can be considered useful, consistent
and realistic measures of the "bitness" of a processor - the maximum
width of data that can be handled by most general instructions, and the
width of the main general purpose register(s).  These are almost always
the same (I can't think of any counter-examples off the top of my head).
For most processors, this also corresponds to the width of a C "int"
(except for 8-bit processors, since a C "int" must be at least 16-bit,
and for 64-bit processors, since many C models use a 32-bit "int").
This definition of "bitness" is the size that is relevant for software
running on the processor, and is key to its ISA (instruction set
architecture).  Any other widths on the device are almost entirely
irrelevant to software (exceptions noted below) - they may affect the
speed of the device, but not its functionality (and thus are as
irrelevant as the clock speed in discussions of "bitness").  Binary
code for the 8086, using 16-bit data and 16-bit instructions, will run
perfectly well on the 80188 - they have the same ISA, and are both
16-bit.
Binary code for the 80386SX and the 80386DX is identical, and both are
32-bit processors despite the SX having a 16-bit databus.

All the 68k family, including the 68000 and its descendants the 68020
through to the 68060, the embedded devices like the 68332, and all the
ColdFire devices, are 32-bit.  This is because their data registers are
32 bits wide, and the ISA supports ALU and general purpose instructions
up to 32 bits wide.  The fact that some devices are faster at handling
16-bit data than 32-bit data is irrelevant, just as is the fact that
some devices have special instructions for working with 64-bit data (or
in fact a multiple-move of up to 512 bits at a time).  This is also
easy to see from compilers - any C toolchain for the 68k devices will
support all the original 68k devices and the ColdFires (at least, those
available when the compiler was written!), all using 32-bit "ints", and
all able to generate binary code that will run unmodified on all
devices.

As I mentioned above, there is at least one point at which the width of
external databuses may be visible from software - the maximum width of
atomic accesses (including locked accesses or read-modify-write
accesses) may be affected by the databus width.  But that is always
somewhat system-dependent.

There are also some processors that are not easily categorised.  The
Z80 is a prime example - it has an 8-bit accumulator, and many
instructions are thus limited to 8-bit.  But it also has 16-bit
register pairs, and many general and ALU instructions can operate
directly on these.  The Z80 is probably best referred to as an
8/16-bit hybrid.  There are also many DSP architectures that do not
have a simple "bitness" width.
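To make that concrete, here is a small sketch of what "bitness"
actually looks like from software.  Built with a 68k-family compiler
(for instance an m68k-elf gcc), it reports a 32-bit int on every member
of the family - whether the part has a 16-bit external bus (68000,
68332) or a 32-bit one (68020 and later) is simply not visible here:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* The "bitness" that matters to code: the width of int and of the
       general registers, not the width of the external databus. */
    printf("CHAR_BIT           : %d\n", CHAR_BIT);
    printf("int width in bits  : %u\n", (unsigned)(sizeof(int) * CHAR_BIT));
    printf("long width in bits : %u\n", (unsigned)(sizeof(long) * CHAR_BIT));
    return 0;
}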
George Neuner wrote:
> On Thu, 12 Feb 2009 23:04:19 -0600, Grant Edwards <grante@visi.com>
> wrote:
>
>> On 2009-02-13, CBFalconer <cbfalconer@yahoo.com> wrote:
>>> George Neuner wrote:
>>> ... snip ...
>>>> Even well known 16-bit chips like 8086 and 68K, which were
>>>> supported by the official versions of GCC, have now been dropped
>>>> from the 4.x releases.  You have to use 3.x versions for them.
>>
>> Since when is the 68K a 16-bit chip?  All the ones I've ever
>> used had 32-bit registers.
>
> Depends on how you look at it.  The 68K had 16-bit ALUs.  It used 2
> ALUs in parallel to work on 32-bit data.  The 68020 was the first in
> the family to have a 32-bit ALU.
See my other post regarding the irrelevancy of the ALU width in discussing processor bitness. Also note that the 68000 had *one* 16-bit ALU, which was used twice for 32-bit operands - having 2 16-bit ALUs would be a silly idea, since a single 32-bit ALU would be far more efficient at almost identical cost.
>> Huh?  I just looked at gcc trunk at
>> http://gcc.gnu.org/viewcvs/trunk/gcc/config/
>>
>> The following "non-32-bit" targets are still there:
>>
>>   m68hc11
>>   avr       (actually it's 8-bit)
>>   pdp11
>>   h8300     (some sub-types are 16-bit)
>>   stormy16
>>   picochip
>>   m68k      (which _is_ a 32-bit architecture)
>>
>> I checked the stuff for the AVR (an 8-bit CPU), and it's got
>> commits less than a week old.
>>
>>> I didn't realize that.
>>
>> I don't think it's true.
>>
>> There are plenty of "less than 32-bit cpus" supported by gcc.
>
> There are legacy code generators and anyone can submit a new generator
> to the tool chain, but the GCC development team does not maintain
> unofficial targets.  GCC has *never* officially supported any 8-bit
> device.  The steering committee announced with v4.0 that no targets
> smaller than 32-bit will be officially supported.
This would be news to the official gcc maintainers for the AVR (8-bit) and m6811 (8-bit) and m6812 (16-bit) targets that have been part of the main gcc tree for many years.
> They are slowly removing from the official release code generators for
> chips which are no longer popular (you can check this by comparing
> version manuals).  That doesn't mean that you won't be able to find
> (or build) GCC to work with your legacy chip - all the old code
> generator modules are still available for download, they have just
> been reclassified as "additional" and are no longer included in the
> official release.
True.
> 68K won't be dropped quickly (if ever) because it is a subset of chips
> which are still officially supported.  But if your code doesn't work
> on a 68K, there's no one to complain to.
Yes there is - the 68k family, which covers the original 68xxx
processors and the ColdFire devices, is fully supported and actively
developed by the gcc maintainers.  Like other gcc targets, if you have
support questions you can ask on the main gcc mailing lists, or on
target-specific mailing lists, or contact the official maintainers, or
get support through third parties who provide support contracts.  In
this case, it is CodeSourcery who are the official maintainers of the
ColdFire (and ARM, and various other) gcc targets, and they provide
support ranging from free mailing lists to expensive but unlimited
professional support contracts.
On 2009-02-14, George Neuner <gneuner2@comcast.net> wrote:
> On Thu, 12 Feb 2009 23:04:19 -0600, Grant Edwards <grante@visi.com>
> wrote:
>
>> On 2009-02-13, CBFalconer <cbfalconer@yahoo.com> wrote:
>>> George Neuner wrote:
>>>> ... snip ...
>>>>
>>>> Even well known 16-bit chips like 8086 and 68K, which were
>>>> supported by the official versions of GCC, have now been dropped
>>>> from the 4.x releases.  You have to use 3.x versions for them.
>>
>> Since when is the 68K a 16-bit chip?  All the ones I've ever
>> used had 32-bit registers.
>
> Depends on how you look at it.
I look at it as the width of registers and the natural "width" of
assembly instruction operations.  We're talking about this in the
context of compiler support, and that's what compilers care about.
> The 68K had 16-bit ALUs.
I don't care. Neither does gcc.
> It used 2 ALUs in parallel to work on 32-bit data.
I don't care. Neither does gcc.
> The 68020 was the first in the family to have a 32-bit ALU.
I don't care. Neither does gcc.
>> Huh?  I just looked at gcc trunk at
>> http://gcc.gnu.org/viewcvs/trunk/gcc/config/
>>
>> The following "non-32-bit" targets are still there:
>>
>>   m68hc11
>>   avr       (actually it's 8-bit)
>>   pdp11
>>   h8300     (some sub-types are 16-bit)
>>   stormy16
>>   picochip
>>   m68k      (which _is_ a 32-bit architecture)
>>
>> I checked the stuff for the AVR (an 8-bit CPU), and it's got
>> commits less than a week old.
>>
>>> I didn't realize that.
>>
>> I don't think it's true.
>>
>> There are plenty of "less than 32-bit cpus" supported by gcc.
>
> There are legacy code generators and anyone can submit a new generator
> to the tool chain, but the GCC development team does not maintain
> unofficial targets.  GCC has *never* officially supported any 8-bit
> device.  The steering committee announced with v4.0 that no targets
> smaller than 32-bit will be officially supported.
So, the target is in the official source tree and is being actively
developed and supported, but it's still not "officially supported"?
> They are slowly removing from the official release code
> generators for chips which are no longer popular (you can
> check this by comparing version manuals).
You said that anything smaller than 32 bits had been removed from 4.0,
and you gave the 68K as an example.  The 68K is neither less than 32
bits nor has it been removed from 4.0.
> That doesn't mean that you won't be able to find (or build)
> GCC to work with your legacy chip - all the old code generator
> modules are still available for download, they have just been
> reclassified as "additional" and are no longer included in the
> official release.
What do you mean by "official release"? They're still in SVN trunk.
> 68K won't be dropped quickly (if ever) because it is a subset
> of chips which are still officially supported.  But if your
> code doesn't work on a 68K, there's no one to complain to.
-- Grant
George Neuner wrote:
> On Fri, 13 Feb 2009 11:04:58 +0100, David Brown
> <david@westcontrol.removethisbit.com> wrote:
>
>> The 68K support is a different story.  The 68K family has always been
>> 32-bit, not 16-bit.  It has some 16-bit features - a 16-bit external
>> databus, and the original 68000 used a 16-bit wide ALU (running twice
>> for 32-bit operands).  But these are minor implementation details,
>> trading speed against cost and chip size.  The instruction set
>> architecture and basic register width are what counts - it was 32-bit
>> from its conception.
>
> IMO the ALU width defines the chip, but I won't debate that here.
IMO you are completely wrong here - the width of the ALU *implementation* is just an implementation detail. The width of data that the processor can pass through the ALU in an instruction, on the other hand, is fundamental to the ISA - and thus defines the bitness of the processor. So an ALU that happens to be physically implemented as 16-bit for cost reasons, but which works transparently with 32-bit data from 32-bit registers, is a 32-bit ALU in a 32-bit processor.
>> The first target for the first version of gcc was the 68k, and the
>> 68k family (now as ColdFires) is still a major target that is
>> actively developed and improved.  Most of the improvements are
>> through generic gcc changes or ColdFire-specific changes, but they
>> still affect compilation for the 68k.
>
> ColdFire is only partly 68K compatible - it has the same instruction
> set, but not all instructions are implemented - in particular there
> are fewer addressing modes - so there is a major impact on
> compilation.  ColdFires are more memory bound than real 68Ks and
> require higher clock speeds and bigger caches to get equivalent
> performance where the 68K could use more complex addressing modes.
> And v4, v5 ColdFires have an incompatible FPU, so you'll get different
> answers than you would if you run the code on an '040 or '060, or on
> an earlier chip with a 68881/2 coprocessor.
That is only sort of true.  When designing the ColdFire, Freescale (I
can't remember if they were still "Motorola" at the time) looked at the
68k ISA, dropped some parts that were rarely used but cost a great deal
to implement, and then designed a completely new implementation of the
same ISA using a modern design.  One of the features that got dropped
was the more complex addressing modes - most of which were not much
used, and many of which had already been dropped on newer 680x0 devices
such as the 68040 and the 68060.

The missing addressing modes do not have a "major" impact on
compilation, though of course it only takes a single unimplemented
instruction to lose binary compatibility.  The complex addressing modes
were used in only certain types of code (in particular, complex data
structures and array and pointer manipulation).  They were also falling
into disuse before the ColdFire - some had been dropped from the later
680x0 devices, and others were little used even though they were
implemented, since code for the 68040 and 68060 was often faster if
these modes were avoided (they caused pipeline stalls, and hindered the
compiler from re-ordering instructions to reduce latencies).

The ColdFires *are* more memory bound than the original 68k devices,
but that is mainly because they execute far more instructions per clock
cycle!  The difference due to compact complex addressing modes being
split into several smaller (but much faster) instructions, and
therefore requiring more code memory bandwidth, is tiny.  I'd be very
surprised if code size increased more than a couple of percent between
a ColdFire-optimised compilation and a 680x0-optimised compilation.

There are certainly plenty of instructions that exist on some 680x0
devices and not on the ColdFires, and vice versa, and also when
comparing the different devices in each family (the 68040 has
instructions that are not in the 68020, and vice versa).  But these are
minor points - the main ISA is the same across all the devices, and it
is not hard for a compiler to generate code that will run on all of
them (though less efficiently than if it can use extra instructions).
On 2009-02-14, George Neuner <gneuner2@comcast.net> wrote:
> On Thu, 12 Feb 2009 23:04:19 -0600, Grant Edwards <grante@visi.com>
> wrote:
>
>> On 2009-02-13, CBFalconer <cbfalconer@yahoo.com> wrote:
>>> George Neuner wrote:
>>>> ... snip ...
>>>>
>>>> Even well known 16-bit chips like 8086 and 68K, which were
>>>> supported by the official versions of GCC, have now been dropped
>>>> from the 4.x releases.  You have to use 3.x versions for them.
>
> ... snip ...
>
>> There are plenty of "less than 32-bit cpus" supported by gcc.
>
> There are legacy code generators and anyone can submit a new
> generator to the tool chain, but the GCC development team does
> not maintain unofficial targets.  GCC has *never* officially
> supported any 8-bit device.  The steering committee announced
> with v4.0 that no targets smaller than 32-bit will be
> officially supported.
They're in SVN trunk and freshly downloaded 4.3.3 source tarballs.
They're being actively maintained.  What exactly is meant by "dropped
from 4.x releases" and "not officially supported"?  If they're still in
SVN trunk and 4.x source tarballs and are being actively maintained,
why should anybody care whether they've been "dropped from 4.x
releases" and "aren't officially supported"?
> They are slowly removing from the official release code
> generators for chips which are no longer popular
I've got no problems with that. If nobody steps up to maintain something, then it goes away. I just can't figure out why you say that targets like the m68k "have now been dropped" and are "not supported" in 4.x, and why you "have to use 3.x versions for them."
> (you can check this by comparing version manuals).  That
> doesn't mean that you won't be able to find (or build) GCC to
> work with your legacy chip - all the old code generator
> modules are still available for download, they have just been
> reclassified as "additional" and are no longer included in the
> official release.
I give up. What does "no longer included in the official release" mean?
> 68K won't be dropped quickly (if ever) because it is a subset
> of chips which are still officially supported.  But if your
> code doesn't work on a 68K, there's no one to complain to.
I see. m68K support "have been dropped from the 4.x release" but "won't be dropped quickly (if ever)". -- Grant