Reply by David Brown September 29, 2009
ChrisQ wrote:
> David Brown wrote:
>
>> I think the same thing applies to most languages. I guess we can
>> blame this idiosyncrasy (or rather, idiocy) of C on the slow Dec
>> Writer keyboards - K&R were more concerned about avoiding keystrokes
>> than on making a good structured modular programming language. The
>> default "int" is in the same category.
>
> If all you have is an int, everything looks like a hammer, etc. - or
> have I mixed that up a bit?
>
> C may not be perfect, but it's kept me in lunch for more years than I
> care to remember. I still enjoy the challenge and am still learning new
> stuff every day. What more could you ask of a way of earning a living,
> with half the world in employment slavery?
To paraphrase Churchill - C is the worst of all possible programming languages, except for all the others.
> Everything is a compromise and C was originally designed in the days
> before computing as it is today. Code generated from it drives a large
> proportion of the world's engineering, apps, medicine, leisure and more,
> so perhaps they didn't do such a bad job...
When C was designed, there were plenty of other languages that were far safer, better structured, and in many ways more powerful - Algol and Pascal being the obvious examples. There are a number of points where C could have been much better, for very little cost - avoiding implicit ints and making file-scope objects static are clear examples. The lack of a proper interface-implementation separation is perhaps the biggest failing - people /still/ can't agree on a sensible style of how to name headers and C files, and what should go in each file. I suppose C++ shows that C is not as bad as it is possible to get.

It's fair enough to say that C's limitations and design faults are because K&R were writing it for a specific use, and it worked fine for that job. And it's also fair enough to say that any design is a compromise. But when C was designed, other current languages were significantly more "modern" in their structure and safety - C was a big step back in those areas.
Reply by Ulf Samuelsson September 29, 2009
>>> Beyond that, avr-gcc pushes registers if they are needed - pretty
>>> much like any other compiler I have used. If your interrupt function
>>> calls an external function, and you are not using whole-program
>>> optimisation, then this means pushing all ABI "volatile" registers -
>>> an additional 12 registers. Again, this is the same as for any other
>>> compiler I have seen. And as with any other compiler, you avoid the
>>> overhead by keeping your interrupt functions small and avoiding
>>> external function calls, or by using whole-program optimisations.
>>>
>>>> The IAR is simply - better - .
>>>
>>> I'll not argue with you about IAR producing somewhat smaller or
>>> faster code than avr-gcc. I have only very limited experience with
>>> IAR, so I can't judge properly. But then, you apparently have very
>>> little experience with avr-gcc -
>>
>> I don't disagree with that.
>> I have both, but I quickly scurry back to the IAR compiler
>> if I need to show off the AVR.
>
> You have colleagues at Atmel who put a great deal of time and effort
> into avr-gcc. You might want to talk to them about how to get the best
> out of avr-gcc - that way you can offer your customers a wider choice.
> Different tools are better for different users and different projects -
> your aim is that customers have the best tools for their use, and know
> how to get the best from those tools, so that they will get the best out
> of your devices.
>
> On the other hand, I fully understand that no one has the time to learn
> about all the tools available, and you have to concentrate on particular
> choices. It's fair enough to tell people how wonderful IAR and the AVR
> go together - but it is not fair enough to tell people that avr-gcc is a
> poor choice without better technical justification.
There is a difference between saying that one compiler is better than the other, and saying that the second-rate compiler is a poor choice.

Statistically, the avr-gcc compiler is used by more than 50% of AVR developers, and for many of them, "free of charge" is a much more important parameter than efficient code generation. Others would like a compiler which is not dongle-protected, due to bad experience with the vendor, and that is another reason to go with avr-gcc instead of IAR.

BR
Ulf Samuelsson
Reply by ChrisQ September 29, 2009
David Brown wrote:

> I think the same thing applies to most languages. I guess we can blame
> this idiosyncrasy (or rather, idiocy) of C on the slow Dec Writer
> keyboards - K&R were more concerned about avoiding keystrokes than on
> making a good structured modular programming language. The default
> "int" is in the same category.
If all you have is an int, everything looks like a hammer, etc. - or have I mixed that up a bit?

C may not be perfect, but it's kept me in lunch for more years than I care to remember. I still enjoy the challenge and am still learning new stuff every day. What more could you ask of a way of earning a living, with half the world in employment slavery?

Everything is a compromise and C was originally designed in the days before computing as it is today. Code generated from it drives a large proportion of the world's engineering, apps, medicine, leisure and more, so perhaps they didn't do such a bad job...

Regards,

Chris
Reply by David Brown September 29, 2009
Grant Edwards wrote:
> On 2009-09-29, David Brown <david.brown@hesbynett.removethisbit.no> wrote:
>
>> The big issue with "static" and C is that file-scope objects
>> should be static by default, and only made public with an
>> explicit keyword (at the very least, an "extern" declaration).
>
> I agree. Making file-scope things global by default was a
> mistake. The odd thing is that all of the assemblers I've used
> did things the "right" way and required a "global" declaration
> and by default things were file-scope.
I think the same thing applies to most languages. I guess we can blame this idiosyncrasy (or rather, idiocy) of C on the slow Dec Writer keyboards - K&R were more concerned about avoiding keystrokes than on making a good structured modular programming language. The default "int" is in the same category.
Reply by ChrisQ September 29, 2009
John Devereux wrote:

> OK, of course, that makes more sense. I was thinking he might have some
> neat C way.
Well, there is, in that you can put function pointers into a structure and dereference through a pointer to the structure. From what I remember, that's one of the methods used by early C++-to-C translators...

Regards,

Chris
Reply by John Devereux September 29, 2009
Dombo <dombo@disposable.invalid> writes:

> John Devereux schreef:
>> Vladimir Vassilevsky <nospam@nowhere.com> writes:
>>
[...]
>>> I do require all access to such variables via member functions. I like
>>> an overloaded function UART.Timeout() instead of separate Set.. and
>>> Get.. functions.
>>
>> How does that work, like:
>>
>>     timeout = UART.Timeout(GET,UART1);
>>
>>     ...
>>
>>     UART.Timeout(SET,10000);
>
> Since Vladimir mentioned overloaded member functions I expect he means
> something like:
>
>     class UART
>     {
>     public:
>         void Timeout(int amount);  // Set UART timeout
>         int Timeout();             // Get UART timeout
>     };
OK, of course, that makes more sense. I was thinking he might have some neat C way.

--
John Devereux
Reply by Grant Edwards September 29, 2009
On 2009-09-29, David Brown <david.brown@hesbynett.removethisbit.no> wrote:

> The big issue with "static" and C is that file-scope objects
> should be static by default, and only made public with an
> explicit keyword (at the very least, an "extern" declaration).
I agree. Making file-scope things global by default was a mistake. The odd thing is that all of the assemblers I've used did things the "right" way and required a "global" declaration, and by default things were file-scope.

--
Grant
Reply by David Brown September 29, 2009
Grant Edwards wrote:
> On 2009-09-28, Jon Kirwan <jonk@infinitefactors.org> wrote:
>
>> [First off, I want to make sure no one reading this is conflating the
>> idea of 'global' with 'static'. A global definition (in c) is always
>> a static definition. But not all static definitions are global.]
>
> The overloading of the 'static' keyword in the C language to
> modify lexical scope in one case and lifetime in another was a
> huge mistake IMO.
I like to think of "static" as making the object in question (function or data) a fixed global object for the lifetime of the program, with a name derived from its closest scope. Thus "static int x" local to function "foo" in file "bar.c" acts exactly as though it were a file-scope global variable called "bar_foo_x".

Of course, that's a little simplification - "static" lets the compiler optimise much better, and you can get yourself in a real muddle trying to figure out the meaning of a static local variable in a "static inline" function included in a header.

The big issue with "static" and C is that file-scope objects should be static by default, and only made public with an explicit keyword (at the very least, an "extern" declaration).
Reply by David Brown September 29, 2009
Vladimir Vassilevsky wrote:
> David Brown wrote:
>
>> This is comp.arch.embedded - a great many of the projects here do not
>> grow "large" because they are for small systems. The code is small
>> enough that is not a big problem to keep track of modules and their
>> interfaces, and that includes global variables. On the other hand,
>> the waste of runtime and code space caused by unnecessary accessor
>> functions /can/ be significant.
>
> Unfortunately, I've seen quite a few projects which were started as
> "small" and then overgrew into "large". It was too late to change the
> interfaces, so they ended up duplicating the entire modules by
> copy/paste and renaming the global variables by find/replace.
Projects that start small and then grow large are a bad sign in the first place. A common cause of this sort of problem is when people take "test" or "prototype" code and designs, and try to turn them into a finished product.

It is always important to understand what you are aiming for in your code. Are you trying to write something specific for one application, or do you want it for general use? Are you trying to write something small and fast, or is that of low importance for this particular piece of code? Does it need to be very portable? Does it need to be easily understood by others? You should not write code if you don't know /what/ you are targeting, and /why/ you are doing it in a particular way. Then you can (mostly!) avoid overgrowth issues.

And of course, the use of global variables is a small issue in such cases - if you make a duplicate module with find/replace, you have the same issues with global functions or structures as with global variables.
> "Code efficiency" is a very common excuse for sloppy practices.
"Premature optimisation is the root of all evil". However, too many abstractions is just as bad. Pick a happy medium, suited to the code in question.
>> To take another example, suppose you have a module that collects
>> samples from somewhere and puts it in an array dataSamples[]. Should
>> that only be accessible through a function "int GetDataSample(int n)"?
>> That would make a summation function very slow. Should you have a
>> "CopyDataSamples(int * dest, int n)" to let users have a local copy?
>> Again, it's slow, and a waste of data space. Perhaps just "const int
>> * AccessDataSamples(void)" to return a pointer to the data - it's
>> fast, but loses the compile-time checking you get with simple clear
>> access to the raw global data.
>
> The notions of "Slow" and "Fast" are meaningless without respect to the
> particular application. A function either works or not works. If it
> works, I would do it in the most clear, portable and modular way. If
> there are the *real* constraints in size/speed, then I may have to
> resort to globals.
I use global variables when they are appropriate - the choice can often be because they make the code clearer (and perhaps also more portable and modular). Building fine oo interfaces and other abstractions /may/ make the code better (in the sense of making it clear, portable and modular), but it can also make it worse. If it is done as part of a consistent design pattern, it will help. If it is done simply because someone said that global variables were bad, it will make it worse.

Of course, there is no doubt that code correctness is much more important than code speed, and also that clarity of code (which heavily influences its correctness, especially during later maintenance) is generally more important than raw speed. But given the choice of writing clear, efficient code or clear, inefficient code, I know which I choose.