EmbeddedRelated.com
Forums

CPU recommendations?

Started by Unknown March 23, 2009
David Brown wrote:
> Michael N. Moran wrote:
>> Steve at fivetrees wrote:
>>> I *do* like OO, but consider C++ to be a poor/primitive
>>> implementation.
>>
>> So you want to code C++ features by hand?
>> Hand coded vtables?
>> How do you code templates?
>
> I'd guess he means something like:
>
> void runMotor(motor *mp) {
>     if (mp->isBig) runBigMotor(mp); else runSmallMotor(mp);
> }
That's one way to achieve polymorphism. However, each time you add a new type of motor, you must modify this file by hand, and the implementation of "motor" must be changed.
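For what it's worth, the usual C answer to that maintenance problem is a hand-coded function pointer (a one-slot "vtable"), so that runMotor() itself never changes when a new motor type appears. A minimal sketch - all names here are illustrative, not from the original posts:

```cpp
#include <cassert>

// Hypothetical sketch: dispatch through a function pointer stored in the
// struct itself, instead of an if/else chain on a type flag.
typedef struct motor motor;
struct motor {
    void (*run)(motor *mp);  // hand-coded one-slot "vtable"
    int speed;
};

static void runBigMotor(motor *mp)   { mp->speed = 100; }
static void runSmallMotor(motor *mp) { mp->speed = 10; }

// Adding a new motor type means writing one new run function and
// initializing the pointer - this function is never edited.
static void runMotor(motor *mp) { mp->run(mp); }
```

The trade-off is essentially the one C++ makes under the hood: one indirect call per dispatch, in exchange for an open set of motor types.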
> That sort of thing is perfectly possible. It's not as
> scalable as using C++ class hierarchies, but on the other
> hand you have all the relevant code in one place.
Having all of the code in one place is only a good idea if the module will never be reused in another project.
>> What better privacy can you achieve with C that
>> you are unable to accomplish with C++?
>
> In C, your header file might contain:
>
> extern void startMotor(void);
> extern int motorSpeedTarget;
>
> And the C file contains:
>
> static void runMotor(void);
> static int currentSpeed;
>
> With C++, your header file has the class definition:
>
> class Motor {
> public:
>     void start(void);
>     int speedTarget;
> private:
>     void run(void);
>     int currentSpeed;
> };
Where is the C struct for "motor"? In the previous C example, runMotor() took a pointer to a motor structure, which is the equivalent of the implicit "this" pointer in C++.
> There is no excuse for designing an OO language in which
> code and data that is explicitly reserved for private use
> within a class implementation must be publicly exposed in
> the interface definition. It's bad design in the
> language, and there are no good, efficient ways to avoid
> it.
The fact that C++ (just like C) is a static systems programming language seems like a pretty good excuse to me. Especially when one considers compiler implementation issues. It seems a reasonable trade-off to me, and certainly no different from the C language.
> The same applies to things like inline functions - they
> are critical to getting good quality generated code while
> maintaining an abstract interface, but it means that
> these implementation details end up in the interface file
> (the header).
The C version of inlining is a macro. How is that "better"? Again, this is more of a trade-off in compiler implementation complexity. Inlined functions are an optimization, and generally should be used sparingly, when they are short, simple, and unlikely to contain logic errors.
> The common "workaround" for this is that
> your C++ program is actually compiled by a driver C++
> file that #include's all the other C++ files in the
> project!
Yikes! Those who include C++ files (not headers) deserve what they get. I don't know how "common" this is. On the other hand, at least GCC is capable of inlining code that does not appear in header files under some circumstances. A similar benefit is also obtained with whole-program optimization passes.
> Of course, you can use abstract base classes - if you are
> willing to pay the cost.
Blech.
>>> But on embedded targets (more my area), I have an aversion to any use
>>> of malloc/new,
>>
>> For some embedded systems, Malloc/new is convenient
>> at initialization. However, free/delete are not OK
>> for mine (heap fragmentation.)
>> Placement *new* is a great tool.
>
> Agreed - it's free/delete that is the big issue, rather
> than malloc/new. But it is much better to have fixed
> static allocation in small systems - access to
> linker-allocated addresses is faster (that's different on
> bigger processors, where the difference is minor).
Agreed.
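Placement new, as mentioned above, lets you run a constructor in storage you already own - statically allocated, so nothing is ever freed and the heap is never touched. A minimal sketch (names are illustrative):

```cpp
#include <cassert>
#include <new>  // placement operator new

struct Controller {
    int gain;
    explicit Controller(int g) : gain(g) {}
};

// Statically allocated, correctly aligned raw storage - no heap involved.
alignas(Controller) static unsigned char storage[sizeof(Controller)];

Controller *initController() {
    // Placement new: construct the object in `storage`, allocate nothing.
    return new (storage) Controller(42);
}
```

Since the object lives in static storage, delete is never called; if Controller had a nontrivial destructor you would invoke it explicitly with `c->~Controller()` when (and if) the object's lifetime should end.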
>>> and to any form of late binding.
>>
>> What in the world is wrong with late binding?
>>
>> I guess you don't like function pointers either.
>
> Late binding is good if you really need it - virtual
> method accesses are not much different from function
> pointers (though there might be an extra indirection
> hidden). But you don't often need it - and with C++ it's
> easy to get accidental late binding.
How does one achieve *accidental* late binding? I use pure virtual interfaces frequently. The overhead is minimal compared to the benefits it brings in terms of complexity management and code reuse. Of course, if I have some portion of a system that cannot deal with the overhead, that is optimized as required using alternate techniques. However, that is a rarity.
> On big systems, there are certain habits and rules such
> as "always make your destructors virtual", and if these
> are carried over to embedded systems they quickly add
> unnecessary overhead.
That rule drives me crazy... especially when it is enforced by the compiler. However, I find the overhead is minimal and avoidable where needed for optimization.
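The cost behind the "always make your destructors virtual" rule is concrete: the first virtual function (destructor included) adds a hidden vtable pointer to every object. An illustrative sketch - exact sizes are implementation-defined, but every common ABI behaves this way:

```cpp
#include <cassert>

struct Plain {            // plain data: just the int
    int x;
};

struct WithVirtualDtor {  // same data, plus a hidden vtable pointer
    int x;
    virtual ~WithVirtualDtor() {}
};
```

On a typical 32-bit MCU that is 4 bytes versus 8 per object, before counting the vtable itself and the indirect call on destruction - noticeable when you have many small objects.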
> The point here is not that C++ forces you to have such
> overhead, but that it takes a good programmer to make
> sure they don't do that sort of thing by mistake.
I don't blame the language for being flexible ... but yes ... writing good software requires practice with purpose.
>> Do you mean C++ is broken or it's ugly?
>> How is C++ broken? It works quite well for me.
>
> It's certainly ugly (look at the template syntax, or the "new style"
> cast syntax).
Yep.
> It is certainly broken in several ways - the lack of a
> clear interface/implementation boundary is a major point.
It's no more broken than C.
> Templates are broken - there is no consensus on whether
> these should be in headers or cpp files, there is no
> language-defined way to control or discover what code is
> generated or used for templates, and there is no
> consistency in error checking and reporting when using
> templates.
Template programming is UGLY, but it certainly isn't broken. Templates/Generics are a great way to reuse algorithms while retaining strict type safety. I certainly define all of my templates in header files simply because like inlining, it makes sense. Syntax errors in templates are *fun* to find! ;) However, I *have* become accustomed to it with practice.
> Multiple inheritance is broken - it shouldn't be there in
> the first place, or it should be done properly (allowing
> a "sorted list of triangles" to inherit from both "list"
> and "shape").
I distinguish two types of MI.

1) Interface inheritance (think Java interfaces) - In this case you inherit only pure virtual functions. No data, no overridden implementation.

2) Behavioral inheritance. In this case you inherit virtual functions which may or may not be overridden.

I frequently use interface inheritance. Behavioral inheritance is only broken in the sense that if you are crazy enough to use it as a rule then you will pay. Containment is almost always a better solution.
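Interface inheritance in the sense described above looks like this in C++: both bases are pure virtual, with no data and no implementation to inherit, so the usual multiple-inheritance hazards don't arise. A sketch with illustrative class names:

```cpp
#include <cassert>
#include <cstring>

// Two pure virtual interfaces, in the style of Java interfaces.
struct Runnable {
    virtual void run() = 0;
    virtual ~Runnable() {}
};

struct Named {
    virtual const char *name() const = 0;
    virtual ~Named() {}
};

// Multiple inheritance of interfaces only: no diamond, no inherited state.
class Pump : public Runnable, public Named {
public:
    void run() override { started = true; }
    const char *name() const override { return "pump"; }
    bool started = false;
};
```

Client code can hold a `Runnable *` or a `Named *` and never know a Pump is behind it - which is exactly the complexity-management benefit claimed above.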
> Overloading is broken - there should be a way to handle
> overloading based on return types, and ways to handle
> ambiguous overloading.
Overloading in general works fine. But I agree with you about the return types issue. This is rarely a problem for me, however.
> And of course there are mistakes in C that C++ has
> inherited, like int promotion that screws with
> overloading.
>
> This doesn't mean that C++ doesn't work - but it could
> have been so much better.
Ah the benefits of hindsight. However, as a practical matter, IMHO C++ is a fine language for systems programming.
>>> Now - if someone were to come up with a new version of C which
>>> understands classes, and (emphasis) *led to clearer
>>> code*, I'd be right there. C++ ain't it.
>>
>> C++ "understands" classes.
>> Obfuscation can be accomplished in any syntax.
>
> C++ gives new ways to avoid obfuscation, and new ways to implement it.
:)

--
Michael N. Moran (h) 770 516 7918
5009 Old Field Ct. (c) 678 521 5460
Kennesaw, GA, USA 30144
http://mnmoran.org

"So often times it happens, that we live our lives in chains
and we never even know we have the key."
"Already Gone" by Jack Tempchin (recorded by The Eagles)

The Beatles were wrong: 1 & 1 & 1 is 1
David Brown wrote:
> So we definitely agree on the second part - that there are few
> developers that can write good C++ on embedded systems! I don't count
> myself as a good C++ programmer - I have not had nearly enough
> experience with it, and certainly not for real-world projects.
When the C/C++ Users Journal was in print there was a monthly column (by Herb Sutter, was it?). Each month somebody in a notional programming shop would be mystified by a problem, and the Guru would sort everybody out. The lesson I took away is that if you program C++ seriously, and you don't have a Guru around, you're doomed.

Mel.
Michael N. Moran wrote:
> David Brown wrote:
>> Michael N. Moran wrote:
>>> Steve at fivetrees wrote:
>>>> I *do* like OO, but consider C++ to be a poor/primitive
>>>> implementation.
>>>
>>> So you want to code C++ features by hand?
>>> Hand coded vtables?
>>> How do you code templates?
>>
>> I'd guess he means something like:
>>
>> void runMotor(motor *mp) {
>>     if (mp->isBig) runBigMotor(mp); else runSmallMotor(mp);
>> }
>
> That's one way to achieve polymorphism. However,
> each time you add a new type of motor, you must modify
> this file by hand, and the implementation of "motor"
> must be changed.
Correct - but that's only relevant if you are going to be adding new motor types often. If that is something that is only done on occasion, and involves lots of changes throughout the program, then the cost of modifying runMotor() is approximately zero.

For small programs, there is usually only one programmer who has a fair idea of how everything fits together, and much of the software is dedicated to the particular task in hand. There is therefore very little benefit in keeping a clean abstract interface and separation of code layers - it adds overhead to the generated code, and it adds overhead to the development (since the programmer must understand, obey, and perhaps modify the interface as well as the code itself).

Abstracting code into class hierarchies makes sense if you can make use of them for code re-use, and if it really does save time in development. In reality, classes get much less re-use than people think.
>> That sort of thing is perfectly possible. It's not as
>> scalable as using C++ class hierarchies, but on the other
>> hand you have all the relevant code in one place.
>
> Having all of the code in one place is only a good
> idea if the module will never be reused in another
> project.
It's a good idea if you want to be able to find all the relevant code in one place, and be able to view it and understand it easily. This is very dependent on the size of the project - for larger projects, you need more rigorous structuring. But for smaller projects, such structuring becomes proportionately more overhead.
>>> What better privacy can you achieve with C that
>>> you are unable to accomplish with C++?
>>
>> In C, your header file might contain:
>>
>> extern void startMotor(void);
>> extern int motorSpeedTarget;
>>
>> And the C file contains:
>>
>> static void runMotor(void);
>> static int currentSpeed;
>>
>> With C++, your header file has the class definition:
>>
>> class Motor {
>> public:
>>     void start(void);
>>     int speedTarget;
>> private:
>>     void run(void);
>>     int currentSpeed;
>> };
>
> Where is the C struct for "motor"?
> In the previous C example, runMotor() took a
> pointer to a motor structure, which is the
> equivalent of the implicit "this" pointer in
> C++.
That was a different example - one in which there could be several motors of several types.
>> There is no excuse for designing an OO language in which
>> code and data that is explicitly reserved for private use
>> within a class implementation must be publicly exposed in
>> the interface definition. It's bad design in the
>> language, and there are no good, efficient ways to avoid
>> it.
>
> The fact that C++ (just like C) is a static systems
> programming language seems like a pretty good excuse
> to me. Especially when one considers compiler implementation
> issues. It seems a reasonable trade-off to me, and certainly
> no different from the C language.
It is worse than C since you have more private implementation details revealed in the public header. And while I agree that it is vital to the efficiency of C++ that details of a class contents are known to its users at compile time (the obvious case being so that the using code knows the size of the class objects, other factors include being able to inline small methods), I totally disagree that this implies that private implementation details must be in the interface file. That is only the case if you rely rigidly on the traditional (i.e., old-fashioned) compilation of individual modules to target object code, and then link them together afterwards.
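One way to keep private details out of the header without abandoning the traditional compilation model is the "opaque pointer" (pimpl) idiom: the header exposes only a forward declaration, at the cost of one heap allocation and one extra indirection - which is exactly why small embedded systems often skip it. A sketch with illustrative names, both halves shown in one file:

```cpp
#include <cassert>

// --- what would live in Motor.h: no private data visible ---
class Motor {
public:
    Motor();
    ~Motor();
    void start();
    int speed() const;
private:
    struct Impl;   // defined only in the implementation file
    Impl *pimpl;   // costs one allocation and one indirection per access
};

// --- what would live in Motor.cpp: the real private state ---
struct Motor::Impl {
    int currentSpeed;
};

Motor::Motor() : pimpl(new Impl{0}) {}
Motor::~Motor() { delete pimpl; }
void Motor::start() { pimpl->currentSpeed = 100; }
int Motor::speed() const { return pimpl->currentSpeed; }
```

Changing Impl never forces recompilation of the class's users, since the header never changes - the trade-off the thread is arguing about, made explicit.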
>> The same applies to things like inline functions - they
>> are critical to getting good quality generated code while
>> maintaining an abstract interface, but it means that
>> these implementation details end up in the interface file
>> (the header).
>
> The C version of inlining is a macro. How is that
> "better?" Again, this is more of a trade-off in compiler
> implementation complexity.
No, the C version of inlining is inlining, at least with modern compilers. So in this case C is just as bad as C++ - I didn't say C was good, just that C++ was even worse than C in some cases.
> Inlined functions are an optimization, and generally should
> be used sparingly, when they are short, simple and unlikely
> to contain logic errors.
Inlined functions were introduced in C++ as a way of reducing function call overhead for small methods so that it is practical to use the abstraction of an accessor method without the cost. They are to be used regularly and freely, albeit with a little care. Where possible, you should let the compiler figure out if a function should be inlined or not - use "inline" as a hint. Used properly, they lead to smaller and faster code. If you don't use inlined functions for your classes, you will either force class users to access data directly, or generate terrible object code (especially on small systems).
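A concrete case of the accessor point: a trivial getter defined inside the class body is implicitly inline, so with optimization enabled a decent compiler reduces each call to a direct load - the abstraction costs nothing. Class and member names here are illustrative:

```cpp
#include <cassert>

class Counter {
public:
    // Defined in-class, hence implicitly inline: with optimization on,
    // calls compile down to the same code as touching `count` directly,
    // but the data member itself stays private.
    void tick() { ++count; }
    int value() const { return count; }
private:
    int count = 0;
};
```

Without inlining, every tick() would be a full function call - the "terrible object code" scenario described above for small systems.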
>> The common "workaround" for this is that your C++ program is actually
>> compiled by a driver C++ file that #include's all the other C++ files
>> in the project!
>
> Yikes! Those who include C++ files (not headers) deserve
> what they get. I don't know how "common" this is.
I have certainly seen it used. Careful use of namespaces (or at least top-level names - class encapsulation helps here) can make this reasonably safe. It can greatly reduce the overhead of module structuring since the compiler can then optimise across module boundaries.
> On the other hand, at least GCC is capable of inlining
> code that does not appear in header files under some
> circumstances. A similar benefit is also obtained
> with whole program optimization passes.
Yes, gcc can do interprocedural optimisation across modules - *if* you are using C rather than C++. This can lead to significant savings in code space and run time for some types of program. But the relevant flags don't work yet for C++ - this might have something to do with C++ being hideously complicated to parse.
>> Of course, you can use abstract base classes - if you are
>> willing to pay the cost.
>
> Blech.
>
>>>> But on embedded targets (more my area), I have an aversion to any
>>>> use of malloc/new,
>>>
>>> For some embedded systems, Malloc/new is convenient
>>> at initialization. However, free/delete are not OK
>>> for mine (heap fragmentation.)
>>> Placement *new* is a great tool.
>>
>> Agreed - it's free/delete that is the big issue, rather
>> than malloc/new. But it is much better to have fixed
>> static allocation in small systems - access to
>> linker-allocated addresses is faster (that's different on
>> bigger processors, where the difference is minor).
>
> Agreed.
>
>>>> and to any form of late binding.
>>>
>>> What in the world is wrong with late binding?
>>>
>>> I guess you don't like function pointers either.
>>
>> Late binding is good if you really need it - virtual
>> method accesses are not much different from function
>> pointers (though there might be an extra indirection
>> hidden). But you don't often need it - and with C++ it's
>> easy to get accidental late binding.
>
> How does one achieve *accidental* late binding?
You get it by having virtual functions somewhere in your class hierarchy without thinking about it. A danger of overdoing OO is that you lose track of what your code is actually doing, and where the different parts of the code are to be found.

This is very much a case of writing bad software in C++, rather than a failing of C++ itself - but C++ can make it so easy and tempting to write confusing and tangled hierarchies. I've seen programs that are incomprehensible because they are so full of "factory" classes, "delegater" classes, "proxy" classes, "single instance" wrappers, and so on that it is impossible for anyone but the author to figure out what is going on. All the classes seem to do is construct instances of each other and pass control back and forth.
> I use pure virtual interfaces frequently. The
> overhead is minimal compared to the benefits it brings
> in terms of complexity management and code reuse.
>
> Of course, if I have some portion of a system that
> cannot deal with the overhead, that is optimized as
> required using alternate techniques. However, that
> is a rarity.
It's all a balance. The danger with C++, especially on small systems, is that it is easy to get seriously out of balance.
>> On big systems, there are certain habits and rules such
>> as "always make your destructors virtual", and if these
>> are carried over to embedded systems they quickly add
>> unnecessary overhead.
>
> That rule drives me crazy... especially when it is
> enforced by the compiler. However, I find the overhead
> is minimal and avoidable where needed for optimization.
>
>> The point here is not that C++ forces you to have such
>> overhead, but that it takes a good programmer to make
>> sure they don't do that sort of thing by mistake.
>
> I don't blame the language for being flexible ...
> but yes ... writing good software requires practice
> with purpose.
As has been said in this thread, C++ is just a tool. But it's a tool that is unnecessarily hard to use well.
>>> Do you mean C++ is broken or it's ugly?
>>> How is C++ broken? It works quite well for me.
>>
>> It's certainly ugly (look at the template syntax, or the "new style"
>> cast syntax).
>
> Yep.
>
>> It is certainly broken in several ways - the lack of a clear
>> interface/implementation boundary is a major point.
>
> It's no more broken than C.
It is marginally more broken than C - but yes, C is broken too. C++ missed the opportunity of fixing some of C's failings.
>> Templates are broken - there is no consensus on whether these should
>> be in headers or cpp files, there is no
>> language-defined way to control or discover what code is
>> generated or used for templates, and there is no
>> consistency in error checking and reporting when using
>> templates.
>
> Template programming is UGLY, but it certainly isn't broken.
>
> Templates/Generics are a great way to reuse algorithms while
> retaining strict type safety.
>
> I certainly define all of my templates in header files
> simply because like inlining, it makes sense.
>
> Syntax errors in templates are *fun* to find! ;)
> However, I *have* become accustomed to it with practice.
>
>> Multiple inheritance is broken - it shouldn't be there in
>> the first place, or it should be done properly (allowing
>> a "sorted list of triangles" to inherit from both "list"
>> and "shape").
>
> I distinguish two types of MI.
>
> 1) Interface inheritance (think Java interfaces) - In this
> case you inherit only pure virtual functions. No data, no
> overridden implementation.
>
> 2) Behavioral inheritance. In this case you inherit virtual
> functions which may or may not be overridden.
>
> I frequently use interface inheritance. Behavioral
> inheritance is only broken in the sense that if you are
> crazy enough to use it as a rule then you will pay.
>
> Containment is almost always a better solution.
Agreed.
>> Overloading is broken - there should be a way to handle
>> overloading based on return types, and ways to handle ambiguous
>> overloading.
>
> Overloading in general works fine. But I agree with
> you about the return types issue. This is rarely
> a problem for me, however.
>
>> And of course there are mistakes in C that C++ has
>> inherited, like int promotion that screws with
>> overloading.
>>
>> This doesn't mean that C++ doesn't work - but it could
>> have been so much better.
>
> Ah the benefits of hindsight. However, as a practical
> matter, IMHO C++ is a fine language for systems
> programming.
>
>>>> Now - if someone were to come up with a new version of C which
>>>> understands classes, and (emphasis) *led to clearer
>>>> code*, I'd be right there. C++ ain't it.
>>>
>>> C++ "understands" classes.
>>> Obfuscation can be accomplished in any syntax.
>>
>> C++ gives new ways to avoid obfuscation, and new ways to implement it.
>
> :)
David Brown wrote:
> I know the reason for including the private part - it's still very bad
> design. It's based on a compilation and link strategy that was aimed at
> the limited computers of 30 years ago, and is simply not appropriate for
> a language of C++'s date. There were already better models in use at
> that time (such as Modula 2) - C++ should have been better than existing
> models, not worse than the reigning king of bad models (C).
In a very true sense we still have the computers of 30 years ago, they are just a lot faster. Linux and OSX are really not all that different from 1970's Unix. Earlier in the post you write:
> Yes, C code is valid C++ code - but if it is C code, it's not a C++ program.
I think the same applies to operating systems. A fancy compilation system may be way cool, but if it does not play well with `make' and friends, it is not a Unix compiler. This is one reason why Modula 2 and Ada have not taken over.

I don't have anything against advanced development systems -- I would love to run Symbolics Genera instead of Linux, and program in Common Lisp. But I also think there are valid reasons why the Unix compilation model has persisted so long.

--
Pertti
Pertti Kellomaki wrote:

> I think the same applies to operating systems. A fancy compilation
> system may be way cool, but if it does not play well with `make' and
> friends, it is not a Unix compiler. This is one reason why Modula 2
> and Ada have not taken over.
Can you explain why you think that Ada compilers do not play well with 'make'?

The most accessible Ada compiler today for Unix systems -- the GNU Ada compiler, GNAT -- follows the Unix philosophy in that the "Ada library" consists of the source-code files (*.ads, *.adb) plus a compiler-generated information file (*.ali) for each package. I think it plays well enough with 'make', although it does not need 'make' because the module/package system in Ada is strong enough to let the compiler figure out what needs recompilation.

I certainly prefer to just type 'gnatmake prog', whatever changes I have made in the multiple source-code files that make up 'prog', without having to edit a makefile to show that some source file now #includes some other file, as for C.

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
.      @       .
"Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
news:49d58f01$0$3478$4f793bc4@news.tdc.fi...
> Pertti Kellomaki wrote:
>> I think the same applies to operating systems. A fancy compilation
>> system may be way cool, but if it does not play well with `make' and
>> friends, it is not a Unix compiler. This is one reason why Modula 2
>> and Ada have not taken over.
>
> Can you explain why you think that Ada compilers do not play well with
> 'make'?
I was wondering the same thing myself with respect to Modula-2 compilers.
> I think it plays well enough with 'make', although it does not need > 'make' because the module/package system in Ada is strong enough to let > the compiler figure out what needs recompilation. >
Exactly. The fact that languages like C need separate makefiles that have to be manually maintained to keep in sync with source code is a disadvantage, NOT an advantage. Not only is it a tedious task, it is also error prone.

--
Chris Burrows
CFB Software
Armaide: LPC2000 Oberon-07 Development System
http://www.cfbsoftware.com/armaide
Chris Burrows wrote:
> "Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message
> news:49d58f01$0$3478$4f793bc4@news.tdc.fi...
>> Pertti Kellomaki wrote:
>>> I think the same applies to operating systems. A fancy compilation
>>> system may be way cool, but if it does not play well with `make' and
>>> friends, it is not a Unix compiler. This is one reason why Modula 2
>>> and Ada have not taken over.
>>
>> Can you explain why you think that Ada compilers do not play well with
>> 'make'?
>
> I was wondering the same thing myself with respect to Modula-2 compilers.
>
>> I think it plays well enough with 'make', although it does not need
>> 'make' because the module/package system in Ada is strong enough to let
>> the compiler figure out what needs recompilation.
>
> Exactly. The fact that languages like C need separate makefiles that have to
> be manually maintained to keep in sync with source code is a disadvantage
> NOT an advantage. Not only is it a tedious task it is also error prone.
While I fully agree that C's #include mechanism and total lack of proper module control and dependency checking is a disadvantage, it's worth noting that you can do a lot with make, a good directory structure, and a decent compiler (i.e., gcc).

I don't do any manual maintenance on my makefiles. When I start a new project, I use the same basic makefile as always. I modify it a little to match the compiler and target, along with the name of the project, any libraries that are needed, and so on. But I don't list the C files or object files - I use wildcard matching to compile all the C files in the directory. And there is no need to figure out header dependencies - gcc (or rather, its preprocessor) can generate dependency lists itself (the -MMD flag).

If I'm doing a project using a more limited commercial compiler, I still use gnu make and gcc's cpp to handle dependencies (I also use gcc as an error and warning checker even if it is not generating the target code).
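The makefile style described above can be sketched roughly like this (a config fragment with placeholder names and flags; -MMD and -MP are the gcc preprocessor options that emit and guard the per-file dependency lists):

```make
# Illustrative skeleton, not any real project's makefile.
CC     := gcc
CFLAGS := -Os -Wall -MMD -MP

# No hand-maintained file lists: every .c file in the directory is built.
SRCS := $(wildcard *.c)
OBJS := $(SRCS:.c=.o)

prog: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $^

# Pull in the gcc-generated header dependencies (absent on a first build).
-include $(OBJS:.o=.d)
```

Each compile writes a small `.d` makefile fragment listing the headers that object depends on, so header dependencies never have to be maintained by hand.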
"Chris Burrows" <cfbsoftware@hotmail.com> wrote in message 
news:3NGdneZuPP3bc07UnZ2dnUVZ_sqdnZ2d@posted.internode...
> "Steve at fivetrees" <steve@NOSPAMTAfivetrees.com> wrote in message
> news:-pmdnY01ZOTnfU7UnZ2dnUVZ8u6dnZ2d@pipex.net...
>> Again, it's down to design rather than coding. I admit that we tend to do
>> both things together these days. But again, C++ is *not* (IMHO) a good
>> tool - it's the right idea, but badly done. Time for a revision. Ada,
>> anyone?
>
> Have you investigated Oberon? The Oberon-07 that I use for embedded
> systems programming does not include all of the OO-specific features of
> Oberon-2 or Component Pascal but it also doesn't suffer from the C++ flaws
> that you mentioned. You will find all of the details here:
>
> http://www.inf.ethz.ch/personal/wirth/Articles/Oberon.html
Oooh, interesting. I'll be back when I've read up. Gimme a while ;).

Steve
--
http://www.fivetrees.com
"Michael N. Moran" <mnmoran@bellsouth.net> wrote in message 
news:6lWAl.23691$v8.7089@bignews3.bellsouth.net...
> Steve at fivetrees wrote:
>> My main objective in writing code these days is clarity,
>> clarity, and yet more clarity.
>
> Clarity is good.
> Clarity at the module/file level is possible.
> Clarity at the inter-module level requires
> external documentation.
Michael, all of your responses were cogent, and I'd discuss any one of them at length over several pints of bitter ;). And I'll respond more fully (earlier in my evening). Meanwhile, this one point: I disagree ;).

For me, it's about interfaces. I deal with a 3-terminal regulator as you'd expect: one input, one output, one ground. I read the datasheet from time to time (yes, external documentation [1]), but the interface is my main connection. So it should be in software. I don't need to know the details of what's going on under the hood (unless I need more detail about my I/O than I understand); I just need to know what it does. I do a lot of Unix sysadmin work; I can configure one .conf file happily while having some faith that the code will do what I ask. I don't need/want to know anything else.

Fundamentally this comes down to good decomposition and good methodologies for interfacing. Some of these techniques are now well-understood ("spaghetti code sucks", "globals are bad"); some less so - learning good decomposition and breaking things down into simple modules with clean interfaces seems to be a hard thing to teach. Or indeed learn.

[1] Trouble with s/w documentation is that it is always out of sync with the code. I'd rather the header file provided the interface and all the details of "how" and "what"... the docs will likely tell me what the originator had in mind for version 0.1 ;).

Laters...

Steve
--
http://www.fivetrees.com
"David Brown" <david@westcontrol.removethisbit.com> wrote in message 
news:49d4711f$0$22026$8404b019@news.wineasy.se...
> Michael N. Moran wrote:
>> Steve at fivetrees wrote:
>
> There is no excuse for designing an OO language in which code and data
> that is explicitly reserved for private use within a class implementation
> must be publicly exposed in the interface definition. It's bad design in
> the language, and there are no good, efficient ways to avoid it.
>
> The same applies to things like inline functions - they are critical to
> getting good quality generated code while maintaining an abstract
> interface, but it means that these implementation details end up in the
> interface file (the header). The common "workaround" for this is that
> your C++ program is actually compiled by a driver C++ file that #include's
> all the other C++ files in the project!
Exactly and indeed exactly. Furthermore, exactly.
>> What in the world is wrong with late binding?
>>
>> I guess you don't like function pointers either.
But I *lerv* function pointers ;). I can hide a *world* of sin behind them ;). No, seriously, I'm scarily into indirection.
>> But you don't often need it - and with C++ it's easy to get accidental
>> late binding. <<
Indeed.
>> The point here is not that C++ forces you to have such overhead, but that
>> it takes a good programmer to make sure they don't do that sort of thing
>> by mistake.
Agreed, except for one small detail: the real skill is in decomposition and design, not in the finer points of C++ or any other language. If the language gets in the way, step back.
>> It's certainly ugly (look at the template syntax, or the "new style"
>> cast syntax). It is certainly broken in several ways - the lack of a
>> clear interface/implementation boundary is a major point. Templates are
>> broken - there is no consensus on whether these should be in headers or
>> cpp files, there is no language-defined way to control or discover what
>> code is generated or used for templates, and there is no consistency in
>> error checking and reporting when using templates. Multiple inheritance
>> is broken - it shouldn't be there in the first place, or it should be
>> done properly (allowing a "sorted list of triangles" to inherit from
>> both "list" and "shape"). Overloading is broken - there should be a way
>> to handle overloading based on return types, and ways to handle
>> ambiguous overloading. And of course there are mistakes in C that C++
>> has inherited, like int promotion that screws with overloading. This
>> doesn't mean that C++ doesn't work - but it could have been so much
>> better. <<

Perfectly put. It could have been so much better.
>> C++ gives new ways to avoid obfuscation, and new ways to implement it. <<
I may adopt that as my sig ;) (along with "C provides enough rope to hang yourself, C++ provides enough to rig a schooner and *then* hang yourself"). Mainly the latter, I find. There seems to be no end to "cute" solutions to simple problems.

Steve
--
http://www.fivetrees.com