EmbeddedRelated.com
Forums
The 2026 Embedded Online Conference

Languages, is popularity dominating engineering?

Started by Ed Prochak December 12, 2014
On 18/12/14 02:09, Don Y wrote:
> Hi David, > > On 12/16/2014 4:52 PM, David Brown wrote: >> But when such tricks are carried over to newer and/or better >> compilers, they >> are often not just harder to read or maintain, but result in poorer >> code - >> compilers can generally optimise clear code better than cryptic code. > > Exactly. It's harder to get the code "right", harder for others to > understand what you are doing (and *why* you are doing it that way), > harder to sort out boundary conditions and verify behavior, there, etc. > >>> And, they would be part of his/her *style* instead of applied where >>> needed AND VERIFIED to achieve their intended goals. I.e., if you >>> *really* think this operation needs to be expressed in this weird >>> manner, where's the commentary justifying your action?? >>> >>> (What happens when the compiler is upgraded? Do you go back and remove >>> all that cruft? Or, leave the cost of its presence there when it's >>> not really improving anything??) >> >> You don't upgrade or change toolchains in the middle of a project. >> And if the > > Ha! I've had projects where the tools were updated almost *weekly* -- as > bugs were uncovered, etc. The alternative is to live with every bug in > the compiler. This leads to lots of "documented workarounds" in the > sources > which are almost *worse* than trying to be "cryptic" ("Why didn't the > developer just write XXXX instead of this *mess*?")
I have managed to avoid most of such tools - although I do have a few old compilers which need a bit more care in use. But small updates (assuming they don't bring in /new/ bugs) are not so much of a problem - what I always avoid is bigger upgrades or changes in tools. Most of the problems are caused by features that are not part of standardised C, such as subtle changes to part-specific header files or extensions to support interrupt functions.
> > This played a large part in moving me to FOSS tools -- not wanting to > HAVE TO wait for the vendor to fix a set of bugs so I could write > "legitimate" code instead of "workaround" code! >
With FOSS tools, if you work on the problem yourself it is often easy to get the main developers to work with you and get a quick resolution. But generally, the speed of bug fixes doesn't have any correlation with the price of the tools - I have seen fast and slow fixes in both free and expensive tools.
>> same code is later re-used in a different project, then it could well >> need >> significant change to re-write it in a better style - but that >> possibility >> applies to any code re-use. (The usual issues apply - efficient >> running code, >> written in a short time, on the current system balanced against >> portability, >> flexibility and maintainability in the future. It's all very well >> wanting to >> write code that will be good for future re-use - but not if the current >> customer has to pay for it.) > > I've had a simple approach to this with clients: you want to benefit from > the work I've done for others? Then you have to allow the next guy to > benefit > from the work I'm doing for *you*! Or, you can pay for me to develop > EVERYTHING I USE *from scratch*! >
Yes, this is all part of the process and the balance between customers and speed and reliability of development. I find that a fair amount of my code is project-specific, and code flexibility and maintainability doesn't matter much - but other parts are more general. There is never just one answer to these things.
On 18/12/14 00:59, Don Y wrote:
> The machine has the advantage of being able to "instantly" evaluate > a variety of different approaches to a *particular* problem -- and > settle on the "best" one (where "best" can be defined AT COMPILE TIME) > while taking into consideration as much (or as little) of the surrounding > "context" that it deems appropriate.
The "at compile time" caveat is extremely important w.r.t. C/C++ programming. There are many things the compiler cannot safely deduce due to the language definition. A well-known example of that is the pessimism that the compiler must introduce w.r.t. the possibility of pointer aliasing. That is especially acute wit separate compilation units produced by different companies and delivered in the form of object code libraries. That, of course, leads to the myriad of compiler flags which are at least as important as the code itself. Fortran is, according to those that have far more hard-won experience than myself, much less susceptible to that than C/C++ and so produces correspondingly tighter code. Even Java has an advantage: its HotSpot optimisation detects the what the code is actually doing on each different machine. Similar techniques when applied to C code have also resulted in surprisingly good performance improvements; google for "HP Dynamo" for hard information.
Hi David,

On 12/18/2014 3:22 AM, David Brown wrote:
> On 18/12/14 02:09, Don Y wrote:
>>>> And, they would be part of his/her *style* instead of applied where >>>> needed AND VERIFIED to achieve their intended goals. I.e., if you >>>> *really* think this operation needs to be expressed in this weird >>>> manner, where's the commentary justifying your action?? >>>> >>>> (What happens when the compiler is upgraded? Do you go back and remove >>>> all that cruft? Or, leave the cost of its presence there when it's >>>> not really improving anything??) >>> >>> You don't upgrade or change toolchains in the middle of a project. >>> And if the >> >> Ha! I've had projects where the tools were updated almost *weekly* -- as >> bugs were uncovered, etc. The alternative is to live with every bug in >> the compiler. This leads to lots of "documented workarounds" in the >> sources >> which are almost *worse* than trying to be "cryptic" ("Why didn't the >> developer just write XXXX instead of this *mess*?") > > I have managed to avoid most of such tools - although I do have a few > old compilers which need a bit more care in use. But small updates
To be clear, this isn't anywhere near the problem it was (for me) years ago. Compilers (and the environments in which they execute) have matured considerably. But, each time a tool is revised, at the very least, all the regression tests need to run -- again. (This can be A Good Thing as it can draw attention to non-portable behaviors that you may have let creep into the codebase). This can be treacherous with clients who naively think "a compiler is a compiler is a compiler". E.g., naively applying a new compiler to an old code base and wondering why the performance changes, etc.
> (assuming they don't bring in /new/ bugs) are not so much of a problem - > what I always avoid is bigger upgrades or changes in tools. Most of the
I formally "left" MS's tools because of this with an early C++ compiler. ("We no longer support that product -- but, you are entitled to a FREE upgrade to our NEW PRODUCT")
> problems are caused by features that are not part of standardised C, > such as subtle changes to part-specific header files or extensions to > support interrupt functions.
I religiously avoid all compiler-specific "extensions". Even ASM interfaces I accommodate *in* the ASM sources and not directly in the "C" sources. (e.g., not using "asm()")
>> This played a large part in moving me to FOSS tools -- not wanting to >> HAVE TO wait for the vendor to fix a set of bugs so I could write >> "legitimate" code instead of "workaround" code! > > With FOSS tools, if you work on the problem yourself it is often easy to > get the main developers to work with you and get a quick resolution. > But generally, the speed of bug fixes doesn't have any correlation with > the price of the tools - I have seen fast and slow fixes in both free > and expensive tools.
It's not directly an issue of "speed". Rather, one of *control*. Neither "vendor" is likely to give you a definite commitment as to when (if) the bug will be fixed. It may be "too involved" to receive attention, *now*. Or, it may get rolled into a release that makes other, "significant" changes to the compiler (while *I* could potentially isolate JUST that portion and apply it to my sources -- creating my own "unsupported release"). Or, it may be too trivial to push out a new release, etc.
>>> It's all very well wanting to >>> write code that will be good for future re-use - but not if the current >>> customer has to pay for it.) >> >> I've had a simple approach to this with clients: you want to benefit from >> the work I've done for others? Then you have to allow the next guy to >> benefit >> from the work I'm doing for *you*! Or, you can pay for me to develop >> EVERYTHING I USE *from scratch*! > > Yes, this is all part of the process and the balance between customers > and speed and reliability of development. I find that a fair amount of > my code is project-specific, and code flexibility and maintainability > doesn't matter much - but other parts are more general. There is never > just one answer to these things.
In my case, I look at development time as "hours of my life" and am not real keen on *repeating* those efforts. When I planted the first citrus tree, here, I dug a hole 4'x4'x4' (we have lousy soil). It was an "interesting" (though not "exciting") experience. The second and third holes (and, later, fourth, fifth and sixth) were just *chores*! "What makes you think I *want* to do this, AGAIN?"
On Thu, 18 Dec 2014 11:14:58 +0200, Niklas Holsti
<niklas.holsti@tidorum.invalid> wrote:

>On 14-12-18 09:37 , George Neuner wrote: >> On Wed, 17 Dec 2014 22:58:41 -0700, Don Y <this@is.not.me.com> wrote: >> >> ... but you may not be aware that the Algol parser was itself >> specified using a 2-level attributed 'vW'-grammar (for Van >> Wijngaarden). > >The 2-level grammar was used only for Algol-68, which is not what people >usually mean by "Algol" -- Algol-60 is usually meant. Algol-68 was a >very different language from Algol-60.
Actually there was/is a 2-level grammar for Algol-60. It was eschewed by the committee because too many of them felt that they didn't understand it well enough to implement it. It can be found buried in the preliminary reports. Secondly, even Algol-68 itself didn't directly use the Van Wijngaarden grammar. Instead it used a simpler derivative called NEST. NEST was developed from the vW grammar, again because the vW grammar was considered too complex to be understood by implementers. You can see the vW grammar in the original 68 report and the NEST grammar in the revised report. [If you compare the vW and NEST, it's clear that they are equivalent, but the NEST is far more verbose. vW is Turing complete, NEST is not ... the reason NEST was even possible was that the Turing power of vW wasn't needed.] WRT differences between 68 and 60: their relationship is similar to that of C++ to C. Just as C++ has 99.44% of C at its core, 68 has most of 60 at its core. In idiomatic usage, 68 and 60 actually were closer than C++ and C.
>> However, as a language and compiler geek myself, I know that trying to >> read that grammar gives me a headache. > >I think it resembles an Algol-68 parser implemented in Prolog. The >"uniform replacement rule" is essentially the same as matching and >binding free Prolog variables. Too bad that Prolog was not around at the >time, it might have provided a direct route from a 2-level grammar to a >parser.
I had to think about that for a while, but you're right ... it does resemble a declarative rule system.
>> It's so complex that few >> languages since - and no commercial ones - have tried to follow >> Algol's example. Single level (E)BNF grammars which (essentially) >> encode only syntax became the norm for language development. > >But for compiler development, many people developed and I think some >also used attribute grammars.
Yes, and there are several different forms of attribute grammars. However, so far as I have seen, _workable_ attribute grammars are not declarative but consist of operationally defined processing grafted onto the declarative (E)BNF. There are a handful of academic attempts at declarative attribute grammars, but so far as I know, nobody other than their creators has ever used them in anger. You can make the case that vW grammars are attribute grammars taken to the extreme, but the arguments I have heard I think are strained. To me, there is a vast difference between the declarative vW grammars and the operational attribute grammars I have seen.
>The idea is the same -- include semantic >analysis in parsing -- but it is more practical because the attributes >can be of various types, attribute computations can be propagated >upwards or downwards in the parse tree (inherited/synhetic attributes), >and there can be several iterations of attribute computation. In the >2-level grammar, the semantic data corresponding to attribute values >were forced to look like sentences or syntax trees and could only be >accessed by something like the pattern-matching decomposers, used today >in functional languages.
Yes.
>But I don't really understand why the method used for describing the >grammar and semantics of a programming language should have much to do >with the kind of bugs that are common in programs using that language.
Per se it really has nothing to do with aiding programmers, though the hope is that such help would fall out naturally as a side effect. The goal is a formalism that can define both syntax and semantics together. In current practice much of the semantics tend to be defined operationally by code and described by natural language supplemented by examples. There aren't many languages which have solid denotational definitions: Scheme, ML, Haskell and Prolog are the only ones I can think of offhand [and Prolog's was written *after* it was implemented]. No doubt there are others I am unaware of, however the vast majority of languages are defined by implementation. It's really quite rare to have a denotational (or even a simpler axiomatic) definition of the language's semantics. George
On 14-12-18 22:38 , George Neuner wrote:
> On Thu, 18 Dec 2014 11:14:58 +0200, Niklas Holsti > <niklas.holsti@tidorum.invalid> wrote: > >> On 14-12-18 09:37 , George Neuner wrote: >>> On Wed, 17 Dec 2014 22:58:41 -0700, Don Y <this@is.not.me.com> wrote: >>> >>> ... but you may not be aware that the Algol parser was itself >>> specified using a 2-level attributed 'vW'-grammar (for Van >>> Wijngaarden). >> >> The 2-level grammar was used only for Algol-68, which is not what people >> usually mean by "Algol" -- Algol-60 is usually meant. Algol-68 was a >> very different language from Algol-60. > > Actually there was/is a 2-level grammar for Algol-60. It was eschewed > by the committee because too many of them felt that they didn't > understand it well enough to implement it. It can be found buried in > the preliminary reports.
Thanks, I didn't know that. My impression is that even the use of BNF for Algol 60 was felt by many to be new and strange, with its recursive syntactical productions. I believe that the Fortran specification did not use BNF at that time.
> WRT differences between 68 and 60: their relationship is similar to > that of C++ to C. Just as C++ has 99.44% of C at its core, 68 has > most of 60 at its core.
I agree that most of Algol 60 is in Algol 68, but IMO much more was added and changed in moving from 60 to 68 than in moving from C to C++. Algol 60 did not even have struct/record types... But comparisons are of course subjective. -- Niklas Holsti Tidorum Ltd niklas holsti tidorum fi . @ .
On 18/12/14 19:17, Don Y wrote:
> Hi David, > > On 12/18/2014 3:22 AM, David Brown wrote: >> On 18/12/14 02:09, Don Y wrote: > > > I religiously avoid all compiler-specific "extensions". Even ASM > interfaces > I accommodate *in* the ASM sources and not directly in the "C" sources. > (e.g., not using "asm()")
Since much of my code is project-specific, and I don't change compilers within a project, I don't mind using compiler-specific extensions /if/ they add significant value to the code. In particular, I use gcc on most of my targets, or compilers (such as CodeWarrior) that support many gcc extensions - so to me, such gcc extensions are another tool in the toolbox. I won't use an extension unless it is actually /useful/, but some of them can be helpful, such as the "typeof" operator, arrays of zero length, case ranges, and some of the function and type attributes. Where these are hints rather than requirements, such as adding a "const" attribute to a function, I do this via a macro - on gcc, the macro will generate the "const" attribute and lead to extra static checks and possibly tighter object code. On non-gcc compilers, the macro dissolves to nothing and everything continues to work. On smaller micros in particular, there are often target-specific compiler extensions for handing things like data in flash, interrupts, etc. You can't avoid having at least some of that when coding for something like the AVR - you treat it just as you would treat using a specific library or header for the micro. Regarding assembly, I actually prefer to have any assembly as inline assembly in C, rather than separate assembly files. I find it keeps my source cleaner and neater. If I need the assembly as a way of accessing special processor features (interrupt enable/disable, memory barriers, cache control, etc.), then inline assembly is much more efficient than external assembly sources. I seldom need large pieces of assembly outside of that - perhaps just a little startup code. I'll write that in C whenever possible, and use inline assembly inside a dummy C function when required. Tastes and styles for that sort of thing vary, of course - I don't expect everyone to do it the way I do it.
> >>> This played a large part in moving me to FOSS tools -- not wanting to >>> HAVE TO wait for the vendor to fix a set of bugs so I could write >>> "legitimate" code instead of "workaround" code! >> >> With FOSS tools, if you work on the problem yourself it is often easy to >> get the main developers to work with you and get a quick resolution. >> But generally, the speed of bug fixes doesn't have any correlation with >> the price of the tools - I have seen fast and slow fixes in both free >> and expensive tools. > > It's not directly an issue of "speed". Rather, one of *control*.
Fair enough.
> Neither "vendor" is likely to give you a definite commitment as > to when (if) the bug will be fixed. It may be "too involved" to > receive attention, *now*. Or, it may get rolled into a release > that makes other, "significant" changes to the compiler (while > *I* could potentially isolate JUST that portion and apply it to my > sources -- creating my own "unsupported release"). Or, it may > be too trivial to push out a new release, etc. > >>>> It's all very well wanting to >>>> write code that will be good for future re-use - but not if the current >>>> customer has to pay for it.) >>> >>> I've had a simple approach to this with clients: you want to benefit >>> from >>> the work I've done for others? Then you have to allow the next guy to >>> benefit >>> from the work I'm doing for *you*! Or, you can pay for me to develop >>> EVERYTHING I USE *from scratch*! >> >> Yes, this is all part of the process and the balance between customers >> and speed and reliability of development. I find that a fair amount of >> my code is project-specific, and code flexibility and maintainability >> doesn't matter much - but other parts are more general. There is never >> just one answer to these things. > > In my case, I look at development time as "hours of my life" and am > not real keen on *repeating* those efforts. When I planted the first > citrus tree, here, I dug a hole 4'x4'x4' (we have lousy soil). It > was an "interesting" (though not "exciting") experience. The second > and third holes (and, later, fourth, fifth and sixth) were just *chores*! > > "What makes you think I *want* to do this, AGAIN?"
Hi David,

On 12/18/2014 1:47 AM, David Brown wrote:

> One other time when the programmer can (or thinks he can) optimise > better than the compiler is when the programmers knows something the > compiler does not know. For example, the programmer might know that an > "int" parameter to a particular function is only ever 1, 2 or 3 - and > use that to his advantage in the code. > > However, with modern tools you can achieve quite a lot of such > optimisation by giving the compiler more information. There are three > tools for this. One is to make sure you use "static" for your functions > and file-scope variables whenever possible (this is good practice > anyway), since it lets the compiler know all uses of the function and > data. Another is to use link-time optimisation (or whole-program > optimisation), that lets the compiler optimise across modules. And > finally, many compilers support gcc's "__builtin_unreachable()" or > similar pseudo-functions that let you give the compiler extra > information. Thus you can tell the compiler that "x" is between 1 and 3 > by writing: > > if ((x < 1) || (x > 3)) __builtin_unreachable(); > > Other things that modern compilers can do is constant propagation across > modules (with LTO), partial inlining, and function cloning (if a > function is only ever used a few times, but with different constant > parameters, it can be more efficient to generate multiple > constant-specific versions of the function). It's all great stuff, > letting you write your code clearly, correctly, and flexibly, while > helping the compiler turn it into fast object code.
One of my advisors was looking into ways to marry knowledgebases to (e.g.) DBMS's to make for more efficient (query, in the DBMS case) processing. E.g., instead of looking for "pregnant patients", look for *females* that are pregnant (drawing in the qualification from the knowledgebase: only females get pregnant)
>>> It is not just a compiler thing, it is about how the programmer thinks; >>> high level languages simply constrain that assuming that he is too >>> stupid to not make mistakes which are known to be commonly made. >> >> A lot depends on the language. Some languages truly treat the programmer >> as "challenged" (in the name of "making it easier to code"). Others >> expose the programmer to increasing amounts of the hardware -- and risk. >> >> You can make mistakes in *any* language. E.g., typing "A * B" instead of >> "A + B" -- no language will protect you from your own incompetence. >> >>> I realize I am probably alone on the world left doing that but I can >>> say that once one becomes really good at writing with access to low >>> level - good register model in the head all the time - and all the >>> facilities to go higher (plenty of argument/text processing abilities, >>> partial word extraction, recursive macros, multidimensional text >>> variables etc. etc.) high level languages with their predefined >>> "one fit all" sentences look like tools from the stone age. >> >> What do you do when your design is ported to a processor that has >> a different register complement? E.g., in the early 80's, I was >> in the Z80 camp while a friend was in the 68xx. How we approached >> algorithms varied considerably as he was always stuck with the >> "single accumulator" model (i.e., always hammering on memory for >> every data access). I, OTOH, could keep several pointers, data, >> flags, etc. "on hand" and quickly juggle between them. >> >> What I *most* miss about, e.g., ASM is *good* macro processing. 
>> I could write things like: >> >> STATE Idle >> On digit MoveTo GetDigits Executing accumulate >> On help_key MoveTo Helping Executing show_help >> Otherwise RemainInPlace Executing beep >> >> STATE Helping >> On next_key RemainInPlace Executing next_screen >> On prev_key RemainInPlace Executing prev_screen >> Check Status >> Otherwise PreviousState Executing restore >> >> Or: >> >> LOOP: SCALE 2.5 2.5 >> BLANK >> MOVETO 2734 8883 >> BLUE >> MOVETO 2734 0 >> MOVETO 0 0 >> MOVETO 0 8883 >> BLANK >> SYNC 13 >> JMP LOOP >> >> And, being able to do this without dragging in another text processor >> (possibly forcing these "snippets" to reside in other files to do so) > > I too have sometimes had to use Python pre-processing for calculations > that could often be done by assembler macro processors, but which cannot > be done with the C pre-processor. A key issue is that the C > pre-processor can't do loops or recursive macros.
And, the preprocessor often can't do "text processing" well (or at all)
> However, C++ offers new ways to do this with templates with integer > parameters, and constexpr functions in C++11 (and extended in C++14). > You won't get quite the syntaxes from your examples above, but it can > certainly cover some advanced compile-time calculation.
With a good macro processor, it's relatively easy to cobble together ASL's that you can intuitively embed in-line in your source -- without having to push it out to another file and write a parser to recognize what you've created. This allows you to trade run-time efficiency for algorithm clarity. E.g., each of the above examples are much clearer (and easier to maintain) than the bytecodes that they generate!
Hi Dimiter,

On 12/18/2014 2:31 AM, Dimiter_Popoff wrote:
>> On 12/16/2014 8:56 AM, Dimiter_Popoff wrote: >>> On 16.12.2014 &#1075;. 10:25, upsidedown@downunder.com wrote: >>>> ... >>>> These days with compilers running on big platforms, much optimization >>>> can be done and there is no point in trying to help the compiler. In >>>> practice the only need for manual assembler is to utilize some special >>>> target machine instructions that can't be expressed in HLL. >>> >>> It is perhaps true compilers can be made that good but I have yet to >>> see HLL written code which my VPA (which I control how high it >>> gets while I write, lines with register operations alternate with >>> actions on objects etc.) written code won't beat by at least a factor >>> of 10 when it comes to density. Execution speed likely too but this >>> is harder to compare. >> >> While I can't comment on *your* abilities -- or the characteristics >> of your VPA -- I can only say that *I* have (long ago) been blown away >> by how *clever* many of the optimizing compilers can be! > > Of course they can be clever. My point is that overall humans are > much cleverer at using a language - check for example how we use > natural languages and how do the machines cope. It is a matter > of time until they outsmart us, may be not much time, but for now > we are incomparably better. > It is just a matter of how well we choose to learn/use the > language - and how good a language processor we have in our head > of course (this varies a lot between individuals).
The last comment is the kicker -- if you're coding for your eyes only, you can do things a lot different than if others will have to view, maintain or enhance your codebase. When I was in school, one of the mantras was to make no (costly) optimization if it didn't give you a 2x performance increase (size, speed, etc.). The thinking was that you could get this "for free" by just taking a long coffee break and waiting for the technology to advance to that level (a naive idea -- but, it acknowledges that the sorts of optimizations that you make "manually" *do* take time... and, technology is advancing while you're optimizing!). What was NOT said was that some optimizations are far more productive than others. Granted, if your algorithm isn't fast enough to execute in the time allotted, you've got a problem. Or, if it requires more memory than you have available. But, each of those things can be improved upon by technology (or, "small dollars"). The thing that technology is lousy at is "enhancing wetware" -- programmers don't inherently get "twice as productive" each year or two. They can't write twice as much debugged code or comprehend twice as many lines per unit time. So, you want the tools that they use to *express* their ideas to become more productive. And, to do so in a way that allows *others* to readily understand what they are trying to say.
>> The machine has the advantage of being able to "instantly" evaluate >> a variety of different approaches to a *particular* problem -- and >> settle on the "best" one (where "best" can be defined AT COMPILE TIME) >> while taking into consideration as much (or as little) of the surrounding >> "context" that it deems appropriate. > > Of course there are such tasks but in my thinking they are what my code > will have to do, not a job for the compiler. I am the one who creates > the code, not the compiler. Leaving to it to choose the algorithm would > simply mean I am not programming, just using the machine. Which I would > gladly do of course were it good enough to do what I want; so far it > is not.
Most people can't come up with the "optimal" way of evaluating an arbitrary expression -- given the opcodes available in the *particular* target. (I've seen some cleaver uses of LEA to implement simple expression calculations in a single instruction). Or, if they *can*, the next pair of eyes ends up relying on the *commentary* to understand the operation being performed (and, as such, won't notice a bug in it!). These sorts of things are where an optimizing compiler can excel. WITHOUT burdening the original author or anyone reading the code at a later date. Some folks are good at arithmetic. Do you try to train *everyone* to be equally good at this? Or, do you provide *calculators* to rid them of the tedium of doing the math "manually"? (and, get greater confidence in their results, in the process)
>>> I realize I am probably alone on the world left doing that but I can >>> say that once one becomes really good at writing with access to low >>> level - good register model in the head all the time - and all the >>> facilities to go higher (plenty of argument/text processing abilities, >>> partial word extraction, recursive macros, multidimensional text >>> variables etc. etc.) high level languages with their predefined >>> "one fit all" sentences look like tools from the stone age. >> >> What do you do when your design is ported to a processor that has >> a different register complement? > > What I did when I had to port my code from 68k to power was to create > VPA. It can be done for any register model, just a matter of > compilation. > It would be a pain to do it for a machine with fewer than 32 GP > registers but it can be done - yet I see no reason why, load/store > machines are ruling at the moment and 16- registers are just to > few for a load/store machine to be viable for me to bother with > (e.g. ARM), it is fundamentally limited in a way similar to x86. > Why should I spend years of my life on getting used to drive a > car withe the handbrake on by design.
But the same argument could then be applied to *any* technology. Why not code everything in LISP? Any machine that is too slow or has too little memory is suddenly "not worthy"? The advantage that HLLs have is one of portability across a wide range of targets, applications and DEVELOPERS.
>> What I *most* miss about, e.g., ASM is *good* macro processing. > > With VPA you would find a whole new world of that :-). Over the > years I have added functionality which allows you to do a lot more > than normal macros allow in assemblers. E.g. when I needed an assembler > for the HC11 some 15 years ago under DPS I just wrote a macro file > which did it - and that was on the predecessor of VPA, things have > grown significantly since. Add to that the ability to have shell-script > lines within your source - with multidimensional variables, local > and global (multidimensional here meaning a variable name can be > made up of variables) and you have quite a tool - I have not > wished to improve it for years now.
What I want most in sources now is the ability to include better commentary -- multimedia files, interactive demos, etc. I.e., things that assist the developer/maintainer, not the "executable".
Don Y wrote:
> Hi Dimiter,
<snip>
> The thing that technology is lousy at is "enhancing wetware" --
> programmers don't inherently get "twice as productive" each year or
> two. They can't write twice as much debugged code or comprehend twice
> as many lines per unit time.
But programmers should at least target not being the bottleneck any more. It's a nicer way to do business and makes your life much more pleasant. <snip> -- Les Cargill
Hi David,

On 12/19/2014 1:35 AM, David Brown wrote:
> On 18/12/14 19:17, Don Y wrote:
>> On 12/18/2014 3:22 AM, David Brown wrote:
>>> On 18/12/14 02:09, Don Y wrote:
>>
>> I religiously avoid all compiler-specific "extensions". Even ASM
>> interfaces I accommodate *in* the ASM sources and not directly in the
>> "C" sources. (e.g., not using "asm()")
>
> Since much of my code is project-specific, and I don't change compilers
> within a project, I don't mind using compiler-specific extensions /if/
> they add significant value to the code. In particular, I use gcc on
> most of my targets, or compilers (such as CodeWarrior) that support many
> gcc extensions - so to me, such gcc extensions are another tool in the
> toolbox. I won't use an extension unless it is actually /useful/, but
> some of them can be helpful, such as the "typeof" operator, arrays of
> zero length, case ranges, and some of the function and type attributes.
> Where these are hints rather than requirements, such as adding a
> "const" attribute to a function, I do this via a macro - on gcc, the
> macro will generate the "const" attribute and lead to extra static
> checks and possibly tighter object code. On non-gcc compilers, the
> macro dissolves to nothing and everything continues to work.
>
> On smaller micros in particular, there are often target-specific
> compiler extensions for handling things like data in flash, interrupts,
> etc. You can't avoid having at least some of that when coding for
> something like the AVR - you treat it just as you would treat using a
> specific library or header for the micro.
>
> Regarding assembly, I actually prefer to have any assembly as inline
> assembly in C, rather than separate assembly files. I find it keeps my
> source cleaner and neater. If I need the assembly as a way of accessing
> special processor features (interrupt enable/disable, memory barriers,
> cache control, etc.), then inline assembly is much more efficient than
> external assembly sources.
> I seldom need large pieces of assembly outside of that - perhaps just
> a little startup code. I'll write that in C whenever possible, and use
> inline assembly inside a dummy C function when required.
>
> Tastes and styles for that sort of thing vary, of course - I don't
> expect everyone to do it the way I do it.
I do most of my designs with reuse in mind. Always striving to build abstractions that I can later port to other projects (clients). The "top level, application specific" nature of the project thus being minimized -- something that I can encode in an FSM, etc. with a bit of tweaking around the edges. [Why write a DPLL from scratch each time you need one? Or, a PID loop? Or...] In-lining ASM really only makes sense (for me) for clauses that are stand-alone and known to have no consequence on the balance of the execution environment. E.g., disabling interrupts. It also makes it tedious to interface ASM routines to the in-lined ASM code. So, you risk having to write two versions of the same routine: one in the HLL and the other in ASM. These practices have allowed me to borrow DEBUGGED code from arbitrary points throughout my development history and leverage them into existing products with minimal risk of introducing errors into the code *or* the test suite. [As I said in previous post: "What makes you think I *want* to do this, AGAIN?" :> ] In hindsight, I would have done much of this in a more templatized fashion. But, using features from the future is a bit problematic in the past! :>
>>>> This played a large part in moving me to FOSS tools -- not wanting
>>>> to HAVE TO wait for the vendor to fix a set of bugs so I could
>>>> write "legitimate" code instead of "workaround" code!
>>>
>>> With FOSS tools, if you work on the problem yourself it is often
>>> easy to get the main developers to work with you and get a quick
>>> resolution. But generally, the speed of bug fixes doesn't have any
>>> correlation with the price of the tools - I have seen fast and slow
>>> fixes in both free and expensive tools.
>>
>> It's not directly an issue of "speed". Rather, one of *control*.
>
> Fair enough.
Ever get called to make a patch to something you did 20+ years ago? (lesson belatedly learned: limit extent of support services in all contracts!) Do you *really* want to run it through a 20 year *newer* compiler and deal with recertifying the entire product? :( [This is why it was also important for me to keep old development *environments* available. *So* much easier than trying to figure out how to make an old tool run on a newer OS!]