On 12/16/2014 10:26 AM, Paul Rubin wrote:

> Dennis <dennis@none.none> writes:

>> Maybe not a full FFT but they can do remarkable things. The following is some code that takes advantage of the ARM short vector instructions. ... The "big platform" was an original Beaglebone Black (about $50) for compilation and execution. ...
>>  62 0070 660F6F84    movdqa 8240(%rsp,%rax), %xmm0 ...
>>  63 0079 660FFE84    paddd  4144(%rsp,%rax), %xmm0

> That looks like x86 code.

Oops, you are right - wrong listing; this is the one run on a PC afterwards to see what would happen there. Here is a slide I did in a Beaglebone presentation. Also, the example was with ints - it has been a while since I did the presentation. Memory fades...

    unsigned int a[1024], b[1024], c[1024], i;
    for (i = 0; i < 1024; i++) {
        a[i] = b[i] + c[i];
    }

    Scalar (6 cycles * 1024)          Vector (8 cycles * 256)

    r0->c, r1->b, r5->a               r0->c, r1->b, r3->a
    r3 = 0                            r2 -> 16 past end of a
    .L3:                              .L3:
    ldr ip, [r0, r3]                  vld1.64 {d16-d17}, [r0:64]!
    ldr r2, [r1, r3]                  vld1.64 {d18-d19}, [r1:64]!
    add r2, ip, r2                    vadd.i32 q8, q9, q8
    str r2, [r5, r3]                  vst1.64 {d16-d17}, [r3:64]!
    add r3, r3, #4                    cmp r3, r2
    cmp r3, #4096                     bne .L3
    bne .L3
Languages, is popularity dominating engineering?
Started by ●December 12, 2014
Reply by ●December 16, 2014
Reply by ●December 16, 2014
On 16/12/14 16:48, Don Y wrote:

> Hi David,

> On 12/16/2014 2:58 AM, David Brown wrote:

>> On 15/12/14 21:15, Don Y wrote:

>>> On 12/15/2014 12:24 PM, David Brown wrote:

>>>> On 15/12/14 18:41, Don Y wrote:

>>>>> As I said up-thread, let the compiler do the optimizing. Concentrate on *clearly* writing what you intend. If the compiler is clever enough to extract some nugget of efficiency from what you've written and *equivalently* transform your code into something "better", let *it* do that -- instead of *you* trying to be cryptically clever in what you *write*.

>>>> Absolutely. Compilers are better able to optimise from clear code than cryptic code - for example, they can do a better job with array expressions or multiplies than when someone has tried to "help" by using pointers or shifts.

>>> I think a lot of this (programmer) behavior comes from the days of naive compilers -- folks got used to "being clever" with their source to try to force particular opcodes to be generated.

>>> *Now*, the behavior just obfuscates what the programmer is *really* trying to "say" (do).

>> Yes. I have worked with compilers that needed such "clever help" in order to produce efficient code. Thankfully, I left such tools behind for the most part. (And when working with such tools, I always examined the generated code to see the results.)

> IME, that was the problem: the "cleverness" didn't typically result in more efficient code. Just "harder to read" or "easier to break". The developer, however, would typically be *so* convinced that his "tricks" were smarter than the compiler's "dumb" (hey, it's a machine, right?) approach that they *must* be more efficient.

Well, that may be the case for "typical" programmers - but when /I/ used compilers that needed this kind of help, I verified the generated code to be sure that there was a real difference before doing any manual array-to-pointer conversion or other such tricks. So yes, /my/ tricks were without doubt smarter than that compiler - it was not that my tricks were so clever, but that the particular compiler had very poor optimisation. But when such tricks are carried over to newer and/or better compilers, they are often not just harder to read or maintain, but result in poorer code - compilers can generally optimise clear code better than cryptic code.

> And, they would be part of his/her *style* instead of applied where needed AND VERIFIED to achieve their intended goals. I.e., if you *really* think this operation needs to be expressed in this weird manner, where's the commentary justifying your action??

> (What happens when the compiler is upgraded? Do you go back and remove all that cruft? Or, leave the cost of its presence there when it's not really improving anything??)

You don't upgrade or change toolchains in the middle of a project. And if the same code is later re-used in a different project, then it could well need significant change to re-write it in a better style - but that possibility applies to any code re-use. (The usual issues apply - efficient running code, written in a short time, on the current system balanced against portability, flexibility and maintainability in the future. It's all very well wanting to write code that will be good for future re-use - but not if the current customer has to pay for it.)

>>>>>> Minimising the scope /and/ lifetimes of variables is always good programming practice, and gives the compiler the best chance at finding flaws and generating good code.

>>>>> +1 IME, "block scope" is too infrequently exploited.

>>>> Not by me it isn't! I am also a fan of C99's mixing of declarations and statements, so that you don't have to declare variables until you actually need them.

>>> Again, I think this is a legacy behavior. People get used to declaring variables at the top of a function -- because you *had* to.

>> For some people, that's the case. Others think it is somehow clearer, or better style, to declare variables at the top of the function. People have strange ideas about code style!

> I would *prefer* to be able to find definitions in a fixed place. But, it's only a problem when you're dealing with some lengthy, complicated bit of code and have to go chase down *where* the definition might lie.

> Solution: strive for simple functions and good "presentation" -- so you can more readily "perceive" where the declarations *should* be (and, surprise!, that's where they are!)

> At times, it can be frustrating as it can add (some small amount of) typing to your effort -- e.g., when you decide to move the declaration (because a variable must be accessed in a larger scope; or, earlier).

> Limbo allows declaration and assignment with slightly different syntax:
>     foo: int;
>     foo = 2;
> vs.
>     foo := 2;

> Note that the latter is "encouraged" by the syntax -- it's easier to type than the former (I believe you should strive to make better practices the ones that require the least effort from the user... so laziness causes them to be adopted! :> )

> I can't tell you the number of times I've had to change the ":=" to '=' and scroll up to insert the declaration a few lines earlier. This gets old *really* quick! It would save a lot of un-typing/re-typing to just put all the declarations in one common place...

> My "pro bono" day (perhaps last of the year?? :> ) ...
Reply by ●December 17, 2014
Hi Dimiter,

On 12/16/2014 8:56 AM, Dimiter_Popoff wrote:

> On 16.12.2014 г. 10:25, upsidedown@downunder.com wrote:

>> ... These days with compilers running on big platforms, much optimization can be done and there is no point in trying to help the compiler. In practice the only need for manual assembler is to utilize some special target machine instructions that can't be expressed in HLL.

> It is perhaps true compilers can be made that good but I have yet to see HLL written code which my VPA (which I control how high it gets while I write, lines with register operations alternate with actions on objects etc.) written code won't beat by at least a factor of 10 when it comes to density. Execution speed likely too but this is harder to compare.

While I can't comment on *your* abilities -- or the characteristics of your VPA -- I can only say that *I* have (long ago) been blown away by how *clever* many of the optimizing compilers can be!

The machine has the advantage of being able to "instantly" evaluate a variety of different approaches to a *particular* problem -- and settle on the "best" one (where "best" can be defined AT COMPILE TIME) while taking into consideration as much (or as little) of the surrounding "context" as it deems appropriate.

By contrast, when *I* "optimize", I have to look at "big picture" issues (that the compiler is incapable of grasping). E.g., what *type* of algorithm would best solve this problem? And, for that specific type, how best to implement it?

Also, I can take advantage of compiler enhancements/improvements without spending ANY time "re-bugging" the code. Or, porting the code to an entirely different processor.

Finally, the guy who fills my chair next week can step right into my shoes and benefit from my labors (in crafting the approach) and the compiler's efforts (in translating idea into op-codes).

> It is not just a compiler thing, it is about how the programmer thinks; high level languages simply constrain that, assuming that he is too stupid to not make mistakes which are known to be commonly made.

A lot depends on the language. Some languages truly treat the programmer as "challenged" (in the name of "making it easier to code"). Others expose the programmer to increasing amounts of the hardware -- and risk.

You can make mistakes in *any* language. E.g., typing "A * B" instead of "A + B" -- no language will protect you from your own incompetence.

> I realize I am probably alone in the world left doing that but I can say that once one becomes really good at writing with access to low level - good register model in the head all the time - and all the facilities to go higher (plenty of argument/text processing abilities, partial word extraction, recursive macros, multidimensional text variables etc. etc.) high level languages with their predefined "one fit all" sentences look like tools from the stone age.

What do you do when your design is ported to a processor that has a different register complement? E.g., in the early 80's, I was in the Z80 camp while a friend was in the 68xx. How we approached algorithms varied considerably as he was always stuck with the "single accumulator" model (i.e., always hammering on memory for every data access). I, OTOH, could keep several pointers, data, flags, etc. "on hand" and quickly juggle between them.

What I *most* miss about, e.g., ASM is *good* macro processing. I could write things like:

    STATE Idle
        On digit    MoveTo GetDigits   Executing accumulate
        On help_key MoveTo Helping     Executing show_help
        Otherwise   RemainInPlace      Executing beep

    STATE Helping
        On next_key RemainInPlace      Executing next_screen
        On prev_key RemainInPlace      Executing prev_screen
        Check Status
        Otherwise   PreviousState     Executing restore

Or:

    LOOP: SCALE 2.5 2.5
          BLANK
          MOVETO 2734 8883
          BLUE
          MOVETO 2734 0
          MOVETO 0 0
          MOVETO 0 8883
          BLANK
          SYNC 13
          JMP LOOP

And, being able to do this without dragging in another text processor (possibly forcing these "snippets" to reside in other files to do so)
Reply by ●December 17, 2014
Hi David,

On 12/16/2014 4:52 PM, David Brown wrote:

> But when such tricks are carried over to newer and/or better compilers, they are often not just harder to read or maintain, but result in poorer code - compilers can generally optimise clear code better than cryptic code.

Exactly. It's harder to get the code "right", harder for others to understand what you are doing (and *why* you are doing it that way), harder to sort out boundary conditions and verify behavior there, etc.

>> And, they would be part of his/her *style* instead of applied where needed AND VERIFIED to achieve their intended goals. I.e., if you *really* think this operation needs to be expressed in this weird manner, where's the commentary justifying your action??

>> (What happens when the compiler is upgraded? Do you go back and remove all that cruft? Or, leave the cost of its presence there when it's not really improving anything??)

> You don't upgrade or change toolchains in the middle of a project.

Ha! I've had projects where the tools were updated almost *weekly* -- as bugs were uncovered, etc. The alternative is to live with every bug in the compiler. This leads to lots of "documented workarounds" in the sources which are almost *worse* than trying to be "cryptic" ("Why didn't the developer just write XXXX instead of this *mess*?")

This played a large part in moving me to FOSS tools -- not wanting to HAVE TO wait for the vendor to fix a set of bugs so I could write "legitimate" code instead of "workaround" code!

> And if the same code is later re-used in a different project, then it could well need significant change to re-write it in a better style - but that possibility applies to any code re-use. (The usual issues apply - efficient running code, written in a short time, on the current system balanced against portability, flexibility and maintainability in the future. It's all very well wanting to write code that will be good for future re-use - but not if the current customer has to pay for it.)

I've had a simple approach to this with clients: you want to benefit from the work I've done for others? Then you have to allow the next guy to benefit from the work I'm doing for *you*! Or, you can pay for me to develop EVERYTHING I USE *from scratch*!
Reply by ●December 17, 2014
On Tue, 16 Dec 2014 09:08:56 -0700, Don Y <this@is.not.me.com> wrote:

> Hey George!

> How they hangin'?

Coyotes ate them. 8-) Last week the Critter Cam captured a fight in the yard.

> On 12/15/2014 7:45 PM, George Neuner wrote:

>> On Fri, 12 Dec 2014 14:48:39 -0700, Don Y <this@is.not.me.com> wrote:

>>> That's sort of a silly question. Define "know". I.e., one such definition might be "able to sit down and write EFFECTIVE/bug-free code NOW".

>> By that definition, few people *know* any language.

>> I certainly can read/understand more languages than I can sit down and immediately start to work with (there's only ~ a half dozen there).

> I can "read" lots of languages (including things like Spanish, Greek, etc. -- despite never having learned any of them). But, could only *guess* as to what they were saying, in many cases. If you don't have enough confidence to be able to spot errors *in* a piece of code, then I wouldn't consider that as "knowing".

Which goes to the very next thing I said - vocabulary. It's relatively easy to spot simple grammatically caused logic errors in many languages - however, complex logic errors require knowing the purpose of the variables, their dependencies, and often also the evolution of their values. These things are independent of the language's grammar.

But the largest source of bugs in most code is incorrect usage of library functions. Spotting these additionally requires knowledge of the library - not just of the code that's calling it.

> I.e., something that separates the real code from "pseudo-code" in your mind (so, instead of "it looks like this is trying to...", you are confident saying "this code DOES...")

>> But generally what prevents me using a new language quickly is not its grammar but its vocabulary: i.e. before I can do anything useful I have to learn about its "standard library".

>> But so far as writing effective, bug free code the first time: that's a fine ideal, but it really isn't that important. What is important is that buggy code not escape from the development environment and that the final code be effective.

> I guess it depends on the OP's intent behind the question. I see "know" and "familiarity" as two entirely different things. I can PROBABLY sit down and look at a huge number of different code fragments in many languages -- including imaginary ones -- and claim, with some amount of confidence, that "it looks like this is trying to..." (especially if the code was written by someone who "knows" the language so there is some inherently high degree of correctness to it).

> But, I'd never claim/imply to be able to sit down and "make it work" (in some small number of attempts).

Agreed.

>>> Some languages force you to clearly define THE IMPLEMENTATION. But, if this still has a poor match to the actual *problem*, then the language's features/facilities/CONSTRAINTS don't do anything to improve the quality of the product ("Our code is 100% bug-free -- machine validated!" "Yeah, but it doesn't *do* what we set out to do!")

>> The definition of "bug" is operation deviating from specification. When in doubt, change the specification. 8-)

> "Well, if we're going to *change* it, why waste time WRITING IT in the first place?? Let's wait until we're done -- and, hell, at that point, we won't NEED it!! :> "

That is the whole point of constraint logic languages, including the class of so-called "expert system" languages, in which the code is a *specification* of the solution and that specification *IS* the implementation. Unfortunately, too many of them run like molasses in winter.

> Gotta go hide the last batch of cookies before C decides they're "breakfast"...

My sister has a new(ish this year) beagle. Last week it got onto the kitchen counter and ate 1/3 of a cranberry/nut pie, 3/4 loaf of date/nut bread and 20 frosted gingerbread cookies.

Then it asked to be let out and led my sister right past the crumbs. 8-)

George
Reply by ●December 18, 2014
Hi George, On 12/17/2014 7:21 PM, George Neuner wrote:> On Tue, 16 Dec 2014 09:08:56 -0700, Don Y <this@is.not.me.com> wrote: > >> How they hangin'? > > Coyotes ate them. 8-)Low hanging? Or, tall coyotes??> Last week the Critter Cam captured a fight in the yard.Cool! We've been hearing nearby kills. A few nights ago, whatever they caught *screamed*!>>>> That's sort of a silly question. Define "know". I.e., one such definition >>>> might be "able to sit down and write EFFECTIVE/bug-free code NOW". >>> >>> By that definition, few people *know* any language. >>> >>> I certainly can read/understand more languages than I can sit down and >>> immediately start to work with (there's only ~ a half dozen there). >> >> I can "read" lots of languages (including things like Spanish, Greek, >> etc. -- despite never having learned any of them). But, could only >> *guess* as to what they were saying, in many cases. If you don't >> have enough confidence to be able to spot errors *in* a piece of >> code, then I wouldn't consider that as "knowing". > > Which goes to the very next thing I said - vocabulary. It's > relatively easy to spot simple grammatically caused logic errors in > many languages - however, complex logic errors require knowing the > purpose of the variables, their dependencies, and often also the > evolution of their values. These things are independent of the > language's grammar. > > But the largest source of bugs in most code is incorrect usage of > library functions. Spotting these additionally requires knowledge of > the library - not just of the code that's calling it.I think a fair number of bugs come from folks not remembering what "x" is (icky grammar). Hence the silly naming conventions that try to *remind* the developer that "this is a pointer to a char", etc. 
If they haven't specified the functionality and approach BEFORE they sat down to code, it's too easy to trick themselves in to thinking "something else" halfway through.>>> But so far as writing effective, bug free code the first time: that's >>> a fine ideal, but it really isn't that important. What is important >>> is that buggy code not escape from the development environment and >>> that the final code be effective. >> >> I guess it depends on the OP's intent behind the question. I see >> "know" and "familiarity" as two entirely different things. I can >> PROBABLY sit down and look at a huge number of different code fragments >> in many languages -- including imaginary ones -- and claim, with some >> amount of confidence, that "it looks like this is trying to..." >> (especially if the code was written by someone who "knows" the language >> so there is some inherently high degree of correctness to it). >> >> But, I'd never claim/imply to be able to sit down and "make it work" >> (in some small number of attempts). > > Agreed.And, the OP hasn't tried to clarify that.>> Gotta go hide the last batch of cookies before C decides they're "breakfast"... > > My sister has a new(ish this year) beagle. Last week it got onto the > kitchen counter and ate 1/3 of a cranberry/nut pie, 3/4 loaf of > date/nut bread and 20 frosted gingerbread cookies. > > Then it asked to be let out and led my sister right past the crumbs. > 8-)Ha! My first winter, here, I opted to make a popcorn garland (C wanted a small tree). Had never done this but figured it couldn't be *that* hard! (No, just tedious and time consuming! It's amazing how many popped kernels it takes to create a linear foot of garland!!) Finished. Sighed audibly enough for the folks in the next block to hear me. Then, strung (stringed?) the garland around the tree and wandered off to clean by hands, etc. 
Returned to the living room to find Josie standing on her hind legs, paws on the table, eyes fixed on the tree -- with the garland strand down her throat. Joyfully chewing away while the tree obligingly rotated to "dispense" additional garland "on demand". It was damn near her last evening on the planet!! :-/ More cookies, tonight. Need "the zest of three lemons" (though I decided to try *six*, instead). OK, might as well pick the tree clean. It's a wee bit of a thing (second season, maybe 3 ft tall?) so it can't have many fruit, right? <frown> Sixty pounds. Spent the last two hours juicing the things. And I still haven't zested my six lemons!! :-/
Reply by ●December 18, 2014
On Wed, 17 Dec 2014 22:58:41 -0700, Don Y <this@is.not.me.com> wrote:

> On 12/17/2014 7:21 PM, George Neuner wrote:

>> On Tue, 16 Dec 2014 09:08:56 -0700, Don Y <this@is.not.me.com> wrote:

>> But the largest source of bugs in most code is incorrect usage of library functions. Spotting these additionally requires knowledge of the library - not just of the code that's calling it.

> I think a fair number of bugs come from folks not remembering what "x" is (icky grammar). Hence the silly naming conventions that try to *remind* the developer that "this is a pointer to a char", etc.

The naming conventions exist because programming involves a lot of trivia, and some people just can't keep it all in their heads.

I don't think grammar is the answer to that. I know you are familiar with Algol, but you may not be aware that the Algol parser was itself specified using a 2-level attributed 'vW'-grammar (for Van Wijngaarden). The idea at the time was to (try to) encode semantics and data typing into the parser-generating (compiler-compiler) grammar so that the resulting parser could catch language usage errors as syntax errors before they reached later stages of compilation.

I don't know how well that idea actually worked: I've heard that Algol parsers all were hand written (rather than tool generated), that they really didn't catch most type and semantic errors, and that there was never any implementation that 100% corresponded to the official grammar.

However, as a language and compiler geek myself, I know that trying to read that grammar gives me a headache. It's so complex that few languages since - and no commercial ones - have tried to follow Algol's example. Single-level (E)BNF grammars which (essentially) encode only syntax became the norm for language development.

Statistically, the most problems occur with functions expecting pointer arguments: e.g., passing a value where a pointer is expected, or a pointer where a handle is expected. Too many libraries have APIs that (relatively speaking) are poorly designed.

> If they haven't specified the functionality and approach BEFORE they sat down to code, it's too easy to trick themselves into thinking "something else" halfway through.

It's worse in languages that don't require declarations and/or that allow variables to take on values of any type. Up above 'x' was ... but now it is ...

Having said that, I appreciate having certain flexibility and these days I'm writing a lot of code in Scheme and SQL.

George
Reply by ●December 18, 2014
On 18/12/14 01:59, Don Y wrote:

> Hi Dimiter,

> On 12/16/2014 8:56 AM, Dimiter_Popoff wrote:

>> On 16.12.2014 г. 10:25, upsidedown@downunder.com wrote:

>>> ... These days with compilers running on big platforms, much optimization can be done and there is no point in trying to help the compiler. In practice the only need for manual assembler is to utilize some special target machine instructions that can't be expressed in HLL.

>> It is perhaps true compilers can be made that good but I have yet to see HLL written code which my VPA (which I control how high it gets while I write, lines with register operations alternate with actions on objects etc.) written code won't beat by at least a factor of 10 when it comes to density. Execution speed likely too but this is harder to compare.

> While I can't comment on *your* abilities -- or the characteristics of your VPA -- I can only say that *I* have (long ago) been blown away by how *clever* many of the optimizing compilers can be!

> The machine has the advantage of being able to "instantly" evaluate a variety of different approaches to a *particular* problem -- and settle on the "best" one (where "best" can be defined AT COMPILE TIME) while taking into consideration as much (or as little) of the surrounding "context" as it deems appropriate.

> By contrast, when *I* "optimize", I have to look at "big picture" issues (that the compiler is incapable of grasping). E.g., what *type* of algorithm would best solve this problem? And, for that specific type, how best to implement it?

One other time when the programmer can (or thinks he can) optimise better than the compiler is when the programmer knows something the compiler does not know. For example, the programmer might know that an "int" parameter to a particular function is only ever 1, 2 or 3 - and use that to his advantage in the code.
However, with modern tools you can achieve quite a lot of such optimisation by giving the compiler more information. There are three tools for this. One is to make sure you use "static" for your functions and file-scope variables whenever possible (this is good practice anyway), since it lets the compiler know all uses of the function and data. Another is to use link-time optimisation (or whole-program optimisation), which lets the compiler optimise across modules. And finally, many compilers support gcc's "__builtin_unreachable()" or similar pseudo-functions that let you give the compiler extra information. Thus you can tell the compiler that "x" is between 1 and 3 by writing:

    if ((x < 1) || (x > 3))
        __builtin_unreachable();

Other things that modern compilers can do are constant propagation across modules (with LTO), partial inlining, and function cloning (if a function is only ever used a few times, but with different constant parameters, it can be more efficient to generate multiple constant-specific versions of the function). It's all great stuff, letting you write your code clearly, correctly, and flexibly, while helping the compiler turn it into fast object code.

> Also, I can take advantage of compiler enhancements/improvements without spending ANY time "re-bugging" the code.

> Or, porting the code to an entirely different processor.

> Finally, the guy who fills my chair next week can step right into my shoes and benefit from my labors (in crafting the approach) and the compiler's efforts (in translating idea into op-codes).

>> It is not just a compiler thing, it is about how the programmer thinks; high level languages simply constrain that, assuming that he is too stupid to not make mistakes which are known to be commonly made.

> A lot depends on the language. Some languages truly treat the programmer as "challenged" (in the name of "making it easier to code"). Others expose the programmer to increasing amounts of the hardware -- and risk.

> You can make mistakes in *any* language. E.g., typing "A * B" instead of "A + B" -- no language will protect you from your own incompetence.

>> I realize I am probably alone in the world left doing that but I can say that once one becomes really good at writing with access to low level - good register model in the head all the time - and all the facilities to go higher (plenty of argument/text processing abilities, partial word extraction, recursive macros, multidimensional text variables etc. etc.) high level languages with their predefined "one fit all" sentences look like tools from the stone age.

> What do you do when your design is ported to a processor that has a different register complement? E.g., in the early 80's, I was in the Z80 camp while a friend was in the 68xx. How we approached algorithms varied considerably as he was always stuck with the "single accumulator" model (i.e., always hammering on memory for every data access). I, OTOH, could keep several pointers, data, flags, etc. "on hand" and quickly juggle between them.

> What I *most* miss about, e.g., ASM is *good* macro processing. I could write things like:

>     STATE Idle
>         On digit    MoveTo GetDigits   Executing accumulate
>         On help_key MoveTo Helping     Executing show_help
>         Otherwise   RemainInPlace      Executing beep
>
>     STATE Helping
>         On next_key RemainInPlace      Executing next_screen
>         On prev_key RemainInPlace      Executing prev_screen
>         Check Status
>         Otherwise   PreviousState     Executing restore

> Or:

>     LOOP: SCALE 2.5 2.5
>           BLANK
>           MOVETO 2734 8883
>           BLUE
>           MOVETO 2734 0
>           MOVETO 0 0
>           MOVETO 0 8883
>           BLANK
>           SYNC 13
>           JMP LOOP

> And, being able to do this without dragging in another text processor (possibly forcing these "snippets" to reside in other files to do so)

I too have sometimes had to use Python pre-processing for calculations that could often be done by assembler macro processors, but which cannot be done with the C pre-processor. A key issue is that the C pre-processor can't do loops or recursive macros. However, C++ offers new ways to do this with templates with integer parameters, and constexpr functions in C++11 (extended in C++14). You won't get quite the syntaxes from your examples above, but it can certainly cover some advanced compile-time calculation.
Reply by ●December 18, 2014
On 14-12-18 09:37 , George Neuner wrote:> On Wed, 17 Dec 2014 22:58:41 -0700, Don Y <this@is.not.me.com> wrote: > >> On 12/17/2014 7:21 PM, George Neuner wrote: >>> On Tue, 16 Dec 2014 09:08:56 -0700, Don Y <this@is.not.me.com> wrote: >> >>> But the largest source of bugs in most code is incorrect usage of >>> library functions. Spotting these additionally requires knowledge of >>> the library - not just of the code that's calling it. >> >> I think a fair number of bugs come from folks not remembering what >> "x" is (icky grammar). Hence the silly naming conventions that try >> to *remind* the developer that "this is a pointer to a char", etc. > > The naming conventions exist because programming involves a lot of > trivia, and some people just can't keep it all in their heads. > > I don't think grammar is the answer to that. I know you are familiar > with Algol, but you may not be aware that the Algol parser was itself > specified using a 2-level attributed 'vW'-grammar (for Van > Wijngaarden).The 2-level grammar was used only for Algol-68, which is not what people usually mean by "Algol" -- Algol-60 is usually meant. Algol-68 was a very different language from Algol-60. The grammar of Algol-60 was specified using Backus-Naur context-free grammar ("BNF"), which is when BNF was invented.> The idea at the time was to (try to) encode semantics > and data typing into the parser generating (compiler-compiler) grammar > so that the resulting parser could catch language usage errors as > syntax errors before they reached later stages of compilation. > > I don't know how well that idea actually worked: I've heard that Algol > parsers all were hand written (rather than tool generated), that they > really didn't catch most type and semantic errors and that was never > any implementation that 100% corresponded to the official grammar. 
> However, as a language and compiler geek myself, I know that trying to
> read that grammar gives me a headache.

I think it resembles an Algol-68 parser implemented in Prolog. The
"uniform replacement rule" is essentially the same as matching and
binding free Prolog variables. Too bad that Prolog was not around at the
time; it might have provided a direct route from a 2-level grammar to a
parser.

> It's so complex that few
> languages since - and no commercial ones - have tried to follow
> Algol's example. Single-level (E)BNF grammars which (essentially)
> encode only syntax became the norm for language development.

But for compiler development, many people developed attribute grammars,
and I think some also used them. The idea is the same -- include
semantic analysis in parsing -- but it is more practical because the
attributes can be of various types, attribute computations can be
propagated upwards or downwards in the parse tree
(synthesized/inherited attributes), and there can be several iterations
of attribute computation.

In the 2-level grammar, the semantic data corresponding to attribute
values were forced to look like sentences or syntax trees, and could
only be accessed by something like the pattern-matching decomposers used
today in functional languages.

But I don't really understand why the method used for describing the
grammar and semantics of a programming language should have much to do
with the kinds of bugs that are common in programs using that language.
Either the language is defined to catch certain kinds of syntax and
semantic errors at compile time, or not, and the compiler either works
according to the language standard, or not. The method used to define or
describe the language can only have an indirect effect, by helping or
hindering the construction of correct compilers and other analysis
tools.

--
Niklas Holsti
Tidorum Ltd

niklas holsti tidorum fi
      .      @       .
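To make the attribute-grammar idea above concrete: a synthesized
attribute is just a value computed bottom-up while parsing, and a
semantic condition can then be rejected at parse time, exactly as the
Algol designers hoped. This is an invented toy grammar ("digit (+
digit)*" with a byte-range semantic check), not anything from the
thread.

```cpp
#include <cctype>
#include <stdexcept>
#include <string>

// Toy recursive-descent parser for: digit ('+' digit)*
// The running sum is a synthesized attribute, computed during parsing.
// A semantic rule (result must fit in one byte) is checked at parse
// time, so a semantic error surfaces exactly like a syntax error.
struct Parser {
    const char* p;

    int digit() {  // attribute of a leaf: the digit's value
        if (!std::isdigit(static_cast<unsigned char>(*p)))
            throw std::runtime_error("syntax error: digit expected");
        return *p++ - '0';
    }

    int expr() {   // attribute flows upward from the digits to the sum
        int v = digit();
        while (*p == '+') { ++p; v += digit(); }
        if (v > 255)  // semantic condition enforced inside the parser
            throw std::runtime_error("semantic error: value exceeds a byte");
        return v;
    }
};

int parse_sum(const std::string& s) {
    Parser parser{s.c_str()};
    return parser.expr();
}
```

A full attribute grammar also allows inherited (downward-flowing)
attributes and typed attribute values; this sketch shows only the
synthesized direction, which is the part a plain recursive-descent
parser gets almost for free.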
Reply by ●December 18, 2014
Hi Don,

On 18.12.2014 г. 02:59, Don Y wrote:
> Hi Dimiter,
>
> On 12/16/2014 8:56 AM, Dimiter_Popoff wrote:
>> On 16.12.2014 г. 10:25, upsidedown@downunder.com wrote:
>>> ...
>>> These days with compilers running on big platforms, much optimization
>>> can be done and there is no point in trying to help the compiler. In
>>> practice the only need for manual assembler is to utilize some special
>>> target machine instructions that can't be expressed in HLL.
>>
>> It is perhaps true compilers can be made that good but I have yet to
>> see HLL written code which my VPA (which I control how high it
>> gets while I write, lines with register operations alternate with
>> actions on objects etc.) written code won't beat by at least a factor
>> of 10 when it comes to density. Execution speed likely too but this
>> is harder to compare.
>
> While I can't comment on *your* abilities -- or the characteristics
> of your VPA -- I can only say that *I* have (long ago) been blown away
> by how *clever* many of the optimizing compilers can be!

Of course they can be clever. My point is that overall humans are much
cleverer at using a language - check for example how we use natural
languages and how the machines cope. It is a matter of time until they
outsmart us, maybe not much time, but for now we are incomparably
better. It is just a matter of how well we choose to learn/use the
language - and how good a language processor we have in our heads, of
course (this varies a lot between individuals).

> The machine has the advantage of being able to "instantly" evaluate
> a variety of different approaches to a *particular* problem -- and
> settle on the "best" one (where "best" can be defined AT COMPILE TIME)
> while taking into consideration as much (or as little) of the
> surrounding "context" that it deems appropriate.

Of course there are such tasks, but in my thinking they are what my code
will have to do, not a job for the compiler. I am the one who creates
the code, not the compiler.
Leaving it to choose the algorithm would simply mean I am not
programming, just using the machine. Which I would gladly do, of course,
were it good enough to do what I want; so far it is not.

>> I realize I am probably alone in the world left doing that but I can
>> say that once one becomes really good at writing with access to low
>> level - good register model in the head all the time - and all the
>> facilities to go higher (plenty of argument/text processing abilities,
>> partial word extraction, recursive macros, multidimensional text
>> variables etc. etc.) high level languages with their predefined
>> "one size fits all" sentences look like tools from the stone age.
>
> What do you do when your design is ported to a processor that has
> a different register complement?

What I did when I had to port my code from 68k to power was to create
VPA. It can be done for any register model, just a matter of
compilation. It would be a pain to do it for a machine with fewer than
32 GP registers, but it can be done - yet I see no reason why.
Load/store machines are ruling at the moment, and 16 registers are just
too few for a load/store machine to be viable for me to bother with
(e.g. ARM); it is fundamentally limited in a way similar to x86. Why
should I spend years of my life getting used to driving a car with the
handbrake on by design?

> What I *most* miss about, e.g., ASM is *good* macro processing.

With VPA you would find a whole new world of that :-). Over the years I
have added functionality which allows you to do a lot more than normal
macros allow in assemblers. E.g. when I needed an assembler for the HC11
some 15 years ago under DPS, I just wrote a macro file which did it -
and that was on the predecessor of VPA; things have grown significantly
since.
Add to that the ability to have shell-script lines within your source -
with multidimensional variables, local and global ("multidimensional"
here meaning a variable name can be made up of variables) - and you have
quite a tool; I have not wished to improve it for years now.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI             http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/