On 10-Mar-17 at 20:40, Hans-Bernhard Bröker wrote:
> On 10.03.2017 at 09:17, Tom Gardner wrote:
>> This is a philosophical difference.
>>
>> If something is important to the correct operation of
>> the program, then I like it to be visible in the source
>> code.
>
> That's just it: it is hardly essential for the operation of such code
> _where_ exactly that register is; what really matters is that the
> variable describing it is a) properly structured and named, and b)
> ultimately correctly located.
>
> So how is it better to have that address hard-coded into the driver
> source code, as opposed to getting to decide it at link-time?

Because it might enable the code generator to generate better (smaller
and faster) code.

Wouter "Objects? No Thanks!" van Ooijen
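A minimal C sketch of the trade-off being described, assuming a
memory-mapped UART data register (the name and address are made up):

    #include <stdint.h>

    /* Address fixed at compile time: the compiler sees a constant
       and can use absolute addressing and fold offsets into it. */
    #define UART0_DR (*(volatile uint8_t *)0x4000C000u)

    /* Address left to the linker: the object is declared here and
       placed by a linker script, so the compiler must reference it
       through a symbol resolved only at link time. */
    extern volatile uint8_t uart0_dr;

    void send_compile_time(uint8_t c) { UART0_DR = c; }
    void send_link_time(uint8_t c)    { uart0_dr = c; }

Whether the first form actually wins depends on the target's
addressing modes; on some ISAs the linker-resolved form costs an
extra relocation-sized literal or a less compact instruction.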
Language feature selection
Started by ●March 5, 2017
Reply by ●March 10, 2017
Reply by ●March 10, 2017
Jacob Sparre Andersen <jacob@jacob-sparre.dk> writes:
> The Ada variant is:
>    Some_Variable : Some_Type with Address => 16#dead_beef#;

Do you need something like a volatile declaration? Does Ada have that?
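(For what it's worth, Ada does have an analogous marker: the Volatile
pragma/aspect. In C terms, the concern being raised looks like this
minimal sketch, with a made-up status-register address:)

    #include <stdint.h>

    /* Hypothetical status register. Without volatile, the compiler
       may hoist the load out of the loop and spin forever on a stale
       value; volatile forces a fresh read on every iteration. */
    #define STATUS_REG (*(volatile uint32_t *)0x40001000u)

    void wait_ready(void)
    {
        while ((STATUS_REG & 0x1u) == 0)
            ;  /* each test re-reads the register */
    }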
Reply by ●March 10, 2017
Walter Banks <walter@bytecraft.com> writes:
> The brouhaha about "@" and C is really more about having supporting
> syntax to be able to explain what is desired without needing an
> indirect definition. That is my real argument, not that it can't be
> done.

It's probably clearest to just call functions to bang on those
registers. The compiler can inline them so there shouldn't be
overhead.

> It is a real eyeopener to spend some time with some of the current
> crop of programmers who are using what many of us would consider a
> toy language to actually achieve some pretty remarkable results. It
> took me a long time to respect what they are doing.

Do you mean C?
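A sketch of the function-based style suggested here, with hypothetical
names and addresses; a modern compiler will normally inline each
accessor down to a single load or store:

    #include <stdint.h>

    #define UART0_BASE 0x4000C000u  /* made-up base address */

    /* Register accessors: callers never see the raw addresses. */
    static inline void uart_putc(uint8_t c)
    {
        *(volatile uint8_t *)(UART0_BASE + 0x00u) = c;
    }

    static inline uint8_t uart_status(void)
    {
        return *(volatile uint8_t *)(UART0_BASE + 0x04u);
    }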
Reply by ●March 10, 2017
On 3/10/2017 1:30 PM, Walter Banks wrote:
> On 2017-03-10 12:24 PM, Don Y wrote:
>>>
>>> If something is important to the correct operation of the program,
>>> then I like it to be visible in the source code. A useful benefit
>>> is that the information is easily found and analysed by the IDEs
>>> and/or other source code manipulation tools around.
>>
>> I guess it depends on what you consider "source code". How do you
>> treat makefiles, linker scripts, etc.? Clearly they are all
>> important to the *intended* operation of the program -- as are the
>> actual tools, themselves.
>>
>> How much of this cruft do you clutter the "sources" with in the
>> attempt to ensure they accompany the sources? What about applications
>> wherein multiple languages are combined; how do you nail down the
>> "implementation defined" behavior of their interfaces? What order
>> will your Java/C/Python/foo function build its stack frame? How will
>> your FORTRAN/Pascal/ASM/bar module interface to it??
>
> There are lots of tools issues that should be re-examined. There are
> many cases where the tool execution is backwards to what generates
> the best

Yes, but you also have to figure the "developer" into that calculus.
People need to be able to wrap their brains around their solutions
(esp if those solutions import "components" from other crania!) A
machine can have (near) infinite storage; humans tend to need to
reduce things to simpler mental models, eliding many of the details
(that a machine *could* examine and exploit).

Additionally, there is value in "hiding detail" when dealing with
human agents; you can (undoubtedly) "trust" a machine with those
details -- cuz, you'd know the constraints placed on what it could
*do* with them! (something that you wouldn't trust to a human!)

> code. The compile link sequence where some key information about the
> target ISA or processor architecture is not known until link time
> sometimes forces the compiler to use a subset of the actual
> processor's ISA rather than take advantage of a specific member
> feature. The recent discussion of the Mill belt length is a good
> example of how important this is. This type of compiler approach can
> encapsulate the specific processor variations and make application
> wide optimizations with relative ease.
>
> Switching this around so the compiler is focused on creating code
> for a well defined target rarely is anything more than including a
> device specific header file in the application. (as a side effect
> eliminating the link step)
>
> What's wrong with a single set of sources that defines an
> application, no command line options or linker scripts just an
> application including the definition of the target, files and
> libraries it needs. Compilation is both faster by many factors and
> there is a simple self contained project that can be easily
> re-created after a decade or more.

That would depend on the size and complexity of the project, right?
I have 192 processors (each with multiple cores) in my current design.
It would be *delightful* if <something> could sort out how best to
allocate resources at run-time instead of my crude metrics.

But, those tools don't exist and aren't likely to any time soon. As a
result, I have to develop solutions that meatware can manage... and,
with existing tools as I don't relish becoming a tool-designer any
more than absolutely necessary for *my* needs -- with little or no
interest in "yours" (out of no malice to "you")

> (The oldest project we have helped customers re-create in the last
> year was archived by the customer in 1988; we have copies of every
> released tool set. Start to recreating an identical HEX file: < 2
> hours from receiving the customer support request email)

I can build any project almost instantly -- as long as I don't ALSO
need to bring up target hardware and/or hardware debugging tools.
(I've been diligent about preserving tools AND development
environments)

> The brouhaha about "@" and C is really more about having supporting
> syntax to be able to explain what is desired without needing an
> indirect definition. That is my real argument, not that it can't be
> done. Most languages have some way to access the underlying machine;
> fewer of these languages do so in a simple clean way.

Agreed. Software (language designers) seem to think in terms of some
EXISTING environment, not in the CREATION of that environment. This
may be a necessary practical concession: how much effort should go
into supporting bare metal when that metal changes at a rate that
approaches a typical development cycle length?

> Don, I am not arguing to create a more complex world, but in the
> area of language design why are many tool sets burdened with
> solutions to the computer limitations of 1980?

Walter, you're preaching to the choir! But, in my case, I am
interested in the *applications* and suffer the inadequacies of the
tools.

Why are telephones (to this day!) so "uncomfortable" to hold? Hasn't
anyone ever looked at the characteristics of a human hand and how it
might wrap around a "handset"? How much time should folks ("users")
spend thinking about this vs. just making their phone calls and
tolerating the klunky physical implementation?

My current project involves 4 or 5 different languages. And, concepts
that don't inherently map to traditional language primitives. How
much time do I spend trying to bend my implementation to the
languages? Or, the languages to the implementation? Do I add keywords
to the language to cause the automatic generation of client- and
server-side stubs for procedures and functions that are intended to
be RPC's? Or, do I invent an IDL to use alongside the targeted
languages? And, "manually" create the necessary stubs?

> It is a real eyeopener to spend some time with some of the current
> crop of programmers who are using what many of us would consider a
> toy language to actually achieve some pretty remarkable results. It
> took me a long time to respect what they are doing.

I think there are a few different issues involved (ignoring egos,
NIH, etc.).

First, it seems difficult for those of us with "long histories" to
wrap our heads around how much technology has changed over the course
of our *individual* careers. Recall, for many of us, "software" didn't
exist before we were born -- so, it's not like trying to understand
the advances in PLUMBING since the Roman Era! :>

I have to make a very *conscious* effort not to pack 8 "flags" into a
byte. Or, "reuse" a byte for different purposes by exploiting
knowledge of which parts of the application are running at any given
time.
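For readers who never had to do this, a sketch of the habit just
described; the flag names are purely illustrative:

    #include <stdint.h>

    /* Eight independent flags packed into a single byte. */
    #define FLAG_READY  (1u << 0)
    #define FLAG_ERROR  (1u << 1)
    #define FLAG_BUSY   (1u << 2)
    /* ... up to (1u << 7) */

    static uint8_t flags;

    void example(void)
    {
        flags |= FLAG_READY;            /* set   */
        flags &= (uint8_t)~FLAG_ERROR;  /* clear */
        if (flags & FLAG_BUSY) {        /* test  */
            /* ... */
        }
    }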
I can recall counting subroutine invocations in an early 8085 project
so that I could map the 7 most common invocations to 7 of the
"restart" vectors (thus allowing a 3 byte "CALL" to be replaced by a
one byte "RST" -- just to save TWO BYTES several times) in order to
shrink the size of the binary to avoid adding another $50 EPROM to the
product's cost! Or, carefully selecting the condition code for a
JUMP/CALL (and the surrounding code's structure) to leverage my
knowledge of how often that condition will be met -- to trim a few
clock cycles off that execution path through the code.

It's also hard to comprehend how much *faster* (and cheaper) the
hardware has become. When I started my current project, I was unduly
biased by my past conceptions of costs and spent a fair bit of effort
"skimping" on the system architecture to save a few dollars here and
there. And, relied on COTS hardware (e.g., PC's) for the "heavy
lifting". This has a profound influence over what you can do and
where you can "do" it!

Over time, it became !painfully! obvious that this was me clinging to
an obsolete idea of hardware costs. Why not "PC's (figuratively)
everywhere"? Sure, no need for all those displays, keyboards, UI's,
etc. But, the compute power is cheap enough (if you stay way back
from the state of the art) that it's silly to cripple the
implementation by "prematurely optimizing (hardware) cost"!

Likewise, why burden an "average user" with having to understand the
issues involved in "programming"? Why should they have to care about
data types, overflow, roundoff error, etc.? With all that horsepower
available, why not do some of the heavy lifting FOR the user?

    length = 12 feet 8.5 inches
    width = 8 feet 6 inches
    height = 4/3 meters
    volume = length * width * height
    print "Volume is " (cubic yards) volume " cubic yards."

[I cringe each time I see something done in a runtime that "shouldn't
have to be done", in some other approach to the problem -- e.g., GC]

These same issues have analogs on the development side. Build cycles
for the aforementioned 8085 project took 4 hours (edit, compile,
link, burn EPROMs). I'd get *two* passes at the code -- using a
'scope probe as my debugger -- in a normal work day. And, had to
share THE development system with 2 other folks who were operating
under similar constraints (with scarce target prototypes!).

Now, it's almost easier to "make world" than it is to ensure you've
got ALL the dependencies properly defined in a set of makefiles. And
why worry about incremental backups when it's far more convenient to
just image the entire system, daily? Yet, each of these were skills
that were very important when building products that had to
outperform their implementation hardware and development budgets. How
do you rearrange your meatware to address the new "realities"?

Second, how do you square your KNOWLEDGE, gained from years of
experience, of the sorts of errors that are encountered in developing
code (regardless as to how well you've honed your skillset to avoid
these) and the LIKELIHOOD of those errors (in folks with "less
capable" skillsets?) with mechanisms in languages/tools that purport
to minimize those? It's like complaining that you're being FORCED to
wear a seatbelt while driving! (is there some reason you WANT to be
injured in a crash?)

Third, how can you ignore the inevitable FUTURE evolution of the
product, tools, *CODEBASE*, etc. given historical evidence?
How many products are flash-in-the-pans and exist as little
historical islands without contributing to their successors and
peers?

In my current project, I have to balance many competing design
criteria at different levels in the design. E.g., I wouldn't want to
code the OS in the same sort of language that "users" use for
scripting. Nor would I want future subsystem developers to have to
deal with the intricacies of the OS at anything beyond a certain
level of abstraction.

So, I trade off complexity, capability, "safety", reliability, etc.
as best fitting the capabilities of the (types of) folks who will
*probably* be involved at the different levels in the project's
implementation (a "crack coder" would grimace at the constraints
placed on him by the scripting language; and an "average user" would
be glossy-eyed trying to understand how the OS works!)

I think the same sort of calculus is involved when developing or
embracing any toolset (isn't a product just a different type of
tool?).
Reply by ●March 10, 2017
On 2017-03-10 3:37 PM, Paul Rubin wrote:
> Walter Banks <walter@bytecraft.com> writes:
>> The brouhaha about "@" and C is really more about having
>> supporting syntax to be able to explain what is desired without
>> needing an indirect definition. That is my real argument, not that
>> it can't be done.
>
> It's probably clearest to just call functions to bang on those
> registers. The compiler can inline them so there shouldn't be
> overhead.
>
>> It is a real eyeopener to spend some time with some of the current
>> crop of programmers who are using what many of us would consider a
>> toy language to actually achieve some pretty remarkable results. It
>> took me a long time to respect what they are doing.
>
> Do you mean C?

More like the whole crop of interpreted languages now being used. I
tend to think of C as something of our generation. A high percentage
(but not all) of the compiler projects I have done have been C
compilers.

w..
Reply by ●March 10, 2017
On 2017-03-10 4:10 PM, Don Y wrote:
>> What's wrong with a single set of sources that defines an
>> application, no command line options or linker scripts just an
>> application including the definition of the target, files and
>> libraries it needs. Compilation is both faster by many factors and
>> there is a simple self contained project that can be easily
>> re-created after a decade or more.
>
> That would depend on the size and complexity of the project, right?
> I have 192 processors (each with multiple cores) in my current
> design. It would be *delightful* if <something> could sort out how
> best to allocate resources at run-time instead of my crude metrics.
>
> But, those tools don't exist and aren't likely to any time soon.

Most of my time now is working on both tools and ISA's. There have
been some really significant changes in both approaches to compiling
for heterogeneous parallel environments and execution environments
that have hundreds to thousands of processors in them. We are likely
sooner rather than later to see some major shifts in tool sets. I am
currently working on a reference design for one of these that has
several hundred execution units.

w..
Reply by ●March 10, 2017
On 10/03/17 19:33, Don Y wrote:
> On 3/10/2017 12:09 PM, Tom Gardner wrote:
>
>>>>> The source code itself doesn't need to know about where
>>>>> variables go[1].
>>>>>
>>>>> Its part of the responsibility of tools that are invoked later
>>>>> in the build cycle.
>>>>
>>>> This is a philosophical difference.
>>>>
>>>> If something is important to the correct operation of
>>>> the program, then I like it to be visible in the source
>>>> code. A useful benefit is that the information is easily
>>>> found and analysed by the IDEs and/or other source code
>>>> manipulation tools around.
>>>
>>> I guess it depends on what you consider "source code".
>>> How do you treat makefiles, linker scripts, etc.? Clearly
>>> they are all important to the *intended* operation of
>>> the program -- as are the actual tools, themselves.
>>
>> Keep it simple...
>>
>> Source code => as defined in the language standard.
>
> So, you're requiring everything "that is important to the correct
> operation of the program" to reside *in* that "source code", NOT
> handled by the linkage editor (?)

I'm not requiring anything. I'm stating what I believe to be
desirable, and why.

> [The linkage editor is outside the scope of the "language standard"]
>
>> If something is tool-specific, then it is not part
>> of the source code. Hence compiler arguments are
>> not part of the source code.
>>
>> Code inspection tools such as browsers, analysers,
>> and compliance checkers work on the source code.
>> That's important.
>>
>>> How much of this cruft do you clutter the "sources" with
>>> in the attempt to ensure they accompany the sources?
>>> What about applications wherein multiple languages are
>>> combined; how do you nail down the "implementation defined"
>>> behavior of their interfaces? What order will your Java/C/Python/foo
>>> function build its stack frame? How will your FORTRAN/Pascal/ASM/bar
>>> module interface to it??
>>>
>>>> In the same vein, in C I dislike having correct code
>>>> operation being dependent on combinations of command
>>>> line compiler arguments.
>>>
>>> There's usually a difference between "correct" and "desired".
>>
>> Not really.
>>
>> If there is a distinction between "desired" and
>> "correct" then I can instantly rewrite the program
>> to be much faster and much smaller.
>
> "Desired" implies some acknowledgement of the application and tools
> involved.

Que?

> Ages ago, it wasn't uncommon to have applications that did
> bank switching, overlays, etc. WHERE things resided was
> a crucial part of how they would work IN THAT ENVIRONMENT.
> E.g., if the trampoline code could ever be mapped *out* of
> the address space, then the bank-switching feature came to an
> unceremonious halt.
>
> Move the exact same application to a machine with a larger
> physical address space, tweak the linker script accordingly
> and the code is still "correct".
>
> In the first case, where things are located can also have a marked
> impact on size and space ("desired" behavior); by avoiding the
> overhead of "distant" references for those things that can (or must)
> benefit from the "near" efficiencies.

What's your point about future language features?

>>> It's unfortunate when "correct" relies on command line
>>> arguments to resolve some "implementation defined behavior"
>>> with which the compiler could, otherwise, take liberties.
>>
>> That is always the case with C/C++, unless the program
>> uses no separately compiled libraries and turns off
>> all optimisations.
>
> No. You can discipline yourself to avoid relying on undocumented
> behaviors or "letting the compiler choose" (among possible
> interpretations). Of course, writing PORTABLE code under those
> constraints is considerably harder (how many folks actually
> worry about exceeding the implementation-specific limits for
> particular data types? Or, verify that their code will continue
> to function properly *if* those limits are considerably higher
> than "nominal" from the Standard?)

Not in C/C++. The languages remove the ability of any compiler to
determine what is and is not aliased. That precludes many
optimisations unless you assert there won't be aliasing (a restrict
sketch follows this post).

>> Other languages are much better in that regard.
>
> Sure. But that comes at the expense of either requiring more
> work of the processor (given that the language isn't tailored
> to a particular processor) *or* rendering extra processor
> capabilities moot.
>
> [E.g., imagine all ints were 16b and you developed an application
> on a 128b processor in <whatever> language. Or, that all ints were
> 128b and you wanted to run the application on a 16b processor. etc.]

Shrug.

>>> Likewise, if the order and locations at which objects
>>> can be bound can arbitrarily be altered and affect operation
>>
>> That is just one small consideration in this context.
>
> Of course! I was merely drawing attention to the fact (which could
> easily have been overlooked) that *order* can affect the DESIRED
> operation (e.g., performance).
>
>>> [These should be eschewed, IMO]
>>>
>>> (This is an issue on many processors, without concern for the
>>> actual I/O's)
>>>
>>> Of course, there's no way for the tool to know/enforce these
>>> constraints other than a suitable note to the future developer!
>>
>> And that's undesirable.
>
> But largely unavoidable. It's also undesirable for developers to
> write crappy code, fail to document their algorithms/implementations,
> fail to include comprehensive test suites, etc. But, there is a
> point at which you have to assume "professional" means more than
> "paid to do a job".

Nice idea. Meanwhile, back on planet earth...

>>>>>> It is language independent and very easy to add to compilers
>>>>>> without changing the basic form of the language.
>>>>>
>>>>> it very nearly destroys the portability of code that uses it...
>>>>
>>>> That seems unimportant to me. I cannot think of a
>>>> reason why you would need to nail down addresses
>>>> in portable code. Of course "portable" is not a
>>>> black and white concept!
>>>
>>> Therein lies the rub. Code can be "portable" yet still tied to
>>> a particular processor (but a different implementation). E.g.,
>>> reset_processor()...
>>
>> More than just a processor, consider different
>> boards with the same processor.
>
> I addressed "different boards" with my "different implementation"
> reference.

You wrote "... yet still tied to a particular processor (but a
different implementation)..." i.e. a different implementation of a
processor.

> E.g., relocating a UART to a different location in
> (memory/IO) space, altering the "sense" of the address lines to
> that device (i.e., so consecutive registers are NOT in consecutive
> locations *or* are presented in an entirely different order),
> scrambling the data lines (e.g., so 0x53 is returned when the
> character '5' is received), etc.
>
> The "reset_processor()" reference was intended to suggest the fact
> that most processors "come out of reset" at a particular (fixed!)
> point in their address space, regardless of the rest of the board
> around them.
>
> [This sort of thing is increasingly common with SoC targets where
> many of the design choices (e.g., address map) have been taken away
> from the *hardware* designer]

Ah. You are choosing a definition of "portable" to coincide with your
contentions. Difficult to argue against that.

>>>> Any examples?
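A minimal sketch of the aliasing assertion mentioned above, using
C99's restrict qualifier (the function itself is hypothetical):

    #include <stddef.h>

    /* Without restrict, the compiler must assume dst and src may
       overlap, forcing it to reload src after every store. With
       restrict, the programmer asserts no aliasing, so the loop can
       be reordered or vectorised freely. */
    void scale(float *restrict dst, const float *restrict src,
               float k, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = k * src[i];
    }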
Reply by ●March 10, 2017
On 3/10/2017 11:43 AM, George Neuner wrote:
>> On 3/8/2017 10:20 AM, George Neuner wrote:
>>>
>>> A closure can be defined over a function anywhere the function is
>>> in scope. A function F exported from module X may be used by a
>>> closure in module Y which imports X. Similarly a closure defined
>>> in module X may be exported from X as an opaque object.
>>
>> Two heaps created, each by referencing a function:
>>    instantiate_heap(memory, metrics, allocation_policy,
>>                     release_policy, ...)
>>
>> Module that *defines* that function must remain "loaded" as it
>> defines the memory managers (directly or indirectly, depending on
>> what is return-ed) for each heap.
>>
>> Any "choices" that are stored (again, avoiding "static") in the
>> function's definition need to persist beyond the *invocation* of
>> instantiate_heap.
>>
>>> Recall that a module may require "initialization" when it is
>>> imported. Closures defined for export would be created at that
>>> time.
>>
>> Yes. But who ensures they remain present <somewhere> after the
>> module itself has served its purpose? I.e., the state is "hidden"
>> and not obvious to the "invoker".
>
> You're moving the goal post again.
>
> You asked if closures introduced a GC penalty - the answer to which
> is "not necessarily". Now you are complaining that functions
> referenced by a closure need to remain available for its entire
> lifetime.
>
> I'd say "duh" but that would be redundant.

Yes -- so how does the tool and/or developer ensure it provides the
intended functionality when required? How do you imbue the tool with
the "smarts" to be able to analyze these cases (e.g., impose a
"persistent" implementation) in light of the different ways that the
developer can opt to use it? Alternatively, how do you give the
developer tools that let *him* convey those dependencies to the tool?

Unless you restrict how they can be used, I don't see how you can
address each possibility...

> A module has not "served its purpose" if its functions may yet be
> called in the future. That, in general, is undecidable.
>
> The GC standard of "reachability" is conservative - GC does not
> consider whether an object will be used again because it can't know
> that. A compiler is in a better position to figure out usage, but
> even there the only way to know for certain is to simulate execution
> of the program and observe that it terminates without ever again
> referencing the object in question.
> [Hint: indefinite loops can't be guaranteed to terminate.]

Exactly. I "avoid" this issue by requiring the developer to handle
the "resource reclamation". This means he can make a mistake and
shoot off both feet (so the OS has to know to catch these sorts of
problems, indirectly).

>> [If initialize_heap RETURNS "memory manager"s that are then used to
>> manage each individual heap, then initialize_heap() itself is still
>> dynamically bound to those objects. If you unload the module
>> containing it, then the memory manager objects (in this example)
>> that it created for the callers also disappear.]
>
> Not necessarily - it depends on the module structure.
> E.g.,
>
>                  +----------------+
>               ---| heap functions |
>              /   +----------------+
>             /            |
>            /             v
>           /      +-----------------+
>  <closure> <---  | initialize_heap |
>                  +-----------------+
>
> initialize_heap can be in a module separate from other heap control
> functions. Once the (heap interface) closure is created, the module
> containing initialize_heap is unneeded and could be unloaded.

That's how I "manually" implement these things. But *I* have to keep
track of whether the functions are loaded and *where* they are
loaded. E.g., if I want to migrate something to another node, then I
need a handle by which the heap functions (and the bindings made at
"initialization") can be invoked "later". I just don't see how a
compiler can be aware of this sort of thing (without PREVENTING me
from doing these sorts of things).

> Your imagination is failing.
>
>>> You can do this even with stack bound closures. Consider that
>>> imported namespaces need to be available before the importing
>>> module's code can execute. However, even in the case of the 1st
>>> (top) module, the *process* invoking it already exists, and
>>> therefore there already is a stack.
>>>
>>> With appropriate language support, a module which exports closures
>>> can construct them on the process stack at the point when the
>>> module is 1st imported. Then they would be available anywhere
>>> "below" the site of the import (subject to visibility).
>>
>> So, the memory manager example would necessitate loading that
>> "system object" (memory MANAGER) into the user's process space.
>>
>> Or, keeping process/task specific "state" like that in some
>> protected per-process portion of privileged memory.
>
> Depends on how the system is structured. Certainly functions that
> are needed would have to remain accessible, but proper structuring
> can reduce incidental retention of things that are unneeded.
>
>>>> [E.g., consider the heap instantiator example: why does *it* have
>>>> to persist just so something it provides remains accessible?]
>
> It doesn't.

Unless the parameters (e.g., allocation_policy, etc.) that were
specified in its invocation are embodied *within* it.

I think a lot of these sorts of things have an underlying assumption
that they are part of a persistent, single "program unit" so the
compiler doesn't have to "worry" about lifespan. E.g., crt0.s can "go
away" after main() is reached. OTOH, it's much harder to decide when
the floating point library can (safely) be unloaded! But, the
*developer* COULD know this (he could also get it wrong!)

>>> In any case, the functions involved cannot be unloaded (at least
>>> not easily) if they will be needed by something that is still
>>> running.
>>
>> Does the language magically enforce this dependence? Or, does the
>> developer have to be aware of the potential "gotcha"? Or, does
>> something else reference count, etc.?
>
> What "language" are we talking about? Dependencies can be enforced
> at many levels.

*Your* language of choice. I'm merely evaluating it (as a language
feature) in *my* build/execute environment. "How would I make it
work?" "How could a compiler ensure that it works as intended *in* my
environment?" etc.

E.g., it's relatively easy to make most "conventional" (whatever that
means) languages work in a multitasking environment without intimate
knowledge of their code generators (though you may have to force some
responsibilities onto the developer to do so reliably). Languages
don't tend to place temporal guarantees on anything -- the next
statement could be executed nanoseconds after the previous one OR
weeks later; the accuracy of the *code* isn't affected by its timing.
I can impose some extensions to a "system" without annoying the
compiler, linker, developer, etc. *if* I am careful in how I
implement them AND what I expect of these "parties".

So, the question becomes: "what responsibilities/constraints does
THIS force onto the developer (to ensure the assumptions that the
compiler REASONABLY makes won't be violated)?"
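A C-flavoured sketch of the heap "closure" being discussed, with
hypothetical names throughout: the returned object captures the
policies chosen at instantiation, so the code behind its function
pointers must stay resident (or be tracked somehow) for the object's
whole lifetime:

    #include <stddef.h>
    #include <stdlib.h>

    typedef struct heap heap_t;
    struct heap {
        void *(*alloc)(heap_t *, size_t);
        void  (*release)(heap_t *, void *);
        int    allocation_policy;   /* captured at instantiation */
        int    release_policy;
        void  *arena;               /* the managed memory itself */
    };

    /* Placeholder policy implementations; a real manager would
       differ. The module defining these must outlive every heap_t
       created below -- the lifetime problem raised in the thread. */
    static void *pl_alloc(heap_t *h, size_t n) { (void)h; return malloc(n); }
    static void  pl_release(heap_t *h, void *p) { (void)h; free(p); }

    heap_t *instantiate_heap(void *memory, int alloc_pol, int rel_pol)
    {
        heap_t *h = malloc(sizeof *h);
        if (h) {
            h->alloc = pl_alloc;
            h->release = pl_release;
            h->allocation_policy = alloc_pol;
            h->release_policy = rel_pol;
            h->arena = memory;
        }
        return h;   /* instantiate_heap itself could now be unloaded */
    }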
Reply by ●March 10, 2017
On 3/10/2017 3:42 PM, Walter Banks wrote:
> On 2017-03-10 4:10 PM, Don Y wrote:
>>> What's wrong with a single set of sources that defines an
>>> application, no command line options or linker scripts just an
>>> application including the definition of the target, files and
>>> libraries it needs. Compilation is both faster by many factors and
>>> there is a simple self contained project that can be easily
>>> re-created after a decade or more.
>>
>> That would depend on the size and complexity of the project, right?
>> I have 192 processors (each with multiple cores) in my current
>> design. It would be *delightful* if <something> could sort out how
>> best to allocate resources at run-time instead of my crude metrics.
>>
>> But, those tools don't exist and aren't likely to any time soon.
>
> Most of my time now is working on both tools and ISA's. There have
> been some really significant changes in both approaches to compiling
> for heterogeneous parallel environments and execution environments
> that have hundreds to thousands of processors in them.

But they are (largely) *static* environments (?). The toolchain
doesn't have to decide when to bring another processor on-line... or,
when it can retire a running processor and migrate its workload to
some OTHER processor, etc. Or, which aspects of an application should
be bound to specific processors (nearness of related I/Os) and which
aspects should AVOID particular processors (as they were in insecure
locations).

[Simulations of my first workload scheduler immediately brought every
processor online and kept them there! D'uh!]

I've found it "trying" to come up with even a suitable set of
criteria by which to constrain these choices. E.g., "performance" can
be evaluated in a variety of ways: throughput, response time, power
consumption, redundancy, etc. Just coming up with *a* set of criteria
is a challenge. And, if the user can bias this at run-time, it
becomes even more challenging! (perhaps userX might be willing to
suffer slower response times for reduced power consumption?)

[As I get older, I am encountering more applications where The Right
Answer is really elusive and often not available "at compile time".
*Or*, even DESIGN TIME! (I'm still at a loss to formulate a test
suite to score the performance of the different speech synthesizer
implementations I've created -- let alone their "costs"! sqrt(3) =
1.732 is a "better" answer than sqrt(3) = 1.7; but, how do you decide
which pronunciation of which utterance is "better" -- and, how do you
weight the performances of the limitless number of POSSIBLE
utterances to come up with a composite score??)]

> We are likely sooner than later to see some major shifts in tool
> sets. I am currently working on a reference design for one of these
> that has several hundred execution units.

I'd be interested in seeing what directions you took when you have
something to share! And, the assumptions you made along the way.

Time for my evening jaunt... cripes, still 90 degrees -- this won't
be fun. :<
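A toy sketch of the kind of run-time-biasable placement scoring
described above; every metric, weight, and name here is hypothetical:

    /* Per-node measurements the scheduler might collect. */
    struct node_metrics {
        double throughput;     /* higher is better */
        double response_time;  /* lower is better  */
        double power;          /* lower is better  */
    };

    /* Weights a user could adjust at run-time, e.g. trading
       response time for power. */
    struct user_bias {
        double w_throughput, w_response, w_power;
    };

    /* Lower score = better placement candidate. */
    static double placement_score(const struct node_metrics *m,
                                  const struct user_bias *b)
    {
        return b->w_response   * m->response_time
             + b->w_power      * m->power
             - b->w_throughput * m->throughput;
    }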
Reply by ●March 10, 2017
On 11/03/17 00:02, Don Y wrote:
> On 3/10/2017 3:42 PM, Walter Banks wrote:
>> Most of my time now is working on both tools and ISA's. There have
>> been some really significant changes in both approaches to
>> compiling for heterogeneous parallel environments and execution
>> environments that have hundreds to thousands of processors in them.
>
> But they are (largely) *static* environments (?). The toolchain
> doesn't have to decide when to bring another processor on-line...
> or, when it can retire a running processor and migrate its workload
> to some OTHER processor, etc. Or, which aspects of an application
> should be bound to specific processors (nearness of related I/Os)
> and which aspects should AVOID particular processors (as they were
> in insecure locations).

The more advanced toolchains are doing similar things. They
instrument themselves, determine what the code+data is *actually*
doing at runtime, and optimise the **** out of that. That's as
opposed to what the compiler can guess they are doing and where a
compiler has to make pessimising assumptions. And such techniques
also work with C.

For 18 year old (gulp) results, google for "hplb dynamo".

And don't forget that some related techniques are implemented in
processors' hardware microarchitecture.

> [Simulations of my first workload scheduler immediately brought
> every processor online and kept them there! D'uh!]
>
> I've found it "trying" to come up with even a suitable set of
> criteria by which to constrain these choices. E.g., "performance"
> can be evaluated in a variety of ways: throughput, response time,
> power consumption, redundancy, etc. Just coming up with *a* set of
> criteria is a challenge. And, if the user can bias this at run-time,
> it becomes even more challenging! (perhaps userX might be willing to
> suffer slower response times for reduced power consumption?)
>
> [As I get older, I am encountering more applications where The Right
> Answer is really elusive and often not available "at compile time".
> *Or*, even DESIGN TIME! (I'm still at a loss to formulate a test
> suite to score the performance of the different speech synthesizer
> implementations I've created -- let alone their "costs"! sqrt(3) =
> 1.732 is a "better" answer than sqrt(3) = 1.7; but, how do you
> decide which pronunciation of which utterance is "better" -- and,
> how do you weight the performances of the limitless number of
> POSSIBLE utterances to come up with a composite score??)]

You are correct in presuming that you can't do an optimal job at
compile time, since the information isn't there - and can't be there.

The bonus of avoiding getting toolchains to make premature
optimisations is that the same runtime optimisation techniques also
work with different processors. There are disadvantages, of course.
TANSTAAFL.

>> We are likely sooner than later to see some major shifts in tool
>> sets. I am currently working on a reference design for one of these
>> that has several hundred execution units.
>
> I'd be interested in seeing what directions you took when you have
> something to share! And, the assumptions you made along the way.
>
> Time for my evening jaunt... cripes, still 90 degrees -- this won't
> be fun. :<