
Language feature selection

Started by Don Y March 5, 2017
On 12/03/17 19:15, George Neuner wrote:
> On Sat, 11 Mar 2017 01:32:04 +0000, Tom Gardner
> <spamjunk@blueyonder.co.uk> wrote:
>
>> On 11/03/17 01:22, George Neuner wrote:
>>
>>> There are at least 2 compilers which do whole program alias analysis.
>>
>> How do they do that if the program includes a library
>> for which the source is not available, and for which
>> the compiler flags are not known?
>
> Obviously "whole" analysis can't be done in that situation. However,
> in that situation, it isn't needed either.
>
> Library code can see only what data is passed to it: aliasing, if there
> is any, can only be in function call parameters. Of course, the
> aliasing may be in parameters to different functions.
>
> Aliasing introduced *within* the library, if any, is the fault and
> concern of the library developer.
>
> Typically most libraries *are* compiled pessimistically unless they
> are deliberately intended for high performance use ... math, image
> processing, etc. ... and in that case there will be all kinds of
> documented restrictions on how to use the library functions correctly.
>
> If a pessimistic canned library is a critical path for a program, then
> performance may suffer and there is little the programmer can do about
> it [other than find another library]. But use of canned libraries
> doesn't affect better optimization of the rest of the program.
>
> In general, it's a mistake to compile a library for distribution using
> very high optimization settings. It makes the library brittle and
> hard to use correctly. It is acceptable for applications that require
> maximum performance, but most applications do not fall under that
> classification. In general, it is more important that a library be
> defensive with its input and/or tolerant of user mistakes than it is
> to provide high performance.
>
> [Remember that in the real world outside of C.A.E, the average skill
> level of a software developer is only slightly above novice. I've
> mentioned occasionally that I believe a lot of so-called "developers"
> would be doing the world a favor by finding other employment.]
All very plausible. It can be contended that embedded systems have to
make full use of the available resources (processor, memory etc). That
suggests requiring highly optimised code, and hence requiring that
libraries' source code is available.

Getting the optimisation/pessimisation compiler flags right for code
written by other people is not easy to do correctly. It is too easy to
get them "almost right" so that they pass the unit and integration
tests, but intermittently fail in the field. And that's still true for
non-novice developers, unfortunately.

Not a solid starting point for a system that has to be reliable, IMNSHO.
On Sat, 11 Mar 2017 11:13:53 +0000, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:


>In the C/C++ /language/, aliasing is a pig resulting in
>pessimised code. Typically, unlike other languages, you
>have to resort to giving extra assertions to the /tools/; those
>assertions are the source of quite a few subtle problems.
Some problem aliasing is (almost) unavoidable, but the unfortunate
truth is that much of the aliasing that *is* a problem in C is
self-inflicted by programmers: the idiomatic style that abuses pointers
to, e.g., strength reduce address calculations, or "shorten" a
long-winded name (e.g., a deeply nested struct member), etc. That style
evolved at a time when compilers did comparatively little optimization
and performance of general ALU code was much more sensitive to
instruction selection than is true of modern CPUs [e.g., 20 cycles for
a multiply vs 2 for a shift, etc.].

Using a modern C compiler, array code written in C using *indices*
_can_ achieve, within a few %, similar performance to comparable code
written in Fortran. But if the C programmer tries to "optimize" the
array code using pointers, it is more likely that the optimizer will be
confused and the compiler will produce less than optimal code. [And
most programmers are not capable of writing the optimal code without
assistance from the compiler.]

YMMV,
George
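[To make the index-vs-pointer point concrete, here is a minimal sketch
-- hypothetical code, not from the posts above. The indexed form gives
the optimizer the cleanest view of the access pattern; the
pointer-walking form is behaviorally identical but, on a modern
compiler, can only hinder analysis.]

    #include <stddef.h>

    /* Index form: straightforward for a modern compiler to analyze
       and (given non-overlap information) vectorize. */
    void scale_indexed(float *dst, const float *src, float k, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i] * k;
    }

    /* Hand-"optimized" pointer form: the strength reduction buys
       nothing on a modern CPU and can obscure the induction variable
       and aliasing relationships from the optimizer. */
    void scale_pointers(float *dst, const float *src, float k, size_t n)
    {
        const float *end = src + n;
        while (src < end)
            *dst++ = *src++ * k;
    }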
On 12/03/17 21:26, George Neuner wrote:
> On Sat, 11 Mar 2017 11:13:53 +0000, Tom Gardner
> <spamjunk@blueyonder.co.uk> wrote:
>
>> In the C/C++ /language/, aliasing is a pig resulting in
>> pessimised code. Typically, unlike other languages, you
>> have to resort to giving extra assertions to the /tools/; those
>> assertions are the source of quite a few subtle problems.
>
> Some problem aliasing is (almost) unavoidable, but the unfortunate
> truth is that much of the aliasing that *is* a problem in C is
> self-inflicted by programmers: the idiomatic style that abuses
> pointers to, e.g., strength reduce address calculations, or "shorten"
> a long-winded name (e.g., a deeply nested struct member), etc. That
> style evolved at a time when compilers did comparatively little
> optimization and performance of general ALU code was much more
> sensitive to instruction selection than is true of modern CPUs [e.g.,
> 20 cycles for a multiply vs 2 for a shift, etc.].
>
> Using a modern C compiler, array code written in C using *indices*
> _can_ achieve, within a few %, similar performance to comparable code
> written in Fortran. But if the C programmer tries to "optimize" the
> array code using pointers, it is more likely that the optimizer will
> be confused and the compiler will produce less than optimal code.
Good compilers can sometimes follow the source of pointers and other
"tweaking", and get the same information as if the programmer had used
arrays or structs directly. But yes, it used to be the case that to get
the most efficient code, you would do all sorts of "hand optimisation"
to your source code in order to help the compiler: changing array
accesses to pointers, using do..while loops instead of for loops,
shifts instead of multiplies or divides, etc. With a good modern
compiler, you want to do exactly the opposite! It is nicer to be able
to write code as clearly as possible, and let the compiler figure out
the details.

However, there is still scope for writing better code by giving the
compiler more information. A simple example is that careful use of
"restrict" can help significantly with aliasing situations (see the
sketch after this post).
>
> [And most programmers are not capable of writing the optimal code
> without assistance from the compiler.]
Certainly /I/ find a compiler to be invaluable in generating optimal code - or indeed, /any/ code - from my C or C++ source!
>
> YMMV,
> George
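[A minimal sketch of the "restrict" point -- hypothetical code, not
from the posts above. The qualifier is exactly the kind of extra
assertion to the tools that Tom mentions: it enables the optimization,
and it is undefined behaviour if the promise is broken.]

    #include <stddef.h>

    /* Without restrict, the compiler must assume dst, src1 and src2
       may overlap, forcing conservative code.  With it, loads and
       stores can be freely reordered and vectorized. */
    void add_arrays(float *restrict dst,
                    const float *restrict src1,
                    const float *restrict src2,
                    size_t n)
    {
        /* The programmer promises each array is accessed only
           through its own pointer within this function. */
        for (size_t i = 0; i < n; i++)
            dst[i] = src1[i] + src2[i];
    }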
Hi Dimiter,

On 3/12/2017 12:05 PM, Dimiter_Popoff wrote:
> On 12.3.2017 г. 17:45, Walter Banks wrote:
>> On 2017-03-11 1:53 PM, Don Y wrote:
>>>> It is our generation that is obsessed with optimization execution and
>>>> data space.
>>>
>>> Agreed (as I stated elsewhere). But this is only natural given our
>>> "upbringing" -- in much the same way that folks who lived through
>>> The Great Depression are far less likely to discard items than those
>>> who grew up in "times of plenty".
>>>
>>> And, while hardware has grown cheaper and more capable over the years,
>>> you still don't see 32b CPU's in *mice* or debouncing keys in keyboards!
>>
>> Developers have discovered that updating old 8 bit applications to run
>> on modern 32 bit general purpose processors has had a hidden cost.
>> Some applications are better suited to small processors for code size
>> and a variety of things: EMI from chips due to increased width of the
>> data buses, and board layout issues.
> I had - still have - some work done in the 80s I don't have to part
> with. It was done on a 6809 (under MDOS, the OS which used to run
> on Motorola's Exorciser systems; back then I ran it on hardware I had
> built, with ROM (BIOS as they now have it) I had written).
> So what did it cost me? A few weeks of work to emulate the 6809
> system I had on a Power architecture processor running under DPS.
> 46 kilobytes of code; this includes a terminal emulation (my graphics
> terminal I had designed back then), can be run in multiple instances
> in multiple windows - and runs about 50 times faster than the original
> 2MHz 09 ran (in fact than two of them, one was in the graphics
> terminal, interfaced via a parallel FIFO-ed cable to the system board;
> the terminal board was not emulated as if running on a 6809, just
> its functionality (it was easier using the dps graphics services)).
>
> The silicon all this can run on is certainly cheaper than it
> could have been back in the 80s with the 6809...
>
> I am not contradicting your point of course, I am sure the hidden
> costs you refer to do exist. But this is probably when people
> try to replicate a device programmed 30 years ago cleanly with
> some new high level language stuff put together by today's
> popular methods (which are very messy in my book, but this is
> another matter).
I think the point is the assumptions that are inherent in the original
("crippled", by today's standards) implementations. Non-standard data
types, hand-optimization that doesn't lend itself to a newer processor
or language, etc.

I always cringed when I had to maintain a "small system" that someone
else had designed/implemented... no way to know what hacks he relied on
in his design (that aren't documented, of course!) As a result, I would
strongly resist the temptation to "fix" outrageously bad
implementations unless the effort was required for me to implement the
changes contracted. I'd never know if some change would break something
of which I was unaware!

[I've seen some REALLY BAD implementations of small systems -- I
suspect because the folks hired for the original effort weren't chosen
for their abilities to develop BIG/well-defined systems and were more
"hackers" than anything else]

[[I knew of one (very large!) company that assigned responsibility for
the software on a multi-million dollar development project to a
*technician* -- because he tinkered with software AS A HOBBY!]]

Note the problems the MAME folks have had getting (pseudo-)*true*
emulation of the systems that ran on 8b CPU's that were 1000 times
slower than modern machines, despite all the resources they can throw
at the problem!

[And, I suspect that these systems only *appear* to emulate their
ancestors but don't really when examined in fine detail (e.g., under
conditions of overload)]
On Sat, 11 Mar 2017 12:33:04 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

>On 3/11/2017 2:17 AM, George Neuner wrote:
>>
>> Once you introduce heap-based closures the lifetime of closures
>> becomes indefinite: it can be *estimated* by a compiler - e.g., using
>> region analysis - but runtime GC generally is better at noticing
>> disuse in a timely fashion. Module reference counting by the compiler
>> is no longer sufficient and must be augmented by runtime GC.
>
>This (as with other aspects below) suggests the compiler really
>needs to see the entire "application" ("system"). And, even then,
>can be tripped up because it doesn't know what will come and go
>(dynamic "applet" loading) or when. Add another "applet" and
>the apple-cart has been toppled and needs to be re-sorted...
Operationally there is little difference vs manual DLL use (for a
minimal sketch of the manual case, see the code after this post).

(un)Loading the module is under control of the execution environment:
the OS or VM under which the application is running. The
compiler/runtime only tells the environment whether or not the
application is currently using the module.

Overall memory use is more or less the same regardless of whether
module handling is manual or automatic.
>I haven't settled on a consistent way of handling this (which is why
>I am having trouble imagining how a compiler/tool could uniformly
>enforce it). I look at each instance and think about how it is
>intended to be used (cuz *I* am using it!) and let that information
>dictate how I'll implement. What seems natural in some cases would be
>piss-poor in others.
On the face of it that seems strange: if the module mechanism is "unnatural" in some usage, that indicates to me that the problem is not the mechanism per se, but rather that the application itself is not well suited to modules.
>In my case, "no more references" causes the OS to delete the resource
>("unreachable") -- whether that is a dynamically created object (e.g.,
>a heap, in this example) or a static one (e.g., a loaded module -- that
>might find itself RE-loaded any time now)
>
>We just think of things in different ways. You think in terms of the
>compiler's work, I think in terms of the OS/runtime's.
No. I just can't think in terms of *your* system because it is unlike
anything in my experience.

You are aware that my background includes study of both operating
systems and virtual machine environments. I always consider the runtime
environment and how the program interacts with it.

George
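[A minimal sketch of the "manual DLL use" being contrasted here, using
POSIX dlopen(); the module and symbol names are hypothetical. The point
is that the application itself decides when the module's lifetime ends,
rather than a compiler/runtime reporting usage to the environment.]

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* Manually load the module. */
        void *mod = dlopen("libplugin.so", RTLD_NOW);
        if (!mod) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        /* Look up an entry point and call it. */
        void (*run)(void) = (void (*)(void))dlsym(mod, "plugin_run");
        if (run)
            run();

        /* The programmer asserts the module is no longer in use; in
           an automatic scheme, the runtime would tell the OS/VM. */
        dlclose(mod);
        return 0;
    }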
On 12.3.2017 г. 23:40, Don Y wrote:
> Hi Dimiter,
>
> On 3/12/2017 12:05 PM, Dimiter_Popoff wrote:
>> On 12.3.2017 г. 17:45, Walter Banks wrote:
>>> On 2017-03-11 1:53 PM, Don Y wrote:
>>>>> It is our generation that is obsessed with optimization execution and
>>>>> data space.
>>>>
>>>> Agreed (as I stated elsewhere). But this is only natural given our
>>>> "upbringing" -- in much the same way that folks who lived through
>>>> The Great Depression are far less likely to discard items than those
>>>> who grew up in "times of plenty".
>>>>
>>>> And, while hardware has grown cheaper and more capable over the years,
>>>> you still don't see 32b CPU's in *mice* or debouncing keys in
>>>> keyboards!
>>>
>>> Developers have discovered that updating old 8 bit applications to run
>>> on modern 32 bit general purpose processors has had a hidden cost.
>>> Some applications are better suited to small processors for code size
>>> and a variety of things: EMI from chips due to increased width of the
>>> data buses, and board layout issues.
>>
>> I had - still have - some work done in the 80s I don't have to part
>> with. It was done on a 6809 (under MDOS, the OS which used to run
>> on Motorola's Exorciser systems; back then I ran it on hardware I had
>> built, with ROM (BIOS as they now have it) I had written).
>> So what did it cost me? A few weeks of work to emulate the 6809
>> system I had on a Power architecture processor running under DPS.
>> 46 kilobytes of code; this includes a terminal emulation (my graphics
>> terminal I had designed back then), can be run in multiple instances
>> in multiple windows - and runs about 50 times faster than the original
>> 2MHz 09 ran (in fact than two of them, one was in the graphics
>> terminal, interfaced via a parallel FIFO-ed cable to the system board;
>> the terminal board was not emulated as if running on a 6809, just
>> its functionality (it was easier using the dps graphics services)).
>>
>> The silicon all this can run on is certainly cheaper than it
>> could have been back in the 80s with the 6809...
>>
>> I am not contradicting your point of course, I am sure the hidden
>> costs you refer to do exist. But this is probably when people
>> try to replicate a device programmed 30 years ago cleanly with
>> some new high level language stuff put together by today's
>> popular methods (which are very messy in my book, but this is
>> another matter).
>
> I think the point is the assumptions that are inherent in the
> original ("crippled", by today's standards) implementations.
> Non-standard data types, hand-optimization that doesn't lend
> itself to a newer processor or language, etc.
>
> I always cringed when I had to maintain a "small system" that
> someone else had designed/implemented... no way to know what hacks
> he relied on in his design (that aren't documented, of course!)
> As a result, I would strongly resist the temptation to "fix"
> outrageously bad implementations unless the effort was required
> for me to implement the changes contracted. I'd never know if
> some change would break something of which I was unaware!
Hi Don,

my point is that instead of trying to "port" the old code it is much
easier to simply emulate the old processor instruction set
_completely_, like I did with the 6809. People just overlook it. And,
like I said, the entire thing was 45 kilobytes of PPC code (and the
actual processor emulation is a fraction of that)....

Even the 6809 - a fairly complex 8 bit CPU - was quite easy to emulate
on a Power architecture processor with its huge register set, all
registers being larger than those to emulate, etc. And processors of
that era tend to be well documented; at least those I know (Motorolas)
used to be.

As long as the original code does not do timing loops everything runs
just fine. If there are timing loops, well, they have to be taken care
of. Not hard to locate.

Then emulating on a much faster CPU makes life easier; you can just
include some virtual machinery in the emulation, e.g. trap read/writes
to old peripheral addresses and handle them on the new CPU using its
peripherals - and its power. (I did not do that IIRC, I just modified
the ROM entry points a little - I did have the sources as I had written
it - so that, say, for disk accesses via the old floppy disk controller
the thing traps and uses a memory image for a floppy disk (the largest
one was 512 kilobytes...).) A minimal sketch of such address trapping
appears after this post.

A 400 MHz Power core (603e derivative) runs about 50 times faster than
two 2 MHz 6809 CPUs could run in parallel, without the emulation being
particularly tricky or pushed; the old processor just took a lot of
cycles to do things. They all did.

Mind you, I have done the "porting" as well - when performance
mattered. This is how I started vpa - it assembled CPU32 sources into
PPC object code. The effort did not take me a few weeks as it did with
the 6809 emulation, it took me about _a year_; the vpa sources are 880
kilobytes - I have been progressing at half my average programming rate
which is about 150k/month... Much, much harder than the emulation
(well, this was the tool creation of course, not really comparable the
two things).

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI             http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
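[A minimal sketch of the peripheral-address trapping Dimiter describes
-- hypothetical addresses and handler names, not his DPS code. The
emulator's memory-access path checks whether a guest address falls in
the old I/O window and dispatches to a host-side device model instead
of guest RAM.]

    #include <stdint.h>

    #define GUEST_RAM_SIZE 0x10000u   /* 64 KB 6809 address space */
    #define IO_BASE        0xE000u    /* hypothetical peripheral window */
    #define IO_TOP         0xE0FFu

    static uint8_t guest_ram[GUEST_RAM_SIZE];

    /* Hypothetical host-side device model, e.g. a floppy controller
       backed by a disk image held in host memory. */
    extern uint8_t io_read(uint16_t addr);
    extern void    io_write(uint16_t addr, uint8_t val);

    /* Every guest load/store in the emulator core funnels through
       these two functions. */
    uint8_t guest_load(uint16_t addr)
    {
        if (addr >= IO_BASE && addr <= IO_TOP)
            return io_read(addr);   /* old peripheral, new handler */
        return guest_ram[addr];
    }

    void guest_store(uint16_t addr, uint8_t val)
    {
        if (addr >= IO_BASE && addr <= IO_TOP)
            io_write(addr, val);    /* old peripheral, new handler */
        else
            guest_ram[addr] = val;
    }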
On 12/03/17 20:26, George Neuner wrote:
> On Sat, 11 Mar 2017 11:13:53 +0000, Tom Gardner
> <spamjunk@blueyonder.co.uk> wrote:
>
>> In the C/C++ /language/, aliasing is a pig resulting in
>> pessimised code. Typically, unlike other languages, you
>> have to resort to giving extra assertions to the /tools/; those
>> assertions are the source of quite a few subtle problems.
>
> Some problem aliasing is (almost) unavoidable, but the unfortunate
> truth is that much of the aliasing that *is* a problem in C is
> self-inflicted by programmers: the idiomatic style that abuses
> pointers to, e.g., strength reduce address calculations, or "shorten"
> a long-winded name (e.g., a deeply nested struct member), etc. That
> style evolved at a time when compilers did comparatively little
> optimization and performance of general ALU code was much more
> sensitive to instruction selection than is true of modern CPUs [e.g.,
> 20 cycles for a multiply vs 2 for a shift, etc.].
>
> Using a modern C compiler, array code written in C using *indices*
> _can_ achieve, within a few %, similar performance to comparable code
> written in Fortran. But if the C programmer tries to "optimize" the
> array code using pointers, it is more likely that the optimizer will
> be confused and the compiler will produce less than optimal code.
>
> [And most programmers are not capable of writing the optimal code
> without assistance from the compiler.]
"You can write Fortran in any language" :) But here's a serious real-world problem... There are people that understand an application domain, and want their computation completed correctly, /with/ /fewest surprises/. A tool which enables /them/ to solve their computational problem is highly beneficial. OTOH, a tool that requires /experts/ to use /correctly/ is bound to be a problem, since such tool experts are very very unlikely to also be the application domain experts. So your manpower is at least doubled, and communication rears its head - even more so when it becomes desirable to include code written by outside people. That's why the necessity of having "language lawyers" to unravel the arcane complexity of modern C/C++ and tools implies that modern C/C++ is part of the problem as much as it is part of the solution. The corollary, to get back to the original question, is that understandability, of the language and of programs written in that language is of key importance.
On 3/12/2017 3:36 PM, Dimiter_Popoff wrote:

> my point is that instead of trying to "port" the old code it is
> much easier to simply emulate the old processor instruction set
> _completely_, like I did with the 6809. People just overlook
> it. And, like I said, the entire thing was 45 kilobytes of PPC code
> (and the actual processor emulation is a fraction of that)....
> Even the 6809 - a fairly complex 8 bit CPU - was quite easy
> to emulate on a Power architecture processor with its huge
> register set, all registers being larger than those to emulate, etc.
> And processors of that era tend to be well documented; at least
> those I know (Motorolas) used to be.
But Exorcisers/Exormacs are essentially "desktop" machines -- they run
text processing applications, for the most part. Even the debuggers are
reasonably "rich" applications -- they had a healthy budget, and a sell
price they could command.
> As long as the original code does not do timing loops
> everything runs just fine. If there are timing loops, well,
> they have to be taken care of. Not hard to locate.
The problem comes when the timing is *implicit*. E.g., you can't just
emulate the functionality of the code being emulated but, also, must
emulate the *timing* of that code. [If you could do this, then timing
loops wouldn't be an issue]

E.g., there were arcade pieces (video/pinball) that would sit in very
tight loops effectively shifting *bits* out as fast as the processor
could manage (because the hardware was designed by someone ignorant of
the *software* issues!).

    loop:
        sample bit
        if bit HI
            output HI
        else
            output LO
        repeat

Not only do you have to emulate the "time around the loop" (i.e., each
repetition) AND match that of the original execution time (otherwise,
you're altering the frequency component of the generated "bit stream"),
*but*, you also have to ensure the time between one "output HI" and the
next "output HI" *or* "output LO" remains constant -- otherwise you're
introducing clock jitter.

Then, wrap this in a bigger loop to enumerate all of the bits in a
particular byte:

    bigloop:
        fetch byte
        bitcount = 8
    loop:
        sample bit
        if bit HI
            output HI
        else
            output LO
        bitcount--
        repeat if not zero
        advance pointer
        bytecount--
        repeat bigloop if not done

And the time from output of the last bit of one byte to the output of
the first bit of the next byte must now satisfy the same inter-bit time
as between any two bits within a single byte. [If using this to
generate audio, clock jitter can beat through the signal annoyingly]

If you look through some of my old (ancient) code, you'll often see
numbers in parens adjacent to each instruction (ASM) indicating the
number of clock cycles. This code stanza will be bracketed with
comments like:

    // set output to desired state
    ...     // (5)
            // (10/12)
            // (4)
    ...
    // at least (19) clock cycles have elapsed -- sufficient
    // delay for Fosc <= 12.3MHz

I.e., instead of spinning in a loop, I was "doing work" that needed to
be done and using the side effect (elapsed time) of that work to
provide my delay. If you just look at the object/ROM image, you'd not
know that this was happening for a deliberate reason. For really short
delays, there isn't often a practical alternative.

OTOH, old processors had far more predictable "best case" timings (if
they happened to "run long", no problem -- you're looking for AT LEAST
some minimum delay). (A minimal sketch of cycle-accounting in an
emulator appears after this post.)
> Then emulating on a much faster CPU makes life easier;
> you can just include some virtual machinery in the emulation,
> e.g. trap read/writes to old peripheral addresses and handle
> them on the new CPU using its peripherals - and its power.
> (I did not do that IIRC, I just modified the ROM entry points
> a little - I did have the sources as I had written it - so
> that, say, for disk accesses via the old floppy disk controller
> the thing traps and uses a memory image for a floppy disk
> (the largest one was 512 kilobytes...).)
>
> A 400 MHz Power core (603e derivative) runs about 50 times
> faster than two 2 MHz 6809 CPUs could run in parallel,
> without the emulation being particularly tricky or pushed;
> the old processor just took a lot of cycles to do things.
> They all did.
>
> Mind you, I have done the "porting" as well - when performance
> mattered. This is how I started vpa - it assembled CPU32 sources
> into PPC object code. The effort did not take me a few weeks
> as it did with the 6809 emulation, it took me about _a year_;
> the vpa sources are 880 kilobytes - I have been progressing at
> half my average programming rate which is about 150k/month...
> Much, much harder than the emulation (well, this was the tool
> creation of course, not really comparable the two things).
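[A minimal sketch of cycle-accounting in an emulator main loop --
hypothetical structure and function names, not MAME's or Dimiter's
code. Each emulated instruction charges its documented cycle cost, and
the much faster host stalls whenever emulated time runs ahead of real
time; that is what preserves the spacing of the "output HI"/"output LO"
edges above. A real emulator would batch the pacing per scanline or
frame rather than per instruction.]

    #include <stdint.h>

    #define GUEST_HZ 2000000ull            /* 2 MHz 6809-class clock */

    extern uint8_t  fetch_opcode(void);
    extern unsigned execute(uint8_t op);   /* returns cycles consumed */
    extern uint64_t host_ns_now(void);     /* monotonic host clock */
    extern void     host_sleep_ns(uint64_t ns);

    void run(void)
    {
        uint64_t cycles   = 0;
        uint64_t start_ns = host_ns_now();

        for (;;) {
            /* Charge the documented per-instruction cycle count --
               the same numbers as the (5), (10/12), (4) annotations
               in the old sources. */
            cycles += execute(fetch_opcode());

            /* Convert emulated cycles to wall-clock time and stall
               the host if the guest is running ahead. */
            uint64_t guest_ns = cycles * 1000000000ull / GUEST_HZ;
            uint64_t real_ns  = host_ns_now() - start_ns;
            if (guest_ns > real_ns)
                host_sleep_ns(guest_ns - real_ns);
        }
    }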
On 3/12/2017 3:08 PM, George Neuner wrote:
> On Sat, 11 Mar 2017 12:33:04 -0700, Don Y
> <blockedofcourse@foo.invalid> wrote:
>
>> On 3/11/2017 2:17 AM, George Neuner wrote:
>>>
>>> Once you introduce heap-based closures the lifetime of closures
>>> becomes indefinite: it can be *estimated* by a compiler - e.g., using
>>> region analysis - but runtime GC generally is better at noticing
>>> disuse in a timely fashion. Module reference counting by the compiler
>>> is no longer sufficient and must be augmented by runtime GC.
>>
>> This (as with other aspects below) suggests the compiler really
>> needs to see the entire "application" ("system"). And, even then,
>> can be tripped up because it doesn't know what will come and go
>> (dynamic "applet" loading) or when. Add another "applet" and
>> the apple-cart has been toppled and needs to be re-sorted...
>
> Operationally there is little difference vs manual DLL use.
>
> (un)Loading the module is under control of the execution environment:
> the OS or VM under which the application is running. The
> compiler/runtime only tells the environment whether or not the
> application is currently using the module.
The problem is *knowing* when it can be unloaded.

In my case, the runtime (OS) actively tracks references as a
side-effect of the implementation. E.g., each time a handle is
deleted/released, the OS does that work. So, it knows when there are no
more active handles to a particular object. And, can reclaim the object
itself. (A minimal sketch of this sort of handle accounting appears
after this post.)

[Likewise, it knows "who" to notify if an object is prematurely deleted
by noticing the outstanding handles still in existence at that time]
> Overall memory use is more or less the same regardless of whether
> module handling is manual or automatic.
>
>> I haven't settled on a consistent way of handling this (which is why
>> I am having trouble imagining how a compiler/tool could uniformly
>> enforce it). I look at each instance and think about how it is
>> intended to be used (cuz *I* am using it!) and let that information
>> dictate how I'll implement. What seems natural in some cases would be
>> piss-poor in others.
>
> On the face of it that seems strange: if the module mechanism is
> "unnatural" in some usage, that indicates to me that the problem is
> not the mechanism per se, but rather that the application itself is
> not well suited to modules.
Assume applet, memory_manager_module, applet_parent.

Applet_parent (for example) instantiates a memory object for applet.
Part of that requires binding parameters to that memory object (size,
granularity, allocation_policy, release_policy, etc.). Applet_parent
need not persist beyond this point (assume its job is done).

Where do the bound parameters get stored? In the applet? In the memory
object? Obviously, can't be part of applet_parent because that won't
persist (unless you make other arrangements for it or portions of it).
[I.e., applet_parent can't define an allocation handler in its body.]

Now, migrate applet to another node. Another (shared) instance of
memory_manager_module likely already exists on that node. If the
allocation handler was a "stock" handler (contained in the
memory_manager_module), can you simply copy the state over to the new
node? What if the allocation handler was resident in some OTHER module
on the original node? Do you migrate that other module, as well? Or,
instantiate another copy if the module has outstanding references on
the first node?? Or, do you require declarations as to what *can*
migrate and what is bound in place? Then, contend with the consequences
of cascaded dependencies?

The "consistent" solution is to make all of these first class objects
and let the OS manage them and their locations/instantiations. But,
that adds considerably to the cost of *using* them. [I'm currently
exploring short-circuit paths to automatically expedite local cases.
But, that's even more complexity...]

It's a lot easier to deal with implementations where "everything" is a
cohesive "blob" instead of being able to slice-and-dice it at run-time
and redistribute it based on evolving workloads, resources, etc. [Do
you kill off a task/applet so you can reinstantiate it, from scratch,
when new hardware is made available to the system? Or, do you let it
continue to run as that hardware's resources come on-line?]

Last 25# of oranges... yippee!
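[A minimal sketch of the OS-side handle accounting Don describes --
hypothetical types and names, not his OS. Because every handle
create/release funnels through the OS, it learns that an object has
become unreachable as a side effect, without a separate GC pass. A real
kernel would need locking or atomics around the count.]

    /* Hypothetical kernel-side object with a handle count maintained
       as a side effect of every handle operation. */
    typedef struct object {
        unsigned handle_count;
        void   (*reclaim)(struct object *);  /* type-specific cleanup */
    } object_t;

    /* Called whenever a task is granted a handle to obj. */
    void handle_create(object_t *obj)
    {
        obj->handle_count++;
    }

    /* Called whenever a task deletes/releases a handle.  When the
       last handle goes away, the object -- a heap, a loaded module,
       whatever -- is reclaimed immediately. */
    void handle_release(object_t *obj)
    {
        if (--obj->handle_count == 0)
            obj->reclaim(obj);
    }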
On Saturday, March 11, 2017 at 4:54:54 PM UTC-6, upsid...@downunder.com wrote:
> On Sat, 11 Mar 2017 14:20:27 -0800 (PST), jim.brakefield@ieee.org
> wrote:
>
>> On Saturday, March 11, 2017 at 7:42:00 AM UTC-6, Niklas Holsti wrote:
>>> On 17-03-11 04:46 , jim.brakefield@ieee.org wrote:
>>>> On Sunday, March 5, 2017 at 8:43:28 PM UTC-6, Don Y wrote:
>>>>> A quick/informal/UNSCIENTIFIC poll:
>>>>>
>>>>> What *single* (non-traditional) language feature do you find most
>>>>> valuable in developing code? (and, applicable language if unique
>>>>> to *a* language or class of languages)
>>>>
>>>> A plug for array operators: as in Numpy, IDL/PV~wave, APL and Julia.
>>>> That is: array and vector operators baked into the language.
>>>
>>> In a language that allows operator overloading, programmers can define
>>> their own array operators.
>>>
>>>> I've found that programming at this level yields shorter programs with
>>>> less debugging: You wind up making your data structures and algorithms
>>>> use the fewest operators possible/practical.
>>>
>>> I agree that array operators are useful, but only for relatively simple
>>> cases such as the basic arithmetic operations on arrays. However, taking
>>> it to the APL extreme with complex vector/matrix restructurings (outer
>>> products, lamination, ...) can create code that is hard for others to
>>> understand.
>>>
>>>> There is a theoretical vantage point for this style of programming
>>>> (which I call "programming in the large" as opposed to "programming
>>>> in the small"):
>>>
>>> You may call it that, but I hope you know that most people understand
>>> these large/small terms differently; see
>>> https://en.wikipedia.org/wiki/Programming_in_the_large_and_programming_in_the_small.
>>>
>>> --
>>> Niklas Holsti
>>> Tidorum Ltd
>>> niklas holsti tidorum fi
>>>       .      @       .
>>
>> ]> You may call it that, but I hope you know that most people understand
>> ]> these large/small terms differently; see
>>
>> Was not aware of this definition. Tend to consider this type of
>> "programming in the large" as solving an organizational problem and
>> an architectural problem.
>>
>> Another form of programming in the large is characterized by
>> provisioning a complete 64-bit computer: e.g. one with a complete
>> 64-bit address space.
>>
>> ]> I agree that array operators are useful, but only for relatively simple
>> ]> cases such as the basic arithmetic operations on arrays. However, taking
>> ]> it to the APL extreme with complex vector/matrix restructurings (outer
>> ]> products, lamination, ...) can create code that is hard for others to
>> ]> understand.
>>
>> My experience was with scientific programming, so yes this argument
>> has some validity. Am still convinced that it is a useful exercise:
>> Push low level details down into the operators and data structures,
>> consider the various ways of doing this so that the high level
>> operators (that need to be written) emerge.
>>
>> Also consider programming well with "array" operators to require
>> greater experience, know-how and good judgement than doing low level
>> coding.
>>
>> Jim Brakefield
>
> FORTRAN had complex number support from the beginning.
> The array support has been added more recently. IMHO Fortran is
> still a viable option for solving mathematical problems after
> recent updates.
]> FORTRAN had complex number support from the beginning.

Fortran IV did not have a complex number type?

Had expected Fortran 90 to compete well against C/C++. What happened?

Julia uses 1-origin subscripts, same as Fortran. It can be difficult to
convert a Fortran program to C/C++/etc.: e.g., preserving correctness
while converting from 1-origin to 0-origin subscripts (a sketch of the
pitfall follows this post).

Jim Brakefield
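[A minimal sketch of the 1-origin/0-origin pitfall -- hypothetical
example, not from the thread. Every index *expression*, not just the
loop bounds, has to be re-derived when translating; shifting one
without the other compiles cleanly but reads the wrong element.]

    #include <stdio.h>

    #define N 5

    int main(void)
    {
        /* Fortran (1-origin):   DO I = 2, N
                                    A(I) = A(I-1) + B(I)
           The C translation shifts the loop bounds down by one; the
           body's index expressions keep their relative offsets. */
        double a[N] = {1, 0, 0, 0, 0};
        double b[N] = {0, 1, 2, 3, 4};

        for (int i = 1; i < N; i++)   /* Fortran I=2..N -> C i=1..N-1 */
            a[i] = a[i - 1] + b[i];

        for (int i = 0; i < N; i++)
            printf("a[%d] = %g\n", i, a[i]);
        return 0;
    }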