
Portable Assembly

Started by rickman May 27, 2017
On 08.6.2017 г. 13:38, George Neuner wrote:
> ... > > The question is not why C was adopted for system programming, or for > cross development from a capable system to a smaller target. Rather > the question is why it was so widely adopted for ALL kinds of > programming on ALL platforms given that were many other reasonable > choices available.
My take on that is that it happened because people needed a low-level language, some sort of assembler - and the most widespread CPU was the x86, with a register model for which no sane person would consider programming larger pieces of code. I am sure there have been people who have done it, but they can't have been exactly sane :) (i.e. they have been insane in a way most people would have envied them for).

So C made the x86 usable - and the combination (C+x86) is the main factor which led to the absurd situation we have today, where code which used to take kilobytes of memory takes gigabytes (not because of the inefficiency of compilers, just because of where most programmers have been led).

Dimiter

======================================================
Dimiter Popoff, TGI http://www.tgi-sci.com
======================================================
http://www.flickr.com/photos/didi_tgi/
On 08/06/17 11:38, George Neuner wrote:
> The smallest K&R1 compiler I can remember that *ran* on an 8-bit micro > was circa 1983. It was a few hundred KB of code. It ran in 48KB > using overlays, needed 2 floppy drives or a hard disk, and required 2 > compile passes per source file and a final link pass. > > It was quite functional (if glacially slow), and included program code > overlay support and emulated single precision FP (in VAX format IIRC). > Although it targeted a 16-bit virtual machine with 6 16-bit registers, > it produced native 8-bit code : i.e. the "16-bit VM" program was not > interpreted, but was emulated by 8-bit code.
Whitesmiths? IIRC the symbol table size became a limiting factor during linking, so linking became multipass :(
> There existed various subset C compilers that could run in less than > 48KB, but most of them were no more than expensive learning toys.
I always found that remarkable, since the Algol-60 compiler ran in 4K words of 2 instructions/word.
> But even by the standard of "the compiler could run on the machine", > there were languages better suited than C for application programming. > > Consider that in the late 70's there already were decent 8-bit > implementations of BASIC, BCPL Logo, SNOBOL, etc. (Extended) Pascal, > Smalltalk, SNOBOL4, etc. became available in the early 80's for both 8 > and 16-bit systems. But C really wasn't useable on any micro prior to > ~1985 when reasonably<?> priced hard disks appeared.
I'll debate Smalltalk :) Apple's implementation (pre L Peter Deutsch's JIT) was glacially slow. I know: it is still running on my fat Mac downstairs :)
> The question is not why C was adopted for system programming, or for > cross development from a capable system to a smaller target. Rather > the question is why it was so widely adopted for ALL kinds of > programming on ALL platforms given that were many other reasonable > choices available.
Yes indeed. Fortunately The New Generation has seen the light, for better and for worse. But then if you make it possible to program in English, you will find that people cannot think and express themselves in English.
On 8.6.17 16:50, Tom Gardner wrote:
> On 08/06/17 11:38, George Neuner wrote: >> The smallest K&R1 compiler I can remember that *ran* on an 8-bit micro >> was circa 1983. It was a few hundred KB of code. It ran in 48KB >> using overlays, needed 2 floppy drives or a hard disk, and required 2 >> compile passes per source file and a final link pass. >> >> It was quite functional (if glacially slow), and included program code >> overlay support and emulated single precision FP (in VAX format IIRC). >> Although it targeted a 16-bit virtual machine with 6 16-bit registers, >> it produced native 8-bit code : i.e. the "16-bit VM" program was not >> interpreted, but was emulated by 8-bit code. > > Whitesmiths? IIRC the symbol table size became a limiting > factor during linking, so linking became multipass :(
Must be. It ran on CP/M machines.
>> There existed various subset C compilers that could run in less than >> 48KB, but most of them were no more than expensive learning toys. > > I always found that remarkable, since Algol-60 compiler ran in > 4kwords of 2 instructions/word.
You mean the Elliott 803 / 503?

It also had an overlay structure. If the program grew above a certain limit, it was dumped out in an intermediate format, and the operator needed to feed in the second compiler-pass paper tape and the intermediate code ('owncode') to get the final run code.

--
-TV
On 08/06/17 15:38, Tauno Voipio wrote:
> On 8.6.17 16:50, Tom Gardner wrote: >> On 08/06/17 11:38, George Neuner wrote: >>> The smallest K&R1 compiler I can remember that *ran* on an 8-bit micro >>> was circa 1983. It was a few hundred KB of code. It ran in 48KB >>> using overlays, needed 2 floppy drives or a hard disk, and required 2 >>> compile passes per source file and a final link pass. >>> >>> It was quite functional (if glacially slow), and included program code >>> overlay support and emulated single precision FP (in VAX format IIRC). >>> Although it targeted a 16-bit virtual machine with 6 16-bit registers, >>> it produced native 8-bit code : i.e. the "16-bit VM" program was not >>> interpreted, but was emulated by 8-bit code. >> >> Whitesmiths? IIRC the symbol table size became a limiting >> factor during linking, so linking became multipass :( > > Must be. It run on CP/M machines. > >>> There existed various subset C compilers that could run in less than >>> 48KB, but most of them were no more than expensive learning toys. >> >> I always found that remarkable, since Algol-60 compiler ran in >> 4kwords of 2 instructions/word. > > You mean Elliott 803 / 503? > > It had also an overlay structure. If the program grew above a > certain limit, it was dumped out in an intermediate format, and > the operator needed to feed in the second compiler pass paper > tape and the intermediate code ('owncode') to get the final > run code.
Yes and yes. I saw a running 803 a couple of weeks ago, and discussed the circuit diagrams with the staff member there.
On Thu, 08 Jun 2017 15:58:51 +0300, Dimiter_Popoff <dp@tgi-sci.com>
wrote:

>On 08.6.2017 ?. 13:38, George Neuner wrote: >> ... >> >> The question is not why C was adopted for system programming, or for >> cross development from a capable system to a smaller target. Rather >> the question is why it was so widely adopted for ALL kinds of >> programming on ALL platforms given that were many other reasonable >> choices available. > >My take on that is it happened because people needed a low level >language, some sort of assembler - and the widest spread CPU was >the x86 with a register model for which no sane person would consider >programming larger pieces of code. >I am sure there have been people who have done >it but they can't have been exactly sane :) (i.e. have been insane in >a way most people would have envied them for their insanity). >So C made x86 usable - and the combination (C+x86) is the main factor >which led to the absurd situation we have today, where code which >used to take kilobytes of memory takes gigabytes (not because of the >inefficiency of compilers, just because of where most programmers >have been led to). > >Dimiter > >====================================================== >Dimiter Popoff, TGI http://www.tgi-sci.com >====================================================== >http://www.flickr.com/photos/didi_tgi/ >
PL/M-80 and PL/M-86 were quite reasonable intermediate languages. The same also applies to BLISS for the PDP-10/PDP-11/VAX/Alpha and, more recently, some Intel hardware.

One reason these languages did not become popular was that the hardware vendors wanted to make money from compiler sales. Other HW companies, wanting to boost their hardware sales, gave away compilers and development software for free, and boosted their HW sales that way.
On 6/8/2017 3:38 AM, George Neuner wrote:
>> The problem is that you have to *design* systems for with these >> folks in mind as their likely "maintainers" and/or "evolvers". >> >> So, even if you're "divinely inspired", you have to manage to either >> put in place a framework that effectively guides (constrains!) their >> future work to remain consistent with that design... >> >> Or, leave copious notes and HOPE they read them, understand them and >> take them to heart... > > Or adopt a throw-away mentality: replace rather than maintain. > > That basically is the idea behind the whole agile/devops/SaaS > movement: if it doesn't work today, no problem - there will be a new > release tomorrow [or sooner].
I think those are just enablers for PHB's who are afraid to THINK about what they want (in a product/design) and, instead, want to be shown what they DON'T want.

I encountered a woman who was looking for a "mobility scooter" a week or two ago. I showed her *one* and she jumped at the opportunity. I quickly countered with a recommendation that some OTHER form of "transport" might be better for her: "The scooter has a wide turning radius. If you head down a hallway (i.e., in your home) and want to turn around, you'll have to either continue in the current direction until you encounter a wider space that will accommodate the large turning radius *or* travel backwards, retracing your steps. A powerchair will give you a smaller turning radius. An electric wheelchair, tighter still!"

She was insistent on the scooter. Fearing that she was clinging to it as the sole CONCRETE example available, I told her that I also had examples of each of the other options available.

[I was fearful of getting into a situation where I refurbished one "transport device", sent her home with it -- only to find her returning a week later complaining of its limitations, and wanting to "try another option"]

In this case, she had clearly considered the options and come to the conclusion that the scooter was best suited to her needs: the chair options tend to be controlled by a joystick interface whereas the scooter has a tiller (handlebars) and "speed setting". For her, the tremor in her hands made the fine motor skills required to interact with the joystick impractical. So, while the scooter was less maneuverable (in the abstract sense), it was more CONTROLLABLE in her particular case.

She'd actively considered the options instead of needing to "see" each of them (to discover each of their shortcomings).
>> OTOH, if you've got a boatload of similar jobs (The David, The Rita, >> The Bob, The Bethany, The Harold, The Gretchen, etc.), that one artist >> may decide he's tired of being asked to "tweek" the works of past >> artists and want a commission of his own! Or, simply not have time >> enough in his schedule to get to all of them at the pace you desire! > > No problem: robots and 3-D printers will take care of that. Just read > an article that predicts AI will best humans at *everything* within 50 > years.
Yeah, Winston told me that... 40 years ago! :>
>>>> Alternatively, you can try to constrain the programmer (protect him from >>>> himself) and *hope* he's compliant. >>> >>> Yes. And my preference for a *general* purpose language is to default >>> to protecting the programmer, but to selectively permit use of more >>> dangerous constructs in marked "unsafe" code. >> >> And, count on the DISCIPLINE of all these would-be Michelangelos to >> understand (and admit!) their own personal limitations prior to enabling >> those constructs? > > For the less experienced, fear, uncertainty and doubt are better > counter-motivators than is any amount of discipline. When a person > believes (correctly or not) that something is hard to learn or hard to > use, he or she usually will avoid trying it for as long as possible.
Or, will think they are "above average" and, thus, qualified to KNOW how to use/do it!
> The basic problem with C is that some of its hard to master concepts > are dangled right in the faces of new programmers.
I think the problem is that the "trickier" aspects aren't really labeled as such. I know most folks would rather tackle a multiplication problem than an equivalent one of division. But, they've learned (from experience) of the relative costs/perils of each. It's not like there is a big red flag on the chapter entitled "division" that warns of Dragons!
> For almost any non-system application, you can do without (explicit > source level) pointer arithmetic. But pointers and the address > operator are fundamental to function argument passing and returning > values (note: not "value return"), and it's effectively impossible to > program in C without using them.
But, if you'd had a formal education in CS, it would be trivial to semantically map the mechanisms to value and reference concepts. And, thinking of "reference" in terms of an indication of WHERE it is! etc.

Similarly, many of the "inconsistencies" (to noobs) in the language could easily be explained with "common sense":
- why aren't strings/arrays passed by value? (think about how ANYTHING is passed by value; the answer should then be obvious)
- the whole notion of references being IN/OUTs
- gee, const can ensure an IN can't be used as an OUT!
etc.

I think the bigger problem is that folks are (apparently) taught "keystrokes" instead of "concepts": type THIS to do THAT.
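[A minimal C sketch of that value/reference mapping; the function and variable names are made up for illustration, not taken from the thread. Every argument is copied, and an array expression decays to a pointer to its first element - which is exactly why strings/arrays appear to be passed "by reference".]

#include <ctype.h>
#include <stdio.h>

/* Everything in C is passed by value: the callee works on a copy. */
static void bump(int n)      { n += 1; }   /* changes only the copy */
static void bump_ref(int *n) { *n += 1; }  /* the copied value is an address,
                                              so the callee can reach the
                                              caller's object through it    */

/* An array argument "decays" to a pointer to its first element,
   so only that address is copied -- never the array contents.   */
static void shout(char s[])
{
    s[0] = (char)toupper((unsigned char)s[0]);
}

int main(void)
{
    int  x      = 41;
    char name[] = "carl";

    bump(x);        /* x is still 41 */
    bump_ref(&x);   /* x is now 42   */
    shout(name);    /* name is now "Carl": the value passed was &name[0] */

    printf("%d %s\n", x, name);
    return 0;
}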
> This pushes newbies to learn about pointers, machine addressing and > memory management before many are ready. There is plenty else to > learn without *simultaneously* being burdoned with issues of object > location.
Then approach the topics more incrementally. Instead of introducing the variety of data types (including arrays), introduce the basic ones. Then, discuss passing arguments -- and how they are COPIED into a stack frame. This can NATURALLY lead to the fact that you can only "return" one datum, which the caller would then have to explicitly assign to <whatever>. "Gee, wouldn't it be nice if we could simply POINT to the things that we want the function (subroutine) to operate on?" Then, how you can use references to economize on the overhead of passing large objects (like strings/arrays) to functions. Etc.

I just think the teaching approach is crippled. It's driven by industry with the goal of getting folks who can crank out code, regardless of quality or comprehension.
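[To make that progression concrete, a small sketch along the lines described above; the names are hypothetical. The first function can hand back only one datum; the second "points at" a caller variable to deliver a second one.]

#include <stdio.h>

/* Only one datum comes back through the return value... */
static int quotient(int a, int b)
{
    return a / b;
}

/* ...but if the caller hands over the ADDRESS of one of its own
   variables, the callee can "return" extra results by writing
   through that pointer.                                          */
static int divmod(int a, int b, int *remainder)
{
    *remainder = a % b;
    return a / b;
}

int main(void)
{
    int r;
    int q = divmod(17, 5, &r);    /* q = 3, r = 2 */

    printf("plain return: %d\n", quotient(17, 5));
    printf("via pointer : %d remainder %d\n", q, r);
    return 0;
}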
> Learning about pointers then invariably leads to learning about > arithmetic on pointers because they are covered together in most > tutorials. > > Keep in mind that the majority of people learning and using C (or C++) > today have no prior experience with hardware or even with programming > in assembler. If C isn't their 1st (non-scripting) language then most > likely their prior experiences were with "safe", high level, GC'd > languages that do not expose object addressing: e.g., Java, Scheme, > Python, etc. ... the commonly used "teaching" languages.
But you can still expose a student to the concepts of the underlying machine, regardless of language. Introduce a hypothetical machine... something with, say, memory and a computation unit. Treat memory as a set of addressable "locations", etc.

My first "computer texts" all presented a conceptual model of a "computer system" -- even though the languages discussed (e.g., FORTRAN) hid much of that from the casual user.

Instead, there's an emphasis on idioms and tricks that aren't portable and confuse the issue(s). It's like teaching a student driver about the infotainment system in the vehicle instead of how the brake and accelerator operate.
> For general application programming, there is no need for a language > to provide mutable pointers: initialized references, together with > array (or stream) indexing and struct/object member access are > sufficient for virtually any non-system programming use. This has > been studied extensively and there is considerable literature on the > subject.
But then you force the developer to pick different languages for different aspects of a problem. How many folks are comfortable with this "application specific" approach to *a* problem's solution? E.g., my OS is coded in C and ASM. Most of the core services are written in C (so I can provide performance guarantees) with my bogus IDL to handle RPC/IPC. The RDBMS server is accessed using SQL. And, "applications" are written in my modified-Limbo. This (hopefully) "works" because most folks will only be involved with *one* of these layers. And, folks who are "sufficiently motivated" to make their additions/modifications *work* can resort to cribbing from the existing parts of the design -- as "examples" of how they *could* do things ("Hey, this works; why not just copy it?") OTOH, if someone had set out to tackle the whole problem in a single language/style... <shrug>
>> What you (ideally) want, is to be able to "set a knob" on the 'side' of >> the language to limit its "potential for misuse". But, to do so in a >> way that the practitioner doesn't feel intimidated/chastened at its >> apparent "setting". > > Look at Racket's suite of teaching and extension languages. They all > are implemented over the same core language (an extended Scheme), but > they leverage the flexibility of the core langauge to offer different > syntaxes, different semantics, etc. > > In the case of the teaching languages, there is reduced functionality, > combined with more newbie friendly debugging output, etc. > > http://racket-lang.org/ > https://docs.racket-lang.org/htdp-langs/index.html > > And, yeah, the programmer can change which language is in use with a > simple "#lang <_>" directive, but the point here is the flexibility of > the system to provide (more or less) what you are asking for.
I'm sure you've worked in environments where the implementation was "dictated" by what appeared to be arbitrary constraints: will use this language, these tools, this process, etc. IME, programmers *chafe* at such constraints. Almost as if they were personal affronts ("*I* know the best way to tackle the problem that *I* have been assigned!"). Imagine how content they'd be knowing they were being told to eat at the "kiddie table".

I designed a little serial protocol that lets me daisy-chain messages through simple "motes". The protocol had to be simple and low overhead as the motes are intended to be *really* crippled devices -- at best, coded in C (on a multitasking *executive*, not even a full-fledged OS) and, more likely, in ASM.

When I went to code the "host" side of the protocol, my first approach was to use Limbo -- this should make it more maintainable by those who follow (goal is to reduce the requirements imposed on future developers as much as possible). But, I was almost literally grinding my teeth as I was forced to build message packets in "byte arrays" with constant juggling of array indices, etc. (no support for pointers).

I eventually "rationalized" that this could be viewed as a "core service" (communications) and, thus, suitable for coding along the same lines as the other services: in C! :> An hour later, the code is working and (to me) infinitely more intuitive than a bunch of "array slices" and "casts".
>> (Returning to "portability"...) >> >> Even if I can craft something that is portable under some set of >> conditions/criteria that I deem appropriate -- often by leveraging >> particular features of the language of a given implementation >> thereof -- how do I know the next guy will understand those issues? >> How do I know he won't *break* that aspect (portability) -- and >> only belatedly discover his error (two years down the road when the >> code base moves to a Big Endian 36b processor)? > > You don't, and there is little you can do about it. You can try to be > helpful - e.g., with documentation - but you can't be responsible for > what the next person will do.
Of course! My approach is to exploit laziness and greed. Leave bits of code that are RIPE for using as the basis for new services ("templates", of sorts). And, let the developer feel he can do whatever he wants -- if he's willing to bear the eventual cost for those design decisions (which might include users opting not to deploy his enhancements!)
> No software truly is portable except that which runs on an abstract > virtual machine. As long as the virtual machine can be realized on a > particular base platform, the software that runs on the VM is > "portable" to that platform. > >> It's similar to trying to ensure "appropriate" documentation >> accompanies each FUTURE change to the system -- who decides >> what is "appropriate"? (Ans: the guy further into the future who >> can't sort out the changes made by his predecessor!) > > Again, you are only responsible for what you do.
But, you can use the same lazy/greedy motivators there, as well. E.g., my gesture recognizer builds the documentation for the gesture from the mathematical model of the gesture. This relieves the developer from that task, ensures the documentation is ALWAYS in sync with the implementation *and* makes it trivial to add new gestures by lowering the effort required.
>>> ... No matter how sophisticated the compiler becomes, there always >>> will be cases where the programmer knows better and should be able >>> to override it. >> >> Exactly. Hence the contradictory issues at play: >> - enable the competent >> - protect the incompetent >> >>> But even with these limitations, there are languages that are useful >>> now and do far more of what you want than does C. >> >> But, when designing (or choosing!) a language, one of the dimensions >> in your decision matrix has to be availability of that language AND in >> the existing skillsets of its practitioners. > > The modern concept of availability is very different than when you had > to wait for a company to provide a turnkey solution, or engineer > something yourself from scratch. Now, if the main distribution > doesn't run on your platform, you are likely to find source that you > can port yourself (if you are able), or if there's any significant > user base, you may find that somebody else already has done it.
That works for vanilla implementations. It leads to all designs looking like all others ("Lets use a PC for this!"). This is fine *if* that's consistent with your product/project goals. But, if not, you're SoL. Or, faced with a tool porting/development task that exceeds the complexity of your initial problem.
> Tutorials, reference materials, etc. are a different matter, but the > simpler and more uniform the syntax and semantics, the easier the > language is to learn and to master. > > question: why in C is *p.q == p->q > but *p != p > and p.q != p->q > > followup: given coincidental addresses and modulo a cast, > how is it that *p can == *p.q > > Shit like this makes a student's head explode.
But C is lousy for its use of graphemes/glyphs. You'd think K&R were paraplegics given how stingy they are with keystrokes! Or, supremely lazy! (or, worse, think *us* that lazy!) [I guess it could be worse; they could have forced all identifiers to be single character!]
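[For the record, a tiny compilable illustration of the precedence trap the question above alludes to; the struct and member names are invented. '->' is shorthand for dereference-then-select, while '.' binds tighter than unary '*'.]

#include <stdio.h>

struct node {
    int  q;    /* plain member   */
    int *r;    /* pointer member */
};

int main(void)
{
    int seven = 7;
    struct node  n = { 42, &seven };
    struct node *p = &n;

    /* '->' is shorthand for "dereference the pointer, then select": */
    printf("%d %d\n", p->q, (*p).q);   /* prints: 42 42 */

    /* '.' binds tighter than unary '*', so *n.r parses as *(n.r):
       select the pointer member first, then dereference it.        */
    printf("%d\n", *n.r);              /* prints: 7 */

    return 0;
}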
> In Pascal, the pointer dereference operator '^' and the record > (struct) member access operator '.' were separate and always used > consistently. The type system guarantees that p and p^ and p^.q can > never, ever be the same object. > > This visual and logical consistency made Pascal easier to learn. And > not any less functional. > > My favorite dead horse - Modula 3 - takes a similar approach. Modula > 3 is both a competent bare metal system language AND a safe OO > application language. It does a whole lot more than (extended) Pascal > - yet it isn't that much harder to learn. > > It is possible to learn Modula 3 incrementally: leaving advanced > subjects such as where objects are located in memory and when it's > safe to delete() them - until you absolutely need to know. > > And if you stick to writing user applications in the safe subset of > the language, you may never need to learn it: Modula 3 uses GC by > default.
The same is largely true of Ada. But, with Ada, you end up knowing an encyclopaedic language that, in most cases, is overkill and affords little for nominal projects. An advantage of ASM was that there were *relatively* few operators and addressing modes, etc. Even complex instructions could be reliably (*and* mechanically) "decoded". You didn't find yourself wondering if something was a constant pointer to variable data, or a variable pointer to constant data, or a constant pointer to constant data, or... And, ASM syntax tended to be more "fixed form". There wasn't as much poetic license to how you expressed particular constructs. E.g., I instinctively write "&array[0]" instead of "array" (depending on the use).
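[A compilable sketch of the const permutations mentioned above; the variable names are arbitrary, chosen only to label the three cases.]

int data  = 1;
int other = 2;

const int        *to_const    = &data;  /* variable pointer to constant data */
int       *const  const_ptr   = &data;  /* constant pointer to variable data */
const int *const  locked_down = &data;  /* constant pointer to constant data */

int main(void)
{
    to_const   = &other;       /* OK: the pointer may be redirected          */
    /* *to_const = 3; */       /* error: the pointed-to data is read-only    */

    *const_ptr = 3;            /* OK: the pointed-to data may change         */
    /* const_ptr = &other; */  /* error: the pointer itself is read-only     */

    (void)locked_down;         /* neither reassignment nor modification allowed */
    return 0;
}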
>> ASM saw widespread use -- not because it was the BEST tool for the >> job but, rather, because it was (essentially) the ONLY game in town >> (in the early embedded world). Amusing that we didn't repeat the same >> evolution of languages that was the case in the "mainframe" world >> (despite having comparable computational resources to those >> ANCIENT machines!). >> >> The (early) languages that we settled on were simple to implement >> on the development platforms and with the target resources. Its >> only as targets have become more resource-rich that we're exploring >> richer execution environments (and the attendant consequences of >> that for the developer). > > There never was any C compiler that ran on any really tiny machine.
Doesn't have to run *on* a tiny machine. It just had to generate code that could run on a tiny machine! E.g., we used an 11 to write our i4004 code; the idea of even something as crude as an assembler running *ON* an i4004 was laughable!
> Ritchies' technotes on the development of C stated that the original > 1972 PDP-11 compiler had to run in ~6KB (all that was left after > loading Unix), required several passes, and really was not usable > until the machine was given a hard disk. Note also that that 1st > compiler implemented only a subset of K&R1. > > K&R1 - as described in the book - was 1st implemented in 1977 and I > have never seen any numbers on the size of that compiler. > > The smallest K&R1 compiler I can remember that *ran* on an 8-bit micro > was circa 1983. It was a few hundred KB of code. It ran in 48KB > using overlays, needed 2 floppy drives or a hard disk, and required 2 > compile passes per source file and a final link pass.
It wasn't uncommon for early *assemblers* to require multiple passes. I built some small CP/M based development systems for an employer many years ago. To save a few bucks, he opted to deploy most of them (mine being the exception! :> ) with a single 1.4M floppy. The folks using them were ecstatic as they were so much faster than the ZDS boxes we'd used up to then (hard sectored floppies, etc.). But, had the boss *watched* folks using them and counted the amount of time LOST swapping floppies (esp when you wanted to make a backup of a floppy!), he'd have realized how foolhardy his "disk economy" had been!
> It was quite functional (if glacially slow), and included program code > overlay support and emulated single precision FP (in VAX format IIRC). > Although it targeted a 16-bit virtual machine with 6 16-bit registers, > it produced native 8-bit code : i.e. the "16-bit VM" program was not > interpreted, but was emulated by 8-bit code. > > As part of the pro package (and available separately for personal use) > there also was a bytecode compiler that allowed packing much larger > applications (or their data) into memory. It had all the same > features as the native code compiler, but produced interpreted code > that ran much slower. You could use both native and interpreted code > in the same application via overlays. > > There existed various subset C compilers that could run in less than > 48KB, but most of them were no more than expensive learning toys.
Whitesmith's and Manx? JRT Pascal ($19.95!) ran on small CP/M boxes. IIRC, there was an M2 that also ran there. And, MS had a BASIC compiler.
> But even by the standard of "the compiler could run on the machine", > there were languages better suited than C for application programming. > > Consider that in the late 70's there already were decent 8-bit > implementations of BASIC, BCPL Logo, SNOBOL, etc. (Extended) Pascal, > Smalltalk, SNOBOL4, etc. became available in the early 80's for both 8 > and 16-bit systems. But C really wasn't useable on any micro prior to > ~1985 when reasonably<?> priced hard disks appeared. > > Undoubtedly, AT&T giving away Unix to colleges from 1975..1979 meant > that students in that time frame would have gained some familiarity > with C. 16-bit micros powerful enough to really be characterized as > useful "development" systems popped out in the early 80's as these > students would have been graduating (or shortly thereafter). > > But they were extremely expensive: tens of thousands of dollars for a > usable system. You'd have to mortage your home to afford one, which > is not something the newly working with looming college loans would do > lightly. And sans hard disk (more $$$), you'd manage only one or two > compiles a day.
But you didn't have to rely on having a home system to write code. Just like most folks don't *rely* on having home internet to access the web, email, etc. If you're still in school, there's little to prevent you from using their tools for a "personal project". Ditto if employed. The only caveat being "not on company time".
> Turbo Pascal was the 1st really useable [in the modern sense] > developement system. It did not need a hard disk and it hit the > market before commodity hard disks were widely available. > > The question is not why C was adopted for system programming, or for > cross development from a capable system to a smaller target. Rather > the question is why it was so widely adopted for ALL kinds of > programming on ALL platforms given that were many other reasonable > choices available.
Look at them, individually. And, at the types of products that were being developed in that time frame.

You could code most algorithms *in* BASIC. But, if forced into a single-threaded environment, most REAL projects would fall apart (cuz the processor would be too slow to get around to polling everything AND doing meaningful work). I wrote a little BASIC compiler that targeted the 647180 (one of the earliest SoC's).

It was useless for product development. But, great for throwing together dog-n-pony's for clients. Allow multiple "program counters" to walk through ONE executable and you've got an effective multitasking environment (though with few RT guarantees). Slap *one* PLCC in a wirewrap socket with some misc signal conditioning/IO logic and show the client a mockup of a final product in a couple of weeks.

[Then, explain why it was going to take several MONTHS to go from that to a *real* product! :> ]

SNOBOL is really only useful for text processing. Try implementing Bresenham's algorithm in it -- or any other DDA. This sort of thing highlights the differences between "mainframe" applications and "embedded" applications.

Ditto Pascal. How much benefit is there in controlling a motor that requires high level math and flagrant automatic type conversion?

Smalltalk? You *do* know how much RAM cost in the early 80's??

Much embedded coding could (today) be done with as crippled a framework as PL/M. What you really want to do is give the developer some syntactic freedom (e.g., infix notation for expressions) and relieve him of the minutiae of setting up stack frames, tracking binary points, etc. C goes a long way towards that goal without favoring a particular application domain. And, because it's relatively easy to "visualize" what is happening "behind the code", it's easy to deploy applications coded in it in multiple different environments.

[By contrast, think about how I tackled the multitasking BASIC implementation and how I'd have to code *for* that implementation to avoid "unexpected artifacts"]
> YMMV. I remain perplexed.
On Thu, 8 Jun 2017 14:50:06 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

>On 08/06/17 11:38, George Neuner wrote: >> The smallest K&R1 compiler I can remember that *ran* on an 8-bit micro >> was circa 1983. It was a few hundred KB of code. It ran in 48KB >> using overlays, needed 2 floppy drives or a hard disk, and required 2 >> compile passes per source file and a final link pass. >> >> It was quite functional (if glacially slow), and included program code >> overlay support and emulated single precision FP (in VAX format IIRC). >> Although it targeted a 16-bit virtual machine with 6 16-bit registers, >> it produced native 8-bit code : i.e. the "16-bit VM" program was not >> interpreted, but was emulated by 8-bit code. > >Whitesmiths? IIRC the symbol table size became a limiting >factor during linking, so linking became multipass :(
No, it was the Aztec compiler from Manx. I'm not aware that Whitesmith ever ran on an 8-bit machine. The versions I remember were for CP/M-86 and PC/MS-DOS. My (maybe faulty) recollection is that Whitesmith was enormous: needing at least 512KB and a hard disk to be useful. I remember at one time using Microsoft's C compiler on 1.2MB floppies and needing half a dozen disk swaps to compile "hello world!".
>> There existed various subset C compilers that could run in less than >> 48KB, but most of them were no more than expensive learning toys. > >I always found that remarkable, since Algol-60 compiler ran in >4kwords of 2 instructions/word.
Must have been written in assembler - I would have loved to have seen that.
>> But even by the standard of "the compiler could run on the machine", >> there were languages better suited than C for application programming. >> >> Consider that in the late 70's there already were decent 8-bit >> implementations of BASIC, BCPL Logo, SNOBOL, etc. (Extended) Pascal, >> Smalltalk, SNOBOL4, etc. became available in the early 80's for both 8 >> and 16-bit systems. But C really wasn't useable on any micro prior to >> ~1985 when reasonably<?> priced hard disks appeared. > >I'll debate Smalltalk :) Apple's implementation (pre L Peter >Deutsch's JIT) was glacially slow. I know: it is still running >on my fat Mac downstairs :)
I agree that Apple's version was slow - I maybe never saw the version with JIT - but ParcPlace Smalltalk ran very well on a FatMac. I had a Smalltalk for my Apple IIe. It needed 128KB so required a IIe or a II with language card. It used a text based browser and ran quite acceptably for small programs. Unfortunately, the version I had was not able to produce a separate executable. Unfortunately, after too many moves, I no longer have very much of the early stuff. I never figured on it becoming valuable. George
On 09/06/17 19:14, George Neuner wrote:
> On Thu, 8 Jun 2017 14:50:06 +0100, Tom Gardner >>> There existed various subset C compilers that could run in less than >>> 48KB, but most of them were no more than expensive learning toys. >> >> I always found that remarkable, since Algol-60 compiler ran in >> 4kwords of 2 instructions/word. > > Must have been written in assembler - I would have loved to have seen > that.
You probably still can. Certainly the 803 was playing tunes a month ago. http://www.tnmoc.org/news/notes-museum/iris-atc-has-hiccup-and-elliott-803-store-fault-returns
>>> But even by the standard of "the compiler could run on the machine", >>> there were languages better suited than C for application programming. >>> >>> Consider that in the late 70's there already were decent 8-bit >>> implementations of BASIC, BCPL Logo, SNOBOL, etc. (Extended) Pascal, >>> Smalltalk, SNOBOL4, etc. became available in the early 80's for both 8 >>> and 16-bit systems. But C really wasn't useable on any micro prior to >>> ~1985 when reasonably<?> priced hard disks appeared. >> >> I'll debate Smalltalk :) Apple's implementation (pre L Peter >> Deutsch's JIT) was glacially slow. I know: it is still running >> on my fat Mac downstairs :) > > I agree that Apple's version was slow - I maybe never saw the version > with JIT - but ParcPlace Smalltalk ran very well on a FatMac.
I never saw PP Smalltalk on a fat mac. L Peter Deutsch's JIT was a significant improvement. I moved onto Smalltalk/V on a PC, a Tek Smalltalk machine, and then Objective-C. Later I was surprised to find that both Tek and HP had embedded Smalltalk in some of their instruments.
> I had a Smalltalk for my Apple IIe. It needed 128KB so required a IIe > or a II with language card. It used a text based browser and ran > quite acceptably for small programs. Unfortunately, the version I had > was not able to produce a separate executable. > > Unfortunately, after too many moves, I no longer have very much of the > early stuff. I never figured on it becoming valuable.
I'm collecting a bit now; I was surprised the fat Mac only cost £90 inc shipping.
On 10/06/17 04:14, George Neuner wrote:
> On Thu, 8 Jun 2017 14:50:06 +0100, Tom Gardner > <spamjunk@blueyonder.co.uk> wrote: > >> On 08/06/17 11:38, George Neuner wrote: >>> The smallest K&R1 compiler I can remember that *ran* on an 8-bit micro >>> was circa 1983. It was a few hundred KB of code. It ran in 48KB >>> using overlays, needed 2 floppy drives or a hard disk, and required 2 >>> compile passes per source file and a final link pass. >>> >>> It was quite functional (if glacially slow), and included program code >>> overlay support and emulated single precision FP (in VAX format IIRC). >>> Although it targeted a 16-bit virtual machine with 6 16-bit registers, >>> it produced native 8-bit code : i.e. the "16-bit VM" program was not >>> interpreted, but was emulated by 8-bit code. >> >> Whitesmiths? IIRC the symbol table size became a limiting >> factor during linking, so linking became multipass :( > > No, it was the Aztec compiler from Manx. > > I'm not aware that Whitesmith ever ran on an 8-bit machine. The > versions I remember were for CP/M-86 and PC/MS-DOS. My (maybe faulty) > recollection is that Whitesmith was enormous: needing at least 512KB > and a hard disk to be useful. > > I remember at one time using Microsoft's C compiler on 1.2MB floppies > and needing half a dozen disk swaps to compile "hello world!".
I built my first software product like that, a personal filing system. I was very glad when we got a 5MB disk drive, and didn't have to swap disks any more. It was even better when, a few months later (1983) we got MS-DOS 2, with mkdir/rmdir, so not all files were in the root directory any more.
On Fri, 9 Jun 2017 00:06:05 -0700, Don Y <blockedofcourse@foo.invalid>
wrote:

>On 6/8/2017 3:38 AM, George Neuner wrote: >> >> ... adopt a throw-away mentality: replace rather than maintain. >> >> That basically is the idea behind the whole agile/devops/SaaS >> movement: if it doesn't work today, no problem - there will be a new >> release tomorrow [or sooner]. > >I think those are just enablers for PHB's who are afraid to THINK >about what they want (in a product/design) and, instead, want to be shown >what they DON'T want.
IME most people [read "clients"] don't really know what they want until they see what they don't want. Most people go into a software development effort with a reasonable idea of what it should do ... subject to revision if they are allowed to think about it ... but absolutely no idea what it should look like until they see - and reject - several demos. The entire field of "Requirements Analysis" would not exist if people knew what they wanted up front and could articulate it to the developer.
>> For almost any non-system application, you can do without (explicit >> source level) pointer arithmetic. But pointers and the address >> operator are fundamental to function argument passing and returning >> values (note: not "value return"), and it's effectively impossible to >> program in C without using them. > >But, if you'd a formal education in CS, it would be trivial to >semantically map the mechanisms to value and reference concepts. >And, thinking of "reference" in terms of an indication of WHERE >it is! etc.
But only a small fraction of "developers" have any formal CS, CE, or CSE education. In general, the best you can expect is that some of them may have a certificate from a programming course.
>Similarly, many of the "inconsistencies" (to noobs) in the language >could easily be explained with "common sense": >- why aren't strings/arrays passed by value? (think about how > ANYTHING is passed by value; the answer should then be obvious) >- the whole notion of references being IN/OUT's >- gee, const can ensure an IN can't be used as an OUT! >etc.
That's true ... but then you get perfectly reasonable questions like "why aren't parameters marked as IN or OUT?", and have to dance around the fact that the developers of the language were techno-snobs who didn't expect that clueless people ever would be trying to use it. Or "how do I ensure that an OUT can't be used as an IN?" Hmmm???
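[For what it's worth, a sketch of how far C's const actually gets you here; the function names are invented. An IN can be protected, but there is no qualifier that makes an OUT write-only the way Ada's 'out' parameter mode does.]

#include <stdio.h>
#include <stddef.h>

/* 'const' marks an IN: the callee may read the buffer but not write it. */
static size_t checksum(const unsigned char *in, size_t len)
{
    size_t sum = 0;
    while (len--)
        sum += *in++;          /* "*in = 0;" here would not compile */
    return sum;
}

/* A plain pointer is how an OUT is usually spelled, but nothing stops the
   callee from reading through it first -- C has no write-only qualifier. */
static void read_sensor(int *out_value)
{
    *out_value = 42;           /* stand-in for a real acquisition */
}

int main(void)
{
    unsigned char buf[4] = { 1, 2, 3, 4 };
    int sample;

    read_sensor(&sample);
    printf("%zu %d\n", checksum(buf, sizeof buf), sample);   /* 10 42 */
    return 0;
}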
>I think the bigger problem is that folks are (apparently) taught >"keystrokes" instead of "concepts": type THIS to do THAT.
There is an element of that. But also there is the fact that many who can DO cannot effectively teach.

I knew someone who was taking a C programming course, 2 nights a week at a local college. After (almost) every class, he would come to me with questions and confusions about the subject matter. He remarked on several occasions that I was able to teach him more in 10 minutes than he learned in a 90-minute lecture.
>> This pushes newbies to learn about pointers, machine addressing and >> memory management before many are ready. There is plenty else to >> learn without *simultaneously* being burdoned with issues of object >> location. > >Then approach the topics more incrementally. Instead of introducing >the variety of data types (including arrays), introduce the basic >ones. Then, discuss passing arguments -- and how they are COPIED into >a stack frame.
A what frame? I once mentioned "stack" in a response to a question posted in another forum. The poster had proudly announced that he was a senior in a CS program working on a midterm project. He had no clue that "stacks" existed other than as abstract notions, didn't know the CPU had one, and didn't understand why it was needed or how his code was faulty for (ab)using it. So much for "CS" programs.
>This can NATURALLY lead to the fact that you can only "return" one >datum; which the caller would then have to explicitly assign to ><whatever>. "Gee, wouldn't it be nice if we could simply POINT to >the things that we want the function (subroutine) to operate on?"
Huh? I saw once in a textbook that <insert_language> functions can return more than one object. Why is this language so lame?
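[In fairness to that hypothetical textbook, the usual C workaround is worth sketching; names are illustrative. A function still returns exactly one object, but that object can be a struct returned by value - contrast this with writing results back through pointers.]

#include <stdio.h>

/* A C function returns exactly one object -- but that object can be a
   struct, which is how "multiple return values" are usually simulated. */
struct div_result {
    int quotient;
    int remainder;
};

static struct div_result divide(int a, int b)
{
    struct div_result r = { a / b, a % b };
    return r;                 /* the whole struct is copied back by value */
}

int main(void)
{
    struct div_result r = divide(17, 5);
    printf("%d remainder %d\n", r.quotient, r.remainder);   /* 3 remainder 2 */
    return 0;
}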
>I just think the teaching approach is crippled. Its driven by industry >with the goal of getting folks who can crank out code, regardless of >quality or comprehension.
You and I have had this discussion before [at least in part]. CS programs don't teach programming - they teach "computer science". For the most part CS students simply are expected to know. CSE programs are somewhat better because they [purport to] teach project management: selection and use of tool chains, etc. But that can be approached largely in the abstract as well. Many schools are now requiring that a basic programming course be taken by all students, regardless of major. But this is relatively recent, and the language de choix varies widely.
>But you can still expose a student to the concepts of the underlying >machine, regardless of language. Introduce a hypothetical machine... >something with, say, memory and a computation unit. Treat memory >as a set of addressable "locations", etc.
That's covered in a separate course: "Computer Architecture 106". It is only offered Monday morning at 8am, and it costs another 3 credits.
>My first "computer texts" all presented a conceptual model of a >"computer system" -- even though the languages discussed >(e.g., FORTRAN) hid much of that from the casual user.
Every intro computer text introduces the hypothetical machine ... and spends 6-10 pages laboriously stretching out the 2-sentence description you gave above. If you're lucky there will be an illustration of an array of memory cells. Beyond that, you are into specialty texts.
>> For general application programming, there is no need for a language >> to provide mutable pointers: initialized references, together with >> array (or stream) indexing and struct/object member access are >> sufficient for virtually any non-system programming use. This has >> been studied extensively and there is considerable literature on the >> subject. > >But then you force the developer to pick different languages for >different aspects of a problem. How many folks are comfortable >with this "application specific" approach to *a* problem's solution?
Go ask this question in a Lisp forum where writing a little DSL to address some knotty aspect of a problem is par for the course.
>E.g., my OS is coded in C and ASM. Most of the core services are >written in C (so I can provide performance guarantees) with my bogus >IDL to handle RPC/IPC. The RDBMS server is accessed using SQL. >And, "applications" are written in my modified-Limbo.
What does CLIPS use? By my count you are using 6 different languages ... 4 or 5 of which you can virtually count on the next maintainer not knowing. What would you have done differently if C were not available for writing your applications? How exactly would that have impacted your development?
>This (hopefully) "works" because most folks will only be involved >with *one* of these layers. And, folks who are "sufficiently motivated" >to make their additions/modifications *work* can resort to cribbing >from the existing parts of the design -- as "examples" of how they >*could* do things ("Hey, this works; why not just copy it?")
Above you complained about people being taught /"keystrokes" instead of "concepts": type THIS to do THAT./ and something about how that led to no understanding of the subject.
>OTOH, if someone had set out to tackle the whole problem in a single >language/style... <shrug>
It would be a f_ing nightmare. That's precisely *why* you *want* to use a mix of languages: often the best tool is a special purpose domain language.
>>> What you (ideally) want, is to be able to "set a knob" on the 'side' of >>> the language to limit its "potential for misuse". But, to do so in a >>> way that the practitioner doesn't feel intimidated/chastened at its >>> apparent "setting". >> >> Look at Racket's suite of teaching and extension languages. They all >> are implemented over the same core language (an extended Scheme), but >> they leverage the flexibility of the core langauge to offer different >> syntaxes, different semantics, etc. >> >> In the case of the teaching languages, there is reduced functionality, >> combined with more newbie friendly debugging output, etc. >> >> http://racket-lang.org/ >> https://docs.racket-lang.org/htdp-langs/index.html >> > >I'm sure you've worked in environments where the implementation >was "dictated" by what appeared to be arbitrary constraints: >will use this language, these tools, this process, etc. IME, >programmers *chaffe* at such constraints. Almost as if they were >personal affronts ("*I* know the best way to tackle the problem >that *I* have been assigned!"). Imagine how content they'd be >knowing they were being told to eat at the "kiddie table".
If the tool is Racket, it supports creating, using and ad-mixing any special purpose domain languages you are able to come up with. <grin> Racket isn't the only such versatile tool ... it's just the one I happened to have at hand.
>> The modern concept of availability is very different than when you had >> to wait for a company to provide a turnkey solution, or engineer >> something yourself from scratch. Now, if the main distribution >> doesn't run on your platform, you are likely to find source that you >> can port yourself (if you are able), or if there's any significant >> user base, you may find that somebody else already has done it. > >That works for vanilla implementations. It leads to all designs >looking like all others ("Lets use a PC for this!"). This is >fine *if* that's consistent with your product/project goals. >But, if not, you're SoL.
Yeah ... well the world is going that way. My electric toothbrush is a Raspberry PI running Linux.
>An advantage of ASM was that there were *relatively* few operators >and addressing modes, etc.
Depends on the chip. Modern x86_64 chips can have instructions up to 15 bytes (120 bits) long. [No actual instruction *is* that long, but that is the maximum the decoder will accept.]
>>> The (early) languages that we settled on were simple to implement >>> on the development platforms and with the target resources. Its >>> only as targets have become more resource-rich that we're exploring >>> richer execution environments (and the attendant consequences of >>> that for the developer). >> >> There never was any C compiler that ran on any really tiny machine. > >Doesn't have to run *on* a tiny machine. It just had to generate code >that could run on a tiny machine!
Cross compiling is cheating!!!

In most cases, it takes more resources to develop a program than to run it ... so if you have a capable machine for development, why do you need a *small* compiler?

A small runtime footprint is a different issue, but *most* languages [even GC'd ones] are capable of operating with a small footprint. Once upon a time, I created a Scheme-like GC'd language that could do a hell of a lot in 8KB total for the compiler, runtime, a reasonably complex user program and its data.
>E.g., we used an 11 to write our i4004 code; the idea of even something >as crude as an assembler running *ON* an i4004 was laughable!
My point exactly. In any case, you wouldn't write for the i4004 in a compiled language. Pro'ly not for the i8008 either, although I have heard claims that that was possible.
>> The question is not why C was adopted for system programming, or for >> cross development from a capable system to a smaller target. Rather >> the question is why it was so widely adopted for ALL kinds of >> programming on ALL platforms given that were many other reasonable >> choices available. > >Look at them, individually. And, at the types of products that >were being developed in that time frame. > >You could code most algorithms *in* BASIC. But, if forced into a >single-threaded environment, most REAL projects would fall apart >(cuz the processor would be too slow to get around to polling >everything AND doing meaningful work). I wrote a little BASIC >compiler that targeted the 647180 (one of the earliest SoC's). > >It was useless for product development. But, great for throwing >together dog-n-pony's for clients. Allow multiple "program >counters" to walk through ONE executable and you've got an effective >multitasking environment (though with few RT guarantees). Slap >*one* PLCC in a wirewrap socket with some misc signal conditioning/IO >logic and show the client a mockup of a final product in a couple >of weeks. > >[Then, explain why it was going to take several MONTHS to go from >that to a *real* product! :> ] > >SNOBOL is really only useful for text processing. Try implementing >Bresenham's algorithm in it -- or any other DDA. This sort of thing >highlights the differences between "mainframe" applications and >"embedded" applications.
But we aren't talking about *embedded* applications ... we're talking about ALL KINDS of applications on ALL KINDS of machines. You view everything through the embedded lens.
>Ditto Pascal. How much benefit is there in controlling a motor >that requires high level math and flagrant automatic type conversion?
I don't even understand this.
>Smalltalk? You *do* know how much RAM cost in the early 80's??
Yes, I do. I also know that I had a Smalltalk development system that ran on my Apple IIe. Unfortunately, it was a "personal" edition that was not able to create standalone executables ... there was a "professional" version that could, but it was too expensive for me ... so I don't know how small a 6502 Smalltalk program could have been. I also had a Lisp and a Prolog for the IIe. No, they did not run in 4KB, but they were far from useless on an 8-bit machine. George