
Portable Assembly

Started by rickman May 27, 2017
On 6/9/2017 7:14 PM, George Neuner wrote:
> On Fri, 9 Jun 2017 00:06:05 -0700, Don Y <blockedofcourse@foo.invalid> > wrote: > >> On 6/8/2017 3:38 AM, George Neuner wrote: >>> >>> ... adopt a throw-away mentality: replace rather than maintain. >>> >>> That basically is the idea behind the whole agile/devops/SaaS >>> movement: if it doesn't work today, no problem - there will be a new >>> release tomorrow [or sooner]. >> >> I think those are just enablers for PHB's who are afraid to THINK >> about what they want (in a product/design) and, instead, want to be shown >> what they DON'T want. > > IME most people [read "clients"] don't really know what they want > until they see what they don't want.
I've typically only found that to be the case when clients (often using "their own" money) can't decide *if* they want to enter a particular market. They want to see something to gauge their own reaction to it: is it an exciting product or just another warmed over stale idea.

I used to make wooden mockups of devices just to "talk around". Then foamcore. Then, just 3D CAD sketches. But, how things work was always conveyed in prose. No need to see the power light illuminate when the power switch was toggled. If you can't imagine how a user will interact with a device, then you shouldn't be developing that device!

The only "expensive" dog-and-pony's were cases where the underlying technology was unproven. Typically mechanisms that weren't known to behave as "envisioned" without some sort of reassurances (far from a clinical *proof*). I don't have an ME background so can never vouch for mechanical designs; if the client needs reassurance, the ME has to provide it *or* invest in building a real mechanism (which often just "looks pretty" without any associated driving electronics).
> Most people go into a software development effort with a reasonable > idea of what it should do ... subject to revision if they are allowed > to think about it ... but absolutely no idea what it should look like > until they see - and reject - several demos.
That's just a failure of imagination. A good spec (or manual) should allow a developer or potential user to imagine actually using the device before anything has been reified. It's expensive building space shuttles just to figure out what they should look like! :>
> The entire field of "Requirements Analysis" would not exist if people > knew what they wanted up front and could articulate it to the > developer.
IMO, the problem with the agile approach is that there is too much temptation to cling to whatever you've already implemented. And, if you've not thoroughly specified its behavior and characterized its operation, you've got a black box with unknown contents -- that you will now convince yourself does what it "should" (without having designed it with knowledge of that "should"). So, you end up on the wrong initial trajectory and don't discover the problem until you've baked lots of "compensations" into the design. [The hardest thing to do is convince yourself to start over]
>>> For almost any non-system application, you can do without (explicit >>> source level) pointer arithmetic. But pointers and the address >>> operator are fundamental to function argument passing and returning >>> values (note: not "value return"), and it's effectively impossible to >>> program in C without using them. >> >> But, if you'd a formal education in CS, it would be trivial to >> semantically map the mechanisms to value and reference concepts. >> And, thinking of "reference" in terms of an indication of WHERE >> it is! etc. > > But only a small fraction of "developers" have any formal CS, CE, or > CSE education. In general, the best you can expect is that some of > them may have a certificate from a programming course.
You've said that in the past, but I can't wrap my head around it. It's like claiming very few doctors have taken any BIOLOGY courses! Or, that a baker doesn't understand the basic chemistries involved.
>> Similarly, many of the "inconsistencies" (to noobs) in the language >> could easily be explained with "common sense": >> - why aren't strings/arrays passed by value? (think about how >> ANYTHING is passed by value; the answer should then be obvious) >> - the whole notion of references being IN/OUT's >> - gee, const can ensure an IN can't be used as an OUT! >> etc. > > That's true ... but then you get perfectly reasonable questions like > "why aren't parameters marked as IN or OUT?", and have to dance around > the fact that the developers of the language were techno-snobs who > didn't expect that clueless people ever would be trying to use it.
That's a shortcoming of the language's syntax. But, doesn't prevent you from annotating the parameters as such. My IDL requires formal specification because it has to know how to marshal and unmarshal on each end.
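To make that IN/OUT convention concrete, here is a minimal C sketch (a hypothetical function, not from anyone's actual code): a const-qualified pointer plays the IN role, a plain pointer the OUT role. Note that C itself gives you no way to stop a callee from *reading* an OUT before writing it, which is exactly the next question.

    #include <stdio.h>

    /* 'samples' is IN-only: const stops the callee writing through it.
       'mean' is an OUT: the caller's variable is filled in via the pointer. */
    static int compute_mean(const int *samples, size_t count, double *mean)
    {
        double sum = 0.0;
        size_t i;

        if (count == 0)
            return -1;                 /* nothing to average */

        for (i = 0; i < count; i++)
            sum += samples[i];         /* samples[i] = 0; would not compile */

        *mean = sum / (double)count;
        return 0;
    }

    int main(void)
    {
        int data[] = { 3, 5, 8, 13 };
        double m;

        if (compute_mean(data, 4, &m) == 0)
            printf("mean = %.2f\n", m);
        return 0;
    }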
> Or "how do I ensure that an OUT can't be used as an IN?" Hmmm??? > >> I think the bigger problem is that folks are (apparently) taught >> "keystrokes" instead of "concepts": type THIS to do THAT. > > There is a element of that. But also there is the fact that many who > can DO cannot effectively teach.
Of course! SWMBO has been learning that lesson with her artwork. Taking a course from a "great artist" doesn't mean you'll end up learning anything or improving YOUR skillset.
> I knew someone who was taking a C programming course, 2 nights a week > at a local college. After (almost) every class, he would come to me > with questions and confusions about the subject matter. He remarked > on several occasions that I was able to teach him more in 10 minutes > than he learned in a 90 minute lecture.
But I suspect you had a previous relationship with said individual. So, knew how to "relate" concepts to him/her. Many of SWMBO's (female) artist-friends seem to have trouble grok'ing perspective. They read books, take courses, etc. and still can't seem to wrap their heads around the idea. I can sit down with them one-on-one and convey the concept and "mechanisms" in a matter of minutes: "Wow! This is EASY!!" But, I'm not trying to sell a (fat!) book or sign folks up for hours of coursework, etc. And, I know how to pitch the ideas to each person individually, based on my prior knowledge of their backgrounds, etc.
>>> This pushes newbies to learn about pointers, machine addressing and >>> memory management before many are ready. There is plenty else to >>> learn without *simultaneously* being burdoned with issues of object >>> location. >> >> Then approach the topics more incrementally. Instead of introducing >> the variety of data types (including arrays), introduce the basic >> ones. Then, discuss passing arguments -- and how they are COPIED into >> a stack frame. > > A what frame? > > I once mentioned "stack" in a response to a question posted in another > forum. The poster had proudly announced that he was a senior in a CS > program working on a midterm project. He had no clue that "stacks" > existed other than as abstract notions, didn't know the CPU had one, > and didn't understand why it was needed or how his code was faulty for > (ab)using it. > > So much for "CS" programs.
<frown> As time passes, I am becoming more convinced of the quality of my education. This was "freshman-level" coursework: S-machines, lambda calculus, petri nets, formal grammars, etc. [My best friend from school recounted taking some graduate level courses at Northwestern. First day of the *graduate* level AI course, a fellow student walked in with the textbook under his arm. My friend asked to look at it. After thumbing through a few pages, he handed it back: "I already took this course... as a FRESHMAN!"] If I had "free time", I guess it would be interesting to see just what modern teaching is like, in this field.
>> This can NATURALLY lead to the fact that you can only "return" one >> datum; which the caller would then have to explicitly assign to >> <whatever>. "Gee, wouldn't it be nice if we could simply POINT to >> the things that we want the function (subroutine) to operate on?" > > Huh? I saw once in a textbook that <insert_language> functions can > return more than one object. Why is this language so lame?
Limbo makes extensive use of tuples as return values. So, silly not to take advantage of that directly. (changes the syntax of how you'd otherwise use a function in an expression but the benefits outweigh the costs, typ).
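For contrast with Limbo's tuples, a hedged C sketch of the nearest equivalents: a C function returns exactly one value, so multiple results come back either packed into a struct (copied by value, as below) or through pointer arguments. (In Limbo the call site would read roughly `(q, r) := divmod(17, 5);`.)

    #include <stdio.h>

    struct divmod_result {
        int quotient;
        int remainder;
    };

    /* the single "value" returned is the whole struct, copied back */
    static struct divmod_result divmod(int num, int den)
    {
        struct divmod_result r;
        r.quotient  = num / den;
        r.remainder = num % den;
        return r;
    }

    int main(void)
    {
        struct divmod_result r = divmod(17, 5);
        printf("17 = 5*%d + %d\n", r.quotient, r.remainder);
        return 0;
    }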
>> I just think the teaching approach is crippled. Its driven by industry >> with the goal of getting folks who can crank out code, regardless of >> quality or comprehension. > > You and I have had this discussion before [at least in part]. > > CS programs don't teach programming - they teach "computer science". > For the most part CS students simply are expected to know.
I guess I don't understand the difference. In my mind, "programming" is the plebeian skillset.

programming : computer science :: ditch-digging : landscaping

I.e., ANYONE can learn to "program". It can be taught as a rote skill. Just like anyone can be taught to reheat a batch of ready-made cookie dough to "bake cookies".

The CS aspect of my (EE) degree showed me the consequences of different machine architectures, the value of certain characteristics in the design of a language, the duality of recursion/iteration, etc. E.g., when I designed my first CPU, the idea of having an "execution unit" started by the decode of one opcode and CONTINUING while other opcodes were fetched and executed wasn't novel; I'd already seen it done on 1960's hardware.

[And, if the CPU *hardware* can do two -- or more -- things at once, then the idea of a *program* doing two or more things at once is a no-brainer! "Multitasking? meh..."]
> CSE programs are somewhat better because they [purport to] teach > project management: selection and use of tool chains, etc. But that > can be approached largely in the abstract as well.
This was an aspect of "software development" that was NOT stressed in my curriculum. Nor was "how to use a soldering iron" in the EE portion thereof (the focus was more towards theory with the understanding that you could "pick up" the practical skills relatively easily, outside of the classroom)
> Many schools are now requiring that a basic programming course be > taken by all students, regardless of major. But this is relatively > recent, and the language de choix varies widely.
I know every EE was required to take some set of "software" courses. Having attended an engineering school, I suspect that was true of virtually every "major". Even 40 years ago, it was hard to imagine any engineering career that wouldn't require that capability. [OTOH, I wouldn't trust one of the ME's to design a programming language anymore than I'd trust an EE/CS to design a *bridge*!]
>> But you can still expose a student to the concepts of the underlying >> machine, regardless of language. Introduce a hypothetical machine... >> something with, say, memory and a computation unit. Treat memory >> as a set of addressable "locations", etc. > > That's covered in a separate course: "Computer Architecture 106". It > is only offered Monday morning at 8am, and it costs another 3 credits.
I just can't imagine how you could explain "programming" a machine to a person without that person first understanding how the machine works. It's not like teaching someone to *drive*, where the student can remain blissfully ignorant of the fact that there are many small explosions happening each second, under the hood!

[How would you teach a car mechanic to perform repairs if he didn't understand what the components he was replacing *did* or how they interacted with the other components?]
>> My first "computer texts" all presented a conceptual model of a >> "computer system" -- even though the languages discussed >> (e.g., FORTRAN) hid much of that from the casual user. > > Every intro computer text introduces the hypothetical machine ... and > spends 6-10 pages laboriously stretching out the 2 sentence decription > you gave above. If you're lucky there will be an illustration of an > array of memory cells. > > Beyond that, you are into specialty texts.
My first courses (pre-college) went to great lengths to explain the hardware of the machine, DASDs vs. SASDs, components of access times, overlapped I/O, instruction formats (in a generic sense -- PC's hadn't been invented, yet), binary-decimal conversion, etc. But, then again, these were new ideas at the time, not old saws.
>>> For general application programming, there is no need for a language >>> to provide mutable pointers: initialized references, together with >>> array (or stream) indexing and struct/object member access are >>> sufficient for virtually any non-system programming use. This has >>> been studied extensively and there is considerable literature on the >>> subject. >> >> But then you force the developer to pick different languages for >> different aspects of a problem. How many folks are comfortable >> with this "application specific" approach to *a* problem's solution? > > Go ask this question in a Lisp forum where writing a little DSL to > address some knotty aspect of a problem is par for the course. > >> E.g., my OS is coded in C and ASM. Most of the core services are >> written in C (so I can provide performance guarantees) with my bogus >> IDL to handle RPC/IPC. The RDBMS server is accessed using SQL. >> And, "applications" are written in my modified-Limbo. > > What does CLIPS use?
It's hard to consider CLIPS's "language" to be a real "programming language" (e.g., Turing complete -- though it probably *is*, but with ghastly syntax!). It bears the same sort of relationship that SQL has to RDBMS, SNOBOL to string processing, etc. It's primarily concerned with asserting and retracting facts based on patterns of recognized facts.

While you *can* code an "action" routine in its "native" language, I find it easier to invoke an external routine (C) that uses the API exported by CLIPS to do all the work. In my case, it would be difficult to code an "action routine" entirely in CLIPS and be able to access the rest of the system via the service-based interfaces I've implemented.
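As a rough illustration only -- assuming the classic single-environment CLIPS 6.x C API (InitializeEnvironment/Load/Reset/AssertString/Run), with the rule file and fact format invented for the example -- the division of labor looks something like this: C asserts facts and runs the agenda, CLIPS does the pattern matching, and any "action" routines call back into C to do the real work.

    #include <stdio.h>
    #include "clips.h"

    int main(void)
    {
        InitializeEnvironment();

        /* hypothetical rule base; return conventions per the CLIPS
           Advanced Programming Guide */
        if (Load("rules.clp") != 1) {
            fprintf(stderr, "could not load rule base\n");
            return 1;
        }

        Reset();                                    /* establish initial facts    */
        AssertString("(sensor (id 3) (value 72))"); /* hand CLIPS something new   */
        Run(-1L);                                   /* fire rules until quiescent */

        return 0;
    }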
> By my count you are using 6 different languages ... 4 or 5 of which > you can virtually count on the next maintainer not knowing.
Yes. But I'm not designing a typical application; rather, a *system* of applications, services, OS, etc. I wouldn't expect one language to EFFICIENTLY tackle all of them. And, I'd have to build all of those components from scratch if I wanted complete control over their implementation languages (I have no desire to write an RDBMS just so I can AVOID using SQL).
> What would you have done differently if C were not available for > writing your applications? How exactly would that have impacted your > development?
The applications are written in Limbo. I'd considered other scripting languages for that role -- LOTS of other languages! -- but Limbo already had much of the support I needed to layer onto the "structure" of my system. Did I want to invent a language and a hosting VM (to make it easy to migrate applications at run-time)? Add multithreading hooks to an existing language? etc. [I was disappointed with most language choices as they all tend to rely heavily on punctuation and other symbols that aren't "voiced" when reading the code] C just gives me lots of bang for the buck. I could implement all of this on a bunch of 8b processors -- writing interpreters to allow more complex machines to APPEAR to run on the simpler hardware, creating virtual address spaces to exceed the limits of those tiny processors, etc. But, all that would come at a huge performance cost. Easier just to *buy* faster processors and run code written in more abstract languages.
>> This (hopefully) "works" because most folks will only be involved >> with *one* of these layers. And, folks who are "sufficiently motivated" >> to make their additions/modifications *work* can resort to cribbing >>from the existing parts of the design -- as "examples" of how they >> *could* do things ("Hey, this works; why not just copy it?") > > Above you complained about people being taught /"keystrokes" instead > of "concepts": type THIS to do THAT./ and something about how that > led to no understanding of the subject.
There's a difference between the types of people involved.

I don't expect anyone from "People's Software Institute #234B" to be writing anything beyond application layer scripts. So, they only need to understand the scripting language and the range of services available to them. They don't have to worry about how I've implemented each of these services. Or, how I move their application from processor node 3 to node 78 without corrupting any data -- or, without their even KNOWING that they've been moved!

Likewise, someone writing a new service (in C) need not be concerned with the scripting language. Interfacing to it can be done by copying an interface for an existing service. And, interfacing to the OS can as easily mimic the code from a similar service.

You obviously have to understand the CONCEPT of "multiplication" in order to avail yourself of it. But, do you care if it's implemented in a purely combinatorial fashion? Or, iteratively with a bunch of CSA's? Or, by tiny elves living in a hollow tree?

In my case, you have to understand that each function/subroutine invocation just *appears* to be a subroutine/function invocation. That, in reality, it can be running code on another processor in another building -- concurrent with what you are NOW doing (this is a significant conceptual difference between traditional "programming" where you consider everything to be a series of operations -- even in a multithreaded environment!).

You also have to understand that your "program" can abend or be aborted at any time. And, that persistent data has *structure* (imposed by the DBMS) instead of being just BLOBs. And, that agents/clients have capabilities that are finer-grained than "permissions" in conventional systems.

But, you don't have to understand how any of these things are implemented in order to use them correctly.
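A hypothetical sketch (names and transport invented, not Don's actual IDL output) of what "appears to be a function call" means in practice: the caller sees an ordinary C function, while the stub body marshals the arguments into a request and blocks on the reply from wherever the service actually runs.

    #include <stddef.h>
    #include <stdint.h>

    /* assumed transport primitive, standing in for the real IPC layer */
    extern int ipc_call(uint32_t service, const void *req, size_t req_len,
                        void *rsp, size_t rsp_len);

    #define MOTOR_SERVICE   42u          /* made-up service identifier */
    #define OP_SET_SPEED    1u

    struct set_speed_req { uint32_t op; int32_t rpm; };
    struct set_speed_rsp { int32_t status; };

    /* client stub: callers neither know nor care which node serves this */
    int motor_set_speed(int32_t rpm)
    {
        struct set_speed_req req;
        struct set_speed_rsp rsp;

        req.op  = OP_SET_SPEED;
        req.rpm = rpm;

        if (ipc_call(MOTOR_SERVICE, &req, sizeof req, &rsp, sizeof rsp) != 0)
            return -1;           /* the call itself failed (node down, aborted) */
        return rsp.status;       /* result produced by the remote service       */
    }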
>> OTOH, if someone had set out to tackle the whole problem in a single >> language/style... <shrug> > > It would be a f_ing nightmare. That's precisely *why* you *want* to > use a mix of languages: often the best tool is a special purpose > domain language.
But that complicates the design (and maintenance) effort(s) -- by requiring staff with those skillsets to remain available. Imagine if you had to have a VLSI person on hand all the time in case the silicon in your CPU needed to be changed...
>>> The modern concept of availability is very different than when you had >>> to wait for a company to provide a turnkey solution, or engineer >>> something yourself from scratch. Now, if the main distribution >>> doesn't run on your platform, you are likely to find source that you >>> can port yourself (if you are able), or if there's any significant >>> user base, you may find that somebody else already has done it. >> >> That works for vanilla implementations. It leads to all designs >> looking like all others ("Lets use a PC for this!"). This is >> fine *if* that's consistent with your product/project goals. >> But, if not, you're SoL. > > Yeah ... well the world is going that way. My electric toothbrush is > a Raspberry PI running Linux.
I suspect my electric toothbrush has a small MCU at its heart.
>> An advantage of ASM was that there were *relatively* few operators >> and addressing modes, etc. > > Depends on the chip. Modern x86_64 chips can have instructions up to > 15 bytes (120 bits) long. [No actual instruction *is* that long, but > that is the maximum the decoder will accept.]
But the means by which the "source" is converted to the "binary" is well defined. Different EA modes require different data to be present in the instruction byte stream -- and, in predefined places relative to the start of the instruction (or specific locations in memory). And, SUB behaved essentially the same as ADD -- with the same range of options available, etc. [You might have to remember that certain instructions expected certain parameters to be implicitly present in specific registers, etc.]
>>>> The (early) languages that we settled on were simple to implement >>>> on the development platforms and with the target resources. Its >>>> only as targets have become more resource-rich that we're exploring >>>> richer execution environments (and the attendant consequences of >>>> that for the developer). >>> >>> There never was any C compiler that ran on any really tiny machine. >> >> Doesn't have to run *on* a tiny machine. It just had to generate code >> that could run on a tiny machine! > > Cross compiling is cheating!!! > > In most cases, it takes more resources to develop a program than to > run it ... so if you have a capable machine for development, why do > need a *small* compiler?
Because not all development machines were particularly capable. My first project was i4004 based, developed on an 11. The newer version of the same product was i8085 hosted and developed on an MDS800. IIRC, the MDS800 was *8080* based and limited to 64KB of memory (no fancy paging, bank switching, etc.) I think a second 8080 ran the I/O's. So, building an object image was lots of passes, lots of "egg scrambling" (the floppies always sounded like they were grinding themselves to death) I.e., if we'd opted to replace the EPROM in our product with SRAM (or DRAM) and add some floppies, the product could have hosted the tools.
> A small runtime footprint is a different issue, but *most* languages > [even GC'd ones] are capable of operating with a small footprint. > > Once upon a time, I created a Scheme-like GC'd language that could do > a hell of a lot in 8KB total for the compiler, runtime, a reasonably > complex user program and its data. > >> E.g., we used an 11 to write our i4004 code; the idea of even something >> as crude as an assembler running *ON* an i4004 was laughable! > > My point exactly. In any case, you wouldn't write for the i4004 in a > compiled language. Pro'ly not for the i8008 either, although I have > heard claims that that was possible.
I have a C compiler that targets the 8080, hosted on CP/M. Likewise, a Pascal compiler and a BASIC compiler (and I think an M2 compiler) all hosted on that 8085 CP/M machine. The problem with HLL's on small machines is the helper routines and standard libraries can quickly eat up ALL of your address space! I designed several z180-based products in C -- but the (bizarre!) bank switching capabilities of the processor would let me do things like stack the object code for different libraries in the BANK section and essentially do "far" calls through a bank-switching intermediary that the compiler would automatically invoke for me. By cleverly designing the memory map, you could have large DATA and large CODE -- at the expense of lengthened call/return times (of course, the interrupt system had to remain accessible at all times so you worked hard to keep that tiny lest you waste address space catering to it).
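A very rough sketch of that bank-switching intermediary idea (the helper names are assumptions; on a real Z180 the bank base register, BBR, is an on-chip I/O register, so the two helpers below would be a couple of in/out instructions in assembly, and the compiler emitted the equivalent of far_call() automatically):

    #include <stdint.h>

    extern uint8_t z180_read_bbr(void);          /* assumed asm helper */
    extern void    z180_write_bbr(uint8_t val);  /* assumed asm helper */

    typedef void (*banked_fn)(void);

    /* The trampoline (and the interrupt handlers) must live in common,
       always-mapped memory so a bank switch never pulls the currently
       executing code out from under the CPU. */
    void far_call(uint8_t callee_bank, banked_fn fn)
    {
        uint8_t caller_bank = z180_read_bbr();   /* remember the caller's bank */

        z180_write_bbr(callee_bank);             /* map the callee's code in   */
        fn();                                    /* now an ordinary near call  */
        z180_write_bbr(caller_bank);             /* restore before returning   */
    }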
>>> The question is not why C was adopted for system programming, or for >>> cross development from a capable system to a smaller target. Rather >>> the question is why it was so widely adopted for ALL kinds of >>> programming on ALL platforms given that were many other reasonable >>> choices available. >> >> Look at them, individually. And, at the types of products that >> were being developed in that time frame. >> >> You could code most algorithms *in* BASIC. But, if forced into a >> single-threaded environment, most REAL projects would fall apart >> (cuz the processor would be too slow to get around to polling >> everything AND doing meaningful work). I wrote a little BASIC >> compiler that targeted the 647180 (one of the earliest SoC's). >> >> It was useless for product development. But, great for throwing >> together dog-n-pony's for clients. Allow multiple "program >> counters" to walk through ONE executable and you've got an effective >> multitasking environment (though with few RT guarantees). Slap >> *one* PLCC in a wirewrap socket with some misc signal conditioning/IO >> logic and show the client a mockup of a final product in a couple >> of weeks. >> >> [Then, explain why it was going to take several MONTHS to go from >> that to a *real* product! :> ] >> >> SNOBOL is really only useful for text processing. Try implementing >> Bresenham's algorithm in it -- or any other DDA. This sort of thing >> highlights the differences between "mainframe" applications and >> "embedded" applications. > > But we aren't talking about *embedded* applications ... we're talking > about ALL KINDS of applications on ALL KINDS of machines.
Sure we are! This is C.A.E! :> If we're talking about all applications, then are we also dragging big mainframes into the mix? Where's mention of PL/1 and the other big iron running it?
> You view everything through the embedded lens. > >> Ditto Pascal. How much benefit is there in controlling a motor >> that requires high level math and flagrant automatic type conversion? > > I don't even understand this.
Motor control is a *relatively* simple algorithm. No *need* for complex data types, automatic type casts, etc. And, what you really want is deterministic behavior; you want to know that a particular set of "instructions" (in a HLL?) will execute in a particular, predictable time frame without worrying about some run-time support mechanism (e.g., GC) kicking in and confounding the expected behavior. [Or, having to take explicit measures to avoid this because of the choice of HLL]
>> Smalltalk? You *do* know how much RAM cost in the early 80's?? > > Yes, I do. > > I also know that I had a Smalltalk development system that ran on my > Apple IIe. Unfortunately, it was a "personal" edition that was not > able to create standalone executables ... there was a "professional" > version that could, but it was too expensive for me ... so I don't > know how small a 6502 Smalltalk program could have been. > > I also had a Lisp and a Prolog for the IIe. No, they did not run in > 4KB, but they were far from useless on an 8-bit machine.
As I said, I did a lot with 8b hardware. But, you often didn't have a lot of resources "to spare" with that hardware.

I recall going through an 8085 design and counting the number of subroutine invocations (CALL's) for each specific subroutine. Then, replacing the CALLs to the most frequently accessed subroutine with "restart" instructions (RST) -- essentially a one-byte CALL that vectored through a specific hard-coded address in the memory map. I.e., each such replacement trimmed *2* bytes from the size of the executable. JUST TWO! We did that for seven of the eight possible RST's. (RST 0 is hard to cheaply use as it doubles as the RESET entry point.)

The goal being to trim a few score bytes out of the executable so we could eliminate *one* 2KB EPROM from the BoM (because we didn't need the entire EPROM, just a few score bytes of it -- so why pay for a $50 (!!) chip if you only need a tiny piece of it? And, why pay for ANY of it if you can replace 3-byte instructions with 1-byte instructions??)
Dimiter_Popoff wrote:
> On 08.6.2017 &#1075;. 13:38, George Neuner wrote: >> ... >> >> The question is not why C was adopted for system programming, or for >> cross development from a capable system to a smaller target. Rather >> the question is why it was so widely adopted for ALL kinds of >> programming on ALL platforms given that were many other reasonable >> choices available. > > My take on that is it happened because people needed a low level > language, some sort of assembler - and the widest spread CPU was > the x86 with a register model for which no sane person would consider > programming larger pieces of code. > I am sure there have been people who have done > it but they can't have been exactly sane :) (i.e. have been insane in > a way most people would have envied them for their insanity).
I doubt that, but there's something to be said for architectures that limit the complexity available to the financial/investor classes. The trouble with large swaths of assembly is that organizations aren't stable enough to support maintainers long enough to keep things running. The game here is funding, not function.
> So C made x86 usable - and the combination (C+x86) is the main factor > which led to the absurd situation we have today, where code which > used to take kilobytes of memory takes gigabytes (not because of the > inefficiency of compilers, just because of where most programmers > have been led to). >
Most of the people who need gigs of memory aren't native C speakers in the first place. It took the execrable web protocols and "relational databases" to make things utterly reek of doom. These are fine for toy programs to get you through a course, but no fun at all for Real Work(tm). Behold the $400, WiFi-enabled juicer: the Juicero.
> Dimiter > > ====================================================== > Dimiter Popoff, TGI http://www.tgi-sci.com > ====================================================== > http://www.flickr.com/photos/didi_tgi/ > >
-- Les Cargill
upsidedown@downunder.com wrote:
> On Thu, 08 Jun 2017 15:58:51 +0300, Dimiter_Popoff <dp@tgi-sci.com> > wrote: > >> On 08.6.2017 ?. 13:38, George Neuner wrote: >>> ... >>> >>> The question is not why C was adopted for system programming, or for >>> cross development from a capable system to a smaller target. Rather >>> the question is why it was so widely adopted for ALL kinds of >>> programming on ALL platforms given that were many other reasonable >>> choices available. >> >> My take on that is it happened because people needed a low level >> language, some sort of assembler - and the widest spread CPU was >> the x86 with a register model for which no sane person would consider >> programming larger pieces of code. >> I am sure there have been people who have done >> it but they can't have been exactly sane :) (i.e. have been insane in >> a way most people would have envied them for their insanity). >> So C made x86 usable - and the combination (C+x86) is the main factor >> which led to the absurd situation we have today, where code which >> used to take kilobytes of memory takes gigabytes (not because of the >> inefficiency of compilers, just because of where most programmers >> have been led to). >> >> Dimiter >> >> ====================================================== >> Dimiter Popoff, TGI http://www.tgi-sci.com >> ====================================================== >> http://www.flickr.com/photos/didi_tgi/ >> > > PL/M-80 and PL/M-86 were quite reasonable intermediate languages. >
PL/M was fairly hard to maintain in. A couple lines of C would replace a page of PL/M, in some cases.
> The same also applies to BLISS for PDP-10/PDP-11/VAX/Alpha and > recently some Intel HW. > > The problem why these languages did not become popular was that the > hardware vendors did want to make money by compiler sales. >
Gates identified massive cognitive dissonance against the idea of selling software en masse and set the tools price quite low. "Hardware vendors" like IBM had OS/360, where the O/S cost more than the machine. People still don't want to pay for software.
> Some HW companies wanting to boost their HW sales did give away > compilers and development software for free and that way boost their > HW sale. >
That's more generally true of chip vendors, who use the tools as an enabler for sales. -- Les Cargill
On Fri, 9 Jun 2017 22:50:31 -0700, Don Y <blockedofcourse@foo.invalid>
wrote:

>On 6/9/2017 7:14 PM, George Neuner wrote: >> On Fri, 9 Jun 2017 00:06:05 -0700, Don Y wrote: >> >>> ..., if you'd a formal education in CS, it would be trivial to >>> semantically map the mechanisms to value and reference concepts. >>> And, thinking of "reference" in terms of an indication of WHERE >>> it is! etc. >> >> But only a small fraction of "developers" have any formal CS, CE, or >> CSE education. In general, the best you can expect is that some of >> them may have a certificate from a programming course. > >You've said that in the past, but I can't wrap my head around it. >It's like claiming very few doctors have taken any BIOLOGY courses! >Or, that a baker doesn't understand the basic chemistries involved.
Comparatively few bakers actually can tell you the reason why yeast makes dough rise, or why you need to add salt to make things taste sweet. It's enough for many people to know that something works - they don't have a need to know how or why.

WRT "developers":

A whole lot of "applications" are written by people in professions unrelated to software development. They become "developers" de facto when their programs get passed around and used by others.

Consider all the scientists, mathematicians, statisticians, etc., who write data analysis programs in the course of their work.

Consider all the data entry clerks / "accidental" database admins who end up having to learn SQL and form coding to do their jobs.

Consider the frustrated office workers who study VBscript or Powershell on their lunch hour and start automating their manual processes to be more productive.

: < more examples elided - use your imagination >

Some of these "non-professional" programs end up being very effective and reliable. The better ones frequently are passed around, modified, extended, and eventually are coaxed into new uses that the original developer never dreamed of.

Then consider the legions of (semi)professional coders who maybe took a few programming courses, or who learned on their own, and went to work writing, e.g., web applications, Android apps, etc.

It has been estimated that over 90% of all software today is written by people who have no formal CS/CE/CSE or IS/IT education, and 40% of all programmers are employed primarily to do something other than software development.

Note: programming courses != CS/CE/CSE education
>> I knew someone who was taking a C programming course, 2 nights a week >> at a local college. After (almost) every class, he would come to me >> with questions and confusions about the subject matter. He remarked >> on several occasions that I was able to teach him more in 10 minutes >> than he learned in a 90 minute lecture. > >But I suspect you had a previous relationship with said individual. >So, knew how to "relate" concepts to him/her.
In this case, yes. But I also had some prior teaching experience. I rarely have much trouble explaining complicated subjects to others. As you have noted in the past, it is largely a matter of finding common ground with a student and drawing appropriate analogies.
>> CS programs don't teach programming - they teach "computer science". >> For the most part CS students simply are expected to know. > >I guess I don't understand the difference. > >In my mind, "programming" is the plebian skillset.
Only sort of. Programming is fundamental to computer *engineering*, but that is a different discipline.

Computer "science" is concerned with
- computational methods,
- language semantics,
- ways to bridge the semantic gap between languages and methods,
- design and study of algorithms,
- design of better programming languages [for some "better"]
- ...

Programming per se really is not a requirement for a lot of it. A good foundation of math and logic is more necessary.
>> CSE programs are somewhat better because they [purport to] teach >> project management: selection and use of tool chains, etc. But that >> can be approached largely in the abstract as well. > >This was an aspect of "software development" that was NOT stressed >in my curriculum. Nor was "how to use a soldering iron" in the >EE portion thereof (the focus was more towards theory with the >understanding that you could "pick up" the practical skills relatively >easily, outside of the classroom)
Exactly! If you can't learn to solder on your own, you don't belong here. CS regards programming in the same way.
>I just can't imagine how you could explain "programming" a machine to a >person without that person first understanding how the machine works.
Take a browse through some classics:

- Abelson, Sussman & Sussman, "Structure and Interpretation of Computer Programs" aka SICP

- Friedman, Wand & Haynes, "Essentials of Programming Languages" aka EOPL

There are many printings of each of these. I happen to have SICP 2nd Ed and EOPL 8th Ed on my shelf. Both were - and are still - widely used in undergrad CS programs.

SICP doesn't mention any concrete machine representation until page 491, and then a hypothetical machine is considered with respect to emulating its behavior.

EOPL doesn't refer to any concrete machine at all.
>> What would you have done differently if C were not available for >> writing your applications? How exactly would that have impacted your >> development? > >The applications are written in Limbo. I'd considered other scripting >languages for that role -- LOTS of other languages! -- but Limbo already >had much of the support I needed to layer onto the "structure" of my >system. Did I want to invent a language and a hosting VM (to make it >easy to migrate applications at run-time)? Add multithreading hooks >to an existing language? etc. > >[I was disappointed with most language choices as they all tend to >rely heavily on punctuation and other symbols that aren't "voiced" >when reading the code]
Write in BrainF_ck ... that'll fix them. Very few languages have been deliberately designed to be read. The very idea has negative connotations because the example everyone jumps to is COBOL - which was too verbose. It's also true that reading and writing effort are inversely related, and programmers always seem to want to type fewer characters - hence the proliferation of languages whose code looks suspiciously like line noise. I don't know about you, but I haven't seen a teletype connected to a computer since about 1972.
>You obviously have to understand the CONCEPT of "multiplication" in >order to avail yourself of it. But, do you care if it's implemented >in a purely combinatorial fashion? Or, iteratively with a bunch of CSA's? >Or, by tiny elves living in a hollow tree?
Rabbits are best for multiplication.
>In my case, you have to understand that each function/subroutine invocation >just *appears* to be a subroutine/function invocation. That, in reality, >it can be running code on another processor in another building -- concurrent >with what you are NOW doing (this is a significant conceptual difference >between traditional "programming" where you consider everything to be a >series of operations -- even in a multithreaded environment!). > >You also have to understand that your "program" can abend or be aborted >at any time. And, that persistent data has *structure* (imposed by >the DBMS) instead of being just BLOBs. And, that agents/clients have >capabilities that are finer-grained than "permissions" in conventional >systems. > >But, you don't have to understand how any of these things are implemented >in order to use them correctly.
Which is one of the unspoken points of those books I mentioned above: that (quite a lot of) programming is an exercise in logic that is machine independent.

Obviously I am extrapolating and paraphrasing, and the authors did not have device programming in mind when they wrote the books.

Nevertheless, there is a lot of truth in it: identifying required functionality, designing program logic, evaluating and choosing algorithms, etc. ... all may be *guided* in situ by specific knowledge of the target machine, but they are skills which are independent of it.

YMMV,
George
On 6/10/2017 6:01 PM, George Neuner wrote:
> On Fri, 9 Jun 2017 22:50:31 -0700, Don Y <blockedofcourse@foo.invalid> > wrote: > >> On 6/9/2017 7:14 PM, George Neuner wrote: >>> On Fri, 9 Jun 2017 00:06:05 -0700, Don Y wrote: >>> >>>> ..., if you'd a formal education in CS, it would be trivial to >>>> semantically map the mechanisms to value and reference concepts. >>>> And, thinking of "reference" in terms of an indication of WHERE >>>> it is! etc. >>> >>> But only a small fraction of "developers" have any formal CS, CE, or >>> CSE education. In general, the best you can expect is that some of >>> them may have a certificate from a programming course. >> >> You've said that in the past, but I can't wrap my head around it. >> It's like claiming very few doctors have taken any BIOLOGY courses! >> Or, that a baker doesn't understand the basic chemistries involved. > > Comparitively few bakers actually can tell you the reason why yeast > makes dough rise, or why you need to add salt to make things taste > sweet. It's enough for many people to know that something works - > they don't have a need to know how or why.
I guess different experiences. Growing up, I learned these sorts of things by asking countless questions of the vendors we frequented. Yeast vs. baking soda as leavening agent; baking soda vs. powder; vs. adding cream of tartar; cake flour vs. bread flour; white sugar vs. brown sugar; vege shortening vs. butter (vs. oleo/oil); sugar as a "wet" ingredient; etc.

Our favorite baker was a weekly visit. He'd take me in the back room (much to the chagrin of other customers) and show me the various bits of equipment, what he was making at the time, his "tricks" to eke a bit more life out of something approaching its "best by" date, etc.

[I wish I'd pestered him, more, to learn about donuts and, esp, bagels as he made the *best* of both! OTOH, probably too many details for a youngster to commit to memory...]

The unfortunate thing (re: US style of measurement by volume) is that you don't have as fine control over some of the ingredients (e.g., what proportion of "other ingredients" per "egg unit").

[I've debated purchasing a scale just to weigh eggs! Not to tweak the amount of other ingredients proportionately but, rather, to select a "set" of eggs closest to a target weight for a particular set of "other ingredients". Instead, I do that "by feel", presently (one of the aspects of my Rx's that makes them "non-portable" -- the other being my deliberate failure to upgrade the written Rx's as I improve upon them). Leaves folks wondering why things never come out "as good" when THEY make them... <grin>]
> WRT "developers": > > A whole lot of "applications" are written by people in profressions > unrelated to software development. The become "developers" de facto > when their programs get passed around and used by others. > > Consider all the scientists, mathematicians, statisticians, etc., who > write data analysis programs in the course of their work. > > Consider all the data entry clerks / "accidental" database admins who > end up having to learn SQL and form coding to do their jobs. > > Consider the frustrated office workers who study VBscript or > Powershell on their lunch hour and start automating their manual > processes to be more productive. > > : < more examples elided - use your imagination > > > Some of these "non-professional" programs end up being very effective > and reliable. The better ones frequently are passed around, modified, > extended, and eventually are coaxed into new uses that the original > developer never dreamed of. > > Then consider the legions of (semi)professional coders who maybe took > a few programming courses, or who learned on their own, and went to > work writing, e.g., web applications, Android apps, etc. > > It has been estimated that over 90% of all software today is written > by people who have no formal CS/CE/CSE or IS/IT education, and 40% of > all programmers are employed primarily to do something other than > software development. > > Note: programming courses != CS/CE/CSE education
And these folks tend to use languages (and tools) that are tailored to those sorts of "applications". Hence the reason I include a scripting language in my design; no desire to force folks to understand data types, overflow, mathematical precision, etc. "I have a room that is 13 ft, 2-1/4 inches by 18 ft, 3-3/8 inches. Roughly how many 10cm x 10cm tiles will it take to cover the floor?" Why should the user have to normalize to some particular unit of measure? All he wants, at the end, is a dimensionless *count*. [I was recently musing over the number of SOIC8 devices that could fit on the surface of a sphere having a radius equal to the average distance of Pluto from the Sun (idea came from a novel I was reading). And, how much that SOIC8 collection would *weigh*...]
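A hedged back-of-the-envelope sketch, in C, of the bookkeeping that question hides -- the unit juggling a casual user shouldn't have to care about (it comes out to roughly 2240 whole tiles, ignoring cuts and grout lines):

    #include <stdio.h>
    #include <math.h>

    #define CM_PER_INCH 2.54

    static double feet_inches_to_cm(double feet, double inches)
    {
        return (feet * 12.0 + inches) * CM_PER_INCH;
    }

    int main(void)
    {
        double width  = feet_inches_to_cm(13.0, 2.25);    /* 13 ft 2-1/4 in */
        double length = feet_inches_to_cm(18.0, 3.375);   /* 18 ft 3-3/8 in */
        double tiles  = (width / 10.0) * (length / 10.0); /* 10 cm squares  */

        printf("about %.0f tiles\n", ceil(tiles));        /* ~2240          */
        return 0;
    }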
>>> I knew someone who was taking a C programming course, 2 nights a week >>> at a local college. After (almost) every class, he would come to me >>> with questions and confusions about the subject matter. He remarked >>> on several occasions that I was able to teach him more in 10 minutes >>> than he learned in a 90 minute lecture. >> >> But I suspect you had a previous relationship with said individual. >> So, knew how to "relate" concepts to him/her. > > In this case, yes. But I also had some prior teaching experience. > > I rarely have much trouble explaining complicated subjects to others. > As you have noted in the past, it is largely a matter of finding > common ground with a student and drawing appropriate analogies.
Exactly. I had a lady friend many years ago to whom I'd always explain computer-related issues (more typ operational ones than theoretical ones) using "kitchen analogies". In a playful mood, one day, she chided me for the misogynistic examples. So, I started explaining things in terms of salacious "bedroom activities". Didn't take long for her to request a return to the kitchen analogies! :>
>>> CS programs don't teach programming - they teach "computer science". >>> For the most part CS students simply are expected to know. >> >> I guess I don't understand the difference. >> >> In my mind, "programming" is the plebian skillset. > > Only sort of. Programming is fundamental to computer *engineering*, > but that is a different discipline. > > Computer "science" is concerned with > - computational methods, > - language semantics, > - ways to bridge the semantic gap between languages and methods, > - design and study of algorithms, > - design of better programming languages [for some "better"] > - ... > Programming per se really is not a requirement for a lot of it. A > good foundation of math and logic is more necessary.
Petri nets, lambda calculus, S-machines, etc. But, to become *practical*, these ideas have to eventually be bound to concrete representations. You need ways of recording algorithms and verifying that they do, in fact, meet their desired goals. I know no one who makes a living dealing in abstractions, entirely. Even my physics friends have lives beyond a blackboard.
>>> CSE programs are somewhat better because they [purport to] teach >>> project management: selection and use of tool chains, etc. But that >>> can be approached largely in the abstract as well. >> >> This was an aspect of "software development" that was NOT stressed >> in my curriculum. Nor was "how to use a soldering iron" in the >> EE portion thereof (the focus was more towards theory with the >> understanding that you could "pick up" the practical skills relatively >> easily, outside of the classroom) > > Exactly! If you can't learn to solder on your own, you don't belong > here. CS regards programming in the same way.
But you can't examine algorithms and characterize their behaviors, costs, etc. without being able to reify them. You can't just magically invent an abstract language that supports: solve_homework_problem(identifier)
>> I just can't imagine how you could explain "programming" a machine to a >> person without that person first understanding how the machine works. > > Take a browse through some classics: > > - Abelson, Sussman & Sussman, "Structure and Interpretation of > Computer Programs" aka SICP > > - Friedman, Wand & Haynes, "Essentials of Programming Languages" > aka EOPL
All written long after I'd graduated. :>

Most (all?) of my college CS courses didn't have "bound textbooks". Instead, we had collections of handouts coupled with notes that formed our "texts". In some cases, the handouts were "bound" (e.g., a cheap "perfect binding" paperback) for convenience as the instructors were writing the texts *from* their teachings.

Sussman taught one of my favorite courses and I'm chagrined that all I have to show for it are the handouts and my notes -- it would have been nicer to have a lengthier text that I could explore at my leisure (esp after the fact).

The books that I have on the subject predate my time in college (I attended classes at a local college at night and on weekends while I was in Jr High and High School). Many of the terms used in them have long since gone out of style (e.g., DASD, VTOC, etc.) I still have my flowcharting template and some FORTRAN coding forms for punched cards... I suspect *somewhere* these are still used! :>

Other texts from that period are amusing to examine to see how terminology and approaches to problems have changed. "Real-time" being one of the most maligned terms! (e.g., Caxton's book)
> There are many printings of each of these. I happen to have SICP 2nd > Ed and EOPL 8th Ed on my shelf. > > Both were - and are still - widely used in undergrad CS programs. > > SICP doesn't mention any concrete machine representation until page > 491, and then a hypothetical machine is considered with respect to > emulating its behavior. > > EOPL doesn't refer to any concrete machine at all. > >>> What would you have done differently if C were not available for >>> writing your applications? How exactly would that have impacted your >>> development? >> >> The applications are written in Limbo. I'd considered other scripting >> languages for that role -- LOTS of other languages! -- but Limbo already >> had much of the support I needed to layer onto the "structure" of my >> system. Did I want to invent a language and a hosting VM (to make it >> easy to migrate applications at run-time)? Add multithreading hooks >> to an existing language? etc. >> >> [I was disappointed with most language choices as they all tend to >> rely heavily on punctuation and other symbols that aren't "voiced" >> when reading the code] > > Write in BrainF_ck ... that'll fix them. > > Very few languages have been deliberately designed to be read. The > very idea has negative connotations because the example everyone jumps > to is COBOL - which was too verbose.
Janus (Consistent System) was equally verbose. It's what I think of when I'm writing SQL :< An 80 column display was dreadfully inadequate!
> It's also true that reading and writing effort are inversely related, > and programmers always seem to want to type fewer characters - hence > the proliferation of languages whose code looks suspiciously like line > noise.
Yes, but if you're expecting to exchange code snippets with folks who can't *see*, the imprecision of "speaking" a program's contents is fraught with opportunity for screwups -- even among "professionals" who know where certain punctuation marks are *implied*. Try dictating "Hello World" to a newbie over the phone...

I actually considered altering the expression syntax to deliberately render parens unnecessary (and illegal). I.e., if an expression can have two different meanings with/without parens, then ONLY the meaning without parens would be supported. But, this added lots of superfluous statements just to meet that goal *and* quickly overloads STM as you try to keep track of which "component statements" you've already encountered:

  area = (width_feet + (width_inches/12)) * (length_feet + (length_inches/12))

becomes:

  width = width_feet + width_inches/12
  length = length_feet + length_inches/12
  area = length * width

[Imagine you were, instead, computing the *perimeter* of a 6-walled room!]
> I don't know about you, but I haven't seen a teletype connected to a > computer since about 1972.
Actually, I have one :>
>> You obviously have to understand the CONCEPT of "multiplication" in >> order to avail yourself of it. But, do you care if it's implemented >> in a purely combinatorial fashion? Or, iteratively with a bunch of CSA's? >> Or, by tiny elves living in a hollow tree? > > Rabbits are best for multiplication.
Or, Adders and log tables! (bad childhood joke)
>> In my case, you have to understand that each function/subroutine invocation >> just *appears* to be a subroutine/function invocation. That, in reality, >> it can be running code on another processor in another building -- concurrent >> with what you are NOW doing (this is a significant conceptual difference >> between traditional "programming" where you consider everything to be a >> series of operations -- even in a multithreaded environment!). >> >> You also have to understand that your "program" can abend or be aborted >> at any time. And, that persistent data has *structure* (imposed by >> the DBMS) instead of being just BLOBs. And, that agents/clients have >> capabilities that are finer-grained than "permissions" in conventional >> systems. >> >> But, you don't have to understand how any of these things are implemented >> in order to use them correctly. > > Which is one of the unspoken points of those I books mentioned above: > that (quite a lot of) programming is an exercise in logic that is > machine independent. > > Obviously I am extrapolating and paraphrasing, and the authors did not > have device programming in mind when they wrote the books. > > Nevertheless, there is lot of truth in it: identifying required > functionality, designing program logic, evaluating and choosing > algorithms, etc. ... all may be *guided* in situ by specific knowledge > of the target machine, but they are skills which are independent of > it.
But I see programming (C.A.E) as having moved far beyond the sorts of algorithms you would run on a desktop, mainframe, etc. It's no longer just about this operator in combination with these arguments yields this result.

When I was younger, I'd frequently use "changing a flat tire" as an example to coax folks into describing a "familiar" algorithm. It was especially helpful at pointing out all the little details that are so easy to forget (omit) that can render an implementation ineffective, buggy, etc.

"Wonderful! Where did you get the spare tire from?"
"The trunk!"
"And, how did you get it out of the trunk?"
"Ah, I see... 'I *opened* the trunk!'"
"And, you did this while seated behind the wheel?"
"Oh, OK. 'I got out of the car and OPENED the trunk'"
"While you were driving down the road?"
"Grrr... 'I pulled over to the shoulder and stopped the car; then got out'"
"And got hit by a passing vehicle?"

Now, it's not just about the language and the target hardware but, also, the execution environment, OS, etc. Why are people surprised to discover that it's possible for <something> to see partial results of <something else's> actions? (i.e., the need for atomic operations) Or, to be frustrated that such problems are so hard to track down?

(In a multithreaded environment,) we all know that the time between execution of instruction N and instruction N+1 can vary -- from whatever the "instruction rate" of the underlying machine happens to be up to the time it takes to service all threads at this, and higher, priority... up to "indefinite". Yet, how many folks are consciously aware of that as they write code?

A "programmer" can beat on a printf() statement until he manages to stumble on the correct combination of format specifiers, flags, arguments, etc. But, will it ever occur to him that the printf() can fail, at RUNtime? Or, the NEXT printf() might fail while this one didn't?

How many "programmers" know how much stack to allocate to each thread? How do they decide -- wait for a stack fence to be breached and then increase the number and try again? Are they ever *sure* that they've got the correct, "worst case" value?

I.e., there are just too many details of successful program deployment that don't work when you get away from the rich and tame "classroom environment". This is especially true as we move towards scenarios where things "talk to" each other, more (for folks who aren't prepared to deal with a malloc/printf *failing*, how do they address "network programming"? Or, RPC/RMI? etc.)

It's easy to see how someone can coax a piece of code to work in a desktop setting -- and fall flat on their face when exposed to a less friendly environment (i.e., The Real World).

[Cookies tonight (while it's below 100F) and build a new machine to replace this one. Replace toilets tomorrow (replaced flange in master bath today).]
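Returning to the printf() point above: a minimal sketch (hypothetical helper function) of the check that almost never gets written -- printf() reports trouble through a negative return value:

    #include <stdio.h>

    int log_reading(double value)
    {
        int n = printf("reading: %.2f\n", value);

        if (n < 0) {
            /* The stream is broken: full disk, dead pipe, failed mount, ...
               In a long-running system this must be handled, not assumed away. */
            return -1;
        }
        return 0;
    }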
On Sun, 11 Jun 2017 00:39:41 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

>On 6/10/2017 6:01 PM, George Neuner wrote: >> On Fri, 9 Jun 2017 22:50:31 -0700, Don Y <blockedofcourse@foo.invalid> >> wrote:
>[I was recently musing over the number of SOIC8 devices that could fit >on the surface of a sphere having a radius equal to the average distance >of Pluto from the Sun (idea came from a novel I was reading). And, how >much that SOIC8 collection would *weigh*...]
Reading about Dyson spheres are we? So how many trillion-trillion devices would it take?
>But you can't examine algorithms and characterize their behaviors, >costs, etc. without being able to reify them.
To a 1st approximation, you can. E.g., given just an equation, you can count the arithmetic operations and approximate the number of operand reads and result writes. Certain analyses are very sensitive to the language being considered. E.g., you'll get different results from analyzing an algorithm expressed in C vs the same algorithm expressed in assembler because the assembler version exposes low level minutia that is hidden by the C version.
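For example, a sketch of that kind of counting, done purely from the
source text of an equation with no particular machine assumed (the
function names are mine, for illustration):

    /* y = a*x^3 + b*x^2 + c*x + d, written two ways.  Counting straight
     * off the expression: the naive form costs 6 multiplies and 3 adds,
     * the Horner form 3 multiplies and 3 adds -- before we know anything
     * about the target's multiplier.                                     */
    double poly_naive(double a, double b, double c, double d, double x)
    {
        return a*x*x*x + b*x*x + c*x + d;      /* 6 *, 3 + */
    }

    double poly_horner(double a, double b, double c, double d, double x)
    {
        return ((a*x + b)*x + c)*x + d;        /* 3 *, 3 + */
    }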
>You can't just magically invent an abstract language that supports: > solve_homework_problem(identifier)
You can invent it ... you just [currently] can't implement it. And you probably even can get a patent on it since the USPTO no longer requires working prototypes.
>>> I just can't imagine how you could explain "programming" a machine to a >>> person without that person first understanding how the machine works. >> >> Take a browse through some classics: >> >> - Abelson, Sussman & Sussman, "Structure and Interpretation of >> Computer Programs" aka SICP >> >> - Friedman, Wand & Haynes, "Essentials of Programming Languages" >> aka EOPL > >All written long after I'd graduated. :>
SICP and EOPL both were being written during the time I was in grad school. I had some courses with Mitch Wand and I'm sure I was used as a guinea pig for EOPL. I acquired them later because they subsequently became famous as foundation material for legions of CS students.
>Most (all?) of my college >CS courses didn't have "bound textbooks". Instead, we had collections >of handouts coupled with notes that formed our "texts". In some cases, >the handouts were "bound" (e.g., a cheap "perfect binding" paperback) >for convenience as the instructors were writing the texts *from* >their teachings.
I'm not *that* far behind you. Many of my courses did have books, but quite a few of those books were early (1st or 2nd) editions. I have a 1st edition on denotational semantics that is pre-press and contains inserts of hand drawn illustrations.
>Sussman taught one of my favorite courses and I'm chagrined that >all I have to show for it are the handouts and my notes -- it would >have been nicer to have a lengthier text that I could explore at >my leisure (esp after the fact).
I once met Gerry Sussman at a seminar.  Never had the opportunity to
take one of his classes.
>The books that I have on the subject predate my time in college >(I attended classes at a local colleges at night and on weekends >while I was in Jr High and High School). Many of the terms used >in them have long since gone out of style (e.g., DASD, VTOC, etc.) >I still have my flowcharting template and some FORTRAN coding forms >for punched cards... I suspect *somewhere* these are still used! :>
I have the Fortran IV manual my father used when he was in grad school. <grin>
>I actually considered altering the expression syntax to deliberately >render parens unnecessary (and illegal). I.e., if an expression >can have two different meanings with/without parens, then ONLY the >meaning without parens would be supported.
Indentation sensitive syntax (I-expressions) is a recurring idea to rid the world of parentheses. Given the popularity of Python, iexprs may eventually find a future. OTOH, many people - me included - are philosophically opposed to the idea of significant whitespace. If you want syntax visualization, use a structure editor.
>But, this added lots of superfluous statements just to meet that
>goal *and* quickly overloads STM as you try to keep track of
>which "component statements" you've already encountered:
>  area = (width_feet+(width_inches/12))*(length_feet+(length_inches/12))
>becomes:
>  width = width_feet + width_inches/12
>  length = length_feet + length_inches/12
>  area = length * width
>[Imagine you were, instead, computing the *perimeter* of a 6 walled room!]
??? For what definition of "STM"? Transactional memory - if that's what you mean - shouldn't require refactoring code in that way.
>> ... identifying required >> functionality, designing program logic, evaluating and choosing >> algorithms, etc. ... all may be *guided* in situ by specific knowledge >> of the target machine, but they are skills which are independent of >> it. > >But I see programming (C.A.E) as having moved far beyond the sorts of >algorithms you would run on a desktop, mainframe, etc. Its no longer >just about this operator in combination with these arguments yields >this result.
As long as you don't dismiss desktops and servers, etc. [Mainframes and minis as distinct concepts are mostly passe. Super and cluster computers, however, are very important]. Despite the current IoT and BYO device fads, devices are not all there are. Judging from some in the computer press, you'd think the legions of office workers in the world would need nothing more than iPads and Kinkos. That isn't even close to being true.
>I.e., there are just too many details of successful program deployment >that don't work when you get away from the rich and tame "classroom >environment". This is especially true as we move towards scenarios >where things "talk to" each other, more (for folks who aren't prepared >to deal with a malloc/printf *failing*, how do they address "network >programming"? Or, RPC/RMI? etc.)
Again: CS is about computation and language theory, not about systems
engineering.

I got into it a while ago with a VC guy I met at a party.  He wouldn't
(let companies he was backing) hire anyone more than 5 years out of
school because he thought their skills were out of date.

I told him I would hesitate to hire anyone *less* than 5 years out of
school because most new graduates don't have any skills and need time
to acquire them.  I also said something about how the average new CS
grad would struggle to implement a way out of a wet paper bag.

Obviously there is a component of this that is industry specific, but
few (if any) industries change so fast that skills learned 5 years ago
are useless today.  For me, it was a scary look into the (lack of)
mind of modern business.

YMMV,
George
On 6/12/2017 8:27 PM, George Neuner wrote:
> On Sun, 11 Jun 2017 00:39:41 -0700, Don Y > <blockedofcourse@foo.invalid> wrote: > >> On 6/10/2017 6:01 PM, George Neuner wrote: >>> On Fri, 9 Jun 2017 22:50:31 -0700, Don Y <blockedofcourse@foo.invalid> >>> wrote: > >> [I was recently musing over the number of SOIC8 devices that could fit >> on the surface of a sphere having a radius equal to the average distance >> of Pluto from the Sun (idea came from a novel I was reading). And, how >> much that SOIC8 collection would *weigh*...] > > Reading about Dyson spheres are we? So how many trillion-trillion > devices would it take?
Matrioshka Brain -- "concentric" Dyson spheres each powered by the waste heat of the innermore spheres. I didn't do the math as I couldn't figure out what a good representative weight for a "wired" SOIC SoC might be...
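FWIW, the arithmetic is simple enough if you're willing to guess at the
numbers; everything below except Pluto's ~39.5 AU mean orbit is an
assumed value (the package footprint and the per-part weight
especially):

    #include <stdio.h>

    int main(void)
    {
        const double PI  = 3.14159265358979;
        const double AU  = 1.496e11;            /* metres per AU                */
        const double r   = 39.5 * AU;           /* Pluto's mean orbital radius  */
        const double pkg = 6.0e-3 * 5.0e-3;     /* ASSUMED SOIC8 footprint, m^2 */
        const double g   = 0.075;               /* ASSUMED grams per wired part */

        double area  = 4.0 * PI * r * r;        /* sphere surface area, m^2     */
        double parts = area / pkg;
        double kg    = parts * g / 1000.0;

        printf("area  = %.2e m^2\n", area);     /* ~4.4e26 m^2                    */
        printf("parts = %.2e\n", parts);        /* ~1.5e31 devices                */
        printf("mass  = %.2e kg\n", kg);        /* ~1e27 kg: about half a Jupiter */
        return 0;
    }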
>> But you can't examine algorithms and characterize their behaviors, >> costs, etc. without being able to reify them. > > To a 1st approximation, you can. E.g., given just an equation, you > can count the arithmetic operations and approximate the number of > operand reads and result writes.
Yes, but only for evaluating *relative* costs/merits of algorithms. It assumes you can "value" the costs/performance of the different operators in some "intuitive" manner. This doesn't always hold. E.g., a more traditionally costly operation might be "native" while the *expected* traditional operation has to be approximated or emulated.
> Certain analyses are very sensitive to the language being considered. > E.g., you'll get different results from analyzing an algorithm > expressed in C vs the same algorithm expressed in assembler because > the assembler version exposes low level minutia that is hidden by the > C version. > >> You can't just magically invent an abstract language that supports: >> solve_homework_problem(identifier) > > You can invent it ... you just [currently] can't implement it.
Sure you can! You just have to find someone sufficiently motivated to apply their meatware to the problem! There's nothing specifying the *time* that the implementation needs to take to perform the operation!
> And you probably even can get a patent on it since the USPTO no longer > requires working prototypes. > >>>> I just can't imagine how you could explain "programming" a machine to a >>>> person without that person first understanding how the machine works. >>> >>> Take a browse through some classics: >>> >>> - Abelson, Sussman & Sussman, "Structure and Interpretation of >>> Computer Programs" aka SICP >>> >>> - Friedman, Wand & Haynes, "Essentials of Programming Languages" >>> aka EOPL >> >> All written long after I'd graduated. :> > > SICP and EOPL both were being written during the time I was in grad > school. I had some courses with Mitch Wand and I'm sure I was used as > a guinea pig for EOPL.
Having not seen SICP, it's possible the notes for GS's class found
their way into it -- or, at least, *shaped* it.
> I acquired them later because they subsequently became famous as > foundation material for legions of CS students. > >> Most (all?) of my college >> CS courses didn't have "bound textbooks". Instead, we had collections >> of handouts coupled with notes that formed our "texts". In some cases, >> the handouts were "bound" (e.g., a cheap "perfect binding" paperback) >> for convenience as the instructors were writing the texts *from* >> their teachings. > > I'm not *that* far behind you. Many of my courses did have books, but > quite a few of those books were early (1st or 2nd) editions.
It's not just *when* you got your education but what the folks teaching
opted to use as their "teaching materials".  Most of my "CS" professors
obviously considered themselves "budding authors", as each seemed
unable to find a suitable text from which to teach and opted, instead,
to write their own.  OTOH, all my *other* classes (including the "EE"
ones) had *real* textbooks.
> I have a 1st edition on denotational semantics that is pre-press and > contains inserts of hand drawn illustrations. > >> Sussman taught one of my favorite courses and I'm chagrined that >> all I have to show for it are the handouts and my notes -- it would >> have been nicer to have a lengthier text that I could explore at >> my leisure (esp after the fact). > > I met once Gerry Sussman at a seminar. Never had the opportunity to > take one of his classes.
Unfortunately, I never realized the sorts of folks I was surrounded by, at the time. It was "just school", in my mind.
>> I actually considered altering the expression syntax to deliberately >> render parens unnecessary (and illegal). I.e., if an expression >> can have two different meanings with/without parens, then ONLY the >> meaning without parens would be supported. > > Indentation sensitive syntax (I-expressions) is a recurring idea to > rid the world of parentheses. > > Given the popularity of Python, iexprs may eventually find a future. > OTOH, many people - me included - are philosophically opposed to the > idea of significant whitespace. > > If you want syntax visualization, use a structure editor.
Still doesn't work without *vision*!
>> But, this added lots of superfluous statements just to meet that
>> goal *and* quickly overloads STM as you try to keep track of
>> which "component statements" you've already encountered:
>>  area = (width_feet+(width_inches/12))*(length_feet+(length_inches/12))
>> becomes:
>>  width = width_feet + width_inches/12
>>  length = length_feet + length_inches/12
>>  area = length * width
>> [Imagine you were, instead, computing the *perimeter* of a 6 walled room!]
>
> ??? For what definition of "STM"?
>
> Transactional memory - if that's what you mean - shouldn't require
> refactoring code in that way.
How many nested levels of parens can you keep track of if I'm dictating the code to you over the phone and your eyes are closed? Will I be disciplined enough to remember to alert you to the presence of every punctuation mark (e.g., paren)? Will you be agile enough to notice when I miss one?
>>> ... identifying required >>> functionality, designing program logic, evaluating and choosing >>> algorithms, etc. ... all may be *guided* in situ by specific knowledge >>> of the target machine, but they are skills which are independent of >>> it. >> >> But I see programming (C.A.E) as having moved far beyond the sorts of >> algorithms you would run on a desktop, mainframe, etc. Its no longer >> just about this operator in combination with these arguments yields >> this result. > > As long as you don't dismiss desktops and servers, etc. > > [Mainframes and minis as distinct concepts are mostly passe. Super > and cluster computers, however, are very important].
"Mainframe" is a colloquial overloading to reference "big machines" that have their own dedicated homes. The data center servicing your bank is a mainframe -- despite the fact that it might be built of hundreds of blade servers, etc. "Desktop" is the sort of "appliance" that a normal user relates to when you say "computer". He *won't* think of his phone even though he knows its one. He certainly won't think of his microwave oven, furnace, doorbell, etc.
> Despite the current IoT and BYO device fads, devices are not all there > are. Judging from some in the computer press, you'd think the legions > of office workers in the world would need nothing more than iPads and > Kinkos. That isn't even close to being true. > >> I.e., there are just too many details of successful program deployment >> that don't work when you get away from the rich and tame "classroom >> environment". This is especially true as we move towards scenarios >> where things "talk to" each other, more (for folks who aren't prepared >> to deal with a malloc/printf *failing*, how do they address "network >> programming"? Or, RPC/RMI? etc.) > > Again: CS is about computation and language theory, not about systems > engineering. > > I got into it while ago with a VC guy I met at a party. He wouldn't > (let companies he was backing) hire anyone more than 5 years out of > school because he thought their skills were out of date. > > I told him I would hesitate to hire anyone *less* than 5 years out of > school because most new graduates don't have any skills and need time > to acquire them. I also said something about how the average new CS > grad would struggle to implement a way out of a wet paper bag.
I think it depends on the "pedigree".

When I was hired at my first job, the boss said, outright, "I don't
expect you to be productive, today.  I hired you for 'tomorrow'; if I
wanted someone to be productive today, I'd have hired from the other
side of the river -- and planned on mothballing him next year!"

From the few folks that I interact with, I have learned to see his
point.  Most don't know anything about the "history" of their
technology or the gyrations as it "experimented" with different things.
They see "The Cloud" as something new and exciting -- and don't see the
parallels to "time sharing", centralized computing, etc. that the
industry routinely bounces through.  Or, think it amazingly clever to
turn a PC into an X terminal ("Um, would you like to see some REAL
ones??  You know, the idea that you're PILFERING?")

Employers/clients want to know if you've done THIS before (amusing if
it's a cutting edge project -- that NO ONE has done before!) as if that
somehow makes you MORE qualified to solve their problem(s).  I guess
they don't expect people to LEARN...
> Obviously there is a component of this that is industry specific, but > few (if any) industries change so fast that skills learned 5 years ago > are useless today. For me, it was a scary look into the (lack of) > mind of modern business.
A lady friend once told me "Management is easy! No one wants to take risks or make decisions so, if YOU will, they'll gladly hide behind you!" /Pro bono/ day tomorrow. Last sub 100F day for at least 10 days (103 on Wed climbing linearly to 115 next Mon with a LOW of 82F) so I'm hoping to get my *ss out of here bright and early in the morning! <frown> 45 and raining, you say... :>
On Tuesday, June 13, 2017 at 1:42:17 AM UTC-4, Don Y wrote:
> On 6/12/2017 8:27 PM, George Neuner wrote: > > On Sun, 11 Jun 2017 00:39:41 -0700, Don Y > > <blockedofcourse@foo.invalid> wrote:
[]
> > >> But you can't examine algorithms and characterize their behaviors, > >> costs, etc. without being able to reify them. > > > > To a 1st approximation, you can. E.g., given just an equation, you > > can count the arithmetic operations and approximate the number of > > operand reads and result writes. > > Yes, but only for evaluating *relative* costs/merits of algorithms. > It assumes you can "value" the costs/performance of the different > operators in some "intuitive" manner.
I'm jumping in late here so forgive me if you covered this. Algorithmic analysis is generally order of magnitude (the familiar Big O notation) and independent of hardware implementation.
> > This doesn't always hold. E.g., a more traditionally costly operation > might be "native" while the *expected* traditional operation has to be > approximated or emulated.
I'm not quite sure what you are saying here, Don. What's the difference between "native" and *expected*? Is it that you *expected* the system to have a floating point multiply, but the "native" hardware does not so it is emulated? [] first:
> >> You can't just magically invent an abstract language that supports: > >> solve_homework_problem(identifier) > >
second:
> > You can invent it ... you just [currently] can't implement it. >
third:
> Sure you can! You just have to find someone sufficiently motivated to > apply their meatware to the problem! There's nothing specifying the > *time* that the implementation needs to take to perform the operation!
I'm confused here too, Don, unless the quotation levels are off. Is it you that said the first and third comments above? They seem contradictory. (or else you are referencing different contexts?) [lots of other interesting stuff deleted] ed
Hi Ed,

On 6/13/2017 2:24 PM, Ed Prochak wrote:

>>>> But you can't examine algorithms and characterize their behaviors, >>>> costs, etc. without being able to reify them. >>> >>> To a 1st approximation, you can. E.g., given just an equation, you >>> can count the arithmetic operations and approximate the number of >>> operand reads and result writes. >> >> Yes, but only for evaluating *relative* costs/merits of algorithms. >> It assumes you can "value" the costs/performance of the different >> operators in some "intuitive" manner. > > I'm jumping in late here so forgive me if you covered this. > > Algorithmic analysis is generally order of magnitude (the familiar > Big O notation) and independent of hardware implementation.
Correct. But, it's only "order of" assessments. I.e., is this a constant time algorithm? Linear time? Quadratic? Exponential? etc. There's a lot of handwaving in O() evaluations of algorithms. What's the relative cost of multiplication vs. addition operators? Division? etc. With O() you're just trying to evaluate the relative merits of one approach over another in gross terms.
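For instance, both of the loops below are O(n), yet on a small target
without hardware floating point their per-element costs can differ by a
couple of orders of magnitude (a sketch only; the actual ratio depends
entirely on the machine and its libraries):

    #include <math.h>

    /* O(n): one integer add per element */
    long sum(const int *v, int n)
    {
        long s = 0;
        for (int i = 0; i < n; i++)
            s += v[i];
        return s;
    }

    /* Also O(n): a float multiply and add per element, plus a divide
     * and a sqrt() at the end -- possibly all emulated in software.   */
    double rms(const int *v, int n)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += (double)v[i] * v[i];
        return (n > 0) ? sqrt(s / n) : 0.0;
    }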
>> This doesn't always hold. E.g., a more traditionally costly operation >> might be "native" while the *expected* traditional operation has to be >> approximated or emulated. > > I'm not quite sure what you are saying here, Don. > What's the difference between "native" and *expected*? > > Is it that you *expected* the system to have a floating point > multiply, but the "native" hardware does not so it is emulated?
Or, exactly the reverse: that it had the more complex operator but not
the "simpler" (expected) one.

We *expect* integer operations to be cheap.  We expect logical
operators to be <= additive operators <= multiplication, etc.  But,
that's not always the case.  E.g., having a "multiply-and-accumulate"
instruction (common in DSP) can eliminate the need for an "add" opcode
(i.e., multiplication is as "expensive" as addition).

I've designed (specialty) CPU's that had hardware to support direct
(native) implementation of DDA's.  But, trying to perform a simple
"logical" operation would require a page of code (because there were no
logical operators, so they'd have to be emulated).  Atari (?) made a
processor that could only draw arcs -- never straight lines (despite
the fact that line segments SHOULD be easier).

Limbo initially took the approach of having five "base" data types:
   - byte
   - int    ("long")
   - big    ("long long")
   - real   ("double")
   - string
(These are supported directly by the underlying VM.)

No pointer types.  No shorts, short-reals (floats), etc.  If you want
something beyond an integer, you go balls out and get a double!

The haughtiness of always relying on "gold" instead of "lead" proved
impractical in the real world.  So, there are now things like native
support for Q-format -- and beyond (i.e., you can effectively declare
the value of the rightmost bit AND a maximum value representable by
that particular "fixed" type):

   hourlyassessment: type fixed(0.1, 40.0);
   timespentworking, timeinmeetings: hourlyassessment;

   LETTER: con 11.0;
   DPI: con 400;
   inches: type fixed(1/DPI, LETTER);
   topmargin: inches;

Likewise, consider the later inclusion of REFERENCES to functions as a
compromise in the "no pointers" mentality.  (You don't *need*
references, as you can map functions to integer identifiers and then
use big "case" statements (the equivalent of "switch") to invoke one of
N functions as indicated by that identifier -- just far less
efficiently; enough so that you'd make a change to the LANGUAGE to
support it?)

While not strictly on-topic, it's vindication that "rough"
approximations of the cost of operators can often be far enough astray
that you need to refine the costs "in practice".  I.e., if
"multiplication" could just be considered to have *a* cost, then
there'd be no need for all those numerical data types.

So... you can use O-notation to compare the relative costs of different
algorithms on SOME SET of preconceived available operators and data
types.  You have some implicit preconceived notion of what the "real"
machine is like.  But, mapping this to a real implementation is fraught
with opportunities to come to the wrong conclusions (e.g., if you think
arcs are expensive because they're built from multiple chords, you'll
favor algorithms that minimize the number of chords used).
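By way of illustration, here is roughly what a declaration like
"hourlyassessment: type fixed(0.1, 40.0)" buys you, hand-rolled in C --
the names, scale handling and limit check below are mine, not Limbo's:

    #include <stdint.h>
    #include <stdio.h>

    typedef int16_t hours_q;                  /* value of the LSB = 0.1 hour */
    #define HOURS_SCALE  10
    #define HOURS_MAX    ((hours_q)(40.0 * HOURS_SCALE))

    static hours_q hours_from_double(double h) { return (hours_q)(h * HOURS_SCALE + 0.5); }
    static double  hours_to_double(hours_q q)  { return (double)q / HOURS_SCALE; }

    int main(void)
    {
        hours_q working  = hours_from_double(32.5);
        hours_q meetings = hours_from_double(6.4);
        hours_q total    = working + meetings;       /* plain integer add */

        if (total > HOURS_MAX)
            puts("over the 40.0 hour ceiling");
        printf("total = %.1f hours\n", hours_to_double(total));
        return 0;
    }

The add itself is an ordinary integer add; the 0.1-hour resolution and
the 40.0 ceiling live entirely in the conversions and the range check
-- which is exactly the bookkeeping the language takes over when the
type system supports it directly.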
> first: >>>> You can't just magically invent an abstract language that supports: >>>> solve_homework_problem(identifier) >>> > second: >>> You can invent it ... you just [currently] can't implement it. >> > third: >> Sure you can! You just have to find someone sufficiently motivated to >> apply their meatware to the problem! There's nothing specifying the >> *time* that the implementation needs to take to perform the operation! > > I'm confused here too, Don, unless the quotation levels are off. > Is it you that said the first and third comments above?
Yes.
> They seem contradictory. (or else you are referencing > different contexts?)
Read them again; the subject changes:

1. You can't invent that magical language that allows you to solve
   homework assignments with a single operator <grin>

2. You can INVENT it, but can't IMPLEMENT it (i.e., it's just a
   conceptual language that doesn't run on any REAL machine)

3. You *can* IMPLEMENT it; find a flunky to do the work FOR you!
   (tongue firmly in cheek)
At Wednesday 14 June 2017 12:57, Don Y wrote:

> Correct. But, it's only "order of" assessments. I.e., is this > a constant time algorithm? Linear time? Quadratic? Exponential? > etc. > > There's a lot of handwaving in O() evaluations of algorithms. > What's the relative cost of multiplication vs. addition operators? > Division? etc.
I have found many such evaluations to be grossly wrong.  We had some
iterative solutions praised for taking far fewer iterative steps than
others.  But nobody took into account that each of those steps was much
more complicated and took much more CPU time than a step of the
solutions that needed more iterations.  And quite often the simpler
solution worked for the more general case, whereas the "better" one
worked only under limited conditions.
--
Reinhardt
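A classic small example of that trade, sketched in C (the tolerances
and starting guesses below are arbitrary): Newton converges in a
handful of iterations where bisection needs around thirty, but each
Newton step pays for that with a divide, and it also assumes a
well-behaved function and a sane starting point.

    #include <stdio.h>

    static double sqrt_bisect(double x, int *steps)
    {
        double lo = 0.0, hi = (x > 1.0) ? x : 1.0, mid = 0.0;
        for (*steps = 0; hi - lo > 1e-9; (*steps)++) {
            mid = 0.5 * (lo + hi);           /* one multiply, one add     */
            if (mid * mid > x) hi = mid; else lo = mid;
        }
        return mid;
    }

    static double sqrt_newton(double x, int *steps)
    {
        double g = (x > 1.0) ? x : 1.0, prev;
        *steps = 0;
        do {
            prev = g;
            g = 0.5 * (g + x / g);           /* the divide is the costly part */
            (*steps)++;
        } while (prev - g > 1e-9 || g - prev > 1e-9);
        return g;
    }

    int main(void)
    {
        int nb, nn;
        printf("bisect: %.6f in %d steps\n", sqrt_bisect(2.0, &nb), nb);
        printf("newton: %.6f in %d steps\n", sqrt_newton(2.0, &nn), nn);
        return 0;
    }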