Reply by Don Y July 2, 2014
Hi George,

On 7/1/2014 7:33 AM, George Neuner wrote:

>> There's no one forcing you to release YOUR CODE, now.  Or, forcing you to *stop* working on a test suite, documentation, etc. AFTER a "beta" release.  Yet, despite the opportunities to finish it up properly (the way you suggest you WISH you could do in your 9-to-5), you, instead, lose interest and hope someone else cleans up the mess you've left.
>
> You're still assuming that these efforts somehow are "professional".
Well, I try to give folks the benefit of the doubt! :> I.e., they may not be *competent* but I *assume* their hearts are in the right places!
> A formal statement of commitment to support raises my estimation of a project, but I never assume anything is a professional effort unless I am expected to pay $$$ to acquire it.
That's why I am gearing so much of my effort to allowing for commercial exploitation -- ensuring the license doesn't drag in any GPL-ish taint that would discourage a business entity from embracing the code/hardware and "customizing it" (without fear of having to share those "enhancements", fixes, etc.) I.e., offer them something for their investment as *they* will be expected to offer something to their *customers*!
>> Yes, most "software consumers" just want to know "which button do I press".  They don't want to understand the product.
>>
>> But, they would be just as happy with a CLOSED, PROPRIETARY solution released as a "free" binary!  (I.e., they just don't want to "spend money"!)  The whole point of FOSS is to enable others to modify and improve upon the initial work.  One would think you would want to facilitate this!
>
> Back to the tape drawer analogy.
I guess I just do things differently. There are very few things that I invest time in and *save* that I don't also invest time in documenting, testing, etc. I just don't trust my memory of what something *may* have done 1, 2, ... 20 years ago without a paper trail. If it isn't worth that effort, then I literally *don't* save it! (e.g., the scheme implementation of the "rule converter" doesn't exist anywhere but my memory)
>>> Sans a formal verification effort, it's almost impossible to guarantee that a project of any size is bug free ... the best you can hope for is that any bugs that are there are innocuous if they are triggered.
>>
>> You don't have to ensure "bug free".  But, you should be able to point to a methodical attempt undertaken by you -- and repeatable as well as augmentable by others -- that demonstrates the conditions that you have verified *in* your code base.  So, I can "suspect" a problem, examine your test cases and see (or NOT see) that you have (or have NOT) tested for that case before I waste additional time chasing a possibly unlikely issue.
>
> Again you are assuming a "product".  In most cases, whatever it is ISN'T A PRODUCT in the minds of its developer(s).
Again, I keep (hoping to) give them the benefit of the doubt. Yet, am not naive enough to ignore the perceived reality: "What they release isn't a product but, rather, a smattering of code that did what *they* wanted it to do (perhaps) -- for the effort they were willing to invest." It would be like me releasing the scheme "hack" for the rule converter thinking "it costs me nothing to publish it, maybe someone will find it of use". Instead, I take the attitude of releasing a more "finished" tool that is *intended* to be supported (even if not by me!). So, any efforts in that light go into *the* tool and not the "earlier" tool. (keep efforts focused where I think they should be)
>> It shouldn't matter whether you are worrying about stockholders or a life saved/lost.  Unless you are implicitly saying "this product isn't important... it doesn't HAVE TO WORK!  It has NO INTRINSIC VALUE -- because I make no guarantees that it does ANYTHING!".
>
> "This generates those interesting licensing agreements in which the vendor warrants nothing, not even that there is media on the disk, while holding the buyer to line after line of restrictions on what he can do with this thing he supposedly bought."
>    -- Jerry Pournelle
Yup. I am convinced that you should never make any claims as to what your hardware can do. Put all the functionality in the software/firmware. Then, sell the product "with an included disc" (USB stick, SD card, etc.) that represents a "suggested application for this hardware" (even if it is the ONLY potential application!). Then, any problems can be shrugged off as "not warrantied".
>> All I am asking the FOSS developer to do is *state* what he claims the value of his "product" to be.  And, show me why he believes that to be the case.
>
> That's somewhat back-assward.
Why? He should just release some code and let YOU decide what you *think* it should do? Give it a catchy name and *infer* what it *might* do??
> Software has no intrinsic value - any value YOU associate with it lies completely in whether it performs some function YOU need.  It isn't up
By "value" I don't mean a numeric quantity. Rather, this is why I think this piece of code "solves a problem", "fills a need", etc. So I can *begin* to evaluate what it's *quantifiable* value to me may be.
> to the developer to justify its creation, it is up to you to justify your use of it and whether you are willing to pay the cost: in $$$, in time, in frustration, etc.
My personal experience seems to "vote" increasingly for COTS tools instead of FOSS "equivalents".  Each year I am willing to spend less time dealing with tools that *hope* to work but still aren't ready for prime time (e.g., claiming some "feature" only to discover that it doesn't quite work -- yet!).  This despite being an early adopter of many FOSS solutions and technologies.  (E.g., I was running X "apps" on my own machines in the early-mid 80's.)

Note, however, that I am equally unwilling to chase new COTS releases "just to have the latest and greatest"!  Find a tool that does what I *need*, then stick with it.  No need to invest more time and money to "stay in the same place" (effectively).

I'm more interested in getting *my* projects done, now, than helping others FINISH the loose ends on theirs...
Reply by George Neuner July 1, 2014
On Sun, 29 Jun 2014 09:12:42 -0700, Don Y <this@is.not.me.com> wrote:

>Hi George,
>
>[small earthquake ~100 miles east last night.  Pretty lame, I imagine, as earthquakes go.  But, a first for me!  Cool!]
>
>On 6/27/2014 10:10 AM, George Neuner wrote:
>>> What annoys me about most FOSS is that most don't treat their "output" as a genuine (supportable) *product*.
>>
>> This shouldn't surprise you: it is traditional for hackers to release (what they think are) "useful" programs into the wild and then forget about them and move on to something else.  If you're not making money from the project, apart from personal pride there's little incentive to keep supporting it.
>
>Note that I didn't say "keep supporting it" -- I said produce a supportable product!  What they release isn't a product but, rather, a smattering of code that did what *they* wanted it to do (perhaps) -- for the effort they were willing to invest.
You're missing the point ... which is that that smattering of code did what they wanted, so they thought "hey, it might be useful to someone else".  The notion that their little hack is a "product" never occurs.

Return with us now to those days of yesteryear when there was a drawer in the computer room filled with paper tapes, each containing assembler source for a neat but completely undocumented program that did who knows what.  Fast forward 50 years and the "drawer" now is called SourceForge (or GitHub, or CodePlex, or whatever).
>Now, imagine I had left the Scheme versions of these tools in the final build environment (LESS work for me as I wouldn't have to then create the C versions).
You could have compiled it to C. If they can't reconstruct the algorithm from mechanically translated code, they probably aren't skilled enough to be playing with it in the first place. 8-)
>>> Yeah, I know... documentation and testing are "no fun".  But, presumably, you *want* people to use your "product" (even if it is FREE) so wouldn't you want to facilitate that?  I'm pretty sure folks don't want to throw lots of time and effort into something only to see it *not* used!
>>
>> That's a little harsh.  I don't think it's fair in general to expect the same level of professionalism from part time developers as from full time.
>
>But these same folks explain away the lack of testing and documentation in their "professional" efforts by blaming their PHB!  As if to say, "Yeah, I know.  I really wish I could do the formal testing and documentation that I, AS A PROFESSIONAL, know is representative of my PRODUCT... but, my boss just never gives me the time to do so...".
>
>There's no one forcing you to release YOUR CODE, now.  Or, forcing you to *stop* working on a test suite, documentation, etc. AFTER a "beta" release.  Yet, despite the opportunities to finish it up properly (the way you suggest you WISH you could do in your 9-to-5), you, instead, lose interest and hope someone else cleans up the mess you've left.
You're still assuming that these efforts somehow are "professional". A formal statement of commitment to support raises my estimation of a project, but I never assume anything is a professional effort unless I am expected to pay $$$ to acquire it.
>Yes, most "software consumers" just want to know "which button do I >press". They don't want to understand the product. > >But, they would be just as happy with a CLOSED, PROPRIETARY solution >released as a "free" binary! (I.e., they just don't want to "spend >money"!) The whole point of FOSS is to enable others to modify and >improve upon the initial work. One would think you would want to >facilitate this!
Back to the tape drawer analogy.
>> Sans a formal verification effort, it's almost impossible to guarantee that a project of any size is bug free ... the best you can hope for is that any bugs that are there are innocuous if they are triggered.
>
>You don't have to ensure "bug free".  But, you should be able to point to a methodical attempt undertaken by you -- and repeatable as well as augmentable by others -- that demonstrates the conditions that you have verified *in* your code base.  So, I can "suspect" a problem, examine your test cases and see (or NOT see) that you have (or have NOT) tested for that case before I waste additional time chasing a possibly unlikely issue.
Again you are assuming a "product". In most cases, whatever it is ISN'T A PRODUCT in the minds of its developer(s).
>It shouldn't matter whether you are worrying about stockholders or a life saved/lost.  Unless you are implicitly saying "this product isn't important... it doesn't HAVE TO WORK!  It has NO INTRINSIC VALUE -- because I make no guarantees that it does ANYTHING!".
"This generates those interesting licensing agreements in which the vendor warrants nothing, not even that there is media on the disk, while holding the buyer to line after line of restrictions on what he can do with this thing he supposedly bought." -- Jerry Pournelle
>All I am asking the FOSS developer to do is *state* what he claims the value of his "product" to be.  And, show me why he believes that to be the case.
That's somewhat back-assward.  Software has no intrinsic value - any value YOU associate with it lies completely in whether it performs some function YOU need.  It isn't up to the developer to justify its creation, it is up to you to justify your use of it and whether you are willing to pay the cost: in $$$, in time, in frustration, etc.

YMMV,
George
Reply by Don Y June 29, 2014
On 6/28/2014 11:53 PM, upsidedown@downunder.com wrote:
> On Sat, 28 Jun 2014 21:41:35 +0100, Tom Gardner <spamjunk@blueyonder.co.uk> wrote:
>
>> On 28/06/14 20:56, Don Y wrote:
>
>>> Support for multithreading was entirely the user's responsibility (and, often not well facilitated by the compiler).
>>
>> What multithreading support?  There was none in C - it was explicitly stated by K&R to be a library issue.  (And we'll overlook the embarrassing feature that libraries had to rely on language characteristics that C explicitly stated were undefined.)
>
> It is of course a good question, how much multithreading/multitasking should be included in the language definition (such as ADA) and how much should be handled by calls to the underlying OS.
>
> IMHO, the minimum requirement for a multithreading environment is that
> the generated code and libraries are (or at least can be generated as)
------^^^^^^^^^^^^^^
This is the kicker.  You can replace a library relatively easily.  But, anything that the compiler itself relies upon gets to be a touchy subject.  There are no contracts between the user and compiler when it comes to the "guts" of the code and no hooks that you can freely exploit that allow you to interject your operations amongst those of the compiler.

Unless you force memory barriers, *in* your code AND have a way for the compiler to let you know *where* it is wrt that code, you can't go dicking around with the machine's state with impunity.
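[By way of illustration -- a minimal sketch of the sort of compiler "hook" in question; the GCC/Clang-style asm and the names are assumptions, and the point is precisely that K&R-era toolchains guaranteed nothing like it:]

    /* A sketch of the minimal "contract" a threading library wants from
     * the compiler.  The inline asm below is a GCC/Clang idiom; other
     * toolchains need their own (vendor-guaranteed) equivalent.
     */

    /* Compiler barrier: forbid caching globals in registers across this
     * point or reordering memory accesses past it.
     */
    #define COMPILER_BARRIER()  __asm__ __volatile__("" ::: "memory")

    static volatile int shared_flag;   /* written by an ISR or another task */

    int wait_for_flag(void)
    {
        while (!shared_flag) {
            /* Without 'volatile' (or the barrier), the compiler may read
             * shared_flag once, keep it in a register, and spin forever.
             */
            COMPILER_BARRIER();
        }
        return shared_flag;
    }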
> re-entrant code.  Unfortunately the K&R definition makes this quite hard, since some string etc. functions store internal state in static variables.  Then there is the issue with errno.
You can reimplement the standard libraries in a reentrant manner (but, not a *portable* one -- unless you create some global structs that you adopt in all implementations).

errno, as a macro, can more easily be hacked based on the details of your RTOS/MTOS, e.g.:

    extern int task_id;               /* maintained by the scheduler */
    extern int ERRNO[NUM_TASKS];
    #define errno (ERRNO[task_id])

Adopt the same sort of approach for any statics located in library functions.

[I simply reimplement the libraries with more intimate ties to the OS -- in essence, moving these things into the TCB]
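[A minimal sketch of the "move it into the TCB" approach -- the struct layout and names here are hypothetical, just one way an RTOS can hang per-task C runtime state off the scheduler:]

    /* Per-task C runtime state kept in the task control block.
     * Names and layout are illustrative only.
     */
    struct tcb {
        int   errno_val;     /* per-task errno                          */
        int   uses_fp;       /* skip FP context save/restore if clear   */
        void *stack;         /* ... register context, stdin/out, etc.   */
    };

    extern struct tcb *current_task;    /* updated by the scheduler */

    /* The library's errno indirects through whichever task is running. */
    #undef  errno
    #define errno (current_task->errno_val)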
> One Pascal compiler for the 6809 had a similar problem: it stored the so-called "display" registers (base addresses for statically nested functions) in static locations, so there could be just one Pascal task, while the other tasks had to be written in assembly.
>
> For environments with program loaders, it is also nice to be able to use libraries which are position independent code, so that the code could be loaded into any free space.
Reply by Don Y June 29, 2014
Hi Tom,

On 6/28/2014 1:41 PM, Tom Gardner wrote:
> On 28/06/14 20:56, Don Y wrote:
>> On 6/26/2014 2:30 AM, Tom Gardner wrote:
>>
>>>> *All* of these processors were total *dogs* when it came to HLL's!  :<
>>>
>>> I'll disagree, for embedded systems at least.  The code emitted by ?WhiteSmith's? C compiler for the Z80 was perfectly respectable.  The only graunch I remember was i/o to a computed address having to be done by constructing the code on the stack, then executing it.
>>
>> The problem they all had was they were too small for HLL's.  Very few applications (even embedded) deal *exclusively* with 8b data.  So, lots of "helper" functions get drawn into the executable -- to add/subtract/multiply/divide (even for integer data).  A *correct*/efficient printf(3c) could consume most of the address space of these little processors!
>
> Sure.  "Doctor, Doctor, my head hurts when I bang it against a wall".  "Well, don't bang your head against walls, then".
If that's the only option you have, you're pretty much stuck with it!
> We recoded putc() so that it did the minimal necessary on our system, and didn't use printf() (or did we write our own without FP support?; I can't remember)
>
> Anyway, who needs putc() on an embedded system if you have a decent ICE.
Um, you may need one if you do any sort of character based I/O!
> Apart from that, you just find ways of using the available technology to further your ends.
>
>> Support for multithreading was entirely the user's responsibility (and, often not well facilitated by the compiler).
>
> What multithreading support?  There was none in C - it was explicitly stated by K&R to be a library issue.  (And
And the libraries come from the compiler vendor! Note that there are many things that the compiler can do that will *prevent* you from implementing multithreading "in a library". This is especially true of smaller processors that inevitably end up with lots of little "helper routines" that are intimately tied to the compiler implementation. E.g., __floating_add(), __long_signed_mul(), and, even things as trivial as __long_add()! Unless you have hooks that the compiler vendor provides and guarantees for each of these things, there is nothing you can do in your "runtime support" (different from "libraries") to provide this.
> we'll overlook the embarrassing feature that libraries had to rely on language characteristics that C explicitly stated were undefined.)
>
> At least Z80s were so simple they didn't need a memory model!
Many Z80 systems employed banked memory.  The Z180 family of devices had a funky *sort* of bank-switching.  There were even TTL offerings (small RAMs) that were targeted to that sort of application.

If you (not the compiler -- though I know of compilers that could do this as well) are going to support these unnatural (yet typical) sorts of environments, then you need to know what implementation details the compiler is relying upon so you don't, for example, swap the "floating point accumulator" out of resident memory just before the compiler makes a reference to it!
> So, we rolled our own cooperative multitasking RTOS using a few lines of assembler, which enabled us to program very naturally using something akin to co-routines.
>
> Same code (except the ASM, natch) was used on many processors from PDP-11s to 8080s.
I built a UNIX-like execution environment.  Each "task" had its own stdin/stdout/stderr; register context; C context (e.g., things like errno, reentrant strcspn(), etc.); floating point context (along with a per-task switch that would allow the RTOS to avoid saving/restoring the floating point context if that task had no use for it); etc.

E.g., one application provided a curses-based user interface over a serial port.  To implement this interface on *two* serial ports, I simply spawned two tasks using the exact same "task code":

    if0 = spawn(userIO, UART0, UART0, ERRORLOG);
    if1 = spawn(userIO, UART1, UART1, ERRORLOG);

(ERRORLOG was another "device" that just pushed whatever text was written to it out the ICE to be displayed on the developer's console.  In production code, the absence of the ICE hook caused the text to be spooled to a small RAM buffer for post-mortems.)

As each task had its own stack, autovariables, etc. they could operate concurrently "for free".  Mutex support in the RTOS allows device I/O to seamlessly be intertwined without requiring the developer to explicitly lock and release a device.  So, ERRORLOG could contain entries like:

    taskD: ring detected
    taskD: answering
    taskA: waiting for user input
    taskD: building initial display image
    taskA: activating 'File' menu
    taskA: selecting 'SAVE AS' from 'File' menu
    taskD: waiting for user input

instead of:

    tasktaskAD: rianswering detectnged
>> It was usually pretty easy to look at an executable and *know* that it was "compiled" -- the code was too rigid.  I suspect this is the reason for the "I can code better than the compiler" opinion that was so prevalent -- because you often *could* (without breaking into a sweat) especially if you were familiar with the capabilities of the processor itself (e.g., do BCD math to eliminate having to do a binary-to-decimal conversion, etc.)
>
> We stuck to plain old fixed point integers.  I knew what floating point performance was like, having created a floating point library (inc sine/cosine) on a 6800 five years earlier in the mid 70s.
I've always used reduced precision floating point implementations.  "Templatize" the code and you can shrink to 24b floats, etc.  But, you need the compiler vendor to cooperate with you.

You can implement printf(3c) in a modular form that allows only those portions of the function that are going to be *used* to be included in the link.  This isn't the sort of thing you would worry about with bigger environments but can save many KB in a small, e.g., 8-bitter.
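[A toy sketch of that "modular" idea -- the file split and the names are mine, not any particular libc, though small-footprint libcs do something similar: keep the floating point conversion in a separate object file so integer-only applications never link it.]

    /* printf_core.c -- only the integer conversion lives here.  Floating
     * point support is supplied by a separate object that the linker
     * pulls in only if something references it.  Names are illustrative.
     */
    #include <stdarg.h>
    #include <stdio.h>          /* putchar() */

    /* Filled in by printf_float.o; stays NULL in integer-only builds. */
    void (*printf_float_hook)(double value) = 0;

    static void put_int(int v)
    {
        char buf[12];
        int i = 0;
        unsigned u = (unsigned)v;

        if (v < 0) { putchar('-'); u = 0u - u; }
        do { buf[i++] = (char)('0' + u % 10); u /= 10; } while (u);
        while (i--) putchar(buf[i]);
    }

    void tiny_printf(const char *fmt, ...)
    {
        va_list ap;
        va_start(ap, fmt);
        for (; *fmt; fmt++) {
            if (*fmt != '%') { putchar(*fmt); continue; }
            if (*++fmt == '\0') break;
            switch (*fmt) {
            case 'd': put_int(va_arg(ap, int)); break;
            case 'f': {
                double d = va_arg(ap, double);   /* consume even if unused */
                if (printf_float_hook) printf_float_hook(d);
                break;
            }
            default:  putchar(*fmt); break;      /* '%%' falls out here */
            }
        }
        va_end(ap);
    }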
>> Larger/more modern processors have more features geared directly towards these HLL constructs.  So, it is much more common to encounter "clever code" from a *compiler* instead of an "ASM coder".
Reply by Don Y June 29, 2014
Hi George,

[small earthquake ~100 miles east last night.  Pretty lame, I
imagine, as earthquakes go.  But, a first for me!  Cool!]

On 6/27/2014 10:10 AM, George Neuner wrote:
>> What annoys me about most FOSS is that most don't treat their "output" as a genuine (supportable) *product*.
>
> This shouldn't surprise you: it is traditional for hackers to release (what they think are) "useful" programs into the wild and then forget about them and move on to something else.  If you're not making money from the project, apart from personal pride there's little incentive to keep supporting it.
Note that I didn't say "keep supporting it" -- I said produce a supportable product!  What they release isn't a product but, rather, a smattering of code that did what *they* wanted it to do (perhaps) -- for the effort they were willing to invest.  It's *not* something they are "proud" of but, rather, something they are opting to "publish" instead of just "discard" -- in the hope that someone else will wade through their *code* (not documentation!) and try to figure out what they were trying to do (probably "on a shoestring" instead of "Right") and, *possibly*, be INSPIRED and WILLING to take it a step further (possibly *back* to the direction it should have taken in the first place!).

For one of the TTS algorithms I'm currently implementing, I include:

- a "document" presenting the subset of the IPA appropriate for (nominal) English.  In addition to introducing the glyphs used to represent each sound and "sample words" containing those sounds, it contains actual sound recordings *of* those sounds *in* those word samples -- because different regional accents lead folks to "hearing" (what's the aural version of "visualizing"?) those PRINTED examples differently.  It also explains terms like front/back vowel, palatalization, etc.  The point behind the document is to bring developers who may not be familiar with this application domain "up to speed" so further discussions don't have to be trivialized.

- the original "research" defining the algorithm.  This lets those developers double-check *my* interpretation of the research by returning to the unaltered source.

- my *explanation* of that research along with the errors I've uncovered in that presentation (and implementation).  Again, having the source available allows others to correct any mistakes *I* may have made and/or reinterpret the original material -- possibly in a different (better?) light.

- a description of my *implementation* of the algorithm as this is significantly different than the original source.  Much of these differences are attributable to the "non research" nature of my implementation (e.g., I don't have a big disk to store rules; a "mainframe" to execute them in some sluggish language; etc.).  I also discuss the enhancements I've made to the algorithm, any micro-optimizations (e.g., rule reorderings), and how I've adapted the rules to the sorts of input *I* expect to encounter.  E.g., I am less concerned that "aardvark" is pronounced correctly than I am about "gigabytes".  This allows developers wanting to cherry-pick just this component of my system *out* for reuse in some other system -- with different vocabulary requirements!

- the test suite that I used to evaluate the quality of the algorithm along with "rationalizations" as to why each "failure" is accepted instead of "fixed" (some fixes would require adding rules that would break *other* things -- English is full of exceptions!).  It also provides statistics that tell me how often each rule is encountered when processing the sample input (allows me to adjust the algorithm's efficiency).  And, lets me "watch" to see how a particular "word" is processed (to help formulate better rules).

- a tool that converts from the human readable form of the rules to the machine readable encoding.  This allows a developer to deal with the normalized representation (i.e., something that a linguist could understand and assist with) while freeing the implementation from its particular form.

- a tool to build the tables that the algorithm uses from that input.  This reduces the probability of transcription errors between the (hundreds of) cryptic rules and their representation for the implementation (efficiency instead of readability).

- a tool to port changes in that input back into the documentation (so the documentation can be kept up to date without transcription errors creeping in, there!)

- a tool to evaluate *your* "sample text" for its accuracy, tabulate relative frequencies of occurrence, etc.

Those tools are written in the same language as the application that "consumes" that data object (table).  So, a future developer wanting to change the run-time implementation of the algorithm need not learn some other language to "alter" the table generated from the chart!

As the algorithm inherently processes a bunch of lists (lists of affixes, lists of letter sequences, lists of phonemes, etc.), I wrote the preliminary "proof-of-concept" converters in Scheme and, later, Limbo.  This let me explore encoding options for the resulting tables more easily as compile-time performance wasn't important!  Once the form of the table was settled, I rewrote this in C to mimic the code that was *consuming* that table.

Now, imagine I had left the Scheme versions of these tools in the final build environment (LESS work for me as I wouldn't have to then create the C versions).  Now, the future developer has to have Scheme available; the version that he has available must be compatible with any specific features/extensions/etc. upon which I relied; he needs to *know* Scheme in order to be able to understand what that algorithm is doing to sufficient degree to be able to *change* it AND he has to be familiar with whatever development/debug/test environment I've opted to employ in my maintenance of *that* tool!

I would end up raising the bar too high -- and, effectively forcing developers to stick with *my* encoding scheme because it's the path of least resistance (vs. having to drag in that other dependency).  This, then, discourages them from altering the run-time algorithms that consume that data, etc.  I.e., I've made it too tedious for them to alter/improve upon the code.  I've *tied* them to my implementation.

[When I was working on the Klatt synthesizer, this sort of thing was very evident.  The eyes that had poked at it previously were much too tentative about attacking gross inefficiencies in the code and/or restructuring it for fear of breaking something that they didn't understand -- or didn't WANT to understand -- well.]
>> Yeah, I know... documentation and testing are "no fun".  But, presumably, you *want* people to use your "product" (even if it is FREE) so wouldn't you want to facilitate that?  I'm pretty sure folks don't want to throw lots of time and effort into something only to see it *not* used!
>
> That's a little harsh.  I don't think it's fair in general to expect the same level of professionalism from part time developers as from full time.
But these same folks explain away the lack of testing and documentation in their "professional" efforts by blaming their PHB!  As if to say, "Yeah, I know.  I really wish I could do the formal testing and documentation that I, AS A PROFESSIONAL, know is representative of my PRODUCT... but, my boss just never gives me the time to do so...".

There's no one forcing you to release YOUR CODE, now.  Or, forcing you to *stop* working on a test suite, documentation, etc. AFTER a "beta" release.  Yet, despite the opportunities to finish it up properly (the way you suggest you WISH you could do in your 9-to-5), you, instead, lose interest and hope someone else cleans up the mess you've left.

[This isn't true of all FOSS projects but is true of probably *most*!]
> Lots of people who can find and download a program online aren't sophisticated enough to know whether a problem with that program is a bug or if they screwed up somehow in trying to use it.  If you browse support forums (for projects that *are* supported), it quickly becomes apparent that many reported problems come from trying to use the software in ways for which it wasn't designed or in environments under which it hasn't been tested.
*Which* (one?) environments has it been "tested" in?  What were the "tests"?  How can I even begin to come up with a set of tests for *my* environment if I can't see what *you* did in yours?  What is the product *supposed* to do (not in flowery language but in measurable specifics)?  How can I know if it's working if I don't know what to test against?

Yes, most "software consumers" just want to know "which button do I press".  They don't want to understand the product.

But, they would be just as happy with a CLOSED, PROPRIETARY solution released as a "free" binary!  (I.e., they just don't want to "spend money"!)  The whole point of FOSS is to enable others to modify and improve upon the initial work.  One would think you would want to facilitate this!
> Sans a formal verification effort, it's almost impossible to guarantee that a project of any size is bug free ... the best you can hope for is that any bugs that are there are innocuous if they are triggered.
You don't have to ensure "bug free".  But, you should be able to point to a methodical attempt undertaken by you -- and repeatable as well as augmentable by others -- that demonstrates the conditions that you have verified *in* your code base.  So, I can "suspect" a problem, examine your test cases and see (or NOT see) that you have (or have NOT) tested for that case before I waste additional time chasing a possibly unlikely issue.  "Why is 'has' mispronounced?  Oh, I see..."

I make no claim that, for example, the TTS algorithm above is "bug free".  Nor "optimal".  But, I show what my thinking was and the conditions under which I evaluated the algorithm.  And, provided that framework for the next guy who may want to evaluate it under a different set of conditions.

[E.g., I am not concerned with "proper nouns" to the extent that someone wanting to reappropriate the code to read names and addresses out of a telephone directory would be!  So, the rules for that sort of application would be different, have different relative priorities, etc.  But, you'd still need a test framework (populated with a different set of words/names) and a way to evaluate the NEW algorithm's performance on that input set]
> No hobby developer - and damn few professionals - realistically can maintain test platforms for every possible configuration under which someone might try to run their software.  Moreover, the majority of projects get no feedback whatsoever, so if a problem does slip through whatever testing is being done, only rarely does the developer find out about it.
How does *a* developer (need not be the original developer) know what *has* been tested and what hasn't? Does he keep this on a scrap of paper in his desk drawer? Or, does he just make up test cases WHILE CODING THE ALGORITHM to increase his confidence in his solution? (which has no direct bearing on how 'correct' the code is... just how correct he *thinks* it is at that time!)
> I have developed FDA certified systems for diagnostic radiology and for pharmaceutical production.  I have the experience of worrying about people being hurt if I screw up, large sums of money being lost, and of potentially being sued or even criminally prosecuted as a result.  Percentage-wise, very few developers have experience of litigious and safety critical backgrounds to guide their efforts.
It shouldn't matter whether you are worrying about stockholders or a life saved/lost.  Unless you are implicitly saying "this product isn't important... it doesn't HAVE TO WORK!  It has NO INTRINSIC VALUE -- because I make no guarantees that it does ANYTHING!".

All I am asking the FOSS developer to do is *state* what he claims the value of his "product" to be.  And, show me why he believes that to be the case.  If all it is guaranteed to do is burn CPU cycles, then I won't bother wasting my time on it; there are lots of permutations of opcodes that will yield those results!  :>

I need something that allows me to decide where to invest *my* time -- both as a consumer and contributor.
> There's a push to certify software developers in the same way that some engineers routinely are certified.  It hasn't achieved much traction because so few software developers have the educational background to pass the proposed tests.
Ain't going to happen.  Too many "programmers".  And, business only pays lip service to wanting those things -- they'd rather just *hope* the diploma mills crank out a new crop that chases wherever their (business) current efforts are headed.
>> Granted, the "development" issue that I initially discussed is a >> tough one -- how can I *expect* all FOSS developers to "settle" >> on a common/shared set of tools/approach? While this is common >> *within* an organization, it would be ridiculous to expect Company >> A to use the same tools and process that Company B uses! And, >> griping about it would be presumptuous. >> >> OTOH, it's fair to gripe when Company (Organization) X does things >> in a way that is needlessly more complicated or dependent (on a >> larger base of tools) than it needs to be. > > Yes and no. Working to the lowest common denominator often means > exponentially more work. I agree that using oddball tools that few > people have heard of is bad, but I disagree that using a relatively > well known tool that just doesn't happen to be in the default OS > installation is a problem.
I didn't imply it had to be *in* the default OS. Rather, that a *set* of tools be used without adding more tools "willy nilly" ("I really like coding in Ruby so I'll do this little task in Ruby instead of ________"). It's not just having a binary available but, also, the skillset that then becomes a requirement for the product's maintenance.
>> When I started my current set of (FOSS) projects, I was almost in a state of panic over the "requirements" it would impose on others.  Too many tools, too much specialized equipment, skillsets, etc.
>>
>> After fretting about this for quite some time -- constantly trying to eliminate another "dependency" -- I finally realized the "minimum" is "a lot" and that's just the way it is!
>
> Yes.  You and I have had a few conversations about this too.
And things will only get *worse* as software becomes increasingly more complex! E.g., it is almost impossible for <someone> to "trace" the execution of a "program" in my environment with the same expectations they have in a more traditional target. There are just too many virtual layers involved, different physical processors, etc. Should I build a tool to facilitate these efforts for others in the future? Will that effectively NEEDLESSLY *tie* them into my particular implementation? Or, can I, instead, strive for a better functional partitioning to reduce the need to comprehensively trace a "macro action" as it wanders through the system? And, illustrate how you can effectively debug just by watching subsystem interfaces?
>> ... And, *I* will endeavor to pick good tools that adequately address their respective needs so you aren't forced to use two *different* tools (e.g., languages) for the same class of "task".
>
> Above and beyond what most developers have come to expect.  8-)
<frown>  There are still *huge* areas where I am unhappy with the level of documentation, etc. that I am including.  I just think most of our tools (and practices) aren't mature enough to address the various competing issues involved.

It would be like an architect having to use a blueprint to indicate the placement of the support structures in an edifice -- and, ENTIRELY DIFFERENT DOCUMENTS (format) to indicate the placement of the various "services" (electric, plumbing, data, etc.) within that structure (instead of indicating them on the same set of prints).

Mixing text, illustration, animation, interactive demos, sound, video, etc. in *with* the "code" is a challenging environment.  Just this documentation aspect alone is what led me to ignore the requirements I've imposed on "those who follow" -- it just takes too many different tools to do all of these things and trying to tie *my* hands just to cut down on what the next guy needs to "invest" (esp if the next guy isn't modifying any of those aspects of the documentation) was silly.

Sunday lunch.  Finestkind.  Then, make some ice cream for dessert!  (that's the one consolation of dozens of hundred degree days -- you can make ice cream OFTEN without raising eyebrows!  :> )

--don
Reply by Tauno Voipio June 29, 2014
On 27.6.14 18:54, Anssi Saari wrote:
> Grant Edwards <invalid@invalid.invalid> writes:
>
>> And without the comparative relief from still-bleeding wounds caused by the 8048, I don't really understand why people find the 8051 acceptable unless they're locked in to it because of some weird peripheral.  Compared to something like an MSP430, AVR, or small Cortex M-something, an 8051 is torture.
>
> My understanding, from working for a distributor about a decade ago, is that a lot of companies had debugged, working applications done in 8051 assembler so there was very little incentive to switch to anything else.  Maybe it's still the same at least for long lived applications.
When there were no AVR's yet, we had the 8051 in use (better than the 8048).  We changed to AVR's and assembler to C, and never needed to look back, unlike Lot's wife.

Now, the next step is going on, to move from AVR to Cortex-M.  The experience is painless (so far).

--
Tauno Voipio
Reply by Tom Gardner June 29, 2014
On 29/06/14 08:14, Paul Rubin wrote:
> upsidedown@downunder.com writes:
>> It is of course a good question, how much multithreading/multitasking should be included in the language definition (such as ADA) and how much should be handled by calls to the underlying OS.
>
> There's an argument that I don't completely understand, which says that doing it reliably purely in libraries, without language support, is impossible.  So C11 and C++11 have some multithreading support.  See:
>
> http://hboehm.info/misc_slides/pldi05_threads.pdf
Useful slides, thanks.

Are there any C11/C++11 compilers yet?  I mean /complete/ ones.  IIRC it took about *6 years* for the first complete C++98 compiler to appear.  I remember it being (somewhat embarrassingly) trumpeted, but since I had abandoned C++ I wasn't paying much attention.

Why did I abandon it?  The early 90s were full of endless should/shouldn't it be possible to "cast away constness" discussions.  It seemed obvious that if the relevant people couldn't come to a consensus then there was something seriously broken with their objectives.
Reply by Tom Gardner June 29, 2014
On 29/06/14 07:53, upsidedown@downunder.com wrote:
> On Sat, 28 Jun 2014 21:41:35 +0100, Tom Gardner <spamjunk@blueyonder.co.uk> wrote:
>
>> On 28/06/14 20:56, Don Y wrote:
>
>>> Support for multithreading was entirely the user's responsibility (and, often not well facilitated by the compiler).
>>
>> What multithreading support?  There was none in C - it was explicitly stated by K&R to be a library issue.  (And we'll overlook the embarrassing feature that libraries had to rely on language characteristics that C explicitly stated were undefined.)
>
> It is of course a good question, how much multithreading/multitasking should be included in the language definition (such as ADA) and how much should be handled by calls to the underlying OS.
>
> IMHO, the minimum requirement for a multithreading environment is that the generated code and libraries are (or at least can be generated as) re-entrant code.
That's necessary but far from sufficient.

Start by considering accessing a single variable from more than one thread - bad enough when it is in memory, but worse if it is in a register!

Getting memory models right is extraordinarily difficult.  Even Java, where it was included in the language/runtime from the very beginning, found subtle bugs that required a new memory model.  I'm sceptical that it will ever be successfully added to C/C++, unless backward compatibility is compromised.
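[A minimal C11 sketch of the register problem described above -- names and structure are illustrative only, not from the posts: with a plain bool the compiler could legally keep 'done' in a register and spin forever, while the C11 atomics pin down both the re-read and the ordering of 'result'.]

    /* Compile with: cc -std=c11 -pthread example.c */
    #include <stdatomic.h>
    #include <threads.h>
    #include <stdio.h>

    static atomic_bool done = ATOMIC_VAR_INIT(false);
    static int result;                  /* published before 'done' is set */

    static int worker(void *arg)
    {
        (void)arg;
        result = 42;                                   /* ordinary store */
        atomic_store_explicit(&done, true,
                              memory_order_release);   /* publish        */
        return 0;
    }

    int main(void)
    {
        thrd_t t;
        thrd_create(&t, worker, NULL);

        /* Had 'done' been a plain bool, the compiler could cache it in a
         * register and never re-read memory.  The acquire load also
         * guarantees 'result' is visible once the loop exits.
         */
        while (!atomic_load_explicit(&done, memory_order_acquire))
            ;

        printf("%d\n", result);
        thrd_join(t, NULL);
        return 0;
    }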
Reply by Paul Rubin June 29, 2014
upsidedown@downunder.com writes:
> It is of course a good question, how much multithreading/multitasking should be included in the language definition (such as ADA) and how much should be handled by calls to the underlying OS.
There's an argument that I don't completely understand, which says that doing it reliably purely in libraries, without language support, is impossible.  So C11 and C++11 have some multithreading support.  See:

http://hboehm.info/misc_slides/pldi05_threads.pdf
Reply by June 29, 2014
On Sat, 28 Jun 2014 21:41:35 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

>On 28/06/14 20:56, Don Y wrote:
>> Support for multithreading was entirely the user's responsibility (and, often not well facilitated by the compiler).
>
>What multithreading support?  There was none in C - it was explicitly stated by K&R to be a library issue.  (And we'll overlook the embarrassing feature that libraries had to rely on language characteristics that C explicitly stated were undefined.)
It is of course a good question, how much multithreading/multitasking should be included in the language definition (such as ADA) and how much should be handled by calls to the underlying OS.

IMHO, the minimum requirement for a multithreading environment is that the generated code and libraries are (or at least can be generated as) re-entrant code.  Unfortunately the K&R definition makes this quite hard, since some string etc. functions store internal state in static variables.  Then there is the issue with errno.

One Pascal compiler for the 6809 had a similar problem: it stored the so-called "display" registers (base addresses for statically nested functions) in static locations, so there could be just one Pascal task, while the other tasks had to be written in assembly.

For environments with program loaders, it is also nice to be able to use libraries which are position independent code, so that the code could be loaded into any free space.
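[A small illustration of the "internal state in static variables" point above -- a sketch only; strtok() is the classic offender, and strtok_r() (POSIX, not K&R C) is the caller-supplied-state fix.]

    /* Why K&R-style library internals break multithreading: strtok()
     * keeps its scan position in a hidden static, so two tasks that
     * tokenize at the same time corrupt each other's state.
     */
    #define _POSIX_C_SOURCE 200809L   /* for strtok_r() */
    #include <stdio.h>
    #include <string.h>

    void broken_under_multitasking(void)
    {
        char a[] = "one two three";
        /* Internally, something like:  static char *save;  ...  */
        char *tok = strtok(a, " ");
        while (tok) {
            /* If another task calls strtok() here, 'save' is clobbered. */
            printf("%s\n", tok);
            tok = strtok(NULL, " ");
        }
    }

    void reentrant_version(void)
    {
        char a[] = "one two three";
        char *save;                            /* caller-owned state */
        char *tok = strtok_r(a, " ", &save);
        while (tok) {
            printf("%s\n", tok);
            tok = strtok_r(NULL, " ", &save);
        }
    }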