Hi Les,

On 12/20/2014 1:25 PM, Les Cargill wrote:
> Don Y wrote:
>> The thing that technology is lousy at is "enhancing wetware" --
>> programmers don't inherently get "twice as productive" each year or
>> two. They can't write twice as much debugged code or comprehend twice
>> as many lines per unit time.
>
> But programmers should at least target not being the bottleneck any
> more. It's a nicer way to do business and makes your life much more
> pleasant.

But, how do they do that? You can't (indefinitely) improve a programmer's
"inherent quality". And, any improvements there are slow to realize.

So, you have to rely on the tools getting better, more expressive, etc.
Let a tool burn development cycles to make the developer's effort be more
productive (e.g., lint *in* the IDE while you're writing the code instead
of as a "post process").

[The trick, here, is not to turn the developer into a mindless idiot that
expects the machine to do ALL his/her thinking!]

E.g., my bias in recent years is to make code more easily understood and
more *robust* -- instead of burning clock cycles on (silly?) features,
burn them ensuring the code ALWAYS works as advertised, etc. Yet, do so
within the other constraints (cost, power, space, etc.) imposed on the
design.
Languages, is popularity dominating engineering?
Started by ●December 12, 2014
Reply by ●December 20, 2014
Reply by ●December 20, 2014
Don Y wrote:
> Hi Les,
>
> On 12/20/2014 1:25 PM, Les Cargill wrote:
>> Don Y wrote:
>>> The thing that technology is lousy at is "enhancing wetware" --
>>> programmers don't inherently get "twice as productive" each year or
>>> two. They can't write twice as much debugged code or comprehend twice
>>> as many lines per unit time.
>>
>> But programmers should at least target not being the bottleneck any
>> more. It's a nicer way to do business and makes your life much more
>> pleasant.
>
> But, how do they do that? You can't (indefinitely) improve a
> programmer's "inherent quality". And, any improvements there are slow
> to realize.

I said "at least target"; I don't even see too many that seem to care.
Learn to speak the (spoken) language; learn how to negotiate what you'll
deliver; learn how to do really thorough design/development/testing.
Overdeliver/underpromise.

I dunno; learn your craft. I've been at it for close to 30 years. I was
barely even competent at ten years from a standpoint of being able to
deliver stuff that pretty much works the first time - maybe bounce a bug
or two before final release. I'd released stuff that worked from the
git-go; what got cleaner was my ability to promise things and hit targets.

Somewhere about 20 years ago "we" decided that this is a children's game
and structured things accordingly. No; it's an *adult* game and you have
to play it to win.

> So, you have to rely on the tools getting better, more expressive, etc.

No. That doesn't work. The information asymmetries are huge. I suppose
there's a way to school people to be better earlier, but I doubt it. I
think it'll just take the ten years to get to the journeyman phase.

> Let a tool burn development cycles to make the developer's effort be
> more productive (e.g., lint *in* the IDE while you're writing the code
> instead of as a "post process").

Use of an IDE is a marque of someone who doesn't understand the real
risks. That doesn't always hold.

> [The trick, here, is not to turn the developer into a mindless idiot
> that expects the machine to do ALL his/her thinking!]

Feel free to do that; it does not turn out well.

> E.g., my bias in recent years is to make code more easily understood
> and more *robust* -- instead of burning clock cycles on (silly?)
> features, burn them ensuring the code ALWAYS works as advertised, etc.
> Yet, do so within the other constraints (cost, power, space, etc.)
> imposed on the design.

Features are manageable; just withhold them until they work, in subsequent
releases. You, of course, have to know what's really required and what
isn't, but that's a big part of the game.

--
Les Cargill
Reply by ●December 20, 2014
Hi Les,

On 12/20/2014 4:21 PM, Les Cargill wrote:
> Don Y wrote:
>> On 12/20/2014 1:25 PM, Les Cargill wrote:
>>> Don Y wrote:
>>>> The thing that technology is lousy at is "enhancing wetware" --
>>>> programmers don't inherently get "twice as productive" each year or
>>>> two. They can't write twice as much debugged code or comprehend
>>>> twice as many lines per unit time.
>>>
>>> But programmers should at least target not being the bottleneck any
>>> more. It's a nicer way to do business and makes your life much more
>>> pleasant.
>>
>> But, how do they do that? You can't (indefinitely) improve a
>> programmer's "inherent quality". And, any improvements there are slow
>> to realize.
>
> I said "at least target"; I don't even see too many that seem to care.

But that is true of all professions! It's not just "ditch diggers" who
treat "work" as "just a job". Ever have an argument with a nurse who
*claims* she didn't dispense a particular medication -- when *you* watched
her administer it? ("Well, it's not on the chart...") Or a plumber who
doesn't *appear* to know how to sweat a joint? Or a painter who didn't
properly prepare the surface before slopping on paint?

We tend to forget that, to most people, "work" is "just a job". What
incentive do they have to perform better? Will they be rewarded?
Conversely, penalized if they perform *worse*??

> Learn to speak the (spoken) language; learn how to negotiate what
> you'll deliver; learn how to do really thorough
> design/development/testing. Overdeliver/underpromise.
>
> I dunno; learn your craft. I've been at it for close to 30 years. I was
> barely even competent at ten years from a standpoint of being able to
> deliver stuff that pretty much works the first time - maybe bounce a
> bug or two before final release. I'd released stuff that worked from
> the git-go; what got cleaner was my ability to promise things and hit
> targets.
>
> Somewhere about 20 years ago "we" decided that this is a children's
> game and structured things accordingly. No; it's an *adult* game and
> you have to play it to win.

"The Market" doesn't want to have to rely/depend on practitioners. I've
been hired for the *stated* purpose of an employer/client wanting to "not
be reliant" on a particular individual currently in their employ. I, in
turn, never wanted to be "strapped" with supporting/developing the same
thing over and over and over (when you're seen as "good" at something, you
tend to get STUCK doing it).

Look at how society has striven to "dumb down" most labor efforts. Not to
reduce errors or free employees from "tedium" but, rather, to allow less
skilled (expensive) employees to fill those roles. [When did the ability
to "make change" slip out of the basic skillset of ALL consumers??]

>> So, you have to rely on the tools getting better, more expressive,
>> etc.
>
> No. That doesn't work. The information asymmetries are huge. I suppose
> there's a way to school people to be better earlier but I doubt it. I
> think it'll just take the ten years to get to the journeyman phase.

But it *has* worked! Look at how many folks now make a living "writing
code". Years ago, they would have been hard-pressed to get their
(Hollerith) cards in the right order to ensure the job didn't ABEND before
it got started! Now, "secretaries" write macros in spreadsheets, countless
script-kiddies build web pages, etc. All because the tools have taken on
more of the "work".

*My* productivity is vastly improved when I can code in a multitasking
environment -- esp if the tools let me "attach" to multiple threads and
watch them interacting. This would have been unheard of with the targets
and development systems available when I started my career!

>> Let a tool burn development cycles to make the developer's effort be
>> more productive (e.g., lint *in* the IDE while you're writing the code
>> instead of as a "post process").
>
> Use of an IDE is a marque of someone who doesn't understand the real
> risks. That doesn't always hold.

You're advocating that we do away with IDEs? Simulators? Lint? etc. A
tool isn't inherently "bad"; it all boils down to how well you *use* it
and what you expect *from* it.

I *love* being able to run my code on a desktop simulator instead of being
dependent on a piece of target hardware. There's so much more I can do in
pulling data from the "virtual target" to verify proper operation,
visualize the data or the performance metrics of the code, etc.

>> [The trick, here, is not to turn the developer into a mindless idiot
>> that expects the machine to do ALL his/her thinking!]
>
> Feel free to do that; it does not turn out well.

It depends on the individual. When I hear people complaining that their
machines are too slow, my first thought is "What are they doing that is
causing those to be the apparent bottleneck?" Often, they aren't THINKING
but, instead, just "trying things" and hoping one of them works. Then,
when it works, forgetting all about the problem (i.e., considering it
"solved") and moving on -- to "throw darts" at the *next* problem they
stumble upon.

[Look at places like McDonald's; their cash registers just have *pictures*
on them (or, at least, they *did*, at one point). Push the "hamburger
button" twice for two hamburgers, etc. How the hell can they *ever* get an
order WRONG? Yet they *do*! :-/ ]

>> E.g., my bias in recent years is to make code more easily understood
>> and more *robust* -- instead of burning clock cycles on (silly?)
>> features, burn them ensuring the code ALWAYS works as advertised, etc.
>> Yet, do so within the other constraints (cost, power, space, etc.)
>> imposed on the design.
>
> Features are manageable; just withhold them until they work in
> subsequent releases. You, of course, have to know what's really
> required and what isn't, but that's a big part of the game.

The developer doesn't always have control over what happens, when.
Manglement can declare that a new feature is required -- even though the
OLD features haven't been "perfected", yet. And, IME, developers tend to
want to play with implementing new features instead of
testing/documenting/perfecting old ones. There's little "novelty" in
testing or documentation! And, by the time something is (sort of)
working, the developer is looking for any excuse to "move on" to
something else...
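[Editorial aside: the "desktop simulator" idea above boils down to isolating
hardware access behind a thin interface so the application logic runs
unchanged on the host. A minimal sketch in Python for illustration -- the
names `GPIO`, `SimGPIO`, and `blink` are invented here, not taken from the
thread:]

```python
class GPIO:
    """Minimal hardware interface. On a real target this would poke
    registers; the simulated version below just records what the
    application logic asked for."""
    def set(self, pin, value):
        raise NotImplementedError

class SimGPIO(GPIO):
    """Host-side stand-in: logs every pin write for later inspection."""
    def __init__(self):
        self.log = []
    def set(self, pin, value):
        self.log.append((pin, value))

def blink(gpio, pin, times):
    # Pure application logic: it runs identically on target or host,
    # because it only ever talks to the GPIO interface.
    for _ in range(times):
        gpio.set(pin, 1)
        gpio.set(pin, 0)

sim = SimGPIO()
blink(sim, pin=13, times=2)
print(sim.log)   # [(13, 1), (13, 0), (13, 1), (13, 0)]
```

On the target, the same `blink` would be handed an implementation that
writes real registers; the logic under test never knows the difference,
which is what makes "pulling data from the virtual target" cheap.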
Reply by ●December 20, 2014
Don Y wrote:
> Hi Les,
>
> On 12/20/2014 4:21 PM, Les Cargill wrote:
>> Don Y wrote:
>>> On 12/20/2014 1:25 PM, Les Cargill wrote:
>>>> Don Y wrote:
>>>>> The thing that technology is lousy at is "enhancing wetware" --
>>>>> programmers don't inherently get "twice as productive" each year or
>>>>> two. They can't write twice as much debugged code or comprehend
>>>>> twice as many lines per unit time.
>>>>
>>>> But programmers should at least target not being the bottleneck any
>>>> more. It's a nicer way to do business and makes your life much more
>>>> pleasant.
>>>
>>> But, how do they do that? You can't (indefinitely) improve a
>>> programmer's "inherent quality". And, any improvements there are slow
>>> to realize.
>>
>> I said "at least target"; I don't even see too many that seem to care.
>
> But that is true of all professions! It's not just "ditch diggers" who
> treat "work" as "just a job". Ever have an argument with a nurse who
> *claims* she didn't dispense a particular medication -- when *you*
> watched her administer it? ("Well, it's not on the chart...")
> Or a plumber that doesn't *appear* to know how to sweat a joint?
> Or a painter who didn't properly prepare the surface before slopping
> on paint?
>
> We tend to forget that, to most people, "work" is "just a job".
> What incentive do they have to perform better? Will they be rewarded?
> Conversely, penalized if they perform *worse*??

At the end of the day, it's just a job to me, too. But it'd be no fun at
all if I wasn't engaged with it at this level. Incentives don't work,
ultimately.

>> Learn to speak the (spoken) language; learn how to negotiate what
>> you'll deliver; learn how to do really thorough
>> design/development/testing. Overdeliver/underpromise.
>>
>> I dunno; learn your craft. I've been at it for close to 30 years. I
>> was barely even competent at ten years from a standpoint of being able
>> to deliver stuff that pretty much works the first time - maybe bounce
>> a bug or two before final release. I'd released stuff that worked from
>> the git-go; what got cleaner was my ability to promise things and hit
>> targets.
>>
>> Somewhere about 20 years ago "we" decided that this is a children's
>> game and structured things accordingly. No; it's an *adult* game and
>> you have to play it to win.
>
> "The Market" doesn't want to have to rely/depend on practitioners.
> I've been hired for the *stated* purpose of an employer/client wanting
> to "not be reliant" on a particular individual currently in their
> employ.

I understand completely; how'd that work out for 'em? We've already
descended into the realm of "who has the power in this relationship?"
That's easy: the boss does. That's fine for running a prison, but it's
hell for a corporation. Had I been explicitly told that up front, I'd
never trust that individual again.

Then again, being head pumper on a sinking ship is no fun. So that's his
choice... I sympathize completely having to depend on that one guy, but..
maybe he's doing it wrong.

> I, in turn, never wanted to be "strapped" with supporting/developing
> the same thing over and over and over (when you're seen as "good" at
> something, you tend to get STUCK doing it).

I find that if you build it right, the support is pretty minimal.

> Look at how society has striven to "dumb down" most labor efforts.
> Not to reduce errors or free employees from "tedium" but, rather,
> to allow less skilled (expensive) employees to fill those roles.

That doesn't work, either. It's been quite the opportunity for me as well.

> [When did the ability to "make change" slip out of the basic skillset
> of ALL consumers??]

Meh. We all use a little plastic card, anyway.

>>> So, you have to rely on the tools getting better, more expressive,
>>> etc.
>>
>> No. That doesn't work. The information asymmetries are huge. I suppose
>> there's a way to school people to be better earlier but I doubt it. I
>> think it'll just take the ten years to get to the journeyman phase.
>
> But it *has* worked! Look at how many folks now make a living "writing
> code". Years ago, they would have been hard-pressed to get their
> (Hollerith) cards in the right order to ensure the job didn't ABEND
> before it got started!

Boo, cards. Very slow and inefficient.

> Now, "secretaries" write macros in spreadsheets, countless
> script-kiddies build web pages, etc. All because the tools have taken
> on more of the "work".

So what's wrong with that? That is not what I am talking about anyway.
"Secretaries" have *real* jobs; we get to play all day. A large dollop of
respect is in order. I'm just a necessary evil, in the end.

> *My* productivity is vastly improved when I can code in a multitasking
> environment -- esp if the tools let me "attach" to multiple threads
> and watch them interacting. This would have been unheard of with the
> targets and development systems available when I started my career!

I don't find any of that amounts to a hill of beans. It's decoration.
I've done things with .. too many threads, one "big loop", oddball CASE
tools... the basics under it all are the same.

>>> Let a tool burn development cycles to make the developer's effort be
>>> more productive (e.g., lint *in* the IDE while you're writing the
>>> code instead of as a "post process").
>>
>> Use of an IDE is a marque of someone who doesn't understand the real
>> risks. That doesn't always hold.
>
> You're advocating that we do away with IDE's?

Nope. But you'd better be able to dive in outside the thing. Or do you
ship "DEBUG" projects and call 'em released?

> Simulators? Lint? etc.

Certainly not.

> A tool isn't inherently "bad"; it all boils down to how well you *use*
> it and what you expect *from* it.

Of course.

> I *love* being able to run my code on a desktop simulator instead of
> being dependent on a piece of target hardware. There's so much more I
> can do in pulling data from the "virtual target" to verify proper
> operation, visualize the data or the performance metrics of the code,
> etc.

This is fine so far as it goes.

>>> [The trick, here, is not to turn the developer into a mindless idiot
>>> that expects the machine to do ALL his/her thinking!]
>>
>> Feel free to do that; it does not turn out well.
>
> It depends on the individual. When I hear people complaining that their
> machines are too slow, my first thought is "What are they doing that is
> causing those to be the apparent bottleneck?" Often, they aren't
> THINKING but, instead, just "trying things" and hoping one of them
> works.

Of course. I do that same thing while I muse about the root cause. I
suspect we all do. About half the time, I stumble into it.

> Then, when it works, forgetting all about the problem (i.e.,
> considering it "solved") and moving on -- to "throw darts" at the
> *next* problem they stumble upon.

Heh.

> [Look at places like McDonald's; their cash registers just have
> *pictures* on them (or, at least, they *did*, at one point). Push the
> "hamburger button" twice for two hamburgers, etc. How the hell can they
> *ever* get an order WRONG? Yet they *do*! :-/ ]

McDonalds has a specific corporate directive to have people and not just
machines in the stores. That's the only reason they're there. "Freshly
scrubbed faces" as per an interview ( might have been an article ) with
Ray Kroc.

>>> E.g., my bias in recent years is to make code more easily understood
>>> and more *robust* -- instead of burning clock cycles on (silly?)
>>> features, burn them ensuring the code ALWAYS works as advertised,
>>> etc. Yet, do so within the other constraints (cost, power, space,
>>> etc.) imposed on the design.
>>
>> Features are manageable; just withhold them until they work in
>> subsequent releases. You, of course, have to know what's really
>> required and what isn't, but that's a big part of the game.
>
> The developer doesn't always have control over what happens, when.

They might as well. It ain't done 'til it's done. Here's the URL of the
current defect list, and I'll send you an email every time a new one pops
up...

> Manglement can declare that a new feature is required -- even though
> the OLD features haven't been "perfected", yet.

I've never had a lick of trouble with negotiating what goes into a
release. All "manglement" wants is documentary evidence of improvement.
If you learn to estimate the cost of not-fixing something, you'll have
better luck with this. And it helps to have a non-adversarial
relationship.

> And, IME, developers tend to want to play with implementing new
> features instead of testing/documenting/perfecting old ones.

So they should learn to be cost-driven. Every feature you *DON'T* add
saves countless dollars in all directions. And if documentation hurts,
you're doing it wrong. Remember that simulator you wrote? There ya go...

> There's little "novelty" in testing or documentation!

"Novelty" is that which I should think we'd like to *avoid*. Nice,
boring, defect-free stuff - that's the ticket.

> And, by the time something is (sort of) working, the developer is
> looking for any excuse to "move on" to something else...

Eventually that converges on not being a developer any more, in my
experience. Narrow is the path...

--
Les Cargill
Reply by ●December 20, 2014
Don Y wrote:
> [The trick, here, is not to turn the developer into a mindless idiot
> that expects the machine to do ALL his/her thinking!]
>
> E.g., my bias in recent years is to make code more easily understood
> and more *robust* -- instead of burning clock cycles on (silly?)
> features, burn them ensuring the code ALWAYS works as advertised, etc.
> Yet, do so within the other constraints (cost, power, space, etc.)
> imposed on the design.

Steady on Don. You are starting to sound like you are advocating that all
software should be "correct by construction". ;>

Actually, "correct by construction" is a very laudable aim for all
software developers. However, you cannot truly state that what you have
constructed is correct by construction if what you are building is overly
complex. Hence, the need to get to the point of simplification of systems
and the components that make up those systems, so that each and every one
can be adequately described, documented and understood.

--
********************************************************************
Paul E. Bennett IEng MIET.....<email://Paul_E.Bennett@topmail.co.uk>
Forth based HIDECS Consultancy.............<http://www.hidecs.co.uk>
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-510979
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
Reply by ●December 21, 2014
On Sat, 20 Dec 2014 11:43:02 -0700, Don Y <this@is.not.me.com> wrote:

> One of my advisors was looking into ways to marry knowledgebases to
> (e.g.) DBMS's to make for more efficient (query, in the DBMS case)
> processing. E.g., instead of looking for "pregnant patients", look for
> *females* that are pregnant (drawing in the qualification from the
> knowledgebase: only females get pregnant)

In a typically designed database, that query wouldn't be made any more
efficient by only targeting females. Such an optimization would require
that male and female patients be separate to begin with, and there's
generally no good reason to do that.

A KB built from correlations found in the data might have some utility in
optimizing ad hoc queries, but ad hoc queries are atypical in most
settings.

George
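[Editorial aside: George's point can be checked directly. With a
conventional row-store and no index on the sex column, adding the
knowledgebase-derived `sex = 'F'` predicate returns the same rows and
still walks the whole table. A hedged sketch using SQLite -- the schema
and data are invented for illustration:]

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE patients (name TEXT, sex TEXT, pregnant INTEGER)")
con.executemany("INSERT INTO patients VALUES (?, ?, ?)",
                [("alice", "F", 1), ("beth", "F", 0), ("carl", "M", 0)])

# The naive query vs. the knowledgebase-augmented one.
plain = con.execute(
    "SELECT name FROM patients WHERE pregnant = 1").fetchall()
augmented = con.execute(
    "SELECT name FROM patients WHERE sex = 'F' AND pregnant = 1").fetchall()
assert plain == augmented == [("alice",)]

# Absent an index (or partition) on sex, both queries are full table
# scans: the extra predicate buys nothing.
for sql in ("SELECT name FROM patients WHERE pregnant = 1",
            "SELECT name FROM patients WHERE sex = 'F' AND pregnant = 1"):
    plan = con.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    assert "SCAN" in plan[0][3]   # detail column reads e.g. "SCAN patients"
```

Only a physical-design change -- an index or separate storage keyed on
`sex` -- would let the knowledgebase qualification pay off, which is
exactly George's objection.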
Reply by ●December 21, 2014
Hi Paul,

On 12/20/2014 7:14 PM, Paul E Bennett wrote:
> Don Y wrote:
>> [The trick, here, is not to turn the developer into a mindless idiot
>> that expects the machine to do ALL his/her thinking!]
>>
>> E.g., my bias in recent years is to make code more easily understood
>> and more *robust* -- instead of burning clock cycles on (silly?)
>> features, burn them ensuring the code ALWAYS works as advertised, etc.
>> Yet, do so within the other constraints (cost, power, space, etc.)
>> imposed on the design.
>
> Steady on Don. You are starting to sound like you are advocating that
> all software should be "correct by construction". ;>

Specification drives design and testing. Of course, there's no guarantee
that the spec is correct for the problem at hand (that's the first part of
the puzzle -- get it wrong and it doesn't matter how "perfect" your
solution happens to be -- you've solved the wrong problem!)

What I use "technological (runtime) advances" for is to choose cleaner
algorithms, more fleshed-out data constructs, etc. So "what I'm doing" is
more apparent to the next guy to look at my code -- without my having to
explain some "trick" in the algorithm (I may still have to explain the
*algorithm*, but not some twist that he/she is likely to misunderstand --
or, worse, *think* they understand and break in their efforts to make
changes).

Similarly, I'll add black boxes to the run-time to give me some
instrumentation that remains *in* the application (and, thus, does not
alter its operation) that can enhance debugging (often, disguising them
as "old state" so the algorithm can exploit their content as well).

For *compile* time exploits, I litter my code with invariants (no runtime
cost) so the next pair of eyes (which may be my own!) knows what the safe
assumptions are at each such place in the code (instead of just moving
them up to check input parameters). Also, I build compile-time tools that
help ensure code and documentation remain in lock step (e.g., I extract
details from publications that I've prepared to describe the algorithms
and #include those directly into the source -- so you change the
documentation to get the source updated!)

Most of these things add some performance penalty (longer build times,
slower run-times, etc.) but that gets hidden in the silicon improvements.

> Actually, "correct by construction" is a very laudable aim for all
> software developers. However, you cannot truly state that what you have
> constructed is correct by construction if what you are building is
> overly complex. Hence, the need to get to the point of simplification
> of systems and the components that make up those systems so that each
> and every one can be adequately described, documented and understood.

That's why it is important to make things in small pieces. "Complex ==
something that you can't fit in your head". So: "one page" functions;
small modules with well-defined functionality/interfaces; etc.

E.g., if you look at how an individual op-code executes (i.e., alters the
current state of the processor), there is a lot of detail there
(instruction fetch, decode, actuation). But, it fits within the above
definition of "not *too* complex". A HLL statement might resolve to many
of those *different* op-codes being executed to move the machine state
from "before the HLL statement" to "after the HLL statement". But, we can
still manage this complexity because we can "skip over" the complexity
that is embodied in *each* opcode and, instead, concentrate on the
abstractions they each represent. So, we apply our wetware to manage
multiple opcode instances instead of all the mechanism they employ. Etc.
for a function, then module, then program, then application.
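[Editorial aside: the "black box" instrumentation described above -- state
history that ships *in* the application without altering its behavior --
plus cheap invariants can both be sketched. This is an illustrative Python
analogue, not the poster's actual code; in C the invariants would be
`assert()`/`static_assert` rather than Python's `assert` (which the `-O`
flag strips, giving the "no runtime cost" property in optimized runs). The
`step` control loop is invented for the example:]

```python
from collections import deque

class BlackBox:
    """In-application flight recorder: keeps the last N state snapshots
    so a field failure can be examined after the fact, without changing
    how the algorithm itself behaves."""
    def __init__(self, depth=32):
        self._trace = deque(maxlen=depth)   # oldest entries fall off
    def record(self, **state):
        self._trace.append(dict(state))
    def dump(self):
        return list(self._trace)

box = BlackBox(depth=4)

def step(level, setpoint):
    # Invariant: callers promise a sane setpoint. Under `python -O`
    # this assert vanishes, so documenting the assumption costs nothing
    # in the shipped build.
    assert 0 <= setpoint <= 100
    error = setpoint - level
    box.record(level=level, setpoint=setpoint, error=error)
    return level + error / 2

level = 0.0
for _ in range(6):
    level = step(level, 50)

print(len(box.dump()))   # only the newest 4 snapshots survive
```

Disguising the trace as "old state" (as the post suggests) would simply
mean the algorithm reads `box.dump()` itself, e.g. for filtering, so the
recorder does double duty.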
Reply by ●December 21, 2014
Hi Les, On 12/20/2014 6:06 PM, Les Cargill wrote:> Don Y wrote:>>> I said "at least target"; I don't even see too many that seem to >>> care. >> >> But that is true of all professions! It's not just "ditch diggers" who >> treat "work" as "just a job". Ever have an argument with a nurse who >> *claims* she didn't dispense a particular medication -- when *you* >> watched her administer it? ("Well, it's not on the chart..."). >> Or a plumber that doesn't *appear* to know how to sweat a joint? >> Or a painter who didn't properly prepare the surface before slopping >> on paint? >> >> We tend to forget that, to most people, "work" is "just a job". >> What incentive do they have to perform better? Will they be >> rewarded? Conversely, penalized if they perform *worse*?? > > At the end of the day, it's just a job to me, too. But it'd be no fun at all if > I wasn't engaged with it at this level.Ah, in my case, it's an "avocation" for which I get paid! :> Moving into a consultant's role gave me the freedom to explore different projects/application domains (instead of getting stuck doing the same thing over and over -- same market, same types of products, etc.). Retirement is a chance to address the projects that *I* originate (instead of worrying about making money for someone else!)> Incentives don't work, ultimately.Yup. Especially the obvious one (money). You really want to find people who are self-motivated, enjoy what they do, etc. "What projects have you done OUTSIDE of work?" (playing video games isn't a "project"!)>>> Learn to speak the (spoken) language; learn how to negotiate >>> what you'll deliver; learn how to do really thorough >>> design/development/testing. Overdeliver/underpromise. >>> >>> I dunno; learn your craft. I've been at it for close to 30 years. I >>> was barely even competent at ten years from a standpoint of being able >>> to deliver stuff that pretty much works the first time - maybe bounce >>> a bug or two before final release. 
I'd released stuff that worked from >>> the git-go; what got cleaner was my ability to promise things and hit >>> targets. >>> >>> Somewhere abut 20 years ago "we" decided that this is a children's game >>> and structured things accordingly. No; it's an *adult* game and you >>> have to play it to win. >> >> "The Market" doesn't want to have to rely/depend on practitioners. >> I've been hired for the *stated* purpose of an employer/client wanting >> to "not be reliant" on a particular individual currently in their employ. > > I understand completely; how'd that work out for 'em? We've already descended > into the realm of "who has the power in this relationship?"Some level of paranoia/self-protection is healthy. But, when parties start flexing their muscle (i.e., employee putting a gun to employer's head to get more money), then the relationship is already soured.> That's easy: the boss does.That's not always the case. Key employee leaves and there can be serious financial repercussions ("Who's your backup? What do we do if you get hit by a truck??")> That's fine for running a prison, but it's hell for a corporation. Had > I been explicitly told that up front, I'd never trust that individual > again. > > Then again, being head pumper on a sinking ship is no fun. So that's his > choice... I sympathize completely having to depend on that one guy, but.. maybe > he's doing it wrong.I can only surmise what transpired prior to my arrival -- based on observations of the personalities involved in the time that followed. I worked with a guy many years ago who (apparently) went looking for a new job every year "on the sly". But, always went to the same firms that he *knew* would leak his search activities back to my employer. According to my boss, he once fielded a call from one of these firms to the effect of: "Is XXXX really unhappy, there? Or, is he just holding you up, again?" 
I wonder if he'd be embarassed if he knew folks were saying things like that about him?>> I, in turn, never wanted to be "strapped" with supporting/developing the >> same thing over and over and over (when you're seen as "good" at >> something, you tend to get STUCK doing it). > > I find that if you build it right, the support is pretty minimal.If you have leverage over the folks who "want changes", that can be so. But, if (e.g.) Marketing comes in every other week with some new idea ("requirement"), all bets are off. At one firm, I was charged with coming up with a design for a newer version of a product they'd been "nursing" for more than a decade. I had to pitch my proposal to damn near everyone: top management, ALL of engineering, marketing, etc. (to my knowledge, this had never been done there -- before or *since*!) Almost immediately, the Marketing folks started in with their "Oh, you HAVE to have *this* feature!" -- citing something that their old device had but that I had elided from the new device's specification. They were NOT happy when I replied, "You sold exactly ONE system with that capability. I know because prior to preparing my proposal, I examined EVERY sales order for the past 10+ years!" The room went quiet until the CEO looked at me and said, "You know, I bet I know *who* bought it -- and it's probably sitting on a SHELF (not in use)". Had I *not* "done my homework", I'd have been bullied into adding a useless feature at some recurring and nonrecurring cost.>> Look at how society has striven to "dumb down" most labor efforts. >> Not to reduce errors or free employees from "tedium" but, rather, >> to allow less skilled (expensive) employees to fill those roles. > > That doesn't work, either. It's been quite the opportunity for me > as well. > >> [When did the ability to "make change" slip out of the basic skillset >> of ALL consumers??] > > Meh. We all use a little plastic card, anyway.?? 
That's something that you do "in your head" -- like memorizing multiplication tables!>> Now, "secretaries" write macros in spreadsheets, >> countless script-kiddies build web pages, etc. All because the tools >> have taken on more of the "work". > > So what's wrong with that? That is not what I am talking about anyway. > "Secretaries" have *real* jobs; we get to play all day. A large > dollop of respect is in order. > > I'm just a necessary evil, in the end.Tools (technology) have advanced so that more people can do the things that would previously have required "specialized skills". And, so those "things" can be applied more pervasively. "Secretaries" aren't carrying decks of cards around to "balance the books" but, instead, are writing macros (or, using visual tools to do same) to do it "live".>> *My* productivity is vastly improved when I can code in a multitasking >> environment -- esp if the tools let me "attach" to multiple threads >> and watch them interacting. This would have been unheard of with >> the targets and development systems available when I started my career! > > I don't find any of that amounts to a hill of beans. It's > decoration. I've done things with .. too many threads, > one "big loop", oddball CASE tools... the basics > under it all are the same.The point is, these techniques were impractical years ago. Writing a *debugger* was a significant effort (e.g., being able to peek and poke memory, single-step a program, etc.) and had to be done for each processor. Now, you have bloated debuggers and simulators that can easily be retargeted to different processors/environments. In my current environment, I have to attach to multiple threads/processes running on different, geographically distant, physical processors to watch a client's request pass through an agency and ultimately to a service. 
Doing *that* with even an ADVANCED debugger would have been tedious
not long ago!

>>>> Let a tool burn development cycles to make the developer's effort be
>>>> more productive (e.g., lint *in* the IDE while you're writing the code
>>>> instead of as a "post process").
>>>
>>> Use of an IDE is a marque of someone who doesn't understand the real
>>> risks.
>>
>> That doesn't always hold.
>>
>> You're advocating that we do away with IDE's?
>
> Nope. But you'd better be able to dive in outside the thing. Or do you
> ship "DEBUG" projects and call 'em released?

You can't ship DEBUG binaries. All dead code has to be removed prior
to shipment. There are typically *many* aspects of a device that can't
be examined or tested without the development scaffolding in place.
The advantage of better tools (languages, debuggers, IDE's, etc.) is
that it allows far more thorough testing/stressing *before* you get to
RELEASE.

The first product I worked on was debugged with "'scope loops" and
paper printouts. No emulators/debuggers/simulators. No HLL's. It was
*painful*. I suspect I could replace the three or four man-years we
spent on just the *software* with a couple of weeks/months of effort,
today (esp if I could take advantage of newer hardware so the newer
*tools* were more effective).

>> Simulators? Lint? etc.
>
> Certainly not.
>
>> A tool isn't inherently "bad"; it all boils down to how well you *use* it
>> and what you expect *from* it.
>
> Of course.
>
>> I *love* being able to run my code on a desktop simulator instead of
>> being dependent on a piece of target hardware. There's so much more I
>> can do in pulling data from the "virtual target" to verify proper
>> operation, visualize the data or the performance metrics of the code,
>> etc.
>
> This is fine so far as it goes.

It can go a *long* way! This is a direct carryover from the way I
design hardware (logic): e.g., synchronous designs are much easier
to "get right" than anything asynchronous.
And, if you do a worst case analysis of all signal paths, all you have
to do is verify operation at DC -- then crank the clock up to the
target frequency.

The same sort of approach can be used in software. Isolate the
hardware and time specific aspects of the code. Verify *they* work
correctly (with fleshed out test suites -- something that is SO much
easier to do with the tools available, now!). Then, KNOWING these
work, you can add in the hardware and temporal aspects of the solution
(which you have deliberately minimized -- to make this easier *and*
more portable!)

>>>> [The trick, here, is not to turn the developer into a mindless idiot
>>>> that expects the machine to do ALL his/her thinking!]
>>>
>>> Feel free to do that; it does not turn out well.
>>
>> It depends on the individual. When I hear people complaining that their
>> machines are too slow, my first thought is "What are they doing that is
>> causing those to be the apparent bottleneck?" Often, they aren't
>> THINKING but, instead, just "trying things" and hoping one of them works.
>
> Of course. I do that same thing while I muse about the root cause. I
> suspect we all do. About half the time, I stumble into it.
There is a difference between "stumbling on" the problem -- and then
*exploring* it -- and "OK, that works... on to the next bug..."

Years ago, I was involved on a subcontract for a MIL project. Primary
contractor had designed the kit. Our job was to build it and test it
(one-of-a-kind sort of thing).

A minicomputer was used to drive the test suite -- pushing data into
the DUT and exercising all data and control paths, indirectly. The
comms link between the minicomputer (TTL/LSI) and the DUT (ECL)
was a horrible kludge of one-shots, level translators, line drivers,
etc.

It wasn't working. I suspected a one-shot was firing too quickly.
Contractor's rep ruled that out -- by examining schematics. After
patiently "deferring to my elders" and getting *nowhere* ("these
are hours of my *life*!"), I grabbed a random cap off the nearest
bench, tacked it onto the pins of the one-shot that I suspected
and reinitiated communication.

"Huh?? What did you *do*??"

When he saw the size of the cap I used, his criticism turned away
from "that's not the problem" to more of "that's *way* too big!"

"Sure! But now we KNOW where the problem lies and can figure
out why your design is wrong!"

>>> Features are manageable; just withhold them until they work in subsequent
>>> releases. You, of course, have to know what's really required and what
>>> isn't, but that's a big part of the game.
>>
>> The developer doesn't always have control over what happens, when.
>
> They might as well. It ain't done 'til it's done. Here's the URL
> of the current defect list, and I'll send you an email every time a new one
> pops up...

It's done when it gets *shipped*. Management always fall back on the
"Shoot the Engineer" approach. "We don't have time to do it RIGHT;
but, we'll have time to do it OVER!"

[One place I worked shipped a large system IN PIECES (as in, not yet
completely manufactured!) just so they could get it "on the books"
before year end. Of course, shortly after the New Year, it came back
with some really angry words from the customer. Of course, the CEO had
moved up the ladder -- based on his "record year"! -- so the mess fell
on those left behind to pick up HIS pieces!]

>> Manglement can declare that a new feature is required -- even though
>> the OLD features haven't been "perfected", yet.
>
> I've never had a lick of trouble with negotiating what goes into a
> release. All "manglement" wants is documentary evidence of
> improvement. If you learn to estimate the cost of not-fixing
> something, you'll have better luck with this. And it helps to have a
> non-adversarial relationship.

This doesn't matter. See above. You are assuming people are rational.
Put a megadollar on the books this year -- and let it come *off* the
books NEXT year -- makes perfect sense to someone whose sole interest
is his *promotion*!

Some projects *avoid* adding things to a release for fear of it NOT
working. One client told me that his bean counters had concluded that
it cost $600 to put a technician in a car and have him drive the 30
miles to "town" to make a repair. Product we were designing at the
time had a DM+DL in the *$300* ballpark. You don't get to make many
"mistakes" at those rates!

I.e., you don't skimp on component quality. You design so that you
can drop-ship a replacement *product* instead of dispatching a
technician. You test every feature to be sure it ALWAYS works.
You don't indulge in feeping creaturism if there's no obvious
value. OTOH, failing to add a feature that is necessary can cost
a sale -- or a reputation!

>> And, IME, developers tend to want to play with implementing new features
>> instead of testing/documenting/perfecting old ones.
>
> So they should learn to be cost-driven. Every feature you *DON'T* add
> saves countless dollars in all directions.

They "should" learn lots of things: how to write specifications; how to
design *to* specifications; how to test to specifications; how not to
introduce bugs; etc.

*My* -- or *your* -- saying these things doesn't make them so!

IME, developers *don't* want to spend time writing specs (how often
do you see someone sit down and start writing *code* as soon as
they're given a new project? *Allegedly* just to "explore some
algorithms"? How often do they *then* write the specifications
having discarded the code they were "playing with"?). They
don't like documenting their code. They don't like building
test suites and applying them rigorously throughout the development
effort (instead, they "poke at" their code just enough to convince
themselves that it APPEARS to work).
It's *so* much more interesting to move on to some other aspect of the
design than to keep hammering at pathological cases that *might* come
up (or, might NOT!).

I have an uncanny ability to find flaws in production code. It's easy
-- figure out what they ASSUME you will do, then do something
unexpected. Disheartening when they are "relieved" that they are
"finally done" -- only to see me poke a hole in their work almost
*casually*!

I had a tool vendor who would grumble about how frequently I would
find bugs in their products (through normal course of use). While they
weren't keen on the bug reports, they *were* happy to have a more
robust product as a result.

> And if documentation hurts, you're doing it wrong. Remember that
> simulator you wrote? There ya go...

Engineers tend to be more interested in "solving problems" than
"describing what they did". I'm almost obsessed with documentation --
yet, each time I bake an Rx, I don't formally *revise* it to reflect
the improvements I've introduced with this latest incarnation.
Instead, I leave a cryptic note to myself. "Next time", I'll carefully
examine every square inch of the page to figure out which group of
notes are most recent and "update" the Rx "in my head". Should I,
instead, keep a laptop in the kitchen just to "do it *right*"? (At
least I *made* notations as to the impact of each change instead of
relying on memory for that!)

>> There's little
>> "novelty" in testing or documentation!
>
> "Novelty" is that which I should think we'd like to *avoid*. Nice, boring
> defect free stuff - that's the ticket.

Elsewhere, you called it "fun". I guess we have different ideas of
"fun". I don't consider "boring" and "fun" to be synonymous. Sounds
more like a *job*!

>> And, by the time something is
>> (sort of) working, the developer is looking for any excuse to "move on"
>> to something else...
>
> Eventually that converges on not being a developer any more, in
> my experience.
> Narrow is the path...

Look around at the (older) folks who started off their careers in
engineering:

Some move into Management (money, unable/unwilling to keep up with
technological advances, perceived prestige, etc.).

Some move into their own ventures (consultancies, businesses, etc.).

Some keep doing the same thing forever (every place I've worked has
had at least one "old-timer" who is helplessly out of date with
current technology -- hopefully, not in a position where he keeps the
company's feet firmly planted in The Past).

Some keep performing at "subsistence level" and are retained solely
out of inertia ("He's harmless").

Others become "idea people" -- keeping just enough abreast of
technology to know what *should* be feasible, but not really competent
to do the actual work.

etc.

There's a very different mindset involved in wanting to "get something
(done) *right*" vs. "just move on".

Time to assemble my first set of cookie platters and get them out of
here (so I can get on with the rest of my baking!)
Reply by ●December 21, 2014
Hi George,

On 12/21/2014 5:23 AM, George Neuner wrote:
> On Sat, 20 Dec 2014 11:43:02 -0700, Don Y <this@is.not.me.com> wrote:
>
>> One of my advisors was looking into ways to marry knowledgebases to
>> (e.g.) DBMS's to make for more efficient (query, in the DBMS case)
>> processing. E.g., instead of looking for "pregnant patients",
>> look for *females* that are pregnant (drawing in the qualification from
>> the knowledgebase: only females get pregnant)
>
> In a typically designed database, that query wouldn't be made any more
> efficient by only targeting females. Such an optimization would
> require that male and female patients be separate to begin with and
> there's generally no good reason to do that.

Dunno. I suspect this was just an easy example for him to use to
explain the concept -- one that virtually everyone could "understand".

> A KB built from correlations found in the data might have some utility
> in optimizing ad hoc queries, but ad hoc queries are atypical in most
> settings.

I'm not sure what his goal/methodology was -- whether he was trying to
"learn" on-the-fly, or whether this was part of some more fundamental
aspect of the design. (This was almost 40 years ago and not something
I was *interested* in, at the time.)

OTOH, it is this type of knowledge that a programmer can embed in his
algorithm that a compiler can't (necessarily) infer from an
examination of the sources (that *don't* contain these relationships).
Things like:

    uint foo;
    if (foo >= 0) ...

are relatively lame, by comparison. (Yet amusing to see how often
developers write things like that!)
Reply by ●December 21, 2014
Don Y wrote:
> Hi Les,
>
> On 12/20/2014 6:06 PM, Les Cargill wrote:
>> Don Y wrote:
>
<snip>
>>> I, in turn, never wanted to be "strapped" with supporting/developing the
>>> same thing over and over and over (when you're seen as "good" at
>>> something, you tend to get STUCK doing it).
>>
>> I find that if you build it right, the support is pretty minimal.
>
> If you have leverage over the folks who "want changes", that can
> be so. But, if (e.g.) Marketing comes in every other week with
> some new idea ("requirement"), all bets are off.

Never had any trouble with that. You have to frame issues in terms of
risk, cost and capability.

> At one firm, I was charged with coming up with a design for a newer
> version of a product they'd been "nursing" for more than a decade.
> I had to pitch my proposal to damn near everyone: top management,
> ALL of engineering, marketing, etc. (to my knowledge, this had never
> been done there -- before or *since*!)
>
> Almost immediately, the Marketing folks started in with their
> "Oh, you HAVE to have *this* feature!" -- citing something that
> their old device had but that I had elided from the new device's
> specification. They were NOT happy when I replied, "You sold
> exactly ONE system with that capability. I know because prior
> to preparing my proposal, I examined EVERY sales order for the
> past 10+ years!"
>
> The room went quiet until the CEO looked at me and said, "You
> know, I bet I know *who* bought it -- and it's probably sitting
> on a SHELF (not in use)".

Well, there ya go.

<snip>

> "Secretaries" aren't carrying decks of cards around to "balance the
> books" but, instead, are writing macros (or, using visual tools to
> do same) to do it "live".

Nothing wrong with that.

<snip>

>> Nope. But you'd better be able to dive in outside the thing. Or do you
>> ship "DEBUG" projects and call 'em released?
>
> You can't ship DEBUG binaries. All dead code has to be removed prior to
> shipment.
> There are typically *many* aspects of a device that can't be
> examined or tested without the development scaffolding in place. The
> advantage of better tools (languages, debuggers, IDE's, etc) is that it
> allows far more thorough testing/stressing *before* you get to RELEASE.

The point being that you have a release process.

<snip>

>>> I *love* being able to run my code on a desktop simulator instead of
>>> being dependent on a piece of target hardware. There's so much more I
>>> can do in pulling data from the "virtual target" to verify proper
>>> operation, visualize the data or the performance metrics of the code,
>>> etc.
>>
>> This is fine so far as it goes.
>
> It can go a *long* way! This is a direct carryover from the way I
> design hardware (logic): e.g., synchronous designs are much easier
> to "get right" than anything asynchronous. And, if you do a worst
> case analysis of all signal paths, all you have to do is verify operation
> at DC -- then crank the clock up to the target frequency.

It can so long as you can get buy-in on the NRE for it.

<snip>

>> Of course. I do that same thing while I muse about the root cause. I
>> suspect we all do. About half the time, I stumble into it.
>
> There is a difference between "stumbling on" the problem -- and then
> *exploring* it -- and "OK, that works... on to the next bug..."
>
> Years ago, I was involved on a subcontract for a MIL project. Primary
> contractor had designed the kit. Our job was to build it and test it
> (one-of-a-kind sort of thing).
>
> A minicomputer was used to drive the test suite -- pushing data into
> the DUT and exercising all data and control paths, indirectly. The
> comms link between the minicomputer (TTL/LSI) and the DUT (ECL)
> was a horrible kludge of one-shots, level translators, line drivers,
> etc.
>
> It wasn't working. I suspected a one-shot was firing too quickly.
> Contractor's rep ruled that out -- by examining schematics.
> After
> patiently "deferring to my elders" and getting *nowhere* ("these
> are hours of my *life*!"), I grabbed a random cap off the nearest
> bench, tacked it onto the pins of the one-shot that I suspected
> and reinitiated communication.
>
> "Huh?? What did you *do*??"
>
> When he saw the size of the cap I used, his criticism turned away
> from "that's not the problem" to more of "that's *way* too big!"
>
> "Sure! But now we KNOW where the problem lies and can figure
> out why your design is wrong!"

:)

<snip>

>> I've never had a lick of trouble with negotiating what goes into a
>> release. All "manglement" wants is documentary evidence
>> of improvement. If you learn to estimate the cost of not-fixing
>> something, you'll have better luck with this. And it helps to have a
>> non-adversarial relationship.
>
> This doesn't matter. See above. You are assuming people are rational.

They are if you let 'em be rational. This is my point.

> Put a megadollar on the books this year -- and let it come *off* the
> books NEXT year -- makes perfect sense to someone whose sole interest
> is his *promotion*!

In that case, that *IS* rational. But in general, I've managed to work
with people who had the same basic interest-alignment I had.

> Some projects *avoid* adding things to a release for fear of it NOT
> working.

Yes.

> One client told me that his bean counters had concluded that
> it cost $600 to put a technician in a car and have him drive the 30
> miles to "town" to make a repair. Product we were designing at the
> time had a DM+DL in the *$300* ballpark. You don't get to make many
> "mistakes" at those rates!

Nope.

> I.e., you don't skimp on component quality. You design so that you
> can drop-ship a replacement *product* instead of dispatching a
> technician. You test every feature to be sure it ALWAYS works.

Yep.

> You don't indulge in feeping creaturism if there's no obvious
> value. OTOH, failing to add a feature that is necessary can cost
> a sale -- or a reputation!
The point of that is that it is manageable, and the way to manage it
is by balancing cost and risk. If a feature simply *HAS* to be there,
then it's gonna need to be there.

>>> And, IME, developers tend to want to play with implementing new features
>>> instead of testing/documenting/perfecting old ones.
>>
>> So they should learn to be cost-driven. Every feature you *DON'T* add
>> saves countless dollars in all directions.
>
> They "should" learn lots of things: how to write specifications; how to
> design *to* specifications; how to test to specifications; how not to
> introduce bugs; etc.
>
> *My* -- or *your* -- saying these things doesn't make them so!
>
> IME, developers *don't* want to spend time writing specs (how often
> do you see someone sit down and start writing *code* as soon as
> they're given a new project? *Allegedly* just to "explore some
> algorithms"? How often do they *then* write the specifications
> having discarded the code they were "playing with"?). They
> don't like documenting their code. They don't like building
> test suites and applying them rigorously throughout the development
> effort (instead, they "poke at" their code just enough to convince
> themselves that it APPEARS to work).

Yep.

<snip>

>> "Novelty" is that which I should think we'd like to *avoid*. Nice, boring
>> defect free stuff - that's the ticket.
>
> Elsewhere, you called it "fun". I guess we have different ideas of "fun".
> I don't consider "boring" and "fun" to be synonymous. Sounds more like
> a *job*!

Of course it's a job. But that job is less fun when you're
firefighting all the time.

>>> And, by the time something is
>>> (sort of) working, the developer is looking for any excuse to "move on"
>>> to something else...
>>
>> Eventually that converges on not being a developer any more, in
>> my experience. Narrow is the path...
> Look around at the (older) folks who started off their careers in
> engineering:
>
> Some move into Management (money, unable/unwilling to keep up with
> technological advances, perceived prestige, etc.).
>
> Some move into their own ventures (consultancies, businesses, etc.).
>
> Some keep doing the same thing forever (every place I've worked has
> had at least one "old-timer" who is helplessly out of date with
> current technology -- hopefully, not in a position where he keeps the
> company's feet firmly planted in The Past).

Most companies have their feet firmly planted in the past -- and for
good reason. The "can't keep up" thing is always suspicious; I've
never seen it in thirty years. Generally, new technology means a new
project and those are, frankly, unusual. If you wish to introduce new
tech, you're better off bringing it in as a fait accompli.

> Some keep performing at "subsistence level" and are retained solely
> out of inertia ("He's harmless").
>
> Others become "idea people" -- keeping just enough abreast of technology
> to know what *should* be feasible, but not really competent to do the
> actual work.
>
> etc.
>
> There's a very different mindset involved in wanting to "get something
> (done) *right*" vs. "just move on".

It's harder to get it right if you're having to "perform" at the same
time. "Performers" play to the audience.

> Time to assemble my first set of cookie platters and get them out of
> here (so I can get on with the rest of my baking!)

--
Les Cargill