On 3/8/2015 6:06 PM, Don Y wrote:
> On 3/8/2015 1:39 AM, Paul E Bennett wrote:
>> I often think a better metric for bonus payments are on Function Points
>> correctly implemented (passed through test without detected errors).
> As a contractor, I've come up with a practical solution: bug fixes are
> free.

That works well until an ill-minded customer discovers they can
effectively run a denial-of-service attack on you by just claiming the
existence of all manner of bugs, regardless of whether they correspond to
reality.

> (also forces clients to know what they want and outline that in "terms"
> that can be *measured* in the deliverables

The other side of the above strategy will then be that they re-phrase all
their change requests as bugs to be fixed for free.
Code metrics
Started by ●March 7, 2015
Reply by ●March 8, 2015
Reply by ●March 8, 2015
Hi Frank,

On 3/8/2015 11:02 AM, Frnak McKenney wrote:

8<

>>>> It has happened. In the early 80s I did some work in a shop where
>>>> the new programming manager instituted LOCs as a productivity metric,
>>>> which then factored into raises and bonuses.
>>>
>>> Dangerous!!!
>>
>> Virtually all incentives can be gamed -- to the *detriment* of the
>> person doing the incentivizing!
>
> If you want to see this debate played out for a wider audience, take a
> look at all the methods which have been suggested -- or used -- to
> evaluate "teaching" or "education": class sizes, favorable student
> reviews, amount of money per student, multiple-choice tests, GPA,
> parental feedback, ... all of which seems to indicate that there is no
> generally agreed-upon measure of either "efficiency" or "effectiveness".

But that just confirms the misuse of (those) metrics! They are being used
to *force* change (fire ineffective teachers; justify smaller class sizes;
"get back to basics"; etc.) instead of as a *tool* to help understand the
"system" that is being measured.

Do people *really* think it's A Good Thing for kids NOT to be tested to
"standards" in (primary) school? Sure, you won't risk hurting Johnny's
feelings. Or, having the teacher concentrate too much on "teaching to the
test" vs. a more general approach. Until, of course, Johnny gets to
college and the admissions officer tosses his application in the trash
because of poor grammar, spelling, etc. And, the FinAid office does
likewise because "his numbers don't add up".

"OhMiGosh! What are we going to do! Obviously, LOWER the standards cuz
it's too late to *fix* Johnny's primary school problem!"

> Which, since we're talking about human beings here, doesn't stop a lot
> of heat -- and the odd bit of illumination -- being generated on the
> topic.
>
> "We know good/bad coding when we see it" seems as good a metric as any,
> as does "We know good/bad teaching when we see it". It just takes a lot

But, *do* we? We *probably* can agree on egregious concoctions. But, do
you really *know* how "good" *your* code is? How do you make that
evaluation? "It runs"? "It was 'finished' on time/under budget"? "It
hasn't killed anyone"? "There's not much maintenance being required"?

And, more to the point, do you KNOW how to make it *better*? Or, do you
just *think* you do? I.e., all these rules/guidelines developed and
codified over the last 50 years *try* to bias your efforts to a "better"
result. Yet, you could faithfully implement ALL of them and still have
crappy code!

When it comes down to specifics, can you put a number on *how* important
any particular coding practice is wrt code correctness? Or, its cost to
The Project? Do you even *know* how much your code "costs" (i.e., *you*)?

When I first started on my own, I was stunned at how much time was spent
on non-engineering tasks! Equipment maintenance, purchases, ordering
supplies, accounting, etc. How easily an hour could pass talking with a
client, sales rep, colleague, etc. on the phone with "nothing" to show for
it! E.g., sorting out some technical detail in a particular device prior
to selecting it for the design; or a detail in the project specification;
or, a detail in some other colleague's subsystem upon which you rely; etc.

> of time and effort, and depends on honesty and trust... which are not
> "mechanical" processes.
>
> Good luck!

The *good* thing is that you can have "low expectations" from the
results -- there are no "target numbers" involved. Just trends, guidance,
etc.
Reply by ●March 8, 2015
On Sun, 08 Mar 2015 01:12:24 -0700, Don Y <this@is.not.me.com> wrote:

> Hi George,
>
> On 3/8/2015 12:00 AM, George Neuner wrote:
>
>> Writing fast and sloppy has been glorified and institutionalized
>> through the use of "agile" methods, rapid releases, push updating, and
>> using the customer as unpaid testers.
>
> But the numbers you'd gather from *that* effort would (roughly) translate
> to a similar effort undertaken in that same style.

That's true but mostly irrelevant because the numbers include unknown
amounts of slop that can't be correlated.

The problem with quick'n dirty is that much effort inevitably is wasted.
Even if you know the general direction, there always are false starts and
blind alleys before you reach the destination.

Eventually the application falls over under the weight of the grafts and
has to be refactored - which is all waste. Unavoidable sometimes, but
waste nonetheless. Metrics rarely take into account how much work needs
to be redone, however "you can always do it over" is a basic premise of
agile.

> And, would give you a way of comparing a *different* development style
> to that one with measurable results:
>   "Yeah, we got the code to the user a lot quicker, the old way. But,
>   it ended up more complex and more costly and we had a customer
>   grumbling all the time we were issuing those endless updates! -- 'when
>   is this thing going to be *done*?'"

Not really. Compared to other methods, agile development is extremely
sensitive to team makeup. Replace any member of the team and the numbers
you've gathered become meaningless.

You can compare different agile teams which are doing essentially the
same work, but you can't necessarily generalize from that to their
performance on a different problem.

You can compare the total cost of agile to the total cost of some other
method, but the numbers are misleading because agile deliberately trades
work redone later for short time to market now. That is fundamentally
different from the feature trading that other methods consider and makes
comparing agile with other methods extremely difficult (unless you
consider correct function to be a "feature" that can be delayed until
version X).

I spent quite a few years doing "continuous" development - which is
similar to "agile", but more structured. I think agile has been the worst
thing to come along in my lifetime. On the surface it looks appealing,
but the appearance is a mirage concealing a tar pit beneath.

YMMV,
George
Reply by ●March 8, 2015
Hi Stefan,

On 3/8/2015 11:09 AM, Stefan Reuther wrote:
> Don Y wrote:
>> On 3/8/2015 2:35 AM, Stefan Reuther wrote:
>>> Don Y wrote:
>>>> On 3/7/2015 2:52 PM, Tim Wescott wrote:
>>>>> I don't know if things have changed in the last decade or so, but
>>>>> the last time I really paid attention to this, it was felt that
>>>>> _any_ code metric could be gamed. Search on "Dilbert" and "write me
>>>>> a new mini-van".
>>>>
>>>> But, what do they *gain* by doing so?
>>>
>>> "Get the metric guys to shut up and let me do my job."
>>> [...]
>>
>> But that's (IMO) a misapplication of metrics. They're trying to use it
>> to control/impose quality, process, etc.
>
> ...which is precisely my complaint with these metrics.

But that's not the fault of the metrics! Replace them with <anything> and
the folks *applying* them would still cause you the same grief!

If, instead, they were used as an advisory tool:
  "Hmmm... your LOC/day figure is dropping, Stefan. This *suggests*
  whatever you are working on, now, is more tedious than what you were
  working on previously. And, possibly suggests more testing will be
  required of that module than the previous (cuz you are having to 'think
  harder' about it while writing it)"
Or:
  "Wow! At this point, in the last project, we were seeing XXX. But,
  we're now seeing YYY. How does this forebode our future efforts and
  costs wrt that previous project and the estimates we prepared for this?"

>> I believe virtually all "rules" are mistakes when it comes to software
>> development. They should be considered "guidelines". Making things
>> rules suggests you have incompetent folks doing the work (and bean
>> counters trying to constrain it). Expect people to know their craft.
>> [...]
>> Tools should provide *guidance*; the developer should evaluate that
>> guidance in the context of the problem at hand.
>
> When using a complicated or pitfall-ridden language such as C or C++,
> you cannot assume people know their craft perfectly, so a tool sounds
> like a perfect excuse. What I believe is overlooked is that even if you
> have a tool, you still need a competent guide to tell people what to do
> with the tool's message - and a livable process for people to confirm
> "yes, I know more than the tool at this point".

Sure. We call that "experience" and "quality developers". You can *know*
something is bad -- and still do it, intentionally, in spite of the
acknowledged risks! Knowing doesn't ensure you will be *wise* in the use
of that information.

> Without that, intermediate developers will believe the tool, and the
> advanced ones who don't believe the tool will game the system because of
> the complicated deviation process (doing protocol means two more hours
> overtime, gaming the system means five minutes). Neither improves
> software quality.

Again, that's how the metrics are *applied*, not a characteristic of the
metrics themselves.

Returning to my "no warnings" comment, below: I can just turn OFF all
warnings to achieve the same result! With the expected downside impact on
code quality.

You can't "legislate" good behavior/practices. But, you can put tools in
place that let people *see* the costs of their actions and make INFORMED
decisions in light of that information.

>> E.g., I target "no warnings" in my compiles. But, that's because there
>> isn't a language feature that allows me to insert:
>>     #acknowledge The 'missing cast' error can be ignored, here
>> that allows me to indicate that I am aware of the "warning" and have
>> evaluated it properly. *AND*, that the compiler's failure to signal
>> that warning at this directive should, itself, signal an ERROR
>> (i.e., this acknowledgement shouldn't be here). As it would be a
>> type of "comment" that the compiler *could* indicate was "incorrect"!
>
> "no warnings" is a pretty vague goal, because every compiler warns
> differently, and some warnings are unavoidable. For example,
>     u++;
> will warn with gcc and -Wconversion if u has a type shorter than int,
> and the only way to shut that up is to convert it into something verbose
> like
>     u = (uint16_t) (u + 1);
> That's why I have mixed feelings about such a rule (but normally target
> "no warnings" as well).

The alternative is to *hope* the developer (the original developer as well
as any that *follow*!) understands all the warnings and has IMPLICITLY
decided that they can be ignored. Are you sure the warnings *you* are
seeing are the same warnings that *he* saw, previously?? :>

I port a fair bit of software. So, switching compilers, environments,
etc. is a commonplace occurrence for me. The first thing I do is turn on
all warnings and see how "messy" the output becomes. Then, track down
each "violation" to see why the compiler flagged it as such. Was the
previous developer (which may have been me!) just lazy, here? Or, are the
tools different which makes certain previous assumptions no longer valid
(e.g., sizes of data types)? Or, is this a compiler specific behavior
that wasn't caught by previous Best Practices?

"Warning" means exactly that: "Hey, are you sure you know what you are
doing, here?" I can either dismiss all with a naive, "Yup". Or, if I
care for the quality of my code, I can spend the time to investigate why
each was signaled. Then, take measures to "mark" them so they don't
require additional time from me (or my successor) in the future.

I can't guarantee particular results. But, I can choose to put in place
procedures/mechanisms that "improve my odds" of getting things right.
Metrics are just an advisory tool along that same continuum.

E.g., I designed my IDL so the spec defines *all* the results from a
particular method invocation. This allows me to handle each RPC/IPC in a
boilerplate manner to ensure every potential outcome is at least
*recognized*/acknowledged by the developer. It's the equivalent of
ensuring each malloc() is followed by "if (result == NULL)...". I.e., it
doesn't guarantee that the developer handles the out-of-memory case
CORRECTLY. But, it prods him/her to at least remember that this is a real
possibility at each invocation!
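[Editor's note: GCC and Clang come close to Don's hypothetical
"#acknowledge" directive with their diagnostic pragmas -- close, but not
all the way, since neither can flag a *stale* acknowledgement, i.e., raise
an error when the acknowledged warning no longer fires. A minimal sketch
of the pattern; the ACK_* macro names are invented here for illustration,
and only the pragmas themselves are real GCC/Clang extensions:

    #include <stdint.h>

    /* Make the acknowledgement explicit, visible, and grep-able, instead
     * of silently widening/casting to placate the compiler. */
    #define ACK_WARNING(which)  _Pragma("GCC diagnostic push") \
                                _Pragma(which)
    #define ACK_END             _Pragma("GCC diagnostic pop")

    static uint8_t u;

    void bump(void)
    {
        /* With -Wconversion, "u++" warns because the arithmetic is done
         * at int width and narrowed on the store. Acknowledge it: */
        ACK_WARNING("GCC diagnostic ignored \"-Wconversion\"")
        u++;    /* wraparound at 255 is intended here */
        ACK_END
    }

The push/pop pair keeps the suppression scoped to the one statement, so
the same warning still fires everywhere it has *not* been explicitly
acknowledged.]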
Reply by ●March 8, 2015
Hi George,

On 3/8/2015 12:08 PM, George Neuner wrote:
> On Sun, 08 Mar 2015 01:12:24 -0700, Don Y <this@is.not.me.com> wrote:
>> On 3/8/2015 12:00 AM, George Neuner wrote:
>>
>>> Writing fast and sloppy has been glorified and institutionalized
>>> through the use of "agile" methods, rapid releases, push updating, and
>>> using the customer as unpaid testers.
>>
>> But the numbers you'd gather from *that* effort would (roughly)
>> translate to a similar effort undertaken in that same style.
>
> That's true but mostly irrelevant because the numbers include unknown
> amounts of slop that can't be correlated.
>
> The problem with quick'n dirty is that much effort inevitably is
> wasted. Even if you know the general direction, there always are
> false starts and blind alleys before you reach the destination.

I think the problem is finding two *apples* to compare. From what I've
seen of agile, spiral, etc. development styles, they're "never done". So,
how do you know when you've got project Y in the same state that project X
was at the time of the metrics you're using in the comparison?

> Eventually the application falls over under the weight of the grafts
> and has to be refactored - which is all waste. Unavoidable sometimes,
> but waste nonetheless. Metrics rarely take into account how much work
> needs to be redone, however "you can always do it over" is a basic
> premise of agile.

That's my point, above. If you look at the project at a point AFTER the
refactoring, then the costs of that are reflected in the new metrics.

>> And, would give you a way of comparing a *different* development style
>> to that one with measurable results:
>>   "Yeah, we got the code to the user a lot quicker, the old way. But,
>>   it ended up more complex and more costly and we had a customer
>>   grumbling all the time we were issuing those endless updates! --
>>   'when is this thing going to be *done*?'"
>
> Not really. Compared to other methods, agile development is extremely
> sensitive to team makeup. Replace any member of the team and the
> numbers you've gathered become meaningless.

The same applies to any other effort. Comparing metrics of developer A to
developer B (who has an entirely different process) is meaningless, as is
comparing language A to language B, etc. Comparing the cost of a Ferrari
to a Chevy would be equally meaningless. OTOH, comparing the costs of a
Model XYZ w/an inline 6 to a Model XYZ w/a V8 bears some merit!

> You can compare different agile teams which are doing essentially the
> same work, but you can't necessarily generalize from that to their
> performance on a different problem.

You would use the trends observed *during* "problem A" to glean insights
into what to expect for "problem B". Note you don't try to *know* what
will happen in "problem B" but, rather, be alert as to what *has* happened
with problem A in the past. E.g., "everything went fine UNTIL we got
to..."

> You can compare the total cost of agile to the total cost of some
> other method, but the numbers are misleading because agile
> deliberately trades work redone later for short time to market now.

Ferrari vs. Chevy. Someone has to place a value on "time to market" and
decide how it offsets the other "costs"/consequences of that approach. No
free lunch.

Likewise, someone has to put a cost on the disdain you risk from your
guinea pigs^H^H^H^H^H customers by producing an incomplete product and
*charging* them for a working unit! How many FUTURE customers do you
lose? How many "lines of debugged code" would that have purchased?

> That is fundamentally different from the feature trading that other
> methods consider and makes comparing agile with other methods
> extremely difficult (unless you consider correct function to be a
> "feature" that can be delayed until version X).

Isn't the same sort of thing evident in the blatant:
  "We don't have time to do it right; but, we'll have time to do it over"
or:
  "Great idea! We'll put that in version 2"
mentalities? These have been around since *I* got started in industry!

> I spent quite a few years doing "continuous" development - which is
> similar to "agile", but more structured. I think agile has been the
> worst thing to come along in my lifetime. On the surface it looks
> appealing, but the appearance is a mirage concealing a tar pit
> beneath.

Agreed. But I don't see any of these things as arguments against *having*
metrics. Merely complications as to how they can be used and the
"reliability" of the observations gleaned from them.

Sunday Lunch. Finestkind!
Reply by ●March 8, 2015
Frnak McKenney <frnak@far.from.the.madding.crowd.com> wrote:

(snip on coding metrics)

> If you want to see this debate played out for a wider audience, take a
> look at all the methods which have been suggested -- or used -- to
> evaluate "teaching" or "education": class sizes, favorable student
> reviews, amount of money per student, multiple-choice tests, GPA,
> parental feedback, ... all of which seems to indicate that there is no
> generally agreed-upon measure of either "efficiency" or "effectiveness".
> Which, since we're talking about human beings here, doesn't stop a lot
> of heat -- and the odd bit of illumination -- being generated on the
> topic.

Then add "No Child Left Behind", a plan from a C student president to make
sure that all students are C students.

In Washington state, all schools are now failing, according to NCLB, as
they are refusing to use student test results for teacher evaluation. The
result is that pretty much all schools have to send a letter to parents
indicating that the school is failing.

> "We know good/bad coding when we see it" seems as good a metric as any,
> as does "We know good/bad teaching when we see it". It just takes a lot
> of time and effort, and depends on honesty and trust... which are not
> "mechanical" processes.

and, similarly, test results aren't all that good at measuring teachers.

-- glen
Reply by ●March 8, 2015
Hi Glen,

On 3/8/2015 1:59 PM, glen herrmannsfeldt wrote:
> Frnak McKenney <frnak@far.from.the.madding.crowd.com> wrote:
>
> (snip on coding metrics)
>
>> If you want to see this debate played out for a wider audience, take a
>> look at all the methods which have been suggested -- or used -- to
>> evaluate "teaching" or "education": class sizes, favorable student
>> reviews, amount of money per student, multiple-choice tests, GPA,
>> parental feedback, ... all of which seems to indicate that there is no
>> generally agreed-upon measure of either "efficiency" or
>> "effectiveness". Which, since we're talking about human beings here,
>> doesn't stop a lot of heat -- and the odd bit of illumination -- being
>> generated on the topic.
>
> Then add "No Child Left Behind", a plan from a C student president to
> make sure that all students are C students.

*Any* and *every* plan to "measure" the education system (its components,
its results, etc.) is a farce. Too many "special interests" -- teachers,
administrators, parents, etc. And, the system is uncharacterized. Like
trying to control a loop for which you have no idea of the extents of lag
present, etc.

So, instead, we wait until Johnny is responsible for administering that IV
drug upon which your health/life depends. When he screws *that* up, we
fire Johnny (let him move on to some other profession... like teaching!),
give the patient (or the patient's family) our condolences, pay off some
lawyers and lament how the system failed "Johnny" (with no mention of the
*patient*!)

Or, enforcing *laws* on the streets.

Charter schools! Really?? We all know how well business has addressed the
needs of its "customers", historically. "Ah, but these are *schools*!
SHIRLEY, they'll do better -- due to the moral imperative!" (just like
pharmaceutical companies set their pricing and policies based on THAT
morality)

> In Washington state, all schools are now failing, according to NCLB, as
> they are refusing to use student test results for teacher evaluation.

To be fair, how would *you* measure teachers? It would be like measuring
*your* coding performance based on a starting point of some other
developer(s)' code upon which you've built. WITHOUT even giving you the
choice of whose codebase you will be supporting! (No, you can't go back
and rewrite it all; there are only 180 days in the school year and you
have to have made *progress* in that time!)

> The result is that pretty much all schools have to send a letter to
> parents indicating that the school is failing.
>
>> "We know good/bad coding when we see it" seems as good a metric as any,
>> as does "We know good/bad teaching when we see it". It just takes a lot
>> of time and effort, and depends on honesty and trust... which are not
>> "mechanical" processes.
>
> and, similarly, test results aren't all that good at measuring teachers.

Building on Frank's comment: we all (think!) we know a good/bad teacher
when we see him/her. But, the verdict is never "in" until long after the
student has moved beyond. And, the teacher can never truly be isolated
and identified as the *cause* of the student's success/failure.

I.e., don't measure students to reward/punish the students *or* the
teachers (or the System). Instead, use the measurements as tools to
evaluate areas that may need special attention. Or, to gauge the benefits
of certain "investments".

(Personally, I don't understand the big hullabaloo over testing. I can
remember taking "big tests" throughout my primary school education. And,
I'm sure there was *some* effort to guide my studies in such a way that I
would fare well enough on those. Without drawing attention to the fact
that this is what was actually being done!)

Otherwise, you wait until it is too late and some "vested interest" casts
judgement on whether or not Johnny is eligible for a particular vocation,
continuing education or <whatever>. Do you then create laws to prevent
medical schools, employers, etc. from discriminating based on intelligence
or other measures of aptitude? So Johnny can be a doctor or rocket
scientist even if he's not qualified??
Reply by ●March 8, 2015
Don Y wrote:

> I'd like to set up an environment where much of this can be measured
> automatically and track "performance", over time (drawing any potential
> "conclusions" at the finish). Unfortunately, I can't see how to
> instrument the "time" aspect of the effort. It's misleading to note
> the time between checkout and subsequent check-in as representative
> (even loosely!) of the time spent working on/staring at a particular
> piece of code. Even if you could measure the time during which the
> code was "active" in an editor/IDE, there is no guarantee that the
> developer is *looking* at it! Or, *thinking* about it!

My own development process fits both waterfall and spiral methods of
development. In the manner of good "Project Management" there is a
significant portion of Up-Front work in getting the specs right (these
documents are "Components of the System" as well and are kept under tight
version control and change management throughout).

The core element of my development process leaves an audit trail
automatically (whether you operate it on paper or software tool aided).
Within this audit trail you will find your metrics (on how many times a
component has been round its action, review, change loop). How many
issues were raised in the reviews against it. How many problem reports
involved the component. How long it took to get an approved component to
pass the review. When even the act of getting a good specification is
under such a development regime, you can ensure it meets the 6 C's
criteria for a good requirements specification.

--
********************************************************************
Paul E. Bennett IEng MIET.....<email://Paul_E.Bennett@topmail.co.uk>
Forth based HIDECS Consultancy.............<http://www.hidecs.co.uk>
Mob: +44 (0)7811-639972
Tel: +44 (0)1392-426688
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
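[Editor's note: one way to picture the audit trail Paul describes is as a
per-component record that the process fills in as a side effect of
operating it. A sketch; the type and field names are invented for
illustration, and Paul's actual process may capture more or less than
this:

    #include <time.h>

    struct audit_record {
        const char *component;       /* spec, design doc, module, test... */
        unsigned    loop_trips;      /* times round the action/review/
                                        change loop */
        unsigned    review_issues;   /* issues raised in reviews against it */
        unsigned    problem_reports; /* problem reports involving it */
        time_t      first_submitted; /* first offered for review */
        time_t      approved;        /* passed review; (approved -
                                        first_submitted) is the
                                        time-to-approval metric */
    };

Every metric Paul lists falls out of a table of these records, with no
bookkeeping effort beyond what the review process already demands.]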
Reply by ●March 8, 2015
Hi Paul,

On 3/8/2015 4:53 PM, Paul E Bennett wrote:
> Don Y wrote:
>
>> I'd like to set up an environment where much of this can be measured
>> automatically and track "performance", over time (drawing any potential
>> "conclusions" at the finish). Unfortunately, I can't see how to
>> instrument the "time" aspect of the effort. It's misleading to note
>> the time between checkout and subsequent check-in as representative
>> (even loosely!) of the time spent working on/staring at a particular
>> piece of code. Even if you could measure the time during which the
>> code was "active" in an editor/IDE, there is no guarantee that the
>> developer is *looking* at it! Or, *thinking* about it!
>
> My own development process fits both waterfall and spiral methods of
> development. In the manner of good "Project Management" there is a
> significant portion of Up-Front work in getting the specs right (these
> documents are "Components of the System" as well and are kept under
> tight version control and change management throughout).
>
> The core element of my development process leaves an audit trail
> automatically (whether you operate it on paper or software tool aided).
> Within this audit trail you will find your metrics (on how many times a
> component has been round its action, review, change loop). How many
> issues were raised in the reviews against it. How many problem reports
> involved the component. How long it took to get an approved component
> to pass the review. When even the act of getting a good specification
> is under such a development regime, you can ensure it meets the 6 C's
> criteria for a good requirements specification.

I can tell you how and when an object is touched, what was done to it,
etc. What I *can't* tell is how much *effort* goes into
"creating/changing" it. (effort is measured in man-hours)

E.g., how long (hours of labor) did it take to create a spec? How long
(hours of labor) to create the hardware/software to reify that spec?

While you may have "checked out" an object at a particular time and
checked in the next version at some *later* time, the difference
(in - out) isn't truly representative of the time required to "do whatever
you did" to create that new version from its predecessor. It just acts as
an upper limit on the "time required".

Did you check it out, work on it (actively) for 10 minutes and then go on
vacation for 2 weeks before checking it back in on your return? Did you
check it out along with several other objects and work on other things
along with it before checking it back in? Did you try several different,
unsatisfactory variations of the revision and only check in the "final"
attempt?

I.e., I want to be able to put a number on the *effort* required (beyond
counting effective keystrokes). I can check out a version and spend a lot
of effort refactoring it into an *equivalent* version with very similar
metrics. How is that *effort* measured and accounted?
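[Editor's note: if one *had* to turn checkout/checkin timestamps into a
number, the usual compromise is to treat each interval as an upper bound
and cap it at a plausible maximum working session -- which, as Don notes,
still says nothing about vacations, multitasking, or discarded attempts.
A sketch of that heuristic; the cap value and all names here are
assumptions for illustration, not part of any real tool:

    #include <stddef.h>
    #include <time.h>

    #define MAX_SESSION (8 * 60 * 60)  /* assume no unbroken 8+ hour stint */

    struct interval { time_t out, in; };   /* checkout / checkin times */

    /* Sum the checkout-to-checkin deltas over n revisions, capping each
     * so a two-week vacation doesn't register as two weeks of effort.
     * The result is still only an upper bound on man-hours, never a
     * measurement of them. */
    double effort_upper_bound(const struct interval *iv, size_t n)
    {
        double total = 0.0;
        for (size_t i = 0; i < n; i++) {
            double dt = difftime(iv[i].in, iv[i].out);
            if (dt < 0.0)
                dt = 0.0;              /* guard against clock skew */
            total += (dt > MAX_SESSION) ? MAX_SESSION : dt;
        }
        return total / 3600.0;         /* report in hours */
    }
]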
Reply by ●March 8, 2015
Don Y <this@is.not.me.com> wrote:

(snip on coding metrics, and then on school metrics)

>> Then add "No Child Left Behind", a plan from a C student president to
>> make sure that all students are C students.
>
> *Any* and *every* plan to "measure" the education system (its
> components, its results, etc.) is a farce. Too many "special
> interests" -- teachers, administrators, parents, etc. And, the system
> is uncharacterized. Like trying to control a loop for which you have no
> idea of the extents of lag present, etc.

I don't think we will get completely away from testing, or for that
matter, grades, but yes, they are never perfect.

> So, instead, we wait until Johnny is responsible for administering
> that IV drug upon which your health/life depends. When he screws *that*
> up, we fire Johnny (let him move on to some other profession... like
> teaching!), give the patient (or the patient's family) our condolences,
> pay off some lawyers and lament how the system failed "Johnny" (with
> no mention of the *patient*!)

There was a case not so many years ago, where a nurse gave the wrong dose
of some medicine to a patient. It required a complicated calculation to
determine the right dose, and it seems that she got it wrong. She was
immediately fired, and not so many days later, committed suicide.

Now, certainly we expect nurses to always get it right, but on the other
hand, what did the hospital expect her to do? She had gone to school for
many years, and then had many years of experience as a nurse. Most
likely, no other hospital would hire her.

It seems reasonable to me that if something is that critical, two nurses
should do the computation and verify that they agree. (That doesn't
eliminate the problem, but maybe reduces it enough.)

To get back to coding, I hope that there are strict standards for those
writing control programs for nuclear (or any) power plants. Most likely,
as noted above, with more than one person involved.

> Or, enforcing *laws* on the streets.
>
> Charter schools! Really?? We all know how well business has addressed
> the needs of its "customers", historically. "Ah, but these are
> *schools*! SHIRLEY, they'll do better -- due to the moral imperative!"
> (just like pharmaceutical companies set their pricing and policies
> based on THAT morality)
>
>> In Washington state, all schools are now failing, according to NCLB,
>> as they are refusing to use student test results for teacher
>> evaluation.
>
> To be fair, how would *you* measure teachers? It would be like
> measuring *your* coding performance based on a starting point of some
> other developer(s)' code upon which you've built. WITHOUT even giving
> you the choice of whose codebase you will be supporting!

I believe that there is a system to measure teachers based on the change
in test scores. That is, from the end of the previous year (and teacher)
to the end of the current year. That should work, but has a lot of
statistical uncertainty.

As I understand it, some in Washington now have the principal make
decisions based on all data, including test scores, but not with a fixed
proportion. It seems that isn't good enough for NCLB.

> (No, you can't go back and rewrite it all; there are only 180 days in
> the school year and you have to have made *progress* in that time!)
>
>> The result is that pretty much all schools have to send a letter to
>> parents indicating that the school is failing.

(snip)

> Building on Frank's comment: we all (think!) we know a good/bad teacher
> when we see him/her. But, the verdict is never "in" until long after
> the student has moved beyond. And, the teacher can never truly be
> isolated and identified as the *cause* of the student's
> success/failure.

Well, yes, but when a teacher has had bad reports for 10 years or so, and
nothing changes, then parents get mad. But then the teacher has tenure
and can't be fired.

> I.e., don't measure students to reward/punish the students *or* the
> teachers (or the System). Instead, use the measurements as tools to
> evaluate areas that may need special attention. Or, to gauge the
> benefits of certain "investments".
>
> (Personally, I don't understand the big hullabaloo over testing. I can
> remember taking "big tests" throughout my primary school education.
> And, I'm sure there was *some* effort to guide my studies in such a way
> that I would fare well enough on those. Without drawing attention to
> the fact that this is what was actually being done!)

Well, I remember tests maybe once every four years. It seems that now
they have two or three tests a year.

> Otherwise, you wait until it is too late and some "vested interest"
> casts judgement on whether or not Johnny is eligible for a particular
> vocation, continuing education or <whatever>. Do you then create laws
> to prevent medical schools, employers, etc. from discriminating based
> on intelligence or other measures of aptitude? So Johnny can be a
> doctor or rocket scientist even if he's not qualified??

-- glen







