
Software Metrics (cat flame > /dev/null)

Started by Don Y July 11, 2011
On 7/14/2011 18:54, Tim Wescott wrote:
> On 07/14/2011 02:28 PM, Cesar Rabak wrote:
>> On 13/7/2011 15:02, Don Y wrote:
>> [snipped]
>>
>>> That's exactly the problem I'm addressing. The Developers aren't the
>>> ones asking for metrics. :> My claim is the "higher ups" (asking
>>> for the metrics) need to better understand what they want and why they
>>> want it. Else it is just "bean counting", literally... "beans" having
>>> no bearing on anything else in the organization or process!
>>
>> This is a symptom of a more fundamental problem: because in our
>> profession we belittle use of metrics, we don't have the training to
>> provide them from time zero, when they're mentioned and asked for by
>> "higher ups".
>
> I have no problem with metrics _in principle_.
Good to see it in writing!
> _In fact_, however, I have not seen them applied well.
This comment can easily be expanded to apply to every other measure in
business. It is, of course, worrisome that your testimony makes me infer
you have _never_ seen them applied well.
> The best
> 'software metrics' that I have ever seen applied weren't
> software-specific at all -- it was just good old Gantt chart "percent of
> task done", rolled up to a project completion metric.
These are project management metrics, which also have their place and value.
> But the metrics that bean counters want are things like lines of code
> written, adherence to complexity metrics, etc. In other words, they
> either want concrete 'measurable' signs of progress (without being
> willing to believe project schedules), which rushes the development and
> forces my team to turn out crap, or they want metrics that are easily
> gamed (and the people who are actually in charge don't want to hear
> about their metrics anyway).
This comment could equally be applied to any other measurement in
business in general, so I would say it does not get us any further in
the maturity of the organization. I think everybody can remember cases
of number-cooking in accounting that turned into scandals as big as
Enron, etc.

If the teams feel 'rushed', it is either because they don't have the
intellectual background to argue robustly that the pace asked of them is
unattainable, or because the 'work done' metric takes only one side of
the equation and doesn't consider the others already mentioned elsewhere
in the thread: maintainability attributes, testing results, quality, etc.
>> People who work with much less predictable environments (say
>> salespeople) are able to work with metrics scrutinizing their
>> performance and do not complain one percent as much as SW engineers
>> complain about a metrics program for their roles.
>
> Sales guys have much shorter time horizons than most embedded software
> engineers, much clearer goals, clearly defined measures of success ($$,
> either net or gross), and the advantage that you don't get to be a CEO
> unless you're at least something of a sales guy, and therefore understand
> the turf.
Embedded SW engineers should be able to understand the problems of their
profession and set the correct goals in the correct time frame. The
measures of success are to be settled between the engineers and the
client, be it a boss or a contractor.
>> 'Beans' are the essence of the organization and the only way to choose
>> between one or another process is by counting them!! Any other way of
>> thinking is to be left to hobbyists or amateur work.
>
> Yet the beans that are counted are often misleading, illusory, or flat
> out delusional. Which would you rather have: a product that costs $1M to
> develop, breaks in the field, alienates customers, and leads to lost
> sales for years, or a product that costs $5M to develop, works correctly
> and well straight out of the chute, and saves your marketing budget
> because your advertising becomes word-of-mouth?
If you don't have clear ways to demonstrate the latter, the risk of
spending 400% more on the project makes it very hard to get approved.
We have to break the vicious circle of delusional measures and offer
good ones that make sense in both the business and technical realms.
> Lines-o-code (or circuit boards laid out), prototype deadlines that
> allow the discarding of 'cumbersome' quality controls, preproduction goals
> of being just in time for the trade show with no funding for actually
> making a product manufacturable -- these are the sorts of "metrics" that
> I've had the opportunity to work to. And they don't work!!
Yes, of course, see my comments above.
> So, yea -- I'm kinda anti-metric in practice.
This doesn't solve any problem... we have to start facing it and educate
our clients.

--
Cesar Rabak
GNU/Linux User 52247.
Get counted: http://counter.li.org/
Hi Cesar,

On 7/14/2011 2:50 PM, Cesar Rabak wrote:

>> How *early* in the planning/exploratory phase (for those future
>> projects) do you draw on those numbers?
>
> Depending on the sophistication of your metrics program, you can draw
> on them as soon as the opportunity has been detected.
I think that relies on some critical assumptions that, IME, are not true
for many organizations. Namely, that the organization has the resources
and *time* to spend on thorough evaluations of potential projects
*before* undertaking them.

It takes a *lot* of effort to come up with anything more than a
back-of-the-napkin sketch of what a product might entail. At the very
least, you need to flesh out a "product specification" (which can lack
much detail but must cover all the "essentials") that "someone" can
explore in greater depth to make an initial estimate of the "interior
requirements" thereof.

I've worked for very *few* organizations that have the resources to
spend on this sort of up-front effort. Many are "barely staffed" (a step
above "under-staffed") and working in fast-moving markets where you
can't invest calendar months *thinking* about whether or not to pursue a
project. Indeed, some owe their existence to "lucky gambles" (intuiting
The Right Projects to pursue) and often can't rest on their laurels
"milking" an old idea for their long-term survival -- their past
successes can (or will) be too easily cloned and leave them as
second-rate competitors in their *own* market (!)

Sure, Apple, MS, etc. can afford to have folks sitting around *thinking*
about the next product to push into the pipeline *while* the current
product is working its way through production. But, most (?) firms don't
have that luxury. Everyone is either working on a *current* product in
preparation for release, a newly *released* product or maintaining a
"mature" product.

[I'll admit I have deliberately gravitated to these types of firms as
the work -- and division of labor (or lack thereof) -- has tended to be
more interesting. Others may have different experiences]
>> Are these used to
>> prepare estimates/plans for how long a project "will take"?
>> Or, are they used to determine which (if any) projects are
>> practical/profitable enough to undertake?
>
> Both uses are commonplace in mature organizations.
>
> Notice that one of the ways of perceiving quality is the regularity
> with which a person/process performs a task. Gathering the metrics and
> understanding the structural reasons for their variation makes the firm
> raise its maturity.
I would consider that consistency, not quality. Someone can consistently produce "bad product". :<
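
To put numbers on that distinction, here is a minimal sketch (Python;
the per-release defect counts are invented, purely for the sake of
argument). Regularity is the *spread* of a metric, quality is its
*level*, and the two are independent:

    # Consistency is the spread (stdev); quality is the level (mean).
    # A team can be perfectly consistent -- at producing bad product.
    import statistics

    team_a = [40, 41, 39, 40, 40]   # defects/release: consistently *bad*
    team_b = [5, 12, 2, 9, 4]       # better on average, far less regular

    for name, defects in (("A", team_a), ("B", team_b)):
        print(name,
              "mean:", statistics.mean(defects),
              "stdev:", round(statistics.stdev(defects), 2))
    # Team A "performs with regularity" -- and ships six times the defects.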
>> I.e., do "you" sit down with potential projects and use your
>> metrics to rule out certain projects based on "costs" extrapolated
>> from these metrics (if so, are you using anything other than a
>> "gut feel" to gauge the complexity of the proposed projects?)?
>
> You certainly can come up with several things better than 'gut feel'.
See above.
>> Or, once "someone" has decided to undertake a project, do you
>> come up with a more detailed appraisal of what is involved *in*
>> that project and *then* use the metrics to determine what sort
>> of resources you need to make available to complete it in a particular
>> timeframe/budget/etc.?
>
> If '"someone" has decided to undertake a project' already, without
> taking into account rules to obviate risks by the use of the historic
> data and other estimation tools, then obviously, you could arrive at a
> moot point.
>
> OTOH, if the team who 'receives' the job to fulfill has the metrics,
> they would be on better ground to negotiate internally and avoid
> pressure on them.
<grin> I guess our experiences have been very different. I can recall
several instances where a boss was complaining because I was "behind
(his) schedule" -- despite my being able to show that I was within
man-days of my "initial estimate". Sure, I can say, "See, *I* was right
(with my initial estimate)!". But, if *he* has bid the job at a lower
cost or shorter timescale, then that "pressure" has to build
*somewhere*. Eventually, it manifests on The Bottom Line.

Let me be clear. I see nothing wrong with metrics. Whether they are used
for productivity, planning, quality, etc. Rather, where *experience* has
shown metrics to be A Bad Idea is the lack of maturity of the consumers
of those metrics.

There have been *billions* of humans born on this planet. Surely a
statistically large enough sample from which to draw some solid
statistics. So, we *know* gestation period is ~267 days (IIRC). But, if
you had to "bet your life" (livelihood) on this figure, you'd pad it to
account for the *expected* variation (~260 - ~290). Yet, even *that*
isn't a sure thing as a child could be born prematurely, etc.

My point is, this is a well documented process *governed* by biological
"laws". Yet, you still can't "bet your life" on it with 100.00%
certainty. How wide a range of values would you be comfortable with if
you were *just* "betting your livelihood" on the outcome? :>

Shirley, any type of new product design is *less* well constrained than
this. Yet, folks blissfully prepare charts with milestones laid out AS
IF they were magically destined to occur at these points. And, *fret*
when "reality" fails to coincide with "fantasy".

I didn't want this discussion to degenerate into other "non-metric"
related issues -- as I've done. :< Rather, I want to point out what I
suspect most folks would acknowledge from their own personal experiences
-- most "planning" (even *with* "good data") ends up being an exercise
in "wishful thinking" and, a few ohnoseconds after the planning has been
"finalized", all the caveats that were taken into consideration *during*
the planning ("*If* we can get X on day Y... AND the algorithms we have
designed actually *work*...") are gleefully forgotten.

So, instead of a productive post-mortem on the planning *process* itself
(i.e., what *were* the assumptions that were made? why were they faulty?
were we overly optimistic or just naive? etc.) the "blame" is placed on
"bad performance", "bad luck", "bad metrics", etc.

[this is not unique to our industry. It is sometimes fun to watch other
folks going through the same contortions in other industries... and
learning just as LITTLE about their failures]
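
To make the "pad for expected variation" argument concrete, here is a
minimal sketch (Python; the historical man-day figures are invented and
padded_estimate is a hypothetical helper): quote the estimate as a
*range* derived from the spread of comparable past actuals, not as a
single magical milestone date.

    # Quote a range, the way ~267 days becomes ~260-290 once you must
    # "bet your livelihood" on it. History figures are made up.
    import statistics

    def padded_estimate(history, k=2.0):
        """Return (low, mid, high): mean -/+ k standard deviations."""
        mid = statistics.mean(history)
        pad = k * statistics.stdev(history)
        return (mid - pad, mid, mid + pad)

    actual_man_days = [18, 22, 25, 19, 30, 24, 21]   # "similar" past tasks
    low, mid, high = padded_estimate(actual_man_days)
    print(f"plan on {low:.0f}..{high:.0f} man-days, not a magical {mid:.0f}")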
Cesar Rabak wrote:
[ ... ]
> If the teams feel 'rushed', it is either because they don't have the
> intellectual background to argue robustly that the pace asked of them
> is unattainable, or because the 'work done' metric takes only one side
> of the equation and doesn't consider the others already mentioned
> elsewhere in the thread: maintainability attributes, testing results,
> quality, etc.
It may not be intellectual background. Alpha-dominance has a huge amount
to do with management.

Mel.
Hi Tim,

On 7/14/2011 2:54 PM, Tim Wescott wrote:

>>> That's exactly the problem I'm addressing. The Developers aren't the
>>> ones asking for metrics. :> My claim is the "higher ups" (asking
>>> for the metrics) need to better understand what they want and why they
>>> want it. Else it is just "bean counting", literally... "beans" having
>>> no bearing on anything else in the organization or process!
>>
>> This is a symptom of a more fundamental problem: because in our
>> profession we belittle use of metrics, we don't have the training to
>> provide them from time zero, when they're mentioned and asked for by
>> "higher ups".
>
> I have no problem with metrics _in principle_.
Understood.
> _In fact_, however, I have not seen them applied well. The best
> 'software metrics' that I have ever seen applied weren't
> software-specific at all -- it was just good old Gantt chart "percent of
> task done", rolled up to a project completion metric.
I'm talking more about things like measures of code complexity, quality, etc. "Scheduling" has too many other issues that come into play.
> But the metrics that bean counters want are things like lines of code
> written, adherence to complexity metrics, etc. In other words, they
> either want concrete 'measurable' signs of progress (without being
> willing to believe project schedules), which rushes the development and
> forces my team to turn out crap, or they want metrics that are easily
> gamed (and the people who are actually in charge don't want to hear
> about their metrics anyway).
Yes. Though I call this a "consumer" problem. Educate (or replace) the people using the data. I contend that this is "easier" to do than fabricating the data itself out of "nothingness".
>> People who work with much less predictable environments (say
>> salespeople) are able to work with metrics scrutinizing their
>> performance and do not complain one percent as much as SW engineers
>> complain about a metrics program for their roles.
>
> Sales guys have much shorter time horizons than most embedded software
> engineers, much clearer goals, clearly defined measures of success ($$,
> either net or gross), and the advantage that you don't get to be a CEO
> unless you're at least something of a sales guy, and therefore understand
> the turf.
>
>> 'Beans' are the essence of the organization and the only way to choose
>> between one or another process is by counting them!! Any other way of
>> thinking is to be left to hobbyists or amateur work.
>
> Yet the beans that are counted are often misleading, illusory, or flat
> out delusional. Which would you rather have: a product that costs $1M to
> develop, breaks in the field, alienates customers, and leads to lost
> sales for years,
Or, causes your firm to simply cease to exist!
> or a product that costs $5M to develop, works correctly
> and well straight out of the chute, and saves your marketing budget
> because your advertising becomes word-of-mouth?
Practice seems to indicate the former to be the track most often
followed. OTOH, you might not *have* the $5M to throw at the "right"
solution (in which case, you're in the wrong *business*).

The most common delusion that I have encountered is the "We don't have
time to do it right -- but, we'll have time to do it over!" mentality.
This seems to be the tacit admission that the project should *not* be
undertaken, "but we really *want* to undertake it!". Consider:

- if you don't have the time/money to do it right, the product you are
  likely to come up with will probably be substandard and not fare well
  in the market (you will then blame something *else* for the monies and
  opportunities that were diverted to this failed project instead of
  putting the blame where it really belongs)

- if your product *doesn't* fail miserably, you will *still* need to
  spend those resources (and *more*) trying to finish/fix it to be the
  way it *should* (ideally) have been. So, your total investment will be
  increased *and* you will have exposed a product idea to your
  competitors who *may* have the resources to Do It Right and steal
  market from your INFERIOR product.

- if your product is wildly successful (sales quantities), you won't
  have the *time* to spend fixing it. You'll be struggling to ramp up
  production and deal with all the blemishes that you glossed over
  previously. Again, an opportunity for a competitor to come in with a
  (slightly?) better product -- but reliable AVAILABILITY -- and steal
  your thunder.

In each case, you have diverted your resources and attention from some
OTHER project that could have been A Sure Thing -- fitting your
resources and capabilities better. I.e., the only winning scenario here
is to hope the product *fails* and you just swallow your losses up front.
> Lines-o-code (or circuit boards laid out), prototype deadlines that
> allow the discarding of 'cumbersome' quality controls, preproduction goals
> of being just in time for the trade show with no funding for actually
> making a product manufacturable -- these are the sorts of "metrics" that
> I've had the opportunity to work to. And they don't work!!
>
> So, yea -- I'm kinda anti-metric in practice.
I actually find them useful "for myself" (note that I am self-employed).
So, they are really only *relative* metrics, in my case. Used to tell me
how a particular implementation compares to another/similar
implementation. They help me decide when I need to rearrange the
structure of a module ("refactor" being the term currently en vogue) to
better manage its complexity, etc.

My DTP tools give me metrics regarding the complexity of my writing. I
use these to tell me when my sentence and paragraph structures are
getting too complex for Joe Average to digest. (at which point, I insert
a few paragraphs of "See Dick run. See Jane run. Run Dick, run!" until
the "score" drops to something more acceptable :> )

But, I am neither qualified, motivated nor *educated* enough to be able
to compare my metrics to those of another (writer/developer) and come to
any *defensible* conclusions based solely on those numbers. Instead, I
compare to the only Standard that I have any intimate knowledge of --
myself. :-/
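
For the curious, the sort of "writing complexity" score such tools
compute is easy to sketch. Below is a rough Flesch Reading Ease
calculation (Python; the syllable counter is a deliberately crude
vowel-group approximation, for illustration only, not what any real DTP
package uses internally):

    # Flesch Reading Ease: higher = simpler text.
    import re

    def crude_syllables(word):
        # Count vowel groups as a rough syllable proxy -- good enough
        # to show the mechanics, not for production use.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_reading_ease(text):
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(crude_syllables(w) for w in words)
        n = max(1, len(words))
        return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

    score = flesch_reading_ease("See Dick run. See Jane run. Run Dick, run!")
    print(round(score, 1))   # ~119: simple text scores high; dense prose, far lower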
Hi Cesar,

[*much* elided as there is a lot of overlap with other posts]

On 7/15/2011 9:33 AM, Cesar Rabak wrote:

>> Yet the beans that are counted are often misleading, illusory, or flat
>> out delusional. Which would you rather have: a product that costs $1M to
>> develop, breaks in the field, alienates customers, and leads to lost
>> sales for years, or a product that costs $5M to develop, works correctly
>> and well straight out of the chute, and saves your marketing budget
>> because your advertising becomes word-of-mouth?
>
> If you don't have clear ways to demonstrate the latter, the risk of
> spending 400% more on the project makes it very hard to get approved.
>
> We have to break the vicious circle of delusional measures and offer
> good ones that make sense in both the business and technical realms.
I think folks in 9-to-5's have little recourse, here. They are at the
mercy of their managers (who are at the mercy of *their* managers,
etc.). It doesn't matter how accurate your assessment of a project is if
the higher-ups refuse to be bound by physical laws. :>

Even working freelance (with a lot of latitude as to what jobs I am
willing to undertake), you are still pressured by having to pay the
bills, etc. Clients don't like it when you say "No (it can't be done for
that money/time/size/etc.)". People *know* "where babies come from" --
so why are there *any* "unplanned pregnancies"? :> The "Just Say No"
type of thinking fails to acknowledge Reality.

Having said all that, there is nothing that prevents you AS AN
INDIVIDUAL from benefiting from tracking these sorts of metrics on your
own (there are tools to do so for most of them) and using them to better
understand *your* "process". Regardless of the Fantasy that you are
forced to work within ("We're going to have this baby in 3.5 months --
don't tell me it's going to take 9 months!"), Reality will, ultimately,
prevail.

[I have *no* idea why all my analogies in this thread revolve around
childbirth... perhaps the above "classic" comment has been underlying
many of my arguments as something easy to relate to]
Hi Cesar,

On 7/14/2011 2:11 PM, Cesar Rabak wrote:
> On 12/7/2011 01:45, Don Y wrote:
>> On the one hand, the "numbers" one comes up with are often only
>> relevant in that particular Industry and/or organization.
>
> Maybe yes, but then they'd have little business value as the use of
> metrics is intrinsically connected to the ability of _comparing_ them.
Yes, but you can compare to "yourself" (your other projects, etc.) just
as well. And, the comparison is probably more appropriate since the
types of products will tend to be similar (you won't be comparing a GUI
design to an HRT control system), the staff similar (you won't be
comparing experienced developers in a high budget shop to "college
grads" at a small startup) and your familiarity with the "other side" of
the comparison will be more valuable (you won't be comparing yourself to
some random project undertaken at some obscure IBM division in the
1980's).

Metrics distill too much out of an experience (intentionally). Some
familiarity with all of the things being compared helps put the numbers
back in perspective.

E.g., my first commercial (software driven) product I could probably
recreate, from scratch, in a few man-weeks *today*. Has *it* changed?
No. Has its complexity changed? No. But, the tools and techniques that I
would apply *today* (even if forced to use identical hardware) would
make it an entirely different experience.
>> I.e., even if you settle on a *particular* ("standardized") metric,
>> comparing observations of that metric in a desktop application
>> vs. an embedded vs. a real-time, etc. application is essentially
>> meaningless. While the metric might be "standardized", the
>> problem to which it is *applied* renders it apples-and-oranges.
>
> Maybe not, if you use metrics in the right 'cut' of the technology and
> work them towards the business objectives. The complexity measures of
> algorithms would apply equally well in any of the above cited realms.
> If "only kept in a three-ring binder" then all the effort and resources
> to gather this data is obviously wasted; on the other hand, if used to
> ascertain adequate coverage of tests, then they start to make business
> sense (at least for me ;-) ).
My point was that the types of applications stress different metrics in
different ways. And, that those factors might not be reflected in the
metrics -- or, not *accurately*/proportionately reflected!

E.g., you can write a graphic application that may be thousands of lines
of code. It might have very high complexity measures. It *looks* (from
the standpoint of a set of software metrics) to be much more complex
than, for example, a PID loop. OTOH, in terms of *real* complexity, the
PID loop might easily exceed that of the bulky graphic application
because so much of its complexity is NOT manifest in attributes that can
easily be *counted* (semicolons, operators, etc.).
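
A minimal sketch of what "countable" complexity amounts to (Python, run
over a hypothetical fragment of C): tools in this family tally tokens --
semicolons, decision keywords, operators -- so a dense PID update scores
almost nothing, whatever its real subtlety.

    # Tally the *countable* attributes a naive metrics tool sees.
    # The C fragment below is a hypothetical PID update.
    import re

    def countable_complexity(c_source):
        return {
            "semicolons": c_source.count(";"),
            "decisions":  len(re.findall(r"\b(?:if|for|while|case)\b", c_source)),
            "operators":  len(re.findall(r"[+\-*/%<>=!&|]+", c_source)),
        }

    pid = ("err = sp - pv; acc += ki * err; "
           "out = kp * err + acc + kd * (err - prev); prev = err;")
    print(countable_complexity(pid))
    # Tiny numbers -- yet tuning this loop can dwarf a 1000-line GUI in effort.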
>> On the other hand, without *some* numerical "score", there is no
>> way for an organization to evaluate its own progress. How do you
>> know if quality is improving? Or productivity? etc (depending on
>> what your actual goal happens to be).
>
> There is a phrase (incorrectly attributed to Deming) from P. Drucker:
> "If you can't measure it, you can't manage it."
Agreed. Though I don't aspire to *manage* it as much as *understand* it
(I consider the former to be a separate issue *dependent* on the latter).
>> It's hard to draw a parallel to any other aspect of a business.
>
> In fact, no! This is the first sin in our profession: to believe in
> this fallacy...
I disagree. It is an intangible. Most businesses track *tangibles*. You
can only measure (resulting) software aspects indirectly... how many
hours to develop, how many hours to maintain SO FAR, how many dollars
spent settling lawsuits, etc. And, you never have a "final figure" to
point to. Have *all* the bugs been uncovered? Or, will we see a whole
sh*tload of new bugs pop up in 2038? :>

It's far too easy to enter uncharted water with a software design. Too
many ways to arrange lines of code to come up with different
products/results. By contrast, there are relatively few ways that a gas
pedal can be installed on a Toyota -- correctly and incorrectly. And,
you can easily inspect every instance and know how much it will cost to
fix each of them (worst case: replace the entire car. What's the worst
case cost of fixing a bug on a Mars rover? :> )
>> E.g., imagine if *your* accounting department tracked everything
>> in terms of dollars... and another accounting department tracked
>> everything in terms of loopholos. I.e., comparing between
>> departments is meaningless (since loopholos and dollars are
>> orthogonal measurement units) -- yet, comparing current to previous
>> within a department *does* have value!
>
> If loopholos cannot (even if approximately) be converted into dollars,
> then it would have no business value and the gathering of this
> information should be dropped from the organization. It would have the
> beneficial effect to save dollars!
No! If loopholos can be compared to other loopholos, your metric still
has value!

I've worked in several industries that had wacky metrics to track things
that were important to them. E.g., one used "buckets of alumina grit"
(what's a "bucket"? what size grit? etc.) poured over the product to
*abrade* the appearance (i.e., testing the "finish" on the product). If
the number of buckets went down, they quickly stopped the manufacturing
process to identify what was going wrong... "It's always been 7 buckets!
Why is it now suddenly *6* buckets??" How much worse was '6' than '7'?
<shrug>

If your LoC/day figures start to change, you have to wonder if something
in your process has changed (maybe too many meetings?) or if there is
something inherently different about this *project* that bears closer
examination. I.e., if one metric has changed, there is a chance that
*others* may eventually also change (e.g., what if your bugs/day figure
changes and you need to double your test/certification time?)
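
A sketch of that "relative to its own history" alarm (Python; both
histories below are invented): nobody outside the organization needs to
know what a "bucket" means for the drift check to be useful.

    # Flag a metric that drifts from its own history -- no need to know
    # what a "bucket" means to anyone else.
    import statistics

    def drifted(history, latest, k=2.0):
        """True if `latest` sits more than k sigma from the historical mean."""
        return abs(latest - statistics.mean(history)) > k * statistics.stdev(history)

    buckets = [7, 7, 7, 7, 7, 7]          # "It's always been 7 buckets!"
    print(drifted(buckets, 6))            # True: stop the line, find out why

    loc_per_day = [38, 45, 41, 52, 44]    # made-up productivity figures
    print(drifted(loc_per_day, 20))       # True: too many meetings?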
>> IMO, the actual metric(s) chosen have to be *easy* to measure
>> unambiguously (automated), not easily "subverted" *and* only
>
> All good so far.
>
>> used for relative comparisons within an organization/individual
>
> Bad, bad... using metrics which can only be used within an organization
> hinders the comparison much needed to do business in the open world:
>
> Buyer: "We would like to buy some lumber off you folks. How much do you
> charge per ton?"
> Seller: "No, no, no, sir! Here in this mill we use an internal metric
> we call knots of wood. How many knots are you interested in?"
But we don't "sell software". We sell *products*. The consumer cares
little about how many LoC/day our developers achieved. Nor how complex
their code is. What they care about is cost and functionality. They
might not even care about the number of (known + unknown) REMAINING BUGS
in the product (i.e., if a bug never manifests for them, what do they
care? How many folks born in 1900 worried about the Y2K bug(s)? :> )

I.e., if you priced your product in frodbelgs while everyone else used
dollars, customers would probably be distressed because they couldn't
gauge the relative cost (to them) of your frodbelgs.
>> (i.e., a person writing device drivers would exhibit different
>> metrics than a person working on GUI's -- even within the same
>> Industry/Organization)
>
> The _values_ of the metrics obviously yes, but not their _nature_ --
> a mistake clearly made by a lot of people without enough experience
> in metrics.
Don Y wrote:

[%X]

> So, instead of a productive post-mortem on the planning *process*
> itself (i.e., what *were* the assumptions that were made? why were
> they faulty? were we overly optimistic or just naive? etc.) the
> "blame" is placed on "bad performance", "bad luck", "bad metrics",
> etc.
>
> [this is not unique to our industry. It is sometimes fun to watch
> other folks going through the same contortions in other industries...
> and learning just as LITTLE about their failures]
I think, Don, you have come to the crux of the matter. The fact is that
not many industries do a post-mortem on the development they have just
completed. If they did, they would be better educated and informed about
the effectiveness of their planning/estimation or development
assumptions.

--
********************************************************************
Paul E. Bennett...............<email://Paul_E.Bennett@topmail.co.uk>
Forth based HIDECS Consultancy
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-510979
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
Dave Nadler wrote:

> On Wednesday, July 13, 2011 5:49:28 PM UTC-4, Don Y wrote:
>> Hi Dave,
>>
>> On 7/13/2011 6:29 AM, Dave Nadler wrote:
>>> The book "Making Software" does a great job
>>> (if a bit voluminous and hard to read) of
>>
>> <frown> Meaning I can't just buy a couple of copies
>> and hand them to The Powers That Be :< Reading
>> something "for themselves" somehow seems to be more
>> credible than hearing someone else *distill* that
>> same information for them...
>
> Yes, but perhaps you can numb them into retreat.
> "Read this book and you'll understand why this
> won't work, and why it may make you look foolish..."
>
> Anyway, let us know what you think of the book!
> Best Regards, Dave
I think it might be better for them to read "Better Embedded System
Software" by Phil Koopman. Highly recommended for every developer's desk
and all management conference tables (open at all chapters
simultaneously by preference).

--
********************************************************************
Paul E. Bennett...............<email://Paul_E.Bennett@topmail.co.uk>
Forth based HIDECS Consultancy
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-510979
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
Many decades ago I worked for a well-known company
that made testers. They had a nice bonus for the
engineer that designed the board with the fewest
field failures. The engineers regularly fought hard
to design a memory board, which was a shoo-in to
win the prize (compared to the tough analog front-
ends exposed to regular customer abuse).

They also didn't count labor hours in the metrics
for board cost. That led them to take out of
production a UART board using an *expensive*
crystal and reintroduce its predecessor, which
had hand-tweaked-and-soldered RC frequency
generation. They also discontinued a subsystem
using ribbon cables to reintroduce hand-soldered
cable bundles because it was *clearly* less expensive.
BTW, labor was expensive in the USA even back then.

I could go on for hours...

Most metrics used don't reflect what most of
us would consider reality. Software metrics in
use today lead to outcomes just as silly as
those listed above...

Hope this was entertaining and maybe even helpful,
Best Regards, Dave
Hi Paul,

On 7/15/2011 12:30 PM, Paul E. Bennett wrote:
> Don Y wrote:
>
>> So, instead of a productive post-mortem on the planning *process*
>> itself (i.e., what *were* the assumptions that were made? why were
>> they faulty? were we overly optimistic or just naive? etc.) the
>> "blame" is placed on "bad performance", "bad luck", "bad metrics",
>> etc.
>>
>> [this is not unique to our industry. It is sometimes fun to watch
>> other folks going through the same contortions in other industries...
>> and learning just as LITTLE about their failures]
>
> I think, Don, you have come to the crux of the matter. The fact is that not
> many industries do a post-mortem on the development they have just
Or, if they do, it's distilled to a couple of numbers:

- estimated cost: X
- actual cost: Y

suitable for bean-counting (but little else).
> completed. If they did, they would be better educated and informed about
> the effectiveness of their planning/estimation or development assumptions.
I've gravitated towards email-only contact with clients (typically
out-of-state, etc.). This evolved from the unavoidable hassles of "phone
tag" (unlike the client -- who is salaried -- I don't get paid for the
time I spend trying to contact someone on the phone!) coupled with my
odd working hours.

Initially, it was a "win" because it cut down on a lot of silly "banter"
("How are the wife and kids? How's the weather?" etc.). But, it also
saved me the trouble of transcribing/summarizing phone conversations (so
I had a record of what was agreed to along with action items in each
conversation).

But, I discovered that it also had benefit because it forced folks to
*think* about what they wanted to ask instead of just "shooting from the
hip"... "musing". This seems to keep a project more focused than random
"digressions" that creep in informally during a conversation ("Hey, we
could add some blue and green lights and use it for a XMAS decoration,
too!") *And*, it helps document how the project's scope may have changed
along the way. Not that clients try to *deny* that there were changes
but, rather, they tend to forget how *many* of them creep in if you
don't exert some discipline *and* have a record of them!

One client made a casual statement once about my having found "some
bugs" in their product. As if it was an inconsequential thing (i.e.,
hardly worth many billable hours). Since I had kept all the email and
snail-mail that I generated during the project, I was able to point to a
stack of paper over an inch thick *documenting* those bugs. I.e., a
testament to the actual number of bugs as well as a graphic depiction of
the amount of labor involved (just in *documenting* them!)

Again, metrics are A Good Thing (whether they describe the product or
the process -- you have to have *some* quantifiable way of comparing X
to Y). What's lacking is an understanding of how to interpret those
metrics and *apply* them, productively. This brings me back to my
initial post: *what* to track and *why* to track it (acknowledging how
easy it is for "metrics for the sake of metrics" to lead one astray).
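
As a closing sketch of "what to track and why" (Python; every field name
and figure here is hypothetical): a post-mortem record that carries the
planning *assumptions* alongside the estimated/actual pair lets you
examine the *process*, not just the variance.

    # Keep the assumptions with the numbers, so a post-mortem can ask
    # *which* assumptions failed instead of blaming "bad luck".
    from dataclasses import dataclass, field

    @dataclass
    class PostMortem:
        project: str
        estimated_days: float
        actual_days: float
        assumptions: list = field(default_factory=list)  # (text, held?)

        def variance_pct(self):
            return 100.0 * (self.actual_days - self.estimated_days) / self.estimated_days

    pm = PostMortem("widget-fw", 90, 140, [
        ("vendor BSP works out of the box", False),
        ("prototype algorithm scales to production data", True),
    ])
    print(f"variance {pm.variance_pct():+.0f}% -- now ask why:")
    for text, held in pm.assumptions:
        print(("  OK  " if held else "  BAD ") + text)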