"vax3900" <vax3900@yahoo.com> wrote in message
news:cf4afn$33t$1@charm.magnus.acs.ohio-state.edu...
> I am going to do some embedded work with 80's technology.
That depends on many factors:
- resources available
- development schedule
- the language(s) you intend to use
- the reliability of the targeted hardware
- whether this is an "original" work of your own
- the complexity of the application
- how well you understand the technology you are applying
- how robust the finished product must be
- your (personal) coding/debugging methodology
I've designed products with and without the "assistance" of emulators.
And, devices *claiming* to be emulators that were little more than
"rom emulators" with a CPU attached. For a variety of reasons, I
have stopped using them on projects that I control.
If you are strapped for (financial/space/etc.) resources, an emulator
might be ruled out. It takes time to select/buy/rent one; money
(justification, contract reviews, etc.) for the purchase; time for it
to be delivered; space to set it up; "issues" resolving compatibility
with your preferred operating environment; etc.
The emulator itself may have hardware and/or software/firmware
"issues". So, instead of spending your time debugging your software,
you are now debugging your *tools* (while *you* might get paid
for this, your *employer* doesn't -- and, chances are, will not
look favorably on your "excuse" that you have been busy troubleshooting
bugs in the tools instead of "doing your job".)
Source level debuggers often necessitate the use of a particular
language/compiler to realize their full benefit. What is the quality of
*that* tool? Is it produced by the same vendor as the emulator?
Or, do you have yet another party involved in this?
Customer: "The debugger is buggy"
Debugger vendor: "No, the compiler is producing crappy code"
Compiler vendor: "Not a chance -- the customer's hardware is flakey"
Is this an "original work" of your own? Or, are you stepping in and
maintaining/completing someone else's design? How well do you
understand the design, the technology and the behaviours of the
various algorithms involved? An emulator is a poor, expensive and
inefficient means of "figuring out what's going on". If you don't know
what to *expect* from the code and the mechanisms tying it together,
then an emulator is a poor substitute for a good grounding in the
design!
Similarly, how complex is the application? Can it be simulated at
DC? Or, does it involve lots of intricate timing relationships
from external stimuli, etc.? Can you realistically "control" those
stimuli so that they will be there when and as often as needed
for an emulator to catch their effects? Sure, an ICE may have a trace
buffer with conditional qualifiers and triggers, etc. But, will you be
able to EFFECTIVELY use them to capture "your experiment"?
Will the trace buffer be deep enough (e.g., if you are writing
in C++, you might exhaust the trace buffer just instantiating a
"temporary" of a particular object -- yet, how can you get finer
grained control over trace buffer triggers when your *source* hides
all these mechanics inside a single statement or *expression*?)
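For illustration, consider this contrived C++ fragment (std::string
just stands in for any class with a non-trivial constructor and
destructor; the function is invented for the example):

    #include <iostream>
    #include <string>

    std::string greet(const std::string &who)
    {
        // ONE source statement, yet operator+ runs twice, and
        // each call constructs (and later destroys) a temporary
        // string: allocation, copying, deallocation... hundreds
        // of bus cycles, with no finer-grained source line on
        // which to hang a trace trigger.
        return "Hello, " + who + "!";
    }

    int main()
    {
        std::cout << greet("world") << '\n';
        return 0;
    }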
Are the tools for browsing the trace buffer adequate? Or, does
it just reduce to a bunch of "bus cycles" ("gee, I wonder what
the code is doing here...")
Some would argue that hardware of dubious quality merits the
use of an emulator "to track down hardware problems". I'm no
longer of that mindset; if the hardware is flakey, get someone to
redesign it. If your boss/client will tolerate flakey hardware,
then why worry about producing "quality code"? (cynicism)
Unless you are *really* working on bleeding edge technology,
you shouldn't have to put up with flakey hardware. If you fall
into the trap of "proving you found a hardware bug" (with the
use of your emulator), you will be stuck doing this for the life
of the project. If you suspect a bug, write a short, concise test
case that exercises the bug. When it doesn't work as expected,
hand it to the hardware designer and ask him/her to find the bug in
your *code* (if it's OK for you to be coerced into troubleshooting
hardware, then it is equally acceptable for the hardware personnel
to be coerced into troubleshooting your software -- to prove that
it is NOT their hardware that is at fault).
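By "short, concise test case" I mean something on this order (a
contrived sketch: the 0x8000 address is invented, and it assumes the
suspect register reads back what is written to it; point it at *your*
hardware):

    // walking-ones write/read-back on a suspect register;
    // the address below is made up for the example
    #define REG (*(volatile unsigned char *)0x8000)

    // returns the first failing bit pattern, 0 if all
    // eight bits check out
    unsigned char reg_test(void)
    {
        unsigned char bit;
        for (bit = 1; bit != 0; bit <<= 1) {
            REG = bit;
            if (REG != bit)
                return bit;
        }
        return 0;
    }

Ten minutes to write, and it leaves the hardware folks nothing to
argue about.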
How robust must the finished product be? Sure, everyone *claims* they
produce "quality products" but, realistically, we *know* that's not the
case.
How insistent is "Management" that your product *truly* be "of the
finest quality" -- vs. "that's good enough, ship it"? *Will* they shut down
the line to track down some elusive bug? Or, will they rationalize that
it isn't THAT serious (and pay "lip service" to any customer that *might*
complain about it later)? Will they gloss over bugs that aren't "hard
and fast" (reproducible) and write them off as "a fluke"? Or, does every
observed bug *demand* resolution?
But, most important in your emulator choice is your own personal
coding/debugging style. If you are the type that *designs* your
application and identifies the likely failure modes and addresses
them ahead of time, you probably won't gain much from the use
of an ICE (all else being equal). Chances are, you can design algorithms
in such a way that they can be driven by test cases and observed
at DC using a desktop debugger/simulator. Here, you are simply
trying to *verify* proper operation of your algorithms. Drive
them with "unexpected" inputs to verify that they handle them
predictably, etc. The final application is then essentially created by
assembling a group of "known good" modules that have already
been bench-tested.
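E.g., a module like this contrived debounce routine (the function and
its test vectors are invented for illustration) takes its samples as
an argument instead of reading a port, so it can be exercised
completely on the desktop:

    #include <cassert>

    // returns 1 once the input has been stable high for
    // 'n' consecutive samples
    int debounced(const int *samples, int count, int n)
    {
        int run = 0;
        for (int i = 0; i < count; ++i) {
            run = samples[i] ? run + 1 : 0;
            if (run >= n)
                return 1;
        }
        return 0;
    }

    int main()
    {
        const int noisy[]  = {1, 0, 1, 0, 1, 1, 0};  // bouncing contact
        const int steady[] = {0, 1, 1, 1, 1};        // clean closure
        assert(!debounced(noisy,  7, 3));  // never 3 stable samples
        assert( debounced(steady, 5, 3));  // stable by sample 4
        return 0;
    }

The same source is then compiled into the target build, already
*known* to behave.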
If, on the other extreme, you sit down and start writing code on
Day 1 without a clear and formal notion of where that code is
*going*, then you'll probably LOVE having an emulator. It
will let you "tinker". When you happen to catch something
behaving in a way you hadn't anticipated, you'll (possibly) be
able to walk through the code and see where it is defying you.
This approach seems to work well in environments where
employers want to "see" progress. As long as you are
putting bugs behind you, the *perception* is that you are
getting closer to your (*their*!) goal. Since few firms actually
employ any metrics on the development process, this ends up
being a crap shoot; *hopefully*, the number of bugs decreases.
<shrug> Horses for courses. Figure out where your application
environment sits in this continuum and make your decision based
on the issues *you* observe firsthand.
Sorry not to have a simple answer. But, hopefully, I've raised enough
of the issues here to get you to think more honestly about the pros and
cons of the choice.
--don
N.B. Incoming mail is unconditionally and silently discarded.