EmbeddedRelated.com
Forums

IAR or CrossWork

Started by alienmsp430 August 5, 2004
Tom, 

> At one point I had a 36-bit (plus sync bits) data word that 
> might or might not, depending on which direction the tape was 
> moving, need to be reversed end to end, bitwise.  Or 
> sometimes it was a 24-bit word (plus sync bits), depending on 
> what version of the hardware had recorded the tape.  I 
> couldn't revise the latter format because it had to be 
> compatible with an earlier hardwired-logic implementation.
> 
> This bit reversal was fairly easy to do in assembler using 
> the 6800 ROLA and RORB instructions on 16-bit chunks of the 
> data word.  I don't know how I would have done it in C.

unsigned long long
reverse(unsigned long long x, int nbits)
{
  unsigned long long y = 0;
  while (--nbits >= 0)
    {
      y <<= 1;
      if (x & 1)
        ++y;
      x >>= 1;
    }
  return y;
}

This will bit reverse bits 0 through 63 of any quantity from 0 to 63
bits in width.  It's portable C99.  Once written, no need to write
again, and is truly reusable.

Of course, some compilers provide the __reverse intrinsic, but that's an
extension you can't rely on.

The one thing that C is not good at is providing access to the carry
bit.

I'm not arguing that an organic optimizer is much better than any
programmed optimizer--it's a given, if there is sufficient time and
inclination for the human brain.  For some systems, the MSP430 and
assembler are a perfect fit.  For some application domains, they are
non-starters.  Who would seriously code an application for Windows
completely in x86 assembler nowadays?
-- Paul.



On Sun, 8 Aug 2004 00:50:22 +0100, Paul wrote:

>I'm not arguing that an organic optimizer is much better than any
>programmed optimizer--it's a given, if there is sufficient time and
>inclination for the human brain.

Actually, Paul, for smaller embedded systems (those Al and I seem to encounter)
I find that I can write better assembler than the compiler can generate, as a
matter of ho-hum routine and do so in about the same time it takes me to write
C.  And I've been using C regularly and full-time (on larger systems than the
MSP-430, of course) since 1978, when I first started using it for Unix v6
coding.  (I like C a lot, by the way.)  Most of the effort is in other aspects
of the project (both before coding and also after coding) and, even if the
assembler takes me 30% longer to code up, it still only adds negligible time to
the entire project.

Of course, it would be trivial to concoct snippets of C code that would take
lots more time in assembly, but we *are* talking about the entire application
here and not selected fragments.  For reasons I think are obvious, it is not
reasonable to consider only certain snippets that support a theory and discard
the evidence that contradicts.  As I like to say, "If you're willing to be
selective in the evidence you consider, you could reasonably conclude that the
earth is flat."

There are, of course, good reasons to use C.  There are also good reasons to use
assembler.  But I always plan on *much* larger application space when I must
choose C for the development tool and I always plan on excess CPU margins, as
well.  In the few cases where I've personally recoded the exact application into
both languages, this guideline has always borne true.  And not just by small
measures, I'll add.

Some of the reasons are because of the linker and the granularity of the library
code that is linked in.  Some are because of what works out in 20/20 hindsight
to be absolutely blind and stupid choices (from a human point of view) about how
the code is generated (although the C compilers are often excellent at certain
aspects of their job, they also always [almost always] have terrible blind spots
which more than make up for the difference.)

I have met exactly ONE compiler that produced code uniformly well and nearly as
well as I could routinely write it in assembler.  Never before or since then.
(I'm sure that if I were to use your compiler, Paul, I'd have met two.)

However, as I said, there are very good reasons to use C.  They just don't ever
(in my book) include "smaller code space" or "faster execution" or "more robust
code."  And only sometimes do they include "less development time" and even
then, not "that much less."

>For some systems, the MSP430 and
>assembler are a perfect fit.  For some application domains, they are
>non-starters.  Who would seriously code an application for Windows
>completely in x86 assembler nowadays?

Well, this isn't such a good point, in my mind.  I mean, seriously...

... are we talking about MSP-430 applications here?

... or systems with instruction re-order buffers, branch prediction, many
parallel functional units including at least two floating point units, bus
transaction caches, L1 and L2 caches, memory clocked on *both* edges, inbound
and outbound queues for PCI, AGP, and DRAM interfaces, etc?

I mean.... do you have any idea how many 100's of millions of transistors are
being thrown at Windows and the sheer size and time and numbers of people and
teams involved and the number of companies writing drivers, other software, etc?

How does bringing up Windows make a point for folks considering whether or not
to use assembler or C in writing applications for an MSP-430 based embedded
system, for gosh sake?

I'm still reeling from the very idea!

Jon

Jonathan Kirwan wrote:

> Of course, it would be trivial to concoct snippets of C code that would take
> lots more time in assembly, but we *are* talking about the entire application
> here and not selected fragments.  For reasons I think are obvious, it is not
> reasonable to consider only certain snippets that support a theory and discard
> the evidence that contradicts.  As I like to say, "If you're willing to be
> selective in the evidence you consider, you could reasonably conclude that the
> earth is flat."

That reminds me that sometimes what you're interested in is not really 
the actual truth, but just a good-enough approximation.

If you're navigating an airliner or defining routes for it to fly, you 
need to know that the earth is approximately a sphere.  But if you're 
designing an airport, laying out runways and access roads and terminal 
buildings and such, a flat-earth model will probably suffice.



--
                        -- Tom Digby
                        bubbles@bubb...
                        http://www.well.com/~bubbles/
                        http://www.well.com/~bubbles/BblB60DL.gif


Jon, 

> >I'm not arguing that an organic optimizer is much better than any 
> >programmed optimizer--it's a given, if there is sufficient time and 
> >inclination for the human brain.
> 
> Actually, Paul, for smaller embedded systems (those Al and I 
> seem to encounter) I find that I can write better assembler 
> than the compiler can generate, as a matter of ho-hum routine 
> and do so in about the same time it takes me to write C.

True.  You can manage your own calling conventions, register allocation,
and have a whole-program view of the world.  My organic optimizer is
much better than my programmed one, as the programmed one was given
birth by the organic one.  I'd pit my wetware against any binary machine
to produce better code, but in a much longer timeframe--a 3GHz Pentium
can analyse things much faster than I can sequentially, but I have
parallelism and insight on my side.

> >For some systems, the MSP430 and
> >assembler are a perfect fit.  For some application domains, they are 
> >non-starters.  Who would seriously code an application for Windows 
> >completely in x86 assembler nowadays?
> 
> Well, this isn't such a good point, in my mind.  I mean, seriously...
> 
> ... are we talking about MSP-430 applications here?

No, I merely point out that I would not automatically choose assembly
code as the tool of choice for some applications.

> ... or systems with instruction re-order buffers, branch 
> prediction, many parallel functional units including at least 
> two floating point units, bus transaction caches, L1 and L2 
> caches, memory clocked on *both* edges, inbound and outbound 
> queues for PCI, AGP, and DRAM interfaces, etc?
> 
> I mean.... do you have any idea how many 100's of millions of 
> transistors are being thrown at Windows and the sheer size 
> and time and numbers of people and teams involved and the 
> number of companies writing drivers, other software, etc?

The transistors are a function of a marketing strategy--In Intel's mind,
it must have the fastest processor for the architecture they invented,
period.  Even if they didn't they'd still continue to outsell other x86
processors 4:1 (last numbers indicate Intel ship 8 out of 10 x86
processors).  I don't think it's Microsoft's fault that Intel produces
faster processors--Microsoft see more processing power and then fit more
features into Office because they can.  Microsoft get more revenue from
their apps than they do the OS, so putting the blame on Windows for the
transistor budget of these processors isn't exactly fair.
 
> How does bringing up Windows make a point for folks 
> considering whether or not to use assembler or C in writing 
> applications for an MSP-430 based embedded system, for gosh sake?

I merely point out that for some developers, C is more natural than
assembly code.  If I needed to write code for a PowerPC-based embedded
system, I would not write in PowerPC assembly code because I don't know
it and don't want to learn it.

-- Paul.


Paul Curtis wrote:

> Tom, 
> 
> 
>>At one point I had a 36-bit (plus sync bits) data word that 
>>might or might not, depending on which direction the tape was 
>>moving, need to be reversed end to end, bitwise.  Or 
>>sometimes it was a 24-bit word (plus sync bits), depending on 
>>what version of the hardware had recorded the tape.  I 
>>couldn't revise the latter format because it had to be 
>>compatible with an earlier hardwired-logic implementation.
>>
>>This bit reversal was fairly easy to do in assembler using 
>>the 6800 ROLA and RORB instructions on 16-bit chunks of the 
>>data word.  I don't know how I would have done it in C.
> 
> 
> unsigned long long
> reverse(unsigned long long x, int nbits)
> {
>   unsigned long long y = 0;
>   while (--nbits >= 0)
>     {
>       y <<= 1;
>       if (x & 1)
>         ++y;
>       x >>= 1;
>     }
>   return y;
> }
> 
> This will bit reverse bits 0 through 63 of any quantity from 0 to 63
> bits in width.  It's portable C99.  Once written, no need to write
> again, and is truly reusable.
> 
> Of course, some compilers provide the __reverse intrinsic, but that's an
> extension you can't rely on.
> 
> The one thing that C is not good at is providing access to the carry
> bit.
> 
> I'm not arguing that an organic optimizer is much better than any
> programmed optimizer--it's a given, if there is sufficient time and
> inclination for the human brain.  For some systems, the MSP430 and
> assembler are a perfect fit.  For some application domains, they are
> non-starters.  Who would seriously code an application for Windows
> completely in x86 assembler nowadays?
> -- Paul.

The question should be "Who would seriously code an application for 
Windows?" ;@}

Al


Hey Jon, are you sure I didn't write this? ;@}

This expresses my views, as most people here would know. Few might know 
that I love C. But not for most of the work I do. I see it as a rapid 
prototyping tool, when testing out concepts on a PC, or the tool of 
choice for Windows apps that talk to my embedded designs. I think the 
ONLY issue with embedded systems' choice of language is the individual's 
skill set and experience. The simple fact is that most people coming out 
of uni these days are C or C++ oriented (by the way, that surely has to 
be C++; ).

Jonathan Kirwan wrote:

> On Sun, 8 Aug 2004 00:50:22 +0100, Paul wrote:
> 
> 
>>I'm not arguing that an organic optimizer is much better than any
>>programmed optimizer--it's a given, if there is sufficient time and
>>inclination for the human brain.
> 
> 
> Actually, Paul, for smaller embedded systems (those Al and I seem to encounter)
> I find that I can write better assembler than the compiler can generate, as a
> matter of ho-hum routine and do so in about the same time it takes me to write
> C.  And I've been using C regularly and full-time (on larger systems than the
> MSP-430, of course) since 1978, when I first started using it for Unix v6
> coding.  (I like C a lot, by the way.)  Most of the effort is in other aspects
> of the project (both before coding and also after coding) and, even if the
> assembler takes me 30% longer to code up, it still only adds negligible time to
> the entire project.

The point of my last, excessively long, post to Paul was to put the 
point that what might seem to be a small embedded system is 'small' in 
the sense of complexity or function. A Brake Test Meter written 
originally in IAR for Atmel ended up on a Mega128 because nothing else 
was large enough to fit the code, even then only a few K was left. A 
greatly enhanced feature set on Mk 2 used 4k of code and data in an 
MSP430. Which was the 'small' system? Now, had a C expert written Mk 1, I 
suspect it would have required around 14k in the MSP430 or AVR, but 
that, to me, was a trivial system; it had little to do: measure a dual 
accelerometer, measure a pressure sensor, record vehicle speed, run an 
RF link, run a thermal printer, and produce a pretty graph and table. I 
guess what I'm trying to say is that code size doesn't define system 
size to me. I view 'small' projects as ones which have little 
functionality to them. Most of what I do I consider medium projects: 
lots of functionality within a single system, or lots of identical 
single systems.

This wasn't meant to be a C vs Assembler war. That's simply a 
no-brainer. Some of us write good C, some of us write good assembler, 
some are good at both, some are pretty ordinary at either/both, but good 
enough for what they need to do, some of us are learners, and some of 
us, frankly, are terrible programmers, but, hey, there's plenty of room 
for us all, and, in the right environment, even terrible programmers can 
make a good living.

Al



Tom Digby wrote:

> Jonathan Kirwan wrote:
> 
> 
>>Of course, it would be trivial to concoct snippets of C code that would take
>>lots more time in assembly, but we *are* talking about the entire application
>>here and not selected fragments.  For reasons I think are obvious, it is not
>>reasonable to consider only certain snippets that support a theory and discard
>>the evidence that contradicts.  As I like to say, "If you're willing to be
>>selective in the evidence you consider, you could reasonably conclude that the
>>earth is flat."
> 
> 
> That reminds me that sometimes what you're interested in is not really 
> the actual truth, but just a good-enough approximation.
> 
> If you're navigating an airliner or defining routes for it to fly, you 
> need to know that the earth is approximately a sphere.  But if you're 
> designing an airport, laying out runways and access roads and terminal 
> buildings and such, a flat-earth model will probably suffice.
> 

Which begs that age-old question: at what point during take-off does the 
world actually become round?

Al


On Sun, 8 Aug 2004 13:33:16 +0100, Paul wrote:

>> >I'm not arguing that an organic optimizer is much better than any 
>> >programmed optimizer--it's a given, if there is sufficient time and 
>> >inclination for the human brain.
>> 
>> Actually, Paul, for smaller embedded systems (those Al and I 
>> seem to encounter) I find that I can write better assembler 
>> than the compiler can generate, as a matter of ho-hum routine 
>> and do so in about the same time it takes me to write C.
>
>True.  You can manage your own calling conventions, register allocation,
>and have a whole-program view of the world.

Well, that's all true.  I can even mix various calling methods and not even use
a consistent method.  I can count cycles where it's important and not count them
where it isn't.  But it's not just that, Paul.  I can incorporate a variety of
concepts (with downright ease) that arise from the CPU features themselves and
that C compiler writers will avoid almost like the plague.  A simple reading
will usually suggest a few unique semantic choices that are easy to "think in"
for a particular application.

>My organic optimizer is
>much better than my programmed one, as the programmed one was given
>birth by the organic one.

Well, that doesn't mean it can't do darned well, Paul.  As I said, I've actually
encountered a compiler (not C, but Pascal) that was incredible at optimization.
Time after time I found it to arrange things almost as I would.  Only once,
though.  But it tells me it *can* be done.

To be frank about it, it's my lay opinion that embedded compilers have a limited
amount of development resource (namely, just a few folks working on it, at best)
to implement optimization techniques and pretty much everything else, as well.
(An exception here might be GNU GCC.)  And it just can't help to have to write
code to support font colors, TrueType calculations for display, print preview
(yuck, that *can* be work), and a variety of "feature bullet points" that have
nothing whatever to do with making the compiler tool incorporate more modern
technology.

With modern ideas incorporated more fully, I actually believe it would be
possible to come much closer to approaching my own skilled results, because I've
seen what those ideas can actually do, when applied.  They are darned good.
Some don't apply to most embedded systems (optimizing for DRAM bank cycle times
or scheduling instructions earlier across code edges [execution branches], for
example), but many of the ideas do apply quite well.

In fact, I brought up the relatively simple and easily understood idea of
structure disaggregation to make this point, in a subtle way (well, it was
subtle before this whole thread grew and now I'm faced with saying something
more about it.)  It's such an easy concept for an assembly writer to apply, it's
dead simple to explain to application coders who don't know much about compiler
technology, and yet its full implementation in actual, real C compilers still
awaits.

Why?

There is no real excuse to my mind, as the benefits are quite substantial for
real code I've written a number of times.  Often, this alone accounts for a
factor of two or so in execution time (it turns out, by some coincidence, that
this problem crops up exactly in those routines where speed is important to me.)
And there are at least a dozen other ideas I could pop off the top of my head
that I'm pretty sure no embedded C compiler applies well, if at all, that are
easy to explain and understand, and would yield much better code in real
situations.

Because embedded C compiler writers are more busy writing fancy IDEs and
libraries and wizards and who knows what else.

I just think the embedded compilers could be SOOOOO much better.  I think I have
an idea just how smart you compiler writers really are (fantastic, as a guiding
rule, I think) and that I, as an embedded compiler user, shouldn't have to go
begging for something as relatively basic as structure disaggregation.  And
that's only one of so many things.  Yes, it's work to implement.  But I can
think of lots worse things to waste your time on (like playing nanny in hoisting
novice C compiler writers with wizard tools and pretty colors.)

Of course, that's merely my lay opinion.  So I'll defer to your superior,
professional knowledge on these points.

>I'd pit my wetware against any binary machine
>to produce better code, but in a much longer timeframe--a 3GHz Pentium
>can analyse things much faster than I can sequentially, but I have
>parallelism and insight on my side.

If.. Paul... If you implement the optimizations.

I actually think this is an excellent segue, here.  I use these fancy,
high-speed, modern, multi-100 million transistor systems to host my compiler
tools.  Many optimizations are very time consuming (and indeterminate) and would
greatly benefit from being hosted on these "super computers."

If the compiler implements them!!  But if it doesn't, it really doesn't matter,
does it?

I'd be glad to have a 3GHz CPU spinning its wheels on my C code, applying nearly
all of the modern optimizations according to my guidance.  So much possibility
opens up with 3GHz, it seems.  Yet, for all that power under my feet, these
embedded C compilers just cannot seem to achieve what, in looking at the
assembly, seems obvious to me.  So what real good is it if the compiler writers
won't actually take advantage of it??

>> >For some systems, the MSP430 and
>> >assembler are a perfect fit.  For some application domains, they are 
>> >non-starters.  Who would seriously code an application for Windows 
>> >completely in x86 assembler nowadays?
>> 
>> Well, this isn't such a good point, in my mind.  I mean, seriously...
>> 
>> ... are we talking about MSP-430 applications here?
>
>No, I merely point out that I would not automatically choose assembly
>code as the tool of choice for some applications.

Of course.  But, as I say to others sometimes, you "spoke in extremes."  By
this, I mean that when someone moves a little bit "this way" in speaking,
another person won't make a moderate counter-move but instead will immediately
take the argument to the furthest and most opposite extreme.

For example, when someone lays out a careful discussion and summarizes that
abortion should be one option among a range of viable options for a mother,
another person may counter, "Then you support murdering adults, too.  What's the
difference?"

This isn't taking the debate on its face, dealing with the details, and
countering them with thought and care.  It's just watching someone go slowly in
one particular direction and then arguing by bringing the whole discussion to
some extreme edge.  Instead of moderation, it's extremism, in a way.

Making the point for C in embedded applications quite simply isn't made well by
dragging in Windows.

So I imagine, anyway.

>> ... or systems with instruction re-order buffers, branch 
>> prediction, many parallel functional units including at least 
>> two floating point units, bus transaction caches, L1 and L2 
>> caches, memory clocked on *both* edges, inbound and outbound 
>> queues for PCI, AGP, and DRAM interfaces, etc?
>> 
>> I mean.... do you have any idea how many 100's of millions of 
>> transistors are being thrown at Windows and the sheer size 
>> and time and numbers of people and teams involved and the 
>> number of companies writing drivers, other software, etc?
>
>The transistors are a function of a marketing strategy--In Intel's mind,
>it must have the fastest processor for the architecture they invented,
>period.  Even if they didn't they'd still continue to outsell other x86
>processors 4:1 (last numbers indicate Intel ship 8 out of 10 x86
>processors).  I don't think it's Microsoft's fault that Intel produces
>faster processors--Microsoft see more processing power and then fit more
>features into Office because they can.  Microsoft get more revenue from
>their apps than they do the OS, so putting the blame on Windows for the
>transistor budget of these processors isn't exactly fair.

Different issue.  I wasn't blaming Microsoft for anything.  Just pointing out
that these systems are pretty much at the very opposite extreme -- in fact,
about as much an opposite extreme as one could even possibly imagine -- from
MSP-430 application spaces.  It was the style of argumentation I was taking to
task.

It's not useful to engage in a C vs assembly debate by hauling in modern PC
technology and operating environments that are the combined effort of thousands
of programmers over decades of time.

>> How does bringing up Windows make a point for folks 
>> considering whether or not to use assembler or C in writing 
>> applications for an MSP-430 based embedded system, for gosh sake?
>
>I merely point out that for some developers, C is more natural than
>assembly code.  If I needed to write code for a PowerPC-based embedded
>system, I would not write in PowerPC assembly code because I don't know
>it and don't want to learn it.

Well, I see this as a sudden retreat to a moderate, reasonable (if abstract)
stance.  And here, we will agree.  And your point about certain processors is
right, too.  There are some processors where you really want a good compiler.

Jon

Jonathan, as usual, you make good arguments, but you miss the point - the 
problem is not technical, but business. Sure, I know how to do structure 
disaggregation, and tons of other stuff, but at what cost? How many 
compilers do we have to sell to recoup that cost?

I would even be more blunt - there are 4 or 5 good compilers for MSP430 out 
there. How big do you think the market is? Do you think any of us is 
sipping Margaritas on a tropical island? A certain 3-letter company is the 
200-pound gorilla in the embedded compiler business. You know, the one 
that claims they generate the best code with the best optimizations etc. 
Have you checked their financial report lately? They are not rolling in 
dough either.

Writing optimizing compilers takes time, and worse, can introduce bugs and 
can make your programs more difficult to debug. It may even expose bugs in 
YOUR programs, but try to tell that to an engineer whose products have to 
ship yesterday....

Having said that, we are investing resources to do an optimizer. The proof 
is in the pudding and we will see how it goes. We aren't afraid to test 
uncharted territories - we had the first whole-program code compression 
engine out in the embedded compiler market in 1999, and we were one of the 
first with an easy-to-use Windows IDE back in the 90s. Look around: we are 
one of the few compiler companies that cover 6 or more CPUs from 4 different 
silicon vendors, and soon we will throw ARM into the mix.

Writing compilers has always been my passion. I have plenty of ideas; it 
would be a matter of time to put the plans in place.

At 12:27 PM 8/8/2004, Jonathan Kirwan wrote:
>...
>Because embedded C compiler writers are more busy writing fancy IDEs and
>libraries and wizards and who knows what else.
>
>I just think the embedded compilers could be SOOOOO much better.  I think 
>I have
>an idea just how smart you compiler writers really are (fantastic, as a 
>guiding
>rule, I think) and that I, as an embedded compiler user, shouldn't have to go
>begging for something as relatively basic as structure disaggregation.  And
>that's only one of so many things.  Yes, it's work to implement.  But I can
>think of lots worse things to waste your time on (like playing nanny in 
>hoisting
>novice C compiler writers with wizard tools and pretty colors.)
>
>Of course, that's merely my lay opinion.  So I'll defer to your superior,
>professional knowledge on these points.

// richard (This email is for mailing lists. To reach me directly, please 
use richard@rich...) 





On Sun, 08 Aug 2004 23:00:05 +0930, Al wrote:

>Hey Jon, are you sure I didn't write this? ;@}

hehe.  Sometimes, it's nice to sing a duet instead of solo, eh?

>This expresses my views, as most people here would know. Few might know 
>that I love C. But not for most of the work I do. I see it as a rapid 
>prototyping tool, when testing out concepts on a PC, or the tool of 
>choice for Windows apps that talk to my embedded designs. I think the 
>ONLY issue with embedded systems' choice of language is the individual's 
>skill set and experience.

It really seems like that.

There's a phrase I keep in mind, "To a man with a chainsaw, everything looks
like a tree."  This concept operates on several levels.  In one sense, it means
that when you are familiar with something, that's what you use.  In another
sense, it means that when you are ONLY familiar with a single tool, then you
don't reach for other tools because you either don't know about them or aren't
comfortable with them.

This applies to assembly writing.  (1) If you aren't fluent with it, then you
will quite simply fully subscribe to the idea that C compilers are "just as
good, if not better" tools and, of course, you will also choose them for every
application you face, regardless of its needs.  (2) If you are fluent with a
number of tools, then you can better select the appropriate tool for the job.

In short, a skilled professional programmer, like a skilled woodworker, should
be quite competent with a very wide variety of tools and able to select the
proper tool for the tasks at hand.  For those willing to be more limited, they
will have more limited options available and the resulting products will show
this fact, just as a limited woodworker's "roll top" desk won't be quite the
same as one from a more broadly skilled professional.  (Of course, both can
build one.)

My mission is to encourage programmers to be more fully competent, not to tell
folks to "use assembler" or "use C."  That choice should be made by the person
"on the ground," not by me.

>The simple fact is that most people coming out 
>of uni these days are C or C++ oriented (by the way that surely has to 
>be C++; ).

Yeah.  I've taught C and C++ as an undergrad professor, as well as computer
architecture, operating systems, and concurrent programming classes.

I remember one of my students (happened many times, but I remember the first
time better) coming to me about how "difficult" the classes were.  It turns out
that she had been trying to choose between _accounting_ and _programming_ as a
profession.  Took me back, for a second.

But then, I guess, programming has become a big tent these days.  Under it, you
have all manner of hopes and people.  For me, it's a love.  But I realize that
there are only a few people like me, anymore.  In my day, folks with a personal
love for physics, math, and electronics made up a much higher percentage.

But there are those seriously struggling to decide between corporate marketing
and computer programming and I understand the "one tool" mentality.  It's a job,
that's all.  Not a love.  For those, they will rightly focus on what skills the
programming marketplace is buying and learn that and only so much of it as they
will get paid well for.  They will have regular outdoor barbecues, pool parties,
and hot tub get-togethers, I suppose, and will clean behind their TV and mow
their lawns, too.

Oh, well.

>Jonathan Kirwan wrote:
>
>> On Sun, 8 Aug 2004 00:50:22 +0100, Paul wrote:
>> 
>> 
>>>I'm not arguing that an organic optimizer is much better than any
>>>programmed optimizer--it's a given, if there is sufficient time and
>>>inclination for the human brain.
>> 
>> 
>> Actually, Paul, for smaller embedded systems (those Al and I seem to encounter)
>> I find that I can write better assembler than the compiler can generate, as a
>> matter of ho-hum routine and do so in about the same time it takes me to write
>> C.  And I've been using C regularly and full-time (on larger systems than the
>> MSP-430, of course) since 1978, when I first started using it for Unix v6
>> coding.  (I like C a lot, by the way.)  Most of the effort is in other aspects
>> of the project (both before coding and also after coding) and, even if the
>> assembler takes me 30% longer to code up, it still only adds negligible time to
>> the entire project.
>
>The point of my last, excessively long, post to Paul was to put the 
>point that what might seem to be a small embedded system is 'small' in 
>the sense of complexity or function.

It was a brilliant example of communication, too, Al.  I ate it up all the way.
Much appreciated, too.  I think you made your point quite well.

>A Brake Test Meter written 
>originally in IAR for Atmel ended up on a Mega128 because nothing else 
>was large enough to fit the code, even then only a few K was left. A 
>greatly enhanced feature set on Mk 2 used 4k of code and data in an 
>MSP430. Which was the 'small' system?

Your points are clear to me and well made, Al.

>Now, had a C expert written Mk 1, I 
>suspect it would have required around 14k in the MSP430 or AVR, but 
>that, to me, was a trivial system; it had little to do: measure a dual 
>accelerometer, measure a pressure sensor, record vehicle speed, run an 
>RF link, run a thermal printer, and produce a pretty graph and table. I 
>guess what I'm trying to say is that code size doesn't define system 
>size to me.

Right.  Agreed.

>I view 'small' projects as ones which have little 
>functionality to them.

I think 'small' has many meanings.  I have a "small" project I'm working on
(it's a personal project, just now) that will sit entirely inside a TO-8 can and
be wire-bonded with an MSP-430 die and some other parts.  It will provide very
sophisticated compensating software and produce both accurate and precise
measurements, as well as being incredibly convenient and very low power.  I
think of this as 'small.'  Yet it is going to take much out of me to achieve
well.

>Most of what I do I consider medium projects.

I see your point, of course.
 
>Lots of functionality within a single system, or lots of identical 
>single systems.

Yes.

>This wasn't meant to be a C vs Assembler war. That's simply a 
>no-brainer. Some of us write good C, some of us write good assembler, 
>some are good at both, some are pretty ordinary at either/both, but good 
>enough for what they need to do, some of us are learners, and some of 
>us, frankly, are terrible programmers, but, hey, there's plenty of room 
>for us all, and, in the right environment, even terrible programmers can 
>make a good living.

Yes.  And I don't mean to say, by the way, that assembly programmers are smarter
or better or anything like that.  Kris, for example, has discussed a number of
interesting things here and I know he's very bright.  Yet he uses C quite a lot.
There is no stigma to that.

Assembly is just a tool and, I think, it's good practice for any professional
working in MSP-430 type embedded projects to stay current and fluent in it.
Doesn't mean it will get used for every project or even very many of them.  But
the skill should be available for consideration, when facing a project, so that
it can be considered where it may be well used.

Being versatile with a wide variety of tools is a good thing.

Jon

