
What's the best MSP430 Development Environment?

Started by PFG February 6, 2013
On Thu, 7 Feb 2013 21:55:03 +0000, Paul wrote:

>On 7 Feb 2013, at 21:46, Jon Kirwan wrote:
>
>> On Thu, 07 Feb 2013 12:25:03 -0800, I wrote:
>>
>>> to greatly simplify the resulting code
>>
>> Well, that was abrupt from me. Sorry. What I meant to write
>> was:
>>
>> The use of lambdas in this case greatly simplified the final,
>> resulting code. I know, because it started out much more
>> complex in handling the communication of keys between tiers
>> and involved a number of explicit classes to get the work
>> done. Five modules were cut down to two, as a result of using
>> lambdas, and the overall code was far, far easier to explain
>> in a code review I did with the customer's programming team.
>> It was better in every conceivable way.
>
> 1960s technology to the rescue! Now, how many times have I
> said that my favourite language happens to come from the
> Lisp branch of the Language Tree? I love T; I like Common
> Lisp.

I had known only _some_ of what was out there because of my
limited exposure. But I knew it was powerful. I just didn't
know how good it would be when faced with a real project and
a compiler toolchain that supported the concepts well. (Okay,
so my experience remains limited... but it has grown a little
too.)

I kept lambdas in the back of my mind while working, but I
wasn't someone with a hammer looking for a nail. So the
project started out traditionally, like I would any other
project. The customer determined the language and environment
and set the scope of the application and its target goals.

I had neatly divided the project along natural, logical
divisions. And I used "helper" classes to manage things
between them. The design used essentially uncomplicated
concepts, but was forced to use enough "widgets" to get the
job done that explaining it was more painful than I wanted
and I knew instinctively that this also meant that later
support would be tricky and so would proving correctness of
the approach, as well.

On a whiteboard, while redrawing all this to myself for
another go, I suddenly realized that I was staring squarely
at a lambda concept. There it was, nakedly exposed and
obvious.

Took me exactly one day to throw away three modules and
slightly rewrite the other two and to test and include the
syntax for lambdas (which at the time I was unfamiliar with,
but knew existed.) The whole thing collapsed into its
essentially simple structure and worked fabulously well.
Which says three things: (1) the design process produced a
correctly aligned design, and (2) traditional language
limitations were creating a nightmare, and (3) lambdas are
very powerful tools.

When I contacted my client about it, not wanting to scare
them out of their wits, their first response was fear. They
knew nothing about the idea and although they had been
working in the language for decades already and had "heard
about" the idea... they had their trepidations. I took full
responsibility (having already written and tested the new
code) for it and gave them my assurance they would approve it
fully once I explained the result to them in a meeting.

That meeting went very, very well. I had no problem
explaining the basic idea and how it applied in this
particular case and to then show them the differences it
made.

By the way, one of the primary goals for the new system was
speed. It was pretty easy to do parallel testing and to
benchmark the old code vs the new code. At run times on the
order of 100 seconds for their old system, they weren't able
to measure the time required for the new one (beneath their
timer resolution of 1ms.) It took more testing to discover
that we were operating almost a million times faster.
Which basically just means they don't have to worry about
this part of it, anymore. (The old system was causing
customer complaints.)

Lambdas are beautiful.

I HATE the C++ syntax for them, though. Damn the committee
people! Anyone could do better. And why in the heck does one
have to manually add text to "help" the compiler do closures?

> I'm not sure you really appreciate a language until you
> write a compiler or interpreter for it. And I'm not sure I
> will ever appreciate the finer details of C++ because I have
> no intention of writing a compiler for it.
>
>-- Paul.

But aren't you selling a gnu-based C++ toolset? You must have
some awareness. And besides, doesn't gnu support the new
lambda syntax and semantics now? Don't you spend late
evenings reading through gnu code that was built up with the
hands of thousands of tinkerers almost like the fabled stone
soup? ;)

Thanks very much for the time, Paul.

Jon


On Thu, 7 Feb 2013 22:19:46 +0000, you wrote:

>>> Customers can endure unending pain when the price point is $0.
>>
>> Well, there is more to it than that. One is the ability to
>> completely freeze all of the source code into a repository,
>> including that of the operating system it runs on, so that if
>> and when an embedded project needs to be unearthed (as has
>> happened to me many times) the tools are there and the
>> documents about them remain accurate and able to be followed.
>
> You just get yourself a virtual machine… We use VMs all the
> time in our rack of servers, and I use a VM to run multiple
> operating systems. It's great.

I do that already. But it doesn't always solve the problem.
As you should know. For example, while a 32-bit Windows XP VM
running under a 64-bit Windows 7 O/S can provide a DOS box
that executes 16-bit code (which 64-bit Windows 7 won't do),
it still won't support many of the features well supported
under "real DOS." Do you know of a VM manager that supports
Microsoft's DOS 5.0 out of the box, along with Windows 7
and 8, WinXP, Win98SE, Win 3.1, Linux, FreeBSD, etc.?

Maybe I need more experience here.

And worse, some of the old media may still need to be
unearthed. In some cases, clients will keep old stuff on the
floppies that were generated at the time. Or REQUIRE hardware
(such as an eprom programmer) that plugs into an ISA-8 or
ISA-16 slot. They keep the hardware on the shelf, along with
a system it can plug into. (I do, too, and still have a
working 80386 system, two working 80486 systems, and a couple
of Pentiums with ISA slots.) You can't virtualize that stuff
well. And some hardware doesn't survive the transition to new
interfaces, as it isn't important anymore to most people. So
no one makes it.

And VMs have their issues.

I prefer freezing the entire system. The doc matches up,
things work as expected, etc. And if it doesn't? You are no
worse off.

I still keep Lattice C around, as I said. And old copies of
compiler tools, as well. I still use MSC 1.52c, the last
Microsoft 16-bit compiler, for example. Can't tell you how
many versions of MPLAB I've sequestered. Lots.

> I still maintain that a price of $0 is hard to beat, and
> bean counters can easily impose that purchase on an
> engineering team, I guess.
>
>-- Paul.

Paul, your support is worth money. I think ANY business would
recognize that value. Maybe what is happening is a change in
the business model to the one envisioned by FSF and folks are
being sucked in, whether or not they like it, by choices made
by customers and their teams of programmers in the field
based on situations they know better than anyone else.

But I can't say for sure. You see a different part of the
elephant than I do.

Jon
The forebear of MSP430X was whispered about as early as 2002, I
think. Of course, the 430X we have today is neither fish nor fowl,
solving problems that no one wants it to solve.

The compiler business is at a crossroads, and folks clamoring for free GCC
notwithstanding, the customers will suffer in the end. Just remember, the
silicon vendors' goal is to "lock you in," and the Open Source community
does whatever strikes certain engineers' fancy. Neither is likely
to solve the "Next Problems."

The compiler tech and the IDE tech are still pretty much state of the art
circa the 2000s, and are unlikely to ever improve further.

--
// richard m: richard @imagecraft.com
On 07/02/13 23:40, Jon Kirwan wrote:
> On Thu, 7 Feb 2013 22:19:46 +0000, you wrote:
>
>>>> Customers can endure unending pain when the price point is $0.
>>>
>>> Well, there is more to it than that. One is the ability to
>>> completely freeze all of the source code into a repository,
>>> including that of the operating system it runs on, so that if and
>>> when an embedded project needs to be unearthed (as has happened
>>> to me many times) the tools are there and the documents about
>>> them remain accurate and able to be followed.
>>
>> You just get yourself a virtual machine… We use VMs all the time in
>> our rack of servers, and I use a VM to run multiple operating
>> systems. It's great.
>
> I do that already. But it doesn't always solve the problem. As you
> should know. For example, while a 32-bit Windows XP VM running under
> a 64-bit Windows 7 O/S can provide a DOS box that executes 16-bit
> code (which 64-bit Windows 7 won't do), it still won't support many
> of the features well supported under "real DOS." Do you know of a VM
> manager that supports Microsoft's DOS 5.0 out of the box, along
> with Windows 7 and 8, WinXP, Win98SE, Win 3.1, Linux, FreeBSD, etc.?
>
> Maybe I need more experience here.

For DOS and Win3.1, you can use DOSBox. It is not a general virtual
machine like VirtualBox or VMware - it is specific to running DOS. And
since Win3.x is really just a DOS application, it can run under DOSBox
too. The usual version of DOS to run is FreeDOS, because it is better
than any version of MS-DOS for most purposes (except for running Win3.x,
which apparently works better with MS-DOS 6.22 than FreeDOS), and
because it is free.

Other than that, I have read in several places that Win3.1 and DOS (both
MS-DOS and FreeDOS) work fine under VirtualBox. You need a bit of extra
work if you want to avoid the virtual machine using 100% of a CPU (or
just make sure your host has multiple cores), since these old OSes don't
support CPU sleep modes. Certainly all the other OSes you listed will
work fine with VirtualBox.

Of course, hardware access can be an issue - USB passthrough will work
fine if the host OS is Linux, and most of the time with Windows as the
host. Serial port passthrough will work too, but may have more latency
than bare hardware would. Parallel port passthrough is quite limited in
VirtualBox - I believe it works a bit with Windows as the host, but not
(yet) with Linux hosts. But it will certainly have more latency, and
thus be a problem for things like parallel port debuggers and
programmers. PCI passthrough is available as an experimental feature
for Linux hosts, but not Windows hosts. And I don't think anyone has
considered making ISA bus passthrough!
>>
>> 1960s technology to the rescue! Now, how many times have I
>> said that my favourite language happens to come from the
>> Lisp branch of the Language Tree? I love T; I like Common
>> Lisp.
>
> I had known only _some_ of what was out there because of my
> limited exposure. But I knew it was powerful. I just didn't
> know how good it would be when faced with a real project and
> a compiler toolchain that supported the concepts well. (Okay,
> so my experience remains limited... but it has grown a little
> too.)

I'm sure some will think that lambdas are new. Of course, nothing is really new. I would say that quite a lot of the computer science field that deals with language design and implementation is "mined out" now. Logic, functional, declarative, imperative, take your pick!



> Which says three things: (1) the design process produced a
> correctly aligned design, and (2) traditional language
> limitations were creating a nightmare, and (3) lambdas are
> very powerful tools.

Lambdas are very nice. In Common Lisp, #' is natural and clean.

>
> Lambdas are beautiful.
>
> I HATE the C++ syntax for them, though. Damn the committee
> people! Anyone could do better. And why in the heck does one
> have to manually add text to "help" the compiler do closures?
>

Not having even looked at implementing C++ lambdas, I couldn't say why the capture list is required. Clearly, in Lisp a capture list is not required and a closure forms naturally (and there are a number of ways of implementing such a closure, even within the same Lisp system).
>> I'm not sure you really appreciate a language until you
>> write a compiler or interpreter for it. And I'm not sure I
>> will ever appreciate the finer details of C++ because I have
>> no intention of writing a compiler for it.
>>
>> -- Paul.
>
> But aren't you selling a gnu-based C++ toolset? You must have
> some awareness.

I understand the concept, of course, from Lisp. And lambdas are a natural accompaniment to STL when you want to do things with a container. Beyond that, I have no interest in their implementation. They are not widely implemented at this time, which means that using them in ostensibly portable software isn't possible.

> And besides, doesn't gnu support the new
> lambda syntax and semantics now? Don't you spend late
> evenings reading through gnu code that was built up with the
> hands of thousands of tinkerers almost like the fabled stone
> soup? ;)
>

Sorry, no. I have my own interests to pursue and don't need to be tinkering in that particular area. :-)

-- Paul.
On 7 Feb 2013, at 23:08, Richard Man wrote:

> The forebear of MSP430X was whispered about as early as 2002, I
> think. Of course, the 430X we have today is neither fish nor fowl,
> solving problems that no one wants it to solve.

The MSP430X design that we have now is rather problematic in a number of areas. The design was, as far as I remember, required because one of TI's big customers for glucose meters was clean out of code space -- you know the thing, need more functionality, a GUI, and all those good things to compete in the market and the 16-bit MSP430 had nowhere to go. What TI ended up with was something that wouldn't scare the horses and would run old 16-bit code, but would also present a 20-bit world.

You can now get MSP430s with 512K of flash. And how do you develop on that with a FET430UIF? We're lucky to get maybe 8K/second out of the FET430UIF on a great day. Compare that to your typical ARM uC that programs flash at 100KB per second or can download into RAM at 1 MB per second and is not encumbered by 16-bit-only registers. LPC1100s and LPC1300s are amazing devices.

What's more, with the formation of the low-energy benchmarks that EEMBC are developing, with Horst chairing things, there might be a real answer to just how an EFM32 compares to an MSP430 for the same task.

>
> The compiler business is at a crossroads, and folks clamoring for free GCC
> notwithstanding, the customers will suffer in the end. Just remember, the
> silicon vendors' goal is to "lock you in," and the Open Source community
> does whatever strikes certain engineers' fancy. Neither is likely
> to solve the "Next Problems."

I'm not sure that Si vendors want to lock anybody in. What they want is more customers to buy their ICs and, to that end, some have decided to take tools back in house. Microchip:HI-TECH, Freescale:Metrowerks, TI:Tartan, ADI:EPC, and so on... They don't want to risk their business on a single tools company.

-- Paul
"Lock in" is probably a poor choice of words. I meant that as long as the
perception is that they are satisfying their customers' current needs, there
is no need for them to provide anything different or better.

I will give one example: we all know there are more and more embedded
projects being done, but there are no new tools to help. Compiler tools are
primitive, and the IDE could be better. For example, why are we still writing
header files et al.? Yes, I know the language requires certain syntax and
semantics, but why are the tools not helping? Or even Jon's lambdas. Let's
say it IS the greatest thing since sliced bread, but what's the chance of
sliced bread being made by the Si vendors or Open Source? And there is no
ROI for a 3rd party to spend money developing products that are obscured by
the "we want free tools" mantra.

On Thu, Feb 7, 2013 at 3:36 PM, Paul Curtis wrote:
>
> I'm not sure that Si vendors want to lock anybody in. What they want is
> more customers to buy their ICs and, to that end, some have decided to take
> tools back in house. Microchip:HI-TECH, Freescale:Metrowerks, TI:Tartan,
> ADI:EPC, and so on... They don't want to risk their business on a single
> tools company.
>
> -- Paul


// richard m: richard @imagecraft.com
On Thu, 7 Feb 2013 23:11:10 +0000, Paul wrote:

>><snip>
>> Lambdas are beautiful.
>>
>> I HATE the C++ syntax for them, though. Damn the committee
>> people! Anyone could do better. And why in the heck does one
>> have to manually add text to "help" the compiler do closures?
>
> Not having even looked at implementing C++ lambdas, I
> couldn't say why the capture list is required.
><snip>

That's a point. Without actually _implementing_ something,
it's hard to be sure what motivated them. So I take that
point.

But... C# and VB.NET both do the capture without any
programmer involvement. It seems to me that if the variables
are in scope (which they must be if they are being used, and
this should apply to C++ as well as the others), then the
compiler has all the necessary information to know what needs
to be captured and how.

When I think about exceptions I'm not so sure anymore, but
then I realize that the C++ compiler does it fine when the
list is hand-coded. So that can't be the problem.

Which leaves me without a paddle, really. What could they
possibly have been thinking? If expressing a list explicitly
makes a difference somehow, how then does it? What could
possibly be the issue, Paul? Can you think of ANYTHING, even
hypothetically, that could explain how an explicit list would
solve something that the lack of one couldn't? If the
variable is in scope, it's in scope. If not, not. The list
changes none of that.

Oh, well.

It bugs me and it does so because I can't think of a good
excuse for what passed through committee.

>>> I'm not sure you really appreciate a language until you
>>> write a compiler or interpreter for it. And I'm not sure I
>>> will ever appreciate the finer details of C++ because I have
>>> no intention of writing a compiler for it.
>>>
>>> -- Paul.
>>
>> But aren't you selling a gnu-based C++ toolset? You must have
>> some awareness.
>
> I understand the concept, of course, from Lisp. And lambdas
> are a natural accompaniment to STL when you want to do
> things with a container. Beyond that, I have no interest in
> their implementation. They are not widely implemented at
> this time, which means that using them in ostensibly
> portable software isn't possible.

Hmm.... Well, let's see. Who causes things to become widely
implemented? I think the answer is... compiler writers. Yup.

So I guess this rests on your shoulders, Paul. It is you and
folks like you that will either make this happen... or not.

And since we both admit this is great stuff, works well, does
good things... and it has NOW been incorporated into the
standards... then either it happens in products like yours
and others or gnu moves on to the future and leaves everyone
else behind.

http://gcc.gnu.org/projects/cxx0x.html

Maybe the writing is already on the wall, as has been
discussed here, and everyone knows the elephant in the room
and just doesn't talk much about it (until today).

I'm not happy about the portents mentioned today because I
don't like some one-tool gorilla taking over the field. I
like choices. But perhaps the market won't support that, and
in the end gnu's tools and the FSF vision will have carved out
big changes in one section of the software world.

>> And besides, doesn't gnu support the new
>> lambda syntax and semantics now? Don't you spend late
>> evenings reading through gnu code that was built up with the
>> hands of thousands of tinkerers almost like the fabled stone
>> soup? ;)
>
> Sorry, no. I have my own interests to pursue and don't need
> to be tinkering in that particular area. :-)
>
>-- Paul.

You mean you have a life?! Shame on you. What is the software
priesthood going to do if more priests like you go out and
get themselves real lives?

Why, the church would as much as crumble into ashes. ;)

Jon
On Fri, Feb 8, 2013 at 6:19 AM, Paul Curtis wrote:
>>
>>> Customers can endure unending pain when the price point is $0.
>
> I still maintain that a price of $0 is hard to beat, and bean counters
> can easily impose that purchase on an engineering team, I guess.

I think you mean you are okay competing with a $0 GCC-based
toolchain (e.g. AVR GCC, msp430-gcc, ARM GCC, etc.) but you
have problems competing with vendor-supported non-GCC tools (or GCC
with proprietary bits, like Microchip's C30/XC16 and C32/XC32).

I think that for a proprietary core, it makes sense for vendors to have
a strong in-house compiler offering, along with a strong
3rd party (e.g. IAR), which is highly desirable but not essential
as long as the in-house offering can satisfy customer needs.
This seems to be what MCU vendors like TI, Microchip and
Freescale are doing (buying strong 3rd-party toolchain vendors).
Renesas also has strong in-house offerings, and then IAR is
the other major force.

This makes smaller 3rd-party players' lives more difficult.

The open, multi-vendor 8051/ARM world is different, since the
3rd-party tools are very strong: IAR, Keil. It is difficult for a
single 8051/ARM vendor to buy a good compiler that is
used only for their own variation of 8051/ARM.

I agree that the ARM adoption rate is accelerating. At work, we
are more or less standardizing on the ARM architecture for standard
MCU/MPU and ASIC built-in MCU/MPU cores, other than a
few product families where it is too difficult to move away from
the existing architecture.

At work, we usually do not use $0 tools, since the cost
of the toolchain is really quite small compared to engineering
cost. So we choose good ones, like Keil for 8051 and IAR for
ARM.

On the other hand, it seems to me that the cost of lower-end ARM
Cortex-M0/M3/M4 MCUs gets lower and lower, and I
wonder how those MCU vendors (NXP, ST, TI, Freescale, etc.)
can earn decent money from selling those MCUs.
--
Xiaofan
That's a pretty rosy picture: does anyone know IAR's financial situation? Just
sayin'.


> I think you mean you are okay competing with a $0 GCC-based
> toolchain (e.g. AVR GCC, msp430-gcc, ARM GCC, etc.) but you
> have problems competing with vendor-supported non-GCC tools (or GCC
> with proprietary bits, like Microchip's C30/XC16 and C32/XC32).
>
> I think that for a proprietary core, it makes sense for vendors to have
> a strong in-house compiler offering, along with a strong
> 3rd party (e.g. IAR), which is highly desirable but not essential
> as long as the in-house offering can satisfy customer needs.
> This seems to be what MCU vendors like TI, Microchip and
> Freescale are doing (buying strong 3rd-party toolchain vendors).
> Renesas also has strong in-house offerings, and then IAR is
> the other major force.
>
> This makes smaller 3rd-party players' lives more difficult.
>
> The open, multi-vendor 8051/ARM world is different, since the
> 3rd-party tools are very strong: IAR, Keil. It is difficult for a
> single 8051/ARM vendor to buy a good compiler that is
> used only for their own variation of 8051/ARM.
>
> I agree that the ARM adoption rate is accelerating. At work, we
> are more or less standardizing on the ARM architecture for standard
> MCU/MPU and ASIC built-in MCU/MPU cores, other than a
> few product families where it is too difficult to move away from
> the existing architecture.
>
> At work, we usually do not use $0 tools, since the cost
> of the toolchain is really quite small compared to engineering
> cost. So we choose good ones, like Keil for 8051 and IAR for
> ARM.
>
> On the other hand, it seems to me that the cost of lower-end ARM
> Cortex-M0/M3/M4 MCUs gets lower and lower, and I
> wonder how those MCU vendors (NXP, ST, TI, Freescale, etc.)
> can earn decent money from selling those MCUs.
>
> --
> Xiaofan

--
// richard m: richard @imagecraft.com
// portfolio: <http://www.dragonsgate.net/pub/richard/PICS/AnotherCalifornia>
// blog: http://rfman.wordpress.com
// book: http://www.blurb.com/bookstore/detail/745963