EmbeddedRelated.com
Forums
The 2024 Embedded Online Conference

Eclipse ARM toolchain for Linux

Started by piyushpandey February 22, 2011
On 25/02/2011 22:04, Simon Clubley wrote:
> On 2011-02-25, David Brown <david@westcontrol.removethisbit.com> wrote:
>>
>> The cheapo debuggers are typically made from an FTDI2232 device and a
>> couple of level converters. These work fairly well, and it's easy to
>> integrate them into your own boards. There is good OpenOCD support for
>> a whole range of combinations of pinning on these devices.
>
> And if you are a hobbyist and want to use something even cheaper than
> that with a suitable ARM board, there's also the parallel port based
> Wiggler clones. :-)
>
> (Assuming of course you still have hardware with a real parallel port.)
>
> The Olimex one (so far) works a lot better for me than I expected.
>
> Simon.
You don't have to be a hobbyist to find parallel port debuggers useful - no drivers or conflicts, and they often have lower latency than USB debuggers (though much worse throughput). It's also a lot easier to see why things are going wrong...
John Devereux wrote:
> ChrisQ <meru@devnull.com> writes:
>
>> John Devereux wrote:
>>
>>> Hi Chris, sure, I have no knowlege of gdb internals either!
>>>
>>> Laptop
>>> ======
>>>
>>> In my case the laptop has a "Amontec jtag-key" connected to it, with its
>>> drivers. There are probably dozens of other even cheaper jtag pods you
>>> can use these days.
>>>
>>> On the laptop I have openocd installed. A command line like
>>>
>>> openocd -f interface/jtag-key.cfg -f target/lpc2478.cfg -f pcb.cfg
>>>
>>> ...starts up the openocd daemon.
>>
>> So openocd is effectively standalone, input debug data at one side and
>> usb jtag or <other> at the target hw side. Assuming a Linux / unix
>> box, the daemon listens directly on a port, or gets called by inetd,
>> depending on the system?
>
> That's right. Never tried it via inetd, I just manually start it. I have
> various configurations anyway for different boards/processors, so I have
> to tell openocd what it is talking to by using the correct command line
> to it. (You can also run a script from the command line to e.g. erase
> and program flash, when using just to flash firmware. You don't need the
> toolchain for that.)
>
>> What i'm trying to do is build a model of how all the bits fit together
>> in terms of data flow between modules, but if the above or similar is
>> the case, it fills in a lot of the gaps.
>
> That's it AFAIK.
I thought it was about time I reported back on the openocd and tools build.

The openocd build effectively worked out of the box and I borrowed a J-Link to test with. Installed the latest Debian squeeze on an old T30 laptop, compilers etc and it all builds and seems fine. Either insight 6.1 or 6.8 on the Solaris 10 / sparc box can see it at the other end of the wire, though not tested with real code yet.

Then spent quite a bit of time building the late rev gnu toolchain cross from Sol10 / sparc to arm, as earlier versions that I have don't support cortex. To start with, all relevant later versions of gcc need the gmp and mpfr math libs to build at all. Initially tried to build these from source, but all the make tests failed. Fortunately, sunfreeware.org has package binaries. If you go to 4.5.2, you also need the mpc lib, but more on that later.

Had a few configure and build issues to start with: things like gmp and mpfr include, lib and file paths, and the ld environment variable for the local shell. But now have 4.1.0 and 4.4.1 cross builds completed in separate trees. The latest binutils 2.21 configures, builds and installs with only a couple of minor warnings, which is great, and once you have all the prerequisites in place, both gcc revs build fine as well, though with a lot more minor warnings.

Also have a version of newlib built in the 4.1.0 tree, but that required a hack (to look at later perhaps), commenting out sections of code to get the build to work. They are all in areas not relevant to the work here and I don't expect to be using newlib much, but did the build more out of curiosity as to what problems would arise.

If you are wondering why the gcc 4.4.1 rev was built, it's needed to support cortex m3 etc. 4.5.2, the latest rev, is another issue altogether and is needed to support later cortex versions like m0 and m1. The MPC lib is a new prerequisite and there are no binary packages available for sol10/sparc, afaik.
So, build from source and again, all 57 tests segfault on make tests. As I'm unlikely to need any of this math stuff for embedded, had a go at configure for 4.5.2 anyway. Configure reports gmp, mpfr and mpc as "buggy but acceptable", but the compile eventually stops with a fatal error. Will be having another look later, but no time at the mo, as I want to get some real code running on a target m3 board. The whole point of the exercise in the first place :-).

Various things emerge from all this. The first is that gnu source quality has vastly improved over earlier revs, with far fewer warnings during build. The downside is the growing list of prerequisites just to build gcc at all. Looking at some of the sources, it's amazing that it even works, such is the complexity now, but hats off and many thanks to all the people who make it possible.

If anyone here would like a binary tarball of either build tree, or just tool binaries and libs, please let me know. While I guess it's a little perverse to be running sparc boxes, the cpu gene pool diminishes more every year and you have to make a stand at some stage. They are also just nice, quality systems to work with...

Regards,

Chris
ChrisQ <meru@devnull.com> writes:

> [...]
>
> I thought it was about time I reported back on the openocd and tools build.
>
> The openocd build effectively worked out of the box and I borrowed a
> J-Link to test with. Installed the latest Debian squeeze on an old T30
> laptop, compilers etc and it all builds and seems fine. Either insight
> 6.1 or 6.8 on the Solaris 10 / sparc box can see it at the other end of
> the wire, though not tested with real code yet.
>
> Then spent quite a bit of time building the late rev gnu toolchain cross
> from Sol10 / sparc to arm, as earlier versions that I have don't support
> cortex. To start with, all relevant later versions of gcc need the gmp
> and mpfr math libs to build at all. Initially tried to build these from
> source, but all the make tests failed. Fortunately, sunfreeware.org has
> package binaries. If you go to 4.5.2, you also need the mpc lib, but
> more on that later.
>
> [...]
>
> Various things emerge from all this. The first is that gnu source quality
> has vastly improved over earlier revs, with far fewer warnings during
> build. The downside is the growing list of prerequisites just to build
> gcc at all. Looking at some of the sources, it's amazing that it even works,
> such is the complexity now, but hats off and many thanks to all the people
> who make it possible.
I've just been through some of this too, just on a PC though not a sparc! Of course for a PC there are lots of ready-made scripts on the web, but that would be too easy. (And I wanted it without newlib, and to experiment with lots of multilibs...)

I know what you mean about the prerequisites, they seem to add another one each time I look at it.

[...]

-- 
John Devereux
On 2011-03-13, John Devereux <john@devereux.me.uk> wrote:
> I've just been through some of this too, just on a PC though not a
> sparc! Of course for a PC there are lots of ready-made scripts on the
> web, but that would be too easy. (And I wanted it without newlib, and to
> experiment with lots of multilibs...)
>
I am curious what you used instead of newlib (or did you roll your own replacement functions)?

I've been building ARM GCC cross compiler toolkits recently as well and I am just making sure there isn't some option available which I am not aware of. :-)
> I know what you mean about the prerequisites, they seem to add another
> one each time I look at it.
>
Indeed. It was a lot quicker to build a toolkit back in the gcc 3.x days. :-)

To be fair however, all the prerequisites do build and pass their tests without any problems for me.

On the plus side, I have noticed the recent versions of gcc have become a lot stricter about certain C code constructs, as some versions of open source packages which built under earlier versions of gcc needed changes to build under later versions. (No, I don't have any examples to hand unfortunately.)

Simon.

-- 
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> writes:

> On 2011-03-13, John Devereux <john@devereux.me.uk> wrote:
>>
>> I've just been through some of this too, just on a PC though not a
>> sparc! Of course for a PC there are lots of ready-made scripts on the
>> web, but that would be too easy. (And I wanted it without newlib, and to
>> experiment with lots of multilibs...)
>>
>
> I am curious what you used instead of newlib (or did you roll your own
> replacement functions)?
The latter!

I was finding that the infrastructure of newlib was more trouble than it was worth, for me. A careless assert() in a bootloader brought in most of stdio etc.

I mainly just wrote a few easy things like memset, strlen etc, as required. A trivial allocate-only "malloc".

There is also a public domain reduced printf that I customized with special conversion codes for "fixed decimal point" numbers. Quite pleased with that.
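An allocate-only "malloc" like the one described is a common bare-metal pattern: a bump allocator that hands out aligned slices of a static pool and never frees. A minimal sketch of the idea, assuming a fixed pool (the names and the 4 KB size are illustrative, not from the original code):

```c
#include <stddef.h>
#include <stdint.h>

/* Allocate-only "malloc": carve aligned chunks from a static pool.
   There is no free(); memory is reclaimed only by a reset. */
static uint8_t heap_pool[4096];
static size_t heap_used;

void *simple_malloc(size_t n)
{
    /* Round the request up to 8-byte alignment so any object fits. */
    size_t aligned = (n + 7u) & ~(size_t)7u;

    if (aligned > sizeof heap_pool - heap_used)
        return NULL;                    /* pool exhausted */

    void *p = &heap_pool[heap_used];
    heap_used += aligned;
    return p;
}
```

For a bootloader or firmware that allocates everything at startup, this removes the code-size and fragmentation costs of a full heap.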
> I've been building ARM GCC cross compiler toolkits recently as well and
> I am just making sure there isn't some option available which I am not
> aware of. :-)
>
>> I know what you mean about the prerequisites, they seem to add another
>> one each time I look at it.
>>
>
> Indeed. It was a lot quicker to build a toolkit back in the gcc 3.x
> days. :-)
Not really, I didn't have a 4 core hyperthreading machine then :)

make -j 12

... speeds things up no end.
> To be fair however, all the prerequisites do build and pass their tests
> without any problems for me.
>
> On the plus side, I have noticed the recent versions of gcc have become a
> lot stricter about certain C code constructs, as some versions of open source
> packages which built under earlier versions of gcc needed changes to build
> under later versions. (No, I don't have any examples to hand unfortunately.)
>
> Simon.
-- John Devereux
On 2011-03-14, John Devereux <john@devereux.me.uk> wrote:
> Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> writes:
>
>> On 2011-03-13, John Devereux <john@devereux.me.uk> wrote:
>>>
>>> I've just been through some of this too, just on a PC though not a
>>> sparc! Of course for a PC there are lots of ready-made scripts on the
>>> web, but that would be too easy. (And I wanted it without newlib, and to
>>> experiment with lots of multilibs...)
>>
>> I am curious what you used instead of newlib (or did you roll your own
>> replacement functions)?
>
> The latter!
>
> I was finding that the infrastructure of newlib was more trouble than it
> was worth, for me. A careless assert() in a bootloader brought in most
> of stdio etc.
>
That I can _easily_ believe. :-)
> I mainly just wrote a few easy things like memset, strlen etc, as
> required. A trivial allocate-only "malloc".
>
> There is also a public domain reduced printf that I customized with
> special conversion codes for "fixed decimal point" numbers. Quite
> pleased with that.
>
Interesting. As my professional programming area involves business coding, fixed point calculations are something I am familiar with, and I have thought about creating a fixed point library as a way of implementing non-integer calculations in an upcoming ARM project without having to pull in a software floating point library.

BTW, I've also seen references to newlib being used by a cross compiled gcc itself, including in the generated code for various low-level functions, but I have not investigated for myself yet. Did you have any problems in this area?
>> Indeed. It was a lot quicker to build a toolkit back in the gcc 3.x
>> days. :-)
>
> Not really, I didn't have a 4 core hyperthreading machine then :)
>
> make -j 12
>
> ... speeds things up no end.
>
:-)

Simon.

-- 
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> writes:

> On 2011-03-14, John Devereux <john@devereux.me.uk> wrote:
>> [...]
>>
>> I was finding that the infrastructure of newlib was more trouble than it
>> was worth, for me. A careless assert() in a bootloader brought in most
>> of stdio etc.
>
> That I can _easily_ believe. :-)
>
>> I mainly just wrote a few easy things like memset, strlen etc, as
>> required. A trivial allocate-only "malloc".
>>
>> There is also a public domain reduced printf that I customized with
>> special conversion codes for "fixed decimal point" numbers. Quite
>> pleased with that.
>
> Interesting. As my professional programming area involves business
> coding, fixed point calculations are something I am familiar with and I
> have thought about creating a fixed point library as a way of implementing
> non-integer calculations in an upcoming ARM project without having to pull
> in a software floating point library.
It simply adds letters to the format string so I can do things like

  fmt(lcd, "Test Time %6.3m seconds", milliseconds);

where the "m" could be d, c or m for deci-, centi- or milli-. The output would then be

  Test Time 12.345 seconds

Of course there are other ways to do the same thing.

Actually, isn't there an extension to the C language for fixed point arithmetic too? Wouldn't trust it for financial work though :)
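The conversion itself needs no floating point; each extra letter just selects a power-of-ten divisor and the value is split with an integer divide and modulo. A hedged sketch of that step (the function names are mine, the real fmt() will differ, and negative values would need extra care):

```c
#include <stdio.h>
#include <string.h>

/* Map the extra conversion letters to a divisor:
   'd' = deci (10), 'c' = centi (100), 'm' = milli (1000). */
static long scale_for(char conv)
{
    switch (conv) {
    case 'd': return 10;
    case 'c': return 100;
    default:  return 1000;   /* 'm'; only d/c/m are handled here */
    }
}

/* Render a non-negative scaled integer, e.g. ('m', 12345) -> "12.345". */
int format_fixed(char *buf, size_t n, char conv, long value)
{
    long s = scale_for(conv);
    int digits = (s == 10) ? 1 : (s == 100) ? 2 : 3;
    return snprintf(buf, n, "%ld.%0*ld", value / s, digits, value % s);
}
```

The zero-padded fraction width ("%0*ld") is what keeps 7 ms printing as "0.007" rather than "0.7".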
> BTW, I've also seen references to newlib being used by a cross compiled gcc
> itself, including in the generated code for various low-level functions,
> but I have not investigated for myself yet.
>
> Did you have any problems in this area?
It seems to be needed for c++, but not a plain C compiler AFAIK. That just seems to use libgcc2.

I gather gcc can optimise certain constructs into library calls ("builtins"). Perhaps a loop through an array that zeroes each element could be replaced by memset. I have not hit this myself yet; perhaps I have not enabled the right optimisations, or I have already written the functions required!

[...]

-- 
John Devereux
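The effect described here is easy to trip over in freestanding builds: gcc assumes memcpy/memset exist even with -ffreestanding, and may emit calls to them for clearing loops or large zero-initialisers, which is why bare-metal projects usually carry their own copies. A minimal byte-at-a-time version (named bare_memset here only to avoid clashing with the host libc; in a real freestanding build it would simply be called memset):

```c
#include <stddef.h>

/* Minimal memset replacement for a libc-less build. A word-at-a-time
   loop would be faster, but this is enough to satisfy the compiler. */
void *bare_memset(void *dest, int c, size_t n)
{
    unsigned char *p = dest;
    while (n--)
        *p++ = (unsigned char)c;
    return dest;
}
```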
John Devereux wrote:

> I've just been through some of this too, just on a PC though not a
> sparc! Of course for a PC there are lots of ready-made scripts on the
> web, but that would be too easy. (And I wanted it without newlib, and to
> experiment with lots of multilibs...)
>
Agree with the bit about too easy. Building the tools was a necessity here, as there's nothing else available. Crossworks is available for Solaris on intel, but I wouldn't think they would be too sympathetic to a request for a sparc version, even if I were in the market to buy in a toolchain. No, I wanted to get a bit more experience in the build process and experiment with the various options. Now that I have all the other gnu utilities in place, there is a reasonably good gnu build environment here and it should be easier to bootstrap later versions in future.

The 3 or 4 issues with the newlib build were: one each in 2 crt0.S modules, where I just commented out the offending lines. I always write my own crt0 / startup code anyway, so that bit is irrelevant. The other two were in two lib functions that I would never use anyway, so it seemed safe to comment those out as well. Crude, but you have to start by taking the pragmatic approach :-). After that, everything built fine.

Didn't build newlib within the 4.4.1 tree, as some docs I found on the web suggest that you need to build gcc again after building and installing newlib. If true, it probably means that newlib introduces some dependencies into the gcc build process, which I'm not too happy about. I see most if not all code here running from bare metal board level, so a binutils + gcc build should be all that's needed.

So what configure options did you use to experiment with lots of multilibs, and what were the results?...

Regards,

Chris
Simon Clubley wrote:

> Interesting. As my professional programming area involves business
> coding, fixed point calculations are something I am familiar with and I
> have thought about creating a fixed point library as a way of implementing
> non-integer calculations in an upcoming ARM project without having to pull
> in a software floating point library.
>
You can also work in scaled integers for some apps. On a cash handling machine from many years ago, all the values were stored in pence. The resulting math was quite trivial, as was conversion for display. I was very grateful for decimalisation at the time; a U32 had more than enough range to handle max values. Imagine having to do all that in pounds, shillings and pence. Eeeek!
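That scheme in code: hold every amount as an integer count of pence, do all arithmetic in that unit, and split into pounds and pence only at display time (the names here are illustrative, not from the original machine):

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* All amounts held as whole pence. UINT32_MAX pence is about
   42.9 million pounds, ample range for a cash handling machine. */
typedef uint32_t pence_t;

/* Arithmetic stays exact integer maths; no rounding surprises. */
pence_t add_amounts(pence_t a, pence_t b) { return a + b; }

/* Conversion happens only for display: 12345 pence -> "123.45". */
int format_pounds(char *buf, size_t n, pence_t p)
{
    return snprintf(buf, n, "%lu.%02lu",
                    (unsigned long)(p / 100), (unsigned long)(p % 100));
}
```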
> BTW, I've also seen references to newlib being used by a cross compiled gcc
> itself, including in the generated code for various low-level functions,
> but I have not investigated for myself yet.
>
Interesting - see my other post to John re this issue, but haven't looked into what the dependencies are, if any...

Regards,

Chris
Grant Edwards wrote:

> I wasn't following the gdb developers list at the time but my _guess_
> is that maybe they didn't (remove functionality when they removed the
> Angel stuff from the source tree). Obscure features/protocols often
> end up in a broken state as other parts of the system evolve. If
> there's nobody with the knowlege/desire/hardware/time to maintain the
> obscure features, they end up being removed after they've been sitting
> in a non-working, non-maintained state for a few years.
>
> In other cases, an obscure feature (that may be working) is removed
> because the maintainers know it's going to be broken by a big
> refactoring or redesign, and nobody steps ups to bring the obscure
> feature forward with the rest of the system.
>
> Or maybe they just got tired of the ugliness. :)
>
Have no real knowledge of gdb internals, but are the various debug protocols now handled by some sort of plug-in format, or is it an ad hoc function call interface? A plug-in structure would make the integration of alternate debug formats easier, as it would become more of a translation process, from internal format to and from the target...

Regards,

Chris
