DS-5 opinions/reviews

Started by Don Y April 17, 2015
Don Y <this@is.not.me.com> wrote:
> I've recommendations from two colleagues that speak very highly
> of it (usage, support, code size, speed and quality). When GCC
> is brought up, both <frown> recalling how much more "work" is
> required getting, installing, begging for assistance/bug fixes;
> all to "save a DEVELOPMENT dollar".
A day or two of fighting broken DRM is usually enough to make people see
the real value of GCC's freedom.

My own experience with DS-5 only extends to a couple of days of
following tutorials for Altera's Cyclone-V, but it seemed like a pretty
polished package. On the other hand, at least for Cortex-M development I
don't know if it would give a lot more than Eclipse combined with the
"GNU Tools for ARM Embedded Processors"[1] toolchain, supported by the
GNU ARM Eclipse[2] plugin and either a J-Link or OpenOCD and a cheap
JTAG dongle.

-a
[1] <https://launchpad.net/gcc-arm-embedded>
[2] <http://gnuarmeclipse.livius.net/blog/>
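For a sense of scale, the bare-metal entry code such a GCC/Eclipse setup
needs is small. A minimal Cortex-M sketch -- the two-entry vector table
is the architectural convention, but the symbol names, section name and
build line are illustrative, not from any particular vendor's startup
files:

    /* minimal Cortex-M startup sketch (names/addresses illustrative) */
    #include <stdint.h>

    extern uint32_t _estack;      /* top of stack, from the linker script */

    void Reset_Handler(void);

    /* architectural convention: word 0 = initial SP, word 1 = reset vector */
    __attribute__((section(".isr_vector"), used))
    static const void *vectors[] = { &_estack, Reset_Handler };

    void Reset_Handler(void)
    {
        /* a real startup would copy .data, zero .bss, then call main() */
        for (;;)
            ;
    }

    /* build (illustrative):
         arm-none-eabi-gcc -mcpu=cortex-m4 -mthumb -nostartfiles \
             -T link.ld -o app.elf startup.c
       then flash/debug via a J-Link, or OpenOCD and that cheap dongle. */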
Hi Anders,

On 4/20/2015 6:12 AM, Anders.Montonen@kapsi.spam.stop.fi.invalid wrote:
> Don Y <this@is.not.me.com> wrote:
>> I've recommendations from two colleagues that speak very highly
>> of it (usage, support, code size, speed and quality). When GCC
>> is brought up, both <frown> recalling how much more "work" is
>> required getting, installing, begging for assistance/bug fixes;
>> all to "save a DEVELOPMENT dollar".
>
> A day or two of fighting broken DRM is usually enough to make people see
> the real value of GCC's freedom.
The only time I've had a problem with licensing on a software product
was when trying to move from one machine to another (where the license
was tied to the MAC of the NIC, etc.). A call to Support and proof of
ownership solved that problem.

[Note in Europe I think you are still dealing with dongle'd products?]

I think most licensing woes come from folks using pirated/cracked
software and later discovering that a crack was imperfect or was
invalidated by an update they downloaded, etc.
> My own experience with DS-5 only extends to a couple of days of
> following tutorials for Altera's Cyclone-V, but it seemed like a pretty
> polished package. On the other hand, at least for Cortex-M development I
> don't know if it would give a lot more than Eclipse combined with the
> "GNU Tools for ARM Embedded Processors"[1] toolchain, supported by the
> GNU ARM Eclipse[2] plugin and either a J-Link or OpenOCD and cheap JTAG
> dongle.
I'm working with A-series parts and doing bare-metal development.
E.g., just getting the VM system up and running is an "effort".
I believe ARM's core/device models should make that a lot easier
without relying on "real" hardware -- especially if you haven't
yet settled on a particular manufacturer's device, designed/laid out
a PCB, and fabbed prototype quantities.

If, OTOH, ARM makes those models freely available (or not-freely, but
with ample support for integration with "third party" tools), then that
opens the door for other approaches.

Regardless, I would assume (hope) that ARM would have more of an
incentive (and *specific* expertise on ARM-licensed devices) to get you
to production, as most of their revenues are probably derived from
recurring license fees (i.e., per-unit device sales). What incentive
does someone supporting GCC/GDB have to resolve some issue deep in the
ARM IP?

Dunno. So far I've just had good recommendations from the two
colleagues using the toolchain. Once I get unpacked, I'll download the
evaluation copy of the product and start throwing some of my code at it
to see how well it compares (code quality, size, speed) with the other
compilers (ARM and not) I have available.

As far as evaluating their "Support", I can only, so far, rely on my
colleagues' experiences (but neither of them works on bare metal, so I
suspect the sorts of questions/problems they encounter are different
from those I am likely to encounter).
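To make the scale of that VM bring-up "effort" concrete, here is a
hedged sketch of a minimal ARMv7-A sequence -- a flat map of 1MB
sections -- with cache/TLB maintenance and memory-attribute choices
deliberately omitted. None of this is from ARM's or any vendor's
reference code:

    #include <stdint.h>

    /* one first-level descriptor per 1MB of address space;
       the table itself must be 16KB-aligned */
    static uint32_t l1_table[4096] __attribute__((aligned(16384)));

    void mmu_enable_flat_map(void)
    {
        uint32_t sctlr;

        for (uint32_t i = 0; i < 4096; i++)
            /* section descriptor: base = i MB, AP=0b11 (read/write),
               domain 0, type 0b10 (section) */
            l1_table[i] = (i << 20) | (3u << 10) | 0x2u;

        asm volatile ("mcr p15, 0, %0, c2, c0, 0" :: "r"(l1_table)); /* TTBR0 */
        asm volatile ("mcr p15, 0, %0, c3, c0, 0" :: "r"(0x3u));     /* DACR  */
        asm volatile ("mrc p15, 0, %0, c1, c0, 0" : "=r"(sctlr));
        sctlr |= 0x1u;                                               /* SCTLR.M */
        asm volatile ("dsb\n\tmcr p15, 0, %0, c1, c0, 0\n\tisb" :: "r"(sctlr));
    }

Even this toy version hides the decisions (memory types, shareability,
ASIDs) that you'd want a model -- or silicon -- to let you probe.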
Don Y <this@is.not.me.com> wrote:
> [Note in Europe I think you are still dealing with dongle'd products?]
Most of my problems have been with dongles for products that haven't
been supported by their manufacturers for ages. Getting parallel port
dongles to run on modern 64-bit Windows systems is no fun. But protected
software also prevents you from, e.g., spinning up another CI server
whenever you need one, and networked licenses always seem to
mysteriously fail at the most inopportune moment. Given the choice, I
will always pick the Free option.
> I'm working with A-series parts and doing bare-metal development.
> E.g., just getting the VM system up and running is an "effort".
> I believe ARM's core/device models should make that a lot easier
> without relying on "real" hardware -- especially if you haven't
> yet settled on a particular manufacturer's device, designed/laid out
> a PCB, and fabbed prototype quantities.
You may want to check out QEMU.
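For what that looks like in practice, a hedged sketch: a semihosted
"hello" on QEMU's stock vexpress-a9 machine. The flags and link address
follow the usual newlib/QEMU semihosting route, but treat the exact
command lines as illustrative:

    /* hello.c -- bare-metal "hello" for a QEMU machine model */
    #include <stdio.h>

    int main(void)
    {
        printf("hello from the model\n");   /* routed out via semihosting */
        return 0;
    }

    /* build/run (illustrative); the link address must land in the
       model's RAM (0x60000000 on vexpress-a9):

         arm-none-eabi-gcc -mcpu=cortex-a9 --specs=rdimon.specs \
             -Wl,-Ttext=0x60010000 -o hello.elf hello.c
         qemu-system-arm -M vexpress-a9 -nographic -semihosting \
             -kernel hello.elf

       add "-S -s" to halt at reset and attach arm-none-eabi-gdb with
       "target remote :1234". */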
> What incentive does someone supporting GCC/GDB have to resolve some
> issue deep in the ARM IP?
ARM have long contracted companies like CodeSourcery/Mentor to develop
ARM support for the GNU toolchain, and have in recent years even taken
over the development work themselves (AFAIK, previously they were afraid
of stepping on the toes of third-party compiler vendors). If there are
errata that need workarounds, they get added to GCC immediately.

Most GCC development by far is done on a commercial basis, either
sponsored by the chip makers directly or by vendors like Mentor or
Red Hat. The incentives are the same as for any other compiler.

-a
Don Y <this@is.not.me.com> writes:

> [...]
>
> Regardless, I would assume (hope) that ARM would have more of an
> incentive (and *specific* expertise on ARM-licensed devices) to get you
> to production, as most of their revenues are probably derived from
> recurring license fees (i.e., per-unit device sales). What incentive
> does someone supporting GCC/GDB have to resolve some issue deep in the
> ARM IP?
Did you not see the link to ARM GCC?

https://launchpad.net/gcc-arm-embedded

It is maintained by ARM employees.

They want to sell more of their chips (or, more accurately, they want
their customers to sell more of their chips). That is their incentive.

--
John Devereux
On 4/20/2015 9:16 AM, Anders.Montonen@kapsi.spam.stop.fi.invalid wrote:
> Don Y <this@is.not.me.com> wrote:
>> [Note in Europe I think you are still dealing with dongle'd products?]
>
> Most of my problems have been with dongles for products that haven't
> been supported by their manufacturers for ages. Getting parallel port
> dongles to run on modern 64-bit Windows systems is no fun. But protected
> software also prevents you from, e.g., spinning up another CI server
> whenever you need one, and networked licenses always seem to
> mysteriously fail at the most inopportune moment. Given the choice, I
> will always pick the Free option.
I (USA) haven't seen a dongled product in over 20 years. The last I can
recall using were some of Data I/O's products (DASH-STRIDES, DASH-PCB,
ABEL, etc.).
>> I'm working with A-series parts and doing bare-metal development.
>> E.g., just getting the VM system up and running is an "effort".
>> I believe ARM's core/device models should make that a lot easier
>> without relying on "real" hardware -- especially if you haven't
>> yet settled on a particular manufacturer's device, designed/laid out
>> a PCB, and fabbed prototype quantities.
>
> You may want to check out QEMU.
Please read -- and understand -- the DS-5 datasheet (paying particular
attention to "Fast Models"). AFAICT, the only "third party support"
approach that comes close is (ARM's) Foundation Models (FVP) offering:

  "FVPs, as their name suggests, are fixed. They are a black box on
  which you can test your software, safe in the knowledge that when the
  hardware arrives, you can port it over easily and quickly. FVPs are
  binaries derived from Fast Models, though unlike Fast Models are not
  customizable.

  "Fast Models give you the flexibility to add complex peripherals,
  infrastructure and ARM CoreLink interconnects along with a host of
  other ARM and third-party IP blocks. This gives software teams working
  on custom SoCs the ability to complete the majority of their software
  and integration ahead of the silicon availability."

And that's *still* a paid, closed product (with all the same potential
issues you raised, above). Yet it is only intended for folks doing
things like Linux application development (and, at that, probably only
on "generic ARM hardware" or, at best, the sorts of hardware that the
ARM folks envisioned for that *fixed* platform emulation).

How are the gcc/gdb folks (regardless of who is "backing" them) going to
support me, there? It would be like asking a generic tool vendor why the
code from their compiler (verified as "correct" by you *and* them) isn't
running on some vendor's particular "chip".
>> What incentive does someone supporting GCC/GDB have to resolve some
>> issue deep in the ARM IP?
>
> ARM have long contracted companies like CodeSourcery/Mentor to develop
> ARM support for the GNU toolchain, and have in recent years even taken
> over the development work themselves (AFAIK, previously they were afraid
> of stepping on the toes of third-party compiler vendors). If there are
> errata that need workarounds, they get added to GCC immediately.
>
> Most GCC development by far is done on a commercial basis, either
> sponsored by the chip makers directly or by vendors like Mentor or
> Red Hat. The incentives are the same as for any other compiler.
But there is more to development than getting the right *code* out of a
compiler (given the "input source" provided). Did you note my OP
reference to "bare metal"?

Do you, for example, contact your (x86-target) compiler vendor regarding
questions about why (YOUR CODE) isn't properly interacting with the MMU
on *Intel's* CPU? Or the interrupt controller? Camera interface
(remember, we're talking about SoCs, here)?

  "Yes, I agree your compiler is generating the correct ASM code for
  the HLL sources that I'm feeding it. But, my code doesn't work;
  where's my bug?? (or, the hardware issue that I'm currently unaware
  of)"

Do I have to build at least one *physical* instance of every "system"
for which I want to develop code before I can start troubleshooting it
(with an ICE)? How helpful will the "GCC/GDB" folks be in that regard?
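A concrete (and entirely hypothetical) example of that gap: the compiler
below generates exactly the stores the source asks for, yet without
volatile and a barrier the peripheral may never see them in a usable
order -- a bug no compiler vendor can fix for you. The register
addresses are invented for illustration:

    #include <stdint.h>

    /* hypothetical memory-mapped DMA controller registers;
       volatile keeps the compiler from eliding or caching the writes */
    #define DMA_SRC  (*(volatile uint32_t *)0x40001000u)
    #define DMA_LEN  (*(volatile uint32_t *)0x40001004u)
    #define DMA_GO   (*(volatile uint32_t *)0x40001008u)

    static inline void dsb(void) { asm volatile ("dsb" ::: "memory"); }

    void dma_kick(const uint32_t *buf, uint32_t len)
    {
        /* on a Cortex-A the buffer may also need cache cleaning --
           another thing "correct compiled code" won't do for you */
        DMA_SRC = (uint32_t)(uintptr_t)buf;
        DMA_LEN = len;
        dsb();          /* ensure the setup writes complete...        */
        DMA_GO  = 1;    /* ...before the doorbell starts the transfer */
    }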
On 4/20/2015 10:58 AM, John Devereux wrote:
> Don Y <this@is.not.me.com> writes:
>
>> [...]
>>
>> Regardless, I would assume (hope) that ARM would have more of an
>> incentive (and *specific* expertise on ARM-licensed devices) to get
>> you to production, as most of their revenues are probably derived
>> from recurring license fees (i.e., per-unit device sales). What
>> incentive does someone supporting GCC/GDB have to resolve some issue
>> deep in the ARM IP?
>
> Did you not see the link to ARM GCC?
>
> https://launchpad.net/gcc-arm-embedded
>
> It is maintained by ARM employees.
>
> They want to sell more of their chips (or, more accurately, they want
> their customers to sell more of their chips). That is their incentive.
Did you see my reference to "models"? Do the ARM GCC folks provide
(virtual) *hardware* support? Or refer you to your silicon vendor for
that??

Please see my (coincident) reply to Anders...
Don Y <this@is.not.me.com> wrote:
> Then only folks willing to shell out the $1K would be capable of
> kernel hacking "out-of-the-box"! It doesn't seem uncommon for FOSS
> projects to "arbitrarily" pick their own tools, build systems, etc.
> How is burdening a prospective developer with having to acquire and
> install (even if they are "free") a different version of make(1),
> a different VC system, testing framework, etc. any different than
> saying "shell out $X instead of your *time* if you want to play"?
> Haven't they, in effect, said their time and effort is worth
> enough that they should pick the tools and approach that *they*
> consider the most effective approach to the problem?
All the setup problems can be solved in FOSS by throwing them a
preconfigured virtual machine. You can't do that with proprietary
software unless you're very sure of the licences.

On a lower level, a package manager solves a lot of the same problems --
saying 'take Ubuntu xx.xx, type apt-get install libfoo bar2 xbaz and
you're ready'. Doesn't matter if the project uses weird tools, just add
a few more characters to the apt-get line. Again, it's more pain when
you have to mess about with downloading from behind paywalls, setting
up licence servers, etc.

Theo
Don Y <this@is.not.me.com> wrote:
> Do I have to build at least one *physical* instance of every "system"
> for which I want to develop code before I can start troubleshooting it
> (with an ICE)? How helpful will the "GCC/GDB" folks be in that regard?
I think it depends.

Models can be really useful, and ARM's are the obvious choice. If you're
modifying the processor, or implementing the ISA yourself, you need a
model. Models are also very handy for verification. Models can, however,
be slow (e.g. we have a MIPS formal model that runs at ~120 KIPS; some
models run at IPS -- memory-system modelling is a lot more complicated).

However, most end-users aren't dealing with implementing the ISA.
Indeed, most end-users aren't even building silicon. If you're just
writing software, it's the core that matters, and a Cortex-An in chip X
is the same as a Cortex-An in chip Y. The cache setup and the SoC
peripherals will be different, but so will the model from your intended
SoC.

So you have more visibility and rigour in a model, but a Cortex-An
emulator combined with a Cortex-An implementation may be sufficient, and
likely faster. In that case it doesn't matter so much what the
implementation is, since you are only interested in the core. Though I
don't know the state of verification of emulators -- I get the feeling
that QEMU and friends are rather ad hoc rather than being formally
checked against the ISA spec (verification gets rather complicated where
memory consistency and concurrency are involved).

Theo
Hi Theo,

On 4/21/2015 7:58 AM, Theo Markettos wrote:
> Don Y <this@is.not.me.com> wrote:
>> Do I have to build at least one *physical* instance of every "system"
>> for which I want to develop code before I can start troubleshooting it
>> (with an ICE)? How helpful will the "GCC/GDB" folks be in that regard?
>
> I think it depends.
>
> Models can be really useful, and ARM's are the obvious choice. If
> you're modifying the processor, or implementing the ISA yourself, you
> need a model. Models are also very handy for verification. Models can,
> however, be slow (e.g. we have a MIPS formal model that runs at
> ~120 KIPS; some models run at IPS -- memory-system modelling is a lot
> more complicated).
The goal isn't to create "virtual products" (that need to perform
exactly like the silicon in all regards). Rather, it's to be able to
verify the integrity of the software -- and, to some extent, proposed
*hardware* -- implementation(s) without having to physically implement
*every* board/design... only to discover that stepping up to a larger
memory complement or a more performant processor would have been a
better investment. Or, to understand what power consumption is likely to
be (given an actual, "modeled" performance level -- how many opcodes
actually *are* executed to perform this particular task?).

[I.e., the "physical hardware" approach to development gets expensive
very quickly when you have to redo two dozen designs -- schematic,
layout, fab -- just to add memory or beef up the processor, etc. Or, to
take advantage of newly announced -- but not yet available -- devices.
Ask your hardware folks what it costs -- time and money -- to change the
SoC on a board and just "reconnect the I/Os"!]

We routinely debug hardware "at DC" or "in a simulator". We
"single-step" code to verify its proper operation. Each approach
*models* the physical characteristics of the mechanism being evaluated.
> However, most end-users aren't dealing with implementing the ISA.
> Indeed, most end-users aren't even building silicon. If you're just
> writing software, it's the core that matters, and a Cortex-An in chip X
> is the same as a Cortex-An in chip Y. The cache setup and the SoC
> peripherals will be different, but so will the model from your intended
> SoC.
Most users (developers) aren't exposed to bare metal. When was the last
time you tinkered with the scheduling algorithm in your RTOS? Or tweaked
the paging algorithm in the virtual memory implementation?

I don't see the logic of crippling a development approach just so some
*potential* future developer/maintainer doesn't have to spend any money.
E.g., another approach that "solves" the problem is to *close* the RTOS
implementation and treat it entirely as a "binary component":

  "You don't need to understand or tinker with anything inside this box."

Just like many video, wireless, and network drivers in Linux. This gives
their developers the leeway to approach their tasks with whatever tools
they choose! Don't want to use their binary? Write your own, from
scratch, using whatever tools you want on the metal!
> So you have more visibility and rigour in a model, but a Cortex-An
> emulator combined with a Cortex-An implementation may be sufficient,
> and likely faster. In that case it doesn't matter so much what the
> implementation is, since you are only interested in the core.
But then you can only test the "instruction set" -- not the peripherals tied to that core. E.g., the "ARMulator" would let you test your ARM binaries... but any twiddling with cache/MMU controls was out of the question. Code that adjusted the "address decoder" did nothing, etc.
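That is exactly the class of code in question. A hedged ARMv7-A sketch
of turning the caches on -- on a core-only simulator this "executes"
without complaint while changing nothing observable:

    #include <stdint.h>

    void caches_on(void)
    {
        uint32_t sctlr;

        asm volatile ("mrc p15, 0, %0, c1, c0, 0" : "=r"(sctlr));
        sctlr |= (1u << 2) | (1u << 12);  /* SCTLR.C (D-cache), SCTLR.I (I-cache) */
        asm volatile ("mcr p15, 0, %0, c1, c0, 0\n\tisb" :: "r"(sctlr));

        /* an instruction-set-only model happily retires these
           instructions, so the timing and coherency consequences
           never appear in testing */
    }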
> Though I don't know the state of verification of emulators -- I get
> the feeling that QEMU and friends are rather ad hoc rather than being
> formally checked against the ISA spec (verification gets rather
> complicated where memory consistency and concurrency are involved).
Exactly. AFAICT, ARM's models are essentially the same IP that is used
to fab the silicon, placed in a wrapper that can be plugged into an IDE.
If that is, indeed, the case (as expressed by ARM), then this also gives
you a reference against which to measure other vendors' actual silicon:
"the model claims it should behave, thusly". I doubt any vendor would
lend much credence to how a FOSS model performed ("then the model is
flawed!").

But I'm not "developing Linux apps", so a *canned* model is unlikely to
do more than an instruction-set simulator would. Hence the appeal of
their "Fast Models" product.

Dunno. I stumbled on a colleague using DS-5 at an offsite and he raved
about it. While discussing it, another colleague chimed in about his
experiences with it and how effective it had been at getting code up and
running ("debugged") long before hardware or even silicon was available.
When I countered with gcc/gdb/Eclipse, both frowned as if they had a bad
taste in their mouths.

But we didn't have time to "play" with the tools there, so I'll have to
look at an evaluation copy to get a better feel for what they are
talking about. At the very least, I can probably run my code samples
through their toolchain and look at size/speed/memory utilization, etc.
Then I'll be on a better footing to "intelligently" discuss the pros and
cons and explore the model side at our *next* offsite (formally put it
on the agenda instead of trying to squeeze it in during a lunch break).

I was hoping folks, here, would have more experience with the tools to
comment, first-hand.

Morning tea...
On 4/21/2015 7:34 AM, Theo Markettos wrote:
> Don Y <this@is.not.me.com> wrote:
>> Then only folks willing to shell out the $1K would be capable of
>> kernel hacking "out-of-the-box"! It doesn't seem uncommon for FOSS
>> projects to "arbitrarily" pick their own tools, build systems, etc.
>> How is burdening a prospective developer with having to acquire and
>> install (even if they are "free") a different version of make(1),
>> a different VC system, testing framework, etc. any different than
>> saying "shell out $X instead of your *time* if you want to play"?
>> Haven't they, in effect, said their time and effort is worth
>> enough that they should pick the tools and approach that *they*
>> consider the most effective approach to the problem?
>
> All the setup problems can be solved in FOSS by throwing them a
> preconfigured virtual machine. You can't do that with proprietary
> software unless you're very sure of the licences.
>
> On a lower level, a package manager solves a lot of the same problems --
> saying 'take Ubuntu xx.xx, type apt-get install libfoo bar2 xbaz and
> you're ready'. Doesn't matter if the project uses weird tools, just add
> a few more characters to the apt-get line. Again, it's more pain when
> you have to mess about with downloading from behind paywalls, setting
> up licence servers, etc.
The issue is, what sort of (self-imposed!) obligation do I have to pick
tools that are "affordable" (with no concern for other issues that might
affect their overall utility/effectiveness)?

E.g., given that, IN REAL TERMS, few folks will ever be tinkering at
this level (how many Linux kernel hackers are there? How many userland
contributors? How many "simple users"? Seriously... offer up REAL
estimates of each of these!), is it really worth burdening *my* efforts
just to make it "inexpensive" for folks who *might* want to tinker in
the future?

My hardware designs are all "open" (schematics, films, etc.). But does
that mean I have to use FOSS tools to *create* them, as well? On the off
chance that someone will want to *modify* a design? Is it acceptable to
produce TIFFs (i.e., un-editable) of schematic pages so they don't have
to purchase those same tools? Even if that means more effort is required
for them to create an editable document in their tool-of-choice?

Should I only use thru-hole components for those folks who can't afford
the tools for SMT work? (Or should I leave the burden of making a
thru-hole version in *their* lap: "here's the schematic, YOU see if you
can find this SoC in a DIP...") What about components that are really
only available (or affordable) in large production quantities? Does
everything have to be a PIC or a PC?

Do I have to create the molds for the various plastic enclosures using
FOSS CAD tools in case someone wants to add another mounting boss? Ditto
any sheet-metal work? At what point do you say, "the likelihood of
someone tinkering with this thing is low enough that *they* can afford
to bear the costs if they choose to do so"?

I'm not sure how many FOSS projects you've looked at "under the hood".
Why so many different implementation-language choices (yet never a clear
analysis of why "this is better than that")? Or build systems (I have
more INCOMPATIBLE "makes" here than I can count!)? Different file
compressors (I've even got cpio archives!)? Different (incompatible)
VCSs?

Is the criterion "as long as a FREE piece of software/hardware can be
acquired to do the job, then you are at liberty to use whatever you
want"? I.e., within those constraints, the initial developer(s) are free
to choose whatever environment/implementation they want?

I'm surprised they are willing to pay for the actual *components*
(chips) and don't insist on those being "free", as well! :>
