Reply by David Brown September 6, 2022
On 06/09/2022 12:41, Philipp Klaus Krause wrote:
> On 06.09.22 at 11:09, David Brown wrote:
>
>> One important question, which I certainly can't answer myself, is whether this is worth the effort.
>
> That clearly depends on many aspects. What is the higher goal? What are the available resources? IMO improving the free toolchain for 8-bit devices is worth it at this time.
Fair enough. You have a far better idea of the users, the effort involved, and the developer commitment than I do.
>> […]How many of these users would switch toolchains, even if SDCC were made hugely better than whatever they have now? I'd expect almost none, they'd stick to what they have - most would not even upgrade to newer versions of the same tools that they already use.
>>
>> I would expect existing SDCC users to be more interested in upgrading, and they would always be happy with better code generation. But I do not imagine there are many /new/ users - either people starting working on 8051 projects today, or moving from commercial toolchains.
>
> Indeed there is a question of putting in effort to match the needs of different user groups, such as current SDCC users targeting µCs, current SDCC retrocomputing and retrogaming users, current users of non-free tools, etc. Naturally, SDCC developers do have an idea about the needs and wants of current SDCC users from the mailing lists, issue trackers, etc. On the other hand, such information was not readily available about users that currently use a non-free compiler for architectures supported by SDCC. Knowing how much overlap there is between what could be done for different user groups is already useful information. In particular, improving the machine-independent optimizations and debug support is something that will benefit both current and potential new users.
Reply by Don Y September 6, 2022
On 9/6/2022 3:12 AM, Philipp Klaus Krause wrote:
> On 06.09.22 at 10:59, Don Y wrote:
>
>> It could also be that many of the 8b devices are just not seeing much market share (or have fallen out of production). How many 68xx devices win designs nowadays? Does Zilog even make processors anymore? Etc.
>
> However, there are still plenty of people compiling code for the Z80 and SM83. But practically no one uses non-free compilers to do that.
I think much of that has to do with *when* those devices came on the market. The choices for toolchains in the 68xx(x) and 808x/Z8x eras were barely more than manufacturer-supplied tools (e.g., under ISIS on the Intellec, RIO on the ZDS, Versados on the EXORmacs, etc.). Recall that PCs only came into being in the early 80's; CP/M boxen were more common as nonproprietary platforms. I didn't use PC-hosted tools until the NS32K -- and even those weren't actually hosted on an x86!
> Most use SDCC either directly or via the z88dk fork. A few use zcc or ack. All of these are free, so not covered by the question that started the thread.
I'm sure every device I designed is still using the toolchain that I selected at the time -- hence my comment about "inertia" in my initial post in this thread. There are a fair number of products that have very long lifetimes where the cost of making a significant change (i.e., complete redesign) drags in so many externalities that it becomes prohibitive. "If it ain't broke, don't fix it!" (I have some devices that are still being supported 30+ years after the initial design.)

ISTR the US military still uses 6502's in some of their armaments. And I know there was a rad-hard 8085 some time ago...
> It is mostly a retrocomputing / -gaming crowd. Since many of them are willing to try development snapshots, and report bugs, their use of SDCC helps a lot in spotting bugs in SDCC early, so they can be fixed before a release.
Most of the arcade pieces that I'm familiar with were developed in ASM (though I have no idea what the design methodology was for consoles). Often, the "OS" (more of an "executive") was tailored to a very low-overhead implementation that doesn't lend itself to the use of HLLs (e.g., a single stack, so any multitasking has to ensure the stack protocol isn't violated across a task switch).

[There was also a lot of proprietary hardware to manipulate the video out-of-band, as the processors of that era couldn't update displays as fast as they were refreshed!]
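To make that single-stack constraint concrete, here is a minimal sketch in C -- entirely hypothetical, not taken from any actual arcade executive, with invented task names -- of the run-to-completion dispatch style such systems typically forced: every "task" must return before the next one runs, so the one shared stack is always unwound at the only points where a switch can occur. Preemptive switching would need a separate stack per task, which these executives didn't have.

/* Hypothetical single-stack executive: tasks are plain functions that
 * run to completion, so the one shared stack is empty whenever the
 * dispatcher regains control. */
typedef void (*task_fn)(void);

static void read_inputs(void)    { /* hypothetical: poll controls      */ }
static void update_sprites(void) { /* hypothetical: queue display work */ }

static task_fn tasks[] = { read_inputs, update_sprites };
#define NUM_TASKS (sizeof tasks / sizeof tasks[0])

void executive_loop(void)
{
    unsigned i;
    for (;;)
        for (i = 0; i < NUM_TASKS; i++)
            tasks[i]();   /* each task must return before the next runs */
}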
Reply by Philipp Klaus Krause September 6, 2022
On 06.09.22 at 11:09, David Brown wrote:
> One important question, which I certainly can't answer myself, is whether this is worth the effort.
That clearly depends on many aspects. What is the higher goal? What are the available resources? IMO improving the free toolchain for 8-bit devices is worth it at this time.
> […]How many of these users would switch toolchains, even if SDCC were made hugely better than whatever they have now? I'd expect almost none, they'd stick to what they have - most would not even upgrade to newer versions of the same tools that they already use.
>
> I would expect existing SDCC users to be more interested in upgrading, and they would always be happy with better code generation. But I do not imagine there are many /new/ users - either people starting working on 8051 projects today, or moving from commercial toolchains.
Indeed there is a question of putting in effort to match the needs of different user groups, such as current SDCC users targeting µCs, current SDCC retrocomputing and retrogaming users, current users of non-free tools, etc. Naturally, SDCC developers do have an idea about the needs and wants of current SDCC users from the mailing lists, issue trackers, etc. On the other hand, such information was not readily available about users that currently use a non-free compiler for architectures supported by SDCC. Knowing how much overlap there is between what could be done for different user groups is already useful information. In particular, improving the machine-independent optimizations and debug support is something that will benefit both current and potential new users.
Reply by Philipp Klaus Krause September 6, 2022
On 06.09.22 at 10:59, Don Y wrote:
> It could also be that many of the 8b devices are just not seeing much market share (or have fallen out of production). How many 68xx devices win designs nowadays? Does Zilog even make processors anymore? Etc.
However, there are still plenty of people compiling code for the Z80 and SM83. But practically no one uses non-free compilers to do that. Most use SDCC either directly or via the z88dk fork. A few use zcc or ack. All of these are free, so not covered by the question that started the thread. It is mostly a retrocomputing / -gaming crowd. Since many of them are willing to try development snapshots, and report bugs, their use of SDCC helps a lot in spotting bugs in SDCC early, so they can be fixed before a release.

Philipp
Reply by David Brown September 6, 2022
On 06/09/2022 09:22, Philipp Klaus Krause wrote:
> On 05.09.22 at 19:32, Don Y wrote:
>
>> I've rarely worried about code *size* and only seldom worried about efficiency (execution speed).
>>
>> But, I *do* get annoyed if the generated code doesn't do what it was supposed to do! Or, does it with unexpected side-effects, etc.
>
> However, the replies so far show that code size, not wrong code, is the problem. IMO, that is not surprising for mcs51: The mcs51 port in SDCC is old, bug reports come in rarely, and in recent years, most work on mcs51 has been bugfixes. IMO, the mcs51 port is very stable. Improving code generation always comes with the risk of introducing bugs. Still, if time allows, it might be worth it (and I hope that most of the new bugs will be found before a release).
<snip>
> Well, I asked for reasons why people are using non-free compilers instead of SDCC. Many of the replies were indeed for mcs51. IMO, this is because the mcs51 is a common µC where SDCC has fallen behind vs. the non-free compilers.
>
> SDCC has other ports, which got far fewer replies, because the architectures are less common (e.g. ds390) or because SDCC is already the leading compiler for them (e.g. stm8).
>
> 0)-3) were chosen in a way that I hope will make SDCC more competitive for mcs51, while not neglecting other ports.
One important question, which I certainly can't answer myself, is whether this is worth the effort. For the most part, 8-bit microcontrollers are a dying breed. The only real exception is the AVR, which is a very different kind of processor and well supported by gcc (and maybe clang/llvm?).

It used to be the case that whenever a chip manufacturer wanted a small processor in their device - radio chip, complex analogue converter, etc. - they put in an 8051. Now they put in an ARM Cortex-M device. So these kinds of brain-dead 8-bit CISC cores are almost only for legacy use - when a company already has so much time and money invested in hardware or software that is tied tightly to such cores that they cannot easily change to something from this century.

How many of these users would switch toolchains, even if SDCC were made hugely better than whatever they have now? I'd expect almost none, they'd stick to what they have - most would not even upgrade to newer versions of the same tools that they already use.

I would expect existing SDCC users to be more interested in upgrading, and they would always be happy with better code generation. But I do not imagine there are many /new/ users - either people starting working on 8051 projects today, or moving from commercial toolchains.

It's great that there are still people interested in improving this venerable toolchain. But when you start talking about a person-year of work, that's a lot of effort - it is not going to happen unless there is a clear justification for the cost. (Maybe it is possible to make this a student project for someone studying compiler design?)
Reply by Don Y September 6, 2022
On 9/6/2022 12:22 AM, Philipp Klaus Krause wrote:
> On 05.09.22 at 19:32, Don Y wrote:
>
>> I've rarely worried about code *size* and only seldom worried about efficiency (execution speed).
>>
>> But, I *do* get annoyed if the generated code doesn't do what it was supposed to do! Or, does it with unexpected side-effects, etc.
>
> However, the replies so far show that code size, not wrong code, is the problem.
Understood. I was merely relaying my experiences (e.g., abandoning MS because of their approach to bug fixes).

Most of my "smaller" projects have had large codebases (it wasn't uncommon to have a 250KB binary running on an 8b MCU; sewing various "bank switching" schemes into the toolkit was a prerequisite).
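For readers who haven't had to do that, below is a minimal sketch of one common bank-switching scheme. It is hypothetical (the latch address, register macro, and function names are invented for illustration, and real schemes are usually assembler glue tied to the linker's banking support), but it shows the shape of what has to be sewn in: a trampoline in always-mapped memory that selects the bank holding the target function, calls it, and restores the previous bank.

#include <stdint.h>

/* Hypothetical write-only bank-select latch mapped into the address space. */
#define BANK_SELECT_REG (*(volatile uint8_t *)0xFF00u)

static uint8_t current_bank;   /* shadow copy, since the latch can't be read back */

typedef void (*bank_fn)(void);

/* Must itself live in common (always-mapped) memory, along with the stack,
 * or the return path disappears when the bank is switched. */
void call_banked(uint8_t bank, bank_fn fn)
{
    uint8_t previous = current_bank;

    current_bank = bank;       /* map in the bank that holds fn */
    BANK_SELECT_REG = bank;

    fn();

    current_bank = previous;   /* restore the caller's bank */
    BANK_SELECT_REG = previous;
}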
> IMO, that is not surprising for mcs51: The mcs51 port in SDCC is old, bug reports come in rarely, and in recent years, most work on mcs51 has been bugfixes. IMO, the mcs51 port is very stable. Improving code generation always comes with the risk of introducing bugs. Still, if time allows, it might be worth it (and I hope that most of the new bugs will be found before a release).
>
>> To that end, the biggest win was vendor responsiveness; knowing that reporting a bug will result in prompt attention to fix *that* bug […]
>>
>> Unfortunately (for you, supporting a product), the only way to get that sort of responsiveness is to make "support" your full-time job. <frown>
>
> Unpaid support with fixed response times for a free compiler doesn't look like a good full-time job to me.
Exactly. FOSS projects that thrive seem to rely on lots of eyes and hands so the "load" isn't too great on any one individual. But, many projects are relatively easy to contribute to without requiring specific knowledge beyond "this code fragment looks broken". E.g., I have no problem committing patches for drivers and many services -- but don't bother doing so with gcc as the "admission fee" is too high.
> IMO, in general, the SDCC support channels (ticket trackers, mailing lists) are quite responsive; most of the time, there is a reply within hours, but sometimes it takes much longer.
My experience with tools for small processors predates "internet forums". I would typically have had to log on (with a modem) to a vendor's "BBS" and leave a message there, picking up a new binary (from there) when available and transferring it via X/Y/ZMODEM to my own host. One typically didn't see correspondence from other customers. Nor do I imagine they saw my bug reports or the vendors' announcements of new binaries built in response to those (unless the vendor deliberately reached out to them).
>>> In my opinion, the best way forward from here to make SDCC more competitive vs. non-free compilers is:
>>>
>>> 0) Improve machine-independent optimizations
>>> 1) Improve machine-dependent optimizations for mcs51
>>> 2) Improve debug support and integration
>>> 3) Find and fix bugs
>>
>> If "uptake" is your goal, you might focus on just a single processor (8051 family seems a common application) and be known for how well you address *that* segment of the market -- rather than trying to bring the quality of all code generators up simultaneously.
>
> Well, I asked for reasons why people are using non-free compilers instead of SDCC. Many of the replies were indeed for mcs51. IMO, this is because the mcs51 is a common µC where SDCC has fallen behind vs. the non-free compilers.
It could also be that many of the 8b devices are just not seeing much market share (or have fallen out of production). How many 68xx devices win designs nowadays? Does Zilog even make processors anymore? Etc.

Other "small CPU" vendors often offer their own toolchains, thus removing the burden of that expense (free competing with free). OTOH, the '51 (et al.) is a pretty ubiquitous architecture offered by a variety of vendors. And, at relatively high levels of integration (compared to 8b processors of days gone by).
> SDCC has other ports, which got far fewer replies, because the architectures are less common (e.g. ds390) or because SDCC is already the leading compiler for them (e.g. stm8).
>
> 0)-3) were chosen in a way that I hope will make SDCC more competitive for mcs51, while not neglecting other ports.
Again, good luck!
Reply by Philipp Klaus Krause September 6, 2022
On 05.09.22 at 19:32, Don Y wrote:
> I've rarely worried about code *size* and only seldom worried about efficiency (execution speed).
>
> But, I *do* get annoyed if the generated code doesn't do what it was supposed to do! Or, does it with unexpected side-effects, etc.
However, the replies so far show that code size, not wrong code, is the problem. IMO, that is not surprising for mcs51: The mcs51 port in SDCC is old, bug reports come in rarely, and in recent years, most work on mcs51 has been bugfixes. IMO, the mcs51 port is very stable. Improving code generation always comes with the risk of introducing bugs. Still, if time allows, it might be worth it (and I hope that most of the new bugs will be found before a release).
> To that end, the biggest win was vendor responsiveness; knowing that reporting a bug will result in prompt attention to fix *that* bug […]
>
> Unfortunately (for you, supporting a product), the only way to get that sort of responsiveness is to make "support" your full-time job. <frown>
Unpaid support with fixed response times for a free compiler doesn't look like a good full-time job to me. IMO, in general, the SDCC support channels (ticket trackers, mailing lists) are quite responsive; most of the time, there is a reply within hours, but sometimes it takes much longer.
> […]
>
> [The devices I used were unlike current offerings in that they didn't require large "vendor/manufacturer libraries" to implement basic functionality of on-chip components]
That is still true for many 8-bit devices, which are the targets of SDCC.
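As a concrete illustration, here is a minimal sketch of driving an on-chip peripheral with nothing more than the compiler's SFR definitions. It assumes a classic 8051 derivative with the standard UART, an 11.0592 MHz crystal, and SDCC's <8051.h> register names; a specific part may name or locate these registers differently, so treat it as a sketch rather than a drop-in driver.

#include <8051.h>   /* SDCC's standard-8051 SFR and bit definitions */

/* Set up the on-chip UART for 9600 baud, assuming an 11.0592 MHz clock. */
void uart_init(void)
{
    SCON = 0x50;                  /* mode 1: 8-bit UART, receiver enabled  */
    TMOD = (TMOD & 0x0F) | 0x20;  /* timer 1 in mode 2 (8-bit auto-reload) */
    TH1  = 0xFD;                  /* reload value for 9600 baud            */
    TR1  = 1;                     /* start timer 1                         */
    TI   = 1;                     /* mark the transmitter as idle          */
}

void uart_putc(char c)
{
    while (!TI)                   /* wait for the previous byte to go out */
        ;
    TI = 0;
    SBUF = c;                     /* writing SBUF starts transmission     */
}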
>> In my opinion, the best way forward from here to make SDCC more competitive vs. non-free compilers is:
>>
>> 0) Improve machine-independent optimizations
>> 1) Improve machine-dependent optimizations for mcs51
>> 2) Improve debug support and integration
>> 3) Find and fix bugs
>
> If "uptake" is your goal, you might focus on just a single processor (8051 family seems a common application) and be known for how well you address *that* segment of the market -- rather than trying to bring the quality of all code generators up simultaneously.
Well, I asked for reasons why people are using non-free compilers instead of SDCC. Many of the replies were indeed for mcs51. IMO, this is because the mcs51 is a common µC where SDCC has fallen behind vs. the non-free compilers.

SDCC has other ports, which got far fewer replies, because the architectures are less common (e.g. ds390) or because SDCC is already the leading compiler for them (e.g. stm8).

0)-3) were chosen in a way that I hope will make SDCC more competitive for mcs51, while not neglecting other ports.
Reply by Don Y September 5, 2022
On 9/5/2022 8:33 AM, Philipp Klaus Krause wrote:
> Thanks for all the replies, here and elsewhere. Since new ones are now arriving only very slowly, I'd like to give a quick summary.
>
> I'll quote just one reply in full, since in just a few lines it illustrates the main points:
>
> "In my case the customer requested SDCC based project but it failed to compile into the small flash size. Debugging was quite difficult. Using the Simplicity Studio and Keil Compiler pairing made the code small enough to fit into the device and made debugging much easier."
>
> The 3 most-cited reasons to not use SDCC were:
>
> * Lack of efficiency of the code generated by SDCC.
> * Better debug support and integration in non-free toolchains.
> * Availability of paid support for non-free compilers.
I've rarely worried about code *size* and only seldom worried about efficiency (execution speed).

But, I *do* get annoyed if the generated code doesn't do what it was supposed to do! Or, does it with unexpected side-effects, etc.

To that end, the biggest win was vendor responsiveness; knowing that reporting a bug will result in prompt attention to fix *that* bug (so I don't have to explore alternative ways of writing the code to avoid triggering it -- and then leave a "FIXME" to remind myself to restore the code to its "correct" form once the compiler is fixed).

When I was doing small processors (early 80's thru 90's), I developed relationships with a few vendors that let me get overnight turnaround on bug reports. In addition to the quick response, I *knew* that the changes in the tools were only oriented towards my reported bug; I didn't have to worry about some "major rewrite" that likely introduced NEW bugs elsewhere!

[I abandoned MS's tools when I reported a bug -- a pointer to a member function -- and was offered a completely new version of the compiler, "for free" (what, so I can debug THIS compiler, too??)]

Unfortunately (for you, supporting a product), the only way to get that sort of responsiveness is to make "support" your full-time job. <frown>

The other big win I found in tools of that era was how well the "under the hood" aspects of the code generator and support routines were documented. As I would have to modify the generated code to exist in a multitasking environment, I wanted to know where helper routines stored any static data on which they relied. Or, rewrite standard libraries to support reentrancy. Or, hook the debugger so I could see *a* task's evolving state regardless of the actions of other tasks (this isn't always trivial).

[The devices I used were unlike current offerings in that they didn't require large "vendor/manufacturer libraries" to implement basic functionality of on-chip components]
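To make the reentrancy point concrete, here is a minimal sketch (hypothetical code, not Don's and not from any particular vendor's library) of the kind of rewrite being described: the first helper parks its result in hidden static storage, so a task switch between the call and the use of the result can corrupt it, while the second takes a caller-supplied buffer and can safely be shared between tasks.

#include <stdint.h>

/* Non-reentrant: the returned pointer refers to static storage shared by
 * every caller. */
static char hex_buf[5];

const char *u16_to_hex(uint16_t v)
{
    static const char digits[] = "0123456789ABCDEF";  /* read-only, safe to share */
    hex_buf[0] = digits[(v >> 12) & 0xF];
    hex_buf[1] = digits[(v >> 8)  & 0xF];
    hex_buf[2] = digits[(v >> 4)  & 0xF];
    hex_buf[3] = digits[ v        & 0xF];
    hex_buf[4] = '\0';
    return hex_buf;
}

/* Reentrant rewrite: each task supplies its own buffer (at least 5 bytes),
 * so there is no hidden state for a task switch to clobber. */
char *u16_to_hex_r(uint16_t v, char buf[5])
{
    static const char digits[] = "0123456789ABCDEF";
    buf[0] = digits[(v >> 12) & 0xF];
    buf[1] = digits[(v >> 8)  & 0xF];
    buf[2] = digits[(v >> 4)  & 0xF];
    buf[3] = digits[ v        & 0xF];
    buf[4] = '\0';
    return buf;
}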
> In my opinion, the best way forward from here to make SDCC more competitive vs. non-free compilers is:
>
> 0) Improve machine-independent optimizations
> 1) Improve machine-dependent optimizations for mcs51
> 2) Improve debug support and integration
> 3) Find and fix bugs
If "uptake" is your goal, you might focus on just a single processor (8051 family seems a common application) and be known for how well you address *that* segment of the market -- rather than trying to bring the quality of all code generators up simultaneously. Good luck!
Reply by Philipp Klaus Krause September 5, 2022
Thanks for all the replies, here and elsewhere. Since new ones are now
arriving only very slowly, I'd like to give a quick summary.

I'll quote just one reply in full, since in just a few lines it 
illustrates the main points:

"In my case the customer requested SDCC based project but it failed to
compile into the small flash size. Debugging was quite difficult. Using
the Simplicity Studio and Keil Compiler pairing made the code small
enough to fit into the device and made debugging much easier."

The 3 most-cited reasons to not use SDCC were:

* Lack of efficiency of the code generated by SDCC.
* Better debug support and integration in non-free toolchains.
* Availability of paid support for non-free compilers.

In my opinion, the best way forward from here to make SDCC more 
competitive vs. non-free compilers is:

0) Improve machine-independent optimizations
1) Improve machine-dependent optimizations for mcs51
2) Improve debug support and integration
3) Find and fix bugs

I'd estimate the total effort at a full-time position for slightly more 
than a year, though even less effort should allow some improvements.

Philipp


Reply by Michael Schwingen September 4, 2022
On 2022-07-20, Philipp Klaus Krause <pkk@spth.de> wrote:
> I wonder why some developers choose non-free compilers (Keil, IAR, Cosmic, Raisonance, etc) when targeting architectures supported by the free Small Device C Compiler (SDCC).
For 8051, Keil seems to generate better code than SDCC - I am currently doing some work on an old TI CC2511 (8051-core) chip, and tend to run into data size issues because SDCC statically allocates variables for all function parameters - Keil does have better optimizations for that (and probably also a better code generator, but I don't have much experience with Keil).

Also, at work, we have used IAR because TI only supplied binary libraries for the CC2541 for that compiler (we had to get the correct compiler version, the latest-and-greatest would not do).

If I can choose the chip, I tend to choose something that has working gcc support if possible.

cu
Michael
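A minimal sketch of the trade-off Michael describes, based on SDCC's documented mcs51 conventions (the function names are invented for illustration): by default, SDCC places parameters and locals in statically allocated, overlayable data space, which is fast but consumes data memory and rules out recursion; declaring a function __reentrant (or compiling with --stack-auto) puts them on the stack instead, at a cost in code size and speed on the 8051.

#include <stdint.h>

/* Default mcs51 model: a, b and any temporaries live in statically
 * allocated (overlayable) data space rather than on the stack. */
uint8_t scale_static(uint8_t a, uint8_t b)
{
    return (uint8_t)((a * b) >> 4);
}

/* __reentrant: parameters and locals go on the stack, so the function can
 * recurse or be shared between contexts, but the code is bigger and slower
 * given the 8051's tiny hardware stack. */
uint8_t scale_reentrant(uint8_t a, uint8_t b) __reentrant
{
    return (uint8_t)((a * b) >> 4);
}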