EmbeddedRelated.com
Forums

How to write a simple driver in bare metal systems: volatile, memory barrier, critical sections and so on

Started by pozz October 22, 2021
On 10/26/2021 10:22 PM, antispam@math.uni.wroc.pl wrote:
>> One wants to be able to move towards the goal of software *components*.
>> You don't want to have to inspect the design of every *diode* that
>> you use; you want to look at its overall specifications and decide
>> if those fit your needs.
>
> Sure, I would love to see really reusable components.  But IMHO we
> are quite far from that.
Do you use the standard libraries?  Aren't THEY components?  You rely on the compiler to decide how to divide X by Y -- instead of writing your own division routine.  How often do you reimplement ?printf() to avoid all of the bloat that typically accompanies it?  (When was the last time you needed ALL of those format specifiers in an application?  And modifiers?)
> There are some things which are reusable
> if you accept modest to severe overhead.
What you need is components with varying characteristics. You can buy diodes with all sorts of current carrying capacities, PIVs, package styles, etc. But, they all still perform the same function. Why so many different part numbers? Why not just use the biggest, baddest diode in ALL your circuits? I.e., we readily accept differences in "standard components" in other disciplines; why not when it comes to software modules?
> For example things tend
> to compose nicely if you dynamically allocate everything and use
> garbage collection.  But the performance cost may be substantial.
> And in an embedded setting garbage collection may be unacceptable.
> In some cases I have found out that I can get much better
> speed joining things that could be done as a composition of library
> operations into a single big routine.
Sure, but now you're tuning a solution to a specific problem. I've designed custom chips to solve particular problems. But, they ONLY solve those particular problems! OTOH, I use lots of OTC components in my designs because those have been designed (for the most part) with an eye towards meeting a variety of market needs.
> In other cases I fixed
> bugs by replacing a composition of library routines with a single
> routine: there were interactions making the simple composition
> incorrect.  The correct alternative was a single routine.
>
> As I wrote, my embedded programs are simple and small.  But I
> use almost no external libraries.  Trying some existing libraries
> I have found out that some produce rather large programs, linking
> in a lot of unneeded stuff.
Because they try to address a variety of solution spaces without trying to be "optimal" for any. You trade flexibility/capability for speed/performance/etc.
> Of course, writing from scratch
> will not scale to bigger programs.  OTOH, I feel that with
> proper tooling it would be possible to retain efficiency and
> small code size at least for a large class of microcontroller
> programs (but existing tools and libraries do not support this).
Templates are an attempt in this direction.  Allowing a class of problems to be solved once and then tailored to the specific application.

But, personal experience is where you win the most.  You write your second or third UART driver and start realizing that you could leverage a previous design if you'd just thought it out more fully -- instead of tailoring it to the specific needs of the original application.

And, as you EXPECT to be reusing it in other applications (as evidenced by the fact that it's your third time writing the same piece of code!), you anticipate what those *might* need and think about how to implement those features "economically".

It's rare that an application is *so* constrained that it can't afford a couple of extra lines of code, here and there.  If you've considered efficiency in the design of your algorithms, then these little bits of inefficiency will be below the noise floor.
>> Unlikely that this code will describe itself as "works well enough
>> SOME of the time..."
>>
>> And, when/if you stumble on such faults, good luck explaining to
>> your customer why it's going to take longer to fix and retest the
>> *existing* codebase before you can get on with your modifications...
>
> Commercial vendors like to say how good their programs are.  But
> the market reality is that a program may be quite bad and still sell.
The same is true of FOSS -- despite the claim that many eyes (may) have looked at it (suggesting that bugs would have been caught!)

From "KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs":

    KLEE finds important errors in heavily-tested code.  It
    found ten fatal errors in COREUTILS (including three
    that had escaped detection for 15 years), which account
    for more crashing bugs than were reported in 2006, 2007
    and 2008 combined.  It further found 24 bugs in BUSYBOX, 21
    bugs in MINIX, and a security vulnerability in HISTAR -- a
    total of 56 serious bugs.

Ooops!  I wonder how many FOSS *eyes* missed those errors?

Every time you reinvent a solution, you lose much of the benefit of the previous TESTED solution.
Don Y <blockedofcourse@foo.invalid> wrote:
> On 10/26/2021 10:22 PM, antispam@math.uni.wroc.pl wrote:
<snip>
> Do you use the standard libraries?
Yes, I use libraries when appropriate.
> Aren't THEY components?
Well, some folks expect more from components than from traditional libraries.  Some even claim to deliver.  However, libraries have limitations and ATM I see nothing that fundamentally changes the situation.
> You rely on the compiler to decide how to divide X by Y -- instead
> of writing your own division routine.
Well, normally in C code I rely on compiler provided division.  To tell the truth, my MCU code uses division sparingly, only when I can not avoid it.  OTOH I also use languages with multiprecision integers.  In one case I use compiler provided routines, but I am the provider of a modified compiler and the modification includes replacement of the division routine.  In the other case I override the compiler supplied division routine with my own (which in turn sends the real work to an external library).
> How often do you reimplement
> ?printf() to avoid all of the bloat that typically accompanies it?
I did that once (for an OS kernel where the standard library would not work).  If needed I can reuse it.  On PCs I am not worried by bloat due to printf.  OTOH, on MCUs I am not sure if I ever used printf.  Rather, printing was done by specialized routines, either library provided or my own.
<snip>
> What you need is components with varying characteristics.
> You can buy diodes with all sorts of current carrying capacities,
> PIVs, package styles, etc.  But, they all still perform the
> same function.  Why so many different part numbers?  Why not
> just use the biggest, baddest diode in ALL your circuits?
I heard such electronic analogies many times.  But they miss an important point: there is no way for me to make my own diode, I am stuck with what is available on the market.  And a diode is a logically pretty simple component, yet we need many kinds.
> I.e., we readily accept differences in "standard components"
> in other disciplines; why not when it comes to software
> modules?
Well, software is _much_ more complicated than physical engineering artifacts.  A physical thing may have 10000 joints, but if the joints are identical, then this is the moral equivalent of a simple loop that just iterates a fixed number of times.  At the software level the number of possible pre-composed blocks is so large that it is infeasible to deliver all of them.

The classic trick is to parametrize.  However, even if you parametrize, there are hundreds of design decisions going into a relatively small piece of code.  If you expose all design decisions then the user may as well write his/her own code, because the complexity will be similar.  So normally parametrization is limited and there will be users who find the hardcoded design choices inadequate.

Another thing is that current tools are rather weak at supporting parametrization.
<snip>
> Sure, but now you're tuning a solution to a specific problem.
> I've designed custom chips to solve particular problems.
> But, they ONLY solve those particular problems!  OTOH,
> I use lots of OTC components in my designs because those have
> been designed (for the most part) with an eye towards
> meeting a variety of market needs.
Maybe I made a wrong impression, I think some explanation is in place here.  I am trying to make my code reusable.  For my problems performance is an important part of reusability: our capability to solve a problem is limited by performance, and with better performance users can solve bigger problems.  I am re-using code where I can, and I would re-use more if I could, but there are technical obstacles.  Also, while I am trying to make my code reusable, there are intrusive design decisions which may interfere with your possibility and willingness to re-use.

In a slightly different spirit: in another thread you wrote about accessing the disc without the OS file cache.  Here I normally depend on the OS, and OS file caching is a big thing.  It is not perfect, but the OS (OK, at least Linux) is doing this reasonably well, so I have no temptation to avoid it.  And I appreciate that with the OS cache performance is usually much better than it would be "without cache".  OTOH, I routinely avoid stdio for I/O critical things (so no printf in I/O critical code).
<snip>
> Because they try to address a variety of solution spaces without
> trying to be "optimal" for any.  You trade flexibility/capability
> for speed/performance/etc.
I think that this is more subtle: libraries frequently force some way of doing things.  Which may be good if you are trying to quickly roll a solution and are within the capabilities of the library.  But if you need/want a different design, then the library may be too inflexible to deliver it.
> > Of course, writing from scratch
> > will not scale to bigger programs.  OTOH, I feel that with
> > proper tooling it would be possible to retain efficiency and
> > small code size at least for a large class of microcontroller
> > programs (but existing tools and libraries do not support this).
>
> Templates are an attempt in this direction.  Allowing a class of
> problems to be solved once and then tailored to the specific
> application.
Yes, templates could help.  But they also have problems.  One of them is that (among others) I would like to target STM8 and I have no C++ compiler for STM8.  My idea is to create a custom "optimizer/generator" for (annotated) C code.  ATM it is vapourware, but I think it is feasible with reasonable effort.
<snip>
> It's rare that an application is *so* constrained that it can't
> afford a couple of extra lines of code, here and there.  If
> you've considered efficiency in the design of your algorithms,
> then these little bits of inefficiency will be below the noise floor.
Well, I am not talking about a "couple of extra lines".  Rather about, IMO, substantial fixed overhead.  As I wrote, one of my targets is STM8 with 8k flash, another is MSP430 with 16k flash, another is STM32 with 16k flash (there are also bigger targets).  One of the libraries/frameworks for STM32, after activating a few features, pulled in about 16k of code; this is substantial overhead given how few features I needed.  Other folks reported that for trivial programs vendor supplied frameworks pulled in close to 30k of code.  That may be fine if you have a bigger device and need the features, but for smaller MCUs it may be the difference between not fitting into the device or (without the library) having plenty of free space.

When I tried it, FreeRTOS for STM32 needed about 8k flash.  Which is fine if you need an RTOS.  But ATM my designs run without an RTOS.  I have found libopencm3 to have small overhead.  But its routines are doing so little that direct register access may give simpler code.
<snip>
> From "KLEE: Unassisted and Automatic Generation of High-Coverage
> Tests for Complex Systems Programs":
>
>     KLEE finds important errors in heavily-tested code.  It
>     found ten fatal errors in COREUTILS (including three
>     that had escaped detection for 15 years), which account
>     for more crashing bugs than were reported in 2006, 2007
>     and 2008 combined.  It further found 24 bugs in BUSYBOX, 21
>     bugs in MINIX, and a security vulnerability in HISTAR -- a
>     total of 56 serious bugs.
>
> Ooops!  I wonder how many FOSS *eyes* missed those errors?
Open source folks tend to be more willing to talk about bugs.  And the above nicely shows that there are a lot of bugs, most waiting to be discovered.
> Every time you reinvent a solution, you lose much of the benefit
> of the previous TESTED solution.
The TESTED part works for simple repeatable tasks.  But if you have a complex task it is quite likely that you will be the first person with the given use case.  gcc is a borderline case: if you throw really new code at it you can expect to see bugs.  The gcc user community is large and there is a reasonable chance that somebody wrote earlier code which is sufficiently similar to yours to catch troubles.  But there are domains that are at least as complicated as compilation and have a much smaller user community.  You may find out that there is _no_ code that could be reasonably re-used.  Were you ever in a situation where you looked at how some "standard library" solves a tricky problem and realized that in fact the library does not solve the problem?

--
Waldek Hebisch
On 10/31/2021 3:54 PM, antispam@math.uni.wroc.pl wrote:
>> Aren't THEY components?
>
> Well, some folks expect more from components than from
> traditional libraries.  Some even claim to deliver.
> However, libraries have limitations and ATM I see nothing
> that fundamentally changes the situation.
A component is something that you can use as a black box, without having to reinvent it. It is the epitome of reuse.
>> How often do you reimplement
>> ?printf() to avoid all of the bloat that typically accompanies it?
>
> I did that once (for an OS kernel where the standard library would not
> work).  If needed I can reuse it.  On PCs I am not worried by
> bloat due to printf.  OTOH, on MCUs I am not sure if I ever used
> printf.  Rather, printing was done by specialized routines,
> either library provided or my own.
You can also create a ?printf() that you can configure at build time to support the modifiers and specifiers that you know you will need.

Just like you can configure a UART driver to support a FIFO size defined at configuration, hardware handshaking, software flow control, the high and low water marks for each of those (as they can be different), the character to send to request the remote to stop transmitting, the character you send to request resumption of transmission, which character YOU will recognize as requesting your Tx channel to pause, the character (or condition) you will recognize to resume your Tx, whether or not you will sample the condition codes in the UART, how you read/write the data register, how you read/write the status register, etc.

While these sound like lots of options, they are all relatively trivial additions to the code.
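A minimal sketch of the sort of thing I mean -- all names are illustrative (putchar_raw() stands in for whatever raw output routine the board provides), not from any real library:

    /* tiny_printf.c -- build-time configurable printf (sketch) */
    #include <stdarg.h>

    #define CFG_PRINTF_HEX 1            /* set to 0 to drop %x entirely */

    extern void putchar_raw(char c);    /* board-specific raw output */

    #if CFG_PRINTF_HEX
    static void emit_hex(unsigned v)    /* fixed-width, leading zeros */
    {
        for (int i = (int)sizeof v * 8 - 4; i >= 0; i -= 4)
            putchar_raw("0123456789abcdef"[(v >> i) & 0xFu]);
    }
    #endif

    void tiny_printf(const char *fmt, ...)
    {
        va_list ap;
        va_start(ap, fmt);
        for (; *fmt; fmt++) {
            if (*fmt != '%') { putchar_raw(*fmt); continue; }
            fmt++;
            if (*fmt == '\0')
                break;                  /* stray '%' at end of format */
            switch (*fmt) {
            case 'c': putchar_raw((char)va_arg(ap, int)); break;
            case 's': for (const char *s = va_arg(ap, const char *); *s; s++)
                          putchar_raw(*s);
                      break;
    #if CFG_PRINTF_HEX
            case 'x': emit_hex(va_arg(ap, unsigned)); break;
    #endif
            default:  putchar_raw(*fmt); break;   /* covers literal %% */
            }
        }
        va_end(ap);
    }

Specifiers you never enable never cost a byte of flash.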
<snip>
> I heard such electronic analogies many times.  But they miss
> an important point: there is no way for me to make my own diode,
Sure there is!  It is just not an efficient way of spending your resources when you have so many OTS offerings available.  You can design your own processor.  Why do you "settle" for an OTS device?  (ANS: because there is so little extra added value you will typically gain from rolling your own vs. the "inefficiency" of using a COTS offering.)
> I am stuck with what is available on the market.  And a diode
> is logically a pretty simple component, yet we need many kinds.
>
>> I.e., we readily accept differences in "standard components"
>> in other disciplines; why not when it comes to software
>> modules?
>
> Well, software is _much_ more complicated than physical
> engineering artifacts.  A physical thing may have 10000 joints,
> but if the joints are identical, then this is the moral equivalent of
> a simple loop that just iterates a fixed number of times.
This is the argument in favor of components. You'd much rather read a comprehensive specification ("datasheet") for a software component than have to read through all of the code that implements it. What if it was implemented in some programming language in which you aren't expert? What if it was a binary "BLOB" and couldn't be inspected?
> At the software level the number of possible pre-composed blocks
> is so large that it is infeasible to deliver all of them.
You don't have to deliver all of them. When you wire a circuit, you still have to *solder* connections, don't you? The components don't magically glue themselves together...
> The classic trick is to parametrize.  However, even if you
> parametrize, there are hundreds of design decisions going
> into a relatively small piece of code.  If you expose all
> design decisions then the user may as well write his/her own
> code, because the complexity will be similar.  So normally
> parametrization is limited and there will be users who
> find the hardcoded design choices inadequate.
>
> Another thing is that current tools are rather weak
> at supporting parametrization.
Look at a fleshy UART driver and think about how you would decompose it into N different variants that could be "compile time configurable". You'll be surprised as to how easy it is. Even if the actual UART hardware differs from instance to instance.
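E.g., the knobs can be as simple as this -- a sketch, all names illustrative -- where every feature you disable compiles away to nothing:

    /* uart_cfg.h -- per-application knobs */
    #define UART_USE_XONXOFF 1
    #define UART_XON         0x11
    #define UART_XOFF        0x13
    #define UART_RX_HIWATER  48     /* level at which we ask the remote to pause */

    /* uart.c -- the invariant receive path, trimmed to the flow-control knob */
    #include <stdint.h>

    extern volatile uint8_t tx_paused;     /* honored by the TX path             */
    extern int  fifo_put(uint8_t c);       /* software FIFO; returns 0 when full */
    extern int  fifo_level(void);
    extern void uart_send_now(uint8_t c);  /* jumps the TX queue                 */

    void uart_rx_byte(uint8_t c)           /* called from the RX interrupt       */
    {
    #if UART_USE_XONXOFF
        if (c == UART_XOFF) { tx_paused = 1; return; }  /* remote: stop */
        if (c == UART_XON)  { tx_paused = 0; return; }  /* remote: go   */
    #endif
        (void)fifo_put(c);
    #if UART_USE_XONXOFF
        if (fifo_level() >= UART_RX_HIWATER)
            uart_send_now(UART_XOFF);      /* our side filling: ask remote to stop */
    #endif
    }

The core never changes; the application just picks its feature set (and the matching hardware accessors) at build time.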
<snip>
> Maybe I made a wrong impression, I think some explanation is in
> place here.  I am trying to make my code reusable.  For my
> problems performance is an important part of reusability: our
> capability to solve a problem is limited by performance, and with
> better performance users can solve bigger problems.  I am
> re-using code where I can, and I would re-use more if I could,
> but there are technical obstacles.  Also, while I am
> trying to make my code reusable, there are intrusive
> design decisions which may interfere with your possibility
> and willingness to re-use.
If you don't know where the design is headed, then you can't pick the components that it will need.

I approach a design from the top (down) and bottom (up).  This lets me gauge the types of information that I *may* have available from the hardware -- so I can sort out how to approach those limitations from above.  E.g., if I can't control the data rate of a comm channel, then I either have to ensure I can catch every (complete) message *or* design a protocol that lets me detect when I've missed something.

There are costs to both approaches.  If I dedicate resource to ensuring I don't miss anything, then some other aspect of the design will bear that cost.  If I rely on detecting missed messages, then I have to put a figure on their relative likelihood so my device doesn't fail to provide its desired functionality (because it is always missing one or two characters out of EVERY message -- and, thus, sees NO messages).
> In a slightly different spirit: in another thread you wrote
> about accessing the disc without the OS file cache.  Here I
> normally depend on the OS, and OS file caching is a big thing.
> It is not perfect, but the OS (OK, at least Linux) is doing
> this reasonably well, so I have no temptation to avoid it.
> And I appreciate that with the OS cache performance is
> usually much better than it would be "without cache".
> OTOH, I routinely avoid stdio for I/O critical things
> (so no printf in I/O critical code).
My point about the cache was that it is of no value in my case; I'm not going to revisit a file once I've seen it the first time (so why hold onto that data?)
<snip>
> I think that this is more subtle: libraries frequently force some
> way of doing things.  Which may be good if you are trying to quickly
> roll a solution and are within the capabilities of the library.  But
> if you need/want a different design, then the library may be too
> inflexible to deliver it.
Use a different diode.
<snip>
> Well, I am not talking about a "couple of extra lines".  Rather
> about, IMO, substantial fixed overhead.  As I wrote, one of my
> targets is STM8 with 8k flash, another is MSP430 with 16k flash,
> another is STM32 with 16k flash (there are also bigger targets).
> One of the libraries/frameworks for STM32, after activating a few
> features, pulled in about 16k of code; this is substantial overhead
> given how few features I needed.  Other folks reported that for
> trivial programs vendor supplied frameworks pulled in close to 30k
A "framework" is considerably more than a set of individually selectable components. I've designed products with 2KB of code and 128 bytes of RAM. The "components" were ASM modules instead of HLL modules. Each told me how big it was, how much RAM it required, how deep the stack penetration when invoked, how many T-states (worst case) to execute, etc. So, before I designed the hardware, I knew what I would need by way of ROM/RAM (before the days of FLASH) and could commit the hardware to foil without fear of running out of "space" or "time".
> of code.  That may be fine if you have a bigger device and need the
> features, but for smaller MCUs it may be the difference between not
> fitting into the device or (without the library) having plenty of
> free space.
Sure. But a component will have a datasheet that tells you what it provides and at what *cost*.
> When I tried it, FreeRTOS for STM32 needed about 8k flash.  Which
> is fine if you need an RTOS.  But ATM my designs run without an RTOS.
RTOS is a commonly misused term.  Many are more properly called MTOSs (they provide no real timeliness guarantees, just multitasking primitives).

IMO, the advantages of writing in a multitasking environment so far outweigh the "costs" of an MTOS that it behooves one to consider how to shoehorn that functionality into EVERY design.

When writing in a HLL, there are complications that impose constraints on how the MTOS provides its services.  But, for small projects written in ASM, you can gain the benefits of an MTOS for very few bytes of code (and effectively zero RAM).
> I have found libopencm3 to have small overhead.  But its routines
> are doing so little that direct register access may give simpler
> code.
<snip>
>> Ooops!  I wonder how many FOSS *eyes* missed those errors?
>
> Open source folks tend to be more willing to talk about bugs.
> And the above nicely shows that there are a lot of bugs, most
> waiting to be discovered.
Part of the problem is ownership of the codebase.  You are more likely to know where your own bugs lie -- and, more willing to fix them ("pride of ownership").  When a piece of code is shared, over time, there seems to be less incentive for folks to tackle big -- often dubious -- issues as the "reward" is minimal (i.e., you may not own the code when the bug eventually becomes a problem).
>> Every time you reinvent a solution, you lose much of the benefit
>> of the previous TESTED solution.
>
> The TESTED part works for simple repeatable tasks.  But if you have
> a complex task it is quite likely that you will be the first
> person with the given use case.  gcc is a borderline case: if you
> throw really new code at it you can expect to see bugs.  The
> gcc user community is large and there is a reasonable chance that
> somebody wrote earlier code which is sufficiently similar to
> yours to catch troubles.  But there are domains that are at
> least as complicated as compilation and have a much smaller
> user community.  You may find out that there is _no_ code
> that could be reasonably re-used.  Were you ever in a situation
> where you looked at how some "standard library" solves a tricky
> problem and realized that in fact the library does not solve
> the problem?
As I said, your *personal* experience tells you where YOU will likely benefit.

I did a stint with a company that manufactured telecommunications kit.  We had all sorts of bizarre interface protocols with which we had to contend (e.g., using RLSD as a hardware "pacing" signal).  So, it was worthwhile to spend time developing a robust UART driver (and handler, above it) as you *knew* the next project would likely have need of it, in some form or other.

If you're working free-lance and client A needs a BITBLTer for his design, you have to decide how likely client B (that you haven't yet met) will be to need the same sort of module/component.  For example, I've never (until recently) needed to interface to a disk controller in a product.  So, I don't have a ready-made "component" in my bag-of-tricks.

When I look at a new project, I "take inventory" of what I am likely to need... and compare that to what I know I have "in stock".  If there's a lot of overlap, then my confidence in my bid goes up.  If there's a lot of new ground that I'll have to cover, then it goes down (and the price goes up!).  Reuse helps you better estimate new projects, especially as projects grow in complexity.

[There's nothing worse than having to upgrade someone else's design that didn't plan for the future.  It's as if you have to redesign the entire product from scratch -- despite the fact that it *seems* to work, "as is" (but, not "as desired"!)]
Don Y <blockedofcourse@foo.invalid> wrote:
> On 10/31/2021 3:54 PM, antispam@math.uni.wroc.pl wrote:
<snip>
> A component is something that you can use as a black box,
> without having to reinvent it.  It is the epitome of reuse.
<snip>
> This is the argument in favor of components.  You'd much rather
> read a comprehensive specification ("datasheet") for a software
> component than have to read through all of the code that implements
> it.
Well, if there is a simple to use component that performs what you need, then using it is fine.  However, for many tasks, once a component is flexible enough to cover both your and my needs, its specification may be longer and more tricky than the code doing the task at hand.
> What if it was implemented in some programming language in
> which you aren't expert?  What if it was a binary "BLOB" and
> couldn't be inspected?
There are many reasons why existing code can not be reused.  Concerning BLOBs, I am trying to avoid them, and in a first order approximation I am not using them.  One (serious IMO) problem with BLOBs is that sooner or later they will be incompatible with other things (the OS/other libraries/my code).  Very old source code can usually be run on modern systems with modest effort.  BLOBs normally would require much more effort.
> > At the software level the number of possible pre-composed blocks
> > is so large that it is infeasible to deliver all of them.
>
> You don't have to deliver all of them.  When you wire a circuit,
> you still have to *solder* connections, don't you?  The
> components don't magically glue themselves together...
Yes, one needs to make connections.  In fact, in programming most work is "making connections".  So you want something which is simple to connect.  In other words, you want all parts of your design to play nicely together.  With code delivered by other folks that is not always the case.
<snip>
> Look at a fleshy UART driver and think about how you would decompose
> it into N different variants that could be "compile time configurable".
> You'll be surprised as to how easy it is.  Even if the actual UART
> hardware differs from instance to instance.
UARTs are simple.  And yet some things are tricky: in C, to have a "compile time configurable" buffer size you need to use macros.  Works, but in a sense the UART implementation "leaks" into user code.
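E.g., something like this (names are illustrative) -- the application has to own a driver implementation detail:

    /* app_config.h (user code) -- the "leak": the application must
       define a driver internal before the driver is compiled. */
    #define UART_RX_BUF_SIZE 64u    /* driver requires a power of two */

    /* uart.h (driver) */
    #ifndef UART_RX_BUF_SIZE
    #define UART_RX_BUF_SIZE 16u    /* driver's fallback default */
    #endif

    /* uart.c (driver) */
    #include <stdint.h>
    static volatile uint8_t  rx_buf[UART_RX_BUF_SIZE];
    static volatile uint16_t rx_head, rx_tail;  /* wrap with & (SIZE-1u) */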
<snip>
> If you don't know where the design is headed, then you can't
> pick the components that it will need.
Well, there are routine tasks, and for them it is natural to re-use existing code.  There are new tasks that are "almost" routine, where one can come up with a good design at the start.  But in a sense the "interesting" tasks are the ones where at the start you have only limited understanding.  In such a case it is hard to know "where the design is headed", except that it is likely to change.  Of course, a customer may be dissatisfied if you say "I will look at the problem and maybe I will find a solution".  But lack of understanding is normal in research (at the starting point), and I think that software houses also do risky projects hoping that big wins on successful ones will cover losses on failures.
> I approach a design from the top (down) and bottom (up).  This
> lets me gauge the types of information that I *may* have
> available from the hardware -- so I can sort out how to
> approach those limitations from above.  E.g., if I can't
> control the data rate of a comm channel, then I either have
> to ensure I can catch every (complete) message *or* design a
> protocol that lets me detect when I've missed something.
Well, with a UART there will be some fixed transmission rate (with a wrong clock frequency the UART would be unable to receive anything).  I would expect the MCU to be able to receive all incoming characters (OK, assuming a hardware UART with the driver using a high priority interrupt).  So, detecting that you got too much should not be too hard.  OTOH, sensibly handling excess input is a different issue: if characters are coming faster than you can process them, then either your CPU is underpowered or there is some failure causing excess transmission.  In either case the specific application will dictate how this should be handled.
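The detection part might look like this; UART_STATUS, UART_DATA and OVERRUN_FLAG are placeholders for the real registers/bits of a given part (the addresses below are fictitious):

    #include <stdint.h>

    #define UART_STATUS  (*(volatile uint8_t *)0x1000u)  /* placeholder */
    #define UART_DATA    (*(volatile uint8_t *)0x1001u)  /* placeholder */
    #define OVERRUN_FLAG 0x08u

    extern int rb_put(uint8_t c);       /* software FIFO; returns 0 when full */

    volatile uint16_t hw_overruns;      /* hardware lost a byte              */
    volatile uint16_t sw_overruns;      /* we were too slow to drain the FIFO */

    void uart_rx_isr(void)
    {
        if (UART_STATUS & OVERRUN_FLAG)
            hw_overruns++;              /* a byte is definitely gone; the
                                           current message is now suspect   */
        uint8_t c = UART_DATA;          /* reading data usually clears flags */
        if (!rb_put(c))
            sw_overruns++;              /* count it so the parser above can
                                           discard the whole message         */
    }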
> There are costs to both approaches.  If I dedicate resource to
> ensuring I don't miss anything, then some other aspect of the
> design will bear that cost.  If I rely on detecting missed
> messages, then I have to put a figure on their relative
> likelihood so my device doesn't fail to provide its desired
> functionality (because it is always missing one or two characters
> out of EVERY message -- and, thus, sees NO messages).
My thinking goes toward using relatively short messages and a buffer big enough for two messages.  If there is need for high speed I would go for continuous messages and DMA transfers (using the break interrupt to discover the end of a message in the case of variable length messages).  So the device should be able to get all messages, and in case of excess message traffic a whole message could be dropped (possibly looking first for some high priority messages).  Of course, there may be some externally mandated message format and/or communication protocol making DMA inappropriate.

Still, assuming interrupts, all characters should reach the interrupt handler, causing possibly some extra CPU load.  The only possibility of unnoticed loss of characters would be blocking interrupts too long.  If interrupts can be blocked for too long, then I would expect loss of whole messages.  In such a case the protocol should have something like "don't talk to me for the next 100 milliseconds, I will be busy" to warn other nodes and request silence.  Now, if you need to faithfully support silliness like Modbus RTU timeouts, then I hope that you are adequately paid...
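On STM32, for example, the idle-line interrupt can mark the end of a message; a sketch assuming ST's HAL (names as in recent HAL versions), an RX DMA channel already configured in circular mode, and a hypothetical handle_message() consumer (error paths omitted; the consumer must cope with the buffer wrapping):

    #include "stm32f4xx_hal.h"       /* pick the header for your part */

    extern UART_HandleTypeDef huart2;     /* set up by the usual init code */
    extern void handle_message(const uint8_t *buf, uint16_t from, uint16_t to);

    static uint8_t  rx_dma_buf[256];
    static uint16_t rx_rd;                /* how far the consumer has read */

    void uart_rx_start(void)
    {
        __HAL_UART_ENABLE_IT(&huart2, UART_IT_IDLE);
        (void)HAL_UART_Receive_DMA(&huart2, rx_dma_buf, sizeof rx_dma_buf);
    }

    void USART2_IRQHandler(void)
    {
        if (__HAL_UART_GET_FLAG(&huart2, UART_FLAG_IDLE)) {
            __HAL_UART_CLEAR_IDLEFLAG(&huart2);
            /* DMA write position = buffer size minus remaining count */
            uint16_t wr = (uint16_t)(sizeof rx_dma_buf
                        - __HAL_DMA_GET_COUNTER(huart2.hdmarx));
            handle_message(rx_dma_buf, rx_rd, wr);  /* wr < rx_rd means wrap */
            rx_rd = wr;
        }
        HAL_UART_IRQHandler(&huart2);    /* let HAL service everything else */
    }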
> > In a slightly different spirit: in another thread you wrote
> > about accessing the disc without the OS file cache.  Here I
> > normally depend on the OS, and OS file caching is a big thing.
> > It is not perfect, but the OS (OK, at least Linux) is doing
> > this reasonably well, so I have no temptation to avoid it.
> > And I appreciate that with the OS cache performance is
> > usually much better than it would be "without cache".
> > OTOH, I routinely avoid stdio for I/O critical things
> > (so no printf in I/O critical code).
>
> My point about the cache was that it is of no value in my case;
> I'm not going to revisit a file once I've seen it the first
> time (so why hold onto that data?)
Well, OS "cache" has many functions. One of them is read-ahead, another is scheduling of requests to minimize seek time. And beside data there is also meta-data. OS functions need access to meta-data and OS-es are designed under assumption that there is decent cache hit rate on meta-data access.
<snip>
> > I think that this is more subtle: libraries frequently force some
> > way of doing things.  Which may be good if you are trying to quickly
> > roll a solution and are within the capabilities of the library.  But
> > if you need/want a different design, then the library may be too
> > inflexible to deliver it.
>
> Use a different diode.
Well, when needed I use my own library.
<snip>
> A "framework" is considerably more than a set of individually
> selectable components.  I've designed products with 2KB of code and
> 128 bytes of RAM.  The "components" were ASM modules instead of
> HLL modules.  Each told me how big it was, how much RAM it required,
> how deep the stack penetration when invoked, how many T-states
> (worst case) to execute, etc.
Nice, but I am not sure how practical this would be in modern times.  I have C code and can reasonably estimate resource use.  But there are changeable parameters which may enable/disable some parts.  And size/speed/stack use depends on compiler optimizations.  So there is variation.

And there are traps.  The linker transitively pulls in dependencies; if there are "false" dependencies, they can pull in much more than strictly needed.  One example of "false" dependence are (or maybe were) C++ VMTs.  Namely, any use of an object/class pulled in the VMT, which in turn pulled in all ancestors and methods.  If unused methods referenced other classes, that could easily cascade.  In both cases the authors of the libraries probably thought that the provided "goodies" justified the size (the intended targets were larger).
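At least part of the "false dependency" problem is mechanically treatable with GCC-class tools: one section per function/data item plus a garbage-collecting link.  These are the standard GCC/binutils flags (vtables and anything address-taken still defeat the collection):

    cc -Os -ffunction-sections -fdata-sections -c main.c uart.c lcd.c
    cc -Wl,--gc-sections -Wl,-Map=app.map -o app.elf main.o uart.o lcd.o
    # app.map records which object pulled in which symbol, which is
    # exactly where the "false" dependencies show up.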
<snip>
> Sure.  But a component will have a datasheet that tells you what
> it provides and at what *cost*.
My 16x2 text LCD routine may pull in an I2C driver.  If I2C is not needed anyway, this is an additional cost; otherwise the cost is shared.  The LCD routine also depends on a timer.  Both the timer and I2C affect MCU initialization.  So even in very simple situations the total cost is rather complex.  And the libraries that I tried presumably were not "components" in your sense: you had to link the program to learn the total size.  Documentation mentioned dependencies when they affected correctness, but otherwise not.

To tell the truth, when a library supports hundreds or thousands of different targets (combinations of CPU core, RAM/ROM sizes, peripheral configurations) with different compilers, then it is hard to make exact statements.  IMO, in an ideal world, for "standard" MCU functionality we would have a configuration tool where the user can specify the needed functionality and the tool would generate semi-custom code and estimate its resource use.  MCU vendor tools attempt to offer something like this, but the reports I heard were rather unfavourable; in particular it seems that vendors simply deliver a thick library that supports "everything", and linking to this library causes code bloat.
> > When I tried it, FreeRTOS for STM32 needed about 8k flash.  Which
> > is fine if you need an RTOS.  But ATM my designs run without an RTOS.
>
> RTOS is a commonly misused term.  Many are more properly called
> MTOSs (they provide no real timeliness guarantees, just multitasking
> primitives).
Well, FreeRTOS comes with "no warranty", but AFAICS they make an honest effort to have good real time behaviour.  In particular, code paths through FreeRTOS from events to user code are of bounded and rather short length.  User code still may be delayed by interrupts/process priorities, but they give a reasonable explanation.  So it is up to the user to code things in a way that gives the needed real-time behaviour, but FreeRTOS normally will not spoil it and may help.
> IMO, the advantages of writing in a multitasking environment so
> far outweigh the "costs" of an MTOS that it behooves one to consider
> how to shoehorn that functionality into EVERY design.
>
> When writing in a HLL, there are complications that impose
> constraints on how the MTOS provides its services.  But, for small
> projects written in ASM, you can gain the benefits of an MTOS
> for very few bytes of code (and effectively zero RAM).
Well, looking at books and articles I did not find a convincing argument/example showing that one really needs multitasking for small systems.  I tend to think rather in terms of a collection of coupled finite state machines (or, if you prefer, a Petri net).  State machines transition in response to events and may generate events.  Each finite state machine could be a task.  But it is not clear if it should be.  Some transitions are simple and should be fast, and those I would do in interrupt handlers.  Some others are triggered in a regular way from other machines and are naturally handled by function calls.  Some need queues.  The whole thing fits reasonably well in the "super loop" paradigm.

I have found one issue that at first glance "requires" multitasking.  Namely, when one wants to put the system into sleep mode when there is no work, the natural "super loop" approach looks like

    if (work_to_do) {
        do_work();
    } else {
        wait_for_interrupt();
    }

where 'work_to_do' is a flag which may be set by interrupt handlers.  But there is a nasty race condition: if an interrupt comes between the test of 'work_to_do' and 'wait_for_interrupt', then despite having work to do the system will go to sleep and only wake on the next interrupt (which, depending on specific requirements, may be harmless or a disaster).  I was unable to find simple code that avoids this race.

With a multitasking kernel the race vanishes: there is an idle task which only does 'wait_for_interrupt', and the OS scheduler passes control to worker tasks when there is work to do.  But when one looks at how the multitasker avoids the race, it is clear that the crucial point is doing the control transfer via return from interrupt.  More precisely, variables are tested with interrupts disabled, and after the decision is made, return from interrupt transfers control.  The important point is that if an interrupt comes after the control transfer, the interrupt handler will re-do the test before returning to user code.  So what is needed is a piece of low-level code that uses return from interrupt for control transfer, and all interrupt handlers need to jump to this code when finished.  The rest (usually the majority) of the multitasker is not needed...
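For completeness: on some cores the race has an architecture-specific fix.  On Cortex-M, WFI executed with PRIMASK set is still woken by a pending (masked) interrupt, so the test and the sleep become effectively atomic.  A minimal sketch, assuming a Cortex-M target and the CMSIS intrinsics (this property does not carry over to every core; MSP430, for example, has its own idiom where a single instruction sets GIE and enters low-power mode):

    /* Race-free idle loop, Cortex-M only.  Uses the CMSIS intrinsics
       __disable_irq/__enable_irq/__WFI. */
    #include <stdint.h>
    #include "cmsis_compiler.h"        /* or your device's core_cmX.h */

    extern void do_work(void);
    volatile uint32_t work_to_do;      /* set by interrupt handlers */

    void main_loop(void)
    {
        for (;;) {
            __disable_irq();           /* handlers deferred, not lost  */
            if (work_to_do) {
                work_to_do = 0;
                __enable_irq();        /* handlers may set it again    */
                do_work();
            } else {
                __WFI();               /* wakes even on a masked,
                                          already-pending interrupt    */
                __enable_irq();        /* pending handler runs here    */
            }
        }
    }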
<snip>
> Part of the problem is ownership of the codebase.  You are
> more likely to know where your own bugs lie -- and, more
> willing to fix them ("pride of ownership").  When a piece
> of code is shared, over time, there seems to be less incentive
> for folks to tackle big -- often dubious -- issues as the
> "reward" is minimal (i.e., you may not own the code when the bug
> eventually becomes a problem)
Ownership may cause problems: there is a tendency to "solve" problems locally, that is, in the code that a given person "owns".  This is good if there is an easy local solution.  However, this may also lead to ugly workarounds that really do not work well, while the problem is easily solvable in a different part ("owned" by a different programmer).  I have seen such a thing several times: looking at the whole codebase, after some effort it was possible to do a simple fix, while there were workarounds in different ("wrong") places.  I had no contact with the original authors, but it seems that the workarounds were due to "ownership".

--
Waldek Hebisch
On 11/10/2021 9:34 PM, antispam@math.uni.wroc.pl wrote:
> Don Y <blockedofcourse@foo.invalid> wrote:
<snip>
>> Look at a fleshy UART driver and think about how you would decompose
>> it into N different variants that could be "compile time configurable".
>> You'll be surprised as to how easy it is.  Even if the actual UART
>> hardware differs from instance to instance.
>
> UARTs are simple.  And yet some things are tricky: in C, to have
> a "compile time configurable" buffer size you need to use macros.
> Works, but in a sense the UART implementation "leaks" into user code.
You can configure using manifest constants, conditional compilation, or even run-time switches.  Or, by linking against different "support" routines.  How and where the configuration "leaks" into user code is a function of the configuration mechanisms that you decide to employ.

E.g., you'd likely NOT design your network stack to be tightly integrated with your choice of NIC (all else being equal) -- simply because you'd want to be able to reuse the stack with some *other* NIC without having to rewrite it.  OTOH, it's not unexpected to want to isolate the caching of ARP results in an "application specific" manner, as you'll likely know the sorts (and number!) of clients/services with which the device in question will be connecting.  So, that (sub)module can be replaced with something most appropriate to the application, yet with a "standardized" interface to the stack itself (*YOU* define that standard).

All of these require decisions up-front; you can't expect to be able to retrofit an existing piece of code (cheaply) to support a more modular/configurable implementation in the future.  But, personal experience teaches you what you are likely to need by way of flexibility/configurability.  Most folks tend to work in a very narrow set of application domains.  Chances are, the network stack you design for an embedded product will be considerably different than one for a desktop OS.  If you plan to straddle both domains, then the configurability challenge is greater!
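To make that concrete, a sketch (the interface is illustrative, not from any particular stack): the stack is written against a small header, and the application decides which implementation gets linked.

    /* arp_cache.h -- all the stack ever sees */
    #ifndef ARP_CACHE_H
    #define ARP_CACHE_H
    #include <stdbool.h>
    #include <stdint.h>

    bool arp_cache_lookup(uint32_t ip, uint8_t mac[6]);
    void arp_cache_insert(uint32_t ip, const uint8_t mac[6]);

    #endif

    /* arp_cache_tiny.c -- one implementation: a device that only ever
       talks to a handful of known peers can use a fixed table with
       round-robin overwrite and no aging. */
    #include <string.h>
    #include "arp_cache.h"

    #define N_PEERS 4
    static struct { uint32_t ip; uint8_t mac[6]; } tab[N_PEERS];
    static unsigned next_slot;

    bool arp_cache_lookup(uint32_t ip, uint8_t mac[6])
    {
        for (unsigned i = 0; i < N_PEERS; i++)
            if (tab[i].ip == ip) {
                memcpy(mac, tab[i].mac, 6);
                return true;
            }
        return false;
    }

    void arp_cache_insert(uint32_t ip, const uint8_t mac[6])
    {
        tab[next_slot].ip = ip;
        memcpy(tab[next_slot].mac, mac, 6);
        next_slot = (next_slot + 1) % N_PEERS;
    }

A desktop build links an LRU/timeout variant instead; the stack itself never changes.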
>> There are costs to both approaches. If I dedicate resources to
>> ensuring I don't miss anything, then some other aspect of the
>> design will bear that cost. If I rely on detecting missed
>> messages, then I have to put a figure on their relative
>> likelihood so my device doesn't fail to provide its desired
>> functionality (because it is always missing one or two characters
>> out of EVERY message -- and, thus, sees NO messages).
>
> My thinking goes toward using relatively short messages and
> a buffer big enough for two messages.
You can also design with the intent of parsing messages before they
are complete and "reducing" them along the way. This is
particularly important if messages can have varying length *or*
there is a possibility for the tail end of a message to get dropped
(how do you know when the message is complete? Imagine THE USER
misconfiguring your device to expect CRLFs and the traffic only
contains newlines; the terminating CRLF never arrives!)

[At the limit case, a message reduces to a concept -- that is
represented in some application specific manner: "Start the
motor", "Clear the screen", etc.]

Barcodes are messages (character sequences) of a sort. I typically
process a barcode at several *concurrent* levels:
- an ISR that captures the times of transitions (black->white->black)
- a task that reduces the data captured by the ISR into "bar widths"
- a task that aggregates bar widths to form characters
- a task that parses character sequences to determine valid messages
- an application layer interpretation (or discard) of that message

This allows each layer to decide when the data on which it relies
does not represent a valid barcode and discard some (or all) of
it... without waiting for a complete message to be present. So,
the resources that were consumed by that (partial?) message are
freed earlier.

As such, there is never a "start time" nor "end time" for a barcode
message -- because you don't want the user to have to "do
something" to tell you that he is now going to scan a barcode
(otherwise, the efficiency of using barcodes is subverted).

[Think about the sorts of applications that use barcodes; how many
require the user to tell the device "here comes a barcode, please
start your decoder algorithm NOW!"]

As users can abuse the barcode reader (there is nothing preventing
them from continuously scanning barcodes, in violation of any
"protocol" that the product may *intend*), you have to tolerate the
case where the data arrives faster than it can be consumed.
*Knowing* where (in the event stream) you may have "lost" some data
(transitions, widths, characters or messages) lets you resync to a
less pathological event stream later (when the user starts
"behaving properly")
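For illustration, the bottom two layers of that pipeline might look
like this -- just a sketch, with every name (CAPTURE_REG and its
address, MAX_BAR_TICKS, push_width, reset_decoder) invented for the
example:

    #include <stdint.h>

    extern void reset_decoder(void);        /* discard partial decode  */
    extern void push_width(uint32_t);       /* hand off to next layer  */

    /* hypothetical timer-capture register at a made-up address */
    #define CAPTURE_REG (*(volatile uint32_t *)0x40001010u)

    #define MAX_BAR_TICKS 5000u
    #define EDGE_RING     128u              /* power of two */

    static volatile uint32_t edge_time[EDGE_RING];
    static volatile unsigned edge_head, edge_tail;

    void edge_capture_isr(void)             /* black<->white transition */
    {
        /* the ISR does the bare minimum: timestamp the edge, get out */
        edge_time[edge_head++ % EDGE_RING] = CAPTURE_REG;
    }

    void widths_task(void)                  /* runs at task level */
    {
        static uint32_t prev;

        while (edge_tail != edge_head) {
            uint32_t t = edge_time[edge_tail++ % EDGE_RING];
            uint32_t width = t - prev;      /* unsigned math handles wrap */
            prev = t;

            if (width > MAX_BAR_TICKS)
                reset_decoder();    /* too wide to be a bar: discard early */
            else
                push_width(width);  /* on to the character builder */
        }
    }

Each layer above this one makes the same kind of local keep/discard
decision, so garbage never propagates far -- and never ties up
resources waiting for a "complete" message.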
> If there is a need for
> high speed I would go for continuous messages and DMA
> transfers (using the break interrupt to discover the end of a
> message in the case of variable-length messages). So the device
> should be able to get all messages, and in case of excess message
> traffic a whole message could be dropped (possibly looking
> first for some high-priority messages). Of course, there
> may be some externally mandated message format and/or
> communication protocol making DMA inappropriate.
> Still, assuming interrupts, all characters should reach
> the interrupt handler, possibly causing some extra CPU
> load. The only possibility of unnoticed loss of characters
> would be blocking interrupts for too long. If interrupts can
> be blocked for too long, then I would expect loss of whole
> messages. In such a case the protocol should have something like
> "don't talk to me for the next 100 milliseconds, I will be busy"
> to warn other nodes and request silence. Now, if you
> need to faithfully support silliness like Modbus RTU timeouts,
> then I hope that you are adequately paid...
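That scheme can be quite compact. A sketch of a ping-pong version,
against a purely hypothetical HAL (dma_start(), dma_remaining(),
queue_message() and friends are invented here, not from any real
vendor library):

    /* hypothetical HAL: */
    extern void     dma_start(int chan, unsigned char *dst, unsigned len);
    extern unsigned dma_remaining(int chan);
    extern void     uart_enable_break_irq(int uart);
    extern int      queue_message(unsigned char *msg, unsigned len);
    #define UART0     0
    #define UART0_RX  0

    #define MSG_MAX 256

    static unsigned char dma_buf[2][MSG_MAX];   /* ping-pong buffers */
    static volatile unsigned active;            /* buffer DMA is filling now */
    static volatile unsigned msgs_dropped;      /* loss is counted, not silent */

    void comms_init(void)
    {
        dma_start(UART0_RX, dma_buf[active], MSG_MAX);
        uart_enable_break_irq(UART0);
    }

    void uart_break_isr(void)                   /* fires at end-of-message */
    {
        unsigned len  = MSG_MAX - dma_remaining(UART0_RX);
        unsigned done = active;

        active ^= 1u;                           /* flip; restart DMA at once */
        dma_start(UART0_RX, dma_buf[active], MSG_MAX);

        if (!queue_message(dma_buf[done], len)) /* consumer too slow? */
            msgs_dropped++;                     /* a *whole* message is lost */
    }

Note the failure mode matches the argument above: under overload you
lose complete messages -- and know it -- rather than silently losing
a character or two from every message.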
>> IMO, the advantages of writing in a multitasking environment so
>> far outweigh the "costs" of an MTOS that it behooves one to consider
>> how to shoehorn that functionality into EVERY design.
>>
>> When writing in a HLL, there are complications that impose
>> constraints on how the MTOS provides its services. But, for small
>> projects written in ASM, you can gain the benefits of an MTOS
>> for very few bytes of code (and effectively zero RAM).
>
> Well, looking at books and articles I did not find a convincing
> argument/example showing that one really needs multitasking for
> small systems.
The advantages of multitasking lie in problem decomposition.
Smaller problems are easier to "get right", in isolation. The
*challenge* of multitasking is coordinating the interactions
between these semi-concurrent actors.

Experience teaches you how to partition a "job". I want to blink a
light at 1 Hz and check for a button to be pressed which will start
some action that may be lengthy.

I can move the light blink into an ISR (which GENERALLY is a
ridiculous use of that "resource") to ensure the 1 Hz timeliness is
maintained regardless of what the "lengthy" task may be doing, at
the time. Or, I can break the lengthy task into smaller chunks that
are executed sequentially with "peeks" at the "light timer" between
each of those segments. In C, that might look like:

    static unsigned seq1 = 0, seq2 = 0, seq3 = 0;

    while (FOREVER) {               /* #define FOREVER 1 */
        switch (seq1++) {           /* task1: one small step per pass */
        case 0: do_task1_step0(); break;
        case 1: do_task1_step1(); break;
        case 2: do_task1_step2(); break;
        /* ... */
        default: seq1 = 0;          /* wrap around; start over */
        }
        do_light();                 /* keep the 1 Hz blink alive */

        switch (seq2++) {           /* task2 */
        case 0: do_task2_step0(); break;
        case 1: do_task2_step1(); break;
        case 2: do_task2_step2(); break;
        /* ... */
        default: seq2 = 0;
        }
        do_light();

        switch (seq3++) {           /* task3 */
        case 0: do_task3_step0(); break;
        case 1: do_task3_step1(); break;
        case 2: do_task3_step2(); break;
        /* ... */
        default: seq3 = 0;
        }
        do_light();

        /* ... */
    }

When you need to do seven (or fifty) other "lengthy actions"
concurrently (each of which may introduce other "blinking lights"
or timeliness constraints), it's easier (less brittle) to put a
structure in place that lets those competing actions share the
processor without requiring the developer to micromanage at this
level.

[50 tasks isn't an unusual load in a small system; video arcade
games from the early 80's -- 8 bit processors, kilobytes of
ROM+RAM -- would typically treat each object on the screen
(including bullets!) as a separate process]

The above example has low overhead for the apparent concurrency.
But, it pushes all of the work into the developer's lap. He has to
carefully size each "step" of each "task" to ensure the overall
system is responsive. A nicer approach is to just let an MTOS
handle the switching between tasks. But, this comes at a cost of
additional run-time overhead (e.g., arbitrary context switches).
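There is a middle ground between the hand-interleaved loop and a
full MTOS: a small time-triggered dispatcher. A sketch -- millis(),
blink_step() and lengthy_step() are all invented for the example,
and this is cooperative, not preemptive:

    #include <stdint.h>

    extern uint32_t millis(void);  /* free-running millisecond tick */
    extern void blink_step(void);
    extern void lengthy_step(void);

    typedef struct {
        void     (*step)(void);    /* one run-to-completion chunk */
        uint32_t period_ms;        /* how often it should run */
        uint32_t next_ms;          /* when it is next due */
    } task_t;

    static task_t tasks[] = {
        { blink_step,  500, 0 },   /* toggle the light every 500 ms = 1 Hz */
        { lengthy_step,  1, 0 },   /* the lengthy action, one chunk per ms */
    };

    void dispatcher(void)
    {
        for (;;) {
            uint32_t now = millis();
            for (unsigned i = 0; i < sizeof tasks / sizeof tasks[0]; i++) {
                /* signed compare tolerates tick-counter wraparound */
                if ((int32_t)(now - tasks[i].next_ms) >= 0) {
                    tasks[i].next_ms = now + tasks[i].period_ms;
                    tasks[i].step();
                }
            }
        }
    }

The developer still has to keep each step short, but the
interleaving -- all those do_light() calls sprinkled through the
loop -- is now the dispatcher's job, not his.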
> I tend to think rather in terms of a collection
> of coupled finite state machines (or, if you prefer, a Petri net).
> State machines transition in response to events and may generate
> events. Each finite state machine could be a task. But it is
> not clear that it should be. Some transitions are simple and
> should be fast; those I would do in interrupt handlers. Some
> others are triggered in a regular way from other machines and
> are naturally handled by function calls. Some need queues.
> The whole thing fits reasonably well in the "super loop" paradigm.
I use FSMs for UIs and message parsing. They let the structure of
the code "rise to the top" where it is more visible (to another
developer) instead of burying it in subroutines and function calls.

"Event sources" create events which are consumed by FSMs, as
needed. So, a "power monitor" could generate POWER_FAIL,
LOW_BATTERY, POWER_RESTORED, etc. events while a "keypad decoder"
could put out ENTER, CLEAR, ALPHA_M, NUMERIC_5, etc. events.

Because there is nothing *special* about an "event", *ANY* piece of
code can generate them. Their significance is assigned based on
where they are "placed" (in memory) and who/what can "see" them.

So, you can use an FSM to parse a message (using "received
characters" as an ordered stream of events) and "signal"
MESSAGE_COMPLETE to another FSM that is awaiting "messages" (along
with a pointer to the completed message)
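A sketch of that shape (everything here is invented for
illustration; a real event would likely carry a payload -- e.g.,
that pointer to the completed message -- elided for brevity):

    extern void alarm_on(void), alarm_off(void);    /* hypothetical */

    typedef enum {
        EV_POWER_FAIL, EV_POWER_RESTORED,   /* from the power monitor  */
        EV_ENTER, EV_CLEAR,                 /* from the keypad decoder */
        EV_MESSAGE_COMPLETE                 /* from the message parser */
    } event_t;

    #define EVQ_SIZE 32u                    /* power of two */
    static event_t evq[EVQ_SIZE];
    static volatile unsigned evq_head, evq_tail;

    void event_post(event_t e)              /* ANY code can be a source */
    {
        evq[evq_head++ % EVQ_SIZE] = e;     /* sketch: no overflow check */
    }

    typedef enum { ST_MAINS, ST_ON_BATTERY } pm_state_t;

    static pm_state_t pm_step(pm_state_t s, event_t e)  /* one FSM */
    {
        switch (s) {
        case ST_MAINS:
            if (e == EV_POWER_FAIL)     { alarm_on();  return ST_ON_BATTERY; }
            break;
        case ST_ON_BATTERY:
            if (e == EV_POWER_RESTORED) { alarm_off(); return ST_MAINS; }
            break;
        }
        return s;                           /* irrelevant event: ignore */
    }

    void dispatch(void)                     /* called from the main loop */
    {
        static pm_state_t pm = ST_MAINS;
        while (evq_tail != evq_head)
            pm = pm_step(pm, evq[evq_tail++ % EVQ_SIZE]);
    }

The transition function *is* the structure -- another developer can
read pm_step() and see the whole behavior at a glance.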
>>>> From "KLEE: Unassisted and Automatic Generation of High-Coverage >>>> Tests for Complex Systems Programs": >>>> >>>> KLEE finds important errors in heavily-tested code. It >>>> found ten fatal errors in COREUTILS (including three >>>> that had escaped detection for 15 years), which account >>>> for more crashing bugs than were reported in 2006, 2007 >>>> and 2008 combined. It further found 24 bugs in BUSYBOX, 21 >>>> bugs in MINIX, and a security vulnerability in HISTAR? a >>>> total of 56 serious bugs. >>>> >>>> Ooops! I wonder how many FOSS *eyes* missed those errors? >>> >>> Open source folks tend to be more willing to talk about bugs. >>> And the above nicely shows that there is a lot of bugs, most >>> waiting to by discovered. >> >> Part of the problem is ownership of the codebase. You are >> more likely to know where your own bugs lie -- and, more >> willing to fix them ("pride of ownership"). When a piece >> of code is shared, over time, there seems to be less incentive >> for folks to tackle big -- often dubious -- issues as the >> "reward" is minimal (i.e., you may not own the code when the bug >> eventually becomes a problem) > > Ownership may cause problems: there is tendency to "solve" > problems locally, that is in code that given person "owns". > This is good if there is easy local solution. However, this > may also lead to ugly workarounds that really do not work > well, while problem is easily solvable in different part > ("owned" by different programmer). I have seen such thing > several times, looking at whole codebase after some effort > it was possible to do simple fix, while there were workarounds > in different ("wrong") places. I had no contact with > original authors, but it seems that workarounds were due to > "ownership".
You are *always* at the mercy of the code's owner. Just as folks
are at YOUR mercy for the code that you (currently) exert ownership
over.

The best compliments you'll receive are from folks who inherit your
codebase and can appreciate its structure and consistency.
Conversely, your worst nightmares will be inheriting a codebase
that was "hacked together", willy-nilly, by some number of
predecessors with no real concern over their "product" (code).

E.g., for FOSS projects, ownership isn't just a matter of who takes
"responsibility" for coordinating/merging diffs into the codebase
but, also, who has a compatible "vision" for the codebase, going
forward. You'd not want a radically different vision from one
owner to the next as this leads to gyrations in the codebase that
will be seen as instability by its users (i.e., other developers).

I use PostgreSQL in my current design. I have no desire to
*develop* the RDBMS software -- let folks who understand that sort
of thing work their own magic on the codebase. I can add value
*elsewhere* in my designs.

But, I eventually have to take ownership of *a* version of the
software as I can't expect the "real owners" to maintain some
version that *I* find useful, possibly years from now. Once I
assume ownership of that chosen release, it will be my priorities
and skillset that drive how it evolves. I can choose to cherry
pick "fixes" from the main branch and back-port them into the
version I've adopted. Or, decide to live with some particular set
of problems/bugs/shortcomings.

If I am prudent, I will attempt to adopt the "style" of the
original developers in fitting any changes that I make to that
codebase. I'd want my changes to "blend in" and seem consistent
with that which preceded them.

Folks following the main distribution would likely NOT be
interested in the changes that I choose to embrace as they'll
likely have different goals than I. But that doesn't mean my
ownership is "toxic", just that it doesn't suit the needs of
(most) others.

---

I've got to bow out of this conversation. I made a commitment to
release 6 designs to manufacturing before year end. As it stands,
now, it looks like I'll only have time enough for four of them as I
got "distracted", spending the past few weeks gallivanting (but it
was wicked fun!). OTOH, it won't be fun starting the new year two
weeks "behind"... :<

[Damn holidays eat into my work time. And, no excuse on my part;
it's not like I didn't KNOW they were coming!! :< ]