
C++ hardware library for (small) 32-bit micro-controllers

Started by Wouter van Ooijen December 4, 2013
Hi Tom (and Wouter),

[snips throughout]

On 12/8/2013 2:52 AM, Tom Gardner wrote:
> On 04/12/13 18:45, Wouter van Ooijen wrote:
>> I am working on a portable C++ hardware library for real-time
-------------------------------------------------------^^^^^^^^^ Let's assume we actually mean "real-time" and not "real fast"...
>> applications on small (but still 32-bit) micro-controllers. My typical
>> (and current only) target is the Cortex M0 LPC1114FN28 (4k RAM, 32K
>> Flash) using GCC. Efficiency is very important, in run-time, RAM use,
>> and ROM use. The typical restrictions for small microcontrollers
>> apply: no RTTI, no exceptions, and no heap (or at most just an
>> allocate-only heap).
> When I start new designs, my thought processes are along the lines of:
> 1) what needs to be done at the hardware level, e.g. turn LED on/off,
> sleep, write text, draw rectangle, PWM an output to control a motor,
> read an ADC
> 2) what is the level of abstraction that I want to use in my
> application, e.g. setMotorSpeed, illuminateTarget, measureTemperature,
> displayTemperature
> 3) what hardware do I have to use, both individual devices and
> /combinations/ of devices
> 4) what documentation, code examples and libraries are available
> that I can more or less cut-and-paste into my code
> 5) does the library contain complex algorithms or is it merely an
> indirect access to the peripherals
>
> And then, most critically:
> 6) what has the shortest learning+implementation curve
> 7) when something doesn't work as expected, how easy will it be to debug
I'm a hardware person who ends up spending the majority of my time writing code -- because I can quickly craft minimalist hardware designs (minimizing DM+DL) that often require considerable contortions to coax into working in the software (firmware)!

So, I look at a project from the top down, initially (what's likely to be *in* here?) and then jump to the bottom and start looking back *up* (how do I get to *there*?). Then, I choose the hardware that gives me the best bang for the least buck.

Often, this means using "devices" in atypical ways. E.g., using a PWM output and a comparator input to build a tracking ADC when a *real* ADC isn't available (or doesn't have sufficient resolution).

So, there's usually very little AT THE HARDWARE LEVEL that I can pilfer from a library or even some other project (other than symbolic names for configuration "bits").
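A minimal sketch of the PWM-plus-comparator tracking ADC mentioned above, assuming an 8-bit duty range and a software-readable comparator; pwm_set_duty() and input_below_dac() are invented placeholders, not calls from any real library:

   // Sketch only: placeholders stand in for real peripheral accesses.
   static void pwm_set_duty( int /*duty*/ ){ /* drive the PWM; the RC-filtered output is a crude DAC */ }
   static bool input_below_dac(){ return false; /* read the comparator: is the input below the DAC level? */ }

   static int tracking_adc_read( int duty ){        // duty: previous reading, 0..255
      for( int step = 0; step < 256; ++step ){
         duty += input_below_dac() ? -1 : +1;       // step the DAC level toward the input
         if( duty < 0 )   duty = 0;
         if( duty > 255 ) duty = 255;
         pwm_set_duty( duty );
         // a real implementation would wait here for the RC filter to settle
      }
      return duty;                                  // the duty cycle now tracks the input voltage
   }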
> It is /very/ difficult to pass that filter for devices that have
> kilobytes rather than gigabytes of memory. It can be done, e.g.
> a networking stack, but it isn't easy.
+42

With most embedded devices, functionality is *known* at design time. And, if resource constrained, you don't want to *waste* resources (e.g., memory, CPU) on features that you don't need. [In some markets, "dead code" is actually a significant liability]

I rewrite my network stacks for each project. There's too much generality in a typical stack to "just port it" to another application (given the constraints above). Especially when you consider the other "support services" that can go along with it (do you really *need* a DNS client? DHCP client? etc.) Do you need TCP? Or, just UDP? How many connections? Ah, I guess we'll need some timers for those... Do you need to support packet reassembly? Or, can your environment guarantee no fragmentation? Do you really need to support ICMP? Or, can we just ignore those niggling details?

If you want to support a variety of YET TO BE DETERMINED applications atop the network stack, then your hands are tied. OTOH, if you already *know* what's sitting up there, you can selectively whittle away the superfluous cruft and/or tweak performance accordingly.

The same mentality applies to other hardware devices. E.g., I may opt to reprogram some hardware device *in* an ISR that services that device. Or, perhaps some *other* device.

Given your RT goal, I care more about how *predictable* your implementation will be than how *fast*. "Determinism". If I start talking directly to the hardware *around* your library, can I *break* it?

Imagine a hardware device with a write-only configuration register. E.g., you write configuration and read status. So, you may not be able to *read* the current configuration! (this is common) No problem! You keep a static for each such device that tracks any changes *you* (your library) make to the configuration. So, if you have a routine that allows you to change some portion of the configuration independently, that function can peek at the "most recent configuration value written" and determine how to update it to reflect the desired changes WITHOUT DISTURBING OTHER THINGS CONTROLLED BY THAT CONFIGURATION VALUE.

Now, assume a developer has need to *violate* your contract with him and manipulate the hardware directly. Will he *know* that you've encapsulated this "static"? Will he know to update it so that *your* functions will remain consistent with his modifications? Have you exported a method by which he *can* modify this static? Will this interface remain part of the contract forward-going?

Don't get me wrong (OP) -- everything you can encapsulate is a win! Just don't be surprised if you find yourself (or others) "unwrapping" significant parts of your work in their applications. If they can't easily work around your library when they *need* to (because you didn't anticipate some particular need of theirs), then your library will be a hindrance instead of a help.

<shrug>

HTH,
--don
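A minimal sketch of the write-only-register shadowing described in the post above; the register address, field mask, and names are invented for illustration and do not correspond to any real device:

   #include <cstdint>

   // Hypothetical write-only configuration register (address invented).
   static volatile std::uint32_t * const config_reg =
      reinterpret_cast< volatile std::uint32_t * >( 0x40008000 );

   // Shadow of the last value written, kept because the hardware can't be read back.
   static std::uint32_t config_shadow = 0;

   // Change one field without disturbing the rest of the (unreadable) register.
   static void set_config_field( std::uint32_t mask, std::uint32_t value ){
      config_shadow = ( config_shadow & ~mask ) | ( value & mask );
      *config_reg = config_shadow;
   }

Any code that writes config_reg directly, behind the library's back, silently invalidates config_shadow -- which is exactly the contract-violation hazard described above.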
> 1) what needs to be done at the hardware level, e.g. turn LED on/off,
> sleep, write text, draw rectangle, PWM an output to control a motor,
> read an ADC
Sorry, forgot to ask.

The above reads as a small part of a wish list. Those items are among the things I do offer (or plan to do, or am writing). If you can think of more items, please share them!

And probably the hardest part: I am especially looking for good interfaces. For instance, my A/D interface is (omitting a few details)

template< int n_bits >
struct pin_ad {
   static constexpr int ad_bits = n_bits;
   static constexpr int ad_maximum = ( 1 << ( n_bits + 1 )) - 1;
   static void ad_init();
   static int ad_get();
};

Wouter
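As a rough illustration of how such a static interface might be consumed, here is a sketch; all names and values below are invented for this example, chip_adc stands in for a concrete target-specific type, and the full-scale value assumes the usual 2^n - 1 convention:

   // Hypothetical concrete ADC providing the same static interface.
   struct chip_adc {
      static constexpr int ad_bits    = 10;
      static constexpr int ad_maximum = ( 1 << ad_bits ) - 1;   // assumed 2^n - 1 full scale
      static void ad_init(){ /* hardware-specific setup would go here */ }
      static int  ad_get(){ return 512; /* placeholder reading */ }
   };

   // Client code written against the interface rather than the chip:
   template< typename adc >
   int reading_in_percent(){
      adc::ad_init();
      return ( adc::ad_get() * 100 ) / adc::ad_maximum;
   }

   int main(){
      return reading_in_percent< chip_adc >();
   }

Because everything is resolved at compile time, the generic client code costs nothing extra on the target.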
> Now, assume a developer has need to *violate* your contract with
> him and manipulate the hardware directly. Will he *know* that
> you've encapsulated this "static"? Will he know to update it
> so that *your* functions will remain consistent with his
> modifications? Have you exported a method by which he *can* modify
> this static? Will this interface remain part of the contract
> forward-going?
There are at least two ways to look at this problem.

- As the library author, the big question is why would this user need to bypass the abstraction? That points to a problem in the design, or maybe in the documentation.

- For the user, using only part of an abstraction is always risky. It means that the abstraction does not fit your needs, yet you still want to use a part of it? Maybe better throw it away entirely (and tell the author why!).
> Don't get me wrong (OP) -- everything you can encapsulate is a win!
> Just don't be surprised if you find yourself (or others) "unwrapping"
> significant parts of your work in their applications. If they can't
> easily work around your library when they *need* to (because you
> didn't anticipate some particular need of theirs), then your library
> will be a hindrance instead of a help.
IMO one way to work around this problem is to use many small, interacting abstractions with clear and simple interfaces. This enables the user to throw away the one or few he does not like, but still use the others. LEGO (at least the old style) and Meccano are to be preferred over Playmobil.

Wouter
["Followup-To:" header set to comp.lang.c++.]

On Sun, 2013-12-08, Wouter van Ooijen wrote:

(attributions lost, not my fault)

>>> I am working on a portable C++ hardware library
>>
>> 1. Portable hardware is myth.
>
> I have a lot of hardware that I can carry if I want to :)
>
> Seriously (I am not sure you are, but I'll be), a lot of
> hardware-related code can be portable, except for the frustrating aspect
> of accessing the I/O pins (and dealing with timing).
>
>> 2. Universal solutions and modular designs don't work.
>
> I don't think any comment is needed.
>
>> 3. Trying to cover everything instead of doing particular task is waste
>> of time and effort.
>
> First part is true, second part is nonsense, otherwise no libraries
> would exist or be used. Library design is the art of balancing between
> doing everything and doing a specific task well.
Yes, but it's a difficult art, and too many people do it badly. I hope that was what Vladimir(?) tried to say.

I used to do it -- badly -- but nowadays I try to fit my code to the design I'm working on, in an elegant way if possible. When I've done similar things in two or three different projects, I stop to see if it makes sense to split it out into a library. At that point I have real world experience.

How this applies to you I cannot tell. Perhaps you've seen enough different hardware already so you can tell what's the common metaphor for most of it.

/Jorgen

--
  // Jorgen Grahn <grahn@  Oo  o.   .  .
\X/     snipabacken.se>   O  o   .
Hi Wouter,

On 12/8/2013 3:50 AM, Wouter van Ooijen wrote:

> For instance, my A/D interface is (omitting a few details)
>
> template< int n_bits >
What if n_bits doesn't fit in an "int" (i.e., the data type returned by your ad_get() method)?
> struct pin_ad {
>    static constexpr int ad_bits = n_bits;
>    static constexpr int ad_maximum = ( 1 << ( n_bits + 1 )) - 1;
>    static void ad_init();
>    static int ad_get();
> };
What if n_bits *varies* during the course of execution? Or, if "ad_maximum" is less than your computed value (presumably, you are using it for "something"?)

E.g., I frequently use integrating converters. Their advantage is that I can dynamically trade resolution for speed. It takes longer to get a "more precise" reading. But, I can do so with little extra product cost. And, get *coarse* readings in shorter times.

What if my converter technology takes a long time to come up with a result? Does your ad_get() *block* while waiting for the converter to yield its result? How do I (developer) ensure my application isn't penalized by using your ADC interface (i.e., do I have to rewrite it so that it suspends the invoking task until the ADC result is available, thereby allowing other tasks to continue executing "while waiting")? What if I don't run an MTOS/RTOS?

I.e., has the presence of your library -- and the framework it imposes/suggests -- forced me to compromise how I would otherwise approach a problem?

Note, I'm not saying it does -- or doesn't. Rather, trying to point out how hardware variations and exploits can complicate trying to cram those abstractions into a generic wrapper. (An ADC is an ADC, right?)
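One hedged sketch of what a non-blocking variant could look like, purely for illustration; the ad_start/ad_ready/ad_result split is invented here and is not part of the interface Wouter posted:

   template< int n_bits >
   struct pin_ad_nonblocking {
      static constexpr int ad_bits = n_bits;
      static void ad_init();
      static void ad_start();    // kick off a conversion and return immediately
      static bool ad_ready();    // poll: has the conversion completed?
      static int  ad_result();   // fetch the finished sample
   };

A caller with no RTOS can then interleave other work between ad_start() and ad_ready(), while a blocking ad_get() can still be built on top of this if wanted.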
Hi Wouter,

On 12/8/2013 3:59 AM, Wouter van Ooijen wrote:
>> Now, assume a developer has need to *violate* your contract with
>> him and manipulate the hardware directly. Will he *know* that
>> you've encapsulated this "static"? Will he know to update it so
>> that *your* functions will remain consistent with his
>> modifications? Have you exported a method by which he *can* modify
>> this static? Will this interface remain part of the contract
>> forward-going?
>
> There are at least two ways to look at this problem.
>
> - As the library author, the big question is why would this user need
> to bypass the abstraction? That points to a problem in the design, or
> maybe in the documentation.
Most often, this would be because of efficiency. Will your library make it *easy* for the developer to know the *costs* associated with each method invocation? (remember, you spoke of portability... whose target processor? whose compiler??) OTOH, he can *write* a specific value to a specific device address and be pretty sure as to what that's going to "cost" him at run time.

Do you choose to expose the data member that holds the configuration value (in my previous example)? Or, wrap that in an accessor method? How do you generalize that operation? set_ad_mode() to allow all the bits to be updated? alter_ad_mode() to allow some *virtualized* subset to be manipulated???
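To make the contrasted accessor styles concrete, a sketch only; the field/mask parameters and the idea of exposing the shadow are illustrative, not a recommendation:

   #include <cstdint>

   struct ad_mode {
      static std::uint32_t shadow;                     // option 1: expose the shadowed value itself
      static void set_ad_mode( std::uint32_t raw );    // option 2: caller replaces the whole mode word
      static void alter_ad_mode( std::uint32_t mask,
                                 std::uint32_t bits ); // option 3: caller touches one virtualized field
   };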
> - For the user, using only part of an abstraction is always risky. It
> means that the abstraction does not fit your needs, yet you still
> want to use a part of it? Maybe better throw it away entirely (and
> tell the author why!).
Maybe you started to use it and, later, discovered this shortcoming OF THE IMPLEMENTATION. Do you now go back and retract it from your design? Do you patch it? Or, do you perhaps not understand it well enough (hence the reason for adopting the library in the first place!) and *incorrectly* work around its limitations??
>> Don't get me wrong (OP) -- everything you can encapsulate is a
>> win! Just don't be surprised if you find yourself (or others)
>> "unwrapping" significant parts of your work in their applications.
>> If they can't easily work around your library when they *need* to
>> (because you didn't anticipate some particular need of theirs),
>> then your library will be a hindrance instead of a help.
>
> IMO one way to work around this problem is to use many small,
> interacting abstractions with clear and simple interfaces. This
> enables the user to throw away the one or few he does not like, but
> still use the others. LEGO (at least the old style) and Meccano are
> to be preferred over Playmobil.
But that assumes you have nice, cleanly partitioned bits of hardware that don't *share* any resources! I.e., each ADC has its own independent control, status and data registers. Each DMA channel, counter/timer, "MMU", etc.

In practice, bits get packed wherever the CPU designer can find some unused space in an "I/O" register. Pad and power restrictions will dictate which *combinations* of "I/O devices" and their respective capabilities can be employed at any given time ("Sorry, the ADC input is not available as that package pin is being used for outgoing serial port data").

Hardware is just too "messy" to expect it to fall into neat little, *orthogonal* arrangements -- esp as you go *down* in resource availability. I've faced this same problem trying to "virtualize" I/O devices so "applications" can control them directly (without "privilege"). It only works on very specific processors and with very specific constraints.

Each of these issues limits the applicability of a library such as yours. Please don't get me wrong -- I am not trying to discourage or dissuade you. Rather, trying to point out that there are lots of permutations of hardware out there and trying to force them into a nice, cleanly partitioned view is likely to be disappointing.
Hi Dimiter,

[We've finally touched 0C!]

On 12/8/2013 3:18 AM, dp wrote:
> On Sunday, December 8, 2013 8:35:31 AM UTC+2, Clifford Heath wrote:
>> A tool is only as good as the people who use it. Especially a sharp tool.
>
> The tool user, the tool itself or both can be the limiting factor.
> I have yet to encounter someone to match my efficiency using any tool
> compared to me using my (own) tools, for example (because
> of my tools, not because of me being that better, obviously).
I think this is one of the reasons why so many software people are also "tool designers/builders". Often, off-the-shelf tools are ineffective (or, inefficient) at addressing a particular problem. Or, are a poor fit for how the user (developer) *wants* to apply them (this is particularly true of tools that *impose* a certain style of usage: do this, *then* do this, and, finally, do *that*).

When I was in school, I earned pin money repairing machines at an arcade (predated the "video game" revolution). One day, I was troubleshooting an old EM pin table (delightful kludges!). An "old timer" (pinball mechanic) came by and started looking over my shoulder.

At the time, I was using a VOM to check coils, relay contacts (they often get highly pitted), bulb filaments, etc. The guy asked me what the meter was, how to use it, etc. So, I showed him: "For example, to check if this coil is 'open', I can put the leads across it and verify continuity, the approximate nominal resistance for a coil of this size, etc. (most coil problems being obvious opens)."

He reached into his pocket and pulled out a ~18 inch length of wire that looked like it was the first wire ever manufactured in the history of time! Knotted, insulation hardened and flaking off, etc. He touched one end to the coil I had been discussing, the other end to a nearby *energized* coil (i.e., so he knew V+ was present, there), watched the coil in question pull in and pronounced, "This one's good..."

I.e., for him, the wire was a suitable tool for that job. OTOH, had he encountered a coil that was only partially pulling in, he'd be hard pressed to see the high-ohmic path *feeding* the coil (bad set of contacts upstream) or a supply that was otherwise dragged down.

[There are ways to do this with "just a wire". But, it takes more steps and a different diagnostic approach]

Fitting the tool to its user is the key. If you intend a tool to be used in a different way than the user will *want* to use it, you've got an impedance mismatch :>
On 08/12/13 18:09, Vladimir Vassilevsky wrote:
> On 12/8/2013 12:35 AM, Clifford Heath wrote:
>> On 08/12/13 16:33, Vladimir Vassilevsky wrote:
>>> QT 5.x has ~100M runtime. And it is slow, too. This is price of GUI
>>> portability.
>
> What are you arguing to?
> What point are you trying to make?
Problems reading your own words? You wrote: "This is the price of GUI portability". You were wrong. Your example showed the price of using Qt, a price which, incidentally, is incredibly damaging and may explain why Nokia is in decline - so I agree. It is however not the fault of either C++ or of GUI portability, but of bad design.
On Sunday, December 8, 2013 1:55:59 PM UTC+2, Don Y wrote:
> Hi Dimiter,
>
> [We've finally touched 0C!]
We got switched from a mild autumn into a harsh winter overnight by the end of November... It is not a worst case winter yet (-10 to -20) but way harsher than last year.
> Fitting the tool to its user is the key. If you intend a tool to be
> used in a different way than the user will *want* to use it, you've
> got an impedance mismatch :>
Yes, this is a major part of it. Then when the tool/author combination gets around 20 years old more effects can be observed, too. The most obvious one being the fact that you keep on developing the tools/language to suit what you need; I have been lucky enough to not have to throw away much if anything written so far, so things do pile up.

A maybe less obvious one is that having to maintain/remember all that stuff you wrote over the last 20 years all the time (a few tens of megabytes of sources, about 1.5M lines (non multilined as C :-) in my case) tends to keep you alert and in good shape; I am not sure if this is any less important than anything else really.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI   http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/sets/72157600228621276/
On 08/12/13 10:59, Wouter van Ooijen wrote:
> - As the library author, the big question is why would this user need to bypass the abstraction? That points to a problem in the design, or maybe in the documentation.
Question for a library implementer developing a library that will work with more than one device: "does your library expose the union or the intersection of all the devices' capabilities?" Too many libraries don't have documentation saying which set of advantages/disadvantages they have chosen :(
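One way the "union" choice can be made explicit at compile time, as a sketch only; the trait name has_variable_resolution and both adc types are invented for this example:

   struct adc_fixed {
      static constexpr bool has_variable_resolution = false;
      static int  ad_get(){ return 0; }
   };

   struct adc_variable {
      static constexpr bool has_variable_resolution = true;
      static void ad_set_resolution( int ){}
      static int  ad_get(){ return 0; }
   };

   template< typename adc >
   int read_precise(){
      static_assert( adc::has_variable_resolution,
                     "this target's ADC has a fixed resolution" );
      adc::ad_set_resolution( 12 );
      return adc::ad_get();
   }

   int main(){
      return read_precise< adc_variable >();   // read_precise< adc_fixed >() would fail to compile
   }

An "intersection" library would simply not offer ad_set_resolution() at all; the documentation question above is which of the two the library has picked.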
