
C++, Ada, ...

Started by pozz April 17, 2021
On 19/04/2021 13:04, Niklas Holsti wrote:
> On 2021-04-18 23:23, David Brown wrote:
>> On 18/04/2021 20:29, Tom Gardner wrote:
>>> On 18/04/21 18:26, David Brown wrote:
>>>> But some programmers have not changed, and the way they write C code
>>>> has not changed.  Again, it's a question of whether you are comparing
>>>> how languages /could/ be used at their best, or how some people use
>>>> them at their worst, or guessing something in between.
>>>
>>> That is indeed a useful first question.
>>>
>>> A second question is "how easy is it to get intelligent
>>> but inexperienced programmers to avoid the worst features
>>> and use the languages well?" (We'll ignore all the programmers
>>> that shouldn't be given a keyboard :) )
>>>
>>> A third is "will that happen in company X's environment when
>>> they are extending code written by people that left 5 years
>>> ago?"
>>
>> All good questions.
>>
>> Another is what will happen to the company when the one person that
>> understood the language at all, leaves?
>
> If the company trained only one person in the language, that was a
> stupid (risky) decision by the company, or they should not have started
> using that language at all.
>
>> With C++, you can hire another qualified and experienced C++
>> programmer.  (Okay, so you might have to beat them over the head with
>> an 8051 emulator until they stop using std::vector and realise you
>> don't want UCS2 encoding for your 2x20 character display, but they'll
>> learn that soon enough.)  With Ada, or Forth, or any other minor
>> language, you are scuppered.
>
> During all my years (since about 1995) working on on-board SW for ESA
> spacecraft, the company hired one person with earlier experience in Ada,
> and that was I. All other hires working in Ada projects learned Ada on
> the job (and some became enthusiasts).
>
> Sadly, some of the large aerospace "prime" companies in Europe are
> becoming unwilling to accept subcontracted SW products in Ada, for the
> reason discussed: because their HR departments say that they cannot find
> programmers trained in Ada. Bah, a competent programmer will pick up the
> core concepts quickly, says I.
I agree with you there. But so many people learn "programming in C++" or "programming in Java", rather than learning "programming".
> Of course there are also training companies that offer Ada training
> courses.
On 19.4.21 9.52, David Brown wrote:
> On 19/04/2021 00:17, Tom Gardner wrote:
>> On 18/04/21 21:17, David Brown wrote:
>>> On 18/04/2021 20:30, Tom Gardner wrote:
>>>> On 18/04/21 18:26, David Brown wrote:
>>>>> C++ has got a lot stronger at compile-time work in recent versions.
>>>>> Not only have templates got more powerful, but we've got "concepts"
>>>>> (named sets of features or requirements for types), constexpr
>>>>> functions (that can be evaluated at compile time or runtime),
>>>>> consteval functions (that must be evaluated at compile time),
>>>>> constinit data (that must have compile-time constant initialisers),
>>>>> etc.  And constants determined at compile-time are not restricted to
>>>>> scalar or simple types.
>>>>
>>>> Sounds wordy ;)
>>>
>>> Have you looked at the C++ standards documents?  There are more than a
>>> few words there!
>>
>> No. I value my sanity.
>
> I have looked at it, and come back as sane as ever (apart from the
> pencils up my nose and underpants on my head). But I've worked up to it
> through many versions of the C standards.
>
> More seriously, if I need to look up any details of C or C++, I find
> <https://en.cppreference.com/w/> vastly more user-friendly.
>
>>> I'm not suggesting C++ is a perfect language - not by a long way.  It
>>> has plenty of ugliness, and in this thread we've covered several points
>>> where Ada can do something neater and clearer than you can do it in C++.
>>>
>>> But it's a good thing that it has more ways for handling things at
>>> compile time.  In many of my C projects, I have had Python code for
>>> pre-processing, for computing tables, and that kind of thing.  With
>>> modern C++, these are no longer needed.
>>
>> The useful question is not whether something is good,
>> but whether there are better alternatives. "Better",
>> of course, can include /anything/ relevant!
>
> Yes - and "better" is usually highly subjective.
>
> In the case of compile-time calculations, modern C++ is certainly better
> than older C++ versions or C (or, AFAIK, Ada). It can't do everything
> that I might do with external Python scripts - it can't do code
> generation, or make tables that depend on multiple source files, or make
> CRC checks for the final binary. But it can do some things that
> previously required external scripts, and that's nice.
>
>>> An odd thing about the compile-time calculation features of C++ is that
>>> they came about partly by accident, or as unforeseen side-effects.
>>> Someone discovered that templates with integer parameters could be used
>>> to do quite a lot of compile-time calculations.  The code was /really/
>>> ugly, slow to compile, and limited in scope.  But people were finding
>>> use for it.  So the motivation for "constexpr" was that programmers
>>> were already doing compile-time calculations, and so it was best to let
>>> them do it in a nicer way.
>>
>> Infamously, getting a valid C++ program that caused the
>> compiler to generate the sequence of prime numbers during
>> compilation came as an unpleasant /surprise/ to the C++
>> standards committee.
>
> I don't believe that the surprise was "unpleasant" - it's just something
> they hadn't considered. (I'm not even sure of that either - my feeling
> is that this is an urban myth, or at least a story exaggerated in the
> regular retelling.)
>
>> The code is short; whether it is ugly is a matter of taste!
>> https://en.wikibooks.org/wiki/C%2B%2B_Programming/Templates/Template_Meta-Programming#History_of_TMP
>
> Template-based calculations were certainly very convoluted - they needed
> a functional programming style structure but with much more awkward
> syntax (and at the height of their "popularity", horrendous compiler
> error messages when you made a mistake - that too has improved greatly).
> And that is why constexpr (especially in latest versions) is so much
> better.
>
> Template-based calculations are a bit like trying to do calculations and
> data structures in LaTeX. It is all possible, but it doesn't roll off
> the tongue very easily.
>
> (I wonder if anyone else understands the pencil and underpants
> reference. I am sure Tom does.)
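To make the compile-time table point above concrete, here is a minimal,
purely illustrative C++17 sketch (not code from the thread); the CRC-32
polynomial and the names are just an example of the kind of table one
might previously have generated with an external Python script:

#include <array>
#include <cstdint>

// One entry of the reflected CRC-32 table, computable at compile time.
constexpr std::uint32_t crc32_entry(std::uint32_t c) {
    for (int k = 0; k < 8; ++k) {
        c = (c & 1u) ? (0xEDB88320u ^ (c >> 1)) : (c >> 1);
    }
    return c;
}

// Build the whole 256-entry lookup table in the compiler; it ends up in
// read-only data (flash), with no start-up cost and no external script.
constexpr std::array<std::uint32_t, 256> make_crc32_table() {
    std::array<std::uint32_t, 256> t{};
    for (std::uint32_t i = 0; i < 256; ++i) {
        t[i] = crc32_entry(i);
    }
    return t;
}

constexpr auto crc32_table = make_crc32_table();
static_assert(crc32_table[0] == 0u, "sanity check, evaluated at compile time");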
I just wonder if templates and C++ would be valid for the IOCCC contest.
The example would be a good candidate.

-- 
-TV
On 19/04/2021 15:15, Tauno Voipio wrote:
> On 19.4.21 9.52, David Brown wrote:
>> On 19/04/2021 00:17, Tom Gardner wrote:
>>> The code is short; whether it is ugly is a matter of taste!
>>> https://en.wikibooks.org/wiki/C%2B%2B_Programming/Templates/Template_Meta-Programming#History_of_TMP
>>
>> Template-based calculations were certainly very convoluted - they needed
>> a functional programming style structure but with much more awkward
>> syntax (and at the height of their "popularity", horrendous compiler
>> error messages when you made a mistake - that too has improved greatly).
>> And that is why constexpr (especially in latest versions) is so much
>> better.
>>
>> Template-based calculations are a bit like trying to do calculations and
>> data structures in LaTeX.  It is all possible, but it doesn't roll off
>> the tongue very easily.
>>
>> (I wonder if anyone else understands the pencil and underpants
>> reference.  I am sure Tom does.)
>
> I just wonder if templates and C++ would be valid for the IOCCC contest.
> The example would be a good candidate.
I am sure that if the IOCCC were open to C++ entries, templates would be involved. But that particular example is not hard to follow IMHO. The style is more like functional programming, with recursion and pattern matching rather than loops and conditionals, so that might make it difficult to understand at first.
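To make the functional-programming comparison concrete, here is a small
illustrative sketch (not the example from the linked page) of the template
style: recursion plus a specialisation acting as the "base case" pattern,
instead of a loop:

// Classic compile-time factorial via template recursion (C++11 or later).
template <unsigned N>
struct Factorial {
    // "Recursive call": instantiating Factorial<N> instantiates Factorial<N - 1>.
    static const unsigned value = N * Factorial<N - 1>::value;
};

template <>
struct Factorial<0> {
    // The specialisation plays the role of pattern matching on the base case.
    static const unsigned value = 1;
};

static_assert(Factorial<5>::value == 120, "evaluated entirely at compile time");

The same calculation as a constexpr function would be an ordinary loop, which
is much of the reason constexpr displaced this style.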
On 2021-04-19 15:22, David Brown wrote:
> On 19/04/2021 12:51, Niklas Holsti wrote:
>> On 2021-04-18 19:59, David Brown wrote:
>>> On 18/04/2021 17:48, Niklas Holsti wrote:
>>>> On 2021-04-18 13:53, David Brown wrote:
>>>>> On 17/04/2021 20:55, Tom Gardner wrote:
>>>>>> On 17/04/21 17:45, David Brown wrote:
>>>>>>> And most of the advantages of Ada (such as better typing) can be
>>>>>>> achieved in C++ with less effort, and at the same time C++ can give
>>>>>>> additional safety on resources that is harder to get on Ada.
>>>>>>
>>>>>> Interesting. Could you give a quick outline of that?
>>>>>
>>>>> Which part?
>>>>>
>>>>> My understanding of Ada classes is that, like Pascal classes, you need
>>>>> to explicitly construct and destruct objects.  This gives far greater
>>>>> scope for programmers to get things wrong than when they are handled
>>>>> automatically by the language.
>>>>
>>>> If you mean automatic allocation and deallocation of storage, Ada lets
>>>> you define types that have an "initializer" that is called
>>>> automatically, and can allocate memory if it needs to, and a
>>>> "finalizer" that is automatically called when leaving the scope of the
>>>> object in question. The finalizer does have to explicitly deallocate
>>>> any explicitly allocated and owned resources, and it may have to use
>>>> reference counting for that, for complex data structures.
>>>
>>> I had a little look for this (these discussions are great for inspiring
>>> learning!).  The impression I got was that it was possible, but what
>>> takes a few lines of C++ (excluding whatever work must be done inside
>>> the constructor and destructor bodies) involves inheriting from a
>>> specific library type.
>>
>> Yes, you must inherit from one of the types Ada.Finalization.Controlled
>> or Ada.Finalization.Limited_Controlled when you create a type for which
>> you can program an initializer and/or a finalizer.
>>
>> However, you can aggregate a component of such a type into some other
>> composite type, and then that component's initializer and finalizer will
>> be called automatically when any object of the containing composite type
>> is constructed and destroyed.
>>
>>> And you don't have automatic initialisation of
>>> subobjects and ancestors in a controlled order, nor automatic
>>> finalisation of them in the reverse order.
>>
>> No, and yes. Subobjects (components) are automatically initialized
>> before the composite is initialized (bottom-up), and are automatically
>> finalized after the composite is finalized (top-down). But there is no
>> automatic invocation of the initializer or finalizer of the parent
>> class; that would have to be called explicitly (except in the case of an
>> "extension aggregate" expression, where an object of the parent type is
>> created and then extended to an object of the derived class).
>
> OK.  Am I right in assuming the subobjects here also need to inherit
> from the "Finalization" types individually, in order to be automatically
> initialised in order?
Yes, if they need more initialization than provided by the automatic
"construction" step (Default_Value etc.).
> > Are there any overheads (other than in the source code) for all this > inheriting? Ada (like C++) aims to be minimal overhead, AFAIUI, but its > worth checking. >
If the compiler sees the actual type of an object that needs finalization (as in the critical-section example) it can generate direct calls to Initialize and Finalize without dispatching. If the object is polymorphic (what in Ada is called a "class") the calls must go through a dispatch table according to the actual type of the object.
>> So the Ada "initializer" is not like a C++ constructor, which in Ada >> corresponds more closely to a function returning an object of the class. >> >> An Ada "finalizer" is more similar to a C++ destructor, taking care of >> any clean-up that is needed before the object disappears. >> > > C++ gives you the choice. You can do work in a constructor, or you can > leave it as a minimal (often automatically generated) function. You can > give default values to members. You can add "initialise" member > functions as you like. You can have "factory functions" that generate > instances. This lets you structure the code and split up functionality > in whatever way suits your requirements.
So, just as in Ada.
> For a well-structured class, the key point is that a constructor will
> always establish the class invariant. Any publicly accessible function
> will assume that invariant, and maintain it. Private functions might
> temporarily break the invariant - these are only accessible by code that
> "knows what it is doing". And the destructor will always clean up after
> the object, recycling any resources used.
>
> Having C++ style constructors is not a requirement for having control
> of the class invariant, but they do make it more convenient and more
> efficient (both at run-time and in the source code) than separate
> minimal constructors (or default values) and initialisers.
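As a purely illustrative C++ sketch of the constructor/invariant idea above
(the names are invented, and since this example owns no external resource
the implicitly generated destructor has nothing to release):

#include <cstdint>

// Invariant: head < N and count <= N, so every access stays in bounds.
class ByteQueue {
private:
    static const unsigned N = 32;
    std::uint8_t data[N];
    unsigned head;    // index of the oldest element
    unsigned count;   // number of stored elements
public:
    ByteQueue() : head(0), count(0) {}    // constructor establishes the invariant
    bool push(std::uint8_t b) {
        if (count == N) return false;     // refuse rather than break the invariant
        data[(head + count) % N] = b;
        ++count;
        return true;                      // invariant still holds on exit
    }
    bool pop(std::uint8_t & b) {
        if (count == 0) return false;
        b = data[head];
        head = (head + 1) % N;
        --count;
        return true;
    }
};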
In Ada one can write the preconditions, invariants and postconditions in the source code itself, with standard "aspects" and Ada Boolean expressions, and have them either checked at run-time or verified by static analysis/proof.
>>> Let's take a little example.  And since this is comp.arch.embedded,
>>> let's take a purely embedded example of disabling interrupts, rather
>>> than shunned dynamic memory allocations:
>>>
>>> static inline uint32_t disableGlobalInterrupts(void) {
>>>     uint32_t pri;
>>>     asm volatile(
>>>         "  mrs %[pri], primask\n\t" // Get old mask
>>>         "  cpsid i\n\t"             // Disable interrupts entirely
>>>         "  dsb"                     // Ensures that this takes effect before next
>>>                                     // instruction
>>>         : [pri] "=r" (pri) :: "memory");
>>>     return pri;
>>> }
>>>
>>> static inline void restoreGlobalInterrupts(uint32_t pri) {
>>>     asm volatile(
>>>         "  msr primask, %[pri]"     // Restore old mask
>>>         :: [pri] "r" (pri) : "memory");
>>> }
>>
>> I won't try to write Ada equivalents of the above :-) though I have of
>> course written much Ada code to manage and handle interrupts.
>
> I think Ada has built-in (or standard library) support for critical
> sections, does it not?
Yes, "protected objects". See below.
> But this is just an example, not necessarily something that would be
> directly useful.  Obviously the code above is highly target-specific.
>
>>> class CriticalSectionLock {
>>> private :
>>>     uint32_t oldpri;
>>> public :
>>>     CriticalSectionLock() { oldpri = disableGlobalInterrupts(); }
>>>     ~CriticalSectionLock() { restoreGlobalInterrupts(oldpri); }
>>> };
>>
>> Here is the same in Ada. I chose to derive from Limited_Controlled
>> because that makes it illegal to assign a Critical_Section value from
>> one object to another.
>>
>>    -- Declaration of the type:
>>
>>    type Critical_Section is new Ada.Finalization.Limited_Controlled
>>    with record
>>       old_pri : Interfaces.Unsigned_32;
>>    end record;
>>
>>    overriding procedure Initialize (This : in out Critical_Section);
>>    overriding procedure Finalize   (This : in out Critical_Section);
>>
>>    -- Implementation of the operations:
>>
>>    procedure Initialize (This : in out Critical_Section)
>>    is begin
>>       This.old_pri := disableGlobalInterrupts;
>>    end Initialize;
>>
>>    procedure Finalize (This : in out Critical_Section)
>>    is begin
>>       restoreGlobalInterrupts (This.old_pri);
>>    end Finalize;
>
> Are "Initialize" and "Finalize" overloaded global procedures, or is this
> the syntax always used for member functions?
They are operations ("member functions") of the Ada.Finalization.Limited_Controlled type, that are null (do nothing) for that (base) type, and here we override them for the derived Critical_Section type, to replace the inherited null operations. The "overriding" keyword is optional (an unfortunate wart from history).
>>> You can use it like this:
>>>
>>> bool compare_and_swap64(uint64_t * p, uint64_t old, uint64_t x)
>>> {
>>>     CriticalSectionLock lock;
>>>
>>>     if (*p != old) return false;
>>>     *p = x;
>>>     return true;
>>> }
>>
>>    function Compare_and_Swap64 (
>>       p      : access Interfaces.Unsigned_64;
>>       old, x : in     Interfaces.Unsigned_64)
>>    return Boolean
>>    is
>>       Lock : Critical_Section;
>>    begin
>>       if p.all /= old then
>>          return False;
>>       else
>>          p.all := x;
>>          return True;
>>       end if;
>>    end Compare_and_Swap64;
>>
>> (I think there should be a "volatile" spec for the "p" object, don't you?)
>
> It might be logical to make it volatile, but the code would not be
> different (the inline assembly has memory clobbers already, which force
> the memory accesses to be carried out without re-arrangements).
So you are relying on the C++ compiler actually respecting the "inline" directive? Are C++ compilers required to do that?
> But adding "volatile" would do no harm, and let the user of the > function pass a volatile pointer. > > The Ada and C++ code is basically the same here, which is nice.
(Personally I dislike this style of critical sections. I think it is a confusing mis-use of local variables. Its only merit is that it ensures that the finalization occurs even in the case of an exception or other abnormal exit from the critical section.)
> How would it look with block scope?
>
> extern int bar(int x);
> int foo(volatile int * p, int x, int y) {
>     int u = bar(x);
>     {
>         CriticalSectionLock lock;
>         *p += z;
>     }
>     int v = bar(y);
>     return v;
> }
>
> The point of this example is that the "*p += z;" line should be within
> the calls to disableGlobalInterrupts and restoreGlobalInterrupts, but
> the calls to "bar" should be outside.  This requires the lifetime of the
> lock variable to be more limited.
Much as you would expect; the block is

   declare
      Lock : Critical_Section;
   begin
      p.all := p.all + z;
   end;

(In the next Ada standard -- probably Ada 2022 -- one can write such
updating assignments more briefly, as

   p.all := @ + z;

but the '@' can be anywhere in the right-hand-side expression, in one or
more places, which is more flexible than the C/C++ combined
assignment-operations like "+=".)
>> But critical sections in Ada applications are more often written >> using the Ada "protected object" feature. >> >> <snip PO example> >>> > I think the idea of language support for protected sections is nice, but > I'd be concerned about how efficiently it would map to the requirements > of the program and the target.
I haven't had any problems so far. Though in some highly stressed real-time applications I have resorted to communicating through shared atomic variables with lock-free protocols.
> Such things are often a bit "brute > force", because they have to emphasise "always works" over efficiency. > For example, if you have a 64-bit atomic type (on a 32-bit device), you > don't /always/ need to disable interrupts around it. If you are already > in an interrupt routine and know that no higher priority interrupt > accesses the data, no locking is needed.
Interrupt handlers in Ada are written as procedures in protected objects, with the protected object given the appropriate priority. Other operations in that same protected object can then be executed in automatic mutual exclusion with the interrupt handler. The protected object can also provide one or more "entry" operations with Boolean "guards" on which tasks can wait, for example to wait for an interrupt to have occurred. I find this works very well in practice.
> If you are in main thread code but only read the data, maybe > repeatedly reading it until you get two reads with the same value is > more efficient. Such shortcuts must, of course, be used with care.
Sure.
> In C and C++, there are atomic types (since C11/C++11). They require > library support for different targets, which are (unfortunately) not > always good. But certainly it is common in C++ to think of an atomic > type here rather than atomic access functions, just as you describe in Ada.
The next Ada standard includes several generic atomic operations on typed
objects in the standard package System.Atomic_Operations and its child
packages. The proposal is at

   http://www.ada-auth.org/standards/2xaarm/html/AA-C-6-1.html

Follow the "Next" arrows to see all of it.

In summary, it seems to me that the main difference we have found in this
discussion, so far, between Ada and C++ services in the areas we have
looked at, is that Ada makes it simpler to define one's own scalar types,
while C++ has more compile-time programmability like constexpr functions.
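For readers unfamiliar with the C++ side of this exchange, here is a
minimal, purely illustrative sketch of the C++11 std::atomic facility
mentioned above; the names and the interrupt framing are invented, and
whether a given atomic type is lock-free depends on the target and the
toolchain's library support (a 32-bit atomic is typically lock-free on a
32-bit Cortex-M, while a 64-bit one may fall back to library locking):

#include <atomic>
#include <cstdint>

// A counter shared between an interrupt handler and thread code.
std::atomic<std::uint32_t> tick_count{0};

void timer_isr() {                               // hypothetical interrupt handler
    tick_count.fetch_add(1u, std::memory_order_relaxed);
}

std::uint32_t read_ticks() {                     // safe to call from thread code
    return tick_count.load(std::memory_order_relaxed);
}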
On 17/04/2021 18:45, David Brown wrote:
> On 17/04/2021 17:48, pozz wrote:
>> What do you think about different languages than usual C for embedded
>> systems?
>>
>> I mean C++, Ada but also Python. Always more powerful embedded
>> processors are coming, so I expect new and modern languages will enter
>> in the embedded world.
>>
>> Hardware are cheaper and more powerful than ever, but software stays
>> expensive. New and modern languages could reduce the software cost,
>> because they are simpler than C and much more similar to desktop/server
>> programming paradigm.
>>
>> We embedded sw developers were lucky: electronics and technologies
>> change rapidly, but sw for embedded has changed slower than
>> desktop/mobile sw. Think of mobile app developers: maybe they already
>> changed IDEs, tools and languages ten times in a few years.
>> C language for embedded is today very similar than 10 years ago.
>>
>> However I think this situation for embedded developers will change in
>> the very next years. And we will be forced to learn modern technologies,
>> such as new languages and tools.
>> Is it ok for me to study and learn new things... but it will be more
>> difficult to acquire new skills for real jobs.
>>
>> What do you think?
>
> You should probably add Rust to your list - I think its popularity will
> increase.
>
> Python is great when you have the resources.  It's the language I use
> most on PC's and servers, and it is very common on embedded Linux
> systems (like Pi's, and industrial equivalents).  Micropython is
> sometimes found on smaller systems, such as ESP32 devices.
>
> Ada involves a fair amount of change to how you work, compared to C
> development.  (Disclaimer - I have only done a very small amount of Ada
> coding, and no serious projects.  But I have considered it as an
> option.)  I really don't see it becoming much more common, and outside
> of niche areas (defence, aerospace) it is very rare.  Programming in Ada
> often takes a lot more effort even for simple things, leading quickly to
> code that is so wordy that it is unclear what is going on.  And most of
> the advantages of Ada (such as better typing) can be achieved in C++
> with less effort, and at the same time C++ can give additional safety on
> resources that is harder to get on Ada.  (But Ada has some nice
> introspective features that C++ programmers currently only dream about.)
>
> C++ is not uncommon in embedded systems, and I expect to see more of it.
> I use it as my main embedded language now.  C++ gives more scope for
> getting things wrong in weird ways, and more scope for accidentally
> making your binary massively larger than you expect, but with care it
> makes it a lot easier to write clear code and safe code, where common C
> mistakes either can't happen or you get compile-time failures.
>
> It used to be the case that C++ compilers were expensive and poor
> quality, that the resulting code was very slow on common
> microcontrollers, and that the language didn't have much extra to offer
> small-systems embedded developers.  That, I think, has changed in all
> aspects.  I use gcc 10 with C++17 on Cortex-M devices, and features like
> templates, strong enumerations, std::array, and controlled automatic
> conversions make it easier to write good code.
What do you suggest for a poor C embedded developer that wants to try C++ on the next project? I would use gcc on Cortex-M MCUs.
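For illustration, here is a small sketch of two of the features mentioned
above (scoped "strong" enumerations and std::array), assuming C++17 on a
Cortex-M; all the names are invented for the example and the hardware
access is deliberately left out:

#include <array>
#include <cstdint>

enum class PinMode : std::uint8_t { Input, Output, Alternate, Analog };

// A scoped enumeration does not convert implicitly to int, so swapping a
// pin number and a mode is a compile-time error instead of a silent bug.
void set_pin_mode(unsigned pin, PinMode mode) {
    // Target-specific register writes would go here; omitted in this sketch.
    (void)pin;
    (void)mode;
}

// std::array knows its own size, can be constexpr, and works with range-for.
constexpr std::array<std::uint8_t, 4> led_pins = { 2, 3, 4, 5 };

void all_leds_to_output() {
    for (auto pin : led_pins) {
        set_pin_mode(pin, PinMode::Output);
    }
}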
pozz <pozzugno@gmail.com> writes:
> What do you suggest for a poor C embedded developer that wants to try > C++ on the next project?
I'm not sure what kind of answer you are looking for, but I recommend the
book "Effective Modern C++" by Scott Meyers. C++ is a mudball with many
layers of cruft, but it has improved tremendously over the past few
revisions. The book shows you how to do things the right way, using the
improvements instead of the cruft.
On 19/04/21 17:38, pozz wrote:

> What do you suggest for a poor C embedded developer that wants to try C++ on the > next project?
Choose a /very/ small project, and try to Get It Right (TM). When you think there might be a better set of implementation cliches and design strategies, refactor bits of your code to investigate them. Don't forget to use your favourite IDE to do the mechanics of that refactoring.
Niklas Holsti <niklas.holsti@tidorum.invalid> writes:
> In Ada... The construction step can assign some initial values that
> can be defined by default (pointers default to null, for example)...
>
>    type Zero_Handler is access procedure;
>
>    type Counter is record
>       ...
>       At_Zero : Zero_Handler;  -- Default init to null.
Wow, that is disappointing.  I had thought Ada access types were like C++
or ML references, i.e. they have to be initialized to point to a valid
object, so they can never be null.  I trust or at least hope that SPARK
has good ways to ensure that a given access variable is always valid.
Debugging null pointer exceptions is a standard time-waster in most
languages that have null pointers.  Is it that way in Ada as well?

Fwiw, I'm not a C++ expert but I do use it.  I try to write in a style
that avoids pointers (e.g. by using references instead), and still find
myself debugging stuff with invalid addresses that I think wouldn't
happen in Ada.  But I've never used Ada beyond some minor playing around.
It seems like a big improvement over C.  C++ it seems to me also improves
on C, but by going in a different direction than Ada.

I plan to spend some time on Rust pretty soon.  This is based on
impression rather than experience, but ISTM that a lot of Rust is
designed around managing dynamic memory allocation by ownership tracking,
like C++ unique_ptr on steroids built into the language.  That lets you
write big applications that heavily use dynamic allocation while avoiding
the usual malloc/free bugs and without using garbage collection.  Ada on
the other hand is built for high assurance embedded applications that
don't use dynamic allocation much, except maybe at program initialization
time.  So Rust and Ada aim to solve different problems.

I like to think it is reasonable to write the outer layers of complex
applications in garbage collected languages, with critical or realtime
parts written in something like Ada.  Tim Sweeney talks about this in his
old slide deck "The Next Mainstream Programming Language":

https://www.st.cs.uni-saarland.de//edu/seminare/2005/advanced-fp/docs/sweeny.pdf

The above url currently throws an expired-certificate warning but it is
ok to click past that.
Dimiter_Popoff <dp@tgi-sci.com> writes:
> On 4/19/2021 14:04, Niklas Holsti wrote:
>> ...their HR departments say that they cannot find programmers trained
>> in Ada. Bah, a competent programmer will pick up the core concepts
>> quickly, says I.
>
> This is valid not just for Ada. An experienced programmer will need days
> to adjust to this or that language. I guess most if not all of us have
> been through it.
No it's much worse than that.  First of all some languages are really
different and take considerable conceptual adjustment: it took me quite a
while as a C and Python programmer to become anywhere near clueful about
Haskell.  But understanding Haskell then demystified parts of C++ that
had made no sense to me at all.

Secondly, being competent in a language now means far more than the
language itself.  There is also a culture and a code corpus out there
which also have to be assimilated for each language.  E.g. Ruby is a very
simple language, but coming up to speed as a Ruby developer means getting
used to a decade of Rails hacks, ORM internals, 100's of "gems" (packages)
scattered over 100s of Github repositories, etc.  It's the same way with
Javascript and the NPM universe plus whatever framework-of-the-week your
project is using.  Python is not yet that bad, because it traditionally
had a "batteries included" ethic that tried to standardize more useful
functions than other languages did, but it seems to have given up on that
in the past few years.

Maybe things are better in the embedded world, but in the internet world
any significant application will have far too much internal functionality
(dealing with complex network protocols, file formats, etc) for the
developers to get anything done without bringing in a mass of external
dependencies.  A lot of dev work ISTM now is about understanding and
managing those dependencies, and also in connecting to a wider dev
community that you can exchange wisdom with.  "Computer science", such as
knowing how to balance binary trees, is now almost a useless subject.

(On the other hand, math in general, particularly probability, has become
a lot more useful.  This is kind of satisfying for me since I studied a
lot of it in school and then for many years never used it in programming.)
On 19/04/2021 18:16, Niklas Holsti wrote:
> On 2021-04-19 15:22, David Brown wrote: >> On 19/04/2021 12:51, Niklas Holsti wrote: >>> On 2021-04-18 19:59, David Brown wrote:
(I'm snipping for brevity - I appreciate your comments even though I've snipped many.)
> > In Ada one can write the preconditions, invariants and postconditions in > the source code itself, with standard "aspects" and Ada Boolean > expressions, and have them either checked at run-time or verified by > static analysis/proof. >
Yes, I like that for Ada. These are on the drawing board for C++, but it will be a while yet before they are in place.
>>> >>> (I think there should be a "volatile" spec for the "p" object, don't >>> you?) >> >> It might be logical to make it volatile, but the code would not be >> different (the inline assembly has memory clobbers already, which force >> the memory accesses to be carried out without re-arrangements). > > > So you are relying on the C++ compiler actually respecting the "inline" > directive? Are C++ compilers required to do that? >
No, it is not relying on the "inline" at all - it is relying on the semantics of the inline assembly code (which is compiler-specific, though several major compilers support the gcc inline assembly syntax). Compilers are required to support "inline" correctly, of course - but the keyword doesn't actually mean "generate this code inside the calling function". It is one of these historical oddities - it was originally conceived as a hint to the compiler for optimisation purposes, but what it /actually/ means is roughly "It's okay for there to be multiple definitions of this function in the program - I promise they will all do the same thing, so I don't mind which you use in any given case". The compiler is likely to generate the code inline as part of normal optimisation, but it would do that anyway.
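A tiny sketch of that point (the file and function names are invented):
marking the function "inline" is what makes it legal for the definition to
appear in every translation unit that includes the header, regardless of
whether the compiler actually expands the calls inline.

// pin_utils.h -- hypothetical header included from many .cpp files
#ifndef PIN_UTILS_H
#define PIN_UTILS_H

// "inline" permits one definition per translation unit; the linker merges
// them.  Whether calls are expanded inline is a separate optimisation
// decision made by the compiler.
inline unsigned pin_mask(unsigned pin) {
    return 1u << pin;
}

#endif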
> >> But adding "volatile" would do no harm, and let the user of the >> function pass a volatile pointer. > >> The Ada and C++ code is basically the same here, which is nice. > > > (Personally I dislike this style of critical sections. I think it is a > confusing mis-use of local variables. Its only merit is that it ensures > that the finalization occurs even in the case of an exception or other > abnormal exit from the critical section.)
Fair enough - styles are personal things. And they are heavily influenced by what is convenient or idiomatic in the language(s) we commonly use (and vice versa).
>> How would it look with block scope?
>>
>> extern int bar(int x);
>> int foo(volatile int * p, int x, int y) {
>>     int u = bar(x);
>>     {
>>         CriticalSectionLock lock;
>>         *p += z;
>>     }
>>     int v = bar(y);
>>     return v;
>> }
>>
>> The point of this example is that the "*p += z;" line should be within
>> the calls to disableGlobalInterrupts and restoreGlobalInterrupts, but
>> the calls to "bar" should be outside.  This requires the lifetime of the
>> lock variable to be more limited.
>
> Much as you would expect; the block is
>
>    declare
>       Lock : Critical_Section;
>    begin
>       p.all := p.all + z;
>    end;
Fair enough. (I expected there was some way to have smaller block-scope variables in Ada, though I didn't know how to write them. And it is not a given that the scope and the lifetime would be the same, though it looks like it is the case here.) As a matter of style, I really do not like the "declare all variables at the start of the block" style, standard in Pascal, C90 (or older), badly written (IMHO) newer C, and apparently also Ada. I much prefer to avoid defining variables until I know what value they should hold, at least initially. Amongst other things, it means I can be much more generous about declaring them as "const", there are almost no risks of using uninitialised data, and the smaller scope means it is easier to see all use of the variable.
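For contrast, a small invented C++ example of the declare-at-first-use
style described above: each variable appears only once a meaningful value
is known, so it can be const and its scope stays as small as possible.

static int scale_reading(int raw) {       // stand-in for a real scaling function
    return raw * 4;
}

int process(int raw, int limit) {
    const int scaled = scale_reading(raw);  // value known at the point of declaration
    if (scaled > limit) {
        return limit;                        // no half-initialised variable to misuse
    }
    const int adjusted = scaled + 1;
    return adjusted;
}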
> (In the next Ada standard -- probably Ada 2022 -- one can write such
> updating assignments more briefly, as
>
>    p.all := @ + z;
>
> but the '@' can be anywhere in the right-hand-side expression, in one or
> more places, which is more flexible than the C/C++ combined
> assignment-operations like "+=".)
It may be flexible, but I'm not convinced it is clearer nor that it would often be useful. But I guess that will be highly related to familiarity.
>>> But critical sections in Ada applications are more often written
>>> using the Ada "protected object" feature.
>>>
>>> <snip PO example>
>>
>> I think the idea of language support for protected sections is nice, but
>> I'd be concerned about how efficiently it would map to the requirements
>> of the program and the target.
>
> I haven't had any problems so far. Though in some highly stressed
> real-time applications I have resorted to communicating through shared
> atomic variables with lock-free protocols.
>
>> Such things are often a bit "brute force", because they have to
>> emphasise "always works" over efficiency.  For example, if you have a
>> 64-bit atomic type (on a 32-bit device), you don't /always/ need to
>> disable interrupts around it.  If you are already in an interrupt
>> routine and know that no higher priority interrupt accesses the data,
>> no locking is needed.
>
> Interrupt handlers in Ada are written as procedures in protected
> objects, with the protected object given the appropriate priority. Other
> operations in that same protected object can then be executed in
> automatic mutual exclusion with the interrupt handler. The protected
> object can also provide one or more "entry" operations with Boolean
> "guards" on which tasks can wait, for example to wait for an interrupt
> to have occurred. I find this works very well in practice.
>
>> If you are in main thread code but only read the data, maybe
>> repeatedly reading it until you get two reads with the same value is
>> more efficient.  Such shortcuts must, of course, be used with care.
>
> Sure.
>
>> In C and C++, there are atomic types (since C11/C++11).  They require
>> library support for different targets, which are (unfortunately) not
>> always good.  But certainly it is common in C++ to think of an atomic
>> type here rather than atomic access functions, just as you describe in
>> Ada.
>
> The next Ada standard includes several generic atomic operations on
> typed objects in the standard package System.Atomic_Operations and its
> child packages. The proposal is at
>
>    http://www.ada-auth.org/standards/2xaarm/html/AA-C-6-1.html
>
> Follow the "Next" arrows to see all of it.
>
> In summary, it seems to me that the main difference we have found in
> this discussion, so far, between Ada and C++ services in the areas we
> have looked at, is that Ada makes it simpler to define one's own scalar
> types, while C++ has more compile-time programmability like constexpr
> functions.
These are certainly example differences. In general it would appear that most things that can be written in one language could be written in the other in roughly the same way (given appropriate libraries or type definitions). And we can probably agree that both are better than plain old C in terms of expressibility and (in the right hands) writing safer code by reducing tedious and error-prone manual boilerplate.