
C++, Ada, ...

Started by pozz April 17, 2021
On 18/04/21 21:17, David Brown wrote:
> On 18/04/2021 20:30, Tom Gardner wrote:
>> On 18/04/21 18:26, David Brown wrote:
>>> C++ has got a lot stronger at compile-time work in recent versions. Not
>>> only have templates got more powerful, but we've got "concepts" (named
>>> sets of features or requirements for types), constexpr functions (that
>>> can be evaluated at compile time or runtime), consteval functions (that
>>> must be evaluated at compile time), constinit data (that must have
>>> compile-time constant initialisers), etc. And constants determined at
>>> compile-time are not restricted to scalar or simple types.
>>
>> Sounds wordy ;)
>
> Have you looked at the C++ standards documents? There are more than a
> few words there!
No. I value my sanity.
> I'm not suggesting C++ is a perfect language - not by a long way. It
> has plenty of ugliness, and in this thread we've covered several points
> where Ada can do something neater and clearer than you can do it in C++.
>
> But it's a good thing that it has more ways for handling things at
> compile time. In many of my C projects, I have had Python code for
> pre-processing, for computing tables, and that kind of thing. With
> modern C++, these are no longer needed.
The useful question is not whether something is good, but whether there are better alternatives. "Better", of course, can include /anything/ relevant!
> An odd thing about the compile-time calculation features of C++ is that
> they came about partly by accident, or unforeseen side-effects. Someone
> discovered that templates with integer parameters could be used to do
> quite a lot of compile-time calculations. The code was /really/ ugly,
> slow to compile, limited in scope. But people were finding use for it.
> So the motivation for "constexpr" was that programmers were already
> doing compile-time calculations, and so it was best to let them do it in
> a nicer way.
Infamously, getting a valid C++ program that caused the compiler to generate the sequence of prime numbers during compilation came as an unpleasant /surprise/ to the C++ standards committee. The code is short; whether it is ugly is a matter of taste! https://en.wikibooks.org/wiki/C%2B%2B_Programming/Templates/Template_Meta-Programming#History_of_TMP
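For readers who have not seen the technique, here is a minimal sketch of the recursive-template style such programs use (this is not Unruh's original, which famously reported the primes through compiler diagnostics; the names here are purely illustrative):

    // Classic recursive template metaprogramming: the compiler evaluates
    // Factorial<5>::value while instantiating templates, long before run time.
    template <unsigned N>
    struct Factorial {
        static const unsigned value = N * Factorial<N - 1>::value;
    };

    template <>
    struct Factorial<0> {          // Base case stops the recursion.
        static const unsigned value = 1;
    };

    // Usable anywhere a compile-time constant is required (C++11 or later):
    static_assert(Factorial<5>::value == 120, "computed during compilation");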
On 19/04/2021 00:17, Tom Gardner wrote:
> On 18/04/21 21:17, David Brown wrote:
>> On 18/04/2021 20:30, Tom Gardner wrote:
>>> On 18/04/21 18:26, David Brown wrote:
>>>> C++ has got a lot stronger at compile-time work in recent versions. Not
>>>> only have templates got more powerful, but we've got "concepts" (named
>>>> sets of features or requirements for types), constexpr functions (that
>>>> can be evaluated at compile time or runtime), consteval functions (that
>>>> must be evaluated at compile time), constinit data (that must have
>>>> compile-time constant initialisers), etc. And constants determined at
>>>> compile-time are not restricted to scalar or simple types.
>>>
>>> Sounds wordy ;)
>>
>> Have you looked at the C++ standards documents? There are more than a
>> few words there!
>
> No. I value my sanity.
I have looked at it, and come back as sane as ever (apart from the pencils up my nose and underpants on my head). But I've worked up to it through many versions of the C standards. More seriously, if I need to look up any details of C or C++, I find <https://en.cppreference.com/w/> vastly more user-friendly.
>> I'm not suggesting C++ is a perfect language - not by a long way. It
>> has plenty of ugliness, and in this thread we've covered several points
>> where Ada can do something neater and clearer than you can do it in C++.
>>
>> But it's a good thing that it has more ways for handling things at
>> compile time. In many of my C projects, I have had Python code for
>> pre-processing, for computing tables, and that kind of thing. With
>> modern C++, these are no longer needed.
>
> The useful question is not whether something is good,
> but whether there are better alternatives. "Better",
> of course, can include /anything/ relevant!
Yes - and "better" is usually highly subjective. In the case of compile-time calculations, modern C++ is certainly better than older C++ versions or C (or, AFAIK, Ada). It can't do everything that I might do with external Python scripts - it can't do code generation, or make tables that depend on multiple source files, or make CRC checks for the final binary. But it can do some things that previously required external scripts, and that's nice.
>> An odd thing about the compile-time calculation features of C++ is that
>> they came about partly by accident, or unforeseen side-effects. Someone
>> discovered that templates with integer parameters could be used to do
>> quite a lot of compile-time calculations. The code was /really/ ugly,
>> slow to compile, limited in scope. But people were finding use for it.
>> So the motivation for "constexpr" was that programmers were already
>> doing compile-time calculations, and so it was best to let them do it in
>> a nicer way.
>
> Infamously, getting a valid C++ program that caused the
> compiler to generate the sequence of prime numbers during
> compilation came as an unpleasant /surprise/ to the C++
> standards committee.
I don't believe that the surprise was "unpleasant" - it's just something they hadn't considered. (I'm not even sure of that either - my feeling is that this is an urban myth, or at least a story exaggerated in the regular retelling.)
> The code is short; whether it is ugly is a matter of taste!
> https://en.wikibooks.org/wiki/C%2B%2B_Programming/Templates/Template_Meta-Programming#History_of_TMP
Template-based calculations were certainly very convoluted - they needed a functional programming style structure but with much more awkward syntax (and at the height of their "popularity", horrendous compiler error messages when you made a mistake - that too has improved greatly). And that is why constexpr (especially in latest versions) is so much better. Template-based calculations are a bit like trying to do calculations and data structures in LaTeX. It is all possible, but it doesn't roll off the tongue very easily. (I wonder if anyone else understands the pencil and underpants reference. I am sure Tom does.)
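For contrast with the template sketch earlier in the thread, the same compile-time factorial written as a constexpr function (a minimal sketch assuming C++14 or later) reads like ordinary code:

    // With constexpr and C++14's relaxed rules, the calculation is a normal
    // function that the compiler can evaluate at compile time.
    constexpr unsigned factorial(unsigned n) {
        unsigned result = 1;
        for (unsigned i = 2; i <= n; ++i) {
            result *= i;
        }
        return result;
    }

    static_assert(factorial(5) == 120, "evaluated during compilation");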
On 19/04/2021 00:09, Tom Gardner wrote:
> On 18/04/21 21:23, David Brown wrote:
>> On 18/04/2021 20:29, Tom Gardner wrote:
>>> On 18/04/21 18:26, David Brown wrote:
>>>> But some programmers have not changed, and the way they write C code has
>>>> not changed. Again, it's a question of whether you are comparing how
>>>> languages /could/ be used at their best, or how some people use them at
>>>> their worst, or guessing something in between.
>>>
>>> That is indeed a useful first question.
>>>
>>> A second question is "how easy is it to get intelligent
>>> but inexperienced programmers to avoid the worst features
>>> and use the languages well?" (We'll ignore all the programmers
>>> that shouldn't be given a keyboard :) )
>>>
>>> A third is "will that happen in company X's environment when
>>> they are extending code written by people that left 5 years
>>> ago?"
>>
>> All good questions.
>>
>> Another is what will happen to the company when the one person that
>> understood the language at all, leaves? With C++, you can hire another
>> qualified and experienced C++ programmer. (Okay, so you might have to
>> beat them over the head with an 8051 emulator until they stop using
>> std::vector and realise you don't want UCS2 encoding for your 2x20
>> character display, but they'll learn that soon enough.) With Ada, or
>> Forth, or any other minor language, you are scuppered.
>>
>> These are important considerations.
>
> They are important considerations, indeed.
>
> I'm unconvinced that it is practical to rely on hiring
> another C++ programmer that is /actually/ qualified
> and experienced - and wants to work on someone else's
> code.
You don't tell the new guy that he or she must maintain old code! You spring that as a surprise, once you've got them hooked on the quality of your coffee machine.
> > The "uncanny valley" is a major problem for any new tech, > languages included. > > We've all seen the next better mousetrap that turns > out to merely have exchanged swings and roundabouts.
Indeed. But code written in the latest fad language is not the worst. I've had to deal with code (for a PC) written in ancient versions of a proprietary "RAD tool" where the vendor will no longer sell the outdated version and the new tool version is not remotely compatible. I'd pick Ada over that any day of the week.
On 19/04/21 07:52, David Brown wrote:
> On 19/04/2021 00:17, Tom Gardner wrote:
>> Infamously, getting a valid C++ program that caused the
>> compiler to generate the sequence of prime numbers during
>> compilation came as an unpleasant /surprise/ to the C++
>> standards committee.
>
> I don't believe that the surprise was "unpleasant" - it's just something
> they hadn't considered. (I'm not even sure of that either - my feeling
> is that this is an urban myth, or at least a story exaggerated in the
> regular retelling.)
I was thinking of "unpleasant" /because/ it was a surprise. Bjarne dismissed the possibility before being stunned a couple of days later. Here's a google (mis)translation of the Erwin Unruh's account at http://www.erwin-unruh.de/meta.html Temple meta programming The template meta programming is a way to carry out calculations already in C ++ during the translation. This allows additional checks to be installed. This is particularly used to build efficient algorithms. In 2000, a special workshop was held for this purpose. This started with the C ++ standardization meeting in 1994 in Sandiego. Here is my personal memory: We discussed the possibilities of determining template arguments from a template. The question came up as to whether the inverse of a function could be determined. So if "i + 1 == 5" could be closed, that "i == 4". This was denied, but the question inspired me to the idea to calculate primes during the translation. The first version I crafted on Monday, but she was fundamentally wrong. Bjarne Strouffup said so something would not work in principle. This stacked my zeal, and so I finished the scaffolding of the program Wednesday afternoon. On Wednesday evening another work meeting was announced where I had some air. There I met Tom Pennello, and we put together. He had his notebook and we tapped my program briefly. After some crafts, the program ran. We made a run and printed program and error message. Then Tom came to the idea of &#8203;&#8203;taking a more complicated function. We chose the Ackermann function. After a few hours, this also ran and calculated the value of the Ackermann function during the translation. On Thursday I showed the term Bjarne. He was extremely stunned. I then made copies for all participants and officially distributed this curious program. I kept the whole thing for a joke. A few weeks later, I developed a proof that the template mechanism is turbine-complete. However, since this proof was quite dry, I just put it to the files. I still have the notes. On the occasion, I will tempt this time and provide here. Later, Todd Veldhuizen picked up the idea and published an article in the C ++ Report. This appeared in May 1995. He understood the possibilities behind the idea and put them in concrete metatograms that make something constructive. This article was the basis on which template meta programming was built. Although I gave the kick-off, but did not recognize the range of the idea. Erwin Unruh, 1. 1. 2002
> Template-based calculations are a bit like trying to do calculations and
> data structures in LaTeX. It is all possible, but it doesn't roll off
> the tongue very easily.
Being perverse can be fun, /provided/ it doesn't happen accidentally in everyday life.
On 19/04/2021 11:08, Tom Gardner wrote:
> On 19/04/21 07:52, David Brown wrote:
>> On 19/04/2021 00:17, Tom Gardner wrote:
>>> Infamously, getting a valid C++ program that caused the
>>> compiler to generate the sequence of prime numbers during
>>> compilation came as an unpleasant /surprise/ to the C++
>>> standards committee.
>>
>> I don't believe that the surprise was "unpleasant" - it's just something
>> they hadn't considered. (I'm not even sure of that either - my feeling
>> is that this is an urban myth, or at least a story exaggerated in the
>> regular retelling.)
>
> I was thinking of "unpleasant" /because/ it was a surprise.
> Bjarne dismissed the possibility before being stunned a
> couple of days later.
>
> Here's a google (mis)translation of Erwin Unruh's account at
> http://www.erwin-unruh.de/meta.html
>
> Temple meta programming
<snip for brevity>
>> Template-based calculations are a bit like trying to do calculations and
>> data structures in LaTeX. It is all possible, but it doesn't roll off
>> the tongue very easily.
>
> Being perverse can be fun, /provided/ it doesn't happen
> accidentally in everyday life.
Usually it is not a problem when you discover something has extra features or possibilities beyond what you imagined. About the only disadvantage of "turbine-complete" templates is that compilers need to have limits to how hard they will try to compile them - it would be quite inconvenient to have your compiler work for hours trying to calculate Ackermann(5, 2) before melting your processor. (I've done "perverted" stuff in LaTeX - but it wasn't an accident. Fortunately I don't have to do it /every/ day.)
On 2021-04-18 19:59, David Brown wrote:
> On 18/04/2021 17:48, Niklas Holsti wrote:
>> On 2021-04-18 13:53, David Brown wrote:
>>> On 17/04/2021 20:55, Tom Gardner wrote:
>>>> On 17/04/21 17:45, David Brown wrote:
>>>>> And most of
>>>>> the advantages of Ada (such as better typing) can be achieved in C++
>>>>> with less effort, and at the same time C++ can give additional
>>>>> safety on
>>>>> resources that is harder to get on Ada.
>>>>
>>>> Interesting. Could you give a quick outline of that?
>>>
>>> Which part?
>>>
>>> My understanding of Ada classes is that, like Pascal classes, you need
>>> to explicitly construct and destruct objects. This gives far greater
>>> scope for programmers to get things wrong than when they are handled
>>> automatically by the language.
>>
>> If you mean automatic allocation and deallocation of storage, Ada lets
>> you define types that have an "initializer" that is called
>> automatically, and can allocate memory if it needs to, and a "finalizer"
>> that is automatically called when leaving the scope of the object in
>> question. The finalizer does have to explicitly deallocate any
>> explicitly allocated and owned resources, and it may have to use
>> reference counting for that, for complex data structures.
>
> I had a little look for this (these discussions are great for inspiring
> learning!). The impression I got was that it was possible, but what
> takes a few lines of C++ (excluding whatever work must be done inside
> the constructor and destructor bodies) involves inheriting from a
> specific library type.
Yes, you must inherit from one of the types Ada.Finalization.Controlled or Ada.Finalization.Limited_Controlled when you create a type for which you can program an initializer and/or a finalizer. However, you can aggregate a component of such a type into some other composite type, and then that component's initializer and finalizer will be called automatically when any object of the containing composite type is constructed and destroyed.
> And you don't have automatic initialisation of
> subobjects and ancestors in a controlled order, nor automatic
> finalisation of them in the reverse order.
No, and yes. Subobjects (components) are automatically initialized before the composite is initialized (bottom-up), and are automatically finalized after the composite is finalized (top-down). But there is no automatic invocation of the initializer or finalizer of the parent class; that would have to be called explicitly (except in the case of an "extension aggregate" expression, where an object of the parent type is created and then extended to an object of the derived class).

The Ada initializer and finalizer concept is subtly different from the C++ constructor and destructor concept. In Ada, the construction and destruction are considered to happen implicitly and automatically. The construction step can assign some initial values that can be defined by default (pointers default to null, for example) or can be specified for the type of the component in question, or can be defined for that component explicitly. For example:

   type Down_Counter is range 0 .. 100 with Default_Value => 100;

   type Zero_Handler is access procedure;

   type Counter is record
      Count   : Down_Counter;       -- Implicit init to 100.
      Running : Boolean := False;   -- Explicit init.
      At_Zero : Zero_Handler;       -- Default init to null.
   end record;

Beyond that automatic construction step, the programmable initializer is used to perform further automatic activities that may further initialize the object, or may have some other effects. For example, we might want to automatically register every instance of a Counter (as above) with the kernel, and that would be done in the initializer. Conversely, the finalizer would then deregister the Counter, before the Counter is automatically destroyed (removed from the stack or from the heap).

So the Ada "initializer" is not like a C++ constructor, which in Ada corresponds more closely to a function returning an object of the class. An Ada "finalizer" is more similar to a C++ destructor, taking care of any clean-up that is needed before the object disappears.
> Let's take a little example. And since this is comp.arch.embedded,
> let's take a purely embedded example of disabling interrupts, rather
> than shunned dynamic memory allocations:
>
> static inline uint32_t disableGlobalInterrupts(void) {
>     uint32_t pri;
>     asm volatile(
>         "  mrs %[pri], primask\n\t" // Get old mask
>         "  cpsid i\n\t"             // Disable interrupts entirely
>         "  dsb"                     // Ensures that this takes effect before next
>                                     // instruction
>         : [pri] "=r" (pri) :: "memory");
>     return pri;
> }
>
> static inline void restoreGlobalInterrupts(uint32_t pri) {
>     asm volatile(
>         "  msr primask, %[pri]"     // Restore old mask
>         :: [pri] "r" (pri) : "memory");
> }
I won't try to write Ada equivalents of the above :-) though I have of course written much Ada code to manage and handle interrupts.
> class CriticalSectionLock {
> private :
>     uint32_t oldpri;
> public :
>     CriticalSectionLock() { oldpri = disableGlobalInterrupts(); }
>     ~CriticalSectionLock() { restoreGlobalInterrupts(oldpri); }
> };
Here is the same in Ada. I chose to derive from Limited_Controlled because that makes it illegal to assign a Critical_Section value from one object to another.

   -- Declaration of the type:

   type Critical_Section is new Ada.Finalization.Limited_Controlled
   with record
      old_pri : Interfaces.Unsigned_32;
   end record;

   overriding procedure Initialize (This : in out Critical_Section);
   overriding procedure Finalize   (This : in out Critical_Section);

   -- Implementation of the operations:

   procedure Initialize (This : in out Critical_Section)
   is begin
      This.old_pri := disableGlobalInterrupts;
   end Initialize;

   procedure Finalize (This : in out Critical_Section)
   is begin
      restoreGlobalInterrupts (This.old_pri);
   end Finalize;
> You can use it like this:
>
> bool compare_and_swap64(uint64_t * p, uint64_t old, uint64_t x)
> {
>     CriticalSectionLock lock;
>
>     if (*p != old) return false;
>     *p = x;
>     return true;
> }
   function Compare_and_Swap64 (
      p      : access Interfaces.Unsigned_64;
      old, x : in     Interfaces.Unsigned_64)
   return Boolean
   is
      Lock : Critical_Section;
   begin
      if p.all /= old then
         return False;
      else
         p.all := x;
         return True;
      end if;
   end Compare_and_Swap64;

(I think there should be a "volatile" spec for the "p" object, don't you?)
> This is the code compiled for a 32-bit Cortex-M device:
>
> <https://godbolt.org/z/7KM9M6Kcd>
>
> The use of the class here has no overhead compared to manually disabling
> and re-enabling interrupts.
>
> What would be the Ada equivalent of this class, and of the
> "compare_and_swap64" function?
See above. I don't have an Ada-to-Cortex-M compiler at hand to compare the target code, sorry.

But critical sections in Ada applications are more often written using the Ada "protected object" feature. Here is the same as a protected object "CS", with separate declaration and body as usual in Ada. Here I must write the operation as a procedure instead of a function, because protected objects have "single writer, multiple readers" semantics, and any function is considered a "reader" although it may have side effects:

   protected CS
      with Priority => System.Interrupt_Priority'Last
   is

      procedure Compare_and_Swap64 (
         p      : access Interfaces.Unsigned_64;
         old, x : in     Interfaces.Unsigned_64;
         result :    out Boolean);

   end CS;

   protected body CS
   is

      procedure Compare_and_Swap64 (
         p      : access Interfaces.Unsigned_64;
         old, x : in     Interfaces.Unsigned_64;
         result :    out Boolean)
      is begin
         result := p.all = old;
         if result then
            p.all := x;
         end if;
      end Compare_and_Swap64;

   end CS;

However, it would be more in the style of Ada to focus on the thing that is being "compared and swapped", so that "p" would be either a discriminant of the protected object, or a component of the protected object, instead of a parameter to the copy-and-swap operation. But it would look similar to the above.
>>> On the third hand (three hands are always useful for programming), the
>>> wordy nature of type conversions in Ada mean programmers would be
>>> tempted to take shortcuts and skip these extra types.
>>
>> Huh? A normal conversion in C is written "(newtype)expression", the same
>> in Ada is written "newtype(expression)". Exactly the same number of
>> characters, only the placement of the () is different. The C form might
>> even require an extra set of parentheses around it, to demarcate the
>> expression to be converted from any containing expression.
>>
>> Of course, in C you have implicit conversions between all kinds of
>> numerical types, often leading to a whole lot of errors... not only
>> apples+oranges, but also truncation or other miscomputation.
>
> C also makes explicit conversions wordy, yes. In C++, you can choose
> which conversions are explicit and which are implicit - done carefully,
> your safe conversions will be implicit and unsafe ones need to be explicit.
Ada does not have programmable implicit conversions, but one can override some innocuous operator, usually "+", to perform whatever conversions one wants. For example:

   function "+" (Item : Boolean) return Float
   is (if Item then 1.0 else 0.0);

or more directly

   function "+" (Item : Boolean) return Float
   is (Float (Boolean'Pos (Item)));
> (C++ suffers from its C heritage and backwards compatibility, meaning it
> can't fix things that were always implicit conversion. It's too late to
> make "int x = 123.4;" an error. The best C++ can do is add a new syntax
> with better safety - so "int y { 123 };" is fine but "int z { 123.4 };"
> is an error.)
Ada also has some warts, but perhaps not as easily illustrated.
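To make the point about choosing implicit versus explicit conversions concrete, here is a small hedged C++ sketch (the Celsius type and its members are invented purely for illustration):

    // The class author decides which conversions are implicit and which
    // must be spelled out at the call site.
    class Celsius {
    public:
        constexpr Celsius(double degrees) : value(degrees) {}          // implicit: considered safe
        explicit constexpr operator double() const { return value; }   // explicit: must be requested
    private:
        double value;
    };

    void demo() {
        Celsius t = 21.5;                   // fine: implicit construction allowed
        double d = static_cast<double>(t);  // conversion back must be explicit
        // double e = t;                    // would not compile: operator is explicit
        (void)d;
        int y { 123 };                      // fine
        // int z { 123.4 };                 // error: brace-init rejects narrowing
        (void)y;
    }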
On 19/04/21 11:15, David Brown wrote:
> (I've done "perverted" stuff in LaTeX - but it wasn't an accident. > Fortunately I don't have to do it/every/ day.)
We've all done that in one language or another!
On 2021-04-18 23:23, David Brown wrote:
> On 18/04/2021 20:29, Tom Gardner wrote:
>> On 18/04/21 18:26, David Brown wrote:
>>> But some programmers have not changed, and the way they write C code has
>>> not changed. Again, it's a question of whether you are comparing how
>>> languages /could/ be used at their best, or how some people use them at
>>> their worst, or guessing something in between.
>>
>> That is indeed a useful first question.
>>
>> A second question is "how easy is it to get intelligent
>> but inexperienced programmers to avoid the worst features
>> and use the languages well?" (We'll ignore all the programmers
>> that shouldn't be given a keyboard :) )
>>
>> A third is "will that happen in company X's environment when
>> they are extending code written by people that left 5 years
>> ago?"
>
> All good questions.
>
> Another is what will happen to the company when the one person that
> understood the language at all, leaves?
If the company trained only one person in the language, that was a stupid (risky) decision by the company, or they should not have started using that language at all.
> With C++, you can hire another
> qualified and experienced C++ programmer. (Okay, so you might have to
> beat them over the head with an 8051 emulator until they stop using
> std::vector and realise you don't want UCS2 encoding for your 2x20
> character display, but they'll learn that soon enough.) With Ada, or
> Forth, or any other minor language, you are scuppered.
During all my years (since about 1995) working on on-board SW for ESA spacecraft, the company hired one person with earlier experience in Ada, and that was I. All other hires working in Ada projects learned Ada on the job (and some became enthusiasts). Sadly, some of the large aerospace "prime" companies in Europe are becoming unwilling to accept subcontracted SW products in Ada, for the reason discussed: because their HR departments say that they cannot find programmers trained in Ada. Bah, a competent programmer will pick up the core concepts quickly, says I. Of course there are also training companies that offer Ada training courses.
On 4/19/2021 14:04, Niklas Holsti wrote:
> ....
> Sadly, some of the large aerospace "prime" companies in Europe are
> becoming unwilling to accept subcontracted SW products in Ada, for the
> reason discussed: because their HR departments say that they cannot find
> programmers trained in Ada. Bah, a competent programmer will pick up the
> core concepts quickly, says I.
This is valid not just for Ada. An experienced programmer will need days to adjust to this or that language. I guess most if not all of us have been through it.

Dimiter

======================================================
Dimiter Popoff, TGI             http://www.tgi-sci.com
======================================================
http://www.flickr.com/photos/didi_tgi/
On 19/04/2021 12:51, Niklas Holsti wrote:
> On 2021-04-18 19:59, David Brown wrote:
>> On 18/04/2021 17:48, Niklas Holsti wrote:
>>> On 2021-04-18 13:53, David Brown wrote:
>>>> On 17/04/2021 20:55, Tom Gardner wrote:
>>>>> On 17/04/21 17:45, David Brown wrote:
>>>>>> And most of
>>>>>> the advantages of Ada (such as better typing) can be achieved in C++
>>>>>> with less effort, and at the same time C++ can give additional
>>>>>> safety on
>>>>>> resources that is harder to get on Ada.
>>>>>
>>>>> Interesting. Could you give a quick outline of that?
>>>>
>>>> Which part?
>>>>
>>>> My understanding of Ada classes is that, like Pascal classes, you need
>>>> to explicitly construct and destruct objects. This gives far greater
>>>> scope for programmers to get things wrong than when they are handled
>>>> automatically by the language.
>>>
>>> If you mean automatic allocation and deallocation of storage, Ada lets
>>> you define types that have an "initializer" that is called
>>> automatically, and can allocate memory if it needs to, and a "finalizer"
>>> that is automatically called when leaving the scope of the object in
>>> question. The finalizer does have to explicitly deallocate any
>>> explicitly allocated and owned resources, and it may have to use
>>> reference counting for that, for complex data structures.
>>
>> I had a little look for this (these discussions are great for inspiring
>> learning!). The impression I got was that it was possible, but what
>> takes a few lines of C++ (excluding whatever work must be done inside
>> the constructor and destructor bodies) involves inheriting from a
>> specific library type.
>
> Yes, you must inherit from one of the types Ada.Finalization.Controlled
> or Ada.Finalization.Limited_Controlled when you create a type for which
> you can program an initializer and/or a finalizer.
>
> However, you can aggregate a component of such a type into some other
> composite type, and then that component's initializer and finalizer will
> be called automatically when any object of the containing composite type
> is constructed and destroyed.
>
>> And you don't have automatic initialisation of
>> subobjects and ancestors in a controlled order, nor automatic
>> finalisation of them in the reverse order.
>
> No, and yes. Subobjects (components) are automatically initialized
> before the composite is initialized (bottom-up), and are automatically
> finalized after the composite is finalized (top-down). But there is no
> automatic invocation of the initializer or finalizer of the parent
> class; that would have to be called explicitly (except in the case of an
> "extension aggregate" expression, where an object of the parent type is
> created and then extended to an object of the derived class).
OK. Am I right in assuming the subobjects here also need to inherit from the "Finalization" types individually, in order to be automatically initialised in order? Are there any overheads (other than in the source code) for all this inheriting? Ada (like C++) aims to be minimal overhead, AFAIUI, but it's worth checking.
> The Ada initializer and finalizer concept is subtly different from the
> C++ constructor and destructor concept. In Ada, the construction and
> destruction are considered to happen implicitly and automatically. The
> construction step can assign some initial values that can be defined by
> default (pointers default to null, for example) or can be specified for
> the type of the component in question, or can be defined for that
> component explicitly. For example:
>
>    type Down_Counter is range 0 .. 100 with Default_Value => 100;
>
>    type Zero_Handler is access procedure;
>
>    type Counter is record
>       Count   : Down_Counter;       -- Implicit init to 100.
>       Running : Boolean := False;   -- Explicit init.
>       At_Zero : Zero_Handler;       -- Default init to null.
>    end record;
>
> Beyond that automatic construction step, the programmable initializer is
> used to perform further automatic activities that may further initialize
> the object, or may have some other effects. For example, we might want
> to automatically register every instance of a Counter (as above) with
> the kernel, and that would be done in the initializer. Conversely, the
> finalizer would then deregister the Counter, before the Counter is
> automatically destroyed (removed from the stack or from the heap).
>
> So the Ada "initializer" is not like a C++ constructor, which in Ada
> corresponds more closely to a function returning an object of the class.
>
> An Ada "finalizer" is more similar to a C++ destructor, taking care of
> any clean-up that is needed before the object disappears.
C++ gives you the choice. You can do work in a constructor, or you can leave it as a minimal (often automatically generated) function. You can give default values to members. You can add "initialise" member functions as you like. You can have "factory functions" that generate instances. This lets you structure the code and split up functionality in whatever way suits your requirements.

For a well-structured class, the key point is that a constructor will always establish the class invariant. Any publicly accessible function will assume that invariant, and maintain it. Private functions might temporarily break the invariant - these are only accessible by code that "knows what it is doing". And the destructor will always clean up after the object, recycling any resources used.

Having C++ style constructors is not a requirement for having control of the class invariant, but they do make it more convenient and more efficient (both at run-time and in the source code) than separate minimal constructors (or default values) and initialisers.
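As a hedged illustration of "the constructor establishes the invariant" (the class and its members are invented for this sketch, not taken from the thread):

    // A bounded counter whose invariant is 0 <= count_ <= limit_ and limit_ >= 1.
    class BoundedCounter {
    public:
        // The constructor establishes the invariant; a zero limit is clamped
        // to 1 in this sketch rather than producing a half-constructed object.
        explicit BoundedCounter(unsigned limit)
            : limit_(limit == 0 ? 1 : limit), count_(0) {}

        // Public member functions assume the invariant on entry and keep it on exit.
        void increment() {
            if (count_ < limit_) {
                ++count_;
            }
        }

        unsigned value() const { return count_; }

    private:
        unsigned limit_;
        unsigned count_;
    };

    // A "factory function" is an alternative place to put checking/set-up work.
    inline BoundedCounter make_percentage_counter() {
        return BoundedCounter(100);
    }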
>> Let's take a little example. And since this is comp.arch.embedded,
>> let's take a purely embedded example of disabling interrupts, rather
>> than shunned dynamic memory allocations:
>>
>> static inline uint32_t disableGlobalInterrupts(void) {
>>     uint32_t pri;
>>     asm volatile(
>>         "  mrs %[pri], primask\n\t" // Get old mask
>>         "  cpsid i\n\t"             // Disable interrupts entirely
>>         "  dsb"                     // Ensures that this takes effect before next
>>                                     // instruction
>>         : [pri] "=r" (pri) :: "memory");
>>     return pri;
>> }
>>
>> static inline void restoreGlobalInterrupts(uint32_t pri) {
>>     asm volatile(
>>         "  msr primask, %[pri]"     // Restore old mask
>>         :: [pri] "r" (pri) : "memory");
>> }
>
> I won't try to write Ada equivalents of the above :-) though I have of
> course written much Ada code to manage and handle interrupts.
I think Ada has built-in (or standard library) support for critical sections, does it not? But this is just an example, not necessarily something that would be directly useful. Obviously the code above is highly target-specific.
>> class CriticalSectionLock {
>> private :
>>     uint32_t oldpri;
>> public :
>>     CriticalSectionLock() { oldpri = disableGlobalInterrupts(); }
>>     ~CriticalSectionLock() { restoreGlobalInterrupts(oldpri); }
>> };
>
> Here is the same in Ada. I chose to derive from Limited_Controlled
> because that makes it illegal to assign a Critical_Section value from
> one object to another.
>
>    -- Declaration of the type:
>
>    type Critical_Section is new Ada.Finalization.Limited_Controlled
>    with record
>       old_pri : Interfaces.Unsigned_32;
>    end record;
>
>    overriding procedure Initialize (This : in out Critical_Section);
>    overriding procedure Finalize   (This : in out Critical_Section);
>
>    -- Implementation of the operations:
>
>    procedure Initialize (This : in out Critical_Section)
>    is begin
>       This.old_pri := disableGlobalInterrupts;
>    end Initialize;
>
>    procedure Finalize (This : in out Critical_Section)
>    is begin
>       restoreGlobalInterrupts (This.old_pri);
>    end Finalize;
Are "Initialize" and "Finalize" overloaded global procedures, or is this the syntax always used for member functions?
>> You can use it like this:
>>
>> bool compare_and_swap64(uint64_t * p, uint64_t old, uint64_t x)
>> {
>>     CriticalSectionLock lock;
>>
>>     if (*p != old) return false;
>>     *p = x;
>>     return true;
>> }
>
>    function Compare_and_Swap64 (
>       p      : access Interfaces.Unsigned_64;
>       old, x : in     Interfaces.Unsigned_64)
>    return Boolean
>    is
>       Lock : Critical_Section;
>    begin
>       if p.all /= old then
>          return False;
>       else
>          p.all := x;
>          return True;
>       end if;
>    end Compare_and_Swap64;
>
> (I think there should be a "volatile" spec for the "p" object, don't you?)
It might be logical to make it volatile, but the code would not be different (the inline assembly has memory clobbers already, which force the memory accesses to be carried out without re-arrangements). But adding "volatile" would do no harm, and let the user of the function pass a volatile pointer.

The Ada and C++ code is basically the same here, which is nice. How would it look with block scope?

extern int bar(int x);

int foo(volatile int * p, int x, int y) {
    int u = bar(x);
    {
        CriticalSectionLock lock;
        *p += u;
    }
    int v = bar(y);
    return v;
}

The point of this example is that the "*p += u;" line should be within the calls to disableGlobalInterrupts and restoreGlobalInterrupts, but the calls to "bar" should be outside. This requires the lifetime of the lock variable to be more limited.
>> This is the code compiled for a 32-bit Cortex-M device:
>>
>> <https://godbolt.org/z/7KM9M6Kcd>
>>
>> The use of the class here has no overhead compared to manually disabling
>> and re-enabling interrupts.
>>
>> What would be the Ada equivalent of this class, and of the
>> "compare_and_swap64" function?
>
> See above. I don't have an Ada-to-Cortex-M compiler at hand to compare
> the target code, sorry.
godbolt.org has Ada and gnat 10.2 too, but only for x86-64. The enable/restore interrupt functions could be changed to simply reading and writing a volatile int. Then you could compare the outputs of Ada and C++ for x86-64. If you have the time and inclination, it might be fun to see.
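A minimal sketch of that substitution (purely illustrative; fake_primask and the function bodies are invented stand-ins for the Cortex-M intrinsics, just so the example compiles for x86-64):

    #include <cstdint>

    // Stand-in for the PRIMASK register, so the class can be compiled and
    // compared against the Ada version on a non-ARM target.
    volatile std::uint32_t fake_primask;

    static inline std::uint32_t disableGlobalInterrupts(void) {
        std::uint32_t pri = fake_primask;   // "read the old mask"
        fake_primask = 1;                   // "disable interrupts"
        return pri;
    }

    static inline void restoreGlobalInterrupts(std::uint32_t pri) {
        fake_primask = pri;                 // "restore the old mask"
    }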
> But critical sections in Ada applications are more often written using
> the Ada "protected object" feature. Here is the same as a protected
> object "CS", with separate declaration and body as usual in Ada. Here I
> must write the operation as a procedure instead of a function, because
> protected objects have "single writer, multiple readers" semantics, and
> any function is considered a "reader" although it may have side effects:
>
>    protected CS
>       with Priority => System.Interrupt_Priority'Last
>    is
>
>       procedure Compare_and_Swap64 (
>          p      : access Interfaces.Unsigned_64;
>          old, x : in     Interfaces.Unsigned_64;
>          result :    out Boolean);
>
>    end CS;
>
>    protected body CS
>    is
>
>       procedure Compare_and_Swap64 (
>          p      : access Interfaces.Unsigned_64;
>          old, x : in     Interfaces.Unsigned_64;
>          result :    out Boolean)
>       is begin
>          result := p.all = old;
>          if result then
>             p.all := x;
>          end if;
>       end Compare_and_Swap64;
>
>    end CS;
>
> However, it would be more in the style of Ada to focus on the thing that
> is being "compared and swapped", so that "p" would be either a
> discriminant of the protected object, or a component of the protected
> object, instead of a parameter to the copy-and-swap operation. But it
> would look similar to the above.
I think the idea of language support for protected sections is nice, but I'd be concerned about how efficiently it would map to the requirements of the program and the target. Such things are often a bit "brute force", because they have to emphasise "always works" over efficiency. For example, if you have a 64-bit atomic type (on a 32-bit device), you don't /always/ need to disable interrupts around it. If you are already in an interrupt routine and know that no higher priority interrupt accesses the data, no locking is needed. If you are in main thread code but only read the data, maybe repeatedly reading it until you get two reads with the same value is more efficient. Such shortcuts must, of course, be used with care. In C and C++, there are atomic types (since C11/C++11). They require library support for different targets, which are (unfortunately) not always good. But certainly it is common in C++ to think of an atomic type here rather than atomic access functions, just as you describe in Ada.
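Since the thread turns to C/C++ atomics here, a minimal hedged sketch of what that looks like (the names are invented; whether the 64-bit atomic is genuinely lock-free depends on the target and library support, as noted above):

    #include <atomic>
    #include <cstdint>

    // A 64-bit shared counter on a 32-bit MCU. The library may implement it
    // with a lock or by disabling interrupts if the hardware cannot do better.
    std::atomic<std::uint64_t> ticks{0};

    bool ticks_are_lock_free() {
        // Lets the application check whether the target provides real
        // lock-free 64-bit atomics or emulates them.
        return ticks.is_lock_free();
    }

    std::uint64_t read_ticks() {
        // On targets without lock-free 64-bit atomics this load may internally
        // take a lock, much like the CriticalSectionLock example above.
        return ticks.load(std::memory_order_relaxed);
    }

    void add_ticks(std::uint64_t n) {
        ticks.fetch_add(n, std::memory_order_relaxed);
    }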
>>>> On the third hand (three hands are always useful for programming), the
>>>> wordy nature of type conversions in Ada mean programmers would be
>>>> tempted to take shortcuts and skip these extra types.
>>>
>>> Huh? A normal conversion in C is written "(newtype)expression", the same
>>> in Ada is written "newtype(expression)". Exactly the same number of
>>> characters, only the placement of the () is different. The C form might
>>> even require an extra set of parentheses around it, to demarcate the
>>> expression to be converted from any containing expression.
>>>
>>> Of course, in C you have implicit conversions between all kinds of
>>> numerical types, often leading to a whole lot of errors... not only
>>> apples+oranges, but also truncation or other miscomputation.
>>
>> C also makes explicit conversions wordy, yes. In C++, you can choose
>> which conversions are explicit and which are implicit - done carefully,
>> your safe conversions will be implicit and unsafe ones need to be
>> explicit.
>
> Ada does not have programmable implicit conversions, but one can
> override some innocuous operator, usually "+", to perform whatever
> conversions one wants. For example:
>
>    function "+" (Item : Boolean) return Float
>    is (if Item then 1.0 else 0.0);
>
> or more directly
>
>    function "+" (Item : Boolean) return Float
>    is (Float (Boolean'Pos (Item)));
>
>> (C++ suffers from its C heritage and backwards compatibility, meaning it
>> can't fix things that were always implicit conversion. It's too late to
>> make "int x = 123.4;" an error. The best C++ can do is add a new syntax
>> with better safety - so "int y { 123 };" is fine but "int z { 123.4 };"
>> is an error.)
>
> Ada also has some warts, but perhaps not as easily illustrated.
A language without warts would be boring! Thank you for the insights and updates to my Ada knowledge.
