
How to write a simple driver in bare metal systems: volatile, memory barrier, critical sections and so on

Started by pozz October 22, 2021
On 10/24/2021 4:14 AM, Dimiter_Popoff wrote:
>> Disable interrupts while accessing the fifo. you really have to.
>> alternatively you'll often get away not using a fifo at all,
>> unless you're blocking for a long while in some part of the code.
>
> Why would you do that. The fifo write pointer is only modified by
> the interrupt handler, the read pointer is only modified by the
> interrupted code. Has been done so for times immemorial.
The OP's code doesn't differentiate between FIFO full and empty. So, *he* may gain some advantage from disabling interrupts to ensure the character he is about to retrieve is not overwritten by an incoming character placed at that location (cuz he lets his FIFO wrap indiscriminately).

And, if the offsets ever got larger (wider) -- or became actual pointers -- then the possibility of PART of a value being updated on either "side" of an ISR is also possible.

And, there's nothing to say the OP has disclosed EVERYTHING that might be happening in his ISR (maintaining handshaking signals, flow control, etc.) which could compound the references (e.g., if you need to know that you have space for N characters remaining so you can signal the remote device to stop sending, then you're doing "pointer/offset arithmetic" and *acting* on the result).
> Although this thread is on how to wrestle a poor
> language to do what you want, sort of how to use a hammer on a screw
> instead of taking the screwdriver, there would be no need to
> mask interrupts with C either.
The "problem" with the language is that it gives the compiler the freedom to make EQUIVALENT changes to your code that you might not have foreseen or that might not have been consistent with your "style" -- yet do not alter the results. For example, you might want to write: x = <expr1> y = <expr2> just because of some "residual OCD" that makes you think in terms of "x before y". Yet, there may be no dependencies in those statements that *require* that ordering. So, why should the compiler be crippled to implementing them in that order if it has found a way to alter their order (or their actual content)? A correctly written compiler will follow a set of rules that it *knows* to be safe "code translations"; but many developers don't have a similar understanding of those; so the problem lies in the developer's skillset, not the compiler or language. After all, a programming language -- ANY programming language -- is just a vehicle for conveying our desires to the machine in a semi-unambiguous manner. I'd much rather *SAY*, "What are the roots of ax^2 + bx + c?" than have to implement an algorithmic solution, worry about cancellation, required precision, etc. (and, in some languages, you can do just that!)
On 10/24/2021 22:54, Don Y wrote:
> On 10/24/2021 4:14 AM, Dimiter_Popoff wrote:
>>> Disable interrupts while accessing the fifo. you really have to.
>>> alternatively you'll often get away not using a fifo at all,
>>> unless you're blocking for a long while in some part of the code.
>>
>> Why would you do that. The fifo write pointer is only modified by
>> the interrupt handler, the read pointer is only modified by the
>> interrupted code. Has been done so for times immemorial.
>
> The OP's code doesn't differentiate between FIFO full and empty.
So he should fix that first; there is no sane reason not to. Few things are simpler to do than that.
> So, *he* may gain some advantage from disabling interrupts to
> ensure the character he is about to retrieve is not overwritten
> by an incoming character, placed at that location (cuz he lets
> his FIFO wrap indiscriminately).
>
> And, if the offsets ever got larger (wider) -- or became actual
> pointers -- then the possibility of PART of a value being updated
> on either "side" of an ISR is also possible.
>
> And, there's nothing to say the OP has disclosed EVERYTHING
> that might be happening in his ISR (maintaining handshaking signals,
> flow control, etc.) which could compound the references
> (e.g., if you need to know that you have space for N characters
> remaining so you can signal the remote device to stop sending,
> then you're doing "pointer/offset arithmetic" and *acting* on the
> result)
Whatever handshakes he makes, there is no problem knowing whether the fifo is full - just check whether the position the write pointer will have after putting the next byte matches the read pointer at that moment. Like I said before, few things are simpler than that; can't imagine someone working as a programmer being stuck at *that*.
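[Editor's note: for readers following along, here is a minimal sketch of the scheme Dimiter describes - a single-producer/single-consumer ring FIFO where the ISR owns the write index and the main code owns the read index. The names and the size are made up for the example, and it assumes a single-core MCU where a one-byte index is read and written atomically:]

    #include <stdint.h>

    #define FIFO_SIZE 64u                 /* hypothetical size, > 1 */

    static volatile uint8_t fifo_buf[FIFO_SIZE];
    static volatile uint8_t wr;           /* written ONLY by the ISR       */
    static volatile uint8_t rd;           /* written ONLY by the main code */

    /* Called from the UART receive ISR; returns 0 if the FIFO is full. */
    int fifo_put(uint8_t c)
    {
        uint8_t next = (uint8_t)((wr + 1u) % FIFO_SIZE);
        if (next == rd)     /* next write position == read position: full */
            return 0;
        fifo_buf[wr] = c;
        wr = next;          /* advance wr LAST, publishing the byte */
        return 1;
    }

    /* Called from the interrupted (main) code; returns 0 if empty. */
    int fifo_get(uint8_t *c)
    {
        if (rd == wr)       /* equal indices: empty */
            return 0;
        *c = fifo_buf[rd];
        rd = (uint8_t)((rd + 1u) % FIFO_SIZE);
        return 1;
    }

[The full test deliberately sacrifices one slot, which is what distinguishes full from empty without a separate count - the ambiguity Don pointed out disappears, and no interrupt masking is needed.]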
>> Although this thread is on how to wrestle a poor
>> language to do what you want, sort of how to use a hammer on a screw
>> instead of taking the screwdriver, there would be no need to
>> mask interrupts with C either.
>
> The "problem" with the language is that it gives the compiler the freedom
> to make EQUIVALENT changes to your code that you might not have foreseen
> or that might not have been consistent with your "style" -- yet do not
> alter the results.
Don, let us not go into this. Just looking at the thread is enough to see it is about wrestling the language so it can be made some use of.
> After all, a programming language -- ANY programming language -- is just
> a vehicle for conveying our desires to the machine in a semi-unambiguous
> manner. I'd much rather *SAY*, "What are the roots of ax^2 + bx + c?"
> than have to implement an algorithmic solution, worry about cancellation,
> required precision, etc. (and, in some languages, you can do just that!)
Indeed you don't want to write how the equation is solved every time. This is why you can call it once you have it available. This is language independent.

Then solving expressions etc. is well within 1% of the effort in programming if the task at hand is going to take > 2 weeks; after that the programmer's time is wasted on wrestling the language, as demonstrated by this thread. Sadly almost everybody has accepted C as a standard - which makes it a very popular poor language. Similar to, say, Chinese: very popular, spoken by billions, yet where are its literary masterpieces. Being hieroglyph based there are none; you will have to look at alphabet based languages to find some.

======================================================
Dimiter Popoff, TGI             http://www.tgi-sci.com
======================================================
http://www.flickr.com/photos/didi_tgi/
On 10/24/2021 1:27 PM, Dimiter_Popoff wrote:
> On 10/24/2021 22:54, Don Y wrote:
>> On 10/24/2021 4:14 AM, Dimiter_Popoff wrote:
>>>> Disable interrupts while accessing the fifo. you really have to.
>>>> alternatively you'll often get away not using a fifo at all,
>>>> unless you're blocking for a long while in some part of the code.
>>>
>>> Why would you do that. The fifo write pointer is only modified by
>>> the interrupt handler, the read pointer is only modified by the
>>> interrupted code. Has been done so for times immemorial.
>>
>> The OP's code doesn't differentiate between FIFO full and empty.
>
> So he should fix that first, there is no sane reason why not.
> Few things are simpler to do than that.
Yes, I pointed that out earlier, to him. Why worry about what the compiler *might* do if you haven't sorted out what you really WANT to do?
>> So, *he* may gain some advantage from disabling interrupts to
>> ensure the character he is about to retrieve is not overwritten
>> by an incoming character, placed at that location (cuz he lets
>> his FIFO wrap indiscriminately).
>>
>> And, if the offsets ever got larger (wider) -- or became actual
>> pointers -- then the possibility of PART of a value being updated
>> on either "side" of an ISR is also possible.
>>
>> And, there's nothing to say the OP has disclosed EVERYTHING
>> that might be happening in his ISR (maintaining handshaking signals,
>> flow control, etc.) which could compound the references
>> (e.g., if you need to know that you have space for N characters
>> remaining so you can signal the remote device to stop sending,
>> then you're doing "pointer/offset arithmetic" and *acting* on the
>> result)
>
> Whatever handshakes he makes there is no problem knowing whether
> the fifo is full - just check if the position the write pointer
> will have after putting the next byte matches the read pointer
> at the moment. Like I said before, few things are simpler than
> that, can't imagine someone working as a programmer being
> stuck at *that*.
Yes, but if you want to implement flow control, you have to tell the other end of the line BEFORE you've filled your buffer. There may be a character being deserialized AS you are retrieving the "last" character, another one (or more) preloaded into the transmitter on the far device, etc. And, it will take some time for your notification to reach the far end and be recognized as a desire to suspend transmission.

If you wait until you have no more space available, you are almost certain to lose characters.
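[Editor's note: concretely, that argues for a high-water mark well below the buffer size. A sketch of the idea, building on the hypothetical fifo_buf/wr/rd FIFO above; uart_deassert_rts()/uart_assert_rts() are made-up HAL calls and the margins are illustrative:]

    /* Stop the sender while >= 8 free slots remain, to absorb bytes
     * already "in flight" (deserializer, far-end holding registers). */
    #define HIGH_WATER (FIFO_SIZE - 8u)
    #define LOW_WATER  (FIFO_SIZE / 2u)

    static unsigned fifo_count(void)
    {
        /* modular arithmetic gives the fill level even after wrap */
        return (wr + FIFO_SIZE - rd) % FIFO_SIZE;
    }

    /* Called from the receive ISR after each fifo_put(). */
    static void update_flow_control(void)
    {
        unsigned n = fifo_count();
        if (n >= HIGH_WATER)
            uart_deassert_rts();   /* ask the far end to pause   */
        else if (n <= LOW_WATER)
            uart_assert_rts();     /* hysteresis: resume sending */
    }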
>>> Although this thread is on how to wrestle a poor
>>> language to do what you want, sort of how to use a hammer on a screw
>>> instead of taking the screwdriver, there would be no need to
>>> mask interrupts with C either.
>>
>> The "problem" with the language is that it gives the compiler the freedom
>> to make EQUIVALENT changes to your code that you might not have foreseen
>> or that might not have been consistent with your "style" -- yet do not
>> alter the results.
>
> Don, let us not go into this. Just looking at the thread is enough to
> see it is about wrestling the language so it can be made some use of.
The language isn't the problem. Witness the *millions* (?) of programs written in it, over the past 5 decades.

The problem is that it never was an assembly language -- even though it was treated as such "in days gone by" (because the compilers were just "language translators" and didn't add any OTHER value to the "programming process").

It's only recently that compilers have become "independent agents", of a sort... adding their own "spin" on the developer's code. And, with more capable hardware (multiple cores/threads) being "dirt cheap", it's a lot easier for a developer to find himself in a situation that was previously pie-in-the-sky.
>> After all, a programming language -- ANY programming language -- is just
>> a vehicle for conveying our desires to the machine in a semi-unambiguous
>> manner. I'd much rather *SAY*, "What are the roots of ax^2 + bx + c?"
>> than have to implement an algorithmic solution, worry about cancellation,
>> required precision, etc. (and, in some languages, you can do just that!)
>
> Indeed you don't want to write how the equation is solved every time.
> This is why you can call it once you have it available. This is language
> independent.
For a simple quadratic, you can explore the coefficients to determine which algorithm is best suited to giving you *accurate* results. What if I present *any* expression? Can you have your solution available to handle any case? Did you even bother to develop such a solution if you were only encountering quadratics?
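[Editor's note: for readers who want the concrete version of that point, the usual remedy for the quadratic's catastrophic cancellation looks like this - a sketch only, with real roots and a != 0 assumed, and the function name invented:]

    #include <math.h>

    /* Naively, r = (-b + sqrt(b*b - 4*a*c)) / (2*a) subtracts two nearly
     * equal numbers when b*b >> 4*a*c, losing most significant digits.
     * Computing one root cancellation-free and deriving the other from
     * the product of roots (r1 * r2 == c/a) avoids that. */
    void quadratic_roots(double a, double b, double c,
                         double *r1, double *r2)
    {
        double disc = b * b - 4.0 * a * c;          /* assumed >= 0 */
        double q = -0.5 * (b + copysign(sqrt(disc), b));
        *r1 = q / a;    /* q adds same-signed terms: no cancellation */
        *r2 = c / q;
    }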
> Then solving expressions etc. is well within 1% of the effort in
> programming if the task at hand is going to take > 2 weeks; after that
> the programmer's time is wasted on wrestling the language like
> demonstrated by this thread. Sadly almost everybody has accepted
> C as a standard - which makes it a very popular poor language.
It makes it *popular*, but concluding that it is "poor" is an overreach.

There are (and have been) many "safer" languages. Many that are more descriptive (for certain classes of problem). But, C has survived to handle all-of-the-above... perhaps in a suboptimal way, but at least in a manner that can get to the desired solution.

Look at how few applications SNOBOL handles. Write an OS in COBOL? Ada?

A tool is only effective if it solves real problems. Under real cost and time constraints. There are lots of externalities that come into play in that analysis.

I've made some syntactic changes to my code that make it much easier to read -- yet they mean that I have to EXPLAIN how they work and why they are present, as any other developer would frown on encountering them. (But, it's my opinion that, once explained, that developer will see them as an efficient addition to the language, in line with other *existing* mechanisms that are already present there.)
> Similar to say Chinese, very popular, spoken by billions, yet
> where are its literary masterpieces. Being hieroglyph based there
> are none; you will have to look at alphabet based languages to
> find some.
One can say the same thing about Unangax̂ -- spoken by ~100! Popularity and literary masterpieces are completely different axes.

Hear much Latin or ancient Greek spoken, recently?
On 10/25/2021 0:08, Don Y wrote:
> On 10/24/2021 1:27 PM, Dimiter_Popoff wrote:
> ....
>>
>> Whatever handshakes he makes there is no problem knowing whether
>> the fifo is full - just check if the position the write pointer
>> will have after putting the next byte matches the read pointer
>> at the moment. Like I said before, few things are simpler than
>> that, can't imagine someone working as a programmer being
>> stuck at *that*.
>
> Yes, but if you want to implement flow control, you have to tell the
> other end of the line BEFORE you've filled your buffer. There may be
> a character being deserialized AS you are retrieving the "last"
> character, another one (or more) preloaded into the transmitter on
> the far device, etc. And, it will take some time for your
> notification to reach the far end and be recognized as a desire
> to suspend transmission.
>
> If you wait until you have no more space available, you are almost
> certain to lose characters.
Well of course, we have all done that sort of thing since the '80s; other people have done it before, I suppose. Implementing fifo thresholds is not (and has never been) rocket science. The point is there is no point in throwing huge efforts at a self-inflicted problem instead of just doing it the easy way, which is, well, common knowledge.
>>>> Although this thread is on how to wrestle a poor
>>>> language to do what you want, sort of how to use a hammer on a screw
>>>> instead of taking the screwdriver, there would be no need to
>>>> mask interrupts with C either.
>>>
>>> The "problem" with the language is that it gives the compiler the
>>> freedom to make EQUIVALENT changes to your code that you might not
>>> have foreseen or that might not have been consistent with your
>>> "style" -- yet do not alter the results.
>>
>> Don, let us not go into this. Just looking at the thread is enough to
>> see it is about wrestling the language so it can be made some use of.
>
> The language isn't the problem. Witness the *millions* (?) of programs
> written in it, over the past 5 decades.
This does not prove much; it has been the only language allowing "everybody" to do what they did. I am not denying this is the best language currently available to almost everybody. I just happened to have been daring enough to explore my own way/language, and have seen how much there is to be gained by not having to wrestle a language which is just a more complete phrase book than the rest of the phrase books (aka high level languages).
>> Indeed you don't want to write how the equation is solved every time.
>> This is why you can call it once you have it available. This is language
>> independent.
>
> For a simple quadratic, you can explore the coefficients to determine which
> algorithm is best suited to giving you *accurate* results.
>
> What if I present *any* expression? Can you have your solution available
> to handle any case? Did you even bother to develop such a solution if you
> were only encountering quadratics?
Any expression solver has its limitations, why go into that? Mine (in the dps environment) can do all arithmetic and logic for integers, the fp can do all arithmetic, knows pi, e, haven't needed to expand it for years. And again, solving expressions has never taken me any significant part of the effort on a project.
> I've made some syntactic changes to my code that make it much easier
> to read -- yet mean that I have to EXPLAIN how they work and why they
> are present as any other developer would frown on encountering them.
Oh I am well aware of the value of standardization and popularity, these are the strongest points of C.
> (But, it's my opinion that, once explained, that developer will see them
> as an efficient addition to the language in line with other *existing*
> mechanisms that are already present, there).
Of course, but you have to have them on board first...
>> Similar to say Chinese, very popular, spoken by billions, yet
>> where are its literary masterpieces. Being hieroglyph based there
>> are none; you will have to look at alphabet based languages to
>> find some.
>
> One can say the same thing about Unangax̂ -- spoken by ~100!
> Popularity and literary masterpieces are completely different
> axes.
>
> Hear much Latin or ancient Greek spoken, recently?
The Latin alphabet looks pretty popular nowadays :-). Everything evolves, including languages. And there are dead ends within them which just die out - e.g. Roman numerals. Can't see much future in any hieroglyph based language though; inventing a symbol for each word has been demonstrated to be a bad idea by history.
On 10/24/2021 2:50 PM, Dimiter_Popoff wrote:
> On 10/25/2021 0:08, Don Y wrote:
>> On 10/24/2021 1:27 PM, Dimiter_Popoff wrote:
>> ....
>>>
>>> Whatever handshakes he makes there is no problem knowing whether
>>> the fifo is full - just check if the position the write pointer
>>> will have after putting the next byte matches the read pointer
>>> at the moment. Like I said before, few things are simpler than
>>> that, can't imagine someone working as a programmer being
>>> stuck at *that*.
>>
>> Yes, but if you want to implement flow control, you have to tell the
>> other end of the line BEFORE you've filled your buffer. There may be
>> a character being deserialized AS you are retrieving the "last"
>> character, another one (or more) preloaded into the transmitter on
>> the far device, etc. And, it will take some time for your
>> notification to reach the far end and be recognized as a desire
>> to suspend transmission.
>>
>> If you wait until you have no more space available, you are almost
>> certain to lose characters.
>
> Well of course so, we have all done that sort of thing since the 80-s,
> other people have done it before I suppose. Implementing fifo thresholds
> is not (and has never been) rocket science.
> The point is there is no point in throwing huge efforts at a
> self-inflicted problem instead of just doing it the easy way which
> is well, common knowledge.
*My* point (to the OP) was that you need to understand what you will be doing before you can understand the "opportunities" the compiler will have to catch you off guard.
>>>>> Although this thread is on how to wrestle a poor
>>>>> language to do what you want, sort of how to use a hammer on a screw
>>>>> instead of taking the screwdriver, there would be no need to
>>>>> mask interrupts with C either.
>>>>
>>>> The "problem" with the language is that it gives the compiler the freedom
>>>> to make EQUIVALENT changes to your code that you might not have foreseen
>>>> or that might not have been consistent with your "style" -- yet do not
>>>> alter the results.
>>>
>>> Don, let us not go into this. Just looking at the thread is enough to
>>> see it is about wrestling the language so it can be made some use of.
>>
>> The language isn't the problem. Witness the *millions* (?) of programs
>> written in it, over the past 5 decades.
>
> This does not prove much, it has been the only language allowing
> "everybody" to do what they did.
ASM has always been available. Folks just found it too inefficient to solve "big" problems, in reasonable effort.
> I am not denying this is the best
> language currently available to almost everybody. I just happened to
> have been daring enough to explore my own way/language and have seen
> how much is there to be gained if not having to wrestle a language
> which is just a more complete phrase book than the rest of the
> phrase books (aka high level languages).
But you only have yourself as a client. Most of us have to write code (or modify already written code) that others will see/maintain. It does no good to have a "great tool" if no one else uses it!

I use (scant!) ASM, a modified ("proprietary") C dialect, SQL and a scripting language in my current design (not counting the tools that generate my documentation). This is a LOT to expect a developer to have a firm grasp of. But, inventing a new language that would address all of these needs would be even more so!

At least one can find books/documentation describing each of these individual languages *and* likely find folks with proficiency in each of them. So, I can spend my efforts describing "how things work" instead of the details of how to TELL them to work.
>> I've made some syntactic changes to my code that make it much easier
>> to read -- yet mean that I have to EXPLAIN how they work and why they
>> are present as any other developer would frown on encountering them.
>
> Oh I am well aware of the value of standardization and popularity,
> these are the strongest points of C.
>
>> (But, it's my opinion that, once explained, that developer will see them
>> as an efficient addition to the language in line with other *existing*
>> mechanisms that are already present, there).
>
> Of course, but you have to have them on board first...
Yes. They have to have incentive to want to use the codebase. They'd not be keen on making any special effort to learn how to modify a "program" that picks resistor values to form voltage dividers. And, even "well motivated", what you are asking them to embrace has to be acceptable to their sense of reason. E.g., expecting folks to adopt postfix notation just because you chose to use it is probably a nonstarter (i.e., "show me some OTHER reason that justifies its use!"). Or, the wonky operator set that APL uses...
>>> Similar to say Chinese, very popular, spoken by billions, yet
>>> where are its literary masterpieces. Being hieroglyph based there
>>> are none; you will have to look at alphabet based languages to
>>> find some.
>>
>> One can say the same thing about Unangax̂ -- spoken by ~100!
>> Popularity and literary masterpieces are completely different
>> axes.
>>
>> Hear much Latin or ancient Greek spoken, recently?
>
> The Latin alphabet looks pretty popular nowadays :-). Everything
> evolves, including languages. And there are dead ends within them
> which just die out - e.g. roman numbers. Can't see much future in
> any hieroglyph based language though, inventing a symbol for each
> word has been demonstrated to be a bad idea by history.
Witness the rise of arabic numerals and their efficacy towards advancing mathematics.
On 10/25/2021 1:47, Don Y wrote:
> ...
>
> ASM has always been available.
There is no such language as ASM, there is a wide variety of machines.
> Folks just found it too inefficient
> to solve "big" problems, in reasonable effort.
Especially with the advent of load/store machines (although C must have been helped a lot by the clunky x86 architecture for its popularity), programming in the native assembler for any RISC machine would be masochistic at best. Which is why I took the steps I took etc., no need to go into that.
>> I am not denying this is the best
>> language currently available to almost everybody. I just happened to
>> have been daring enough to explore my own way/language and have seen
>> how much is there to be gained if not having to wrestle a language
>> which is just a more complete phrase book than the rest of the
>> phrase books (aka high level languages).
>
> But you only have yourself as a client.
Yes, but this does not mean much. Looking at pieces I wrote 20 or 30 years ago - even 10 years ago sometimes - is like reading it for the first time for many parts (tens of megabytes of sources, http://tgi-sci.com/misc/scnt21.gif ).
> Most of us have to write code
> (or modify already written code) that others will see/maintain. It
> does no good to have a "great tool" if no one else uses it!
>
> I use (scant!) ASM, a modified ("proprietary") C dialect, SQL and a
> scripting language in my current design (not counting the tools that
> generate my documentation).
Here comes the advantage of an "alphabet" rather than "hieroglyph" based approach/language. A lot fewer lookup tables to memorize, you learn as you go, etc. I am quite sure someone like you would get used to it quite fast, much faster than to an unknown high level language. In fact it may take you very little time to see it is something you have more or less been familiar with forever. Grasping the big picture of the entire environment and becoming really good at writing within it would take longer, obviously.
> ....
>>>
>>> Hear much latin or ancient greek spoken, recently?
>>
>> The Latin alphabet looks pretty popular nowadays :-). Everything
>> evolves, including languages. And there are dead ends within them
>> which just die out - e.g. roman numbers. Can't see much future in
>> any hieroglyph based language though, inventing a symbol for each
>> word has been demonstrated to be a bad idea by history.
>
> Witness the rise of arabic numerals and their efficacy towards
> advancing mathematics.
Yes, another good example of how it is the foundation you step on that really matters. Step on the Roman numbers and good luck with your math...
On 10/24/2021 4:32 PM, Dimiter_Popoff wrote:
> On 10/25/2021 1:47, Don Y wrote:
>> ...
>>
>> ASM has always been available.
>
> There is no such language as ASM, there is a wide variety of machines.
Of course there's a language called ASM! It's just target specific! It is available for each different processor. And highly NONportable!
>> Folks just found it too inefficient
>> to solve "big" problems, in reasonable effort.
>
> Especially with the advent of load/store machines (although C must have
> been helped a lot by the clunky x86 architecture for its popularity),
> programming in the native assembler for any RISC machine would be
> masochistic at best. Which is why I took the steps I took etc., no
> need to go into that.
>
>>> I am not denying this is the best
>>> language currently available to almost everybody. I just happened to
>>> have been daring enough to explore my own way/language and have seen
>>> how much is there to be gained if not having to wrestle a language
>>> which is just a more complete phrase book than the rest of the
>>> phrase books (aka high level languages).
>>
>> But you only have yourself as a client.
>
> Yes, but this does not mean much. Looking at pieces I wrote 20 or
> 30 years ago - even 10 years ago sometimes - is like reading it
> for the first time for many parts (tens of megabytes of sources,
> http://tgi-sci.com/misc/scnt21.gif ).
Of course it means something! If someone else has to step into your role *tomorrow*, there'd be little/no progress on your codebase until they learned your toolchain/language.

An employer has to expect that any employee can "become unavailable" at any time. And, with that, the labors for which they'd previously paid should still retain their value. I've had clients outright ask me, "What happens if you get hit by a bus?"
>> Most of us have to write code
>> (or modify already written code) that others will see/maintain. It
>> does no good to have a "great tool" if no one else uses it!
>>
>> I use (scant!) ASM, a modified ("proprietary") C dialect, SQL and a
>> scripting language in my current design (not counting the tools that
>> generate my documentation).
>
> Here comes the advantage of an "alphabet" rather than "hieroglyph" based
> approach/language. A lot less of lookup tables to memorize, you learn
> while going etc. I am quite sure someone like you would get used to it
> quite fast, much much faster than to an unknown high level language.
> In fact it may take you very short to see it is something you have more
> or less been familiar with forever.
> Grasping the big picture of the entire environment and becoming
> really good at writing within it would take longer, obviously.
But that can be said of any HLL. That doesn't mean an employer wants to pay you to *learn* (some *previous* employer was expected to have done that!). They want to have to, at most, train you on the needs of their applications/markets.
On 24/10/2021 13:14, Dimiter_Popoff wrote:
> On 10/24/2021 13:39, Johann Klammer wrote:
>> Disable interrupts while accessing the fifo. you really have to.
>> alternatively you'll often get away not using a fifo at all,
>> unless you're blocking for a long while in some part of the code.
>
> Why would you do that. The fifo write pointer is only modified by
> the interrupt handler, the read pointer is only modified by the
> interrupted code. Has been done so for times immemorial.
>
> Although this thread is on how to wrestle a poor
> language to do what you want, sort of how to use a hammer on a screw
> instead of taking the screwdriver, there would be no need to
> mask interrupts with C either.
There's nothing wrong with the language here - C is perfectly capable of expressing what the OP needs. But getting the "volatile" usage optimal here - enough to cover what you need, but not accidentally reducing the efficiency of the code - requires a bit of thought. "volatile" is often misunderstood in C, and it's good that the OP is asking to be sure. C also has screwdrivers in its toolbox, they are just buried under all the hammers!
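[Editor's note: a sketch of what "enough volatile, but not too much" can look like in practice, reusing the hypothetical fifo_buf/wr/rd ring buffer from earlier in the thread. The shared indices stay volatile, but a burst-drain loop snapshots them into locals so the compiler can optimise the loop body, while each fresh call still re-reads the ISR's index:]

    /* Main-code drain loop; handle() is a made-up per-byte callback. */
    static void drain_fifo(void (*handle)(uint8_t))
    {
        uint8_t r = rd;      /* our own index: safe to cache in a local  */
        uint8_t w = wr;      /* ONE volatile read of the ISR-owned index */

        while (r != w) {
            handle(fifo_buf[r]);
            r = (uint8_t)((r + 1u) % FIFO_SIZE);
        }
        rd = r;              /* publish consumed slots in one volatile write */
    }

[Deferring the write-back of rd is conservative: during the loop, the ISR sees slightly less free space than really exists, which errs on the safe (full) side.]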
On 24/10/2021 23:08, Don Y wrote:

> The language isn't the problem. Witness the *millions* (?) of programs
> written in it, over the past 5 decades.
>
> The problem is that it never was an assembly language -- even though it
> was treated as such "in days gone by" (because the compiler's were
> just "language translators" and didn't add any OTHER value to the
> "programming process").
No - the problem is that some people /thought/ it was supposed to be a kind of assembly language. It's a people problem, not a language problem. C has all you need to handle code such as the OP's - all it takes is for people to understand that they need to use the right features of the language.
> It's only recently that compilers have become "independent agents",
> of a sort... adding their own "spin" on the developer's code.
Barring bugs, compilers do what they are told - in the language specified. If programmers don't properly understand the language they are using, or think it means more than it does, it's the programmers that are at fault - not the language or the compiler. If you go into a French bakery and ask for horse dung instead of the end of a baguette, that's /your/ fault - not the language's fault, and not the baker's fault.

Add to that, the idea that optimising compilers are new is equally silly.

The C language is defined in terms of an "abstract machine". The generated code has the same effect "as if" it executed everything you wrote - but the abstract machine and the real object code only synchronise on the observable behaviour. In practice, that means volatile accesses happen exactly as often, with exactly the values and exactly the order that you gave in the code. Non-volatile accesses can be re-ordered, re-arranged, combined, duplicated, or whatever. This has been the situation since C was standardised and since more advanced compilers arrived, perhaps 30 years ago.

C is what it is - a language designed long ago, but which turned out to be surprisingly effective and long-lived. It's not perfect, but it is pretty good and works well for many situations where you need low-level coding or near-optimal efficiency. It's not as safe or advanced as many new languages, and it is not a beginners' language - you have to know what you are doing in order to write C code correctly. You have to understand it and follow its rules, whether you like these rules or not.

Unfortunately, there are quite a few C programmers who /don't/ know these rules. And there is a small but vocal fraction who /do/ know the rules, but don't like them and feel the rules should therefore not apply - and blame compilers, standards committees, and anyone else when things inevitably go wrong. Some people are always a problem, regardless of the language!
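[Editor's note: the textbook demonstration of those "as if" rules, as a short sketch with invented names - without volatile, nothing observable forces the compiler to re-read a flag the loop never writes, so it may legally read it once and spin forever:]

    static volatile int rx_done;   /* set to 1 by an interrupt handler */

    void wait_for_rx(void)
    {
        /* With volatile: one real load per iteration, in program order.
         * Without it, a conforming compiler could compile this as
         *     if (!rx_done) for (;;) {}
         * since no C-visible code in the loop can change rx_done. */
        while (!rx_done) {
            /* spin */
        }
    }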
On 2021-10-25 0:08, Don Y wrote:

    [snip]


> There are (and have been) many "safer" languages. Many that are more
> descriptive (for certain classes of problem). But, C has survived to
> handle all-of-the-above... perhaps in a suboptimal way but at least
> a manner that can get to the desired solution.
>
> Look at how few applications SNOBOL handles. Write an OS in COBOL? Ada?
I don't know about COBOL, but typically the real-time kernels ("run-time systems") associated with Ada compilers for bare-board embedded systems are written in Ada, with a minor amount of assembly language for the most HW-related bits like HW context saving and restoring. I'm pretty sure that C-language OS kernels also use assembly for those things.