EmbeddedRelated.com
The 2026 Embedded Online Conference

MCU mimicking a SPI flash slave

Started by John Speth June 14, 2017
On 20/06/17 13:19, Dimiter_Popoff wrote:
> On 20.6.2017 г. 13:57, David Brown wrote:
>> On 20/06/17 11:36, dp wrote:
>>> On Tuesday, June 20, 2017 at 11:44:39 AM UTC+3, David Brown wrote:
>>>> On 20/06/17 09:25, Dimiter_Popoff wrote:
>>>>> On 20.6.2017 г. 07:12, rickman wrote:
>>>>>> ....
>>>>>>
>>>>>> So you think for a language to be modern it has to have hard coded
>>>>>> data sizes?
>>>>
>>>> If you are going to use it for low-level programming and embedded
>>>> development, then yes.
>>>
>>> No. If you don't know which datum of which size and type is you are
>>> not up to programming.
>>
>> The reason I know exactly which size a datum is, is because the type has
>> a fixed size!
>>
>> If I need to know that "x" has 32 bits, I make sure of that fact by
>> declaring "x" as a "uint32_t" or "int32_t" in C. I can do that,
>> precisely because C supports such hard coded data sizes.
>>
>> In a language like Forth, or pre-C99 C, you can't do that portably. An
>> "int" might be 16-bit, or maybe 32-bit, or maybe something weird - the
>> same applies to a Forth "cell". You need pre-processor directives,
>> conditional compilation, implementation-specific code, etc., in order to
>> know for sure what sizes you are using.
>
> So C has improved.
We are talking about a change in the language nearly 20 years ago... And even before that, it was normal to have a header with things like: typedef short int i16; typedef unsigned long int u32; etc. Pre C99, you had to make such headers yourself, adapt them to fit a given implementation, and there was no standardisation of the names. But it was a job you did once for each platform you used. Because C is a typed language, you can make the definitions of the types depend on the platform, but the code using the types is then platform independent.
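The kind of per-platform header described above can be sketched as follows. This is a minimal illustration, not any particular vendor's header; the i16/u32 names follow the post, and the limits.h test is one simple way to adapt the typedefs to a given implementation:

```c
/* Hypothetical pre-C99 "sized types" header, adapted per platform.
 * The limits.h check distinguishes 16-bit-int implementations from
 * wider ones; real project headers of that era did much the same. */
#include <limits.h>

#if UINT_MAX == 65535U          /* 16-bit int platform */
typedef int           i16;
typedef unsigned long u32;      /* long is guaranteed >= 32 bits */
#else                           /* 32-bit (or wider) int platform */
typedef short         i16;
typedef unsigned int  u32;
#endif
```

Code written against i16/u32 is then platform independent, as the post says; only this header changes per target.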
> But this is only about C overcoming one of its
> shortcomings by adding more complexity for the programmer to deal
> with, which is exactly my point.
/Where/ is this complexity you talk about? I cannot understand what you mean here, and why you think there is some extra effort. If I want a 32-bit variable, I make an int32_t or a uint32_t (signed or unsigned). I make the choice of the characteristics I need for the data, and write it clearly, simply, and quickly in the source code. There is no effort involved.
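As a deliberately trivial sketch of that choice (the function name is made up for illustration):

```c
/* The characteristics are chosen in the declaration itself:
 * exactly 32 bits, unsigned, on any conforming C99 implementation
 * that provides the type. */
#include <stdint.h>

static uint32_t wrap_increment(uint32_t x) {
    return x + 1u;   /* unsigned wrap-around is well defined */
}
```

wrap_increment(0xFFFFFFFFu) yields 0 on every platform, precisely because the width is pinned to 32 bits by the type.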
> You need to know what your compiler
> does with your data type only if you have to rely on it to deal with
> it instead of just dealing with it yourself when you know what it
> is anyway.
That makes no sense whatsoever. How do you "deal with it yourself"? You write source code, the compiler compiles it to machine code. You are relying on the compiler, just like when you write in assembly code you rely on the assembler. You can get a difference when you don't care about the details. If I write "x * 5", I (usually) don't care if the compiler implements that with a multiply instruction, or shift and add instructions, or a "load effective address" instruction with odd addressing modes, or if it has figured out that "x" is always 3 at this point in the code, and can use 15 directly. And when I /do/ care about the details, such as the size of a piece of data, /then/ I write explicitly what I want. That, if anything, is "dealing with it myself".
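To make the "x * 5" point concrete: both spellings below are the same computation, and a compiler is free to pick whichever instruction sequence it likes for either one (the function names are invented for this sketch):

```c
#include <stdint.h>

/* Two ways of writing the same value; the compiler may implement
 * either with a multiply, a shift-and-add, or an LEA-style
 * instruction - the observable result is identical. */
static uint32_t times5_mul(uint32_t x)   { return x * 5u; }
static uint32_t times5_shift(uint32_t x) { return (x << 2) + x; }
```

The two agree for every input, including on unsigned wrap-around, which is why the compiler is free to choose.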
>
>>
>>> Delegating to the tool to track that means
>>> only more work for the programmer, sometimes a lot more while tracking
>>> what the tool got wrong _this_ time.
>>
>> Are you seriously suggesting that sometimes compilers will get the sizes
>> wrong? That if I ask a C compiler for an "int64_t", sometimes it will
>> give me a different size?
>
> Are you seriously suggesting that you have not spent well over half
> of your programming time figuring out what the compiler expects from
> you.
The compiler expects me to write valid C code. Nothing more, nothing less. I have spent time learning how to write valid C code - but no, that has not taken half my programming career.
>
>> ....
>>>
>>> Controlled types, sizes etc. by the machine are meant for the user,
>>> not for the programmer. They are supposed to make these work, this *is*
>>> their job. Pretty much like a meal is supposed to be served nicely
>>> arranged on a plate for the consumer; however for the cook the
>>> ingredients are a lot more convenient in a raw form.
>>>
>>
>> Think of having the compiler check sizes and do "counting" as like the
>> thermostat and timer on the oven. The cook (or programmer) decides on
>> the temperature and timing he wants, but the oven handles the boring bit
>> of turning the elements on and off to get the right temperature, and
>> warns the cook when the timer is done.
>
> Then a month later the menu changes and you have to set the temperature
> to the one for the meal you did last month. Oops, what was it?
> Spend another week rediscovering that.
>
What on earth does that mean? When a cook sets the temperature on his oven thermostat, that does not somehow suck the information out of his brain and erase it from all his recipe books!
On 20/06/17 14:09, Dimiter_Popoff wrote:
> Like it or not thinking is about making analogies. I realize
> my point is probably doomed to never come across to the vast
> majority of programmers today, sort of like trying to explain
> colours to a blind person (absolutely no insult meant here, I
> know I am talking to intelligent people, just wrestling to make
> a point).
Yeah. Been there done that :(
> What I see from where I stand is that C as a language - not as
> a compiler, toolchain quality etc. - costs a lot more work to
> the programmer than needed. One of the reasons for that is the
> fact that the programmer has to delegate to the toolchain a lot
> of trivial "no brainer" work and _this_ costs a significant, at
> times a prohibitive, effort. How do I make my point to people
> who have never been really fluent in a lower level language
> which does not have the ugliness of a poor underlying model etc...
> A lost cause I guess.
Oh, there we are in violent agreement about the end result!

Overall C/C++ has (arguably) become too complex for simple things, and (unarguably IMNHSHO!) become too poorly specified for complex things. Trying to be "all things to all people" is rarely successful.

Nowadays C (and even more with C++) is part of the problem rather than part of the solution. The abstractions, which were a useful and valid advance in K&R days, have become *very* leaky over the years with the advance of technology.

Hell's teeth, it is only recently that C/C++ has recognised the need for a memory model to deal with all the subtle behaviour in SMP and NUMA machines. I reserve judgement on whether it will be a success; even starting from a clean slate Java had to revise its memory model!
On 20/06/17 15:09, Dimiter_Popoff wrote:

<snip>

> What I see from where I stand is that C as a language - not as
> a compiler, toolchain quality etc. - costs a lot more work to
> the programmer than needed.
You have given absolutely /no/ indication in the slightest as to why that might be the case, or why you might think so. What do you mean by this? Can you give examples?
> One of the reasons for that is the
> fact that the programmer has to delegate to the toolchain a lot
> of trivial "no brainer" work and _this_ costs a significant, at
> times a prohibitive, effort.
Again, this makes no sense and is contrary to common experience. What makes you think that it is more productive for the programmer to do trivial no-brainer work than to let the tools do it?
> How do I make my point to people
> who have never been really fluent in a lower level language
> which does not have the ugliness of a poor underlying model etc...
> A lost cause I guess.
I have programmed in assembly for a dozen or more architectures - when I started my job, it was the language of choice for small microcontrollers. In fact, when I learned assembly, I even had to hand-assemble the instructions to machine code on paper. So assume that I am fluent in low-level languages, and try to explain. I suspect the real problem is that you don't really know how C works or how it is used. (As with your posts, this is not meant as an insult in any way.)
On 20/06/17 14:12, Anton Ertl wrote:
> David Brown <david.brown@hesbynett.no> writes:
>> On 19/06/17 16:44, Anton Ertl wrote:
>>> A static checker might say that the DROP and the - access a value that
>>> is not present in the stack effect, so they would be a little more
>>> precise at pinpointing the problem, but stack depth issues are easy
>>> enough that nobody found it worthwhile to write such a checker yet.
>>
>> I would be much happier to see the language supporting such static
>> checks in some way (not as comments, but as part of the language), and
>> tools doing the checking. Spotting such errors during testing is better
>> than spotting them when running the program, but spotting them during
>> compilation is far better.
>
> Why would that be? I can see that it's far better for programmers who
> don't test their programs, but what is the advantage for programmers
> who test their programs?
Honestly? You can't see the advantage of spotting errors at as early a stage as possible? Why would someone bother writing test patterns to catch possible errors that the tools can see automatically? That is just a waste of everyone's time, and it's easy to forget some tests.

Errors of various sorts can happen when you write code. They can be everything from misunderstandings of the specifications, to small typos, to stylistic errors (which don't affect the running code, but can affect maintainability and lead to a higher risk of errors in the future), to unwarranted assumptions about how the code is used.

Producing a correct program involves a range of methods for avoiding errors, or detecting them as early as possible. Testing (of many different kinds) is /part/ of that - but it is most certainly not sufficient. It is /always/ cheaper and more productive to spot errors at an earlier stage than at a later stage - and detecting them at compilation time is earlier than detecting them at unit test time or system test time.
>
>> (Better still is spotting them while editing
>> - IDEs for C usually do a fair amount of checking while you write the code.)
>
> And I would especially hate it if an IDE is distracting me by nagging
> me about minor details while I am focusing on something else.
Incorrect code is not a minor detail.
>
>>> No! I have had lots of portability problems for C code when porting
>>> between 32-bit and 64-bit systems, thanks to the integer type zoo of
>>> C. In Forth I have had very few such problems, thanks to the fact
>>> that we only have cells and occasionally double-cells (and when you
>>> get a double-cell program right on 32-bit systems, it also works on
>>> 64-bit systems). If you want a FLOOR5 variant that works for integers
>>> that don't fit in a cell, you write DFLOOR5. And if it does not fit
>>> in double cells (but would fit in 64 bits), you probably have the
>>> wrong machine for what you are trying to do. C did not acquire 64-bit
>>> integer types until 32-bit machines were mainstream.
>>>
>>
>> And there you have illustrated my point, to some extent - C has
>> progressed as a language, to include new features for more modern
>> systems, such as support for 64-bit types. Now I can use a C compiler
>> for an 8-bit microcontroller and have 64-bit datatypes. (OK, not all
>> implementations have such support - but that is a quality of
>> implementation issue, not a language failure.)
>
> If the language does not require the 64-bit types, you can hardly
> claim them as a language feature.
C /does/ require them (in C99), and they /are/ a language feature. (Technically, C requires an integer type that is at least 64 bits, but for most practical purposes, real implementations have exactly 64-bit types.) There are C compilers that don't support all of C99. And I have used 64-bit integers on an 8-bit microcontroller. It is a rare requirement, certainly, but not inconceivable.
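A small sketch of the point (C99 requires long long to be at least 64 bits; the exact-width int64_t is technically optional, but present on essentially every real implementation):

```c
#include <stdint.h>

/* Arithmetic that cannot fit in 32 bits, expressed portably - this
 * compiles the same way whether the target CPU is 8-bit, 32-bit or
 * 64-bit; the compiler synthesises the wide operations as needed. */
static int64_t square(int64_t x) {
    return x * x;
}
```

square(100000) is 10000000000, well beyond what any 32-bit type can hold, yet the source code is identical on every target.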
>
> Anyway, if 64-bit integers were needed on 16-bit-cell systems, we
> would add them to Forth. But in discussions about this subject, the
> consensus emerged that we do not need them (at least not for
> computations).
>
> By contrast, Gforth and PFE have provided 128-bit integers on 64-bit
> systems since 1995, something that C compilers have not supported for
> quite a while after that. And once GCC started supporting it, it was
> quite buggy; I guess the static-checking-encouraged lack of testing
> was at work here.
Static checking is an addition to testing, not an alternative.
>
>> And it is perfectly possible to write C code that is portable across
>> 32-bit and 64-bit systems
>
> There is a difference between "it is possible" and "it happens". My
> experience is that, in C, if you have tested a program only on 32-bit
> systems, it will likely not work on 64-bit systems; in Forth, it
> likely will.
I don't see a way to write portable code that works with known sizes of data in Forth. All I have seen so far is that you can use single cells for 16-bit data, and double cells for 32-bit data. This means if you want to use 32-bit values, your choice is between broken code on 16-bit systems or inefficient code on 32-bit systems. (And you can only tell if it is broken on 16-bit systems if you have remembered to include a test case with larger data values.)

I work on embedded systems. I need to be able to access memory with /specific/ sizes. I need to be able to make structures with /specific/ sizes.

Can you show me how this is possible in Forth, in a clear, simple and portable manner?
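For illustration, the C side of that requirement might look like the following. The register layout is made up for the sketch; _Static_assert is C11, though the same compile-time check can be faked with a negative-size array in C99:

```c
#include <stdint.h>
#include <stddef.h>

/* A hypothetical peripheral's register block, with every field a
 * /specific/ size, and the overall layout verified at compile time. */
typedef struct {
    uint8_t  status;     /* offset 0, 1 byte  */
    uint8_t  control;    /* offset 1, 1 byte  */
    uint16_t divisor;    /* offset 2, 2 bytes */
    uint32_t baud_rate;  /* offset 4, 4 bytes */
} uart_regs;

_Static_assert(sizeof(uart_regs) == 8, "unexpected padding");
_Static_assert(offsetof(uart_regs, baud_rate) == 4, "unexpected layout");
```

If a port ever introduced padding or changed a field size, the build would fail immediately rather than silently miscompiling the hardware access.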
>
>> On the other hand, you don't seem to be able to write a FLOOR5
>> definition that will handle 32-bit values efficiently on both 16-bit
>> cell systems and 32-bit cell systems.
>
> Tough luck. Why would I need a double FLOOR5 on a 16-bit platform?
>
That is not for you to worry about. Think of me as a customer asking for a piece of code written in Forth. I want a FLOOR5 function that handles 32-bit values, works correctly on 16-bit and 32-bit cell systems, and is efficient on both sizes of system. Can it be done?
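For reference, the function being requested - FLOOR5, which in this thread computes max(n - 1, 5) - can be pinned down in a short C sketch:

```c
#include <stdint.h>

/* FLOOR5 over full 32-bit values: subtract one, with a floor of 5.
 * This matches the "1- 5 max" Forth definition used in the thread. */
static int32_t floor5(int32_t v) {
    return (v < 6) ? 5 : (v - 1);
}
```

The interesting part of the challenge is not the arithmetic, but keeping exactly this behaviour and efficiency when the cell size changes underneath it.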
On 20/06/17 15:24, Tom Gardner wrote:
> Overall C/C++ has (arguably) become too complex for simple
> things, and (unarguably IMNHSHO!) become too poorly
> specified for complex things.
> Nowadays C (and even more with C++) is part of the problem
> rather than part of the solution. The abstractions, which
> were a useful valid advance in K&R days, have become *very*
> leaky over the years with the advance of technology.
Examples I've come across include such gems as...

"However, many C compilers use non-standard expression grammar where ?: is designated higher precedence than =, which parses that expression as e = ( ((a < d) ? (a++) : a) = d ), which then fails to compile due to semantic constraints: ?: is never lvalue and = requires a modifiable lvalue on the left. Note that this is different in C++, where the conditional operator has the same precedence as assignment."
http://en.cppreference.com/w/c/language/operator_precedence

"i = ++i + 1; // undefined behavior[in C] (well-defined in C++11)"
http://en.cppreference.com/w/cpp/language/eval_order

The ability to break a compiler's legitimate optimisations by "casting away constness and volatility" (IIRC that took several years of committee deliberation as to whether it was required or forbidden behaviour!)

And of course, the amusing C++ FQA.

All of which makes the simplicity of Forth seem appealing :)
On 6/20/2017 2:25 AM, Albert van der Horst wrote:
> In article <oi9a7d$ibe$1@dont-email.me>,
> Cecil Bayona <cbayona@cbayona.com> wrote:
>> On 6/19/2017 2:39 PM, Albert van der Horst wrote:
> <SNIP>
>>> Interestingly, Java is supposed to be safe. I've seen dozens of
>>> discussions of Euler problems of Java problems who had problems
>>> with overflow and had wasted time debugging that.
>>> (Once you've solved one you're entitled to write about how you did it).
>>> Of course sometimes when you scale up a problem you get wrong
>>> results in Forth caused by overflow too. That was never a time waster,
>>> because that is the first thing to look at in such a case,
>>> and it is easy to detect and correct in Forth.
>>
>> I'm impressed with Java due to the constant stream of updates because of
>> security issues.
>
> A constant stream of security issues looks like a good reason to
> stay away from a language.
>
>> Forth of course has its problems too, overflow is highly possible due to
>> no error checking so I'm in favor of 256 byte integers less overflow and
>> rounding issues there.
>
> I've made a strong case that overflow is a problem in Java and not
> in Forth.
>
> 256 byte integers would get you nowhere in projecteuler
> where 2 minute computing time is the norm, sometimes hard to
> stay under.
>
>> --
>> Cecil - k5nwa
>
> Groetjes Albert
>
I was being sarcastic in my whole post. -- Cecil - k5nwa
On 20/06/2017 15:48, David Brown wrote:
> I work on embedded systems. I need to be able to access memory with
> /specific/ sizes. I need to be able to make structures with /specific/
> sizes.
>
> Can you show me how this is possible in Forth, in a clear, simple and
> portable manner?
>
>>> On the other hand, you don't seem to be able to write a FLOOR5
>>> definition that will handle 32-bit values efficiently on both 16-bit
>>> cell systems and 32-bit cell systems.
>>
>> Tough luck. Why would I need a double FLOOR5 on a 16-bit platform?
>>
> That is not for you to worry about. Think of me as a customer asking
> for a piece of code written in Forth. I want a FLOOR5 function that
> handles 32-bit values, works correctly on 16-bit and 32-bit cell
> systems, and is efficient on both sizes of system. Can it be done?
Yes, but you may not like it - use conditional compilation

0 invert 65535 u> [if]
: floor5 ( n1 -- n2 ) 1- 5 max ;      \ 32 bit cells
[else]
: floor5 ( d1 -- d2 ) 1. d- 5. dmax ; \ 16 bit cells
[then]

-- Gerry
On 20/06/17 18:46, Tom Gardner wrote:
> On 20/06/17 15:24, Tom Gardner wrote:
>> Overall C/C++ has (arguably) become too complex for simple
>> things, and (unarguably IMNHSHO!) become too poorly
>> specified for complex things.
>> Nowadays C (and even more with C++) is part of the problem
>> rather than part of the solution. The abstractions, which
>> were a useful valid advance in K&R days, have become *very*
>> leaky over the years with the advance of technology.
>
> Examples I've come across include such gems as...
>
> "However, many C compilers use non-standard expression
> grammar where ?: is designated higher precedence than =,
> which parses that expression as
> e = ( ((a < d) ? (a++) : a) = d ), which then fails to
> compile due to semantic constraints: ?: is never lvalue
> and = requires a modifiable lvalue on the left. Note
> that this is different in C++, where the conditional
> operator has the same precedence as assignment."
> http://en.cppreference.com/w/c/language/operator_precedence
>
> "i = ++i + 1; // undefined behavior[in C] (well-defined in C++11)"
> http://en.cppreference.com/w/cpp/language/eval_order
>
Neither C nor C++ is a problem here. People writing absurd obfuscated nonsense in their code may be a problem, but that applies in any language.
> The ability to break a compiler's legitimate optimisations
> by "casting away constness and volatility" (IIRC that took
> several years of committee deliberation as to whether
> it was required or forbidden behaviour!)
"casting away constness and volatility" means writing code that explicitly tells the compiler "I know better than you do here, and I know it is safe to break rules about the code". Either that is true, and it lets you write the code you want, or it is wrong and you've made a mistake - as you can do in all languages.
>
> And of course, the amusing C++ FQA.
Have you read it? It is mostly misunderstandings, repetitions, outdated information, or completely unrealistic code. There are a few good points in it, but you have to work hard to find them.
>
> All of which makes the simplicity of Forth seem appealing :)
C is mostly simple and clear (if well written). C++ is a much bigger and more complex language - it has greater scope for writing good code, but also greater scope for making a mess. Simplicity of a language is not necessarily a good thing any more than complexity is - you don't get much simpler than a Turing machine, but I would not want to use it for application programming!
On 20/06/17 22:16, Gerry Jackson wrote:
> On 20/06/2017 15:48, David Brown wrote:
>> I work on embedded systems. I need to be able to access memory with
>> /specific/ sizes. I need to be able to make structures with /specific/
>> sizes.
>>
>> Can you show me how this is possible in Forth, in a clear, simple and
>> portable manner?
>>
>>>> On the other hand, you don't seem to be able to write a FLOOR5
>>>> definition that will handle 32-bit values efficiently on both 16-bit
>>>> cell systems and 32-bit cell systems.
>>>
>>> Tough luck. Why would I need a double FLOOR5 on a 16-bit platform?
>>>
>> That is not for you to worry about. Think of me as a customer asking
>> for a piece of code written in Forth. I want a FLOOR5 function that
>> handles 32-bit values, works correctly on 16-bit and 32-bit cell
>> systems, and is efficient on both sizes of system. Can it be done?
>
> Yes, but you may not like it - use conditional compilation
>
> 0 invert 65535 u> [if]
> : floor5 ( n1 -- n2 ) 1- 5 max ;      \ 32 bit cells
> [else]
> : floor5 ( d1 -- d2 ) 1. d- 5. dmax ; \ 16 bit cells
> [then]
>
Conditional compilation is fine as a solution. But supposing you wanted a number of functions that were all 32-bit (let's say, floor6, floor7, and floor8 due to a lack of imagination). Is there any way to have a single conditional bit, and then use the features in other words? (Like defining the type "int32_t" once in C, and using it thereafter.)

My stab at a solution would be:

0 invert 65535 u> [if]            \ 32 bit cells
: -32   ( n1 n2 -- n3 ) - ;
: max32 ( n1 n2 -- n3 ) max ;
: to32  ( n1 -- n1 ) ;
[else]                            \ 16 bit cells
: -32   ( d1 d2 -- d3 ) d- ;
: max32 ( d1 d2 -- d3 ) dmax ;
: to32  ( n1 -- d1 ) s>d ;
[then]

: floor5 ( 32x1 -- 32x2 ) 1 to32 -32 5 to32 max32 ;
: floor6 ( 32x1 -- 32x2 ) 1 to32 -32 6 to32 max32 ;

etc.

The equivalent C (without the C99 sized integers) is:

#include <limits.h>

#if UINT_MAX == 65535
typedef long int i32;
#else
typedef int i32;
#endif

i32 floor5(i32 v) { return (v < 6) ? 5 : (v - 1); }
i32 floor6(i32 v) { return (v < 7) ? 6 : (v - 1); }

(That is, like your Forth, assuming that you either have 16-bit cells / ints and 32-bit double cells / long ints, or 32-bit cells / ints.)
On 20/06/17 21:28, David Brown wrote:
> On 20/06/17 18:46, Tom Gardner wrote:
>> On 20/06/17 15:24, Tom Gardner wrote:
>>> Overall C/C++ has (arguably) become too complex for simple
>>> things, and (unarguably IMNHSHO!) become too poorly
>>> specified for complex things.
>>> Nowadays C (and even more with C++) is part of the problem
>>> rather than part of the solution. The abstractions, which
>>> were a useful valid advance in K&R days, have become *very*
>>> leaky over the years with the advance of technology.
>>
>> Examples I've come across include such gems as...
>>
>> "However, many C compilers use non-standard expression
>> grammar where ?: is designated higher precedence than =,
>> which parses that expression as
>> e = ( ((a < d) ? (a++) : a) = d ), which then fails to
>> compile due to semantic constraints: ?: is never lvalue
>> and = requires a modifiable lvalue on the left. Note
>> that this is different in C++, where the conditional
>> operator has the same precedence as assignment."
>> http://en.cppreference.com/w/c/language/operator_precedence
>>
>> "i = ++i + 1; // undefined behavior[in C] (well-defined in C++11)"
>> http://en.cppreference.com/w/cpp/language/eval_order
>>
>
> Neither C nor C++ is a problem here. People writing absurd obfuscated nonsense
> in their code may be a problem, but that applies in any language.
Agreed, but the differences between the two languages are a big hint that there are surprising and unnecessary dragons lurking to catch people who haven't spent several decades following the differences and /newly introduced/ pitfalls.

Do you have any comment about the previous point about /some/ compilers apparently /choosing/ non-standard expression grammars? That seems remarkable to me.
>> The ability to break a compiler's legitimate optimisations
>> by "casting away constness and volatility" (IIRC that took
>> several years of committee deliberation as to whether
>> it was required or forbidden behaviour!)
>
> "casting away constness and volatility" means writing code that explicitly tells
> the compiler "I know better than you do here, and I know it is safe to break
> rules about the code". Either that is true, and it lets you write the code you
> want, or it is wrong and you've made a mistake - as you can do in all languages.
That is problematic when a library is compiled and optimised assuming that the const statements are correct, and later on someone else in a different company uses that library in a way which violates those assumptions. In those circumstances the user probably doesn't know better.
>> And of course, the amusing C++ FQA.
>
> Have you read it? It is mostly misunderstandings, repetitions, outdated
> information, or completely unrealistic code. There are a few good points in it,
> but you have to work hard to find them.
Indeed. But not all of the points can be "wished away"; many a truth is spoken in jest.
>> All of which makes the simplicity of Forth seem appealing :)
>
> C is mostly simple and clear (if well written). C++ is a much bigger and more
> complex language - it has greater scope for writing good code, but also greater
> scope for making a mess. Simplicity of a language is not necessarily a good
> thing any more than complexity is - you don't get much simpler than a Turing
> machine, but I would not want to use it for application programming!
I completely agree :)

The major problem with C/C++ is that it can't make up its mind whether it wants to be simple, low-level and near to the silicon, or an expressive high-level general-purpose applications language. Either would be valid, but in trying to be both it misses both targets.

Fortunately the marketplace has decided that in most cases C/C++ isn't "the best" general-purpose application language; Java, Python and similar are the future there.