EmbeddedRelated.com

MCU mimicking a SPI flash slave

Started by John Speth June 14, 2017
On 20/06/17 21:28, David Brown wrote:
> On 20/06/17 18:46, Tom Gardner wrote:
>> On 20/06/17 15:24, Tom Gardner wrote:
>>> Overall C/C++ has (arguably) become too complex for simple
>>> things, and (unarguably IMNHSHO!) become too poorly
>>> specified for complex things.
>>> Nowadays C (and even more with C++) is part of the problem
>>> rather than part of the solution. The abstractions, which
>>> were a useful valid advance in K&R days, have become *very*
>>> leaky over the years with the advance of technology.
>>
>> Examples I've come across include such gems as...
>>
>> "However, many C compilers use non-standard expression
>> grammar where ?: is designated higher precedence than =,
>> which parses that expression as
>> e = ( ((a < d) ? (a++) : a) = d ), which then fails to
>> compile due to semantic constraints: ?: is never lvalue
>> and = requires a modifiable lvalue on the left. Note
>> that this is different in C++, where the conditional
>> operator has the same precedence as assignment."
>> http://en.cppreference.com/w/c/language/operator_precedence
>>
>> "i = ++i + 1; // undefined behavior [in C] (well-defined in C++11)"
>> http://en.cppreference.com/w/cpp/language/eval_order
>>
>
> Neither C nor C++ is a problem here. People writing absurd obfuscated
> nonsense in their code may be a problem, but that applies in any
> language.
Agreed, but the differences between the two languages are a big hint that there are surprising and unnecessary dragons lurking to catch people who haven't spent several decades following the differences and /newly introduced/ pitfalls.

Do you have any comment about the previous point about /some/ compilers apparently /choosing/ non-standard expression grammars? That seems remarkable to me.
>> The ability to break a compiler's legitimate optimisations >> by "casting away constness and volatility" (IIRC that took >> several years of committee deliberation as to whether >> it was required or forbidden behaviour!) > > "casting away constness and volatility" means writing code that explicitly tells > the compiler "I know better than you do here, and I know it is safe to break > rules about the code". Either that is true, and it lets you write the code you > want, or it is wrong and you've made a mistake - as you can do in all languages.
That is problematic when a library is compiled and optimised assuming that the const statements are correct, and later on someone else in a different company uses that library in a way which violates those assumptions. In those circumstances the user probably doesn't know better.
>> And of course, the amusing C++ FQA. > > Have you read it? It is mostly misunderstandings, repetitions, outdated > information, or completely unrealistic code. There are a few good points in it, > but you have to work hard to find them.
Indeed. But not all of the points can be "wished away"; many a truth is spoken in jest.
>> All of which makes the simplicity of Forth seem appealing :) > > C is mostly simple and clear (if well written). C++ is a much bigger and more > complex language - it has greater scope for writing good code, but also greater > scope for making a mess. Simplicity of a language is not necessarily a good > thing any more than complexity is - you don't get much simpler than a Turing > machine, but I would not want to use it for application programming!
I completely agree :) The major problem with C/C++ is that it can't make up its mind whether it wants to be simple low-level and near to the silicon, or an expressive high-level general purpose applications language. Either would be valid, but in trying to be both it misses both targets. Fortunately the marketplace has decided that in most cases C/C++ isn't "the best" general purpose application language; Java, Python and similar are the future there.
David Brown wrote on 6/20/2017 3:49 AM:
> On 20/06/17 06:16, rickman wrote: >> David Brown wrote on 6/19/2017 9:34 AM: >>> On 19/06/17 15:23, rickman wrote: >>>> David Brown wrote on 6/19/2017 8:47 AM: >>>>> On 19/06/17 14:36, rickman wrote: >>>>>> David Brown wrote on 6/19/2017 3:02 AM: > <snip> >>>>> What you get with the PSoC is a chip that can give you a couple of >>>>> specialised custom-tuned peripherals if that is what your application >>>>> needs, or 2 or 3 standard peripherals (timers, uarts, SPI, etc.) for >>>>> the >>>>> silicon, power and dollar cost of 20 standard peripherals on a "normal" >>>>> microcontroller. A fixed UART or 16-bit timer is /much/ more efficient >>>>> than one made from flexible digital cells and all the programmability >>>>> needed to make them into the same type of peripheral. >>>>> >>>>> When you need a custom peripheral of some sort, then the flexibility of >>>>> a PSoC is (presumably) great. But only then. >>>> >>>> So programmability in the PSOC is a niche feature while programmability >>>> in the XMega E is somehow a big feature? >>> >>> No, the small programmability of the XMega E is a nice addition to all >>> the other peripherals. Without its timers, communications, ADCs, DMA, >>> etc., it would be a very poor microcontroller. >>> >>> The small programmability of the PSoC is niche because it has its >>> programmable blocks /instead of/ ordinary microcontroller peripherals. >>> If they were on top of a base of solid standard peripherals, the >>> programmable blocks would be much more interesting. >> >> I don't follow. What can the XMega E do that the PSOC devices can't in >> terms of peripherals? >> > > It is a good while since I have looked at PSoC devices, and my > comparison was with the older 8-bit and 16-bit core devices (not > entirely unreasonable, since the XMega is an 8-bit device). The key > difference is that the PSoC can have a couple of UARTs /or/ a few PWM > timers /or/ a couple of SPI interfaces /or/ other digital interfaces > with customisation. 
The XMegaE can have a couple of UARTs /and/ some > PWM timers /and/ a couple of SPI interfaces /and/ a bit of customer > hardware. Now do you see the point? > > Now that I have looked at the PSoC website, I see that for their ARM > Cortex devices, Cypress have figured this one out and have dedicated > standard peripherals in additional to the programmable blocks - because, > as I have said all along, these are /far/ more efficient.
You are missing a lot. The original PSOC devices had a simple MCU which was not any standard device. It had truly programmable digital blocks and programmable analog blocks, so they were *much* more functional than other MCUs with fixed peripherals. Even when fixed peripherals would have met your needs, the PSOC was still better in situations where you needed one set of peripherals in one operating mode and a different set in another mode - I think an example they promoted was making measurements in one mode and reporting the results in another.

The newer devices offer both 8-bit 8051 CPUs and ARM CM0, CM3 or CM4 devices. I have not looked hard at them lately, but the 8051-based PSOC3 has up to 24 digital blocks: 16 to 24 universal digital blocks (UDBs), programmable to create any number of functions:

• 8-, 16-, 24-, and 32-bit timers, counters, and PWMs
• I2C, UART, SPI, I2S, LIN 2.0 interfaces
• Cyclic redundancy check (CRC)
• Pseudo random sequence (PRS) generators
• Quadrature decoders
• Gate-level logic functions

That totally blows away the totally wimpy XMega E programmability.

The Cypress web site has always been a PITA for finding the info you want, but in searching this I see they have come out with a very wide range of new devices, including other custom CPUs (a 128 MHz RISC among them) as well as a line of ARM CR4 and 240 MHz CR5 devices. These seem to be more conventional devices with no mention of the programmable hardware, analog or digital. But then that is not surprising. The programmable hardware allows a very cost-competitive product. As mentioned in the Wikipedia article, they use PSOC in toothbrushes and Adidas sneakers. They have to be cheap to be used in sneakers. The CR5 devices are much higher-cost parts.
>>>> I think you may have missed some of the PSOC devices, like 90%. One >>>> subfamily of about four devices have programmable peripherals. The >>>> others have programmable logic and analog blocks. Much more powerful. >>>> >>>> >>>>>>> Again, I don't know the numbers here. XMOS has been running for >>>>>>> quite a >>>>>>> few years, with regular new products and new versions of their >>>>>>> tools, so >>>>>>> they seem to be doing okay. >>>>>> >>>>>> The issue I have isn't that they aren't stable, but that they don't >>>>>> seem >>>>>> to be able to produce a device price competitive with the lower end >>>>>> device. For me that make FPGAs a more viable solution economically for >>>>>> the large majority of designs. Combine that with the large learning >>>>>> curve and we end up with many users never taking the time to become >>>>>> proficient with them. If they get priced down to the $1 range, they >>>>>> will get a *lot* more users and sales. >>>>>> >>>>> >>>>> I agree with you - and as I say, I don't really know why they can't >>>>> make >>>>> (or sell) the chips for a lower price. >>>>> >>>>>> Likewise, if they did a shrink to make the GA144 more cost >>>>>> competitive a >>>>>> lot more users would be interested in learning how to program the >>>>>> device. But it would still be a very ugly duckling requiring a whole >>>>>> new approach to complex systems. The inability to draw on the huge >>>>>> base >>>>>> of established software make the GA144 a non-starter for many apps. >>>>>> >>>>> >>>>> Nah, the GA144 would not be popular even if they /paid/ people to use >>>>> them. Less unpopular, perhaps, but not popular. >>>> >>>> That much processing power in a very low cost device would become useful >>>> in many apps. It is odd and difficult to learn, but it is not without >>>> functionality and application. >>>> >>> >>> Well, I disagree. I think the individual cpus are too weak to be >>> practical. 
It does not really matter if they can do simple operations >>> at 700 MHz if they can't do more than a few dozen lines of code. There >>> are not nearly enough processors on the chip, a far, far too little >>> communication channels between nodes, to be useful despite the small >>> memory size. >> >> Is an Ethernet interface an adequate indication of functionality? I >> think that is more than "simple" operations, no? >> > > OK, I admit to being impressed by that possibility. It is 10 Mb, needs > external ram, and has little additional possibilities beyond simple UDP > telegrams, but I am still impressed. > > (For comparison, the XMOS can do a software 100 Mb NIC in about half of > a cpu, letting you run lwip and network software in the other half. But > if you really want Ethernet on an XMOS device, you are better off with > the chips that have a dedicated hardware Ethernet interface.)
I assume it can do a NIC in software because of the hardware assist for the I/O? Still, that's pretty good. -- Rick C
On 20/06/2017 22:15, David Brown wrote:
> On 20/06/17 22:16, Gerry Jackson wrote:
>> On 20/06/2017 15:48, David Brown wrote:
>>> I work on embedded systems. I need to be able to access memory with
>>> /specific/ sizes. I need to be able to make structures with /specific/
>>> sizes.
>>>
>>> Can you show me how this is possible in Forth, in a clear, simple and
>>> portable manner?
>>>
>>>>> On the other hand, you don't seem to be able to write a FLOOR5
>>>>> definition that will handle 32-bit values efficiently on both 16-bit
>>>>> cell systems and 32-bit cell systems.
>>>> Tough luck. Why would I need a double FLOOR5 on a 16-bit platform?
>>>>
>>> That is not for you to worry about. Think of me as a customer asking
>>> for a piece of code written in Forth. I want a FLOOR5 function that
>>> handles 32-bit values, works correctly on 16-bit and 32-bit cell
>>> systems, and is efficient on both sizes of system. Can it be done?
>>>
>>
>> Yes, but you may not like it - use conditional compilation
>>
>> 0 invert 65535 u> [if]
>> : floor5 ( n1 -- n2 ) 1- 5 max ; \ 32 bit cells
>> [else]
>> : floor5 ( d1 -- d2 ) 1. d- 5. dmax ; \ 16 bit cells
>> [then]
>>
>
> Conditional compilation is fine as a solution. But supposing you wanted
> a number of functions that were all 32-bit (let's say, floor6, floor7,
> and floor8 due to a lack of imagination). Is there any way to have a
> single conditional bit, and then use the features in other words? (Like
> defining the type "int32_t" once in C, and using it thereafter.) My
> stab at a solution would be:
>
> 0 invert 65535 u> [if] \ 32 bit cells
> : -32 ( n1, n2 -- n3 ) - ;
> : max32 ( n1, n2 -- n3) max ;
> : to32 ( n1 -- n1 ) ;
> [else] \ 16 bit cells
> : -32 ( d1, d2 -- d3 ) d- ;
> : max32 ( d1, d2 -- d3) dmax ;
> : to32 ( n1 -- d1 ) S>D ;
> [then]
>
> : floor5 ( 32x1 -- 32x2 ) 1 to32 -32 5 to32 max32 ;
> : floor6 ( 32x1 -- 32x2 ) 1 to32 -32 6 to32 max32 ;
> etc.
>
Yes that's a way to achieve it. However, for performance, I would use POSTPONE and IMMEDIATE to compile the small functions inline, e.g.

0 invert 65535 u> [if] \ 32 bit cells
: -32 ( n1, n2 -- n3 ) postpone - ; immediate
: max32 ( n1, n2 -- n3) postpone max ; immediate
: to32 ( n1 -- n1 ) ; immediate
[else] \ 16 bit cells
: -32 ( d1, d2 -- d3 ) postpone d- ; immediate
: max32 ( d1, d2 -- d3) postpone dmax ; immediate
: to32 ( n1 -- d1 ) postpone S>D ; immediate
[then]

Then, for example, TO32 for 32 bit cells compiles nothing in FLOOR5 etc.

But, as an aside, a warning: -32 is treated by Forth as an integer, so after the above definitions you couldn't ever use -32 as a literal in another definition as it would compile a -. Better to call it, say, SUB32.

With this technique you could also do:

0 invert 65535 u> constant 32bits
: -32 32bits if postpone - else postpone d- then ; immediate

and so on. This would result in the same compiled code for FLOOR5 etc with less source code noise, at the cost of more compiled code of course, as -32 etc are bigger. But this wouldn't matter if you were cross compiling on a host for a target system.

Another alternative is to factor out the conditional part:

0 invert 65535 u> constant 32bits
: postpone-it ( xt1 xt2 -- ) \ xt1 is for 32 bit cells, xt2 for 16 bits
  32bits if drop else nip then compile, ;
: -32 ['] - ['] d- postpone-it ; immediate
: max32 ['] max ['] dmax postpone-it ; immediate

etc, but whether that is worthwhile depends on how many of these definitions there are.

-- Gerry
On 21/06/17 04:44, rickman wrote:
> David Brown wrote on 6/20/2017 3:49 AM: >> On 20/06/17 06:16, rickman wrote: >>> David Brown wrote on 6/19/2017 9:34 AM: >>>> On 19/06/17 15:23, rickman wrote: >>>>> David Brown wrote on 6/19/2017 8:47 AM: >>>>>> On 19/06/17 14:36, rickman wrote: >>>>>>> David Brown wrote on 6/19/2017 3:02 AM: >> <snip> >>>>>> What you get with the PSoC is a chip that can give you a couple of >>>>>> specialised custom-tuned peripherals if that is what your application >>>>>> needs, or 2 or 3 standard peripherals (timers, uarts, SPI, etc.) for >>>>>> the >>>>>> silicon, power and dollar cost of 20 standard peripherals on a >>>>>> "normal" >>>>>> microcontroller. A fixed UART or 16-bit timer is /much/ more >>>>>> efficient >>>>>> than one made from flexible digital cells and all the programmability >>>>>> needed to make them into the same type of peripheral. >>>>>> >>>>>> When you need a custom peripheral of some sort, then the >>>>>> flexibility of >>>>>> a PSoC is (presumably) great. But only then. >>>>> >>>>> So programmability in the PSOC is a niche feature while >>>>> programmability >>>>> in the XMega E is somehow a big feature? >>>> >>>> No, the small programmability of the XMega E is a nice addition to all >>>> the other peripherals. Without its timers, communications, ADCs, DMA, >>>> etc., it would be a very poor microcontroller. >>>> >>>> The small programmability of the PSoC is niche because it has its >>>> programmable blocks /instead of/ ordinary microcontroller peripherals. >>>> If they were on top of a base of solid standard peripherals, the >>>> programmable blocks would be much more interesting. >>> >>> I don't follow. What can the XMega E do that the PSOC devices can't in >>> terms of peripherals? >>> >> >> It is a good while since I have looked at PSoC devices, and my >> comparison was with the older 8-bit and 16-bit core devices (not >> entirely unreasonable, since the XMega is an 8-bit device). 
The key >> difference is that the PSoC can have a couple of UARTs /or/ a few PWM >> timers /or/ a couple of SPI interfaces /or/ other digital interfaces >> with customisation. The XMegaE can have a couple of UARTs /and/ some >> PWM timers /and/ a couple of SPI interfaces /and/ a bit of customer >> hardware. Now do you see the point? >> >> Now that I have looked at the PSoC website, I see that for their ARM >> Cortex devices, Cypress have figured this one out and have dedicated >> standard peripherals in additional to the programmable blocks - because, >> as I have said all along, these are /far/ more efficient. > > You are missing a lot. The original PSOC devices had a simple MCU which > was not any standard device. It had truly programmable digital blocks > and programmable analog blocks. So they were *much* more functional > than other MCUs with fixed peripherals.
No, they were not much more useful. They had a few points where they were particularly good (I remember they were a good way of doing capacitive touch sensing when they were new). But you needed so many of the programmable digital blocks to do anything. If you wanted a UART, a 16-bit timer and an ADC, you had to get one of the bigger devices with more blocks - and that is for basic stuff that every other microcontroller had had for a decade.
> If those fixed peripherals met > your need, then the PSOC was still better in situations where you had > modes where you would need these peripherals in this mode and those > peripherals in other operating modes. I think an example they promoted > was for making measurements in one mode and reporting the results in > another mode.
Nope. It is much easier and cheaper to have dedicated peripherals for the standard tasks. /Then/ you can make use of the interesting programmable blocks to make specialised peripherals.
>
> The newer devices offer both 8 bit 8051 CPUs and ARM CM0, CM3 or CM4
> devices. I have not looked hard at them lately, but the 8051 based
> PSOC3 has up to 24 digital blocks,
>
> 16 to 24 universal digital blocks (UDB), programmable to
> create any number of functions:
> • 8-, 16-, 24-, and 32-bit timers, counters, and PWMs
> • I2C, UART, SPI, I2S, LIN 2.0 interfaces
> • Cyclic redundancy check (CRC)
> • Pseudo random sequence (PRS) generators
> • Quadrature decoders
> • Gate-level logic functions
>
> That totally blows away the totally wimpy XMega E programmability.
The AVR in the XMega will do things like CRC and PRS in software - it is a much more powerful cpu than the 8051 of the early PSoCs. (But not nearly as fast as the Cortex-M cpus.) And since you have all these other peripherals in hardware already, you don't /need/ programmable blocks to implement them. (I am not suggesting that the XMega E has /programmability/ to compare with the PSoCs' - I have never suggested such a thing. I /am/ suggesting that it is more /useful/ than the early PSoCs because those PSoCs were far too limited.)

Now the PSoCs have enough blocks to be useful - they did not originally. And the newer devices (Cortex based) have finally got things right - they have a proper cpu rather than a core that was outdated 30 years ago, and they have a full selection of normal fixed peripherals. The programmable blocks are an /addition/ to a solid microcontroller base, rather than a substitute for it.

You see this process again and again. When the PSoC came out, the marketing was all about claims like yours - you don't need fixed hardware peripherals because you can use the flexible programmable blocks. Now modern PSoCs have lots of fixed hardware peripherals as well. When the XMOS came out, marketing talked about how their deterministic SMT and I/O blocks meant that you could make Ethernet and USB in software. Now you can buy XMOS chips with an Ethernet MAC or a USB interface. When FPGAs were younger, you apparently did not need a hard processor because soft processors could do such a good job. Now FPGAs with hard processor cores are much more common - even when the speed (such as the SmartFusion2's 166 MHz Cortex-M3) would be achievable in a soft processor. And guess what? These devices come with a range of dedicated fixed hardware peripherals such as CAN controllers, I2C, SPI, timers, etc. And that's on an /FPGA/ - making a timer block on an FPGA is about as simple a task for programmable logic as you can get.
Again and again it is shown - a selection of dedicated hardware standard peripherals is important, and vastly more efficient than doing everything in programmable blocks, programmable logic, bit banging, etc.
> > The Cypress web site has always been a PITA to find the info you want, > but in searching this I see they have come out with a very wide range of > new devices including other custom CPUs including 128 MHz RISC as well > as a line of ARM CR4 and 240 MHz CR5 devices. These seem to be more > conventional devices with no mention of the programmable hardware, > analog or digital. But then that is not surprising. The programmable > hardware allows a very cost competitive product. As mentioned in the > Wikipedia article, they use PSOC in toothbrushes and Adidas sneakers. > They have to be cheap to be used in sneakers. The CR5 devices are much > higher cost parts. >
<snip>
>>> Is an Ethernet interface an adequate indication of functionality? I >>> think that is more than "simple" operations, no? >>> >> >> OK, I admit to being impressed by that possibility. It is 10 Mb, needs >> external ram, and has little additional possibilities beyond simple UDP >> telegrams, but I am still impressed. >> >> (For comparison, the XMOS can do a software 100 Mb NIC in about half of >> a cpu, letting you run lwip and network software in the other half. But >> if you really want Ethernet on an XMOS device, you are better off with >> the chips that have a dedicated hardware Ethernet interface.) > > I assume it can do a NIC in software because of the hardware assist for > the I/O? Still, that's pretty good. >
Yes, you basically have a SERDES system for each of the I/O pins, and a whole array of hardware timers that can trigger the transfers.
On 21/06/17 01:58, Tom Gardner wrote:
> On 20/06/17 21:28, David Brown wrote: >> On 20/06/17 18:46, Tom Gardner wrote: >>> On 20/06/17 15:24, Tom Gardner wrote: >>>> Overall C/C++ has (arguably) become too complex for simple >>>> things, and (unarguably IMNHSHO!) become too poorly >>>> specified for complex things. >>>> Nowadays C (and even more with C++) is part of the problem >>>> rather than part of the solution. The abstractions, which >>>> were a useful valid advance in K&R days, have become *very* >>>> leaky over the years with the advance of technology. >>> >>> Examples I've come across include such gems as... >>> >>> "However, many C compilers use non-standard expression >>> grammar where ?: is designated higher precedence than =, >>> which parses that expression as >>> e = ( ((a < d) ? (a++) : a) = d ), which then fails to >>> compile due to semantic constraints: ?: is never lvalue >>> and = requires a modifiable lvalue on the left. Note >>> that this is different in C++, where the conditional >>> operator has the same precedence as assignment." >>> http://en.cppreference.com/w/c/language/operator_precedence >>> >>> "i = ++i + 1; // undefined behavior[in C] (well-defined in C++11)" >>> http://en.cppreference.com/w/cpp/language/eval_order >>> >> >> Neither C nor C++ is a problem here. People writing absurd obfuscated >> nonsense >> in their code may be a problem, but that applies in any language. > > Agreed, but the differences between the two languages is a > big hint that there are surprising and unnecessary dragons > lurking to catch people that haven't spent several decades > following the differences and /newly introduced/ pitfalls.
What pitfalls? Everyone who has ever been involved in C (or C++) knows that expressions like "i = ++i + 1;" are classic examples of undefined or unspecified behaviour. There is never a reason for writing such things in code, and it is always unclear to the reader (even in later C++ standards where some cases now have defined ordering). Just don't write such silly code - problem solved.

Later C++ standards gave some such cases defined behaviour - that does not affect C, or introduce new pitfalls to either C or C++. At most, it makes previously undefined code into defined code - that will either fix the broken code or leave it broken. It will not break code that was previously working.

If you are interested, the reason why these things are now defined in C++ is not because anyone would ever /want/ to write "i = i++ + ++i;". It is merely a side-effect of making certain other orders defined, where they /are/ useful. In particular, people have assumed that expressions like "cout << ++i << ++i;" are fully defined - in fact their orders were not. But compilers helpfully /did/ follow the order that programmers expected, and the standards have merely been updated to codify existing practice.
> > Do you have any comment about the previous point about > /some/ compilers apparently /choosing/ non-standard > expression grammars? That seems remarkable to me. >
As far as I can see, the difference for C is merely which error message the compiler will give, depending on the way it interprets the expression. If it follows the C grammar rules for how a mixture of ?: and = is to be interpreted, the result cannot be parsed and is therefore an error. If it follows the C++ grammar rules (which is the non-standard C compiler behaviour mentioned), the result can be parsed but is a constraint violation (a bit like writing "5 = a;"), and is therefore an error. I don't see a significant practical difference here.

(There are plenty of other cases where some C compilers get the details of the standards wrong - but unless you are talking about really poor quality compilers, these will very rarely be seen in real-world code.)
> >>> The ability to break a compiler's legitimate optimisations >>> by "casting away constness and volatility" (IIRC that took >>> several years of committee deliberation as to whether >>> it was required or forbidden behaviour!) >> >> "casting away constness and volatility" means writing code that >> explicitly tells >> the compiler "I know better than you do here, and I know it is safe to >> break >> rules about the code". Either that is true, and it lets you write the >> code you >> want, or it is wrong and you've made a mistake - as you can do in all >> languages. > > That is problematic when a library is compiled and optimised > assuming that the const statements are correct, and later on > someone else in a different company uses that library in a > way which violates those assumptions.
No, I cannot see any problem here. If a library exports some constant data, then it is absolutely fine that the library is compiled and optimised on the assumption that the data never changes. If user code casts away constness and tried to change that data, the user code is clearly wrong. It is wrong in the same way that code passing -1 to a square root function is wrong, or code that calls a sin() function but is expecting to get the results of cos().
> > In those circumstances the user probably doesn't know better. >
If the user does not know that he should not be changing data that is specified to be constant, then the user is not qualified for the job as programmer.
> >>> And of course, the amusing C++ FQA. >> >> Have you read it? It is mostly misunderstandings, repetitions, outdated >> information, or completely unrealistic code. There are a few good >> points in it, >> but you have to work hard to find them. > > Indeed. But not all of the points can be "wished away"; many > a truth is spoken in jest. >
The FQA is not particularly funny or truthful. (I have read it, as well as the original C++ FAQ - have you?). C++ certainly has plenty of flaws - it is a /big/ language. Some of these flaws get fixed over time in newer standards, others remain, and yet more get introduced. And of course there is plenty that is a matter of taste or style. But wild exaggeration of the problems is no more helpful than any claim that the language is perfect. <https://gist.github.com/klmr/5423873>
> >>> All of which makes the simplicity of Forth seem appealing :) >> >> C is mostly simple and clear (if well written). C++ is a much bigger >> and more >> complex language - it has greater scope for writing good code, but >> also greater >> scope for making a mess. Simplicity of a language is not necessarily >> a good >> thing any more than complexity is - you don't get much simpler than a >> Turing >> machine, but I would not want to use it for application programming! > > I completely agree :) > > The major problem with C/C++ is that it can't make up its > mind whether it wants to be simple low-level and near to > the silicon, or an expressive high-level general purpose > applications language. Either would be valid, but in > trying to be both it misses both targets.
One of the major problems with C/C++ is that there is no such language - but a lot of people seem to think there is. A lot of people find C++ works fine for one or both targets. It is, I think, the only language that covers such a wide range. But it is not a /simple/ low-level language - it is a big language, whether you use it for low-level tasks or high-level tasks. There is, however, no need to use /all/ of C++. If you are writing PC code, you will use the standard library a lot - you will use containers, strings, etc. But you don't use them on low-level code. Different parts of C++ are better suited for different needs. C, on the other hand, is not well suited for higher level work at all. There was a time when it was one of the better choices, because there were so few alternatives. But not now.
> > Fortunately the marketplace has decided that in most cases > C/C++ isn't "the best" general purpose application language; > Java, Python and similar are the future there.
Most of my embedded programming is in C, with a small (but increasing) part C++. Most of my PC programming is in Python.
On Thursday, June 15, 2017 at 1:56:44 PM UTC-7, John Speth wrote:
> On 6/14/2017 10:44 AM, John Speth wrote:
> > Does anybody have any success or failure stories to relate that would
> > help us gauge the feasibility of the proposed design?
>
> Thanks all for the suggestions and stories. We decided to take the safe
> route and design in a SPI flash with an MCU-controlled switch that will
> switch between external and internal access. We figured the expense and
> certainty of the HW design is less than the time and uncertainty of the
> SW design.
>
> We figured that with enough time and effort plus a worthy DMA engine, we
> could make an MCU SPI slave look like SPI flash, but with some
> yet-to-be-learned challenge. We didn't have the time.
>
> JJS
I agree with Rick that this should be an FPGA project. We are working on something similar. Our first thought was to tap into the SD card, but the SD card only receives data in batches at pre-determined intervals, and during that time the device is suspended and not usable. We really want better real-time access.

So we opened up the box and found a couple of DIP32 sockets next to the surface-mounted SRAM. We would have to disable the onboard SRAM and wire up headers to a custom FPGA board. Found a Cyclone II board with 64K SRAM; asking the seller if he can upgrade it to 128K. If so, that saves us half the project time. Eventually, we can probably build a DIP32 header with FPGA and SRAM. I can't mount BGA myself, but I'm more than happy to pay someone to do it.

The FPGA code is just one page (incomplete, the control-line mux is still missing), just to see if it will fit in the cheapest MAX II CPLD.

---------------------------------------------------------

library ieee;
use ieee.std_logic_1164.all;
use IEEE.STD_LOGIC_ARITH.ALL;
use IEEE.STD_LOGIC_UNSIGNED.ALL;

entity ram is port(
    P7:   in std_logic;     -- preserve upper 9 bits for next shift
    P6:   in std_logic;     -- preserve upper 10 bits for next shift
    P5:   in std_logic;     -- preserve upper 11 bits for next shift
    ACLK, clear, pass : in std_logic;  -- Address serial clock
    ASI:  in std_logic;     -- Address serial in
    ASO:  buffer std_logic; -- Address serial out
    A:    buffer std_logic_vector(16 downto 0); -- Address register
    AA:   in std_logic_vector(16 downto 0);     -- Address parallel in
    DCLK: in std_logic;     -- Data serial clock
    DSI:  in std_logic;     -- Data serial in
    DSO:  buffer std_logic; -- Data serial out
    D:    buffer std_logic_vector(7 downto 0);  -- Data register
    DD:   in std_logic_vector(7 downto 0)       -- Data parallel in
);
end ram;

architecture arch of ram is
begin
    -- Address register: asynchronous clear/preserve/load, serial shift
    -- on rising ACLK. P7/P6/P5/pass/AA are added to the sensitivity
    -- list so the asynchronous branches simulate correctly.
    process (ACLK, clear, P7, P6, P5, pass, AA)
    begin
        if clear = '1' then
            A <= "00000000000000000";
        elsif (P7 = '1') then
            A(9 downto 0) <= A(16 downto 7);
        elsif (P6 = '1') then
            A(10 downto 0) <= A(16 downto 6);
        elsif (P5 = '1') then
            A(11 downto 0) <= A(16 downto 5);
        elsif (pass = '1') then
            A <= AA;
        elsif (ACLK'event and ACLK = '1') then
            ASO <= A(16);
            A(16 downto 1) <= A(15 downto 0);
            A(0) <= ASI;
        end if;
    end process;

    -- Data register: asynchronous parallel load, serial shift on
    -- rising DCLK. pass/DD added to the sensitivity list as above.
    process (DCLK, pass, DD)
    begin
        if (pass = '1') then
            D <= DD;
        elsif (DCLK'event and DCLK = '1') then
            DSO <= D(7);
            D(7 downto 1) <= D(6 downto 0);
            D(0) <= DSI;
        end if;
    end process;
end arch;
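To make the shift-register behaviour above concrete, here is a small C model of the 17-bit address path: on each clock the register shifts left one bit, the old MSB appears on the serial output, and the serial input becomes bit 0. The struct and function names are invented for illustration; this models only the serial-shift branch of the VHDL process (none of the preserve/pass lines asserted).

```c
#include <assert.h>
#include <stdint.h>

/* Software model of the 17-bit serial address register in the VHDL
 * above. On each clock: ASO <= A(16); A <= A(15 downto 0) & ASI. */
typedef struct {
    uint32_t a;   /* 17-bit address register, bits 16..0 */
    int      aso; /* serial-out bit from the last clock */
} addr_shift_t;

static void addr_clock(addr_shift_t *s, int asi)
{
    s->aso = (int)((s->a >> 16) & 1u);                        /* MSB out */
    s->a   = ((s->a << 1) | (uint32_t)(asi & 1)) & 0x1FFFFu;  /* ASI in */
}
```

Clocking in 17 address bits MSB-first leaves the full address in `a`, which is the behaviour the VHDL gives when `clear`, `P7`, `P6`, `P5` and `pass` are all deasserted.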
On Mon, 19 Jun 2017 00:47:31 +0200, David Brown
<david.brown@hesbynett.no> wrote:

>Again, you are missing my point entirely. > >The /language/ has not changed.
In the same way that C hasn't changed.
> You are still stuck with a typeless >system relying on programmers writing comments to describe a function's >inputs and outputs.
Just like C. And I read a lot of C.
> You are still stuck on doing everything with >"cells", that are usually 16-bit or 32-bit, or double-cells - no >standardised way of working with data of specific sizes.
Unless you use a library for the purpose. Just like C.
>You are still >stuck on a single word list, with 31 characters significance (the GA144 >Forth is limited to "5 to 7" significant characters) - no modules, >namespaces or other local naming.
No, not me. We use 256-character significance. The base system has about 20 named namespaces.
>Some details have changed, but the language has not.
You persist in believing that colorForth represents the state of Forth. It doesn't. It's a one-man system for one man's use, and Chuck Moore does not pretend anything else.

Stephen

--
Stephen Pelc, stephenXXX@mpeforth.com
MicroProcessor Engineering Ltd - More Real, Less Time
133 Hill Lane, Southampton SO15 5AF, England
tel: +44 (0)23 8063 1441, fax: +44 (0)23 8033 9691
web: http://www.mpeforth.com - free VFX Forth downloads
On Mon, 19 Jun 2017 00:47:31 +0200, David Brown
<david.brown@hesbynett.no> wrote:

> You are still tied to blocks of 16x64 characters.
I haven't used blocks for decades. You are deeply misinformed.

Stephen

--
Stephen Pelc, stephenXXX@mpeforth.com
MicroProcessor Engineering Ltd - More Real, Less Time
133 Hill Lane, Southampton SO15 5AF, England
tel: +44 (0)23 8063 1441, fax: +44 (0)23 8033 9691
web: http://www.mpeforth.com - free VFX Forth downloads
David Brown wrote on 6/21/2017 6:10 AM:
> On 21/06/17 04:44, rickman wrote: >> David Brown wrote on 6/20/2017 3:49 AM: >>> On 20/06/17 06:16, rickman wrote: >>>> David Brown wrote on 6/19/2017 9:34 AM: >>>>> On 19/06/17 15:23, rickman wrote: >>>>>> David Brown wrote on 6/19/2017 8:47 AM: >>>>>>> On 19/06/17 14:36, rickman wrote: >>>>>>>> David Brown wrote on 6/19/2017 3:02 AM: >>> <snip> >>>>>>> What you get with the PSoC is a chip that can give you a couple of >>>>>>> specialised custom-tuned peripherals if that is what your application >>>>>>> needs, or 2 or 3 standard peripherals (timers, uarts, SPI, etc.) for >>>>>>> the >>>>>>> silicon, power and dollar cost of 20 standard peripherals on a >>>>>>> "normal" >>>>>>> microcontroller. A fixed UART or 16-bit timer is /much/ more >>>>>>> efficient >>>>>>> than one made from flexible digital cells and all the programmability >>>>>>> needed to make them into the same type of peripheral. >>>>>>> >>>>>>> When you need a custom peripheral of some sort, then the >>>>>>> flexibility of >>>>>>> a PSoC is (presumably) great. But only then. >>>>>> >>>>>> So programmability in the PSOC is a niche feature while >>>>>> programmability >>>>>> in the XMega E is somehow a big feature? >>>>> >>>>> No, the small programmability of the XMega E is a nice addition to all >>>>> the other peripherals. Without its timers, communications, ADCs, DMA, >>>>> etc., it would be a very poor microcontroller. >>>>> >>>>> The small programmability of the PSoC is niche because it has its >>>>> programmable blocks /instead of/ ordinary microcontroller peripherals. >>>>> If they were on top of a base of solid standard peripherals, the >>>>> programmable blocks would be much more interesting. >>>> >>>> I don't follow. What can the XMega E do that the PSOC devices can't in >>>> terms of peripherals? 
>>>> >>> >>> It is a good while since I have looked at PSoC devices, and my >>> comparison was with the older 8-bit and 16-bit core devices (not >>> entirely unreasonable, since the XMega is an 8-bit device). The key >>> difference is that the PSoC can have a couple of UARTs /or/ a few PWM >>> timers /or/ a couple of SPI interfaces /or/ other digital interfaces >>> with customisation. The XMegaE can have a couple of UARTs /and/ some >>> PWM timers /and/ a couple of SPI interfaces /and/ a bit of customer >>> hardware. Now do you see the point? >>> >>> Now that I have looked at the PSoC website, I see that for their ARM >>> Cortex devices, Cypress have figured this one out and have dedicated >>> standard peripherals in additional to the programmable blocks - because, >>> as I have said all along, these are /far/ more efficient. >> >> You are missing a lot. The original PSOC devices had a simple MCU which >> was not any standard device. It had truly programmable digital blocks >> and programmable analog blocks. So they were *much* more functional >> than other MCUs with fixed peripherals. > > No, they were not much more useful. They had a few points where they > were particularly good (I remember they were a good way of doing > capacitive touch sensing when they were new). But you needed so many of > the programmable digital blocks to do anything. If you wanted a UART, a > 16-bit timer and an ADC, you had to get one of the bigger devices with > more blocks - and that is for basic stuff that every other > microcontroller had had for a decade.
I'm calling applesauce on this one. Again, you are talking in vague terms that mean little and trying to make comparisons without specifics. The PSOC 1 parts were designed for *very* low cost. They had two advantages over other devices. First, the same die could serve a wide range of peripheral combinations, lowering the part cost through higher production volumes. Second, the peripherals could be changed on the fly, serving as one set of peripherals in one mode of operation and another set in another mode, again keeping the part cost low because you can use the smallest possible part. If you think there are other parts that can reach the same price point with the same capabilities, please point them out. Apple used the PSOC 1 in the iPod Nano, and other companies use them in high-volume, low-margin applications where performance vs. cost is critical. Your criticism regarding the peripherals just doesn't hold water.
>> If those fixed peripherals met >> your need, then the PSOC was still better in situations where you had >> modes where you would need these peripherals in this mode and those >> peripherals in other operating modes. I think an example they promoted >> was for making measurements in one mode and reporting the results in >> another mode. > > Nope. > > It is much easier and cheaper to have dedicated peripherals for the > standard tasks. /Then/ you can make use of the interesting programmable > blocks to make specialised peripherals.
The fact that you said "Nope" doesn't make it true. Dedicated peripherals are just that, dedicated. They can't be anything else. If you aren't using them, they are wasted silicon. They tend to be added in proportion: chips with more UARTs are likely to have more I2C and SPI as well, often wasted. PSOC 1 devices sold well in markets that needed to save every last penny. The only real issue with PSOC 1 was the design software. I scheduled a live remote class once which turned out to be me and the instructors. lol They were willing to teach the tools one on one in the early days because they hadn't gotten things working well enough to convey the knowledge any other way. That's why they revamped the line to PSOC 3 and 5 and now all the others, with all new tools. Personally I don't care for the tools because they isolate the designer from what is going on, but they work at least.
>> The newer devices offer both 8 bit 8051 CPUs and ARM CM0, CM3 or CM4 >> devices. I have not looked hard at them lately, but the 8051 based >> PSOC3 has up to 24 digital blocks, >> >> 16 to 24 universal digital blocks (UDB), programmable to >> create any number of functions: >> &bull; 8-, 16-, 24-, and 32-bit timers, counters, and PWMs >> &bull; I2C, UART, SPI, I2S, LIN 2.0 interfaces >> &bull; Cyclic redundancy check (CRC) >> &bull; Pseudo random sequence (PRS) generators >> &bull; Quadrature decoders >> &bull; Gate-level logic functions >> >> That totally blows away the totally wimpy XMega E programmability. > > The AVR in the XMega will do things like CRC and PRS in software - it is > a much more powerful cpu than the 8051 of the early PSoC's. (But not > nearly as fast as the Cortex-M cpus.) And since you have all these > other peripherals in hardware already, you don't /need/ programmable > blocks to implement them. (I am not suggesting that the XMega E has > /programmability/ to compare with the PSoCs' - I have never suggested > such a thing. I /am/ suggesting that they are more /useful/ than the > early PSoC's because those PSoC's were far too limited.)
The "early PSOCs" didn't use an 8051; they used a custom M8C core. I don't recall the speed relative to other 8-bitters, but it runs at 24 MHz with a built-in multiplier. To say the AVR blows it away is, I expect, rather an exaggeration.
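As an aside, the software-CRC point raised above (an AVR- or M8C-class core computing a CRC without a hardware block) is easy to sketch in C. The parameters here (CRC-16-CCITT, polynomial 0x1021, initial value 0xFFFF) are just one common choice, not anything either part mandates:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC-16-CCITT (poly 0x1021, init 0xFFFF, no final XOR) --
 * the kind of routine a small 8-bit core runs in software in place
 * of a hardware CRC block. */
static uint16_t crc16_ccitt(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    while (len--) {
        crc ^= (uint16_t)(*data++) << 8;       /* fold next byte into MSB */
        for (int i = 0; i < 8; i++)            /* process one bit per pass */
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}
```

The bitwise form trades speed for size; a 256-entry table version is the usual alternative when flash is less scarce than cycles.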
> Now the PSoC's have enough blocks to be useful - they did not > originally. And the newer devices (Cortex based) have finally got > things right - they have a proper cpu rather than a core that was > outdated 30 years ago, and they have a full selection of normal fixed > peripherals. The programmable blocks are an /addition/ to a solid > microcontroller base, rather than instead of it.
You keep talking about devices purely from your perspective. The market says your criticisms are wrong. The original PSOC devices are still sold and are very cost effective. They offer a much, much wider range of peripheral combinations, making them optimal for a much wider range of applications.
> You see this process again and again. When the PSoC came out, the > marketing was all about claims like yours - you don't need fixed > hardware peripherals because you can use the flexible programmable > blocks. Now modern PSoC's have lots of fixed hardware peripherals as > well. When the XMOS came out, marketing talked about how their > deterministic SMT and I/O blocks meant that you could make Ethernet and > USB in software. Now you can buy XMOS chips with an Ethernet MAC or a > USB interface. When FPGAs were younger, you apparently did not need a > hard processor because soft processors could do such a good job. Now > FPGAs with hard processor cores are much more common - even when the > speed (such as SmartFusion2's 166 MHz Cortex-M3) would be achievable in > a soft processor. And guess what? These devices come with a range of > dedicated fixed hardware peripherals such as CAN controllers, I2C, SPI, > Timers, etc. And that's on an /FPGA/ - making a timer block on an FPGA > is about as simple a task for programmable logic as you can get. > > Again and again it is shown - a selection of dedicated hardware standard > peripherals is important, and vastly more efficient than doing > everything in programmable blocks, programmable logic, bit banging, etc.
None of this makes the XMega E a useful part. Programmability is highly useful, allowing devices to reach an optimum price point. The XMega E would have only a very, very tiny niche.
>> The Cypress web site has always been a PITA to find the info you want, >> but in searching this I see they have come out with a very wide range of >> new devices including other custom CPUs including 128 MHz RISC as well >> as a line of ARM CR4 and 240 MHz CR5 devices. These seem to be more >> conventional devices with no mention of the programmable hardware, >> analog or digital. But then that is not surprising. The programmable >> hardware allows a very cost competitive product. As mentioned in the >> Wikipedia article, they use PSOC in toothbrushes and Adidas sneakers. >> They have to be cheap to be used in sneakers. The CR5 devices are much >> higher cost parts. >> > <snip> >>>> Is an Ethernet interface an adequate indication of functionality? I >>>> think that is more than "simple" operations, no? >>>> >>> >>> OK, I admit to being impressed by that possibility. It is 10 Mb, needs >>> external ram, and has little additional possibilities beyond simple UDP >>> telegrams, but I am still impressed. >>> >>> (For comparison, the XMOS can do a software 100 Mb NIC in about half of >>> a cpu, letting you run lwip and network software in the other half. But >>> if you really want Ethernet on an XMOS device, you are better off with >>> the chips that have a dedicated hardware Ethernet interface.) >> >> I assume it can do a NIC in software because of the hardware assist for >> the I/O? Still, that's pretty good. >> > > Yes, you basically have a SERDES system for each of the I/O pins, and a > whole array of hardware timers that can trigger the transfers.
I think we are starting to see why the XMOS devices are so expensive: very extensive dedicated hardware. Too bad they don't have a few programmable digital blocks that can actually do something useful.

--
Rick C
David Brown wrote on 6/20/2017 10:48 AM:
> On 20/06/17 14:12, Anton Ertl wrote: >> David Brown <david.brown@hesbynett.no> writes: >>> On 19/06/17 16:44, Anton Ertl wrote: >>>> A static checker might say that the DROP and the - access a value that >>>> is not present in the stack effect, so they would be a little more >>>> precise at pinpointing the problem, but stack depth issues are easy >>>> enough that nobody found it worthwhile to write such a checker yet. >>> >>> I would be much happier to see the language supporting such static >>> checks in some way (not as comments, but as part of the language), and >>> tools doing the checking. Spotting such errors during testing is better >>> than spotting them when running the program, but spotting them during >>> compilation is far better. >> >> Why would that be? I can see that it's far better for programmers who >> don't test their programs, but what is the advantage for programmers >> who test their programs? > > Honestly? You can't see the advantage of spotting errors at as early a > stage as possible? > > Why would someone bother writing test patterns to catch possible errors > that the tools can see automatically? That is just a waste of > everyone's time, and it's easy to forget some tests.
You are showing your ignorance of Forth. The test code catches the errors at compile time, same as C. If you are going to forget tests, you are doomed. Every piece of code should be written to a set of requirements, each of which must be verified, usually by testing. Forget a test and you have unverified code.
> Errors of various sorts can happen when you write code. They can be > everything from misunderstandings of the specifications, to small typos, > to stylistic errors (which don't affect the running code, but can affect > maintainability and lead to higher risk of errors in the future), to > unwarranted assumptions about how the code is used. Producing a correct > program involves a range of methods for avoiding errors, or detecting > them as early as possible. Testing (of many different kinds) is /part/ > of that - but it is most certainly not sufficient. It is /always/ > cheaper and more productive to spot errors at an earlier stage than at a > later stage - and detecting them at compilation time is earlier than > detecting them at unit test time or system test time.
See above... You don't understand Forth.
>>> (Better still is spotting them while editing >>> - IDEs for C usually do a fair amount of checking while you write the code.) >> >> And I would especially hate it if an IDE is distracting me by nagging >> me about minor details while I am focusing on something else. > > Incorrect code is not a minor detail.
I've never seen a C editor that would catch anything more complex than mismatched parentheses. Do editors look for missing variable declarations now?
>>>> No! I have had lots of portability problems for C code when porting >>>> between 32-bit and 64-bit systems, thanks to the integer type zoo of >>>> C. In Forth I have had very few such problems, thanks to the fact >>>> that we only have cells and occasionally double-cells (and when you >>>> get a double-cell program right on 32-bit systems, it also works on >>>> 64-bit systems). If you want a FLOOR5 variant that works for integers >>>> that don't fit in a cell, you write DFLOOR5. And if it does not fit >>>> in double cells (but would fit in 64 bits), you probably have the >>>> wrong machine for what you are trying to do. C did not acquire 64-bit >>>> integer types until 32-bit machines were mainstream. >>>> >>> >>> And there you have illustrated my point, to some extent - C has >>> progressed as a language, to include new features for more modern >>> systems, such as support for 64-bit types. Now I can use a C compiler >>> for an 8-bit microcontroller and have 64-bit datatypes. (OK, not all >>> implementations have such support - but that is a quality of >>> implementation issue, not a language failure.) >> >> If the language does not require the 64-bit types, you can hardly >> claim them as a language feature. > > C /does/ require them (in C99), and they /are/ a language feature. > (Technically, C requires an integer type that is at least 64 bits, but > for most practical purposes, real implementations have exactly 64-bit > types.) There are C compilers that don't support all of C99. > > And I have used 64-bit integers on an 8-bit microcontroller. It is a > rare requirement, certainly, but not inconceivable. > >> >> Anyway, if 64-bit integers were needed on 16-bit-cell systems, we >> would add them to Forth. But in discussions about this subject, the >> consensus emerged that we do not need them (at least not for >> computations). 
>> >> By contrast, Gforth and PFE have provided 128-bit integers on 64-bit >> systems since 1995, something that C compilers have not supported for >> quite a while after that. And once GCC started supporting it, it was >> quite buggy; I guess the static-checking-encouraged lack of testing >> was at work here. > > Static checking is an addition to testing, not an alternative. > >> >>> And it is perfectly possible to write C code that is portable across >>> 32-bit and 64-bit systems >> >> There is a difference between "it is possible" and "it happens". My >> experience is that, in C, if you have tested a program only on 32-bit >> systems, it will likely not work on 64-bit systems; in Forth, it >> likely will. > > I don't see a way to write portable code that works with known sizes of > data in Forth. All I have seen so far is that you can use single cells > for 16-bit data, and double cells for 32-bit data. This means if you > want to use 32-bit values, your choice is between broken code on 16-bit > systems or inefficient code on 32-bit systems. (And you can only tell > if it is broken on 16-bit systems if you have remembered to include a > test case with larger data values.)
Again, you don't understand Forth. You use single cells or double cells. Forth does not specify the cell size, just as C does not specify the size of an integer.
> I work on embedded systems. I need to be able to access memory with > /specific/ sizes. I need to be able to make structures with /specific/ > sizes. > > Can you show me how this is possible in Forth, in a clear, simple and > portable manner?
Forth has cells (a word) and chars (a byte). I don't find it to be a problem, but then I don't typically jump back and forth between 32 and 16 bit processors. What 16 bit processors do you actually use?
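For reference, the C side of this argument rests on `<stdint.h>`: exact-width types that stay the same size whether the native word is 16, 32 or 64 bits. A minimal sketch (the struct and field names are invented for illustration, not from any real device):

```c
#include <assert.h>
#include <stdint.h>

/* A register-write record with an exact layout: 2 + 2 + 4 = 8 bytes
 * of payload on a 16-bit AVR or a 64-bit PC alike. C guarantees the
 * member widths; only padding/alignment is implementation-defined
 * (none is needed here on common ABIs). */
typedef struct {
    uint16_t reg_addr; /* exactly 16 bits, everywhere */
    uint16_t flags;    /* exactly 16 bits, everywhere */
    uint32_t value;    /* exactly 32 bits, everywhere */
} reg_write_t;
```

This is the portability mechanism the thread is arguing about: code written against `uint16_t`/`uint32_t` does not change meaning when recompiled for a different cell... er, word size.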
>>> On the other hand, you don't seem to be able to write a FLOOR5 >>> definition that will handle 32-bit values efficiently on both 16-bit >>> cell systems and 32-bit cell systems. >> >> Tough luck. Why would I need a double FLOOR5 on a 16-bit platform? >> > > That is not for you to worry about. Think of me as a customer asking > for a piece of code written in Forth. I want a FLOOR5 function that > handles 32-bit values, works correctly on 16-bit and 32-bit cell > systems, and is efficient on both sizes of system. Can it be done?
No one has customers asking for simple functions. -- Rick C
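For what it's worth, assuming FLOOR5 here means floored division by 5 (rounding toward negative infinity, which is where Forth's FM/MOD and C's truncating `/` differ), a C sketch for 32-bit values might be:

```c
#include <assert.h>
#include <stdint.h>

/* Floored division of a 32-bit value by 5. C's / truncates toward
 * zero (C99), so negative inputs with a nonzero remainder need an
 * adjustment of -1 to get the floor. */
static int32_t floor5(int32_t n)
{
    int32_t q = n / 5;
    if ((n % 5 != 0) && (n < 0))
        q--;          /* e.g. -12/5 truncates to -2; floor is -3 */
    return q;
}
```

On a 32-bit or 64-bit target this compiles to a handful of instructions; on a 16-bit target the compiler synthesizes the 32-bit division in software, which is the efficiency trade-off being debated above.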