EmbeddedRelated.com
Forums

MCU mimicking a SPI flash slave

Started by John Speth June 14, 2017
On 16/06/17 19:52, rickman wrote:
> David Brown wrote on 6/16/2017 3:25 AM:
>> On 16/06/17 00:15, rickman wrote:
>>> David Brown wrote on 6/15/2017 5:37 PM:
>>>> On 15/06/17 22:52, rickman wrote:
>>>
<snip>
>>> Once an FPGA was on the board there was no reason to use a CPU, although I would have liked to have a hybrid chip with about 1000 4-input LUTs and a moderate CPU or DSP even. Add a 16 bit stereo CODEC and it would be perfect!
>>
>> Often a soft processor can be enough for housekeeping, but an FPGA with a fast ARM core would be nice. Expensive, but nice.
>
> Not sure what you mean by a "fast" ARM core, but ARMs combined with FPGAs are sold by three of the four FPGA companies.
By "fast", I mean "so much faster than strictly needed for the job in hand that you don't need to worry about speed" :-) Alternatively, a Cortex-A core with Neon is "fast".
>
>
>> I think Atmel/Microchip now have a microcontroller with a bit of programmable logic - I don't know how useful that might be. (Not for your application here, of course - neither the cpu nor the PLD part are powerful enough.)
>
> You might be thinking of the PSOC devices from Cypress. They have either an 8051 type processor or an ARM CM3 with various programmable logic and analog. Not really an FPGA in any sense. They can be programmed in Verilog, but they are not terribly capable. Think of them as having highly flexible peripherals.
No, I mean the new Atmel XMega E series. They have a "custom logic module" with a couple of timers and some lookup tables. It is not an FPGA - it's just a bit of programmable logic, more akin to a built-in PLD.
>
> If you really mean the Atmel FPGAs, they are very old, very slow and very expensive. I don't consider them to be in the FPGA business, they are more in the obsolete device business like Rochester Electronics. The device line they had that included a small 8 bit processor is long gone and was never cost competitive.
No, I know about that line too (never used it, but I know of it).
>
>
>>> I wonder why they can't make lower cost versions? The GA144 has 144 processors, $15 @ qty 1. It's not even a modern process node, 150 or 180 nm I think, 100 times more area than what they are using today.
>>>
>>
>> I guess it is the usual matter - NRE and support costs have to be amortized. When the chip is not a big seller (and I don't imagine the GA144 is that popular), they have to make back their investment somehow.
>
> I'm talking about the XMOS device. The GA144 could easily be sold cheaply if they use a more modern process and sold them in high volumes. But what is holding back XMOS from selling a $1 chip? My understanding is the CPU is normally a pretty small part of an MCU with memory being the lion's share of the real estate. Even having 8 CPUs shouldn't run the area and cost up since the chip is really all about the RAM. Is the RAM special in some way? I thought it was just fast and shared through multiplexing.
>
It is a fast RAM - the one RAM block runs at 500 MHz single-cycle, and may be dual-ported. There is only one cpu on the smaller XMOS devices, with 8 hardware threads - larger devices have up to 4 cpus (and thus 32 threads). The IO pins have a fair amount of fast logic attached too. But I don't know where the cost comes in. The XMOS devices are, I guess, a good deal more popular than the GA144 - but they are not mass market compared to popular Cortex-M microcontrollers.
>
>> Have you used the GA144? It sounds interesting, but I haven't thought of any applications for it.
>
> There are a number of issues with using the GA144 in a production design. Not the least is the lack of reliable supply. The company runs on a shoestring with minimal offices, encouraging free help from anyone interested in writing an app note. lol When they kicked off the GA144 there was a lot of interest from fairly fringe groups of designers (the assembly language is pretty much Forth) but I have yet to hear of any designs reaching production which is not the same thing as there being none. The production runs appear to be the minimum size test runs from the foundry. The chip is pretty small, so they get a *lot* from a wafer.
>
On 16/06/17 17:40, Stephen Pelc wrote:
> On Fri, 16 Jun 2017 16:34:55 +0200, David Brown <david.brown@hesbynett.no> wrote:
>
>> I had a look. No, FORTH has not progressed that I can see (unless you think adding colour to the editor is a revolution). Some FORTH compilers might be good at producing optimised code on microcontrollers, but that is an improvement in the implementations, not the language.
>
> We use standard editors with syntax colouring files as we have done for decades. The professional Forth compilers, whether for microcontrollers or for the desktop, produce optimised native code.
The reference to colour was for the "new" colorForth used by the GA144. I am not surprised you have syntax highlighting in your editors - /Forth/ may not have moved on, but your implementations of Forth toolchains seem top class and with modern features.
>
> The current Forth standard is Forth-2012. See:
> http://www.forth200x.org/documents/forth-2012.pdf
>
> What you used 30 years ago did not include target code for
> USB stack
> FAT file system
> TCP/IP stack with HTTP, FTP and Telnet servers
> Embedded GUI
> and so on and so on
Those are all libraries provided by your implementation. That makes your implementation good and useful to users - it does not make the /language/ any better.
>
> Having been in the Forth compiler business for a very long time, I can assure you that the tools and libraries supplied in this decade are vastly superior to those of 30 years ago.
>
Again, you are missing my point entirely. The /language/ has not changed. You are still stuck with a typeless system relying on programmers writing comments to describe a function's inputs and outputs. You are still stuck on doing everything with "cells" that are usually 16-bit or 32-bit, or double-cells - no standardised way of working with data of specific sizes. You are still stuck on a single word list, with 31 characters of significance (the GA144 Forth is limited to "5 to 7" significant characters) - no modules, namespaces or other local naming. You are still tied to blocks of 16x64 characters. Some details have changed, but the language has not.
On 16/06/17 20:22, rickman wrote:
> David Brown wrote on 6/16/2017 7:21 AM:
>> On 15/06/17 23:46, rickman wrote:
>>> David Brown wrote on 6/15/2017 7:04 AM:
<snip>
>
> Yeah, I don't know of any product using the GA144. I looked hard at using it in a production design where I needed to replace an EOL FPGA. Ignoring all the other issues, I wasn't sure it would meet the timing I've outlined in other posts in this thread. I needed info on the I/O timing and GA wouldn't provide it. They seemed to think I wanted to reverse engineer the transistor level design. Silly gooses.
>
> I don't know why the age of a computer language is even a factor. I don't think C is a newcomer and is the most widely used programming language in embedded devices, no?
The age of a language itself is not important, of course. The type of features it has /are/ important. Many things in computing have changed in the last 4 decades. Features of a language that were simply impossible at that time due to limited host computing power are entirely possible today. So modern languages do far more compile-time checking and optimisation now than was possible at that time. Good languages evolve to take advantage of newer possibilities - the C of today is not the same as the pre-K&R C of that period. The Forth of today appears to me to be pretty much the same - albeit with more optimising compilers and additional libraries.
>
> The GA144 is a stack processor and so the assembly language looks a lot like Forth which is based on a stack processor virtual machine. I'm not sure what is "weird" about it other than the fact that most programmers aren't familiar with stack programming other than Open Boot, Postscript, RPL and BibTeX.
>
>
Even for a stack machine, it is very limited. In some ways, the 4-bit MARC4 architecture was more powerful (it certainly had more program space). But this is all first impressions from me - I have not used the devices, and I am /very/ out of practice with Forth.
>>>>> XMOS does provide some techniques (e.g. composable and interfaces) to reduce the sharpness of the cliffs, but not to eliminate them. But then the same is true of ARM+RTOS etc etc.
>>>>
>>>> Yes, there are learning curves everywhere. And scope for getting things wrong :-) XMOS gives a different balance amongst many of the challenges facing designers - it is better in some ways, worse in other ways.
>>>>
>>>>>
>>>>> I wouldn't regard XMOS as being a replacement for a general-purpose processor, but there is a large overlap in many hard-realtime applications.
>>>>>
>>>>> Similarly I wouldn't regard an ARM as being a replacement for a CPLD/FPGA, but there can be an overlap in many soft-realtime applications
>>>>>
>>>>> The XMOS devices inhabit an interesting and important niche between those two.
>>>>
>>>> Agreed.
>>>
>>> The question is whether it is worth investing the time and energy into learning the chip if you don't focus your work in this realm.
>>
>> Certainly - working with XMOS means thinking a little differently. But it is not /nearly/ as different as the GA144.
>
> "A little" is a *lot* larger learning curve than any other MCU I am aware of (the GA144 aside). My point is that can be worth it only if you do a lot of designs that would make use of its unique features. I'm not sure there really is a very large sweet spot given that the chips are not sold at the low end and other devices will do the same job using existing techniques and tools.
For a programmer who is completely unfamiliar with FPGA and programmable logic, the XMOS is likely to be less of a leap than moving to an FPGA. But I agree it is hard to find an application area for these devices - I have only had a couple of uses for them, and they probably were not ideal for those cases.
>
>
>>> Personally I find the FPGA approach covers a *lot* of ground that others don't see and the region left that is not so easy to address with either FPGAs or more conventional CPUs is very limited. If the XMOS isn't a good price fit, I most likely would just go with a small FPGA. I saw the XMOS has a 12 bit, 1 MSPS ADC which is nice. But again, this only makes its range of good fit slightly larger.
>>
>> If you have lots of experience with FPGA development, it is natural to look to FPGAs for solutions to design problems - and that is absolutely fine. It is impossible to jump on every different development method and be an expert at them all - and most problems can be solved in a variety of ways.
>
> Yes, but you don't need to know a "variety" of ways of solving problems. You only need to know ways that are highly effective for most design problems.
>
True.
> The many misconceptions of FPGAs relegate them to situations where CPUs > just can't cut the mustard. In reality they are very flexible and only > limited by the lack of on chip peripherals. Microsemi adds more > peripherals to their devices, but still don't compete directly with > lower cost MCUs.
FPGAs have their strengths and their weaknesses. There are lots of situations where they are far from ideal - but I agree that there are many misconceptions about them that might discourage people from starting to use them.
>
>
>>>>>> XMOS devices can be a lot of fun, and I would enjoy working with them again. For some tasks, they are an ideal solution (like for this one), once you have learned to use them. But not everything is easy with them, and you have to think in a somewhat different way.
>>>>>
>>>>> Yes, but the learning curve is very short and there are no unpleasant surprises.
>>>>>
>>>>> IMNSHO "thinking in CSP" will help structure thoughts for any realtime application, whether or not it is formally part of the implementation.
>>>>>
>>>>
>>>> Agreed.
>>>
>>> I prefer Forth for embedded work. I believe there is a Forth available.
>>
>> Available for what? The XMOS? I'd be surprised.
>>
>>> I don't know if it captures any of the flavor of CSP, mostly because I know little about CSP. I did use Occam some time ago. Mostly I recall it had a lot of constraints on what the programmer was allowed to do. The project we were using the Transputer on programmed it all in C.
>>>
>>
>> Occam has its own advantages and disadvantages independent of the use of CSP-style synchronisation and message passing.
>>
>> I have looked at Forth a few times over the years, but I have yet to see a version that has changed for the better since I played with it as a teenager some 30 years ago. The stuff on the GA144 website is absurd - their "innovation" is that their IDE has colour highlighting despite looking like a reject from the days of MSDOS 3.3. Words are apparently only sensitive to the first "5 to 7 characters" - so "block" and "blocks" are the same identifier. (You would think they would /know/ if the limit were 5, 6 or 7 characters.) Everything is still designed around punch card formats of 8 by 64 characters.
>
> You seem obsessed with your perceptions of the UI rather than utility.
I am merely highlighting what the GA144 website seems to view as being modern innovations in the Forth toolchains.
> I don't have a problem with large fonts. Most of the designers of the > system are older and have poorer eyesight (a feature I share with > them).
If your eyesight is poor, use a bigger screen or a bigger or more legible font - that's fine. But it does not make sense to use that as the basis for designing your toolchain - just make the IDE configurable.
> The use of color to indicate aspects of the language is pretty > much the same as the color highlighting I see in nearly every modern > editor. The difference is that in ColorForth the highlighting is *part* > of the language as it distinguishes when commands are executed.
It is syntax highlighting.
> Some > commands in Forth are executed at compile time rather than being > compiled. This is one of the many powerful features of Forth.
You get that in other languages too. True, it is not always easy to determine what is done at compile time and what is done at run time, and the distinction may depend on optimisation flags. But really, what you are describing here is like C++ with constexpr code shown in a different colour.
> ColorForth pushes further to allow some commands to be executed at edit time. I have not studied it in detail, so I can't give you details on this.
>
> I just want to explain how you are using very simplistic perceptions and expectations to "color" your impression of ColorForth without learning anything important about it.
I've read the colorForth FAQ, such as it is. I also note that the website is dead.
>
>
>> I can appreciate that a stack machine design is ideal for a small processor, and can give very tight code. I can appreciate that this means a sort of Forth is the natural assembly language for the system. I can even appreciate that the RPN syntax, the interactivity, and the close-to-the-metal programming appeals to some people. But I cannot understand why this cannot be done with a modern language with decent typing, static checking, optimised compilation, structured syntax, etc.
>
> You can't understand because you have not tried to learn about Forth. I can assure you there are a number of optimizing compilers for Forth. I don't know what you are seeing that you think Forth doesn't have "structured syntax". Is this different from the control flow structures?
I fully understand that there are good optimising Forth compilers and cross-compilers. But those are good /implementation/ - I am talking about the /language/.
>
> I see Stephen Pelc responded to your posts. He is the primary author of VFX from MPE. Instead of throwing a tantrum about Forth "looking" like it is 30 years old, why not engage him and learn something?
>
I am not "throwing a tantrum" - I /am/ engaging in discussion (including with Stephen). I am talking about how Forth appears to me. I have worked with a wide range of languages, including various functional programming languages, parallel programming languages, assembly languages, hardware design languages, high level languages, low level languages, and a little Forth long ago. I have worked through tutorials in APL and Prolog. I am not put off by strange syntaxes or having to think in a different manner. (It might put me off /using/ such languages for real work, however.) When I talk about how Forth appears to me, it is quite clear that the language has limited practicality for modern programming. And if that is /not/ the case, then it certainly has an image problem.
David Brown wrote on 6/18/2017 4:56 PM:
> On 16/06/17 19:52, rickman wrote:
>> David Brown wrote on 6/16/2017 3:25 AM:
>>> On 16/06/17 00:15, rickman wrote:
>>>> David Brown wrote on 6/15/2017 5:37 PM:
>>>>> On 15/06/17 22:52, rickman wrote:
>>>>
> <snip>
>>>> Once an FPGA was on the board there was no reason to use a CPU, although I would have liked to have a hybrid chip with about 1000 4-input LUTs and a moderate CPU or DSP even. Add a 16 bit stereo CODEC and it would be perfect!
>>>
>>> Often a soft processor can be enough for housekeeping, but an FPGA with a fast ARM core would be nice. Expensive, but nice.
>>
>> Not sure what you mean by a "fast" ARM core, but ARMs combined with FPGAs are sold by three of the four FPGA companies.
>
> By "fast", I mean "so much faster than strictly needed for the job in hand that you don't need to worry about speed" :-)
>
> Alternatively, a Cortex-A core with Neon is "fast".
Have you looked at the ARM + FPGAs offered by the mainstream FPGA vendors? The ARMs in the Xilinx and Altera/Intel parts are high end like the Cortex-A devices. The Microsemi part is a CM3 or CM4, I forget which.
>>> I think Atmel/Microchip now have a microcontroller with a bit of programmable logic - I don't know how useful that might be. (Not for your application here, of course - neither the cpu nor the PLD part are powerful enough.)
>>
>> You might be thinking of the PSOC devices from Cypress. They have either an 8051 type processor or an ARM CM3 with various programmable logic and analog. Not really an FPGA in any sense. They can be programmed in Verilog, but they are not terribly capable. Think of them as having highly flexible peripherals.
>
> No, I mean the new Atmel XMega E series. They have a "custom logic module" with a couple of timers and some lookup tables. It is not an FPGA - it's just a bit of programmable logic, more akin to a built-in PLD.
I wouldn't even say the logic is comparable to a PLD type device other than a very, very simple one. It only includes two LUTs. This is more like a very lame Cypress PSOC device. Actually, Cypress makes one sub-family of PSOC devices that have no programmable logic as such. They just have some very configurable peripherals - a block can be SPI or I2C or a couple of other serial devices, but there is no general logic.
>> If you really mean the Atmel FPGAs, they are very old, very slow and very expensive. I don't consider them to be in the FPGA business, they are more in the obsolete device business like Rochester Electronics. The device line they had that included a small 8 bit processor is long gone and was never cost competitive.
>
> No, I know about that line too (never used it, but I know of it).
>
>>
>>
>>>> I wonder why they can't make lower cost versions? The GA144 has 144 processors, $15 @ qty 1. It's not even a modern process node, 150 or 180 nm I think, 100 times more area than what they are using today.
>>>>
>>>
>>> I guess it is the usual matter - NRE and support costs have to be amortized. When the chip is not a big seller (and I don't imagine the GA144 is that popular), they have to make back their investment somehow.
>>
>> I'm talking about the XMOS device. The GA144 could easily be sold cheaply if they use a more modern process and sold them in high volumes. But what is holding back XMOS from selling a $1 chip? My understanding is the CPU is normally a pretty small part of an MCU with memory being the lion's share of the real estate. Even having 8 CPUs shouldn't run the area and cost up since the chip is really all about the RAM. Is the RAM special in some way? I thought it was just fast and shared through multiplexing.
>>
>
> It is a fast RAM - the one RAM block runs at 500 MHz single-cycle, and may be dual-ported. There is only one cpu on the smaller XMOS devices, with 8 hardware threads - larger devices have up to 4 cpus (and thus 32 threads). The IO pins have a fair amount of fast logic attached too.
>
> But I don't know where the cost comes in. The XMOS devices are, I guess, a good deal more popular than the GA144 - but they are not mass market compared to popular Cortex-M microcontrollers.
It doesn't have to be 100's of millions to be cost effective. The GA144 isn't even in the running. But any practical MCU needs to be sold in sufficient quantities to make it affordable and for the company to keep running.
>>> Have you used the GA144? It sounds interesting, but I haven't thought of any applications for it.
>>
>> There are a number of issues with using the GA144 in a production design. Not the least is the lack of reliable supply. The company runs on a shoestring with minimal offices, encouraging free help from anyone interested in writing an app note. lol When they kicked off the GA144 there was a lot of interest from fairly fringe groups of designers (the assembly language is pretty much Forth) but I have yet to hear of any designs reaching production which is not the same thing as there being none. The production runs appear to be the minimum size test runs from the foundry. The chip is pretty small, so they get a *lot* from a wafer.
>>
>
-- Rick C
David Brown wrote on 6/18/2017 9:15 PM:
> On 16/06/17 20:22, rickman wrote:
>> David Brown wrote on 6/16/2017 7:21 AM:
>>> On 15/06/17 23:46, rickman wrote:
>>>> David Brown wrote on 6/15/2017 7:04 AM:
> <snip>
>>
>> Yeah, I don't know of any product using the GA144. I looked hard at using it in a production design where I needed to replace an EOL FPGA. Ignoring all the other issues, I wasn't sure it would meet the timing I've outlined in other posts in this thread. I needed info on the I/O timing and GA wouldn't provide it. They seemed to think I wanted to reverse engineer the transistor level design. Silly gooses.
>>
>> I don't know why the age of a computer language is even a factor. I don't think C is a newcomer and is the most widely used programming language in embedded devices, no?
>
> The age of a language itself is not important, of course. The type of features it has /are/ important.
>
> Many things in computing have changed in the last 4 decades. Features of a language that were simply impossible at that time due to limited host computing power are entirely possible today. So modern languages do far more compile-time checking and optimisation now than was possible at that time. Good languages evolve to take advantage of newer possibilities - the C of today is not the same as the pre-K&R C of that period. The Forth of today appears to me to be pretty much the same - albeit with more optimising compilers and additional libraries.
I don't know the "new" C, I don't work with it. What improved?
>> The GA144 is a stack processor and so the assembly language looks a lot like Forth which is based on a stack processor virtual machine. I'm not sure what is "weird" about it other than the fact that most programmers aren't familiar with stack programming other than Open Boot, Postscript, RPL and BibTeX.
>>
>>
>
> Even for a stack machine, it is very limited.
Like what? It is not really "limited". The GA144 assembly language is... well, assembly language. Would you compare the assembly language of an x86 to Java or C?
> In some ways, the 4-bit MARC4 architecture was more powerful (it certainly had more program space).
>
> But this is all first impressions from me - I have not used the devices, and I am /very/ out of practice with Forth.
You keep talking in vague terms, saying the MARC4 was more "powerful". Address space is not the power of the language. It is the hardware limitation of the CPU. The GA144 was designed with a different philosophy. I would say for a different purpose, but it was not designed for *any* purpose. Chuck designed it as an experiment while exploring the space of minimal hardware processors. The capabilities come from the high speed of each processor and the comms capability.

I compare the GA144 to FPGAs more than to ARMs. The CPUs are small, very fast and plentiful (relatively), like the LUTs in an FPGA. Communications are very fast and the processor can automatically halt for synchronization with the other processor. Letting a processor sit idle is a power advantage, not a speed disadvantage. Processing speed is plentiful in the GA144 so it does not need to be optimized. Like the XMOS, using it requires some adjustment in your thinking... a lot more than the XMOS in fact.
>>>>>> XMOS does provide some techniques (e.g. composable and interfaces) to reduce the sharpness of the cliffs, but not to eliminate them. But then the same is true of ARM+RTOS etc etc.
>>>>>
>>>>> Yes, there are learning curves everywhere. And scope for getting things wrong :-) XMOS gives a different balance amongst many of the challenges facing designers - it is better in some ways, worse in other ways.
>>>>>
>>>>>>
>>>>>> I wouldn't regard XMOS as being a replacement for a general-purpose processor, but there is a large overlap in many hard-realtime applications.
>>>>>>
>>>>>> Similarly I wouldn't regard an ARM as being a replacement for a CPLD/FPGA, but there can be an overlap in many soft-realtime applications
>>>>>>
>>>>>> The XMOS devices inhabit an interesting and important niche between those two.
>>>>>
>>>>> Agreed.
>>>>
>>>> The question is whether it is worth investing the time and energy into learning the chip if you don't focus your work in this realm.
>>>
>>> Certainly - working with XMOS means thinking a little differently. But it is not /nearly/ as different as the GA144.
>>
>> "A little" is a *lot* larger learning curve than any other MCU I am aware of (the GA144 aside). My point is that can be worth it only if you do a lot of designs that would make use of its unique features. I'm not sure there really is a very large sweet spot given that the chips are not sold at the low end and other devices will do the same job using existing techniques and tools.
>
> For a programmer who is completely unfamiliar with FPGA and programmable logic, the XMOS is likely to be less of a leap than moving to an FPGA.
Perhaps, but I would still emphasize the issue that MCUs in general and FPGAs in general cover a lot of territory. XMOS only excels in a fairly small region. The GA144 is optimal for a microscopically small region.
> But I agree it is hard to find an application area for these devices - I > have only had a couple of uses for them, and they probably were not ideal > for those cases.
If the XMOS price were better, I would say they would be much more worth learning.
>>>> Personally I find the FPGA approach covers a *lot* of ground that others don't see and the region left that is not so easy to address with either FPGAs or more conventional CPUs is very limited. If the XMOS isn't a good price fit, I most likely would just go with a small FPGA. I saw the XMOS has a 12 bit, 1 MSPS ADC which is nice. But again, this only makes its range of good fit slightly larger.
>>>
>>> If you have lots of experience with FPGA development, it is natural to look to FPGAs for solutions to design problems - and that is absolutely fine. It is impossible to jump on every different development method and be an expert at them all - and most problems can be solved in a variety of ways.
>>
>> Yes, but you don't need to know a "variety" of ways of solving problems. You only need to know ways that are highly effective for most design problems.
>>
>
> True.
>
>> The many misconceptions of FPGAs relegate them to situations where CPUs just can't cut the mustard. In reality they are very flexible and only limited by the lack of on chip peripherals. Microsemi adds more peripherals to their devices, but still don't compete directly with lower cost MCUs.
>
> FPGAs have their strengths and their weaknesses. There are lots of situations where they are far from ideal - but I agree that there are many misconceptions about them that might discourage people from starting to use them.
I can't tell you how many people think FPGAs are complicated to design, power hungry and expensive. All three of these are not true. My only complaints are they tend to be in very fine pitch BGA packages (many with very high pin counts), only a few smaller devices are available and they don't integrate much analog. I'd like to see a small FPGA rolled with a small MCU (ARM CM4) with all the standard peripherals an MCU normally includes, brownout, ADC/DAC, etc. They could have done this affordably a decade ago if they wanted, but the FPGA companies have a particular business model that does not include this market. Lattice and Microsemi aren't as committed to the mainstream FPGA market and so offer some limited products that differ.
>>>>>>> XMOS devices can be a lot of fun, and I would enjoy working with them again. For some tasks, they are an ideal solution (like for this one), once you have learned to use them. But not everything is easy with them, and you have to think in a somewhat different way.
>>>>>>
>>>>>> Yes, but the learning curve is very short and there are no unpleasant surprises.
>>>>>>
>>>>>> IMNSHO "thinking in CSP" will help structure thoughts for any realtime application, whether or not it is formally part of the implementation.
>>>>>>
>>>>>
>>>>> Agreed.
>>>>
>>>> I prefer Forth for embedded work. I believe there is a Forth available.
>>>
>>> Available for what? The XMOS? I'd be surprised.
>>>
>>>> I don't know if it captures any of the flavor of CSP, mostly because I know little about CSP. I did use Occam some time ago. Mostly I recall it had a lot of constraints on what the programmer was allowed to do. The project we were using the Transputer on programmed it all in C.
>>>>
>>>
>>> Occam has its own advantages and disadvantages independent of the use of CSP-style synchronisation and message passing.
>>>
>>> I have looked at Forth a few times over the years, but I have yet to see a version that has changed for the better since I played with it as a teenager some 30 years ago. The stuff on the GA144 website is absurd - their "innovation" is that their IDE has colour highlighting despite looking like a reject from the days of MSDOS 3.3. Words are apparently only sensitive to the first "5 to 7 characters" - so "block" and "blocks" are the same identifier. (You would think they would /know/ if the limit were 5, 6 or 7 characters.) Everything is still designed around punch card formats of 8 by 64 characters.
>>
>> You seem obsessed with your perceptions of the UI rather than utility.
>
> I am merely highlighting what the GA144 website seems to view as being
> modern innovations in the Forth toolchains.
>
>> I don't have a problem with large fonts. Most of the designers of the
>> system are older and have poorer eyesight (a feature I share with them).
>
> If your eyesight is poor, use a bigger screen or a bigger or more legible
> font - that's fine. But it does not make sense to use that as the basis for
> designing your toolchain - just make the IDE configurable.
I have a laptop with a 17 inch screen and the fonts are smaller than my old desktop with a 17 inch monitor because the pixels are smaller (HD vs. 1280 horizontal resolution). The windows stuff for adjusting the size of fonts and such doesn't work properly across lots of apps. Even my Excalibur calculator is very hard to read.

Bottom line is don't bad mouth an API because it doesn't *please* you. Your tastes aren't the only standard.
>> The use of color to indicate aspects of the language is pretty much the
>> same as the color highlighting I see in nearly every modern editor. The
>> difference is that in ColorForth the highlighting is *part* of the
>> language as it distinguishes when commands are executed.
>
> It is syntax highlighting.
No, it is functional, not just illustrating. It is in the *language*, not just the editor. It's all integrated, not in the way the tools in a GUI are integrated, but in the way your heart, lungs and brain are integrated.
>> Some commands in Forth are executed at compile time rather than being
>> compiled. This is one of the many powerful features of Forth.
>
> You get that in other languages too. True, it is not always easy to
> determine what is done at compile time and what is done at run time, and
> the distinction may depend on optimisation flags.
That's what the color does.
> But really, what you are describing here is like C++ with constexpr code
> shown in a different colour.
I wouldn't know C++.
>> ColorForth pushes further to allow some commands to be executed at edit
>> time. I have not studied it in detail, so I can't give you details on
>> this.
>>
>> I just want to explain how you are using very simplistic perceptions and
>> expectations to "color" your impression of ColorForth without learning
>> anything important about it.
>
> I've read the colorForth FAQ, such as it is. I also note that the website
> is dead.
Yeah, Charles Moore isn't in the business of supporting language standards. He created Color Forth for himself and has shared it with others. GA is using it to support their products and they are the best source for information now.
>>> I can appreciate that a stack machine design is ideal for a small
>>> processor, and can give very tight code. I can appreciate that this
>>> means a sort of Forth is the natural assembly language for the system.
>>> I can even appreciate that the RPN syntax, the interactivity, and the
>>> close-to-the-metal programming appeals to some people. But I cannot
>>> understand why this cannot be done with a modern language with decent
>>> typing, static checking, optimised compilation, structured syntax, etc.
>>
>> You can't understand because you have not tried to learn about Forth. I
>> can assure you there are a number of optimizing compilers for Forth. I
>> don't know what you are seeing that you think Forth doesn't have
>> "structured syntax". Is this different from the control flow structures?
>
> I fully understand that there are good optimising Forth compilers and
> cross-compilers. But those are good /implementation/ - I am talking about
> the /language/.
You mentioned optimizing compilers, what was your point in bringing it up? Optimizations are not in any language that I'm aware of. You seem to think there is something lacking in the Forth language, but you don't say what that would be.
>> I see Stephen Pelc responded to your posts. He is the primary author of
>> VFX from MPE. Instead of throwing a tantrum about Forth "looking" like it
>> is 30 years old, why not engage him and learn something?
>>
>
> I am not "throwing a tantrum" - I /am/ engaging in discussion (including
> with Stephen).
>
> I am talking about how Forth appears to me. I have worked with a wide
> range of languages, including various functional programming languages,
> parallel programming languages, assembly languages, hardware design
> languages, high level languages, low level languages, and a little Forth
> long ago. I have worked through tutorials in APL and Prolog. I am not put
> off by strange syntaxes or having to think in a different manner. (It
> might put me off /using/ such languages for real work, however.) When I
> talk about how Forth appears to me, it is quite clear that the language
> has limited practicality for modern programming. And if that is /not/ the
> case, then it certainly has an image problem.
I don't know what is meant by "limited practicality for modern programming". By griping about the use of primary colors and large fonts, I consider that throwing a tantrum. How about discussing something important and useful? -- Rick C
On 19/06/17 06:21, rickman wrote:
> David Brown wrote on 6/18/2017 4:56 PM: >> On 16/06/17 19:52, rickman wrote: >>> David Brown wrote on 6/16/2017 3:25 AM: >>>> On 16/06/17 00:15, rickman wrote: >>>>> David Brown wrote on 6/15/2017 5:37 PM: >>>>>> On 15/06/17 22:52, rickman wrote: >>>>> >> <snip> >>>>> Once an FPGA was on the board there was no reason to use a CPU, >>>>> although >>>>> I would have liked to have a hybrid chip with about 1000-4 input LUTs >>>>> and a moderate CPU or DSP even. Add a 16 bit stereo CODEC and it >>>>> would >>>>> be perfect! >>>> >>>> Often a soft processor can be enough for housekeeping, but an FPGA with >>>> a fast ARM core would be nice. Expensive, but nice. >>> >>> Not sure what you mean by a "fast" ARM core, but ARMs combined with >>> FPGAs >>> are sold by three of the four FPGA companies. >> >> By "fast", I mean "so much faster than strictly needed for the job in >> hand >> that you don't need to worry about speed" :-) >> >> Alternatively, a Cortex-A core with Neon is "fast". > > Have you looked at the ARM + FPGAs offered by the mainstream FPGA > vendors? The ARMs in the Xilinx and Altera/Intel parts are high end like > the Cortex-A devices. The Microsemi part is a CM3 or CM4, I forget which. >
Yes, I know. (Perhaps you interpreted me to mean "it would be nice if someone made an FGPA with a fast ARM core". I actually meant "an FPGA with a fast ARM core would be nice for this sort of application".)
> >>>> I think Atmel/Microchip now have a microcontroller with a bit of >>>> programmable logic - I don't know how useful that might be. (Not for >>>> your application here, of course - neither the cpu nor the PLD part are >>>> powerful enough.) >>> >>> You might be thinking of the PSOC devices from Cypress. They have >>> either >>> an 8051 type processor or an ARM CM3 with various programmable logic and >>> analog. Not really an FPGA in any sense. They can be programmed in >>> Verilog, but they are not terribly capable. Think of them as having >>> highly flexible peripherals. >> >> No, I mean the new Atmel XMega E series. They have a "custom logic >> module" >> with a couple of timers and some lookup tables. It is not an FPGA - it's >> just a bit of programmable logic, more akin to a built-in PLD. > > I wouldn't even say the logic is comparable to a PLD type device other > than a very, very simple one. It only includes two LUTs. This is more > like a very lame Cypress PSOC device. Actually Cypress makes one > sub-family of PSOC devices that actually have no programmable logic as > such. They just have some peripherals that are very configurable, like > it can be SPI or I2C or a couple of other serial devices, but no general > logic.
It is programmable logic, even though it is small. And unlike the PSoC, it is a supplement to a range of normal microcontroller peripherals and the AVR's "event" system for connecting peripherals. The idea is that this logic can avoid the need of glue logic that you sometimes need along with a microcontroller. I think it is a neat idea, and if it catches on then I am sure later models will have more.
> > >>> If you really mean the Atmel FPGAs, they are very old, very slow and >>> very >>> expensive. I don't consider them to be in the FPGA business, they are >>> more in the obsolete device business like Rochester Electronics. The >>> device line they had that included a small 8 bit processor is long gone >>> and was never cost competitive. >> >> No, I know about that line too (never used it, but I know of it). >> >>> >>> >>>>> I wonder why they can't make lower cost versions? The GA144 has 144 >>>>> processors, $15 @ qty 1. It's not even a modern process node, 150 or >>>>> 180 nm I think, 100 times more area than what they are using today. >>>>> >>>> >>>> I guess it is the usual matter - NRE and support costs have to be >>>> amortized. When the chip is not a big seller (and I don't imagine the >>>> GA144 is that popular), they have to make back their investment >>>> somehow. >>> >>> I'm talking about the XMOS device. The GA144 could easily be sold >>> cheaply >>> if they use a more modern process and sold them in high volumes. But >>> what >>> is holding back XMOS from selling a $1 chip? My understanding is the >>> CPU >>> is normally a pretty small part of an MCU with memory being the lion's >>> share of the real estate. Even having 8 CPUs shouldn't run the area and >>> cost up since the chip is really all about the RAM. Is the RAM >>> special in >>> some way? I thought it was just fast and shared through multiplexing. >>> >> >> It is a fast RAM - the one RAM block runs at 500 MHz single-cycle, and >> may >> be dual-ported. There is only one cpu on the smaller XMOS devices, >> with 8 >> hardware threads - larger devices have up to 4 cpus (and thus 32 >> threads). >> The IO pins have a fair amount of fast logic attached too. >> >> But I don't know where the cost comes in. The XMOS devices are, I >> guess, a >> good deal more popular than the GA144 - but they are not mass market >> compared to popular Cortex-M microcontrollers. 
> > It doesn't have to be 100's of millions to be cost effective. The GA144 > isn't even in the running. But any practical MCU needs to be sold in > sufficient quantities to make it affordable and for the company to keep > running.
Again, I don't know the numbers here. XMOS has been running for quite a few years, with regular new products and new versions of their tools, so they seem to be doing okay.
On 19/06/17 06:54, rickman wrote:
> David Brown wrote on 6/18/2017 9:15 PM: >> On 16/06/17 20:22, rickman wrote: >>> David Brown wrote on 6/16/2017 7:21 AM: >>>> On 15/06/17 23:46, rickman wrote: >>>>> David Brown wrote on 6/15/2017 7:04 AM: >> <snip> >>> >>> Yeah, I don't know of any product using the GA144. I looked hard at >>> using >>> it in a production design where I needed to replace an EOL FPGA. >>> Ignoring >>> all the other issues, I wasn't sure it would meet the timing I've >>> outlined >>> in other posts in this thread. I needed info on the I/O timing and GA >>> wouldn't provide it. They seemed to think I wanted to reverse engineer >>> the transistor level design. Silly gooses. >>> >>> I don't know why the age of a computer language is even a factor. I >>> don't >>> think C is a newcomer and is the most widely used programming >>> language in >>> embedded devices, no? >> >> The age of a language itself is not important, of course. The type of >> features it has /are/ important. >> >> Many things in computing have changed in the last 4 decades. Features >> of a >> language that were simply impossible at that time due to limited host >> computing power are entirely possible today. So modern languages do far >> more compile-time checking and optimisation now than was possible at that >> time. Good languages evolve to take advantage of newer possibilities >> - the >> C of today is not the same as the pre-K&R C of that period. The Forth of >> today appears to me to be pretty much the same - albeit with more >> optimising >> compilers and additional libraries. > > I don't know the "new" C, I don't work with it. What improved?
Well, starting from pre-K&R C and moving to "ANSI" C89/C90, it got prototypes, proper structs, const, volatile, multiple different sized types, etc. I am sure you are very familiar with this C - but my point is that even though the history of C is old like that of Forth, even at that point 25+ years ago C had moved on and improved significantly as a language, compared to its original version.

Some embedded developers still stick to that old language, rather than moving on to C99 with inline, booleans, specifically sized types, line comments, mixing code and declarations, and a few other useful bits and pieces. Again, C99 is a much better language.

C11 is the current version, but does not add much that was not already common in implementations. Static assertions are /very/ useful, and the atomic types have possibilities but I think are too little, too late.
>
>
>>> The GA144 is a stack processor and so the assembly language looks a lot
>>> like Forth which is based on a stack processor virtual machine. I'm not
>>> sure what is "weird" about it other than the fact that most programmers
>>> aren't familiar with stack programming other than Open Boot, Postscript,
>>> RPL and BibTeX.
>>>
>>
>> Even for a stack machine, it is very limited.
>
> Like what? It is not really "limited". The GA144 assembly language
> is... well, assembly language. Would you compare the assembly language
> of an X86 to JAVA or C?
>
The size of the memories (data space, code space and stack space) is the most obvious limitation.
>
>> In some ways, the 4-bit MARC4
>> architecture was more powerful (it certainly had more program space).
>>
>> But this is all first impressions from me - I have not used the devices,
>> and I am /very/ out of practice with Forth.
>
> You keep talking in vague terms, saying the MARC4 was more "powerful".
> Address space is not the power of the language.
True - I was not clear in distinguishing the language from the hardware here. I meant the hardware in this case.
> It is the hardware limitation of the CPU. The GA144 was designed with a
> different philosophy. I would say for a different purpose, but it was not
> designed for *any* purpose. Chuck designed it as an experiment while
> exploring the space of minimal hardware processors. The capabilities come
> from the high speed of each processor and the comms capability.
Minimal systems can be interesting for theory, but are rarely of any use in practice.
>
> I compare the GA144 to FPGAs more than to ARMs. The CPUs are small,
> very fast and plentiful (relatively), like the LUTs in an FPGA.
> Communications are very fast and the processor can automatically halt
> for synchronization with the other processor. Letting a processor sit
> idle is a power advantage, not a speed disadvantage. Processing speed
> is plentiful in the GA144 so it does not need to be optimized. Like the
> XMOS, using it requires some adjustment in your thinking... a lot more
> than the XMOS in fact.
I agree with the principle - as I say, the GA144 has some interesting ideas and technology. But you need more power in each cpu to do something useful. If you want to use animal power to draw a plough, you want a horse. An army of ants might have a theoretically greater total strength and a better total-power to food cost ratio, but it is still hopeless as a solution.
>> For a programmer who is completely unfamiliar with FPGA and programmable
>> logic, the XMOS is likely to be less of a leap than moving to an FPGA.
>
> Perhaps, but I would still emphasize the issue that MCUs in general and
> FPGAs in general cover a lot of territory. XMOS only excels in a fairly
> small region. The GA144 is optimal for a microscopically small region.
>
Fair enough.
>
>> But I agree it is hard to find an application area for these devices - I
>> have only had a couple of uses for them, and they probably were not ideal
>> for those cases.
>
> If the XMOS price were better, I would say they would be much more worth
> learning.
>
Also true.
>>
>>> The many misconceptions of FPGAs relegate them to situations where CPUs
>>> just can't cut the mustard. In reality they are very flexible and only
>>> limited by the lack of on chip peripherals. Microsemi adds more
>>> peripherals to their devices, but still don't compete directly with
>>> lower cost MCUs.
>>
>> FPGAs have their strengths and their weaknesses. There are lots of
>> situations where they are far from ideal - but I agree that there are
>> many misconceptions about them that might discourage people from
>> starting to use them.
>
> I can't tell you how many people think FPGAs are complicated to design,
> power hungry and expensive. All three of these are not true.
>
That certainly /was/ the case. But yes, for a good while now there have been cheap and low power FPGAs available. As for complicated to design - well, I guess it's easy when you know how. But you do have to know what you are doing. Tools are better, introductory videos are better, etc. - there are lots more learning resources than in the "old" days. And once you know (at least roughly) what you are doing, the modern tools and computers make the job a good deal faster than before. I remember some 20 years ago working with a large PLD - place and route took about 8 hours, and debugging the design was done by pulling a couple of internal signals out to spare pins and re-doing the place and route. (By that stage of the project it was too late to think about alternative chips.)
> My only complaints are they tend to be in very fine pitch BGA packages
> (many with very high pin counts), only a few smaller devices are
> available and they don't integrate much analog. I'd like to see a small
> FPGA rolled with a small MCU (ARM CM4) with all the standard peripherals
> an MCU normally includes, brownout, ADC/DAC, etc. They could have done
> this affordably a decade ago if they wanted, but the FPGA companies have
> a particular business model that does not include this market. Lattice
> and Microsemi aren't as committed to the mainstream FPGA market and so
> offer some limited products that differ.
>
Variety of choices is always nice. I agree that devices like those could have a wide range of uses.
>>
>> If your eyesight is poor, use a bigger screen or a bigger or more legible
>> font - that's fine. But it does not make sense to use that as the basis
>> for designing your toolchain - just make the IDE configurable.
>
> I have a laptop with a 17 inch screen and the fonts are smaller than my
> old desktop with a 17 inch monitor because the pixels are smaller (HD
> vs. 1280 horizontal resolution). The windows stuff for adjusting the
> size of fonts and such doesn't work properly across lots of apps. Even my
> Excalibur calculator is very hard to read.
>
Surely you don't use that laptop for normal work? A laptop is okay for when you need a portable office, but I have three large monitors in my office. And most of my development is done on Linux, where font scaling works most of the time. (Though personally, I like small fonts with lots of text on the screen - my eyes are fine for that, when I have my contacts in. Without them, I can't focus further than my nose!).
> Bottom line is don't bad mouth an API because it doesn't *please* you.
> Your tastes aren't the only standard.
>
My tastes are the most important standard for /me/, and the one that affects my impression when I look at a tool. Of course I realise that other people have other tastes. And perhaps some people have poor eyesight and a job that requires them to work entirely on a small laptop. But I find it hard to accept that an IDE should be designed solely on the basis of being clear to someone with bad eyesight who works with a tiny old monitor. The colorForth stuff seems to be designed by and /for/ a single person - Chuck Moore. That's fine for him for a personal project, but it is highly unlikely to be a good way to make tools for more general use.
>>> The use of color to indicate aspects of the language is pretty much the
>>> same as the color highlighting I see in nearly every modern editor. The
>>> difference is that in ColorForth the highlighting is *part* of the
>>> language as it distinguishes when commands are executed.
>>
>> It is syntax highlighting.
>
> No, it is functional, not just illustrating. It is in the *language*,
> not just the editor. It's all integrated, not in the way the tools in a
> GUI are integrated, but in the way your heart, lungs and brain are
> integrated.
>
No, it is syntax highlighting. There is a 4 bit "colour token" attached to each symbol. These distinguish between variables, comments, word definitions, etc. There is /nothing/ that this gives you compared to, say, $ prefixes for variables (like PHP), _t suffixes for types (common convention in C), etc., with colour syntax highlighting. The only difference is that the editor hides the token. So when you have both var_foo and word_foo, they are both displayed as "foo" in different colours rather than "var_foo" and "word_foo" in different colours. That is all there is to it.
>
>>> Some commands in Forth are executed at compile time rather than being
>>> compiled. This is one of the many powerful features of Forth.
>>
>> You get that in other languages too. True, it is not always easy to
>> determine what is done at compile time and what is done at run time,
>> and the distinction may depend on optimisation flags.
>
> That's what the color does.
>
The colour doesn't do it - the language makes a clearer distinction between compile-time and run-time, and the colour helps you see that. You had the same distinction in Forth without colouring. Having a separation here is both a good thing and a bad thing, in comparison to the way C handles it, and the way C++ handles it. There is room in the world for many models of language.
>
>> But really, what you are describing here is like C++ with constexpr code
>> shown in a different colour.
>
> I wouldn't know C++.
>
Without going into details, you are probably aware that in C you sometimes need a "real" constant. For example, you can't make a file-level array definition unless the size is absolutely fixed:

    int xs[16];

That's allowed. But you can't write this:

    int square(int x) { return x * x; }
    int xs[square(4)];

"square(4)" is not a constant in C terms. However, you would expect a compiler to calculate the value at compile time (assuming it can see the definition of the "square" function) for the purposes of code optimisation.

In C++11 onwards, you can write:

    constexpr int square(int x) { return x * x; }
    int xs[square(4)];

This tells the compiler that it can calculate "square" at compile time if the parameters are known at compile time, but still allows the function to be used as a run-time function if the parameters are not known at compile time.
>
>>> ColorForth pushes further to allow some commands to be executed at edit
>>> time. I have not studied it in detail, so I can't give you details on
>>> this.
>>>
>>> I just want to explain how you are using very simplistic perceptions and
>>> expectations to "color" your impression of ColorForth without learning
>>> anything important about it.
>>
>> I've read the colorForth FAQ, such as it is. I also note that the
>> website is dead.
>
> Yeah, Charles Moore isn't in the business of supporting language
> standards. He created Color Forth for himself and has shared it with
> others. GA is using it to support their products and they are the best
> source for information now.
>
With all due respect to Chuck Moore and his creations, this is not a way to conduct a professional business.
> >>>> I can appreciate that a stack machine design is ideal for a small >>>> processor, and can give very tight code. I can appreciate that this >>>> means a sort of Forth is the natural assembly language for the system. >>>> I can even appreciate that the RPN syntax, the interactivity, and the >>>> close-to-the-metal programming appeals to some people. But I cannot >>>> understand why this cannot be done with a modern language with decent >>>> typing, static checking, optimised compilation, structured syntax, etc. >>> >>> You can't understand because you have not tried to learn about Forth. I >>> can assure you there are a number of optimizing compilers for Forth. I >>> don't know what you are seeing that you think Forth doesn't have >>> "structured syntax". Is this different from the control flow >>> structures? >> >> I fully understand that there are good optimising Forth compilers and >> cross-compilers. But those are good /implementation/ - I am talking >> about >> the /language/. > > You mentioned optimizing compilers, what was your point in bringing it > up? Optimizations are not in any language that I'm aware of. You seem > to think there is something lacking in the Forth language, but you don't > say what that would be. >
I gave a list somewhere in another post. But my key "missing features" from Forth are good static checking, typing, methods of working with data of different sizes, safe ways to define and use structures, and ways to modularise the program.

For example, take the "FLOOR5" function from the Wikipedia page:

    : FLOOR5 ( n -- n' ) DUP 6 < IF DROP 5 ELSE 1 - THEN ;

The C version is:

    int floor5(int v) { return (v < 6) ? 5 : (v - 1); }

Suppose the Forth programmer accidentally writes:

    : FLOOR5 ( n -- n' ) 6 < IF DROP 5 ELSE 1 - THEN ;

It's an easy mistake to miss, and you've made a perfectly valid Forth word definition that will be accepted by the system. But now the comment does not match the usage. It would be entirely possible for the language to provide a formalised and standardised way of specifying input and output parameters in a way that most cases could be automatically checked by the tools. Conventionalised comments are /way/ out of date as an aid to automated correctness checking.

And then suppose you want this function to work with 32-bit values - regardless of the width of a cell on the target machine. Or 64-bit values on a 16-bit cell system.

(If you have good answers here, maybe you will change my mind - at least a little!)
> >>> I see Stephen Pelc responded to your posts. He is the primary author of >>> VFX from MPE. Instead of throwing a tantrum about Forth "looking" >>> like it >>> is 30 years old, why not engage him and learn something? >>> >> >> I am not "throwing a tantrum" - I /am/ engaging in discussion (including >> with Stephen). >> >> I am talking about how Forth appears to me. I have worked with a wide >> range >> of languages, including various functional programming languages, >> parallel >> programming languages, assembly languages, hardware design languages, >> high >> level languages, low level languages, and a little Forth long ago. I >> have >> worked through tutorials in APL and Prolog. I am not put off by strange >> syntaxes or having to think in a different manner. (It might put me off >> /using/ such languages for real work, however.) When I talk about how >> Forth >> appears to me, it is quite clear that the language has limited >> practicality >> for modern programming. And if that is /not/ the case, then it certainly >> has an image problem. > > I don't know what is meant by "limited practicality for modern > programming". By griping about the use of primary colors and large > fonts, I consider that throwing a tantrum. How about discussing > something important and useful? >
See above for a simple example. But I am not griping about the use of colour - I am mocking the idea that adding colour to the IDE is a big innovation in the language.
On 19/06/17 05:54, rickman wrote:
> David Brown wrote on 6/18/2017 9:15 PM:
>> On 16/06/17 20:22, rickman wrote:
>>> David Brown wrote on 6/16/2017 7:21 AM:
>>>> On 15/06/17 23:46, rickman wrote:
>>>>> David Brown wrote on 6/15/2017 7:04 AM:
To snip many points and bring together a few comments you made...
> I don't know the "new" C, I don't work with it. What improved?
> I wouldn't know C++.
> You seem to think there is something lacking in the Forth language,
> but you don't say what that would be.

You have previously asked me to give you xC code snippets. When I provided direct references to xC concepts illustrated by /easily/ digestible code snippets, you completely ignored them. I know it wouldn't be a good use of my time to /poorly/ duplicate /existing/ information in an attempt to "enlighten" you.
> I don't know what is meant by "limited practicality for modern programming". By > griping about the use of primary colors and large fonts, I consider that > throwing a tantrum.
From your statements about what you /don't/ know, it appears you haven't seriously used modern languages i.e. those developed since the early 90s.[1] In that case it is to be expected that you don't understand the valid points made by David Brown and others.
> How about discussing something important and useful?
David Brown is making important and useful points. You have indicated probable reasons why you don't understand them. That's fine; we can't all be competent in everything, and I'm sure you are very competent within your experience.

[1] I'll note these facilities enabled by Java significantly exceed those offered by C/C++/etc, but Java isn't relevant in this context.
In article <oi81sg$n55$1@dont-email.me>,
David Brown  <david.brown@hesbynett.no> wrote:
>On 19/06/17 06:54, rickman wrote:
>> David Brown wrote on 6/18/2017 9:15 PM:
>>> On 16/06/17 20:22, rickman wrote:
>>>> David Brown wrote on 6/16/2017 7:21 AM:
>>>>>> David Brown wrote on 6/15/2017 7:04 AM:
<SNIP>
>>>>> On 15/06/17 23:46, rickman wrote: >>>> The GA144 is a stack processor and so the assembly language looks a lot >>>> like Forth which is based on a stack processor virtual machine. I'm not >>>> sure what is "weird" about it other than the fact that most programmers >>>> aren't familiar with stack programming other than Open Boot, Postscript, >>>> RPL and BibTeX. >>>> >>>> >>> >>> Even for a stack machine, it is very limited. >> >> Like what? It is not really "limited". The GA144 assembly language >> is... well, assembly language. Would you compare the assembly language >> of an X86 to JAVA or C? >> > >The size of the memories (data space, code space and stack space) is the >most obvious limitation.
A less obvious limitation that goes right to the heart of the parallel processing that is claimed, is processor connectivity. I explored parallelism (with the parallel prime sieve) and the fixed rectangular grid is absolutely bonkers for any serious application where calculation power is needed. In this case I wanted to have two pipelines that come together. It starts as a puzzle, then it turns out to be hardly possible.

Two crossing pipelines have to pass through one processor. If there is any structure to the data, the one processor would lack the processing power to make that possible. On transputers you would have hypercube arrangements such that there is no need to do that. On top of that, it would be easy. You just define two unrelated pass-through processes.

A definitive measure for the quality of the GA144 would be a bitcoin calculator. That is the ratio between the cost of the electricity consumed and the value of the bitcoins generated. It would be *bad*.

<SNIP>
>>
>> Perhaps, but I would still emphasize the issue that MCUs in general and
>> FPGAs in general cover a lot of territory. XMOS only excels in a fairly
>> small region. The GA144 is optimal for a microscopically small region.
>>
>
>Fair enough.
Indeed. <SNIP>
>See above for a simple example.
>
>But I am not griping about the use of colour - I am mocking the idea
>that adding colour to the IDE is a big innovation in the language.
100% agreed. Adding colour is equivalent to a prefix character. Then if you want, Vim can add the colour for you based on the prefix character.

Notations can be important innovations as Newton and Leibniz showed. The real big innovation they made was the differential calculus. If there is something underlying Colorforth it is tagged objects, hardly spectacular.

Groetjes Albert
--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst
On 19/06/17 12:54, Albert van der Horst wrote:
> In article <oi81sg$n55$1@dont-email.me>,
> David Brown <david.brown@hesbynett.no> wrote:
>> On 19/06/17 06:54, rickman wrote:
>>> David Brown wrote on 6/18/2017 9:15 PM:
>>>> On 16/06/17 20:22, rickman wrote:
>>>>> David Brown wrote on 6/16/2017 7:21 AM:
>>>>>>> David Brown wrote on 6/15/2017 7:04 AM:
> <SNIP>
>>>>>> On 15/06/17 23:46, rickman wrote:
>>>>> The GA144 is a stack processor and so the assembly language looks a lot
>>>>> like Forth which is based on a stack processor virtual machine. I'm not
>>>>> sure what is "weird" about it other than the fact that most programmers
>>>>> aren't familiar with stack programming other than Open Boot, Postscript,
>>>>> RPL and BibTeX.
>>>>>
>>>>>
>>>>
>>>> Even for a stack machine, it is very limited.
>>>
>>> Like what? It is not really "limited". The GA144 assembly language
>>> is... well, assembly language. Would you compare the assembly language
>>> of an X86 to JAVA or C?
>>>
>>
>> The size of the memories (data space, code space and stack space) is the
>> most obvious limitation.
>
> A less obvious limitation, one that goes right to the heart of the
> parallel processing that is claimed, is processor connectivity.
> I explored parallelism (with the parallel prime sieve) and
> the fixed rectangular grid is absolutely bonkers for any serious
> application where calculation power is needed.
> In this case I wanted to have two pipelines that come together.
> It starts as a puzzle, then it turns out to be hardly possible.
>
> Two crossing pipelines have to pass through one processor.
> If there is any structure to the data, that one processor would
> lack the processing power to make that possible.
> On transputers you would have hypercube arrangements such that there
> is no need to do that. On top of that, it would be easy.
> You just define two unrelated pass-through processes.
You make some very good points here.

The comparison has been made with FPGAs. A great deal of the work
(the physical hardware, and also the development tools' work) in an
FPGA is about connections - moving signals between different nodes on
the device. Imagine an FPGA where each node (block of LUTs, registers,
etc.) could only communicate directly with its immediate 2D neighbours.

The GA144 might work okay for problems that naturally fit a 2D grid
that happens to fit the dimensions of the chip (8 x 18, I think). But
it will be poor on anything else.

In comparison, on the XMOS any virtual cpu (hardware thread) can
connect directly to any other - either with a "permanent" channel
(existing for the lifetime of the program) or one created temporarily
as needed. Exactly the same software system is used whether you are
communicating between hardware threads on the same core, threads on
different cores on the same chip, or threads on different cores on
different XMOS chips. Clearly there are latency and bandwidth
differences, but the logic is the same.
>
> A definitive measure for the quality of the GA144 would be a
> bitcoin calculator: the ratio between the cost of the
> electricity consumed and the value of the bitcoins generated.
> It would be *bad*.
>
> <SNIP>
>>>
>>> Perhaps, but I would still emphasize the issue that MCUs in general and
>>> FPGAs in general cover a lot of territory. XMOS only excels in a fairly
>>> small region. The GA144 is optimal for a microscopically small region.
>>>
>>
>> Fair enough.
>
> Indeed.
>
> <SNIP>
>
>> See above for a simple example.
>>
>> But I am not griping about the use of colour - I am mocking the idea
>> that adding colour to the IDE is a big innovation in the language.
>
> 100% agreed. Adding colour is equivalent to a prefix character.
> Then, if you want, Vim can add the colour for you based on the
> prefix character.
>
> Notations can be important innovations, as Newton and Leibniz showed,
> but their real big innovation was the differential calculus.
> If there is anything underlying Colorforth it is tagged objects,
> which is hardly spectacular.
>
> Groetjes Albert