EmbeddedRelated.com

MCU mimicking a SPI flash slave

Started by John Speth June 14, 2017
David Brown wrote on 6/19/2017 3:02 AM:
> On 19/06/17 06:21, rickman wrote: >> David Brown wrote on 6/18/2017 4:56 PM: >>> On 16/06/17 19:52, rickman wrote: >>>> David Brown wrote on 6/16/2017 3:25 AM: >>> <snip> >> >>>>> I think Atmel/Microchip now have a microcontroller with a bit of >>>>> programmable logic - I don't know how useful that might be. (Not for >>>>> your application here, of course - neither the cpu nor the PLD part are >>>>> powerful enough.) >>>> >>>> You might be thinking of the PSOC devices from Cypress. They have >>>> either >>>> an 8051 type processor or an ARM CM3 with various programmable logic and >>>> analog. Not really an FPGA in any sense. They can be programmed in >>>> Verilog, but they are not terribly capable. Think of them as having >>>> highly flexible peripherals. >>> >>> No, I mean the new Atmel XMega E series. They have a "custom logic >>> module" >>> with a couple of timers and some lookup tables. It is not an FPGA - it's >>> just a bit of programmable logic, more akin to a built-in PLD. >> >> I wouldn't even say the logic is comparable to a PLD type device other >> than a very, very simple one. It only includes two LUTs. This is more >> like a very lame Cypress PSOC device. Actually Cypress makes one >> sub-family of PSOC devices that actually have no programmable logic as >> such. They just have some peripherals that are very configurable, like >> it can be SPI or I2C or a couple of other serial devices, but no general >> logic. > > It is programmable logic, even though it is small. And unlike the PSoC, > it is a supplement to a range of normal microcontroller peripherals and > the AVR's "event" system for connecting peripherals. The idea is that > this logic can avoid the need of glue logic that you sometimes need > along with a microcontroller. I think it is a neat idea, and if it > catches on then I am sure later models will have more.
Small is not the word. The PSOC is enormously more capable, and *that* is small. I'm not sure of the purpose of the distinction when you say "supplement to a range of normal microcontroller peripherals". Why the need for "normal" peripherals? The point of the PSOC is that it offers flexibility, so your design can use what it needs without a lot of wasted silicon for peripherals that aren't used. Instead they "waste" silicon by making the peripherals programmable. In the process you get a degree of flexibility not found outside FPGA-type programmable logic, as well as analog programmability you can otherwise only find in discrete analog chips.
>>>>>> I wonder why they can't make lower cost versions? The GA144 has 144 >>>>>> processors, $15 @ qty 1. It's not even a modern process node, 150 or >>>>>> 180 nm I think, 100 times more area than what they are using today. >>>>>> >>>>> >>>>> I guess it is the usual matter - NRE and support costs have to be >>>>> amortized. When the chip is not a big seller (and I don't imagine the >>>>> GA144 is that popular), they have to make back their investment >>>>> somehow. >>>> >>>> I'm talking about the XMOS device. The GA144 could easily be sold >>>> cheaply >>>> if they use a more modern process and sold them in high volumes. But >>>> what >>>> is holding back XMOS from selling a $1 chip? My understanding is the >>>> CPU >>>> is normally a pretty small part of an MCU with memory being the lion's >>>> share of the real estate. Even having 8 CPUs shouldn't run the area and >>>> cost up since the chip is really all about the RAM. Is the RAM >>>> special in >>>> some way? I thought it was just fast and shared through multiplexing. >>>> >>> >>> It is a fast RAM - the one RAM block runs at 500 MHz single-cycle, and >>> may >>> be dual-ported. There is only one cpu on the smaller XMOS devices, >>> with 8 >>> hardware threads - larger devices have up to 4 cpus (and thus 32 >>> threads). >>> The IO pins have a fair amount of fast logic attached too. >>> >>> But I don't know where the cost comes in. The XMOS devices are, I >>> guess, a >>> good deal more popular than the GA144 - but they are not mass market >>> compared to popular Cortex-M microcontrollers. >> >> It doesn't have to be 100's of millions to be cost effective. The GA144 >> isn't even in the running. But any practical MCU needs to be sold in >> sufficient quantities to make it affordable and for the company to keep >> running. > > Again, I don't know the numbers here. XMOS has been running for quite a > few years, with regular new products and new versions of their tools, so > they seem to be doing okay.
The issue I have isn't that they aren't stable, but that they don't seem to be able to produce a device that is price competitive at the lower end. For me that makes FPGAs a more viable solution economically for the large majority of designs. Combine that with the large learning curve and we end up with many users never taking the time to become proficient with them. If they get priced down to the $1 range, they will get a *lot* more users and sales. Likewise, if they did a shrink to make the GA144 more cost competitive, a lot more users would be interested in learning how to program the device. But it would still be a very ugly duckling requiring a whole new approach to complex systems. The inability to draw on the huge base of established software makes the GA144 a non-starter for many apps. -- Rick C
On 19/06/17 14:36, rickman wrote:
> David Brown wrote on 6/19/2017 3:02 AM: >> On 19/06/17 06:21, rickman wrote: >>> David Brown wrote on 6/18/2017 4:56 PM: >>>> On 16/06/17 19:52, rickman wrote: >>>>> David Brown wrote on 6/16/2017 3:25 AM: >>>> <snip> >>> >>>>>> I think Atmel/Microchip now have a microcontroller with a bit of >>>>>> programmable logic - I don't know how useful that might be. (Not for >>>>>> your application here, of course - neither the cpu nor the PLD >>>>>> part are >>>>>> powerful enough.) >>>>> >>>>> You might be thinking of the PSOC devices from Cypress. They have >>>>> either >>>>> an 8051 type processor or an ARM CM3 with various programmable >>>>> logic and >>>>> analog. Not really an FPGA in any sense. They can be programmed in >>>>> Verilog, but they are not terribly capable. Think of them as having >>>>> highly flexible peripherals. >>>> >>>> No, I mean the new Atmel XMega E series. They have a "custom logic >>>> module" >>>> with a couple of timers and some lookup tables. It is not an FPGA - >>>> it's >>>> just a bit of programmable logic, more akin to a built-in PLD. >>> >>> I wouldn't even say the logic is comparable to a PLD type device other >>> than a very, very simple one. It only includes two LUTs. This is more >>> like a very lame Cypress PSOC device. Actually Cypress makes one >>> sub-family of PSOC devices that actually have no programmable logic as >>> such. They just have some peripherals that are very configurable, like >>> it can be SPI or I2C or a couple of other serial devices, but no general >>> logic. >> >> It is programmable logic, even though it is small. And unlike the PSoC, >> it is a supplement to a range of normal microcontroller peripherals and >> the AVR's "event" system for connecting peripherals. The idea is that >> this logic can avoid the need of glue logic that you sometimes need >> along with a microcontroller. I think it is a neat idea, and if it >> catches on then I am sure later models will have more. > > Small is not the word. 
The PSOC is enormously more capable and *that* > is small. I'm not sure of the purpose of the distinction when you say > "supplement to a range of normal microcontroller peripherals". Why a > need for "normal" peripherals? The point of the PSOC is they can offer > flexibility so your design can use what it needs without a lot of wasted > silicon for peripherals that aren't used. Instead they "waste" silicon > by making the peripherals programmable. In the process you get a degree > of flexibility not found other than in FPGA type programmable logic as > well as analog programmability you can only find in discrete analog chips.
What you get with the PSoC is a chip that can give you a couple of specialised custom-tuned peripherals if that is what your application needs, or 2 or 3 standard peripherals (timers, UARTs, SPI, etc.), for the silicon, power and dollar cost of 20 standard peripherals on a "normal" microcontroller. A fixed UART or 16-bit timer is /much/ more efficient than one made from flexible digital cells plus all the programmability needed to turn them into the same type of peripheral. When you need a custom peripheral of some sort, the flexibility of a PSoC is (presumably) great. But only then.
>> >> Again, I don't know the numbers here. XMOS has been running for quite a >> few years, with regular new products and new versions of their tools, so >> they seem to be doing okay. > > The issue I have isn't that they aren't stable, but that they don't seem > to be able to produce a device price competitive with the lower end > device. For me that make FPGAs a more viable solution economically for > the large majority of designs. Combine that with the large learning > curve and we end up with many users never taking the time to become > proficient with them. If they get priced down to the $1 range, they > will get a *lot* more users and sales. >
I agree with you - and as I say, I don't really know why they can't make (or sell) the chips for a lower price.
> Likewise, if they did a shrink to make the GA144 more cost competitive a > lot more users would be interested in learning how to program the > device. But it would still be a very ugly duckling requiring a whole > new approach to complex systems. The inability to draw on the huge base > of established software make the GA144 a non-starter for many apps. >
Nah, the GA144 would not be popular even if they /paid/ people to use them. Less unpopular, perhaps, but not popular.
David Brown wrote on 6/19/2017 4:30 AM:
> On 19/06/17 06:54, rickman wrote: >> David Brown wrote on 6/18/2017 9:15 PM: >>> On 16/06/17 20:22, rickman wrote: >>>> David Brown wrote on 6/16/2017 7:21 AM: >>>>> On 15/06/17 23:46, rickman wrote: >>>>>> David Brown wrote on 6/15/2017 7:04 AM: >>> <snip> >>>> >>>> Yeah, I don't know of any product using the GA144. I looked hard at >>>> using >>>> it in a production design where I needed to replace an EOL FPGA. >>>> Ignoring >>>> all the other issues, I wasn't sure it would meet the timing I've >>>> outlined >>>> in other posts in this thread. I needed info on the I/O timing and GA >>>> wouldn't provide it. They seemed to think I wanted to reverse engineer >>>> the transistor level design. Silly gooses. >>>> >>>> I don't know why the age of a computer language is even a factor. I >>>> don't >>>> think C is a newcomer and is the most widely used programming >>>> language in >>>> embedded devices, no? >>> >>> The age of a language itself is not important, of course. The type of >>> features it has /are/ important. >>> >>> Many things in computing have changed in the last 4 decades. Features >>> of a >>> language that were simply impossible at that time due to limited host >>> computing power are entirely possible today. So modern languages do far >>> more compile-time checking and optimisation now than was possible at that >>> time. Good languages evolve to take advantage of newer possibilities >>> - the >>> C of today is not the same as the pre-K&R C of that period. The Forth of >>> today appears to me to be pretty much the same - albeit with more >>> optimising >>> compilers and additional libraries. >> >> I don't know the "new" C, I don't work with it. What improved? > > Well, starting from pre-K&R C and moving to "ANSI" C89/C90, it got > prototypes, proper structs, const, volatile, multiple different sized > types, etc. 
I am sure you are very familiar with this C - but my point > is that even though the history of C is old like that of Forth, even at > that point 25+ years ago C had moved on and improved significantly as a > language, compared to its original version.
As has Forth. The 2012 standard is an improvement over the previous version, which in turn improved on the one before it, and the initial ANSI version was an improvement over the multiple flavors of Forth that preceded it, if only for the standardization.
> Some embedded developers still stick to that old language, rather than > moving on to C99 with inline, booleans, specifically sized types, line > comments, mixing code and declarations, and a few other useful bits and > pieces. Again, C99 is a much better language. > > C11 is the current version, but does not add much that was not already > common in implementations. Static assertions are /very/ useful, and the > atomic types have possibilities but I think are too little, too late.
I think the real issue is you are very familiar with C while totally unfamiliar with Forth.
>>>> The GA144 is a stack processor and so the assembly language looks a lot >>>> like Forth which is based on a stack processor virtual machine. I'm not >>>> sure what is "weird" about it other than the fact that most programmers >>>> aren't familiar with stack programming other than Open Boot, Postscript, >>>> RPL and BibTeX. >>>> >>>> >>> >>> Even for a stack machine, it is very limited. >> >> Like what? It is not really "limited". The GA144 assembly language >> is... well, assembly language. Would you compare the assembly language >> of an X86 to JAVA or C? >> > > The size of the memories (data space, code space and stack space) is the > most obvious limitation.
As I said, that is not a language issue, that is a device issue. But you completely blow it when you talk about the "stack" limitation. Stacks don't need to be MBs. It's that simple. You are thinking in C and the other Algol-derived languages, not Forth.
>>> In some ways, the 4-bit MARC4 >>> architecture was more powerful (it certainly had more program space). >>> >>> But this is all first impressions from me - I have not used the >>> devices, and >>> I am /very/ out of practice with Forth. >> >> You keep talking in vague terms, saying the MARC4 was more "powerful". >> Address space is not the power of the language. > > True - I was not clear in distinguishing the language from the hardware > here. I meant the hardware in this case. > >> It is the hardware >> limitation of the CPU. The GA144 was designed with a different >> philosophy. I would say for a different purpose, but it was not designed >> for *any* purpose. Chuck designed it as an experiment while exploring >> the space of minimal hardware processors. The capapbilities come from >> the high speed of each processor and the comms capability. > > Minimal systems can be interesting for theory, but are rarely of any use > in practice.
That comment would seem to indicate you are very familiar with minimal systems. I suspect the opposite is true. I find minimal CPUs to be *very* useful in FPGA designs allowing a "fast" processor to be implemented in even very small amounts of logic.
>> I compare the GA144 to FPGAs more than to ARMs. The CPUs are small, >> very fast and plentiful (relatively), like the LUTs in an FPGA. >> Communications are very fast and the processor can automatically halt >> for synchronization with the other processor. Letting a processor sit >> idle is a power advantage, not a speed disadvantage. Processing speed >> is plentiful in the GA144 so it does not need to be optimized. Like the >> XMOS using it requires some adjustment in your thinking... a lot more >> than the XMOS in fact. > > I agree with the principle - as I say, the GA144 has some interesting > ideas and technology. But you need more power in each cpu to do > something useful. If you want to use animal power to draw a plough, you > want a horse. An army of ants might have a theoretically greater total > strength and a better total-power to food cost ratio, but it is still > hopeless as a solution.
You should save the poor analogies for other conversations. An army of ants can move a large pile of sand overnight by themselves. The horse will just complain that it doesn't have enough food and water, and will move the sand only if you connect it to the right equipment and spend your day cracking the whip.
>>> For a programmer who is completely unfamiliar with FPGA and programmable >>> logic, the XMOS is likely to be less of a leap than moving to an FPGA. >> >> Perhaps, but I would still emphasize the issue that MCUs in general and >> FPGAs in general cover a lot of territory. XMOS only excels in a fairly >> small region. The GA144 is optimal for a microscopically small region. >> > > Fair enough. > >> >>> But I agree it is hard to find an application area for these devices - I >>> have only had a couple of uses for them, and they probably were not ideal >>> for those cases. >> >> If the XMOS price were better, I would say they would be much more worth >> learning. >> > > Also true. > >>> >>>> The many misconceptions of FPGAs relegate them to situations where CPUs >>>> just can't cut the mustard. In reality they are very flexible and only >>>> limited by the lack of on chip peripherals. Microsemi adds more >>>> peripherals to their devices, but still don't compete directly with >>>> lower >>>> cost MCUs. >>> >>> FPGAs have their strengths and their weaknesses. There are lots of >>> situations where they are far from ideal - but I agree that there are >>> many >>> misconceptions about them that might discourage people from starting >>> to use >>> them. >> >> I can't tell you how many people think FPGAs are complicated to design, >> power hungry and expensive. All three of these are not true. >> > > That certainly /was/ the case.
20 years ago maybe.
> But yes, for a good while now there have > been cheap and low power FPGAs available. As for complicated to design > - well, I guess it's easy when you know how. But you do have to know > what you are doing.
MCUs are no different. A newbie will do a hack job. I once provided some assistance to a programmer who needed to spin an FPGA design for his company. They wouldn't hire me to do it because they wanted to develop the ability in house. With minimal assistance (and I mean minimal) he first wrote a "hello, world" program for the FPGA. He then went on to write his application. The only tricky parts of programming FPGAs are when you need to optimize either speed or capacity or, worse, both! But I believe the exact same thing is true of MCUs. Most projects can be done by beginners, and indeed *are* done by beginners. That has been my experience. In fact, that is the whole reason for the development and use of the various programming tools: making them usable by programmers with lesser skills enables a larger labor pool at a lower price. The only magic in FPGA design is the willingness to wade into the waters and get your feet wet.
> Tools are better, introductory videos are better, > etc. - there are lots more learning resources than in the "old" days. > And once you know (at least roughly) what you are doing, the modern > tools and computers make the job a good deal faster than before. I > remember some 20 years ago working with a large PLD - place and route > took about 8 hours, and debugging the design was done by pulling a > couple of internal signals out to spare pins and re-doing the place and > route. (By that stage of the project it was too late to think about > alternative chips.) > >> My only complaints are they tend to be in very fine pitch BGA packages >> (many with very high pin counts), only a few smaller devices are >> available and they don't integrate much analog. I'd like to see a small >> FPGA rolled with a small MCU (ARM CM4) with all the standard peripherals >> an MCU normally includes, brownout, ADC/DAC, etc. They could have done >> this affordably a decade ago if they wanted, but the FPGA companies have >> a particular business model that does not include this market. Lattice >> and Microsemi aren't as committed to the mainstream FPGA market and so >> offer some limited products that differ. >> > > Variety of choices is always nice. I agree that devices like those > could have a wide range of uses.
Devices are not often made for broad markets. FPGAs in particular are designed for the comms market, and everyone else is along for the ride. As I mentioned, Lattice and Microsemi are branching out a bit to address either a different market (Lattice iCE40 parts are aimed at the cell phone market) or a broader market (Microsemi devices), but with limited success.
>>> If your eyesight is poor, use a bigger screen or a bigger or more legible >>> font - that's fine. But it does not make sense to use that as the >>> basis for >>> designing your toolchain - just make the IDE configurable. >> >> I have a laptop with a 17 inch screen and the fonts are smaller than my >> old desktop with a 17 inch monitor because the pixels are smaller (HD >> vs. 1280 horizontal resolution). The windows stuff for adjusting the >> size of fonts and such don't work properly across lots of apps. Even my >> Excalibur calculator is very hard to read. >> > > Surely you don't use that laptop for normal work? A laptop is okay for > when you need a portable office, but I have three large monitors in my > office. And most of my development is done on Linux, where font scaling > works most of the time. (Though personally, I like small fonts with > lots of text on the screen - my eyes are fine for that, when I have my > contacts in. Without them, I can't focus further than my nose!).
At this point my laptop is my only computer (other than a five year old netbook used for extreme emergencies). Its only problem is it's a Lenovo piece of crap. I don't have an office so much as I am portable. I don't spend more than half a week in one place. I just wish I could find a reasonably priced attached oscilloscope. The model I'd like to have is $1,500. I'd like to spend more like $500, and then I'd be totally portable.
>> Bottom line is don't bad mouth an API because it doesn't *please* you. >> Your tastes aren't the only standard. >> > > My tastes are the most important standard for /me/, and the one that > affects my impression when I look at a tool. Of course I realise that > other people have other tastes. And perhaps some people have poor > eyesight and a job that requires them to work entirely on a small > laptop. But I find it hard to accept that an IDE should be designed > solely on the basis of being clear to someone with bad eyesight who > works with a tiny old monitor. The colorForth stuff seems to be > designed by and /for/ a single person - Chuck Moore. That's fine for > him for a personal project, but it is highly unlikely to be a good way > to make tools for more general use.
There you go with the extremes again. Colorforth isn't designed "solely" for people with bad eyesight. It is designed to be as useful as possible. It is clear you have not learned enough about it to know what is good and what is bad. You took one quick look at it and turned away.
>>>> The use of color to indicate aspects of the language is pretty much the >>>> same as the color highlighting I see in nearly every modern editor. The >>>> difference is that in ColorForth the highlighting is *part* of the >>>> language as it distinguishes when commands are executed. >>> >>> It is syntax highlighting. >> >> No, it is functional, not just illustrating. It is in the *language*, >> not just the editor. It's all integrated, not in the way the tools in a >> GUI are integrated, but in the way your heart, lungs and brain are >> integrated. >> > > No, it is syntax highlighting. > > There is a 4 bit "colour token" attached to each symbol. These > distinguish between variables, comments, word definitions, etc. There > is /nothing/ that this gives you compared to, say, $ prefixes for > variables (like PHP), _t suffixes for types (common convention in C), > etc., with colour syntax highlighting. The only difference is that the > editor hides the token. So when you have both var_foo and word_foo, > they are both displayed as "foo" in different colours rather than > "var_foo" and "word_foo" in different colours. > > That is all there is to it.
You just said it is more than syntax highlighting. It is like type definitions in other languages. It is built into the language, which won't work without it. That's the part you aren't getting. Compare ColorForth to ANSI Forth and you will see what I mean.
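To make the distinction concrete, here is a crude toy model of the idea being described: the tag travels with the stored word and drives how the system treats it, while the display shows only the name in a color. The tag names and semantics below are invented for illustration; real ColorForth's 4-bit tokens and behavior differ.

```python
# Crude model of ColorForth-style tagged words: the tag is part of the
# stored source, not editor decoration. Tags here are invented for
# illustration only.
COMPILE, EXECUTE, LITERAL, COMMENT = "compile", "execute", "literal", "comment"

def interpret(tagged_source, dictionary):
    """Dispatch on each word's tag: EXECUTE runs the word now, COMPILE
    defers it into the compiled definition, LITERAL pushes a number,
    COMMENT is skipped entirely."""
    stack, compiled = [], []
    for tag, word in tagged_source:
        if tag == COMMENT:
            continue
        elif tag == LITERAL:
            stack.append(int(word))
        elif tag == EXECUTE:
            dictionary[word](stack)
        elif tag == COMPILE:
            compiled.append(word)
    return stack, compiled

dictionary = {"+": lambda s: s.append(s.pop() + s.pop())}
source = [(LITERAL, "2"), (LITERAL, "3"), (EXECUTE, "+"),
          (COMPILE, "emit"), (COMMENT, "adds two numbers")]
stack, compiled = interpret(source, dictionary)
# stack == [5]; compiled == ["emit"]
```

Strip the tags out of this model and the system cannot tell execution from compilation at all, which is the sense in which the "color" is language, not highlighting.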
>>>> Some commands in Forth are executed at compile time rather than being >>>> compiled. This is one of the many powerful features of Forth. >>> >>> You get that in other languages too. True, it is not always easy to >>> determine what is done at compile time and what is done at run time, >>> and the >>> distinction may depend on optimisation flags. >> >> That's what the color does. >> > > The colour doesn't do it - the language makes a clearer distinction > between compile-time and run-time, and the colour helps you see that. > You had the same distinction in Forth without colouring.
If you don't understand, learn how it is done in ANSI Forth and then tell me if THAT is part of the language or not.
> Having a separation here is both a good thing and a bad thing, in > comparison to the way C handles it, and the way C++ handles it. There > is room in the world for many models of language. > >> >>> But really, what you are describing here is like C++ with constexpr code >>> shown in a different colour. >> >> I wouldn't know C++. >> > > Without going into details, you are probably aware that in C you > sometimes need a "real" constant. For example, you can't make a > file-level array definition unless the size is absolutely fixed: > > int xs[16]; > > That's allowed. But you can't write this: > > int square(int x) { return x * x; } > > int xs[square(4)]; > > "square(4)" is not a constant in C terms. However, you would expect a > compiler to calculate the value at compile time (assuming it can see the > definition of the "square" function) for the purposes of code optimisation. > > In C++11 onwards, you can write: > > constexpr int square(int x) { return x * x; } > int xs[square(4)]; > > This tells the compiler that it can calculate "square" at compile time > if the parameters are known at compile time, but still allows the > function to be used as a run-time function if the parameters are not > known at compile time.
Here is a perfect example of why you think Forth has not evolved. There is nothing in even the earliest Forth that precludes this computation from being done at compile time. So how do you improve on perfection? <grin>
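For readers unfamiliar with the mechanism being alluded to: classic Forth lets a definition drop into the interpreter with `[ ... ]` and compile the computed value back into the word with `LITERAL`. Here is a toy Python sketch of that idea; the token handling is invented for illustration and is nothing like how a real Forth system is built.

```python
# Toy model of Forth's [ ... ] LITERAL: tokens between the brackets are
# evaluated immediately, at compile time, and "literal" compiles the
# interpreted result into the word as a constant.
def compile_word(tokens):
    """Compile a token list; '[' switches to immediate evaluation,
    'literal' compiles the interpreted result back as a constant."""
    compiled, interp_stack, interpreting = [], [], False
    for t in tokens:
        if t == "[":
            interpreting = True
        elif t == "]":
            interpreting = False
        elif interpreting:
            if t == "dup":
                interp_stack.append(interp_stack[-1])
            elif t == "*":
                interp_stack.append(interp_stack.pop() * interp_stack.pop())
            else:
                interp_stack.append(int(t))  # a number: push it now
        elif t == "literal":
            compiled.append(("push", interp_stack.pop()))
        else:
            compiled.append(("call", t))  # ordinary word: compile a call
    return compiled

# Forth:  : xs-size  [ 4 dup * ] literal ;
# The 16 is computed while compiling, not at run time.
compile_word("[ 4 dup * ] literal".split())
# -> [("push", 16)]
```

This is the sense in which compile-time computation has always been available in Forth, rather than arriving as a later language feature.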
>>>> ColorForth pushes further to allow some commands to be executed at edit >>>> time. I have not studied it in detail, so I can't give you details >>>> on this. >>>> >>>> I just want to explain how you are using very simplistic perceptions and >>>> expectations to "color" your impression of ColorForth without learning >>>> anything important about it. >>> >>> I've read the colorForth FAQ, such as it is. I also note that the >>> website >>> is dead. >> >> Yeah, Charles Moore isn't in the business of supporting language >> standards. He created Color Forth for himself and has shared it with >> others. GA is using it to support their products and they are the best >> source for information now. >> > > With all due respect to Chuck Moore and his creations, this is not a way > to conduct a professional business.
Whatever. He isn't running a business by promoting Forth. As I've said, he wrote Forth for himself, and others have liked the ideas he came up with and learned them. GA is a business that uses ColorForth. ANSI Forth is a standard that is widely used. Is Arduino a standard? No, yet it is widely used, even in business. Standards are useful in some cases; in other cases they are not needed.
>>>>> I can appreciate that a stack machine design is ideal for a small >>>>> processor, and can give very tight code. I can appreciate that this >>>>> means a sort of Forth is the natural assembly language for the system. >>>>> I can even appreciate that the RPN syntax, the interactivity, and the >>>>> close-to-the-metal programming appeals to some people. But I cannot >>>>> understand why this cannot be done with a modern language with decent >>>>> typing, static checking, optimised compilation, structured syntax, etc. >>>> >>>> You can't understand because you have not tried to learn about Forth. I >>>> can assure you there are a number of optimizing compilers for Forth. I >>>> don't know what you are seeing that you think Forth doesn't have >>>> "structured syntax". Is this different from the control flow >>>> structures? >>> >>> I fully understand that there are good optimising Forth compilers and >>> cross-compilers. But those are good /implementation/ - I am talking >>> about >>> the /language/. >> >> You mentioned optimizing compilers, what was your point in bringing it >> up? Optimizations are not in any language that I'm aware of. You seem >> to think there is something lacking in the Forth language, but you don't >> say what that would be. >> > > I gave a list somewhere in another post. But my key "missing features" > from Forth are good static checking, typing, methods of working with > data of different sizes, safe ways to define and use structures, and > ways to modularise the program. > > For example, take the "FLOOR5" function from the Wikipedia page: > > : FLOOR5 ( n -- n' ) DUP 6 < IF DROP 5 ELSE 1 - THEN ; > > > The C version is: > > int floor5(int v) { > return (v < 6) ? 5 : (v - 1); > } > > > Suppose the Forth programmer accidentally writes: > > : FLOOR5 ( n -- n' ) 6 < IF DROP 5 ELSE 1 - THEN ; > > It's an easy mistake to miss, and you've made a perfectly valid Forth > word definition that will be accepted by the system. 
But now the > comment does not match the usage. It would be entirely possible for the > language to provide a formalised and standardised way of specifying > input and output parameters in a way that most cases could be > automatically checked by the tools. Conventionalised comments are /way/ > out of date as an aid to automated correctness checking.
This has been discussed before, and some have experimented with writing code to do it. But it is only easy in simple examples like this one. Forth is a very flexible and powerful language, which can make it hard to implement such checking for all cases.
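As a sketch of what such an experiment might look like in exactly the simple case, here is a toy depth checker in Python (names and scope invented; it handles only literals, a handful of known words, and one flat IF/ELSE/THEN) that passes the correct FLOOR5 from the example above and catches the version missing DUP:

```python
# Toy Forth stack-depth checker: verifies a word body's net stack effect
# against its ( n -- n' ) comment. This only works for trivial, straight-
# line code -- the "simple example" case discussed above.
EFFECTS = {"DUP": 1, "DROP": -1, "<": -1, "-": -1, "+": -1}

def net_depth(toks, depth):
    i = 0
    while i < len(toks):
        t = toks[i]
        if t == "IF":
            else_i = toks.index("ELSE", i)
            then_i = toks.index("THEN", else_i)
            # IF consumes the flag; both branches must agree on final depth
            d_true = net_depth(toks[i + 1:else_i], depth - 1)
            d_false = net_depth(toks[else_i + 1:then_i], depth - 1)
            if d_true != d_false:
                raise ValueError("IF/ELSE branches leave different depths")
            depth, i = d_true, then_i + 1
        else:
            depth += EFFECTS.get(t, 1)  # unknown token: assume a literal push
            i += 1
        if depth < 0:
            raise ValueError("stack underflow")
    return depth

def check(body, n_in, n_out):
    """True if the word body turns n_in stack items into n_out."""
    return net_depth(body.split(), n_in) == n_out

check("DUP 6 < IF DROP 5 ELSE 1 - THEN", 1, 1)  # True: matches ( n -- n' )
# The buggy version without DUP underflows inside the checker:
#   check("6 < IF DROP 5 ELSE 1 - THEN", 1, 1)  raises ValueError
```

Generalizing this past toy cases runs straight into Forth's flexibility: words with data-dependent stack effects, return-stack tricks, and compile-time code all defeat a simple depth count, which is the point being made.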
> And then suppose you want this function to work with 32-bit values - > regardless of the width of a cell on the target machine. Or 64-bit > values on a 16-bit cell system. > > > (If you have good answers here, maybe you will change my mind - at least > a little!)
Just as in other languages, like Ada and VHDL (both strongly typed), you would need to write different code. I'm not interested in changing your mind, only in showing you your misunderstandings about Forth. I'm not actually the right person for the job, being a relative amateur with Forth, so I crossposted to the Forth group so others could do a better job. That may bring in some wild cards, however, as discussions in the Forth group often go awry.
>>>> I see Stephen Pelc responded to your posts. He is the primary author of >>>> VFX from MPE. Instead of throwing a tantrum about Forth "looking" >>>> like it >>>> is 30 years old, why not engage him and learn something? >>>> >>> >>> I am not "throwing a tantrum" - I /am/ engaging in discussion (including >>> with Stephen). >>> >>> I am talking about how Forth appears to me. I have worked with a wide >>> range >>> of languages, including various functional programming languages, >>> parallel >>> programming languages, assembly languages, hardware design languages, >>> high >>> level languages, low level languages, and a little Forth long ago. I >>> have >>> worked through tutorials in APL and Prolog. I am not put off by strange >>> syntaxes or having to think in a different manner. (It might put me off >>> /using/ such languages for real work, however.) When I talk about how >>> Forth >>> appears to me, it is quite clear that the language has limited >>> practicality >>> for modern programming. And if that is /not/ the case, then it certainly >>> has an image problem. >> >> I don't know what is meant by "limited practicality for modern >> programming". By griping about the use of primary colors and large >> fonts, I consider that throwing a tantrum. How about discussing >> something important and useful? >> > > See above for a simple example. > > But I am not griping about the use of colour - I am mocking the idea > that adding colour to the IDE is a big innovation in the language.
You still don't understand the issue. It isn't about the IDE, it is the fact that the use of color replaces words in the language that change how the other words are interpreted. The only downside is that by making it an integral part of the language, it becomes hard to use for color-blind programmers. We can all live without color highlighting in an IDE, but in Colorforth it is not optional. -- Rick C
David Brown wrote on 6/19/2017 8:47 AM:
> On 19/06/17 14:36, rickman wrote: >> David Brown wrote on 6/19/2017 3:02 AM: >>> On 19/06/17 06:21, rickman wrote: >>>> David Brown wrote on 6/18/2017 4:56 PM: >>>>> On 16/06/17 19:52, rickman wrote: >>>>>> David Brown wrote on 6/16/2017 3:25 AM: >>>>> <snip> >>>> >>>>>>> I think Atmel/Microchip now have a microcontroller with a bit of >>>>>>> programmable logic - I don't know how useful that might be. (Not for >>>>>>> your application here, of course - neither the cpu nor the PLD >>>>>>> part are >>>>>>> powerful enough.) >>>>>> >>>>>> You might be thinking of the PSOC devices from Cypress. They have >>>>>> either >>>>>> an 8051 type processor or an ARM CM3 with various programmable >>>>>> logic and >>>>>> analog. Not really an FPGA in any sense. They can be programmed in >>>>>> Verilog, but they are not terribly capable. Think of them as having >>>>>> highly flexible peripherals. >>>>> >>>>> No, I mean the new Atmel XMega E series. They have a "custom logic >>>>> module" >>>>> with a couple of timers and some lookup tables. It is not an FPGA - >>>>> it's >>>>> just a bit of programmable logic, more akin to a built-in PLD. >>>> >>>> I wouldn't even say the logic is comparable to a PLD type device other >>>> than a very, very simple one. It only includes two LUTs. This is more >>>> like a very lame Cypress PSOC device. Actually Cypress makes one >>>> sub-family of PSOC devices that actually have no programmable logic as >>>> such. They just have some peripherals that are very configurable, like >>>> it can be SPI or I2C or a couple of other serial devices, but no general >>>> logic. >>> >>> It is programmable logic, even though it is small. And unlike the PSoC, >>> it is a supplement to a range of normal microcontroller peripherals and >>> the AVR's "event" system for connecting peripherals. The idea is that >>> this logic can avoid the need of glue logic that you sometimes need >>> along with a microcontroller. 
I think it is a neat idea, and if it >>> catches on then I am sure later models will have more. >> >> Small is not the word. The PSOC is enormously more capable and *that* >> is small. I'm not sure of the purpose of the distinction when you say >> "supplement to a range of normal microcontroller peripherals". Why a >> need for "normal" peripherals? The point of the PSOC is they can offer >> flexibility so your design can use what it needs without a lot of wasted >> silicon for peripherals that aren't used. Instead they "waste" silicon >> by making the peripherals programmable. In the process you get a degree >> of flexibility not found other than in FPGA type programmable logic as >> well as analog programmability you can only find in discrete analog chips. > > What you get with the PSoC is a chip that can give you a couple of > specialised custom-tuned peripherals if that is what your application > needs, or 2 or 3 standard peripherals (timers, uarts, SPI, etc.) for the > silicon, power and dollar cost of 20 standard peripherals on a "normal" > microcontroller. A fixed UART or 16-bit timer is /much/ more efficient > than one made from flexible digital cells and all the programmability > needed to make them into the same type of peripheral. > > When you need a custom peripheral of some sort, then the flexibility of > a PSoC is (presumably) great. But only then.
So programmability in the PSOC is a niche feature while programmability in the XMega E is somehow a big feature? I think you may have missed some of the PSOC devices, like 90%. One subfamily of about four devices have programmable peripherals. The others have programmable logic and analog blocks. Much more powerful.
>>> Again, I don't know the numbers here. XMOS has been running for quite a >>> few years, with regular new products and new versions of their tools, so >>> they seem to be doing okay. >> >> The issue I have isn't that they aren't stable, but that they don't seem >> to be able to produce a device price competitive with the lower end >> device. For me that make FPGAs a more viable solution economically for >> the large majority of designs. Combine that with the large learning >> curve and we end up with many users never taking the time to become >> proficient with them. If they get priced down to the $1 range, they >> will get a *lot* more users and sales. >> > > I agree with you - and as I say, I don't really know why they can't make > (or sell) the chips for a lower price. > >> Likewise, if they did a shrink to make the GA144 more cost competitive a >> lot more users would be interested in learning how to program the >> device. But it would still be a very ugly duckling requiring a whole >> new approach to complex systems. The inability to draw on the huge base >> of established software make the GA144 a non-starter for many apps. >> > > Nah, the GA144 would not be popular even if they /paid/ people to use > them. Less unpopular, perhaps, but not popular.
That much processing power in a very low cost device would become useful in many apps. It is odd and difficult to learn, but it is not without functionality and application. -- Rick C
Albert van der Horst wrote on 6/19/2017 6:54 AM:
> In article <oi81sg$n55$1@dont-email.me>, > David Brown <david.brown@hesbynett.no> wrote: >> On 19/06/17 06:54, rickman wrote: >>> David Brown wrote on 6/18/2017 9:15 PM: >>>> On 16/06/17 20:22, rickman wrote: >>>>> David Brown wrote on 6/16/2017 7:21 AM: >>>>>>> David Brown wrote on 6/15/2017 7:04 AM: > <SNIP> >>>>>> On 15/06/17 23:46, rickman wrote: >>>>> The GA144 is a stack processor and so the assembly language looks a lot >>>>> like Forth which is based on a stack processor virtual machine. I'm not >>>>> sure what is "weird" about it other than the fact that most programmers >>>>> aren't familiar with stack programming other than Open Boot, Postscript, >>>>> RPL and BibTeX. >>>>> >>>>> >>>> >>>> Even for a stack machine, it is very limited. >>> >>> Like what? It is not really "limited". The GA144 assembly language >>> is... well, assembly language. Would you compare the assembly language >>> of an X86 to JAVA or C? >>> >> >> The size of the memories (data space, code space and stack space) is the >> most obvious limitation. > > A less obvious limitation that goes right to the heart of the > parallel processing that is claimed, is processor connectivity. > I explored parallelism (with the parallel prime sieve) and > the fixed rectangular grid is absolutely bonkers for any serious > application where calculation power is needed. > In this case I wanted to have two pipelines that come together. > It starts as a puzzle, then it turns out to be hardly possible. > > Two crossing pipeline have to pass through one processor. > If there is any structure to the data, the one processor would > fail the processing power to make that possible. > On transputers you would have hypercube arrangements such that there > is no need to do that. On top of that, it would be easy. > You just define two unrelated pass-through processes.
You can also do that with the GA144.
> A definitive measure for the quality of the GA144 would be a > bitcoin calculator. That is the ratio between the cost of the > electricity consumed and the value of the bitcoins generated. > It would be *bad*.
Bad compared to FPGAs where the design is optimized at a very low level or compared to a custom ASIC which is optimized from the ground up for this application. I don't believe any other devices are currently used to mine bitcoin. Software became obsolete some time back.
>>> Perhaps, but I would still emphasize the issue that MCUs in general and >>> FPGAs in general cover a lot of territory. XMOS only excels in a fairly >>> small region. The GA144 is optimal for a microscopically small region. >>> >> >> Fair enough. > > Indeed. > > <SNIP> > >> See above for a simple example. >> >> But I am not griping about the use of colour - I am mocking the idea >> that adding colour to the IDE is a big innovation in the language. > > 100% agreed. Adding colour is equivalent to a prefix character. > Then if you want to Vim can add the colour for you based on the > prefix character. > > Notations can be important innovations as Newton and Leibniz showed. > The real big innovation they made was the differential calculus. > If there is something underlying Colorforth it is tagged objects, > hardly spectacular.
Does that prefix character not take the place of a Forth word? -- Rick C
On 19/06/17 15:23, rickman wrote:
> David Brown wrote on 6/19/2017 8:47 AM: >> On 19/06/17 14:36, rickman wrote: >>> David Brown wrote on 6/19/2017 3:02 AM: >>>> On 19/06/17 06:21, rickman wrote: >>>>> David Brown wrote on 6/18/2017 4:56 PM: >>>>>> On 16/06/17 19:52, rickman wrote: >>>>>>> David Brown wrote on 6/16/2017 3:25 AM: >>>>>> <snip> >>>>> >>>>>>>> I think Atmel/Microchip now have a microcontroller with a bit of >>>>>>>> programmable logic - I don't know how useful that might be. >>>>>>>> (Not for >>>>>>>> your application here, of course - neither the cpu nor the PLD >>>>>>>> part are >>>>>>>> powerful enough.) >>>>>>> >>>>>>> You might be thinking of the PSOC devices from Cypress. They have >>>>>>> either >>>>>>> an 8051 type processor or an ARM CM3 with various programmable >>>>>>> logic and >>>>>>> analog. Not really an FPGA in any sense. They can be programmed in >>>>>>> Verilog, but they are not terribly capable. Think of them as having >>>>>>> highly flexible peripherals. >>>>>> >>>>>> No, I mean the new Atmel XMega E series. They have a "custom logic >>>>>> module" >>>>>> with a couple of timers and some lookup tables. It is not an FPGA - >>>>>> it's >>>>>> just a bit of programmable logic, more akin to a built-in PLD. >>>>> >>>>> I wouldn't even say the logic is comparable to a PLD type device other >>>>> than a very, very simple one. It only includes two LUTs. This is >>>>> more >>>>> like a very lame Cypress PSOC device. Actually Cypress makes one >>>>> sub-family of PSOC devices that actually have no programmable logic as >>>>> such. They just have some peripherals that are very configurable, >>>>> like >>>>> it can be SPI or I2C or a couple of other serial devices, but no >>>>> general >>>>> logic. >>>> >>>> It is programmable logic, even though it is small. And unlike the >>>> PSoC, >>>> it is a supplement to a range of normal microcontroller peripherals and >>>> the AVR's "event" system for connecting peripherals. 
The idea is that >>>> this logic can avoid the need of glue logic that you sometimes need >>>> along with a microcontroller. I think it is a neat idea, and if it >>>> catches on then I am sure later models will have more. >>> >>> Small is not the word. The PSOC is enormously more capable and *that* >>> is small. I'm not sure of the purpose of the distinction when you say >>> "supplement to a range of normal microcontroller peripherals". Why a >>> need for "normal" peripherals? The point of the PSOC is they can offer >>> flexibility so your design can use what it needs without a lot of wasted >>> silicon for peripherals that aren't used. Instead they "waste" silicon >>> by making the peripherals programmable. In the process you get a degree >>> of flexibility not found other than in FPGA type programmable logic as >>> well as analog programmability you can only find in discrete analog >>> chips. >> >> What you get with the PSoC is a chip that can give you a couple of >> specialised custom-tuned peripherals if that is what your application >> needs, or 2 or 3 standard peripherals (timers, uarts, SPI, etc.) for the >> silicon, power and dollar cost of 20 standard peripherals on a "normal" >> microcontroller. A fixed UART or 16-bit timer is /much/ more efficient >> than one made from flexible digital cells and all the programmability >> needed to make them into the same type of peripheral. >> >> When you need a custom peripheral of some sort, then the flexibility of >> a PSoC is (presumably) great. But only then. > > So programmability in the PSOC is a niche feature while programmability > in the XMega E is somehow a big feature?
No, the small programmability of the XMega E is a nice addition to all the other peripherals. Without its timers, communications, ADCs, DMA, etc., it would be a very poor microcontroller. The small programmability of the PSoC is niche because it has its programmable blocks /instead of/ ordinary microcontroller peripherals. If they were on top of a base of solid standard peripherals, the programmable blocks would be much more interesting.
> > I think you may have missed some of the PSOC devices, like 90%. One > subfamily of about four devices have programmable peripherals. The > others have programmable logic and analog blocks. Much more powerful. > > >>>> Again, I don't know the numbers here. XMOS has been running for >>>> quite a >>>> few years, with regular new products and new versions of their >>>> tools, so >>>> they seem to be doing okay. >>> >>> The issue I have isn't that they aren't stable, but that they don't seem >>> to be able to produce a device price competitive with the lower end >>> device. For me that make FPGAs a more viable solution economically for >>> the large majority of designs. Combine that with the large learning >>> curve and we end up with many users never taking the time to become >>> proficient with them. If they get priced down to the $1 range, they >>> will get a *lot* more users and sales. >>> >> >> I agree with you - and as I say, I don't really know why they can't make >> (or sell) the chips for a lower price. >> >>> Likewise, if they did a shrink to make the GA144 more cost competitive a >>> lot more users would be interested in learning how to program the >>> device. But it would still be a very ugly duckling requiring a whole >>> new approach to complex systems. The inability to draw on the huge base >>> of established software make the GA144 a non-starter for many apps. >>> >> >> Nah, the GA144 would not be popular even if they /paid/ people to use >> them. Less unpopular, perhaps, but not popular. > > That much processing power in a very low cost device would become useful > in many apps. It is odd and difficult to learn, but it is not without > functionality and application. >
Well, I disagree. I think the individual cpus are too weak to be practical. It does not really matter if they can do simple operations at 700 MHz if they can't run more than a few dozen lines of code. There are not nearly enough processors on the chip, and far, far too few communication channels between nodes, for it to be useful despite the small memory size.
On 19/06/17 15:19, rickman wrote:
> David Brown wrote on 6/19/2017 4:30 AM: >> On 19/06/17 06:54, rickman wrote: >>> David Brown wrote on 6/18/2017 9:15 PM: >>>> On 16/06/17 20:22, rickman wrote: >>>>> David Brown wrote on 6/16/2017 7:21 AM: >>>>>> On 15/06/17 23:46, rickman wrote: >>>>>>> David Brown wrote on 6/15/2017 7:04 AM: >>>> <snip> >>>>> >>>>> Yeah, I don't know of any product using the GA144. I looked hard at >>>>> using >>>>> it in a production design where I needed to replace an EOL FPGA. >>>>> Ignoring >>>>> all the other issues, I wasn't sure it would meet the timing I've >>>>> outlined >>>>> in other posts in this thread. I needed info on the I/O timing and GA >>>>> wouldn't provide it. They seemed to think I wanted to reverse >>>>> engineer >>>>> the transistor level design. Silly gooses. >>>>> >>>>> I don't know why the age of a computer language is even a factor. I >>>>> don't >>>>> think C is a newcomer and is the most widely used programming >>>>> language in >>>>> embedded devices, no? >>>> >>>> The age of a language itself is not important, of course. The type of >>>> features it has /are/ important. >>>> >>>> Many things in computing have changed in the last 4 decades. Features >>>> of a >>>> language that were simply impossible at that time due to limited host >>>> computing power are entirely possible today. So modern languages do >>>> far >>>> more compile-time checking and optimisation now than was possible at >>>> that >>>> time. Good languages evolve to take advantage of newer possibilities >>>> - the >>>> C of today is not the same as the pre-K&R C of that period. The >>>> Forth of >>>> today appears to me to be pretty much the same - albeit with more >>>> optimising >>>> compilers and additional libraries. >>> >>> I don't know the "new" C, I don't work with it. What improved? >> >> Well, starting from pre-K&R C and moving to "ANSI" C89/C90, it got >> prototypes, proper structs, const, volatile, multiple different sized >> types, etc. 
I am sure you are very familiar with this C - but my point >> is that even though the history of C is old like that of Forth, even at >> that point 25+ years ago C had moved on and improved significantly as a >> language, compared to its original version. > > As has Forth. The 2012 standard is an improvement over the previous > version, which is an improvement over the previous version to that and > the initial ANSI version was an improvement over the multiple flavors of > Forth prior to that for the standardization if nothing else.
I have looked through the Forth 2012 standard. Nothing much has changed in the language - a few words added, a few words removed. (Previous revisions apparently had bigger changes, according to a list of compatibility points.)
> > >> Some embedded developers still stick to that old language, rather than >> moving on to C99 with inline, booleans, specifically sized types, line >> comments, mixing code and declarations, and a few other useful bits and >> pieces. Again, C99 is a much better language. >> >> C11 is the current version, but does not add much that was not already >> common in implementations. Static assertions are /very/ useful, and the >> atomic types have possibilities but I think are too little, too late. > > I think the real issue is you are very familiar with C while totally > unfamiliar with Forth.
I certainly can't claim to be unbiased - yes, I am very familiar with C and very unfamiliar with Forth. I am not /totally/ unfamiliar - I understand the principles of the stacks and their manipulation, the way words are defined, and can figure out what some very simple words do, at least for arithmetic and basic stack operations. And I am fine with trying to get an understanding of how a language could be used even though I don't understand the details.
> > >>>>> The GA144 is a stack processor and so the assembly language looks a >>>>> lot >>>>> like Forth which is based on a stack processor virtual machine. >>>>> I'm not >>>>> sure what is "weird" about it other than the fact that most >>>>> programmers >>>>> aren't familiar with stack programming other than Open Boot, >>>>> Postscript, >>>>> RPL and BibTeX. >>>>> >>>>> >>>> >>>> Even for a stack machine, it is very limited. >>> >>> Like what? It is not really "limited". The GA144 assembly language >>> is... well, assembly language. Would you compare the assembly language >>> of an X86 to JAVA or C? >>> >> >> The size of the memories (data space, code space and stack space) is the >> most obvious limitation. > > As I said, that is not a language issue, that is a device issue. But > you completely blow it when you talk about the "stack" limitation. > Stacks don't need to be MBs. It's that simple. You are thinking in C > and the other algol derived languages, not Forth.
I program mostly on small microcontrollers. These days, I see more devices with something like 128K ram, but I have done more than my fair share with 4K ram or less. No, I am /not/ thinking megabytes of space. But a 10 cell stack is /very/ limited. So is a 64 cell ram, and a 64 cell program rom - even taking into account the code space efficiency of Forth. I am not asking for MB here.
>>> It is the hardware >>> limitation of the CPU. The GA144 was designed with a different >>> philosophy. I would say for a different purpose, but it was not designed >>> for *any* purpose. Chuck designed it as an experiment while exploring >>> the space of minimal hardware processors. The capapbilities come from >>> the high speed of each processor and the comms capability. >> >> Minimal systems can be interesting for theory, but are rarely of any use >> in practice. > > That comment would seem to indicate you are very familiar with minimal > systems. I suspect the opposite is true. I find minimal CPUs to be > *very* useful in FPGA designs allowing a "fast" processor to be > implemented in even very small amounts of logic. >
If you have a specific limited task, then a small cpu can be very useful. Maybe you've got an FPGA connected to a DDR DIMM socket. A very small cpu might be the most convenient way to set up the memory strobe delays and other parameters, letting the FPGA work with a cleaner memory interface. But that is a case of a small cpu helping out a bigger system - it is not a case of using the small cpus alone. It is a different case altogether.
> >>> I compare the GA144 to FPGAs more than to ARMs. The CPUs are small, >>> very fast and plentiful (relatively), like the LUTs in an FPGA. >>> Communications are very fast and the processor can automatically halt >>> for synchronization with the other processor. Letting a processor sit >>> idle is a power advantage, not a speed disadvantage. Processing speed >>> is plentiful in the GA144 so it does not need to be optimized. Like the >>> XMOS using it requires some adjustment in your thinking... a lot more >>> than the XMOS in fact. >> >> I agree with the principle - as I say, the GA144 has some interesting >> ideas and technology. But you need more power in each cpu to do >> something useful. If you want to use animal power to draw a plough, you >> want a horse. An army of ants might have a theoretically greater total >> strength and a better total-power to food cost ratio, but it is still >> hopeless as a solution. > > You should save the poor analogs for other conversations. An army of > ants can move a large pile of sand overnight by themselves. The horse > will just complain it doesn't have enough food and water moving the sand > only if you connect it to the right equipment and spend your day > cracking the whip. >
There are good reasons we don't use masses of tiny cpus instead of a few big ones - just as we don't use ants as workers. It is not just a matter of bias or unfamiliarity.
>>> I can't tell you how many people think FPGAs are complicated to design, >>> power hungry and expensive. All three of these are not true. >>> >> >> That certainly /was/ the case. > > 20 years ago maybe. >
A /lot/ less than 20 years ago.
> >> But yes, for a good while now there have >> been cheap and low power FPGAs available. As for complicated to design >> - well, I guess it's easy when you know how. But you do have to know >> what you are doing. > > MCUs are no different. A newbie will do a hack job. I once provided > some assistance to a programmer who needed to spin an FPGA design for > his company. They wouldn't hire me to do it because they wanted to > develop the ability in house. With minimal assistance (and I mean > minimal) he first wrote a "hello, world" program for the FPGA. He then > went on to write his application. > > The only tricky parts of programming FPGAs is when you need to optimize > either speed or capacity or worse, both! But I believe the exact same > thing is true about MCUs. Most projects can be done by beginners and > indeed *are* done by beginners. That has been my experience. In fact, > that is the whole reason for the development and use of the various > tools for programming, making them usable by programmers with lesser > skills, enabling a larger labor pool at a lower price. > > The only magic in FPGA design is the willingness to wade into the waters > and get your feet wet. >
I will happily agree that FPGA design is not as hard as many people think. However, I do think it is harder to learn and harder to get right than basic microcontroller programming. The key difference is that with microcontrollers, you are (mostly) doing one thing at a time all in one place on the chip - with FPGAs, you are doing everything at once but in separate parts of the chip. I think the serial execution is a more familiar model to people - we are used to doing one thing at a time, but being able to do many different tasks at different times. The FPGA model is more like workers on a production line, and that takes time for an individual to understand.
> >> Tools are better, introductory videos are better, >> etc. - there are lots more learning resources than in the "old" days. >> And once you know (at least roughly) what you are doing, the modern >> tools and computers make the job a good deal faster than before. I >> remember some 20 years ago working with a large PLD - place and route >> took about 8 hours, and debugging the design was done by pulling a >> couple of internal signals out to spare pins and re-doing the place and >> route. (By that stage of the project it was too late to think about >> alternative chips.) >> >>> My only complaints are they tend to be in very fine pitch BGA packages >>> (many with very high pin counts), only a few smaller devices are >>> available and they don't integrate much analog. I'd like to see a small >>> FPGA rolled with a small MCU (ARM CM4) with all the standard peripherals >>> an MCU normally includes, brownout, ADC/DAC, etc. They could have done >>> this affordably a decade ago if they wanted, but the FPGA companies have >>> a particular business model that does not include this market. Lattice >>> and Microsemi aren't as committed to the mainstream FPGA market and so >>> offer some limited products that differ. >>> >> >> Variety of choices is always nice. I agree that devices like those >> could have a wide range of uses. > > Devices are not often made for broad markets. FPGAs in particular are > designed for the comms market and everyone else is along for the ride. > As I mentioned, Lattice and Microsemi are branching out a bit to address > either a different market (Lattice ice40 parts are aimed at the cell > phone market) or a broader market (Microsemi devices), but with limited > success. > > >>>> If your eyesight is poor, use a bigger screen or a bigger or more >>>> legible >>>> font - that's fine. But it does not make sense to use that as the >>>> basis for >>>> designing your toolchain - just make the IDE configurable. 
>>> >>> I have a laptop with a 17 inch screen and the fonts are smaller than my >>> old desktop with a 17 inch monitor because the pixels are smaller (HD >>> vs. 1280 horizontal resolution). The windows stuff for adjusting the >>> size of fonts and such don't work properly across lots of apps. Even my >>> Excalibur calculator is very hard to read. >>> >> >> Surely you don't use that laptop for normal work? A laptop is okay for >> when you need a portable office, but I have three large monitors in my >> office. And most of my development is done on Linux, where font scaling >> works most of the time. (Though personally, I like small fonts with >> lots of text on the screen - my eyes are fine for that, when I have my >> contacts in. Without them, I can't focus further than my nose!). > > At this point my laptop is my only computer (other than a five year old > netbook used for extreme emergencies). Its only problem is it's a > Lenovo piece of crap. I don't have an office as much as I am portable. > I don't spend more than half a week in one place running. I just wish I > could find a reasonably priced attached oscilloscope. The model I'd > like to have is $1,500. I'd like to spend more like $500 and then I'd > be totally portable. > > >>> Bottom line is don't bad mouth an API because it doesn't *please* you. >>> Your tastes aren't the only standard. >>> >> >> My tastes are the most important standard for /me/, and the one that >> affects my impression when I look at a tool. Of course I realise that >> other people have other tastes. And perhaps some people have poor >> eyesight and a job that requires them to work entirely on a small >> laptop. But I find it hard to accept that an IDE should be designed >> solely on the basis of being clear to someone with bad eyesight who >> works with a tiny old monitor. The colorForth stuff seems to be >> designed by and /for/ a single person - Chuck Moore. 
That's fine for >> him for a personal project, but it is highly unlikely to be a good way >> to make tools for more general use. > > There you go with the extremes again. Colorforth isn't designed > "solely" for people with bad eyesight. It is designed to be as useful > as possible. It is clear you have not learned enough about it to know > what is good and what is bad. You took one quick look at it and turned > away.
I gave it several good looks. I have also given Forth a good look over a number of times in the past few decades. It has some attractions, and I would be happy if it were a practical choice for a lot of development. It is always better when there is a choice - of chips, tools, languages, whatever. But Forth just does not have what I need - not by a long shot. What you take to be animosity, ignorance or bias here is perhaps as much a result of frustration and a feeling of disappointment that Forth is not better.
> > >>>>> The use of color to indicate aspects of the language is pretty much >>>>> the >>>>> same as the color highlighting I see in nearly every modern >>>>> editor. The >>>>> difference is that in ColorForth the highlighting is *part* of the >>>>> language as it distinguishes when commands are executed. >>>> >>>> It is syntax highlighting. >>> >>> No, it is functional, not just illustrating. It is in the *language*, >>> not just the editor. It's all integrated, not in the way the tools in a >>> GUI are integrated, but in the way your heart, lungs and brain are >>> integrated. >>> >> >> No, it is syntax highlighting. >> >> There is a 4 bit "colour token" attached to each symbol. These >> distinguish between variables, comments, word definitions, etc. There >> is /nothing/ that this gives you compared to, say, $ prefixes for >> variables (like PHP), _t suffixes for types (common convention in C), >> etc., with colour syntax highlighting. The only difference is that the >> editor hides the token. So when you have both var_foo and word_foo, >> they are both displayed as "foo" in different colours rather than >> "var_foo" and "word_foo" in different colours. >> >> That is all there is to it. > > You just said it is more than syntax highlighting. It is like type > definitions in other languages. It is built into the language which > won't work without it. That's the part you aren't getting. Compare > Colorforth to ANSI Forth and you will see what I mean. >
It is tags that you see by colour instead of as symbols or letters. Glorified syntax highlighting.
> >>>>> Some commands in Forth are executed at compile time rather than being >>>>> compiled. This is one of the many powerful features of Forth. >>>> >>>> You get that in other languages too. True, it is not always easy to >>>> determine what is done at compile time and what is done at run time, >>>> and the >>>> distinction may depend on optimisation flags. >>> >>> That's what the color does. >>> >> >> The colour doesn't do it - the language makes a clearer distinction >> between compile-time and run-time, and the colour helps you see that. >> You had the same distinction in Forth without colouring. > > If you don't understand, learn how it is done in ANSI Forth and then > tell me if THAT is part of the language or not. > > >> Having a separation here is both a good thing and a bad thing, in >> comparison to the way C handles it, and the way C++ handles it. There >> is room in the world for many models of language. >> >>> >>>> But really, what you are describing here is like C++ with constexpr >>>> code >>>> shown in a different colour. >>> >>> I wouldn't know C++. >>> >> >> Without going into details, you are probably aware that in C you >> sometimes need a "real" constant. For example, you can't make a >> file-level array definition unless the size is absolutely fixed: >> >> int xs[16]; >> >> That's allowed. But you can't write this: >> >> int square(int x) { return x * x; } >> >> int xs[square(4)]; >> >> "square(4)" is not a constant in C terms. However, you would expect a >> compiler to calculate the value at compile time (assuming it can see the >> definition of the "square" function) for the purposes of code >> optimisation. 
>>
>> In C++11 onwards, you can write:
>>
>> constexpr int square(int x) { return x * x; }
>> int xs[square(4)];
>>
>> This tells the compiler that it can calculate "square" at compile time
>> if the parameters are known at compile time, but still allows the
>> function to be used as a run-time function if the parameters are not
>> known at compile time.
>
> Here is a perfect example of why you think Forth has not evolved.  There
> is nothing in even the earliest Forth that precludes this computation
> from being done at compile time.  So how do you improve on perfection?
> <grin>
Hey, I never claimed C was perfect!
>
>
>>>>> ColorForth pushes further to allow some commands to be executed at
>>>>> edit time.  I have not studied it in detail, so I can't give you
>>>>> details on this.
>>>>>
>>>>> I just want to explain how you are using very simplistic perceptions
>>>>> and expectations to "color" your impression of ColorForth without
>>>>> learning anything important about it.
>>>>
>>>> I've read the colorForth FAQ, such as it is.  I also note that the
>>>> website is dead.
>>>
>>> Yeah, Charles Moore isn't in the business of supporting language
>>> standards.  He created Color Forth for himself and has shared it with
>>> others.  GA is using it to support their products and they are the best
>>> source for information now.
>>>
>>
>> With all due respect to Chuck Moore and his creations, this is not a way
>> to conduct a professional business.
>
> Whatever.  He isn't running a business by promoting Forth.  As I've
> said, he wrote Forth for himself and others like the ideas he has come
> up with and learned them.
>
> GA is a business that uses ColorForth.  ANSI Forth is a standard that is
> widely used.
>
That's fair enough.  But it does mean that the GA144 is not a serious choice for professional products.  (And yes, I know that not everything made has to be serious and long-lasting.)
> Is Arduino a standard?  No, yet it is widely used even in business.
> Standards are useful in some cases, in other cases not needed.
>
We do a fair amount of business taking people's bashed-together Arduino prototypes and turning them into robust, industrialised, professional products.
>>
>> (If you have good answers here, maybe you will change my mind - at least
>> a little!)
>
> Just as in other languages, like Ada and VHDL (both strongly typed), you
> would need to write different code.
>
> I'm not interested in changing your mind, only in showing you your
> misunderstandings about Forth.  I'm not actually the right person for
> the job, being a relative amateur with Forth, so I crossposted to the
> Forth group so others could do a better job.  That may bring in some
> wild cards, however, as discussions in the Forth group often go awry.
I appreciate the conversation, and have found this thread enlightening, educational and interesting - even when we disagree.
David Brown <david.brown@hesbynett.no> writes:
>I gave a list somewhere in another post.  But my key "missing features"
>from Forth are good static checking, typing, methods of working with
>data of different sizes, safe ways to define and use structures, and
>ways to modularise the program.
>
>For example, take the "FLOOR5" function from the Wikipedia page:
>
>: FLOOR5 ( n -- n' ) DUP 6 < IF DROP 5 ELSE 1 - THEN ;
>
>The C version is:
>
>int floor5(int v) {
>    return (v < 6) ? 5 : (v - 1);
>}
>
>Suppose the Forth programmer accidentally writes:
>
>: FLOOR5 ( n -- n' ) 6 < IF DROP 5 ELSE 1 - THEN ;
>
>It's an easy mistake to miss, and you've made a perfectly valid Forth
>word definition that will be accepted by the system.  But now the
>comment does not match the usage.  It would be entirely possible for the
>language to provide a formalised and standardised way of specifying
>input and output parameters in a way that most cases could be
>automatically checked by the tools.  Conventionalised comments are /way/
>out of date as an aid to automated correctness checking.
As a proper programmer, you also write tests for your programs:

t{ 3 floor5 -> 5 }t
t{ 5 floor5 -> 5 }t
t{ 6 floor5 -> 5 }t
t{ 9 floor5 -> 8 }t

When you run this through "gforth test/ttester.fs", you get right away:

:2: Stack underflow
t{ 3 >>>floor5<<< -> 5 }t
Backtrace:
$7F062B481EF0 lit

The backtrace points to one of the literals (6, 5, or 1, and actually 5
in this case), which is misleading; Gforth notices stack underflows when
the stack memory is accessed, and DROP does not access that memory.  But
anyway, once you get that error message, it's pretty easy to find the
error.

Even if you run it on a system that does not catch all stack underflows
(e.g., gforth-fast test/ttester.fs), you get:

WRONG NUMBER OF RESULTS: t{ 3 floor5 -> 5 }t
WRONG NUMBER OF RESULTS: t{ 5 floor5 -> 5 }t
WRONG NUMBER OF RESULTS: t{ 6 floor5 -> 5 }t
WRONG NUMBER OF RESULTS: t{ 9 floor5 -> 8 }t

A static checker might say that the DROP and the - access a value that
is not present in the stack effect, so it would be a little more
precise at pinpointing the problem, but stack depth issues are easy
enough that nobody found it worthwhile to write such a checker yet.

BTW, I would write this function as:

: floor5 ( n1 -- n2 ) 1- 5 max ;
>And then suppose you want this function to work with 32-bit values - >regardless of the width of a cell on the target machine. Or 64-bit >values on a 16-bit cell system.
No!  I have had lots of portability problems for C code when porting
between 32-bit and 64-bit systems, thanks to the integer type zoo of C.
In Forth I have had very few such problems, thanks to the fact that we
only have cells and occasionally double-cells (and when you get a
double-cell program right on 32-bit systems, it also works on 64-bit
systems).  If you want a FLOOR5 variant that works for integers that
don't fit in a cell, you write DFLOOR5.  And if it does not fit in
double cells (but would fit in 64 bits), you probably have the wrong
machine for what you are trying to do.  C did not acquire 64-bit
integer types until 32-bit machines were mainstream.

- anton
--
M. Anton Ertl  http://www.complang.tuwien.ac.at/anton/home.html
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
New standard: http://www.forth200x.org/forth200x.html
EuroForth 2017: http://www.euroforth.org/ef17/
Have you contacted datakey.com? It is just possible that they might be 
able to help you less expensively than reverse engineering their product.

w..

In article <2017Jun19.164441@mips.complang.tuwien.ac.at>,
Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
>David Brown <david.brown@hesbynett.no> writes:
<SNIP>
>
>>And then suppose you want this function to work with 32-bit values -
>>regardless of the width of a cell on the target machine.  Or 64-bit
>>values on a 16-bit cell system.
>
>No!  I have had lots of portability problems for C code when porting
>between 32-bit and 64-bit systems, thanks to the integer type zoo of
>C.  In Forth I have had very few such problems, thanks to the fact
>that we only have cells and occasionally double-cells (and when you
>get a double-cell program right on 32-bit systems, it also works on
>64-bit systems).  If you want a FLOOR5 variant that works for integers
>that don't fit in a cell, you write DFLOOR5.  And if it does not fit
>in double cells (but would fit in 64 bits), you probably have the
>wrong machine for what you are trying to do.  C did not acquire 64-bit
>integer types until 32-bit machines were mainstream.
Interestingly, Java is supposed to be safe.  Yet I've seen dozens of write-ups of Project Euler problems by Java programmers who had run into overflow and wasted time debugging it.  (Once you've solved a problem, you're entitled to write about how you did it.)  Of course, sometimes when you scale up a problem you get wrong results in Forth caused by overflow too.  But that was never a time waster, because overflow is the first thing to look at in such a case, and it is easy to detect and correct in Forth.
>
>- anton
Groetjes Albert
--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n  http://home.hccnet.nl/a.w.m.van.der.horst
