
MCU mimicking a SPI flash slave

Started by John Speth June 14, 2017
On 20/06/17 10:36, dp wrote:
> On Tuesday, June 20, 2017 at 11:44:39 AM UTC+3, David Brown wrote: >> On 20/06/17 09:25, Dimiter_Popoff wrote: >>> On 20.6.2017 г. 07:12, rickman wrote: >>>> .... >>>> >>>> So you think for a language to be modern it has to have hard coded data >>>> sizes? >> >> If you are going to use it for low-level programming and embedded >> development, then yes. > > No. If you don't know which datum of which size and type is you are > not up to programming. Delegating to the tool to track that means > only more work for the programmer, sometimes a lot more while tracking > what the tool got wrong _this_ time.
Er. No. David is suggesting, correctly IMNSHO, that sometimes it is necessary for me to specify exactly what the tool has to achieve - and then to let the tool do it in any way it sees fit. He gave a good example of that, which you snipped. With types such as uint8_t, uint_fast8_t and uint_least8_t, modern C is a significant advance over K&R C.
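For anyone who hasn't met those types, a minimal sketch of the distinction (plain C99 <stdint.h>, nothing project-specific assumed):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t       a = 200;  /* exactly 8 bits - only provided where the target has such a type */
    uint_least8_t b = 200;  /* the smallest type with at least 8 bits */
    uint_fast8_t  c = 200;  /* the "fastest" type with at least 8 bits - may be wider */

    /* The programmer states the requirement; the toolchain picks the representation. */
    printf("sizeof: exact=%zu least=%zu fast=%zu\n",
           sizeof a, sizeof b, sizeof c);
    return 0;
}

Whatever sizes those come out as on a given target, the choice is the compiler's to make within the stated constraint - which is exactly the kind of delegation being argued about here.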
>>> Programmers should use their ability to count not just for stack levels, >>> it is a lot more effective than working to delegate the counting of >>> this and that to a tool which is a lot less intelligent than the >>> programmer. Just let the tool do the heavy lifting exercises, counting >>> is not one of these. >>> >> >> Counting is one of the tasks I expect a computer - and therefore a >> programming language and a toolchain - to do well. I expect the tool to >> do the menial stuff and let the programmer get on with the thinking. > > If "counting" is too much of a workload is too much for a person he > is not in the right job as a programmer (hopefully "counting" is not > taken literally here, it means more "counting up to ten"). > Delegating simple tasks to the tool makes life harder, not easier, > as I said above. Often a lot harder.
I can only easily deal with three numbers: 0, 1, many :) All other numbers are a pain in the ass and I'm more than happy to delegate them to a tool.
On 20/06/17 05:57, rickman wrote:
> David Brown wrote on 6/19/2017 10:23 AM: >> On 19/06/17 15:19, rickman wrote: >>> David Brown wrote on 6/19/2017 4:30 AM: >>>> On 19/06/17 06:54, rickman wrote:
<snip>
>>> >>> As has Forth. The 2012 standard is an improvement over the previous >>> version, which is an improvement over the previous version to that and >>> the initial ANSI version was an improvement over the multiple flavors of >>> Forth prior to that for the standardization if nothing else. >> >> I have looked through the Forth 2012 standard. Nothing much has changed >> in the language - a few words added, a few words removed. (Previous >> revisions apparently had bigger changes, according to a list of >> compatibility points.) > > I don't mean to be rude, but this shows your ignorance of Forth. In > Forth, nearly everything is about the words.
(I don't take it as rude - this has been a very civil thread, despite differing opinions.) Yes, I know Forth is all about the words. But as far as I could tell, Forth 2012 does not add many or remove many words - it makes little change to what you can do with the language. And - IMHO - to make Forth a good choice for a modern programming language, it would need to do more than that. As you say below, however, that is not "what Forth is about".
> > >>>> Some embedded developers still stick to that old language, rather than >>>> moving on to C99 with inline, booleans, specifically sized types, line >>>> comments, mixing code and declarations, and a few other useful bits and >>>> pieces. Again, C99 is a much better language. >>>> >>>> C11 is the current version, but does not add much that was not already >>>> common in implementations. Static assertions are /very/ useful, and >>>> the >>>> atomic types have possibilities but I think are too little, too late. >>> >>> I think the real issue is you are very familiar with C while totally >>> unfamiliar with Forth. >> >> I certainly can't claim to be unbiased - yes, I am very familiar with C >> and very unfamiliar with Forth. I am not /totally/ unfamiliar - I >> understand the principles of the stacks and their manipulation, the way >> words are defined, and can figure out what some very simple words do, at >> least for arithmetic and basic stack operations. And I am fine with >> trying to get an understanding of how a language could be used even >> though I don't understand the details. > > When addressing the issues you raise with Forth none of these things are > what Forth is about. > > I don't know that you need to understand all the details of Forth to see > it's power, but it would help if you understood how some parts of Forth > work or at least could see how a significant app was written in Forth. > Try learning how an assembler is usually written in Forth. This is easy > to do as most Forths are provided with full source. > >
<snip>
>>>> >>>> The size of the memories (data space, code space and stack space) is >>>> the >>>> most obvious limitation. >>> >>> As I said, that is not a language issue, that is a device issue. But >>> you completely blow it when you talk about the "stack" limitation. >>> Stacks don't need to be MBs. It's that simple. You are thinking in C >>> and the other algol derived languages, not Forth. >> >> I program mostly on small microcontrollers. These days, I see more >> devices with something like 128K ram, but I have done more than my fair >> share with 4K ram or less. No, I am /not/ thinking megabytes of space. >> But a 10 cell stack is /very/ limited. So is a 64 cell ram, and a 64 >> cell program rom - even taking into account the code space efficiency of >> Forth. I am not asking for MB here. > > Again, I don't mean to be rude, but saying a 10 cell stack is too small > shows a lack of understanding of Forth. You are working with your > experience in other languages, not Forth. > > I won't argue that 64 cells of RAM don't limit your applications, but > the GA144 doesn't have 64 cells of RAM. It has 144 * 64 cells. > External memory can be connected if needed. I won't argue this is not a > limitation, but it is not a brick wall. Again, I suggest you stop > comparing the GA144 to the other processors you have worked with and > consider what *can* be done with it. What do you know that *has* been > done with the GA144?
I think that we are actually mostly in agreement here, but using vague terms so it looks like we are saying different things. We agree, I think, that 10 stack cells and 64 cells of RAM (which includes the user program code, as far as I can tell) is very limited. We agree that it is possible to do bigger tasks by combining lots of small cpus together.

And since the device is Turing complete, you can in theory do anything you want on it - given enough time and external memory.

The smallest microcontroller I worked with had 2KB flash, 64 bytes eeprom, a 3 entry return stack, and /no/ ram - just the 32 8-bit cpu registers. I programmed that in C. It was a simple program, but it did the job in hand. So yes, I appreciate that sometimes "very limited" is still big enough to be useful. But that does not stop it being very limited.
> > >>>>> It is the hardware >>>>> limitation of the CPU. The GA144 was designed with a different >>>>> philosophy. I would say for a different purpose, but it was not >>>>> designed >>>>> for *any* purpose. Chuck designed it as an experiment while exploring >>>>> the space of minimal hardware processors. The capapbilities come from >>>>> the high speed of each processor and the comms capability. >>>> >>>> Minimal systems can be interesting for theory, but are rarely of any >>>> use >>>> in practice. >>> >>> That comment would seem to indicate you are very familiar with minimal >>> systems. I suspect the opposite is true. I find minimal CPUs to be >>> *very* useful in FPGA designs allowing a "fast" processor to be >>> implemented in even very small amounts of logic. >>> >> >> If you have a specific limited task, then a small cpu can be very >> useful. Maybe you've got an FPGA connected to a DDR DIMM socket. A >> very small cpu might be the most convenient way to set up the memory >> strobe delays and other parameters, letting the FPGA work with a cleaner >> memory interface. But that is a case of a small cpu helping out a >> bigger system - it is not a case of using the small cpus alone. It is a >> different case altogether. > > I really don't follow your point here. I think a CPU would be a > terrible way to control a DDR memory other than in a GA144 with 700 MIPS > processors. I've never seen a CPU interface to DDR RAM without a > hardware memory controller. Maybe I'm just not understanding what you > are saying.
I am probably just picking a bad example here - please forget it. I was simply trying to think of a case where your main work would be done in fast FPGA logic, while you need a little "housekeeping" work done and a small cpu makes that flexible and space efficient despite being slower.
>> >> There are good reasons we don't use masses of tiny cpus instead of a few >> big ones - just as we don't use ants as workers. It is not just a >> matter of bias or unfamiliarity. > > Reasons you can't explain? >
Amdahl's law is useful here. Some tasks simply cannot be split into smaller parallel parts. You always reach a point where you cannot split them more, and you always reach a point where the overhead of dividing up the tasks and recombining the results costs more than the gains of splitting it up. Imagine, for example, a network router or filter. Packets come in, get checked or manipulated, and get passed out again. It is reasonable to split this up in parallel - 4 cpus at 1 GHz are likely to do as good a job as 1 cpu at 4 GHz. But what about 40 cpus at 100 MHz? Now you are going to get longer latencies, and have significant effort tracking the packets and computing resources - even though you have the same theoretical bandwidth. 400 cpus at 10 MHz? That would be even worse. If some data needs to be shared across the processing tasks, it is likely to be hopeless with so many cpus. And if you try to build the thing out of 8051 chips, it will never be successful no matter how many millions you use, if the devices don't have enough memory to hold a packet. Or to pick a simple analogy - sometimes a rock is more useful than a pile of sand.
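To put rough numbers on that (these figures are made up for illustration, not measured from any real router), Amdahl's law caps the speedup from N processors at 1 / (s + (1 - s)/N), where s is the fraction of the work that cannot be parallelised:

#include <stdio.h>

/* Amdahl's law: speedup(N) = 1 / (s + (1 - s) / N).
   s is the serial fraction of the work; illustrative numbers only. */
int main(void)
{
    const double s = 0.10;              /* assume 10% of the work is inherently serial */
    const int cpus[] = { 4, 40, 400 };

    for (int i = 0; i < 3; i++) {
        double speedup = 1.0 / (s + (1.0 - s) / cpus[i]);
        printf("%3d cpus -> speedup %.1f (hard limit %.0f)\n",
               cpus[i], speedup, 1.0 / s);
    }
    return 0;
}

With even 10% of the work serial, going from 40 cpus to 400 buys almost nothing, while every extra cpu still has to be fed, synchronised and debugged.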
> >>>>> I can't tell you how many people think FPGAs are complicated to >>>>> design, >>>>> power hungry and expensive. All three of these are not true. >>>>> >>>> >>>> That certainly /was/ the case. >>> >>> 20 years ago maybe. >>> >> >> A /lot/ less than 20 years ago. > > I designed a board almost a decade ago that was less than an inch wide > and 4 inches long that provided an analog/digital synchronized interface > for an IP networking card. It used a small, low power, low cost FPGA to > do all the heavy lifting and made me well over a million dollars. At > the time I built the board, that chip was already some three or four > years old. So there is an example that was over 12 years ago. Other > FPGAs that fit the same criteria were from closer to 2000 or 17 years > ago. I didn't use them because I wanted to maximize the lifespan of the > board.
Again, I think our apparent disagreement is just a matter of using vague terms that we each interpret slightly differently.
> > >>>> But yes, for a good while now there have >>>> been cheap and low power FPGAs available. As for complicated to design >>>> - well, I guess it's easy when you know how. But you do have to know >>>> what you are doing. >>> >>> MCUs are no different. A newbie will do a hack job. I once provided >>> some assistance to a programmer who needed to spin an FPGA design for >>> his company. They wouldn't hire me to do it because they wanted to >>> develop the ability in house. With minimal assistance (and I mean >>> minimal) he first wrote a "hello, world" program for the FPGA. He then >>> went on to write his application. >>> >>> The only tricky parts of programming FPGAs is when you need to optimize >>> either speed or capacity or worse, both! But I believe the exact same >>> thing is true about MCUs. Most projects can be done by beginners and >>> indeed *are* done by beginners. That has been my experience. In fact, >>> that is the whole reason for the development and use of the various >>> tools for programming, making them usable by programmers with lesser >>> skills, enabling a larger labor pool at a lower price. >>> >>> The only magic in FPGA design is the willingness to wade into the waters >>> and get your feet wet. >>> >> >> I will happily agree that FPGA design is not as hard as many people >> think. However, I do think it is harder to learn and harder to get >> write than basic microcontroller programming. The key difference is >> that with microcontrollers, you are (mostly) doing one thing at a time >> all in one place on the chip - with FPGAs, you are doing everything at >> once but in separate parts of the chip. I think the serial execution is >> a more familiar model to people - we are used to doing one thing at a >> time, but being able to do many different tasks at different times. The >> FPGA model is more like workers on a production line, and that takes >> time to understand for an individual. > > What you just described is what makes FPGAs so easy to use. The serial > execution in a processor to emulate parallel tasks is what makes CPUs so > hard to use and supposedly what makes the XMOS so useful. FPGAs make > parallelism easy with literally no thinking as the language and the > tools are designed from the ground up for that. > > I like to say, whoever came up with the name for water wasn't a fish. > In FPGAs no one even thinks about the fact that parallelism is being > used... unless they aren't fish, meaning software people can have some > difficulty realizing they aren't on land anymore and going with the flow. >
You have been designing with FPGAs for decades - that can make it hard to understand why other people may find them difficult. I have done a few CPLD/FPGA designs over the years - not many, but enough to be happy with working with them. For people used to sequential programming, however, they appear hard - you have to think in a completely different way. It is not so much that thinking in parallel is harder than thinking in serial (though I believe it is), it is that it is /different/.

<snip>
>>> There you go with the extremes again. Colorforth isn't designed >>> "solely" for people with bad eyesight. It is designed to be as useful >>> as possible. It is clear you have not learned enough about it to know >>> what is good and what is bad. You took one quick look at it and turned >>> away. >> >> I gave it several good looks. I have also given Forth a good look over >> a number of times in the past few decades. It has some attractions, and >> I would be happy if it were a practical choice for a lot of development. >> It is always better when there is a choice - of chips, tools, >> languages, whatever. But Forth just does not have what I need - not by >> a long shot. What you take to be animosity, ignorance or bias here is >> perhaps as much a result of frustration and a feeling of disappointment >> that Forth is not better. > > I will say you have expressed your unhappiness with Forth without > explaining what was lacking other than vague issues (like the look of > Colorforth) and wanting it to be like other languages you are more used > to. If you want the other languages, what is missing from them that you > are still looking?
I am not sure exactly what you are asking here, but if we are going to bring in other languages, I think perhaps that would be a topic for a new thread some other time. It could be a very interesting discussion for comp.arch.embedded (less so for comp.lang.forth). However, I feel this thread is big enough as it is!
> >>>>>>> The use of color to indicate aspects of the language is pretty much >>>>>>> the >>>>>>> same as the color highlighting I see in nearly every modern >>>>>>> editor. The >>>>>>> difference is that in ColorForth the highlighting is *part* of the >>>>>>> language as it distinguishes when commands are executed. >>>>>> >>>>>> It is syntax highlighting. >>>>> >>>>> No, it is functional, not just illustrating. It is in the *language*, >>>>> not just the editor. It's all integrated, not in the way the tools >>>>> in a >>>>> GUI are integrated, but in the way your heart, lungs and brain are >>>>> integrated. >>>>> >>>> >>>> No, it is syntax highlighting. >>>> >>>> There is a 4 bit "colour token" attached to each symbol. These >>>> distinguish between variables, comments, word definitions, etc. There >>>> is /nothing/ that this gives you compared to, say, $ prefixes for >>>> variables (like PHP), _t suffixes for types (common convention in C), >>>> etc., with colour syntax highlighting. The only difference is that the >>>> editor hides the token. So when you have both var_foo and word_foo, >>>> they are both displayed as "foo" in different colours rather than >>>> "var_foo" and "word_foo" in different colours. >>>> >>>> That is all there is to it. >>> >>> You just said it is more than syntax highlighting. It is like type >>> definitions in other languages. It is built into the language which >>> won't work without it. That's the part you aren't getting. Compare >>> Colorforth to ANSI Forth and you will see what I mean. >>> >> >> It tags that you see by colour instead of as symbols or letters. >> Glorified syntax highlighting. > > You can't get past the color highlighting. It's not about the color. > It's about the fact that parts of the language have different uses. > Color highlighting in other languages are just a nicety of the editor. > The tokens in Colorforth are fundamental to the language. The color is > used to indicate what is what, but color is not the point. >
Again, the tokens are nothing special. In most languages, the role is filled by keywords, symbols or other features of the grammar - but there is nothing here that is fundamentally different.

I haven't looked up a list of token types, but for the sake of argument let's say that there is one indicating that something is a variable shown in green, one indicating a word definition shown in red, and one indicating a compile-time action shown in blue. And you have a name "foo" that exists in all these contexts.

You can show the different uses by displaying "foo" in different colours. You can store it in code memory using a 4 bit token tag. You could write it using keywords VAR, DEF and COMP before the identifier "foo". You could use symbols $, : and # before the identifier to show the difference. You could use other aspects of a language's grammar to determine the difference. You could use the position within the line of the code file to make the difference. You could simply say that the same identifier cannot be used for different sorts of token, and the token type is fixed when the identifier is created.

The existence of different kinds of tokens for different uses is (at least) as old as programming languages. Distinguishing them in different ways is equally old.

Yes, the use of colour as a way to show this is not really relevant. However, it is not /me/ that is fussing about it - look at the /name/ of this "marvellous new" Forth. It is called "colorFORTH".
>>> Here is a perfect example of why you think Forth has not evolved. There >>> is nothing in even the earliest Forth that precludes this computation >>> from being done at compile time. So how do you improve on perfection? >>> <grin> >> >> Hey, I never claimed C was perfect! > > That you are not perfect goes without saying... <g>
No, no - /C/ is not perfect. But that does not mean /I/ am not :-)
>> >> We do a fair amount of business taking people's bashed-together Arduino >> prototypes and turning them into robust industrialised and professional >> products. > > Yep, but they developed using the Arduino and they sell lots of them, > likely a lot more than you sell of your industrialized products.
The people that come to us may use Arduino or Pi's for prototyping, but it is the industrial versions they sell (otherwise there would be no point coming to us!). But no, we don't sell as many units as mass produced cheap devices do.
> > >>>> (If you have good answers here, maybe you will change my mind - at >>>> least >>>> a little!) >>> >>> Just as in other languages, like ADA and VHDL (both strongly typed) you >>> would need to write different code. >>> >>> I'm not interested in changing your mind, only in showing you your >>> misunderstandings about Forth. I'm not actually the right person for >>> the job being a relative amateur with Forth, so I crossposted to the >>> Forth group so others could do a better job. That may bring in some >>> wild cards however as discussions in the Forth group often go awry. >> >> I appreciate the conversation, and have found this thread enlightening, >> educational and interesting - even when we disagree. > > We don't learn much if we just agree. I'm glad we could disagree > without making it an argument. The other Forth users are much more > experienced than I am. They will likely have much better info although > not too many actually use Colorforth. Many have learned about it though > to learn from it. >
On 20.6.2017 г. 13:25, Tom Gardner wrote:
> On 20/06/17 10:36, dp wrote: >> On Tuesday, June 20, 2017 at 11:44:39 AM UTC+3, David Brown wrote: >>> On 20/06/17 09:25, Dimiter_Popoff wrote: >>>> On 20.6.2017 &#1075;. 07:12, rickman wrote: >>>>> .... >>>>> >>>>> So you think for a language to be modern it has to have hard coded >>>>> data >>>>> sizes? >>> >>> If you are going to use it for low-level programming and embedded >>> development, then yes. >> >> No. If you don't know which datum of which size and type is you are >> not up to programming. Delegating to the tool to track that means >> only more work for the programmer, sometimes a lot more while tracking >> what the tool got wrong _this_ time. > > Er. No. > > David is suggesting, correctly IMNSHO, that sometimes > it is necessary for me to specify exactly what the > tool has to achieve - and then to let the tool do it > in any way it sees fit. > > He gave a good example of that, which you snipped. > > With types such as uint8_t, uint_fast8_t and uint_least8_t, > modern C is a significant advance over K&R C.
It may be an improvement in C indeed. But this is not relevant to the main point: delegating simple tasks to the tool costs the programmer more, often a lot more effort than it returns.
> >>>> Programmers should use their ability to count not just for stack >>>> levels, >>>> it is a lot more effective than working to delegate the counting of >>>> this and that to a tool which is a lot less intelligent than the >>>> programmer. Just let the tool do the heavy lifting exercises, counting >>>> is not one of these. >>>> >>> >>> Counting is one of the tasks I expect a computer - and therefore a >>> programming language and a toolchain - to do well. I expect the tool to >>> do the menial stuff and let the programmer get on with the thinking. >> >> If "counting" is too much of a workload is too much for a person he >> is not in the right job as a programmer (hopefully "counting" is not >> taken literally here, it means more "counting up to ten"). >> Delegating simple tasks to the tool makes life harder, not easier, >> as I said above. Often a lot harder. > > I can only easily deal with three numbers: 0, 1, many :) > All other numbers are a pain in the ass and I'm more > than happy to delegate them to a tool.
Well you snipped my example with the programmer and the cook, let me repost it:
>> Pretty much like a meal is supposed to be served nicely >> arranged on a plate for the consumer; however for the cook the >> ingredients are a lot more convenient in a raw form.
In the case where you let the tool do all the cooking for you, you relegate yourself to the role of the waiter if not the consumer. And if you have to do the job of the cook having kept yourself fit only in the skill set of a waiter - it will cost you a lot more time to do the job than it would have cost you had you not let your cooking skills decay.

Dimiter

======================================================
Dimiter Popoff, TGI http://www.tgi-sci.com
======================================================
http://www.flickr.com/photos/didi_tgi/
On 20/06/17 11:36, dp wrote:
> On Tuesday, June 20, 2017 at 11:44:39 AM UTC+3, David Brown wrote: >> On 20/06/17 09:25, Dimiter_Popoff wrote: >>> On 20.6.2017 &#1075;. 07:12, rickman wrote: >>>> .... >>>> >>>> So you think for a language to be modern it has to have hard coded data >>>> sizes? >> >> If you are going to use it for low-level programming and embedded >> development, then yes. > > No. If you don't know which datum of which size and type is you are > not up to programming.
The reason I know exactly which size a datum is, is because the type has a fixed size! If I need to know that "x" has 32 bits, I make sure of that fact by declaring "x" as a "uint32_t" or "int32_t" in C. I can do that, precisely because C supports such hard coded data sizes. In a language like Forth, or pre-C99 C, you can't do that portably. An "int" might be 16-bit, or maybe 32-bit, or maybe something weird - the same applies to a Forth "cell". You need pre-processor directives, conditional compilation, implementation-specific code, etc., in order to know for sure what sizes you are using.
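To make the contrast concrete, this is roughly the per-compiler guesswork a pre-C99 project has to carry in some config header, next to the C99 one-liner (the names u32 and u32_c99 are just illustrative, not from any particular codebase):

/* Pre-C99 style: hunt down a 32-bit type by hand for each compiler/target. */
#include <limits.h>
#if UINT_MAX == 0xFFFFFFFFu
typedef unsigned int  u32;
#elif ULONG_MAX == 0xFFFFFFFFu
typedef unsigned long u32;
#else
#error "No 32-bit unsigned type found for this target"
#endif

/* C99 and later: the standard header has already done that work, portably. */
#include <stdint.h>
typedef uint32_t u32_c99;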
> Delegating to the tool to track that means > only more work for the programmer, sometimes a lot more while tracking > what the tool got wrong _this_ time.
Are you seriously suggesting that sometimes compilers will get the sizes wrong? That if I ask a C compiler for an "int64_t", sometimes it will give me a different size? Or are you talking about more complex types?

If I define a struct that I know should match an external definition (hardware registers, telegram format, etc.) of a particular size, I can write:

typedef struct {
    uint16_t x;
    uint8_t  ys[6];
    ...
} reg_t;

static_assert(sizeof(reg_t) == 24, "Checking size of reg_t struct");

The /compiler/ does the counting and the checking. It does so easily and reliably, handles long and complicated structures, is portable across processors of different sizes, and will always give a clear and unmistakeable compile-time error message if there is a problem.
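For completeness, a fully self-contained version of that idea - the field names and the 24-byte figure are invented for the example, the layout assumes a typical 32-bit ABI, and static_assert comes from C11's <assert.h>:

#include <assert.h>   /* static_assert (C11) */
#include <stdint.h>

/* Hypothetical peripheral register block documented as 24 bytes long. */
typedef struct {
    uint16_t ctrl;
    uint8_t  ys[6];
    uint32_t baud;
    uint32_t status[3];
} reg_t;

/* The compiler does the counting: a wrong field size or a padding
   surprise becomes a compile-time error, not a run-time mystery. */
static_assert(sizeof(reg_t) == 24, "Checking size of reg_t struct");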
> >>> Programmers should use their ability to count not just for stack levels, >>> it is a lot more effective than working to delegate the counting of >>> this and that to a tool which is a lot less intelligent than the >>> programmer. Just let the tool do the heavy lifting exercises, counting >>> is not one of these. >>> >> >> Counting is one of the tasks I expect a computer - and therefore a >> programming language and a toolchain - to do well. I expect the tool to >> do the menial stuff and let the programmer get on with the thinking. > > If "counting" is too much of a workload is too much for a person he > is not in the right job as a programmer (hopefully "counting" is not > taken literally here, it means more "counting up to ten"). > Delegating simple tasks to the tool makes life harder, not easier, > as I said above. Often a lot harder.
You said it above - but you were wrong (IMHO).
> > Controlled types, sizes etc. by the machine are meant for the user, > not for the programmer. They are supposed to make these work, this *is* > their job. Pretty much like a meal is supposed to be served nicely > arranged on a plate for the consumer; however for the cook the > ingredients are a lot more convenient in a raw form. >
Think of having the compiler check sizes and do "counting" as like the thermostat and timer on the oven. The cook (or programmer) decides on the temperature and timing he wants, but the oven handles the boring bit of turning the elements on and off to get the right temperature, and warns the cook when the timer is done.
> Dimiter >
On 20/06/17 12:25, Tom Gardner wrote:

> I can only easily deal with three numbers: 0, 1, many :) > All other numbers are a pain in the ass and I'm more > than happy to delegate them to a tool.
There are three sorts of people in this world - those that can count, and those that can't.
On 20.6.2017 г. 13:57, David Brown wrote:
> On 20/06/17 11:36, dp wrote: >> On Tuesday, June 20, 2017 at 11:44:39 AM UTC+3, David Brown wrote: >>> On 20/06/17 09:25, Dimiter_Popoff wrote: >>>> On 20.6.2017 &#1075;. 07:12, rickman wrote: >>>>> .... >>>>> >>>>> So you think for a language to be modern it has to have hard coded data >>>>> sizes? >>> >>> If you are going to use it for low-level programming and embedded >>> development, then yes. >> >> No. If you don't know which datum of which size and type is you are >> not up to programming. > > The reason I know exactly which size a datum is, is because the type has > a fixed size! > > If I need to know that "x" has 32 bits, I make sure of that fact by > declaring "x" as a "uint32_t" or "int32_t" in C. I can do that, > precisely because C supports such hard coded data sizes. > > In a language like Forth, or pre-C99 C, you can't do that portably. An > "int" might be 16-bit, or maybe 32-bit, or maybe something weird - the > same applies to a Forth "cell". You need pre-processor directives, > conditional compilation, implementation-specific code, etc., in order to > know for sure what sizes you are using.
So C has improved. But this is only about C overcoming one of its shortcomings by adding more complexity for the programmer to deal with, which is exactly my point. You need to know what your compiler does with your data type only if you have to rely on it to deal with it instead of just dealing with it yourself when you know what it is anyway.
> >> Delegating to the tool to track that means >> only more work for the programmer, sometimes a lot more while tracking >> what the tool got wrong _this_ time. > > Are you seriously suggesting that sometimes compilers will get the sizes > wrong? That if I ask a C compiler for an "int64_t", sometimes it will > give me a different size?
Are you seriously suggesting that you have not spent well over half of your programming time figuring out what the compiler expects from you?
>.... >> >> Controlled types, sizes etc. by the machine are meant for the user, >> not for the programmer. They are supposed to make these work, this *is* >> their job. Pretty much like a meal is supposed to be served nicely >> arranged on a plate for the consumer; however for the cook the >> ingredients are a lot more convenient in a raw form. >> > > Think of having the compiler check sizes and do "counting" as like the > thermostat and timer on the oven. The cook (or programmer) decides on > the temperature and timing he wants, but the oven handles the boring bit > of turning the elements on and off to get the right temperature, and > warns the cook when the timer is done.
Then a month later the menu changes and you have to set the temperature to the one for the meal you did last month. Oops, what was it? Spend another week rediscovering that. Dimiter ====================================================== Dimiter Popoff, TGI http://www.tgi-sci.com ====================================================== http://www.flickr.com/photos/didi_tgi/
On 20/06/17 11:49, Dimiter_Popoff wrote:
> On 20.6.2017 &#1075;. 13:25, Tom Gardner wrote: >> On 20/06/17 10:36, dp wrote: >>> On Tuesday, June 20, 2017 at 11:44:39 AM UTC+3, David Brown wrote: >>>> On 20/06/17 09:25, Dimiter_Popoff wrote: >>>>> On 20.6.2017 &#1075;. 07:12, rickman wrote: >>>>>> .... >>>>>> >>>>>> So you think for a language to be modern it has to have hard coded >>>>>> data >>>>>> sizes? >>>> >>>> If you are going to use it for low-level programming and embedded >>>> development, then yes. >>> >>> No. If you don't know which datum of which size and type is you are >>> not up to programming. Delegating to the tool to track that means >>> only more work for the programmer, sometimes a lot more while tracking >>> what the tool got wrong _this_ time. >> >> Er. No. >> >> David is suggesting, correctly IMNSHO, that sometimes >> it is necessary for me to specify exactly what the >> tool has to achieve - and then to let the tool do it >> in any way it sees fit. >> >> He gave a good example of that, which you snipped. >> >> With types such as uint8_t, uint_fast8_t and uint_least8_t, >> modern C is a significant advance over K&R C. > > It may be an improvement in C indeed. But this is not relevant > to the main point: delegating simple tasks to the tool costs > the programmer more, often a lot more effort than it returns.
For the examples given you haven't demonstrated your point, and you have ignored the main point being made. To repeat the main point being made: it is better for me to specify (in the source code) what the tool has to achieve, and let the compiler decide how to achieve it. It is worse for me to /ambiguously/ specify (in the source code) what is required, and to implicitly point to the compiler's man pages - and probably to hope the next program maintainer uses the correct compiler flags.
>>>>> Programmers should use their ability to count not just for stack >>>>> levels, >>>>> it is a lot more effective than working to delegate the counting of >>>>> this and that to a tool which is a lot less intelligent than the >>>>> programmer. Just let the tool do the heavy lifting exercises, counting >>>>> is not one of these. >>>>> >>>> >>>> Counting is one of the tasks I expect a computer - and therefore a >>>> programming language and a toolchain - to do well. I expect the tool to >>>> do the menial stuff and let the programmer get on with the thinking. >>> >>> If "counting" is too much of a workload is too much for a person he >>> is not in the right job as a programmer (hopefully "counting" is not >>> taken literally here, it means more "counting up to ten"). >>> Delegating simple tasks to the tool makes life harder, not easier, >>> as I said above. Often a lot harder. >> >> I can only easily deal with three numbers: 0, 1, many :) >> All other numbers are a pain in the ass and I'm more >> than happy to delegate them to a tool. > > Well you snipped my example with the programmer and the cook, let > me repost it:
I ignored it because, like most analogies, its relevance is dubious and it encourages focusing the attention on /inapplicable/ details.

That has already started happening in another response, which has started to discuss menus/meals that people had last week!

It beats me how that is supposed to illuminate the benefits of stating the peripheral's structure and letting a compiler sort out how best to generate code for it.
David Brown wrote on 6/20/2017 6:36 AM:
> On 20/06/17 05:57, rickman wrote: >> David Brown wrote on 6/19/2017 10:23 AM: >>> On 19/06/17 15:19, rickman wrote: >>>> David Brown wrote on 6/19/2017 4:30 AM: >>>>> On 19/06/17 06:54, rickman wrote: > <snip> >>>> >>>> As has Forth. The 2012 standard is an improvement over the previous >>>> version, which is an improvement over the previous version to that and >>>> the initial ANSI version was an improvement over the multiple flavors of >>>> Forth prior to that for the standardization if nothing else. >>> >>> I have looked through the Forth 2012 standard. Nothing much has changed >>> in the language - a few words added, a few words removed. (Previous >>> revisions apparently had bigger changes, according to a list of >>> compatibility points.) >> >> I don't mean to be rude, but this shows your ignorance of Forth. In >> Forth, nearly everything is about the words. > > (I don't take it as rude - this has been a very civil thread, despite > differing opinions.)
I feel I use a fairly passive voice in conversations like this one. But sometimes people get torqued off about their reception of my rudeness.
> Yes, I know Forth is all about the words. But as far as I could tell, > Forth 2012 does not add many or remove many words - it makes little > change to what you can do with the language. > > And - IMHO - to make Forth a good choice for a modern programming > language, it would need to do more than that. As you say below, > however, that is not "what Forth is about".
So far you have only identified one thing Forth does not do that you would like: it doesn't have fixed-size data types. What other important things is it lacking?

There is at least one Forth programmer here who agrees with you about the data sizes. He feels many things in Forth should be nailed down rather than being left to the implementation. But people are able to get work done efficiently in spite of this.

I will say pointing out this issue is making me think. I can't think of a situation where this would actually create a problem. To allow the code to run on a 16 bit system that variable would need to use a double data type (double size integer, not a floating point type). It would then be a 64 bit type on a 32 bit system. Would that create a problem?
>>>>> Some embedded developers still stick to that old language, rather than >>>>> moving on to C99 with inline, booleans, specifically sized types, line >>>>> comments, mixing code and declarations, and a few other useful bits and >>>>> pieces. Again, C99 is a much better language. >>>>> >>>>> C11 is the current version, but does not add much that was not already >>>>> common in implementations. Static assertions are /very/ useful, and >>>>> the >>>>> atomic types have possibilities but I think are too little, too late. >>>> >>>> I think the real issue is you are very familiar with C while totally >>>> unfamiliar with Forth. >>> >>> I certainly can't claim to be unbiased - yes, I am very familiar with C >>> and very unfamiliar with Forth. I am not /totally/ unfamiliar - I >>> understand the principles of the stacks and their manipulation, the way >>> words are defined, and can figure out what some very simple words do, at >>> least for arithmetic and basic stack operations. And I am fine with >>> trying to get an understanding of how a language could be used even >>> though I don't understand the details. >> >> When addressing the issues you raise with Forth none of these things are >> what Forth is about. >> >> I don't know that you need to understand all the details of Forth to see >> it's power, but it would help if you understood how some parts of Forth >> work or at least could see how a significant app was written in Forth. >> Try learning how an assembler is usually written in Forth. This is easy >> to do as most Forths are provided with full source. >> >> > <snip> >>>>> >>>>> The size of the memories (data space, code space and stack space) is >>>>> the >>>>> most obvious limitation. >>>> >>>> As I said, that is not a language issue, that is a device issue. But >>>> you completely blow it when you talk about the "stack" limitation. >>>> Stacks don't need to be MBs. It's that simple. You are thinking in C >>>> and the other algol derived languages, not Forth. >>> >>> I program mostly on small microcontrollers. These days, I see more >>> devices with something like 128K ram, but I have done more than my fair >>> share with 4K ram or less. No, I am /not/ thinking megabytes of space. >>> But a 10 cell stack is /very/ limited. So is a 64 cell ram, and a 64 >>> cell program rom - even taking into account the code space efficiency of >>> Forth. I am not asking for MB here. >> >> Again, I don't mean to be rude, but saying a 10 cell stack is too small >> shows a lack of understanding of Forth. You are working with your >> experience in other languages, not Forth. >> >> I won't argue that 64 cells of RAM don't limit your applications, but >> the GA144 doesn't have 64 cells of RAM. It has 144 * 64 cells. >> External memory can be connected if needed. I won't argue this is not a >> limitation, but it is not a brick wall. Again, I suggest you stop >> comparing the GA144 to the other processors you have worked with and >> consider what *can* be done with it. What do you know that *has* been >> done with the GA144? > > I think that we are actually mostly in agreement here, but using vague > terms so it looks like we are saying different things. We agree, I > think that 10 stack cells and 64 cells RAM (which includes the user > program code, as far as I can tell) is very limited. We agree that it > is possible to do bigger tasks by combining lots of small cpus together. 
> And since the device is Turing complete, you can in theory do anything > you want on it - given enough time and external memory. > > The smallest microcontroller I worked with had 2KB flash, 64 bytes > eeprom, a 3 entry return stack, and /no/ ram - just the 32 8-bit cpu > registers. I programmed that in C. It was a simple program, but it did > the job in hand. So yes, I appreciate that sometimes "very limited" is > still big enough to be useful. But that does not stop it being very > limited.
Yes, if you need more than a few k of RAM, the GA144 needs external RAM. But that can be accommodated. The point is there is more than one way to skin a cat. Thinking in terms of how other processors do a job and trying to make the GA144 do the same job in the same way won't work. It has capabilities far beyond what people see in it.
>>>>>> It is the hardware >>>>>> limitation of the CPU. The GA144 was designed with a different >>>>>> philosophy. I would say for a different purpose, but it was not >>>>>> designed >>>>>> for *any* purpose. Chuck designed it as an experiment while exploring >>>>>> the space of minimal hardware processors. The capapbilities come from >>>>>> the high speed of each processor and the comms capability. >>>>> >>>>> Minimal systems can be interesting for theory, but are rarely of any >>>>> use >>>>> in practice. >>>> >>>> That comment would seem to indicate you are very familiar with minimal >>>> systems. I suspect the opposite is true. I find minimal CPUs to be >>>> *very* useful in FPGA designs allowing a "fast" processor to be >>>> implemented in even very small amounts of logic. >>>> >>> >>> If you have a specific limited task, then a small cpu can be very >>> useful. Maybe you've got an FPGA connected to a DDR DIMM socket. A >>> very small cpu might be the most convenient way to set up the memory >>> strobe delays and other parameters, letting the FPGA work with a cleaner >>> memory interface. But that is a case of a small cpu helping out a >>> bigger system - it is not a case of using the small cpus alone. It is a >>> different case altogether. >> >> I really don't follow your point here. I think a CPU would be a >> terrible way to control a DDR memory other than in a GA144 with 700 MIPS >> processors. I've never seen a CPU interface to DDR RAM without a >> hardware memory controller. Maybe I'm just not understanding what you >> are saying. > > I am probably just picking a bad example here - please forget it. I was > simply trying to think of a case where your main work would be done in > fast FPGA logic, while you need a little "housekeeping" work done and a > small cpu makes that flexible and space efficient despite being slower.
Sure, my app from 10 years ago would have been perfect to illustrate the utility of combining fast logic with a (relatively) slow CPU. At one point we were facing a limit to the available gates in the FPGA and the solution would have been replacing the slower logic with a small stack CPU, but it didn't come to that. I was able to push the utilization to around 90% without a problem.
>>> There are good reasons we don't use masses of tiny cpus instead of a few >>> big ones - just as we don't use ants as workers. It is not just a >>> matter of bias or unfamiliarity. >> >> Reasons you can't explain? >> > > Amdahl's law is useful here. Some tasks simply cannot be split into > smaller parallel parts. You always reach a point where you cannot split > them more, and you always reach a point where the overhead of dividing > up the tasks and recombining the results costs more than the gains of > splitting it up.
Amdahl's law doesn't apply. Tasks aren't being split into "parallel" parts for the sake of being parallel any more than in an FPGA where every LUT and FF operates in parallel. If you run out of speed in a GA144 CPU you can split the code between two or three CPUs. If you run out of RAM you can split the task over several CPUs to use more RAM.
> Imagine, for example, a network router or filter. Packets come in, get > checked or manipulated, and get passed out again. It is reasonable to > split this up in parallel - 4 cpus at 1 GHz are likely to do as good a > job as 1 cpu at 4 GHz. But what about 40 cpus at 100 MHz? Now you are > going to get longer latencies, and have significant effort tracking the > packets and computing resources - even though you have the same > theoretical bandwidth. 400 cpus at 10 MHz? That would be even worse. > If some data needs to be shared across the processing tasks, it is > likely to be hopeless with so many cpus. And if you try to build the > thing out of 8051 chips, it will never be successful no matter how many > millions you use, if the devices don't have enough memory to hold a packet.
You are not designing the code to effectively suit the chip. In the GA144 the comms channels allow data to be passed as easily as writing to memory. Break your task into small pieces that each do part of the task. The packets work through the CPUs and out the other end. Where is the problem?
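I don't have GA144 code to show, but the shape being described - each node doing one small piece of work and handing the packet on - looks roughly like this C sketch. It is purely illustrative; on the real chip each stage would be a separate node and the hand-off a comms-channel write rather than a function call:

#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint8_t data[64];   /* illustrative packet buffer */
    size_t  len;
    int     drop;       /* a stage sets this to discard the packet */
} packet_t;

/* Each stage does one small job - roughly one node's worth of work. */
static void stage_checksum(packet_t *p) { (void)p; /* verify header, set p->drop on error */ }
static void stage_filter(packet_t *p)   { (void)p; /* apply filter rules */ }
static void stage_rewrite(packet_t *p)  { (void)p; /* rewrite addresses/fields */ }

typedef void (*stage_fn)(packet_t *);
static const stage_fn pipeline[] = { stage_checksum, stage_filter, stage_rewrite };

static void process_packet(packet_t *p)
{
    /* On the GA144 each call below would instead be a hand-off to the next node. */
    for (size_t i = 0; i < sizeof pipeline / sizeof pipeline[0] && !p->drop; i++)
        pipeline[i](p);
}

int main(void)
{
    packet_t p = { .len = 0, .drop = 0 };
    process_packet(&p);
    return p.drop;
}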
> Or to pick a simple analogy - sometimes a rock is more useful than a > pile of sand.
Concrete with sand is better than rock any day.
>>>>>> I can't tell you how many people think FPGAs are complicated to >>>>>> design, >>>>>> power hungry and expensive. All three of these are not true. >>>>>> >>>>> >>>>> That certainly /was/ the case. >>>> >>>> 20 years ago maybe. >>>> >>> >>> A /lot/ less than 20 years ago. >> >> I designed a board almost a decade ago that was less than an inch wide >> and 4 inches long that provided an analog/digital synchronized interface >> for an IP networking card. It used a small, low power, low cost FPGA to >> do all the heavy lifting and made me well over a million dollars. At >> the time I built the board, that chip was already some three or four >> years old. So there is an example that was over 12 years ago. Other >> FPGAs that fit the same criteria were from closer to 2000 or 17 years >> ago. I didn't use them because I wanted to maximize the lifespan of the >> board. > > Again, I think our apparent disagreement is just a matter of using vague > terms that we each interpret slightly differently.
I thought we were in agreement on this one. Lattice and others started making small, low power, low cost FPGAs over 15 years ago.
>>>>> But yes, for a good while now there have >>>>> been cheap and low power FPGAs available. As for complicated to design >>>>> - well, I guess it's easy when you know how. But you do have to know >>>>> what you are doing. >>>> >>>> MCUs are no different. A newbie will do a hack job. I once provided >>>> some assistance to a programmer who needed to spin an FPGA design for >>>> his company. They wouldn't hire me to do it because they wanted to >>>> develop the ability in house. With minimal assistance (and I mean >>>> minimal) he first wrote a "hello, world" program for the FPGA. He then >>>> went on to write his application. >>>> >>>> The only tricky parts of programming FPGAs is when you need to optimize >>>> either speed or capacity or worse, both! But I believe the exact same >>>> thing is true about MCUs. Most projects can be done by beginners and >>>> indeed *are* done by beginners. That has been my experience. In fact, >>>> that is the whole reason for the development and use of the various >>>> tools for programming, making them usable by programmers with lesser >>>> skills, enabling a larger labor pool at a lower price. >>>> >>>> The only magic in FPGA design is the willingness to wade into the waters >>>> and get your feet wet. >>>> >>> >>> I will happily agree that FPGA design is not as hard as many people >>> think. However, I do think it is harder to learn and harder to get >>> write than basic microcontroller programming. The key difference is >>> that with microcontrollers, you are (mostly) doing one thing at a time >>> all in one place on the chip - with FPGAs, you are doing everything at >>> once but in separate parts of the chip. I think the serial execution is >>> a more familiar model to people - we are used to doing one thing at a >>> time, but being able to do many different tasks at different times. The >>> FPGA model is more like workers on a production line, and that takes >>> time to understand for an individual. >> >> What you just described is what makes FPGAs so easy to use. The serial >> execution in a processor to emulate parallel tasks is what makes CPUs so >> hard to use and supposedly what makes the XMOS so useful. FPGAs make >> parallelism easy with literally no thinking as the language and the >> tools are designed from the ground up for that. >> >> I like to say, whoever came up with the name for water wasn't a fish. >> In FPGAs no one even thinks about the fact that parallelism is being >> used... unless they aren't fish, meaning software people can have some >> difficulty realizing they aren't on land anymore and going with the flow. >> > > You have been designing with FPGAs for decades - that can make it hard > to understand why other people may find them difficult. I have done a > few CPLD/FPGA designs over the years - not many, but enough to be happy > with working with them. For people used sequential programming, > however, they appear hard - you have to think in a completely different > way. It is not so much that thinking in parallel is harder than > thinking in serial (though I believe it is), it is that it is /different/.
Different doesn't need to be hard. It is only hard if people won't allow themselves to learn something new. That's my point. Using FPGAs isn't hard, people make it hard by thinking it is the same as CPUs. It's actually easier.
>>>> There you go with the extremes again. Colorforth isn't designed >>>> "solely" for people with bad eyesight. It is designed to be as useful >>>> as possible. It is clear you have not learned enough about it to know >>>> what is good and what is bad. You took one quick look at it and turned >>>> away. >>> >>> I gave it several good looks. I have also given Forth a good look over >>> a number of times in the past few decades. It has some attractions, and >>> I would be happy if it were a practical choice for a lot of development. >>> It is always better when there is a choice - of chips, tools, >>> languages, whatever. But Forth just does not have what I need - not by >>> a long shot. What you take to be animosity, ignorance or bias here is >>> perhaps as much a result of frustration and a feeling of disappointment >>> that Forth is not better. >> >> I will say you have expressed your unhappiness with Forth without >> explaining what was lacking other than vague issues (like the look of >> Colorforth) and wanting it to be like other languages you are more used >> to. If you want the other languages, what is missing from them that you >> are still looking? > > I am not sure exactly what you are asking here, but if we are going to > bring in other languages, I think perhaps that would be a topic for a > new thread some other time. It could be a very interesting discussion > for comp.arch.embedded (less so for comp.lang.forth). However, I feel > this thread is big enough as it is! > >> >>>>>>>> The use of color to indicate aspects of the language is pretty much >>>>>>>> the >>>>>>>> same as the color highlighting I see in nearly every modern >>>>>>>> editor. The >>>>>>>> difference is that in ColorForth the highlighting is *part* of the >>>>>>>> language as it distinguishes when commands are executed. >>>>>>> >>>>>>> It is syntax highlighting. >>>>>> >>>>>> No, it is functional, not just illustrating. It is in the *language*, >>>>>> not just the editor. It's all integrated, not in the way the tools >>>>>> in a >>>>>> GUI are integrated, but in the way your heart, lungs and brain are >>>>>> integrated. >>>>>> >>>>> >>>>> No, it is syntax highlighting. >>>>> >>>>> There is a 4 bit "colour token" attached to each symbol. These >>>>> distinguish between variables, comments, word definitions, etc. There >>>>> is /nothing/ that this gives you compared to, say, $ prefixes for >>>>> variables (like PHP), _t suffixes for types (common convention in C), >>>>> etc., with colour syntax highlighting. The only difference is that the >>>>> editor hides the token. So when you have both var_foo and word_foo, >>>>> they are both displayed as "foo" in different colours rather than >>>>> "var_foo" and "word_foo" in different colours. >>>>> >>>>> That is all there is to it. >>>> >>>> You just said it is more than syntax highlighting. It is like type >>>> definitions in other languages. It is built into the language which >>>> won't work without it. That's the part you aren't getting. Compare >>>> Colorforth to ANSI Forth and you will see what I mean. >>>> >>> >>> It tags that you see by colour instead of as symbols or letters. >>> Glorified syntax highlighting. >> >> You can't get past the color highlighting. It's not about the color. >> It's about the fact that parts of the language have different uses. >> Color highlighting in other languages are just a nicety of the editor. >> The tokens in Colorforth are fundamental to the language. The color is >> used to indicate what is what, but color is not the point. 
>> > > Again, the tokens are nothing special. In most languages, the role is > filled by keyboards, symbols or other features of the grammar - but > there is nothing here that is fundamentally different. > > I haven't looked up a list of token types, but for the sake of argument > let's say that there is one indicating that something is a variable > shown in green, one indicating a word definition shown in red, and one > indicating a compile-time action shown in blue. And you have a name > "foo" that exists in all these contexts.
The devil is in the details. Making up examples won't cut it. It's not about simple syntax highlighting. The important stuff is when something is executed. The fact that Forth can do this makes it very powerful.
> You can show the different uses by displaying "foo" in different > colours. You can store it in code memory using a 4 bit token tag. You > could write it using keywords VAR, DEF and COMP before the identifier > "foo". You could use symbols $, : and # before the identifier to show > the difference. You could use other aspects of a language's grammar to > determine the difference. You could use the position within the line of > the code file to make the difference. You could simply say that the > same identifier cannot be used for different sorts of token, and the > token type is fixed when the identifier is created. > > The existence of different kinds of tokens for different uses is (at > least) as old as programming languages. Distinguishing them in > different ways is equally old. > > Yes, the use of colour as a way to show this is not really relevant. > However, it is not /me/ that is fussing about it - look at the /name/ of > this "marvellous new" Forth. It is called "colorFORTH".
Who said it is a "marvellous[sic] new" Forth?
>>> We do a fair amount of business taking people's bashed-together Arduino >>> prototypes and turning them into robust industrialised and professional >>> products. >> >> Yep, but they developed using the Arduino and they sell lots of them, >> likely a lot more than you sell of your industrialized products. > > The people that come to us may use Arduino or Pi's for prototyping, but > it is the industrial versions they sell (otherwise there would be no > point coming to us!). But no, we don't sell as many units as mass > produced cheap devices do.
So don't knock them. I'd love to be producing things like Arduinos that sell themselves rather than things that I have to pound the pavement to find users for.

I know there are lots of people who will never like Forth. It is more of a tool than a language. Its power lies in being very malleable, allowing things to be done that are hard in other languages. I'm not an expert Forth programmer, so I can't explain all the ways it works better than other languages. The main thing I like is that it is interactive, allowing me to interact with the hardware I build and construct interfaces from the bottom up, testing as I go. Some of the details of using it can be clumsy actually, but it is still very useful for what I do.

-- Rick C
albert@cherry.spenarnc.xs4all.nl (Albert van der Horst) writes:
>Interestingly, Java is supposed to be safe. I've seen dozens of >discussions of Euler problems of Java problems who had problems >with overflow and had wasted time debugging that.
Java has a BigInteger class, which seems ideal for dealing with big integers. I would not expect overflow problems when they use this class.
>That was never a time waster, >because that is the first thing to look at in such a case,
That may make the difference. The Java programmer may not have expected the overflow. - anton -- M. Anton Ertl http://www.complang.tuwien.ac.at/anton/home.html comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html New standard: http://www.forth200x.org/forth200x.html EuroForth 2017: http://www.euroforth.org/ef17/
On 20.6.2017 г. 14:54, Tom Gardner wrote:
> On 20/06/17 11:49, Dimiter_Popoff wrote: >> On 20.6.2017 &#1075;. 13:25, Tom Gardner wrote: >>> On 20/06/17 10:36, dp wrote: >>>> On Tuesday, June 20, 2017 at 11:44:39 AM UTC+3, David Brown wrote: >>>>> On 20/06/17 09:25, Dimiter_Popoff wrote: >>>>>> On 20.6.2017 &#1075;. 07:12, rickman wrote: >>>>>>> .... >>>>>>> >>>>>>> So you think for a language to be modern it has to have hard coded >>>>>>> data >>>>>>> sizes? >>>>> >>>>> If you are going to use it for low-level programming and embedded >>>>> development, then yes. >>>> >>>> No. If you don't know which datum of which size and type is you are >>>> not up to programming. Delegating to the tool to track that means >>>> only more work for the programmer, sometimes a lot more while tracking >>>> what the tool got wrong _this_ time. >>> >>> Er. No. >>> >>> David is suggesting, correctly IMNSHO, that sometimes >>> it is necessary for me to specify exactly what the >>> tool has to achieve - and then to let the tool do it >>> in any way it sees fit. >>> >>> He gave a good example of that, which you snipped. >>> >>> With types such as uint8_t, uint_fast8_t and uint_least8_t, >>> modern C is a significant advance over K&R C. >> >> It may be an improvement in C indeed. But this is not relevant >> to the main point: delegating simple tasks to the tool costs >> the programmer more, often a lot more effort than it returns. > > For the examples given you haven't demonstrated your point, > and you have ignored the main point being made. > > To repeat the main point being made: it is better for me > to specify (in the source code) what the tool has to achieve, > and let the compiler decide how to achieve it. It is worse > for me to /ambiguously/ specify (in the source code) what > is required, and to implicitly point to the compiler's man > pages - and probably to hope the next program maintainer uses > the correct compiler flags.
Oh I did make my point all right - check my first post in the thread. The examples are completely beside my point; they are about C specifics, sort of.
> >>>>>> Programmers should use their ability to count not just for stack >>>>>> levels, >>>>>> it is a lot more effective than working to delegate the counting of >>>>>> this and that to a tool which is a lot less intelligent than the >>>>>> programmer. Just let the tool do the heavy lifting exercises, >>>>>> counting >>>>>> is not one of these. >>>>>> >>>>> >>>>> Counting is one of the tasks I expect a computer - and therefore a >>>>> programming language and a toolchain - to do well. I expect the >>>>> tool to >>>>> do the menial stuff and let the programmer get on with the thinking. >>>> >>>> If "counting" is too much of a workload is too much for a person he >>>> is not in the right job as a programmer (hopefully "counting" is not >>>> taken literally here, it means more "counting up to ten"). >>>> Delegating simple tasks to the tool makes life harder, not easier, >>>> as I said above. Often a lot harder. >>> >>> I can only easily deal with three numbers: 0, 1, many :) >>> All other numbers are a pain in the ass and I'm more >>> than happy to delegate them to a tool. >> >> Well you snipped my example with the programmer and the cook, let >> me repost it: > > I ignored it because, like most analogies, its > relevance is dubious and encourage focusing the > attention on /inapplicable/ details. > > That has already started happening in another > response which has started to discuss menus/meals > that people had last week!
Like it or not, thinking is about making analogies. I realize my point is probably doomed to never come across to the vast majority of programmers today, sort of like trying to explain colours to a blind person (absolutely no insult meant here, I know I am talking to intelligent people, just wrestling to make a point).

What I see from where I stand is that C as a language - not as a compiler, toolchain quality etc. - costs the programmer a lot more work than needed. One of the reasons for that is the fact that the programmer has to delegate to the toolchain a lot of trivial "no brainer" work, and _this_ costs a significant, at times a prohibitive, effort.

How do I make my point to people who have never been really fluent in a lower level language which does not have the ugliness of a poor underlying model etc... A lost cause I guess.
> > I beats me how that is supposed to illuminate > the benefits of stating the peripheral's structure > and letting a compiler sort out how best to generate > code for it. >
It is not supposed to illuminate that. It is supposed to demonstrate that while wrestling with the compiler to make it do this or that, one can often waste a lot more time than one would if that step could be omitted.

Dimiter

======================================================
Dimiter Popoff, TGI http://www.tgi-sci.com
======================================================
http://www.flickr.com/photos/didi_tgi/
