EmbeddedRelated.com
Forums
The 2026 Embedded Online Conference

Language feature selection

Started by Don Y March 5, 2017
Tom Gardner <spamjunk@blueyonder.co.uk> writes:
> OK, he was a good teacher (but not a good mathematician, he knew his
> limits!), but even so I've never understood why people think calculus
> is inherently difficult.
Don't know if that was a pun, but yes, the usual stumbling block is understanding limits (for every epsilon there is a delta etc). It's often the first encounter students have with that type of treatment.
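The epsilon-delta treatment mentioned above, spelled out as the standard textbook formulation (not from the thread itself):

```latex
\lim_{x \to a} f(x) = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\; \exists \delta > 0 :\;
0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon
% Worked instance: f(x) = 2x, a = 1, L = 2.  Given \varepsilon,
% choose \delta = \varepsilon / 2; then
% |2x - 2| = 2\,|x - 1| < 2\delta = \varepsilon.
```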
On 14/03/17 09:14, Paul Rubin wrote:
> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>> OK, he was a good teacher (but not a good mathematician, he knew his
>> limits!), but even so I've never understood why people think calculus
>> is inherently difficult.
>
> Don't know if that was a pun, but yes, the usual stumbling block is
> understanding limits (for every epsilon there is a delta etc). It's
> often the first encounter students have with that type of treatment.
It wasn't /intended/ to be a pun; too early in the morning :) That teacher realised while doing maths at university that he was never going to be a professional mathematician. OTOH the teacher who had invented a new way of generating Pythagorean triads was a seriously cryptic teacher.
On 13/03/17 19:14, Walter Banks wrote:
> On 2017-03-12 7:15 PM, Tom Gardner wrote:
>> That's why the necessity of having "language lawyers" to unravel the
>> arcane complexity of modern C/C++ and tools implies that modern C/C++
>> is part of the problem as much as it is part of the solution.
>
> Having been at quite a few WG-14 meetings, C/C++ issues are a
> combination of some level of compatibility with old tools and dealing with
> new emerging code generation requirements. Thrown in for good measure
> was C "Conventional wisdom" (Everyone knows that C does "this").
>
> It is hard to advance a language with that type of background. Many
> things are "implementation defined", it is difficult to extend the
> language, and many legacy items remain obscure.
>
> The open source folks for the most part didn't participate and created
> their own variation. Conforming test suites have for the most part
> been commercial products that are very useful in creating
> conforming compilers, but the implementation-defined and conventional
> wisdom aspects tend to impact the language's portability.
In the earlier days of C standards, there was not much participation from open source folks. Much of the WG were people with rather specialised focus (look at yourself - your own compilers are mostly aimed at the most C-unfriendly processors around, and are full of non-standard extensions to get the most from such cpus). The standardisation process was slow, it was extremely difficult to get new ideas into the language, it is almost impossible to throw out old compatibilities (such as for one's complement machines), and the bureaucratic factor was high.

Those involved in open source compilers (which basically meant "gcc" at the time) were more interested in writing tools that worked, with extensions to suit their users - you could well say they were not mature enough members of the industry to see the importance of the standardisation process.

That changed with time, and both gcc and clang folks are involved with language standardisation groups (mainly C++ rather than C, since that is where most of the work is). And both gcc and clang work hard to support the standards (while also supporting a number of extensions). There are no compilers, commercial or open source, that come close to gcc in standards support for a broad range of targets.

Most commercial embedded C compilers struggle enough with C99 and C++03, and haven't considered C11, C++11 or C++14. Most don't even document which C or C++ standards they are close to supporting - it's all just "C" or "C++" to them. Most of them rely on extensions (as distinct from /allowing/ extensions to improve code). And most commercial C compiler developers have never even heard of conformance test suites like Plum Hall, never mind considered using them. (Several companies will provide you with Plum Hall certificates for gcc - for a price. These companies also work to fix any non-conformances they find in gcc.)
> C isn't particularly portable in many applications.
A lot of embedded C code is closely tied to the target - but some parts can be made highly portable. C covers a wide range of possibilities here.
On 3/14/2017 1:48 AM, Tom Gardner wrote:
> On 14/03/17 02:23, Paul Rubin wrote:
>> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>>> "When I was 14 I thought my father was an idiot. When I became 21 I
>>> was amazed at how much he had learned in the past 7 years"
>>
>> One of the profs at my old school said at his retirement dinner "I've
>> been teaching these kids freshman calculus for FORTY YEARS and they
>> still don't get it!".
>
> We learned integration and differentials of polynomials
> except 1/x for exams at 15. I can still visualise the
> teacher taking a double period (80 mins) for each,
> deriving the concepts from first principles.
I'd already had two years (not semesters) of calculus before going to college (despite having skipped the last year of high school). A lot depends on the quality of the local (public!) school system and the types of teachers made available to you.
> OK, he was a good teacher (but not a good mathematician, > he knew his limits!), but even so I've never understood > why people think calculus is inherently difficult.
Many of SWMBO's female "artist" friends can't "get" perspective. This completely threw me for a loop: it's the *easiest* aspect of drawing -- it's MECHANICAL in nature, well-behaved, "linear", etc.! So "basic" that it's difficult to know where to begin when asked to explain it. They periodically pester me to give a "class" on it but I'm not keen on spending that much time with a bunch of female "artist types" :< (wrong side of the brain)
On 03/14/2017 03:11 AM, Niklas Holsti wrote:
> On 17-03-14 00:27 , Brian G. Lucas wrote:
> ... SNIP
>> And yes, records have "structural equivalence". My language does not have as
>> strict types as Ada. In part, that is to allow interoperability with C.
>
> I don't see how structural equivalence or weaker typing helps interoperability.
> If one cannot use the same source text to declare a type in C and in the
> other language (or use a tool that translates or compares the two declarations)
> one must still be very careful to declare the types in equivalent ways.
You are correct. The structural equivalence is not a cause or effect of interoperability with C. I have experimented with more strict typing and the language could support it, but does not at present.

Regards,
Brian
On 2017-03-14 8:27 AM, David Brown wrote:
> On 13/03/17 19:14, Walter Banks wrote:
>> On 2017-03-12 7:15 PM, Tom Gardner wrote:
>>> That's why the necessity of having "language lawyers" to unravel
>>> the arcane complexity of modern C/C++ and tools implies that
>>> modern C/C++ is part of the problem as much as it is part of the
>>> solution.
>>
>> Having been at quite a few WG-14 meetings, C/C++ issues are a
>> combination of some level of compatibility with old tools and dealing
>> with new emerging code generation requirements. Thrown in for good
>> measure was C "Conventional wisdom" (Everyone knows that C does
>> "this").
>>
>> It is hard to advance a language with that type of background.
>> Many things are "implementation defined", it is difficult to
>> extend the language, and many legacy items remain obscure.
>>
>> The open source folks for the most part didn't participate and
>> created their own variation. Conforming test suites have for the
>> most part been commercial products that are very useful in
>> creating conforming compilers, but the implementation-defined and
>> conventional wisdom aspects tend to impact the language's
>> portability.
>
> In the earlier days of C standards, there was not much participation
> from open source folks. Much of the WG were people with rather
> specialised focus (look at yourself - your own compilers are mostly
> aimed at the most C-unfriendly processors around, and are full of
> non-standard extensions to get the most from such cpus). The
> standardisation process was slow, it was extremely difficult to get
> new ideas into the language, it is almost impossible to throw out
> old compatibilities (such as for one's complement machines), and the
> bureaucratic factor was high.
>
> Those involved in open source compilers (which basically meant "gcc"
> at the time) were more interested in writing tools that worked, with
> extensions to suit their users - you could well say they were not
> mature enough members of the industry to see the importance of the
> standardisation process.
>
> That changed with time, and both gcc and clang folks are involved
> with language standardisation groups (mainly C++ rather than C, since
> that is where most of the work is). And both gcc and clang work hard
> to support the standards (while also supporting a number of
> extensions). There are no compilers, commercial or open source, that
> come close to gcc in standards support for a broad range of
> targets. Most commercial embedded C compilers struggle enough with
> C99 and C++03, and haven't considered C11, C++11 or C++14. Most
> don't even document which C or C++ standards they are close to
> supporting - it's all just "C" or "C++" to them. Most of them rely
> on extensions (as distinct from /allowing/ extensions to improve
> code). And most commercial C compiler developers have never even
> heard of conformance test suites like Plum Hall, never mind
> considered using them. (Several companies will provide you with Plum
> Hall certificates for gcc - for a price. These companies also work
> to fix any non-conformances they find in gcc.)
>
>> C isn't particularly portable in many applications.
>
> A lot of embedded C code is closely tied to the target - but some
> parts can be made highly portable. C covers a wide range of
> possibilities here.
Comments well taken. One of the things you point out is the non-conformance of the tools we develop and sell for embedded systems, and I would also like to point out that a lot of my work with WG-14 is to provide a voice to explain the needs of, and solutions for, supporting embedded systems. I have done this with work on the actual standards and on IEC/ISO documents that supplement the WG-14 C standards documents and are part of the WG-14 work.

I should also point out that the WG-14 mandate is to codify current practice, not to invent or define new standards or syntax. The standards committees will debate and define a resolution of conflicting syntax for new features. I can point to the nonstandard language structures in my work that became part of the standards.

WG-14 has several times discovered just how important the need for existing practice is when it has wandered outside that boundary. One essential ingredient for new standards is a body of experience to validate a change.

I stand by my comment that the open source people have been for the most part absent from these discussions.

w..
Am 14.03.2017 um 02:06 schrieb Tom Gardner:

> "When I was 14 I thought my father was an idiot. When
> I became 21 I was amazed at how much he had learned
> in the past 7 years"
Or as some clever guy put it: the most reliable indication that you've turned into a grown-up is when you catch yourself just having done something _despite_ your parents having told you to.
On 14/03/17 18:30, Walter Banks wrote:
> On 2017-03-14 8:27 AM, David Brown wrote:
>> A lot of embedded C code is closely tied to the target - but some
>> parts can be made highly portable. C covers a wide range of
>> possibilities here.
>
> Comments well taken. One of the things you point out is the
> non-conformance of the tools we develop and sell for embedded systems,
> and I would also like to point out that a lot of my work with WG-14 is
> to provide a voice to explain the needs of, and solutions for,
> supporting embedded systems. I have done this with work on the actual
> standards and on IEC/ISO documents that supplement the WG-14 C
> standards documents and are part of the WG-14 work.
I think it is a good thing that the C standards take this sort of thing into account - C is not just a language for PC-style processors with 8-bit char and 32-bit int. And I fully appreciate that for some targets, a lot of extensions are needed to get the best out of the chip, and it is unlikely that you will need full conformance (who needs wchar_t on a processor with 4K flash?).

But I did think it was ironic for you to berate open source folks for "creating their own variation"! (And note that in several cases, features from these gcc variations have been absorbed into the C or C++ standards. And the C++ standard library is heavily influenced by the open source Boost project.)
> I should also point out that the WG-14 mandate is to codify current
> practice, not to invent or define new standards or syntax. The standards
> committees will debate and define a resolution of conflicting syntax for
> new features. I can point to the nonstandard language structures in my
> work that became part of the standards.
>
> WG-14 has several times discovered just how important the need for
> existing practice is when it has wandered outside that boundary. One
> essential ingredient for new standards is a body of experience to
> validate a change.
>
> I stand by my comment that the open source people have been for the most
> part absent from these discussions.
On 2017-03-14 5:03 PM, David Brown wrote:
> (And note that in several cases, features from these gcc variations have
> been absorbed into the C or C++ standards. And the C++ standard library
> is heavily influenced by the open source Boost project.)
The C standards (WG-14) material from GCC has only happened when there was someone to write up changes and make them part of the formal record. The open source community have been quick to criticize but for the most part have been unwilling to put the effort into participation. WG-14 has been criticized for not looking at standard practice instead of just working from formal presentations.

Working on standards is a non-trivial commitment of time and resources. If a change is good enough to be included then it needs someone to advocate its use and be knowledgeable enough to properly represent it in debate.

w..
On Mon, 13 Mar 2017 20:31:34 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

> On 3/13/2017 6:51 PM, Robert Wessel wrote:
>> On Mon, 13 Mar 2017 17:08:41 -0700, Don Y
>> <blockedofcourse@foo.invalid> wrote:
>>
>>> On 3/13/2017 2:50 PM, Robert Wessel wrote:
>>>> On Sat, 11 Mar 2017 12:53:46 -0700, Don Y
>>>> <blockedofcourse@foo.invalid> wrote:
>>>>
>>>>> (is there any reason why a TV *can't* do its own speech recognition without
>>>>> farming that task out to some remote server? really??)
>>>>
>>>> Well, yes. The fairly high* quality speech recognition we see today
>>>> depends on access to a large and evolving** database of things people
>>>> have actually said and written. The modern speech recognition systems
>>>> are yet another example that brute force is a much more viable
>>>> approach to many problems than people used to think.
>>>>
>>>> *FSVO "high"
>>>>
>>>> **IOW, "Watch Game of Thrones" would be well understood now simply
>>>> because that would be, in whole or in part, a common query. A few
>>>> years ago, before the series hit, it would have been much shakier.
>>>
>>> It's a limited domain recognizer. Even if next season adds a show
>>> called "Fraggle Frooppy Figglesnorts", the service that provides the
>>> "TV guide" can provide enough information for the local recognizer to
>>> handle the recognition locally.
>>
>> It's really not. Consider "the news show with John Smith" or "a movie
>> with Hepburn and...", or "a children's show with animated unicorns".
>
> That's a different feature. If the guide contained an entry
> that had the words "news" "john" and "smith" in it, I would
> expect it to be able to find that entry with just local
> processing.
>
> "a children's show with animated unicorns" might not be discernible
> from an examination of the guide entries. Nor would "old geezer
> escorting a midget through a magical land".
>
>>> [I don't expect even a remote server connected recognizer to
>>> correctly handle "Watch Alfred Hitchcock" if he's no longer
>>> on-the-air. Does your TV stop working when the server goes
>>> offline? Your network connection dies? Provider goes out
>>> of business? etc.]
>>
>> I dunno. "OK Google" on my phone immediately popped up a list of
>> Hitchcock videos and shows for that query.
>>
>> And yes, a large chunk of my TV service dies if my network connection
>> goes out.
>
> Without your internet connection, can you turn the TV on? off? change
> volume? command it to "watch CSI Miami"?
Sure I can turn it on and off without an internet connection, but unless CSI Miami happens to be on a Comcast channel at the moment (or recorded from one earlier), I can't watch it. And very little (some live events excepted) of our TV watching follows that pattern. On-demand, Netflix, Hulu, etc. So TV is mostly dead for us without an internet connection.
> The music we've loaded (from CD) into SWMBO's vehicle contains song
> titles, etc. I can say "Play Little Green Bag" and it will realize that
> there is a song having the title "Little Green Bag" on the internal disk
> drive and immediately start playing it. It might have a problem if I
> asked it to "play fish" and fish was spelled "ghoti" on the song title...
>
> If the (TV) guide listed "CSI Miami", I'd expect it to be accessible without
> requiring a server's assistance. If _The Day the Earth Stood Still_ was
> listed in the guide's *description* of the "Late Night Movie" (a regularly
> scheduled time slot that presents a "movie du jour"), I'd expect to be
> able to access it by saying "watch Day Earth Still".
>
> The computationally intensive part of the problem is converting sound
> to glyphs. The "search" algorithm beyond that is relatively trivial.
No it's not - the sound-to-text part is solidly integrated with the database of things that are actually said. The Google voice assistant is particularly adept at letting you watch it change what it's "heard" as you get further into your sentence.
> Being able to say "watch old movie with big robot from outer space" would
> require completely different processing requirements and more general
> knowledge.
>
> [*Think* about it. With speech recognition local, could *you*
> search through a guide and come up with a likely match in these cases?
> Would that be a HARDER challenge than recognizing the speech itself?]
No I couldn't, but that's why we have computers and databases and whatnot - so we have *better* reference works than a TV Guide.