
Language feature selection

Started by Don Y March 5, 2017
On Sun, 12 Mar 2017 00:54:58 +0200, upsidedown@downunder.com wrote:

> FORTRAN had complex number support from the beginning.
>
> The array support was added more recently.  IMHO Fortran is still a
> viable option for solving mathematical problems after recent updates.
FSVO "beginning".  COMPLEX support was not in the original version of
Fortran, but was added pretty early.  It was certainly a standard feature
by the end of the '50s, perhaps in FORTRAN II.
On 03/13/2017 04:17 PM, Niklas Holsti wrote:
> Hm.  In the spirit of language ecumenics and comparison, I translate your
> example to Ada, below.  I try to use your identifiers as such, although Ada
> convention would use Title_Style capitalisation.
Great! That was fast. I had assumed that Ada would be able to come close.
> ... SNIP ...
>
> Syntactically Ada needs more lines, because enumerations and sub-records
> must be defined as types.  Does your language use "structural equivalence"
> for types?  That is, if one register field is defined as the enumeration
> (A, B, C), and another register field is also defined as the enumeration
> (A, B, C), are these fields of the same type?
There are also more lines because of the separation of "type T is record ..."
and "for T use record ...".  And yes, records have "structural equivalence".
My language does not have as strict types as Ada.  In part, that is to allow
interoperability with C.

Regards, Brian
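To make the contrast concrete, here is a minimal sketch of Ada's *name*
equivalence, the behaviour Niklas's question alludes to (the type and
variable names are invented for illustration):

```ada
procedure Equivalence_Demo is
   --  Two enumerations with identical literal lists are still
   --  distinct types under Ada's name equivalence; the literals
   --  A, B, C are merely overloaded between them.
   type Field1 is (A, B, C);
   type Field2 is (A, B, C);

   X : Field1 := A;
   --  Y : Field2 := X;   --  illegal: Field1 and Field2 are different types
   Y : Field2 := Field2'Val (Field1'Pos (X));  --  explicit conversion needed
begin
   null;
end Equivalence_Demo;
```

Under structural equivalence, as in Brian's language, the commented-out
assignment would presumably be legal, since both fields denote the same
shape of value.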
On 3/13/2017 2:50 PM, Robert Wessel wrote:
> On Sat, 11 Mar 2017 12:53:46 -0700, Don Y
> <blockedofcourse@foo.invalid> wrote:
>
>> (is there any reason why a TV *can't* do its own speech recognition
>> without farming that task out to some remote server?  really??)
>
> Well, yes.  The fairly high* quality speech recognition we see today
> depends on access to a large and evolving** database of things people
> have actually said and written.  The modern speech recognition systems
> are yet another example that brute force is a much more viable
> approach to many problems than people used to think.
>
> *FSVO "high"
>
> **IOW, "Watch Game of Thrones" would be well understood now simply
> because that would be, in whole or in part, a common query.  A few
> years ago, before the series hit, it would have been much shakier.
It's a limited domain recognizer.  Even if next season adds a show called
"Fraggle Frooppy Figglesnorts", the service that provides the "TV guide"
can provide enough information for the local recognizer to handle the
recognition locally.

[I don't expect even a remote server connected recognizer to correctly
handle "Watch Alfred Hitchcock" if he's no longer on-the-air.  Does your
TV stop working when the server goes offline?  Your network connection
dies?  Provider goes out of business?  etc.]
On 3/13/2017 3:27 PM, Brian G. Lucas wrote:
> In part, that is to allow interoperability with C.
IMO, this sort of reasoning is behind many of the "bad" aspects of "new" innovations. There's a point at which you are better off just ignoring "backward compatibility" and adopting new, *better* ideas.
On 13/03/17 21:35, Don Y wrote:
> In school ("university" for right-ponders), many of the issues that
> were brought up as "worthy goals" in language design seemed somewhat
> arbitrary (at that time).  Over time, I've been chagrined at how
> many of those comments/lessons have gained new voice!
> "Ahhh....  *that's* what they meant!"
"When I was 14 I thought my father was an idiot. When I became 21 I was amazed at how much he had learned in the past 7 years" Oft repeated to my daughter ;}
On Mon, 13 Mar 2017 17:08:41 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

> On 3/13/2017 2:50 PM, Robert Wessel wrote:
>> ... SNIP ...
>>
>> **IOW, "Watch Game of Thrones" would be well understood now simply
>> because that would be, in whole or in part, a common query.  A few
>> years ago, before the series hit, it would have been much shakier.
>
> It's a limited domain recognizer.  Even if next season adds a show
> called "Fraggle Frooppy Figglesnorts", the service that provides the
> "TV guide" can provide enough information for the local recognizer to
> handle the recognition locally.
It's really not. Consider "the news show with John Smith" or "a movie with Hepburn and...", or "a children's show with animated unicorns".
> [I don't expect even a remote server connected recognizer to
> correctly handle "Watch Alfred Hitchcock" if he's no longer
> on-the-air.  Does your TV stop working when the server goes
> offline?  Your network connection dies?  Provider goes out
> of business?  etc.]
I dunno. "OK Google" on my phone immediately popped up a list of Hitchcock videos and shows for that query. And yes, a large chunk of my TV service dies if my network connection goes out.
Tom Gardner <spamjunk@blueyonder.co.uk> writes:
> "When I was 14 I thought my father was an idiot.  When I became 21 I
> was amazed at how much he had learned in the past 7 years"
One of the profs at my old school said at his retirement dinner "I've been teaching these kids freshman calculus for FORTY YEARS and they still don't get it!".
On 3/13/2017 6:51 PM, Robert Wessel wrote:
> On Mon, 13 Mar 2017 17:08:41 -0700, Don Y
> <blockedofcourse@foo.invalid> wrote:
>
>> On 3/13/2017 2:50 PM, Robert Wessel wrote:
>>> ... SNIP ...
>>
>> It's a limited domain recognizer.  Even if next season adds a show
>> called "Fraggle Frooppy Figglesnorts", the service that provides the
>> "TV guide" can provide enough information for the local recognizer to
>> handle the recognition locally.
>
> It's really not.  Consider "the news show with John Smith" or "a movie
> with Hepburn and...", or "a children's show with animated unicorns".
That's a different feature. If the guide contained an entry that had the words "news" "john" and "smith" in it, I would expect it to be able to find that entry with just local processing. "a children's show with animated unicorns" might not be discernible from an examination of the guide entries. Nor would "old geezer escorting a midget through a magical land".
>> [I don't expect even a remote server connected recognizer to
>> correctly handle "Watch Alfred Hitchcock" if he's no longer
>> on-the-air.  Does your TV stop working when the server goes
>> offline?  Your network connection dies?  Provider goes out
>> of business?  etc.]
>
> I dunno.  "OK Google" on my phone immediately popped up a list of
> Hitchcock videos and shows for that query.
>
> And yes, a large chunk of my TV service dies if my network connection
> goes out.
Without your internet connection, can you turn the TV on?  off?  change
volume?  command it to "watch CSI Miami"?

The music we've loaded (from CD) into SWMBO's vehicle contains song titles,
etc.  I can say "Play Little Green Bag" and it will realize that there is a
song having the title "Little Green Bag" on the internal disk drive and
immediately start playing it.  It might have a problem if I asked it to
"play fish" and fish was spelled "ghoti" in the song title...

If the (TV) guide listed "CSI Miami", I'd expect it to be accessible
without requiring a server's assistance.  If _The Day the Earth Stood
Still_ was listed in the guide's *description* of the "Late Night Movie"
(a regularly scheduled time slot that presents a "movie du jour"), I'd
expect to be able to access it by saying "watch Day Earth Still".

The computationally intensive part of the problem is converting sound to
glyphs.  The "search" algorithm beyond that is relatively trivial.  Being
able to say "watch old movie with big robot from outer space" would
require completely different processing and more general knowledge.

[*Think* about it.  With speech recognition local, could *you* search
through a guide and come up with a likely match in these cases?  Would
that be a HARDER challenge than recognizing the speech itself?]
On 17-03-14 00:27 , Brian G. Lucas wrote:
> On 03/13/2017 04:17 PM, Niklas Holsti wrote:
>> ... SNIP ...
>>
>> Syntactically Ada needs more lines, because enumerations and sub-records
>> must be defined as types.  Does your language use "structural equivalence"
>> for types?  That is, if one register field is defined as the enumeration
>> (A, B, C), and another register field is also defined as the enumeration
>> (A, B, C), are these fields of the same type?
>
> There are also more lines because of the separation of "type T is record ..."
> and "for T use record ...".
I could just have said "... with Pack" for each record type, instead of the
"for T use record ...", but IMO it is clearer and safer to give the bit
numbers explicitly.  That makes it easy to compare the source code with the
data sheet that shows the register structures.

There was a short discussion recently on the main Ada language-definition
mailing list (ada-comment@ada-auth.org) about a proposal to allow the record
lay-out (bit numbers) to be written in the record type declaration itself,
avoiding these extra lines.  It did not gain support; most responders felt
that a separate definition of the lay-out gave clearer code.
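As a concrete sketch of the separation being discussed (the register and
field names here are invented for illustration, not taken from Niklas's
original example):

```ada
--  The logical view of the register: just named fields.
type Mode_Type is (Off, Slow, Fast);

type Control_Reg is record
   Mode   : Mode_Type;
   Enable : Boolean;
end record;

--  The lay-out is given separately, with explicit bit numbers that can
--  be checked line-by-line against the data sheet.
for Control_Reg use record
   Mode   at 0 range 0 .. 1;
   Enable at 0 range 2 .. 2;
end record;
for Control_Reg'Size use 8;
```

A real register declaration would typically also be marked Volatile and
placed at its hardware address, but the point here is only the split
between the record declaration and its representation clause.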
> And yes, records have "structural equivalence".  My language does not
> have as strict types as Ada.  In part, that is to allow interoperability
> with C.
I don't see how structural equivalence or weaker typing helps
interoperability.  If one cannot use the same source text to declare a type
in C and in the other language (or use a tool that translates or compares
the two declarations), one must still be very careful to declare the types
in equivalent ways.

For simple C interfacing, Ada has a standard predefined package,
Interfaces.C, that defines Ada equivalents for the standard C types int,
long, char, ...  However, it does not fully cover the newer C standard
types, such as the integer types of known size.  Moreover, as different C
compilers may (in principle) choose different sizes for "int" etc., one
would anyway have to check the actual compatibility of a given C compiler's
types with a given Ada compiler's implementation of Interfaces.C.

The GNU Ada compiler, gnat, has a way to translate a set of C header files
into the "corresponding" Ada package declarations, but the results may or
may not be useful, because of the large semantic gap between the languages.
I tried it once, but the result was so ugly and non-Ada-like, and cumbersome
to use from Ada client code, that I preferred to write the Ada form
manually.

-- 
Niklas Holsti
Tidorum Ltd

niklas holsti tidorum fi
      .      @       .
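For reference, a minimal sketch of what binding to C through Interfaces.C
looks like (the C function c_add, its Ada name, and the whole procedure are
invented for illustration; linking of course requires the C object file):

```ada
with Interfaces.C;

procedure Import_Demo is
   use Interfaces.C;

   --  Binds to a hypothetical C function:  int c_add(int a, int b);
   --  Interfaces.C.int is defined to match the C compiler's "int".
   function C_Add (A, B : int) return int
     with Import, Convention => C, External_Name => "c_add";

   Sum : int;
begin
   Sum := C_Add (2, 3);
end Import_Demo;
```

The types on the Ada side still have to be chosen to match what the C
declaration actually means for the C compiler in use, which is exactly the
care-taking described above.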
On 14/03/17 02:23, Paul Rubin wrote:
> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>> "When I was 14 I thought my father was an idiot.  When I became 21 I
>> was amazed at how much he had learned in the past 7 years"
>
> One of the profs at my old school said at his retirement dinner "I've
> been teaching these kids freshman calculus for FORTY YEARS and they
> still don't get it!".
:)  We learned integration and differentiation of polynomials (except 1/x)
for exams at 15.  I can still visualise the teacher taking a double period
(80 mins) for each, deriving the concepts from first principles.

OK, he was a good teacher (but not a good mathematician, he knew his
limits!), but even so I've never understood why people think calculus is
inherently difficult.