EmbeddedRelated.com
Forums

C# for Embedded ?...

Started by Chris November 4, 2016
On 11/6/2016 1:48 PM, Mike Perkins wrote:
> On 06/11/2016 19:46, Don Y wrote:
>> On 11/6/2016 7:55 AM, Mike Perkins wrote:
>>> On 04/11/2016 19:10, Don Y wrote:
>>>> On 11/4/2016 11:54 AM, Don Y wrote:
>>>>> Embedded systems come in all different sizes and varieties.
>>>>> Also the "depth" of the embedding varies tremendously
>>>>> (the code running on the Mars Rover is designed to operate
>>>>> in a very different environment than the code that runs
>>>>> the cash registers at your local supermarket!)
>>>>>
>>>>> As do the consequences of failures, etc.
>>>>
>>>> Here's a great "safety critical" application that's essentially
>>>> trivial: dispensing gasoline! (a "process" controlled by unskilled
>>>> users: DRIVERS!)
>>>
>>> It may appear that way, but in reality I think you'll find some hardware
>>> interlock so when the user releases the nozzle trigger, one or more
>>> valves will
>>> close, independent of any software.
>>
>> You do realize that you can latch the nozzle "on"?
>> And, that there is no guarantee that the user will have
>> it positioned *in* the filler neck of a vehicle (so the
>> "full" backpressure sensor will disengage that latch)?
>>
>> I.e., a malevolent user at an unattended gas station
>> could latch the nozzle on and walk away, flooding the
>> area with fuel. Or, the backpressure sensor could fail.
>> (when addressing safety critical/mission critical/etc.
>> applications, you have to PLAN on failures and decide
>> how you will address their inevitability)
>>
>> [This was one of the issues we had to address when accommodating
>> unattended operation of the pumps. You'll note that most service
>> stations either operate in prepay or semiattended mode -- though
>> we have electric and LNG/propane dispensaries, here, that are
>> completely unattended]
>
> Most pumps limit how much fuel can be dispensed.
And, you have to ensure that limit is on VOLUME and not "dollar amount"! I.e., a station operator is likely to think in terms of limiting the size of the largest SALE ("$100 ought to cover it!"). But, that doesn't directly translate to fuel volume in a way that is easily (or even reliably!) predictable (how many gallons would that be at $0.00/gal?).
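A rough sketch of what I mean, with made-up names and units (nothing here is from a real dispenser): the cut-off is expressed in dispensed volume, so a price change can't move it.

/* Hypothetical sketch only: enforce the dispense cut-off on VOLUME,
   not on sale price.  Names and units are invented for illustration. */
#include <stdbool.h>
#include <stdint.h>

#define MAX_DISPENSE_ML  (100u * 3785u)   /* e.g. ~100 gallons, in millilitres */

static uint32_t dispensed_ml;             /* accumulated by the flow-meter handler */

/* Called periodically; returns true while dispensing may continue. */
bool dispense_allowed(void)
{
    /* A dollar cap would drift with the posted price; a volume cap does not. */
    return dispensed_ml < MAX_DISPENSE_ML;
}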
> I doubt you would use software
> alone to limit the fuel and you would have a mechanical device to measure flow
> and cut off accordingly.
Dunno -- I haven't been inside a fuel dispenser for 30+ years. At the time, the only "mechanical" valve was the "trigger" that your hand engages on the pump nozzle. Returning the nozzle to its storage slot only trips a switch that signals the software that the transaction is complete (pump SHOULD be turned off, regardless).

You can verify this by pumping gas, holding the nozzle "on" and reaching your hand up inside the storage slot -- push the "flap" upwards as if you'd just returned the nozzle to that position and you'll note that the pump stops... despite the fact that your OTHER hand is still commanding the nozzle to dispense.

[Note, also, that the hose will remain under pressure if you release the "dispense" trigger (allowing pressure to build in the hose) and then return the nozzle to its storage slot. I.e., the pump is off but the stored pressure has no outlet as the mechanical valve at the nozzle is closed -- and the pump won't "bleed backwards"]
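To illustrate the "switch only signals the software" point (this is invented code, not anything from a real dispenser): the controller has to treat that holster switch as an unconditional pump-off, no matter what the nozzle trigger is doing.

/* Invented sketch: the holster "flap" switch is only a signal to the
   controller, so the controller must shut the pump off unconditionally
   when it trips, regardless of transaction state. */
#include <stdbool.h>

extern bool holster_switch_closed(void);   /* hypothetical: reads the flap switch */
extern void pump_motor_off(void);          /* hypothetical: drops the motor relay */
extern void close_out_transaction(void);   /* hypothetical: finalizes the sale    */

void poll_holster(void)
{
    if (holster_switch_closed()) {
        pump_motor_off();          /* pump off no matter what the trigger says */
        close_out_transaction();
    }
}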
> I agree with you entirely on the planning of various risk control measures for
> a variety of scenarios. We're probably singing from the same hymn sheet though
> approaching from different directions.
My original point was that it is essentially a trivial problem. Even allowing for "software safeguards", it hardly merits a large design team (at least, not for the pump controller itself) or a sophisticated implementation language. Rather, you want something that LOTS OF DEVELOPERS could maintain, instead of looking for someone with a deep history of designing such systems. "Common sense", plus advice from someone who already has that history, would be more than sufficient.
Niklas Holsti <niklas.holsti@tidorum.invalid> writes:
> Question (2) is more interesting. What are the costs, or other
> drawbacks, of using a "large" language (e.g. Ada or C++)?
It's similar to the issue of using a huge, complex, infrastructure-dependent device like an automobile to do something simple, like go to the store to pick up a few electronic parts. You could walk or use a skateboard instead, but most people won't think twice about using a car if the store is further than a km or two.

Or, in the software world: a modern web browser is one of the most complicated single applications out there. Yet we use them all the time for looking at 140-character text messages on twitter that were originally expected to be sent much more simply, by SMS. What's more, the 140-character message viewed in a browser is surrounded by a megabyte or so of javascript cruft that does almost nothing, but most users either don't notice it or shrug their shoulders about it.
> For controlling a toaster, if one uses assembly language it is
> probably possible to use an MCU with no RAM, such as a tiny AVR. There
> is no fundamental technical reason why a "large"-language compiler
> could not generate code that runs without RAM, for a very small
> application, but one will probably not find such a compiler.
See: http://www.lightner.net/avr/ATtiny/projects/README.txt

There's also a special-purpose x86 compiler (part of Coreboot) that generates register-only code (no RAM, no stack) to get the x86 boot process started before the RAM controller is initialized:

http://review.coreboot.org/cgit/coreboot.git/tree/util/romcc?id=HEAD
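To give a flavour of what such a compiler is up against, here's an invented example (the I/O address is made up) of the kind of C it can cope with: one leaf routine, a few scalar locals that fit in registers, no globals, no calls -- hence no stack and no RAM.

/* Invented illustration of register-only C: no globals, no function
   calls, only scalar locals, so nothing needs RAM or a stack.
   The port address below is hypothetical. */
#include <stdint.h>

#define LED_PORT (*(volatile uint8_t *)0x25u)   /* hypothetical memory-mapped output port */

void blink_forever(void)
{
    uint8_t state = 0;                     /* can live entirely in a register */
    for (;;) {
        LED_PORT = state;                  /* single I/O write */
        state ^= 1u;
        for (uint32_t i = 0; i < 100000u; i++) {
            /* crude busy-wait; a real compiler may optimize this away */
        }
    }
}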
On 06/11/16 19:10, Paul Rubin wrote:
> Hans-Bernhard Bröker <HBBroeker@t-online.de> writes:
>> As to language design principles and goals, perhaps the shortest
>> explanation of Java's (and thus, by cloning, C#'s) primary concept is
>> "C++--". I.e. they took C++ and removed or at least downsized its
>> more problematic constructs. Most prominently multiple inheritance
>> was boiled down to the "interface"/"implements" construct, some of the
>> legacy C semantics thrown out, and platform dependencies were moved
>> from the language into the VM.
>
> From James Iry's "Brief, Incomplete, and Mostly Wrong History of
> Programming Languages":
>
> 1996 - James Gosling invents Java. Java is a relatively verbose,
> garbage collected, class based, statically typed, single dispatch,
> object oriented language with single implementation inheritance and
> multiple interface inheritance. Sun loudly heralds Java's novelty.
>
> 2001 - Anders Hejlsberg invents C#. C# is a relatively verbose,
> garbage collected, class based, statically typed, single dispatch,
> object oriented language with single implementation inheritance and
> multiple interface inheritance. Microsoft loudly heralds C#'s
> novelty.
>
> http://james-iry.blogspot.com/2009/05/brief-incomplete-and-mostly-wrong.html
>
> The rest of the history is hilarious too.
Oooh, that's /good/. And funny :)

1983 - Bjarne Stroustrup bolts everything he's ever heard of onto C to create C++. The resulting language is so complex that programs must be sent to the future to be compiled by the Skynet artificial intelligence. Build times suffer. Skynet's motives for performing the service remain unclear but spokespeople from the future say "there is nothing to be concerned about, baby," in an Austrian accented monotone. There is some speculation that Skynet is nothing more than a pretentious buffer overrun.
On 06/11/16 19:33, Paul Rubin wrote:
> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>> I looked at Java Card 15 years ago, and it is such a small subset of
>> Java that it is as similar to Java as JavaScript is to Java.
>
> It's a useful enough subset for its intended applications, such as
> payment cards.
>
>> In addition the execution environment is so (necessarily) limited that
>> there's no real benefit to having Java.
>
> The main benefit compared with earlier systems was the security model,
> which allowed applets running in the cards that were isolated from each
> other and whose communication to the outside was controlled by the
> communication object passed into the applet. There was some formal
> verification work done on the security model iirc.
Just so. But the security model (the only "interesting" bit of Javacard) isn't specific to Javacard. ISTR there being other languages for programming smartcards.

Aside: I'll _contend_ that the only useful thing that can be stored in a javacard is a magic number. When the card is used, that magic number is transmitted to a server, and a database defines what the number allows. Corollary: there's no need for complex on-card logic.
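To make the contention concrete, a toy sketch of that model (the table and names are invented): the card contributes nothing but the number, and the server-side database decides what it allows.

/* Toy sketch of the "magic number + server lookup" model being
   contended; the table and values are invented for illustration. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct card_record {
    uint64_t magic;      /* the number stored on (and read from) the card */
    bool     allowed;    /* what the database says that number may do     */
};

static const struct card_record db[] = {
    { 0x1122334455667788ull, true  },
    { 0xDEADBEEFCAFEF00Dull, false },
};

/* Server side: the card has contributed nothing except the number itself. */
bool server_authorize(uint64_t magic_from_card)
{
    for (size_t i = 0; i < sizeof db / sizeof db[0]; i++) {
        if (db[i].magic == magic_from_card)
            return db[i].allowed;
    }
    return false;
}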
Tom Gardner <spamjunk@blueyonder.co.uk> writes:
> Aside: I'll _contend_ that the only useful thing that can be stored in
> a javacard is a magic number.
There are many javacard applications including pki tokens, payment cards, and a few billion phone sims. Most of them involve 2-way communication and crypto.
On 06/11/16 23:48, Paul Rubin wrote:
> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>> Aside: I'll _contend_ that the only useful thing that can be stored in
>> a javacard is a magic number.
>
> There are many javacard applications including pki tokens, payment
> cards, and a few billion phone sims. Most of them involve 2-way
> communication and crypto.
Maybe, but what is stored on the card, and why is that essential?
Tom Gardner <spamjunk@blueyonder.co.uk> writes:
> Maybe, but what is stored on the card, and why is that essential?
Typically user credentials including a cryptographic signing key. The key has to be encapsulated in the card so it can't be copied. It's instead used to sign a challenge presented by an external device. For example, smart cards are used to authenticate some cable TV set-top boxes. If you could duplicate a card, you could use the second one to watch cable shows without paying. There was a huge technical arms race between cable TV companies and cable pirates where pirates developed more and more sophisticated ways to copy cards, and card manufacturers designed progressively fancier countermeasures. Smart cards now have all kinds of obfuscation and tamper-resisting features to prevent extraction of the keys in the card.
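A toy sketch of that flow (the "MAC" below is a placeholder, not real cryptography -- actual cards use things like 3DES, AES or ECDSA inside tamper-resistant hardware): the secret is only ever *used* on the card, never sent off it.

/* Toy challenge-response sketch: the signing key stays inside the
   "card"; only challenges and responses cross the interface.
   toy_mac() is a placeholder, NOT real cryptography. */
#include <stdbool.h>
#include <stdint.h>

static uint32_t toy_mac(uint32_t key, uint32_t challenge)
{
    return (key ^ challenge) * 2654435761u;   /* stand-in for a real MAC/signature */
}

/* --- card side: the secret never leaves this "card" --- */
static const uint32_t card_secret = 0xC0FFEE01u;   /* never transmitted */

uint32_t card_sign(uint32_t challenge)
{
    return toy_mac(card_secret, challenge);
}

/* --- terminal/head-end side: checks the response against its own copy --- */
bool terminal_verify(uint32_t shared_key, uint32_t challenge, uint32_t response)
{
    return response == toy_mac(shared_key, challenge);
}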
On 07/11/16 08:09, Paul Rubin wrote:
> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>> Maybe, but what is stored on the card, and why is that essential?
>
> Typically user credentials including a cryptographic signing key. The
> key has to be encapsulated in the card so it can't be copied. It's
> instead used to sign a challenge presented by an external device. For
> example, smart cards are used to authenticate some cable TV set-top
> boxes. If you could duplicate a card, you could use the second one to
> watch cable shows without paying.
And that is /precisely/ the kind of magic number that I was thinking of when I wrote:

On 06/11/16 22:59, Tom Gardner wrote:
> Aside: I'll _contend_ that the only useful thing that
> can be stored in a javacard is a magic number. When
> the card is used, that magic number is transmitted
> to a server, and a database defines what the number allows.
> Corollary: there's no need for complex on-card logic.

Whether the server is local or remote is uninteresting in this context. /Neither/ requires sufficient on-chip logic to make Java worthwhile.

The JavaCard app is so small that it would be easy to write in assembler! Any support environment implementation would only be written once, by the card manufacturer, and could be written in any convenient language.

So yes, the security aspects of JavaCard are beneficial, but they are scarcely a killer advantage.
On 07.11.2016 at 10:56, Tom Gardner wrote:
> On 07/11/16 08:09, Paul Rubin wrote:
>> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>>> Maybe, but what is stored on the card, and why is that essential?
>>
>> Typically user credentials including a cryptographic signing key. The
>> key has to be encapsulated in the card so it can't be copied. It's
>> instead used to sign a challenge presented by an external device.
> And that is /precisely/ the kind of magic number that I
> was thinking of when I wrote:
No, it's not. The crucial difference is that the actual magic number is _never_ever_ transmitted to any server.
On Friday, November 4, 2016 at 6:13:38 PM UTC-4, Clifford Heath wrote:
> On 05/11/16 04:03, John Speth wrote:
> >> I've been invited to a meeting to discuss an automotive-like engineering
> >> project with a high level of safety critical requirements.
> >>
> >> They are using Simulink for some of the top level design work, but are
> >> programming the whole lot in C#, with some of the code already written.
> >> Not sure at this stage which rtos is being used, if at all.
> >>
> >> From what I've read, C# is a web / application / database programming
> >> language and a quick look at the Wiki page suggests that the two
> >> most recent versions are not approved by any international standards
> >> organisation.
> >>
> >> C# raises alarm bells here for all kinds of reasons, but what do you
> >> think ?...
> >
> > I agree with your negative concerns about the choice. But from a
> > testing point of view, any language is just as good as any other *IF* it
> > can be proved as such through thorough testing.
> > Every hole needs to be tested.
>
> Not every hole *can* be tested. You run into the halting problem,
> first and mainly with the garbage collector. There is no way to
> prove it will always complete "in time" for any definition of
> "in time". The same is likely true of many of the application's
> own algorithms; but at least you're in control of those.
>
> Clifford Heath.
Sorry for joining late in the conversation. Actually I would say it depends on the system's architecture. C# is a decent GUI language. If you then assign the hard real-time features to low-level controllers (device drivers and/or microcontrollers like Arduino), you can build a decently robust system. So I guess my opinion is "it depends".

Ed
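PS: a minimal sketch of the microcontroller side of that split (the function names and the one-byte protocol are invented): the firmware's loop timing stays bounded no matter what the C# GUI is doing, and it fails safe if the serial link goes quiet.

/* Invented sketch: hard real-time work lives in small firmware that
   takes simple commands over a serial link from the GUI. */
#include <stdint.h>

extern int  uart_getc_nonblocking(void);   /* hypothetical: returns -1 if no byte waiting */
extern void actuator_set(uint8_t level);   /* hypothetical: drives the hardware           */
extern void actuator_safe_state(void);     /* hypothetical: known-safe output             */

void control_loop(void)
{
    uint32_t silent_iterations = 0;

    for (;;) {
        int c = uart_getc_nonblocking();
        if (c >= 0) {
            silent_iterations = 0;
            actuator_set((uint8_t)c);       /* one-byte "set level" command */
        } else if (++silent_iterations > 100000u) {
            actuator_safe_state();          /* link lost or GUI hung: fail safe */
        }
        /* loop timing is bounded regardless of what the GUI is doing */
    }
}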