
Engineering degree for embedded systems

Started by hogwarts July 27, 2017
On 2017-08-17 10:06 AM, David Brown wrote:
> On 17/08/17 14:24, Walter Banks wrote:
>
> The AVR needs different instructions for accessing data in flash and ram, and address spaces provide a neater and less error-prone solution than macros or function calls for flash data access.
>
> So far, so good - and if that is your work, then well done. The actual text of the document could, IMHO, benefit from a more concrete example usage of address spaces (such as for flash access, as that is likely to be a very popular usage).
The named address space stuff is essentially all mine.
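For the flash case, the usage looks roughly like this - a sketch using the __flash qualifier that avr-gcc provides for the AVR; the spelling of the qualifier is implementation-defined, so other compilers use other names:

    /* Sketch of flash data access through a named address space (avr-gcc's
     * __flash, GCC 4.7 and later).  Qualifier names vary between compilers. */
    #include <stdint.h>

    static const __flash uint8_t crc_table[256] = { 0x00, 0x07, 0x0e /* ... */ };

    uint8_t crc8_update(uint8_t crc, uint8_t data)
    {
        /* Reads compile to program-memory (LPM) loads because the table is
         * qualified with the flash space; an unqualified table would be
         * placed in, and read from, RAM. */
        return crc_table[crc ^ data];
    }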
> The register storage class stuff, however, is not something I would like to see in C standards. If I had wanted to mess with specific CPU registers such as flag registers, I would be programming in assembly. C is /not/ assembly - we use C so that we don't have to use assembly. There may be a few specific cases of particular awkward processors for which it is occasionally useful to have direct access to flag bits - those are very much in the minority. And they are getting more in the minority as painful architectures like COP8 and PIC16 are being dropped in favour of C-friendly processors. It is absolutely fine to put support for condition code registers (or whatever) into compilers as target extensions. I can especially see how it can help compiler implementers to write support libraries in C rather than assembly. But it is /not/ something to clutter up C standards or for general embedded C usage.
To be really clear this was a TR and never expected to be added to the C standards at the time.

In the current environment I would like to see the C standards moved forward to support the emerging ISAs. There are many current applications that need additions to the language to describe effective solutions to some problems. Ad-hoc additions prevent the very thing that C is promoted for, that is, portability.

C standards are supposed to codify existing practice, and so often the politics of standards become arguments about preserving old standards rather than supporting newer processor technology. I know from what I have been doing that both the spirit and the approach to code development in C can deal with changes in applications and processor technology.

So many of the development tools still are restricted by the technology limits of development environments of 40 years ago.
> The disappointing part of named address spaces is in Annex B.1. It is tantalisingly close to allowing user-defined address spaces with specific features such as neat access to data stored in other types of memory. But it is missing all the detail needed to make it work, how and when it could be used, examples, and all the thought into how it would interplay with other features of the language. It also totally ignores some major issues that are very contrary to the spirit and philosophy of C. When writing C, one expects "x = 1;" to operate immediately as a short sequence of instructions, or even to be removed altogether by the compiler optimiser. With a user-defined address space, such as an SPI eeprom mapping, this could take significant time, it could interact badly with other code (such as another thread or an interrupt that is also accessing the SPI bus), it could depend on setup of things outside the control of the compiler, and it could fail.
The named address space has often been used to support diverse forms of memory. To use your example "x = 1;": the choice of where x is located and how it is accessed is made where it is declared. How it is handled after that is made by the compiler. The assumption is that the code is written with functional intent.

As valid as the SPI conflict is, it is a strawman in practice. C is filled with undefined and ambiguous cases, and this type of potential problem is very rare in practice.
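As a sketch of that point, using the 8051-style qualifiers that SDCC provides (__xdata selects external RAM), the same assignment compiles to different access sequences purely because of the qualifier on the definition:

    /* Sketch: the access method is chosen at the declaration, not at the use.
     * __xdata is SDCC's external-RAM qualifier for the 8051. */
    #include <stdint.h>

    __xdata uint8_t x_ext;   /* external RAM: indirect MOVX-style access */
    uint8_t x_int;           /* default internal RAM: direct access      */

    void init(void)
    {
        x_ext = 1;   /* same source text ...                               */
        x_int = 1;   /* ... different instructions, chosen by the compiler */
    }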
> You need to think long and hard as to whether this is something desirable in a C compiler.
I have, and it is. Once I moved past the general single-address-space C model, named address spaces opened a level of flexibility that allows C to be used in a variety of application environments that conventional C code does not work well for.
> It would mean giving up the kind of transparency and low-level predictability that are some of the key reasons people choose C over C++ for such work. If the convenience of being able to access different types of data in the same way in code is worth it, then these issues must be made clear and the mechanisms developed - if not, then the idea should be dropped. A half-written half-thought-out annex is not the answer.
I buy the documentation point.

From a usage point of view I disagree. Writing an application program that can be spread over many processors is a good example. In the two decades since this work was initially done, things have changed considerably from consumer products that distributed an application over 3 or 4 processors (after initially prototyping on a single processor). In those products, processor usage was almost always allocated manually using geographical centers of reference.

This has evolved to compiler analysis that automates the whole process over many, many processors.

w..
On 17/08/17 18:15, Walter Banks wrote:
> On 2017-08-17 10:06 AM, David Brown wrote:
>> On 17/08/17 14:24, Walter Banks wrote:
>>
>> The AVR needs different instructions for accessing data in flash and ram, and address spaces provide a neater and less error-prone solution than macros or function calls for flash data access.
>>
>> So far, so good - and if that is your work, then well done. The actual text of the document could, IMHO, benefit from a more concrete example usage of address spaces (such as for flash access, as that is likely to be a very popular usage).
>
> The named address space stuff is essentially all mine.
>
>> The register storage class stuff, however, is not something I would like to see in C standards. If I had wanted to mess with specific CPU registers such as flag registers, I would be programming in assembly. C is /not/ assembly - we use C so that we don't have to use assembly. There may be a few specific cases of particular awkward processors for which it is occasionally useful to have direct access to flag bits - those are very much in the minority. And they are getting more in the minority as painful architectures like COP8 and PIC16 are being dropped in favour of C-friendly processors. It is absolutely fine to put support for condition code registers (or whatever) into compilers as target extensions. I can especially see how it can help compiler implementers to write support libraries in C rather than assembly. But it is /not/ something to clutter up C standards or for general embedded C usage.
>
> To be really clear this was a TR and never expected to be added to the C standards at the time.
I assume that it was hoped to become an addition to the C standards, or at least a basis and inspiration for such additions - otherwise what was the point? I would be quite happy with the idea of "supplementary" standards to go along with the main C standards, to add features or to provide a common set of implementation-dependent features. For example, Posix adds a number of standard library functions, and gives guarantees about the size and form of integers - thus people can write code that is portable to Posix without imposing requirements on compilers for an 8051. A similar additional standard giving features for embedded developers, but without imposing requirements on PC programmers, would make sense.
> In the current environment I would like to see the C standards moved forward to support the emerging ISAs. There are many current applications that need additions to the language to describe effective solutions to some problems. Ad-hoc additions prevent the very thing that C is promoted for, that is, portability.
C is intended to support two significantly different types of code. One is portable code that can run on a wide range of systems. The other is system-specific code that is targeted at a very small number of systems. If you are writing code that depends on features of a particular ISA, then you should be using target-specific or implementation-dependent features.

If a new feature is useful across a range of targets, then sometimes a middle ground would make more sense. The C standards today have that in the form of optional features. For example, most targets support nice 8-bit, 16-bit, 32-bit and 64-bit integers with two's complement arithmetic. But some targets do not support them. So C99 and C11 give standard names and definitions of these types, but make them optional. This works well for features that many targets can support, and many people would have use of.

For features that are useful on only a small number of ISAs, they should not be in the main C standards at all - a supplementary standard would make more sense. Yes, that would mean fragmenting the C world somewhat - but I think it would still be a better compromise.

Incidentally, can you say anything about these "emerging ISAs" and the features needed? I fully understand if you cannot give details in public (of course, you'll need to do so some time if you want them standardised!).
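A small sketch of how that optionality looks in source, assuming nothing beyond C99's <stdint.h>:

    /* Use the exact-width type when the target provides it; fall back to the
     * always-available least-width type otherwise. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
    #ifdef INT32_MAX
        int32_t counter = 0;          /* exactly 32 bits, two's complement */
    #else
        int_least32_t counter = 0;    /* guaranteed on every C99 implementation */
    #endif
        counter += 42;
        printf("%ld\n", (long)counter);
        return 0;
    }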
> C standards are supposed to codify existing practice, and so often the politics of standards become arguments about preserving old standards rather than supporting newer processor technology.
That is a major point of them, yes.
> I know from what I have been doing that both the spirit and the approach to code development in C can deal with changes in applications and processor technology.
>
> So many of the development tools still are restricted by the technology limits of development environments of 40 years ago.
It is the price of backwards compatibility. Like most C programmers, I have my own ideas of what is "old cruft" that could be removed from C standards without harm to current users. And like most C programmers, my ideas about what is "old cruft" will include things that some other C programmers still use to this day.
>> The disappointing part of named address spaces is in Annex B.1. It is tantalisingly close to allowing user-defined address spaces with specific features such as neat access to data stored in other types of memory. But it is missing all the detail needed to make it work, how and when it could be used, examples, and all the thought into how it would interplay with other features of the language. It also totally ignores some major issues that are very contrary to the spirit and philosophy of C. When writing C, one expects "x = 1;" to operate immediately as a short sequence of instructions, or even to be removed altogether by the compiler optimiser. With a user-defined address space, such as an SPI eeprom mapping, this could take significant time, it could interact badly with other code (such as another thread or an interrupt that is also accessing the SPI bus), it could depend on setup of things outside the control of the compiler, and it could fail.
>
> The named address space has often been used to support diverse forms of memory. To use your example "x = 1;": the choice of where x is located and how it is accessed is made where it is declared. How it is handled after that is made by the compiler. The assumption is that the code is written with functional intent.
Yes, that is the nice thing about named address spaces here.
> As valid as the SPI conflict is, it is a strawman in practice. C is filled with undefined and ambiguous cases, and this type of potential problem is very rare in practice.
I don't agree.

If you first say that named address spaces give a way of running arbitrary user code for something like "x = 1;", you are making a very big change in the way C works. And you make it very easy for programmers to make far-reaching code changes in unexpected ways.

Imagine a program for controlling a music system. You have a global variable "volume", set in the main loop when the knob is checked, and read in a timer interrupt that is used to give smooth transition of the actual volume output (for cool fade-in and fade-out). Somebody then decides that the volume should be kept in non-volatile memory so that it is kept over power cycles. Great - you just stick a "_I2CEeprom" address space qualifier on the definition of "volume". Job done. Nobody notices that the timer interrupts now take milliseconds instead of microseconds to run. And nobody - except the unlucky customer - notices that all hell breaks loose and his speakers are blown when the volume timer interrupt happens in the middle of a poll of the I2C temperature sensors.

Now, you can well say that this is all bad program design, or poor development methodology, or insufficient test procedures. But the point is that allowing such address space modifiers so simply changes the way C works - it changes what people expect from C. A C programmer has a very different expectation from "x = 1;" than from "x = readFromEeprom(address);".

I am /not/ saying the benefits are not worth the costs here - I am saying this needs to be considered very, very carefully, and features need to be viewed in the context of the effects they can cause here. There are no /right/ answers - but calling it "a strawman in practice" is very much the /wrong/ answer. Problems that occur very rarely are the worst kind of problems.

This is a very different case from something like flash access, or access to RAM in different pages, where the access is quite clearly defined and has definite and predictable timings. You may still have challenges - if you need to set a "page select register", how do you ensure that everything works with interrupts that may also use this address space? But the challenges are smaller, and the benefits greater.
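To make that concrete, here is a sketch; _I2CEeprom is an imagined user-defined address space qualifier, not something from the TR or from any shipping compiler that I know of:

    /* Hypothetical sketch of the hazard described above.  The _I2CEeprom
     * qualifier is imaginary and is left in a comment so the fragment
     * still compiles as plain C. */
    #include <stdint.h>

    volatile uint8_t volume;          /* today: a plain RAM variable */
    /* _I2CEeprom volatile uint8_t volume;   <- the "one line" change discussed */

    void timer_isr(void)
    {
        /* With the plain variable this is a single load.  With the imagined
         * _I2CEeprom qualifier the same source line would start an I2C
         * transaction inside an interrupt - milliseconds instead of
         * microseconds, and a bus conflict if mainline code is already
         * talking to another device on the bus. */
        uint8_t v = volume;
        (void)v;                      /* feed v to the output stage here */
    }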
>> You need to think long and hard as to whether this is something desirable in a C compiler.
>
> I have, and it is. Once I moved past the general single-address-space C model, named address spaces opened a level of flexibility that allows C to be used in a variety of application environments that conventional C code does not work well for.
>
>> It would mean giving up the kind of transparency and low-level predictability that are some of the key reasons people choose C over C++ for such work. If the convenience of being able to access different types of data in the same way in code is worth it, then these issues must be made clear and the mechanisms developed - if not, then the idea should be dropped. A half-written half-thought-out annex is not the answer.
>
> I buy the documentation point.
>
> From a usage point of view I disagree. Writing an application program that can be spread over many processors is a good example.
That is a very different kind of programming from the current mainstream, and it is questionable whether C is a sensible choice of language for such systems. But okay, let's continue...
> In the two decades since this work was initially done, things have changed considerably from consumer products that distributed an application over 3 or 4 processors (after initially prototyping on a single processor). In those products, processor usage was almost always allocated manually using geographical centers of reference.
>
> This has evolved to compiler analysis that automates the whole process over many, many processors.
I assume you are not talking about multi-threaded code working on an SMP system - that is already possible in C, especially with C11 features like threading, atomic access, and thread-local data. (Of course more features might be useful, and just because it is possible does not mean programmers get things right.)

You are talking about MPPA ("massively parallel processor array"), where you have many small cores with local memory distributed around a chip, with communication channels between the nodes.

I would say that named address spaces are not the answer here - the answer is to drop C, or at least /substantially/ modify it. The XMOS xC language is an example. A key point is to allow the definition of a "node" of work with local data, functions operating in the context of that node, and communication channels in and out. Nodes should not be able to access data or functions on other nodes except through the channels, though for convenience of programming you might allow access to fixed data (compile-time constants, and functions with no static variables, which can all be duplicated as needed). Channel-to-channel connections should ideally be fixed at compile time, allowing the linker/placer/router to arrange the nodes to match the physical layout of the device. Lots of fun, but not C as we know it.
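A rough sketch of that node-and-channel shape, written as C with an invented channel API (chan_t, chan_send and chan_recv are placeholders for what xC expresses directly in the language):

    /* Sketch only: the channel type and functions below are hypothetical. */
    #include <stdint.h>

    typedef struct chan chan_t;                      /* opaque channel handle (assumed) */
    extern void    chan_send(chan_t *c, int32_t v);  /* hypothetical */
    extern int32_t chan_recv(chan_t *c);             /* hypothetical */

    /* One node: owns its local state and talks to the rest of the array only
     * through its input and output channels. */
    void filter_node(chan_t *in, chan_t *out)
    {
        int32_t acc = 0;                             /* node-local, never shared memory */
        for (;;) {
            int32_t sample = chan_recv(in);
            acc = (3 * acc + sample) / 4;            /* simple smoothing filter */
            chan_send(out, acc);
        }
    }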
On 2017-08-06 upsidedown@downunder.com wrote in comp.arch.embedded:
> On Sat, 5 Aug 2017 15:20:40 -0500, Les Cargill <lcargill99@comcast.com> wrote:
>
>> IMO, a reputable EE programme is still probably the best way. CS programs still vary too much; CS may or may not be a second-class setup in many universities.
>>
>> I get the feeling that *analog* engineers still have a stable job base because it's much harder to fake that. It's somewhat harder.
Yes, a good understanding of analog (and digital) electronics is IMO still the best starting point if you plan to build and program "lower level" devices, like "IoT" devices.
>> And I'd warn the OP against specifically targeting IoT. It's a big bubble. People win in bubbles but it's not likely you will be among them.
>
> I have often wondered what this IoT hype is all about. It seems to be very similar to the PLC (Programmable Logic Controller) used for decades. You need to do some programming but, equally important, interface to the external world (sensors, relay controls and communication to other devices).
"IoT" mostly seems a new buzz word for things that have been done for decades, but then with improved (fancier) user interface. Saw an article on new IoT rat traps lately: "Remotely monitor the trap, warns if activated or battery low etc. Uses SMS to communicate with server". Now, that just sounds like what we did 20 years ago. But then we called it M2M communication and it did not have such a pretty web interface and we did not have to hand over all our data to Google or some other party. And there was no 'cloud', just a server. And ofcourse there are sensible IoT devices and services, but too many things are just labeled "IoT" for the label value alone. And what about this "new" thing: "Edge Computing" Something "new": Process the data locally (on the embedded device) before you send it to the server. Again something that has been done for decades (someone in this thread called it the "pork cycle"?) because we needed to. The slow serial connections just couldn't handle the raw, unprocessed data and servers could not handle data processing for many devices simultanously. Just sending everything over to the server was only made possible by fast intervet connections. But they now find out that with so many devices evrything is drowned in a data swamp. So bright new idea: Process locally and invent new buzz word. Hmm, I think I'm starting to sound old. ;-( -- Stef (remove caps, dashes and .invalid from e-mail address to reply by mail) Death is nature's way of saying `Howdy'.
On 2017-08-18 8:45 AM, David Brown wrote:
> On 17/08/17 18:15, Walter Banks wrote:
>> On 2017-08-17 10:06 AM, David Brown wrote:
>>> On 17/08/17 14:24, Walter Banks wrote:
>> To be really clear this was a TR and never expected to be added to the C standards at the time.
>
> I assume that it was hoped to become an addition to the C standards, or at least a basis and inspiration for such additions - otherwise what was the point? I would be quite happy with the idea of "supplementary" standards to go along with the main C standards, to add features or to provide a common set of implementation-dependent features. For example, Posix adds a number of standard library functions, and gives guarantees about the size and form of integers - thus people can write code that is portable to Posix without imposing requirements on compilers for an 8051. A similar additional standard giving features for embedded developers, but without imposing requirements on PC programmers, would make sense.
>> In the current environment I would like to see the C standards moved forward to support the emerging ISAs. There are many current applications that need additions to the language to describe effective solutions to some problems. Ad-hoc additions prevent the very thing that C is promoted for, that is, portability.
>
> C is intended to support two significantly different types of code. One is portable code that can run on a wide range of systems. The other is system-specific code that is targeted at a very small number of systems. If you are writing code that depends on features of a particular ISA, then you should be using target-specific or implementation-dependent features.
>
> If a new feature is useful across a range of targets, then sometimes a middle ground would make more sense. The C standards today have that in the form of optional features. For example, most targets support nice 8-bit, 16-bit, 32-bit and 64-bit integers with two's complement arithmetic. But some targets do not support them. So C99 and C11 give standard names and definitions of these types, but make them optional. This works well for features that many targets can support, and many people would have use of.
>
> For features that are useful on only a small number of ISAs, they should not be in the main C standards at all - a supplementary standard would make more sense. Yes, that would mean fragmenting the C world somewhat - but I think it would still be a better compromise.
At the time 18037 was written there was a consensus that C should have a core set of common features and additional standards written to support specific additional application areas. The working title for 18037 was "C standards for Embedded Systems". Common core features turned out in practice to be very difficult to agree on, and the idea was essentially abandoned. The standard names were the way of tying more diverse users together. In general they have worked well to support the types of embedded work that I do without straying too far from the C language.
>> So many of the development tools still are restricted by the technology limits of development environments of 40 years ago.
>
> It is the price of backwards compatibility. Like most C programmers, I have my own ideas of what is "old cruft" that could be removed from C standards without harm to current users. And like most C programmers, my ideas about what is "old cruft" will include things that some other C programmers still use to this day.
The argument is more about development tools than the language. Our tools, for example, support both compiling to objects and linking, as well as absolute compilation straight to an executable. We have supported both for a long time, and our customers are split over which approach they use for application development. We have always compiled directly to machine code in our tools; that too is not a language-specific issue. Development platforms once had limited resources that were overcome with linking and post-assembly translation. Those restrictions don't apply any more.

The effects of old code generation technology are even more manifest than that. Linking has become a lot smarter in terms of code generation, but it is a lot more computationally expensive than running a compiler strategy pass to analyze the data and control flow of an application. That information can give the compiler an overall plan for generating the code for the whole application.
>> The named address space has often been used to support diverse forms of memory. To use your example "x = 1;": the choice of where x is located and how it is accessed is made where it is declared. How it is handled after that is made by the compiler. The assumption is that the code is written with functional intent.
>
> Yes, that is the nice thing about named address spaces here.
>
>> As valid as the SPI conflict is, it is a strawman in practice. C is filled with undefined and ambiguous cases, and this type of potential problem is very rare in practice.
>
> I don't agree.
>
> I am /not/ saying the benefits are not worth the costs here - I am saying this needs to be considered very, very carefully, and features need to be viewed in the context of the effects they can cause here. There are no /right/ answers - but calling it "a strawman in practice" is very much the /wrong/ answer. Problems that occur very rarely are the worst kind of problems.
I essentially stand behind my comments. In practice, moving variable access methods into named address spaces has caused few problems.
> That is a very different kind of programming from the current mainstream, and it is questionable whether C is a sensible choice of language for such systems. But okay, let's continue...
Why? I have no real conflict with historical C and generally have no reason to want to impact old functionality. My approach is similar to the change from K&R argument declarations: add new syntax, support both, and 20 years later the marketplace will sort out which is used.
>> In the two decades since this work was initially done, things have changed considerably from consumer products that distributed an application over 3 or 4 processors (after initially prototyping on a single processor). In those products, processor usage was almost always allocated manually using geographical centers of reference.
>>
>> This has evolved to compiler analysis that automates the whole process over many, many processors.
>
> I assume you are not talking about multi-threaded code working on an SMP system - that is already possible in C, especially with C11 features like threading, atomic access, and thread-local data. (Of course more features might be useful, and just because it is possible does not mean programmers get things right.)
>
> You are talking about MPPA ("massively parallel processor array"), where you have many small cores with local memory distributed around a chip, with communication channels between the nodes.
That is a close enough description. C has been filled with ad-hoc separate memory spaces: the thread-local data you just mentioned, separate DSP memories, the historically real separate spaces of small embedded systems, paging, and protected memory. Don't discard these, but formalize their declaration and use. Do it in a way that can incorporate functionally what has been done, and don't do anything to impede the continued use of what is there now. (A small C11 sketch of the thread-local case follows below.)

In a similar way, look at the current approach to multiprocessor support. How different are threads from multiple execution units? Why shouldn't multiple processors be managed in ways similar to how memory space is currently managed and allocated, at least allowing these decisions to be machine-optimized instead of manually optimized? Finally, why shouldn't generic approaches be formalized so the tools aren't restricting application development?

My arguments for doing this in the C context are two. First, the real impact on the language is small; all are additions, not changes, and have no impact on existing code bases. Second, C is a living language and has lasted as long as it has because the standards for the language are there to codify current practice.

w..
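Here is that thread-local sketch - plain C11 using <threads.h>, which is itself an optional feature, so not every toolchain ships it; each thread sees its own copy of the qualified variable with no change at the points of use:

    /* Thread-local storage: an example of a separate "space" already in
     * standard C (C11).  <threads.h> is optional. */
    #include <threads.h>
    #include <stdio.h>

    static _Thread_local int last_status = 0;

    static int worker(void *arg)
    {
        (void)arg;
        last_status = 42;                        /* touches only this thread's copy */
        printf("worker sees %d\n", last_status);
        return 0;
    }

    int main(void)
    {
        thrd_t t;
        if (thrd_create(&t, worker, NULL) == thrd_success)
            thrd_join(t, NULL);
        printf("main sees %d\n", last_status);   /* still 0 in main's thread */
        return 0;
    }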
Both Computer Engineering and Electronics and Communication Engineering can provide a strong foundation for working on embedded systems and IoT. However, the specific focus and coursework may vary between these programs, and the best choice depends on your interests and career goals.

Computer Engineering:
Focus: Computer engineering typically emphasizes the design and integration of computer systems. This includes hardware and software aspects, making it well-suited for working on embedded systems where both hardware and software play crucial roles.
Relevance to IoT: Computer engineering programs often cover topics such as microcontrollers, real-time operating systems, and hardware-software interfacing, which are directly applicable to IoT development.
Electronics and Communication Engineering:

Focus: This field is more inclined towards the design and development of electronic systems, communication systems, and signal processing. While it may not delve as deeply into software aspects as computer engineering, it provides a strong foundation in hardware design and communication technologies.
Relevance to IoT: Electronics and Communication Engineering can be beneficial for IoT, especially in the context of sensor design, communication protocols, and networking aspects of IoT systems.

Computer and Communication Engineering:
Focus: This interdisciplinary program combines aspects of computer engineering and communication engineering, offering a balanced approach to both fields.
Relevance to IoT: With a focus on both computer and communication aspects, this program could provide a well-rounded education for IoT, covering both the hardware and communication aspects of embedded systems.

Choosing the Right Program:
Consider the curriculum of each program at the specific university you are interested in. Look for courses that cover topics such as microcontrollers, embedded systems, communication protocols, and IoT applications. Additionally, consider any opportunities for hands-on projects or internships related to embedded systems and IoT.

If possible, reach out to current students or faculty members in each program to gain insights into the specific strengths and opportunities each program offers for pursuing a career in embedded systems and IoT.

Ultimately, both Computer Engineering and Electronics and Communication Engineering can lead to successful careers in IoT, so choose the program that aligns more closely with your interests and career aspirations.