Reply by Alex Afti December 19, 2023
Both Computer Engineering and Electronics and Communication Engineering can provide a strong foundation for working on embedded systems and IoT. However, the specific focus and coursework may vary between these programs, and the best choice depends on your interests and career goals.

Computer Engineering:
Focus: Computer engineering typically emphasizes the design and integration of computer systems. This includes hardware and software aspects, making it well-suited for working on embedded systems where both hardware and software play crucial roles.
Relevance to IoT: Computer engineering programs often cover topics such as microcontrollers, real-time operating systems, and hardware-software interfacing, which are directly applicable to IoT development.

Electronics and Communication Engineering:
Focus: This field is more inclined towards the design and development of electronic systems, communication systems, and signal processing. While it may not delve as deeply into software aspects as computer engineering, it provides a strong foundation in hardware design and communication technologies.
Relevance to IoT: Electronics and Communication Engineering can be beneficial for IoT, especially in the context of sensor design, communication protocols, and networking aspects of IoT systems.

Computer and Communication Engineering:
Focus: This interdisciplinary program combines aspects of computer engineering and communication engineering, offering a balanced approach to both fields.
Relevance to IoT: With a focus on both computer and communication aspects, this program could provide a well-rounded education for IoT, covering both the hardware and communication aspects of embedded systems.

Choosing the Right Program:
Consider the curriculum of each program at the specific university you are interested in. Look for courses that cover topics such as microcontrollers, embedded systems, communication protocols, and IoT applications. Additionally, consider any opportunities for hands-on projects or internships related to embedded systems and IoT.

If possible, reach out to current students or faculty members in each program to gain insights into the specific strengths and opportunities each program offers for pursuing a career in embedded systems and IoT.

Ultimately, both Computer Engineering and Electronics and Communication Engineering can lead to successful careers in IoT, so choose the program that aligns more closely with your interests and career aspirations.
Reply by Walter Banks August 25, 2017
On 2017-08-18 8:45 AM, David Brown wrote:
> On 17/08/17 18:15, Walter Banks wrote: >> On 2017-08-17 10:06 AM, David Brown wrote: >>> On 17/08/17 14:24, Walter Banks wrote: >>>
>> >> To be really clear this was a TR and never expected to be added to >> the C standards at the time. > > I assume that it was hoped to become an addition to the C standards, > or at least a basis and inspiration for such additions - otherwise > what was the point? I would be quite happy with the idea of > "supplementary" standards to go along with the main C standards, to > add features or to provide a common set of implementation-dependent > features. For example, Posix adds a number of standard library > functions, and gives guarantees about the size and form of integers - > thus people can write code that is portable to Posix without imposing > requirements on compilers for an 8051. A similar additional standard > giving features for embedded developers, but without imposing > requirements on PC programmers, would make sense.
> >> >> In the current environment I would like to see the C standards >> moved forward to support the emerging ISA's. There are many >> current applications that need additions to the language to >> describe effective solutions to some problems. Ad-hoc additions >> prevent the very thing that C is promoted for, that is >> portability. > > C is intended to support two significantly different types of code. > One is portable code that can run on a wide range of systems. The > other is system-specific code that is targeted at a very small number > of systems. If you are writing code that depends on features of a > particular ISA, then you should be using target-specific or > implementation-dependent features. > > If a new feature is useful across a range of targets, then sometimes > a middle ground would make more sense. The C standards today have > that in the form of optional features. For example, most targets > support nice 8-bit, 16-bit, 32-bit and 64-bit integers with two's > complement arithmetic. But some targets do not support them. So C99 > and C11 give standard names and definitions of these types, but make > them optional. This works well for features that many targets can > support, and many people would have use of. > > For features that are useful on only a small number of ISA's, they > should not be in the main C standards at all - a supplementary > standard would make more sense. Yes, that would mean fragmenting the > C world somewhat - but I think it would still be a better > compromise.
At the time 18037 was written there was a consensus that C should have a core set of common features, with additional standards written to support specific application areas. The working title for 18037 was "C standards for Embedded Systems". Common core features turned out in practice to be very difficult to agree on, and that effort was essentially abandoned. The standard names were the way of tying more diverse users together. In general they have worked well to support the types of embedded work that I do without straying too far from the C language.
>> >> So many of the development tools still are restricted by the >> technology limits of development environments of 40 years ago. > > It is the price of backwards compatibility. Like most C programmers, > I have my own ideas of what is "old cruft" that could be removed from > C standards without harm to current users. And like most C > programmers, my ideas about what is "old cruft" will include things > that some other C programmers still use to this day.
The argument is more about development tools than language. Our tools, for example, support both compiling to objects and linking, as well as absolute compilation straight to an executable. We have supported both for a long time. Our customers are split over which approach they use for application development. We have always compiled directly to machine code in our tools; that too is not a language-specific issue. Development platforms once had limited resources that were overcome with linking and post-assembly translation. Those restrictions no longer apply. The effects of old code-generation technology are even more manifest than that. Linking has become a lot smarter in terms of code generation, but it is a lot more computationally expensive than running a compiler strategy pass to analyze the data and control flow of an application. That information can give a compiler an overall plan for generating the application's code.
>> >> The named address space has often been used to support diverse >> forms of memory. To use your example x = 1; The choice where x is >> located and how it is accessed is made where it is declared. How it >> is handled after that is made by the compiler. The assumption is >> that the code is written with functional intent. >> > > Yes, that is the nice thing about named address spaces here. > >> As valid as the SPI conflict is it is a strawman in practice. C is >> filled with undefined and ambiguous cases and this type of >> potential problem in practice is very rare. > > I don't agree. > > I am /not/ saying the benefits are not worth the costs here - I am > saying this needs to be considered very, very carefully, and > features needed to be viewed in the context of the effects they can > cause here. There are no /right/ answers - but calling it "a strawman > in practice" is very much the /wrong/ answer. Problems that occur > very rarely are the worst kind of problems. >
I essentially stand behind my comments. Moving variable access methods around using named address spaces has caused few problems in practice.
> > That is a very different kind of programming from the current > mainstream, and it is questionably as to whether C is a sensible > choice of language for such systems. But okay, let's continue...
Why? I have no real conflict with historical C and generally have no reason to want to impact old functionality. My approach is similar to the K&R argument-declaration changes: add new syntax, support both, and 20 years later the marketplace will have sorted out which is used.
> >> In the two decades since this work was initially done things have >> changed considerably from consumer products that distributed an >> application over 3 or 4 processors. (after initially prototyping on >> a single processor). In these processor usage was almost always >> manually allocation using geographical centers of reference. >> >> This has evolved to compiler analysis that automate this whole >> process over many many processors. >> > > I assume you are not talking about multi-threaded code working on an > SMP system - that is already possible in C, especially with C11 > features like threading, atomic access, and thread-local data. (Of > course more features might be useful, and just because it is possible > does not mean programmers get things right.) > > You are talking about MPPA ("Multi purpose processor array") where > you have many small cores with local memory distributed around a > chip, with communication channels between the nodes. >
That is a close enough description. C has been filled with ad-hoc separate memory spaces: the thread-local data you just mentioned, separate DSP memories, the genuinely separate address spaces of small embedded systems, paging and protected memory. Don't discard these, but formalize their declaration and use. Do it in a way that can functionally incorporate what has been done, and don't do anything to impede the continued use of what is there now.

In a similar way, look at the current approach to multiprocessor support. How different are threads from multiple execution units? Why shouldn't multiprocessors be managed in ways similar to how memory space is managed and allocated now, at least allowing these allocations to be machine-optimized instead of manually optimized? Finally, why shouldn't generic approaches be formalized so the tools aren't restricting application development?

My arguments for doing this in the C context are two. First, the real impact on the language is small: all are additions, not changes, and have no impact on existing code bases. Second, C is a living language and has lasted as long as it has because standards for the language are there to codify current practices.

w..
Reply by Stef August 21, 2017
On 2017-08-06 upsidedown@downunder.com wrote in comp.arch.embedded:
> On Sat, 5 Aug 2017 15:20:40 -0500, Les Cargill ><lcargill99@comcast.com> wrote: > >>IMO, a reputable EE programme is still probably the best way. CS >>programs still vary too much; CS may or may not be a second-class >>setup in many universities. >> >>I get the feeling that *analog* engineers still have a stable job >>base because it's much harder to fake that. It's somewhat harder.
Yes, a good understanding of analog (and digital) electronics is IMO still the best starting point if you plan to build and program "lower level" devices, like "IoT" devices.
>>And I'd warn the OP against specifically targeting IoT. It's a big >>bubble. People win in bubbles but it's not likely you will be among >>them. > > I have often wondered what this IoT hype is all about. It seems to be > very similar to the PLC (Programmable Logic Controller) used for > decades. You need to do some programming but as equally important > interface to he external world (sensors, relay controls and > communication to other devices).
"IoT" mostly seems a new buzz word for things that have been done for decades, but then with improved (fancier) user interface. Saw an article on new IoT rat traps lately: "Remotely monitor the trap, warns if activated or battery low etc. Uses SMS to communicate with server". Now, that just sounds like what we did 20 years ago. But then we called it M2M communication and it did not have such a pretty web interface and we did not have to hand over all our data to Google or some other party. And there was no 'cloud', just a server. And ofcourse there are sensible IoT devices and services, but too many things are just labeled "IoT" for the label value alone. And what about this "new" thing: "Edge Computing" Something "new": Process the data locally (on the embedded device) before you send it to the server. Again something that has been done for decades (someone in this thread called it the "pork cycle"?) because we needed to. The slow serial connections just couldn't handle the raw, unprocessed data and servers could not handle data processing for many devices simultanously. Just sending everything over to the server was only made possible by fast intervet connections. But they now find out that with so many devices evrything is drowned in a data swamp. So bright new idea: Process locally and invent new buzz word. Hmm, I think I'm starting to sound old. ;-( -- Stef (remove caps, dashes and .invalid from e-mail address to reply by mail) Death is nature's way of saying `Howdy'.
Reply by David Brown August 18, 2017
On 17/08/17 18:15, Walter Banks wrote:
> On 2017-08-17 10:06 AM, David Brown wrote: >> On 17/08/17 14:24, Walter Banks wrote: >> >> The AVR needs different instructions for accessing data in flash and >> ram, and address spaces provide a neater and less error-prone solution >> than macros or function calls for flash data access. >> >> So far, so good - and if that is your work, then well done. The >> actual text of the document could, IMHO, benefit from a more concrete >> example usage of address spaces (such as for flash access, as that is >> likely to be a very popular usage). > > The named address space stuff is essentially all mine. > >> >> >> The register storage class stuff, however, is not something I would >> like to see in C standards. If I had wanted to mess with specific CPU >> registers such as flag registers, I would be programming in assembly. >> C is /not/ assembly - we use C so that we don't have to use assembly. >> There may be a few specific cases of particular >> awkward processors for which it is occasionally useful to have direct >> access to flag bits - those are very much in the minority. And they >> are getting more in the minority as painful architectures like COP8 >> and PIC16 are being dropped in favour of C-friendly processors. It >> is absolutely fine to put support for condition code registers (or >> whatever) into compilers as target extensions. I can especially see >> how it can help compiler implementers to write support libraries in >> C rather than assembly. But it is /not/ something to clutter up C >> standards or for general embedded C usage. > > To be really clear this was a TR and never expected to be added to the C > standards at the time.
I assume that it was hoped to become an addition to the C standards, or at least a basis and inspiration for such additions - otherwise what was the point? I would be quite happy with the idea of "supplementary" standards to go along with the main C standards, to add features or to provide a common set of implementation-dependent features. For example, Posix adds a number of standard library functions, and gives guarantees about the size and form of integers - thus people can write code that is portable to Posix without imposing requirements on compilers for an 8051. A similar additional standard giving features for embedded developers, but without imposing requirements on PC programmers, would make sense.
> > In the current environment I would like to see the C standards moved > forward to support the emerging ISA's. There are many current > applications that need additions to the language to describe effective > solutions to some problems. Ad-hoc additions prevent the very thing that > C is promoted for, that is portability.
C is intended to support two significantly different types of code. One is portable code that can run on a wide range of systems. The other is system-specific code that is targeted at a very small number of systems. If you are writing code that depends on features of a particular ISA, then you should be using target-specific or implementation-dependent features. If a new feature is useful across a range of targets, then sometimes a middle ground would make more sense. The C standards today have that in the form of optional features. For example, most targets support nice 8-bit, 16-bit, 32-bit and 64-bit integers with two's complement arithmetic. But some targets do not support them. So C99 and C11 give standard names and definitions of these types, but make them optional. This works well for features that many targets can support, and many people would have use of. For features that are useful on only a small number of ISA's, they should not be in the main C standards at all - a supplementary standard would make more sense. Yes, that would mean fragmenting the C world somewhat - but I think it would still be a better compromise. Incidentally, can you say anything about these "emerging ISA's" and the features needed? I fully understand if you cannot give details in public (of course, you'll need to do so some time if you want them standardised!).
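As a minimal sketch of how code adapts to those optional exact-width types (the wide_t name is just for illustration):

/* C99/C11 make the exact-width types optional: uint64_t exists only if
 * the target really has such a type.  Portable code can test for it
 * via the matching limit macro from <stdint.h>. */
#include <stdint.h>

#if defined(UINT64_MAX)
typedef uint64_t wide_t;            /* exactly 64 bits, two's complement */
#else
typedef unsigned long long wide_t;  /* fallback: at least 64 bits */
#endif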
> C standards are supposed to > codify existing practice and so often the politics of standards become > arguments about preserving old standards rather than support for newer > processor technology.
That is a major point of them, yes.
> I know from what I have been doing is both the > spirit and approach to code development in C can deal with changes in > applications and processor technology. > > So many of the development tools still are restricted by the technology > limits of development environments of 40 years ago.
It is the price of backwards compatibility. Like most C programmers, I have my own ideas of what is "old cruft" that could be removed from C standards without harm to current users. And like most C programmers, my ideas about what is "old cruft" will include things that some other C programmers still use to this day.
> >> >> The disappointing part of named address spaces is in Annex B.1. It is >> tantalisingly close to allowing user-defined address spaces with >> specific features such as neat access to data stored in other types of >> memory. But it is missing all the detail needed to make it work, how >> and when it could be used, examples, and all the thought into >> how it would interplay with other features of the language. It also >> totally ignores some major issues that are very contrary to the spirit >> and philosophy of C. When writing C, one expects "x = 1;" to operate >> immediately as a short sequence of instructions, or even to be removed >> altogether by the compiler optimiser. With a >> user-defined address space, such as an SPI eeprom mapping, this could >> take significant time, it could interact badly with other code (such >> as another thread or an interrupt the is also accessing the SPI bus), >> it could depend on setup of things outside the control of the >> compiler, and it could fail. > > The named address space has often been used to support diverse forms > of memory. To use your example x = 1; The choice where x is located > and how it is accessed is made where it is declared. How it is handled > after that is made by the compiler. The assumption is that the code > is written with functional intent. >
Yes, that is the nice thing about named address spaces here.
> As valid as the SPI conflict is it is a strawman in practice. C is > filled with undefined and ambiguous cases and this type of potential > problem in practice is very rare.
I don't agree.

If you first say that named address spaces give a way of running arbitrary user code for something like "x = 1;", you are making a very big change in the way C works. And you make it very easy for programmers to make far-reaching code changes in unexpected ways.

Imagine a program for controlling a music system. You have a global variable "volume", set in the main loop when the knob is checked, and read in a timer interrupt that is used to give smooth transition of the actual volume output (for cool fade-in and fade-out). Somebody then decides that the volume should be kept in non-volatile memory so that it is kept over power cycles. Great - you just stick a "_I2CEeprom" address space qualifier on the definition of "volume". Job done. Nobody notices that the timer interrupts now take milliseconds instead of microseconds to run. And nobody - except the unlucky customer - notices that all hell breaks loose and his speakers are blown when the volume timer interrupt happens in the middle of a poll of the I2C temperature sensors (see the sketch below).

Now, you can well say that this is all bad program design, or poor development methodology, or insufficient test procedures. But the point is that allowing such address space modifiers so simply changes the way C works - it changes what people expect from C. A C programmer has a very different expectation from "x = 1;" than from "x = readFromEeprom(address);".

I am /not/ saying the benefits are not worth the costs here - I am saying this needs to be considered very, very carefully, and features need to be viewed in the context of the effects they can cause here. There are no /right/ answers - but calling it "a strawman in practice" is very much the /wrong/ answer. Problems that occur very rarely are the worst kind of problems.

This is a very different case from something like flash access, or access to ram in different pages, where the access is quite clearly defined and has definite and predictable timings. You may still have challenges - if you need to set a "page select register", how do you ensure that everything works with interrupts that may also use this address space? But the challenges are smaller, and the benefits greater.
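For the music-system example, a minimal sketch (the _I2CEeprom qualifier is the hypothetical extension from the example, not a real compiler feature, and read_knob is an invented helper):

#include <stdint.h>

extern uint8_t read_knob(void);     /* invented helper, for illustration */

/* Only the declaration reveals where the variable lives. */
_I2CEeprom volatile uint8_t volume;

void knob_poll(void)
{
    /* Reads like an ordinary one-cycle store, but under the hypothetical
     * qualifier it would perform a multi-millisecond I2C EEPROM write -
     * invisible at the use site, and unsafe if an interrupt is using the
     * same I2C bus at the time. */
    volume = read_knob();
}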
> >> >> You need to think long and hard as to whether this is something >> desirable in a C compiler. > > I have and it is. Once I passed the general single address space > C model named address space opened a level of flexibility that > allows C to be used in a variety of application environments > that conventional C code does not work well for. > >> It would mean giving up the kind of transparency and low-level >> predictability that are some of the key reasons people choose C over >> C++ for such work. If the convenience of being able to access >> different types of data in the same way in code is worth it, then >> these issues must be made clear and the mechanisms developed - if not, >> then the idea should be dropped. A half-written >> half-thought-out annex is not the answer. > > I buy the documentation point. > > From a usage point I disagree. Writing an application program that can > be spread over many processors is a good example.
That is a very different kind of programming from the current mainstream, and it is questionable whether C is a sensible choice of language for such systems. But okay, let's continue...
> In the two decades > since this work was initially done things have changed considerably from > consumer products that distributed an application over 3 or 4 > processors. (after initially prototyping on a single processor). In > these processor usage was almost always manually allocation using > geographical centers of reference. > > This has evolved to compiler analysis that automate this whole process > over many many processors. >
I assume you are not talking about multi-threaded code working on an SMP system - that is already possible in C, especially with C11 features like threading, atomic access, and thread-local data. (Of course more features might be useful, and just because it is possible does not mean programmers get things right.)

You are talking about an MPPA ("massively parallel processor array"), where you have many small cores with local memory distributed around a chip, with communication channels between the nodes.

I would say that named address spaces are not the answer here - the answer is to drop C, or at least /substantially/ modify it. The XMOS xC language is an example. A key point is to allow the definition of a "node" of work with local data, functions operating in the context of that node, and communication channels in and out. Nodes should not be able to access data or functions on other nodes except through the channels, though for convenience of programming you might allow access to fixed data (compile-time constants, and functions with no static variables, which can all be duplicated as needed). Channel-to-channel connections should ideally be fixed at compile time, allowing the linker/placer/router to arrange the nodes to match the physical layout of the device. Lots of fun, but not C as we know it.
Reply by Walter Banks August 17, 2017
On 2017-08-17 10:06 AM, David Brown wrote:
> On 17/08/17 14:24, Walter Banks wrote: > > The AVR needs different instructions for accessing data in flash and > ram, and address spaces provide a neater and less error-prone > solution than macros or function calls for flash data access. > > So far, so good - and if that is your work, then well done. The > actual text of the document could, IMHO, benefit from a more concrete > example usage of address spaces (such as for flash access, as that is > likely to be a very popular usage).
The named address space stuff is essentially all mine.
> > > The register storage class stuff, however, is not something I would > like to see in C standards. If I had wanted to mess with specific > CPU registers such as flag registers, I would be programming in > assembly. C is /not/ assembly - we use C so that we don't have to > use assembly. There may be a few specific cases of particular > awkward processors for which it is occasionally useful to have direct > access to flag bits - those are very much in the minority. And they > are getting more in the minority as painful architectures like COP8 > and PIC16 are being dropped in favour of C-friendly processors. It > is absolutely fine to put support for condition code registers (or > whatever) into compilers as target extensions. I can especially see > how it can help compiler implementers to write support libraries in > C rather than assembly. But it is /not/ something to clutter up C > standards or for general embedded C usage.
To be really clear, this was a TR and was never expected to be added to the C standards at the time.

In the current environment I would like to see the C standards moved forward to support the emerging ISAs. There are many current applications that need additions to the language to describe effective solutions to some problems. Ad-hoc additions prevent the very thing that C is promoted for, that is, portability. C standards are supposed to codify existing practice, and so often the politics of standards become arguments about preserving old standards rather than supporting newer processor technology. I know from what I have been doing that both the spirit of and approach to code development in C can deal with changes in applications and processor technology.

So many of the development tools are still restricted by the technology limits of development environments of 40 years ago.
> > The disappointing part of named address spaces is in Annex B.1. It > is tantalisingly close to allowing user-defined address spaces with > specific features such as neat access to data stored in other types > of memory. But it is missing all the detail needed to make it work, > how and when it could be used, examples, and all the thought into > how it would interplay with other features of the language. It also > totally ignores some major issues that are very contrary to the > spirit and philosophy of C. When writing C, one expects "x = 1;" to > operate immediately as a short sequence of instructions, or even to > be removed altogether by the compiler optimiser. With a > user-defined address space, such as an SPI eeprom mapping, this could > take significant time, it could interact badly with other code (such > as another thread or an interrupt the is also accessing the SPI bus), > it could depend on setup of things outside the control of the > compiler, and it could fail.
The named address space has often been used to support diverse forms of memory. To use your example, x = 1; the choice of where x is located and how it is accessed is made where it is declared. How it is handled after that is made by the compiler (see the sketch below). The assumption is that the code is written with functional intent.

As valid as the SPI conflict is, it is a strawman in practice. C is filled with undefined and ambiguous cases, and this type of potential problem is very rare in practice.
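For a concrete illustration of that declaration-site choice, a minimal sketch using the avr-gcc "__flash" named address space mentioned elsewhere in this thread (assuming avr-gcc in C mode):

/* Only the declaration names the address space; every access site is
 * ordinary C, and the compiler selects flash-read (LPM) instructions
 * instead of RAM loads for this object. */
const __flash char msg[] = "boot";

char first_char(void)
{
    return msg[0];   /* compiled as a flash access on the AVR */
}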
> > You need to think long and hard as to whether this is something > desirable in a C compiler.
I have, and it is. Once I moved past the general single-address-space C model, named address spaces opened a level of flexibility that allows C to be used in a variety of application environments where conventional C code does not work well.
> It would mean giving up the kind of transparency and low-level > predictability that are some of the key reasons people choose C over > C++ for such work. If the convenience of being able to access > different types of data in the same way in code is worth it, then > these issues must be made clear and the mechanisms developed - if > not, then the idea should be dropped. A half-written > half-thought-out annex is not the answer.
I buy the documentation point.

From a usage point I disagree. Writing an application program that can be spread over many processors is a good example. In the two decades since this work was initially done, things have changed considerably from consumer products that distributed an application over 3 or 4 processors (after initially prototyping on a single processor). In those, processor usage was almost always allocated manually using geographical centers of reference. This has evolved into compiler analysis that automates this whole process over many, many processors.

w..
Reply by David Brown August 17, 2017
On 17/08/17 14:24, Walter Banks wrote:
> On 2017-08-17 3:37 AM, David Brown wrote: >> "IEC/ISO 18037" completely misses the point, and is a disaster for >> the world of embedded C programming. It is an enormous >> disappointment to anyone who programs small embedded systems in C, >> and it is no surprise that compiler implementers have almost entirely >> ignored it in the 15 years of its existence. Named address spaces >> are perhaps the only interesting and useful idea there, but the TR >> does not cover user-definable address spaces properly. > > > Guilty I wrote the section of 18037 on named address spaces based on our > use in consumer applications and earlier WG-14 papers. > > We extended the named address space material to also include processor > named space N1351,N1386
I don't know the details of these different versions of the papers. I have the 2008 draft of ISO/IEC TR 18037:2008 in front of me. With all due respect to your work and experience here, I have a good deal of comments on this paper. Consider it constructive criticism due to frustration at a major missed opportunity. In summary, TR 18037 is much like EC++ - a nice idea when you look at the title, but an almost total waste of time for everyone except compiler company marketing droids.

The basic idea of named address spaces that are syntactically like const and volatile qualifiers is, IMHO, a good plan. For an example usage, look at the gcc support for "__flash" address spaces in the AVR port of gcc: <https://gcc.gnu.org/onlinedocs/gcc/Named-Address-Spaces.html> The AVR needs different instructions for accessing data in flash and ram, and address spaces provide a neater and less error-prone solution than macros or function calls for flash data access.

So far, so good - and if that is your work, then well done. The actual text of the document could, IMHO, benefit from a more concrete example usage of address spaces (such as for flash access, as that is likely to be a very popular usage).

The register storage class stuff, however, is not something I would like to see in C standards. If I had wanted to mess with specific cpu registers such as flag registers, I would be programming in assembly. C is /not/ assembly - we use C so that we don't have to use assembly. There may be a few specific cases of particularly awkward processors for which it is occasionally useful to have direct access to flag bits - those are very much in the minority. And they are getting more in the minority as painful architectures like COP8 and PIC16 are being dropped in favour of C-friendly processors. It is absolutely fine to put support for condition code registers (or whatever) into compilers as target extensions. I can especially see how it can help compiler implementers to write support libraries in C rather than assembly. But it is /not/ something to clutter up C standards or for general embedded C usage.

The disappointing part of named address spaces is in Annex B.1. It is tantalisingly close to allowing user-defined address spaces with specific features such as neat access to data stored in other types of memory. But it is missing all the detail needed to make it work, how and when it could be used, examples, and all the thought into how it would interplay with other features of the language. It also totally ignores some major issues that are very contrary to the spirit and philosophy of C. When writing C, one expects "x = 1;" to operate immediately as a short sequence of instructions, or even to be removed altogether by the compiler optimiser. With a user-defined address space, such as an SPI eeprom mapping, this could take significant time, it could interact badly with other code (such as another thread or an interrupt that is also accessing the SPI bus), it could depend on setup of things outside the control of the compiler, and it could fail.

You need to think long and hard as to whether this is something desirable in a C compiler. It would mean giving up the kind of transparency and low-level predictability that are some of the key reasons people choose C over C++ for such work. If the convenience of being able to access different types of data in the same way in code is worth it, then these issues must be made clear and the mechanisms developed - if not, then the idea should be dropped.
A half-written, half-thought-out annex is not the answer.

One point that is mentioned in Annex B is specific little-endian and big-endian access. This is a missed opportunity for the TR - qualifiers giving explicit endianness to a type would be extremely useful, completely independently of the named address space concept. Such qualifiers would be simple to implement on all but the weirdest of hardware platforms, and would be massively useful in embedded programming.
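Until such qualifiers exist, the usual portable workaround is a pair of shift-based helpers; a minimal sketch:

/* Explicit big-endian access written as shifts; modern compilers
 * recognise this pattern and emit a single load plus byte swap where
 * the hardware allows it. */
#include <stdint.h>

static inline uint16_t load_be16(const uint8_t *p)
{
    return (uint16_t)(((uint16_t)p[0] << 8) | p[1]);
}

static inline void store_be16(uint8_t *p, uint16_t v)
{
    p[0] = (uint8_t)(v >> 8);
    p[1] = (uint8_t)(v & 0xFFu);
}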
> > The fixed point material in 18037 is in my opinion reasonable.
No, it is crap. Look at C99. Look what it gave us over C90. One vital feature that made a huge difference to embedded programming is <stdint.h> with fixed-size integer types. There is no longer any need for every piece of embedded C software, every library, every RTOS, to define its own types u16, u16t, uint_16_t, uWORD, RTOS_u16, and whatever. Now we can write uint16_t and be done with it.

Then someone has come along and written this TR with a total disregard for this. So /if/ this support gets widely implemented, and /if/ people start using it, what types will people use? Either they will use "signed long _Fract" and friends, making for unreadable code due to the long names and having undocumented target-specific assumptions that make porting an error-prone disaster, or we are going to see a proliferation of fract15_t, Q31, fp0_15, and a dozen different incompatible variations.

If this was going to be of any use, a set of specific, fixed-size type names should have been defined from day one. The assorted _Fract and _Accum types are /useless/. They should not exist. My suggestion for a naming convention would be uint0q16_t, int7q8_t, etc., for the number of bits before and after the binary point (see the sketch below). Implementations should be free to implement those that they can handle efficiently, and drop any that they cannot - but there should be no ambiguity. This would also avoid the next point - C99 was well established before the TR was written. What about the "long long" versions for completeness? Of course, with a sensible explicit naming scheme, as many different types as you want could exist.

Then there is the control of overflow. It is one thing to say saturation would be a nice idea - but it is absolutely, totally and completely /wrong/ to allow this to be controllable by a pragma. Explicit in the type - yes, that's fine. Implicit, based on what preprocessing directives happen to have passed before that bit of the source code is translated? Absolutely /not/. Equally, pragmas for precision and rounding - in fact, pragmas in general - are a terrible idea. Should the types behave differently in different files in the same code?

Next up - fixed point constants. Hands up all those who think it is intuitive that 0.5uk makes it obvious that this is an "unsigned _Accum" constant? Write it as "(uint15q16_t) 0.5" instead - make it clear and explicit. The fixed point constant suffixes exist purely because someone thought there should be suffixes and picked some letters out of their hat. Oh, and for extra fun, let's make these suffixes subtly different from the conversion specifiers for printf. You remember - that function that is already too big, slow and complicated for many embedded C systems.

Then there is the selection of functions in <stdfix.h>. We have type-generic maths support in C99. There is no place for individual functions like abshr, abslr, abshk, abslk - a single type-generic absfx would do the job. We don't /need/ these underlying functions. The implementation may have them, but C programmers don't need to see that mess. Hide it away as implementation details. That would leave everything much simpler to describe, and much simpler to use, and mean it would work with explicit names for the types.

And in the thirteen years between this TR first being published and today, when implementations are still rare, incomplete and inefficient, we have reached the point where microcontrollers that will do floating point quickly cost under a dollar.
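As a minimal sketch of what the explicit-naming scheme above could look like in today's C (the type and function names are hypothetical, following the int7q8_t pattern; nothing here is standard):

#include <stdint.h>

typedef int16_t int0q15_t;          /* 1 sign bit, 15 fraction bits */

static inline int0q15_t mul_q15(int0q15_t a, int0q15_t b)
{
    /* Widen, multiply, then drop 15 fraction bits to rescale.  Assumes
     * an arithmetic right shift, as on virtually all embedded targets. */
    return (int0q15_t)(((int32_t)a * (int32_t)b) >> 15);
}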
Fixed point is rapidly becoming of marginal use, or even irrelevant.

As for the hardware IO stuff, the less said about that the better. It will /never/ be used. It has no benefits over the system used almost everywhere today - volatile accesses through casted constant addresses (sketched below).

The TR has failed to give the industry anything that embedded C programmers need, it has made suggestions that are worse than useless, and by putting in so much that is not helpful it has delayed any hope of implementation and standardisation for the ideas that might have been helpful.
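For reference, that near-universal idiom, as a minimal sketch with an invented register address:

#include <stdint.h>

/* A volatile access through a casted constant address.  The address
 * 0x4000C000 is made up for illustration. */
#define UART0_DR  (*(volatile uint32_t *)0x4000C000u)

static void uart_send_byte(uint8_t c)
{
    UART0_DR = c;       /* one guaranteed store to the device register */
}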
> > We use both of these a lot especially in programming the massively > parallel ISA's I have been working on in the last few years. >
Implementation-specific extensions are clearly going to be useful for odd architectures like this. It is the attempt at standardisation in the TR that is a total failure.
Reply by Walter Banks August 17, 2017
On 2017-08-17 3:37 AM, David Brown wrote:
> "IEC/ISO 18037" completely misses the point, and is a disaster for > the world of embedded C programming. It is an enormous > disappointment to anyone who programs small embedded systems in C, > and it is no surprise that compiler implementers have almost entirely > ignored it in the 15 years of its existence. Named address spaces > are perhaps the only interesting and useful idea there, but the TR > does not cover user-definable address spaces properly.
Guilty: I wrote the section of 18037 on named address spaces, based on our use in consumer applications and earlier WG-14 papers. We extended the named address space material to also include processor named spaces (N1351, N1386). The fixed-point material in 18037 is in my opinion reasonable. We use both of these a lot, especially in programming the massively parallel ISAs I have been working on in the last few years.

w..
Reply by David Brown August 17, 2017
On 17/08/17 00:39, Walter Banks wrote:
> On 2017-08-10 9:11 AM, David Brown wrote: >> That sounds like a disaster for coupling compilers, linkers, OS's, and >> processor MMU setups. I don't see this happening automatically. Doing >> so /manually/ - giving explicit sections to variables, and explicitly >> configuring an MMU / MPU to make a particular area of the address space >> non-cached is fine. I have done it myself on occasion. But that's >> different from trying to make it part of the standard language. > > > couple comments on this. Compiling for multiple processors I have used > named address spaces to define private and shared space. IEC/ISO 18037
"IEC/ISO 18037" completely misses the point, and is a disaster for the world of embedded C programming. It is an enormous disappointment to anyone who programs small embedded systems in C, and it is no surprise that compiler implementers have almost entirely ignored it in the 15 years of its existence. Named address spaces are perhaps the only interesting and useful idea there, but the TR does not cover user-definable address spaces properly.
> The nice part of that is applications can start out running on a single > platform and then split later with minimum impact on the source code. > > Admittedly I have done this on non MMU systems. >
On some systems, such a "no_cache" keyword/attribute is entirely possible. My comment is not that this would not be a useful thing, but that it could not be a part of the C standard language.

For example, on the Nios processor (Altera soft cpu for their FPGAs - and I don't remember if this was just for the original Nios or the Nios2) the highest bit of an address was used to indicate "no cache, no reordering", but it was otherwise unused for address decoding. When you made a volatile access, the compiler ensured that the highest bit of the address was set. On that processor, implementing a "no_cache" keyword would be easy - it was already done for "volatile". But on a processor that has an MMU? It would be a serious problem. And how would you handle casts to a no_cache pointer? Casting a pointer to normal data into a pointer to volatile is an essential operation in lots of low-level code (see the sketch below). (It is implementation-defined behaviour, but works "as expected" in all compilers I have heard of.)

So for some processors, "no_cache" access is easy. For some, it would require support from the linker (or at least linker scripts) and MMU setup, but have no possibility for casts. For others, memory barrier instructions and cache flush instructions would be the answer. On larger processors, that could quickly be /very/ expensive - much more so than an OS call to get some uncached memory (dma_alloc_coherent() on Linux, for example). Uncached accesses cannot be implemented sensibly or efficiently in the same way on different processors, and in some systems it cannot be done at all.

The concept of cache is alien to the C standards. Any code that might need uncached memory is inherently low-level and highly system dependent. Therefore it is a concept that has no place in the C standards, even though it is a feature that could be very useful in many specific implementations for specific targets. A great thing about C is that there is no problem having such implementation-specific features and extensions.
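The cast in question, as a minimal sketch:

#include <stdint.h>

/* Casting a pointer to normal data into a pointer to volatile - the
 * low-level idiom referred to above.  Implementation-defined by the
 * standard, but honoured by the compilers discussed here. */
static void wait_for_flag(uint32_t *flag)
{
    volatile uint32_t *vflag = (volatile uint32_t *)flag;
    while (*vflag == 0) {
        /* each iteration performs a real re-read of memory */
    }
}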
> > I have linked across multiple processors including cases of > heterogeneous processors. > > An other comment about inter-processor communication. We found out a > long time ago that dual or multi port memory is not that much of an > advantage in most applications. The data rate can actually be quite low. > We have done quite a few consumer electronics packages with serial data > well below a mbit some as low as 8Kbits/second. It creates skew between > processor execution but generally has very limited impact on application > function or performance. > > w..
Reply by Walter Banks August 16, 2017
On 2017-08-10 9:11 AM, David Brown wrote:
> That sounds like a disaster for coupling compilers, linkers, OS's, and > processor MMU setups. I don't see this happening automatically. Doing > so /manually/ - giving explicit sections to variables, and explicitly > configuring an MMU / MPU to make a particular area of the address space > non-cached is fine. I have done it myself on occasion. But that's > different from trying to make it part of the standard language.
A couple of comments on this. Compiling for multiple processors, I have used named address spaces to define private and shared space (IEC/ISO 18037). The nice part of that is applications can start out running on a single platform and then be split later with minimum impact on the source code. Admittedly I have done this on non-MMU systems.

I have linked across multiple processors, including cases of heterogeneous processors.

Another comment about inter-processor communication: we found out a long time ago that dual- or multi-port memory is not that much of an advantage in most applications. The data rate can actually be quite low. We have done quite a few consumer electronics packages with serial data well below a megabit, some as low as 8 kbits/second. It creates skew between processor execution but generally has very limited impact on application function or performance.

w..
Reply by David Brown August 10, 2017
On 10/08/17 13:30, upsidedown@downunder.com wrote:
> On Wed, 09 Aug 2017 10:03:40 +0200, David Brown > <david.brown@hesbynett.no> wrote: > >> On 08/08/17 20:07, upsidedown@downunder.com wrote: >>> On Tue, 08 Aug 2017 17:11:22 +0200, David Brown >>> <david.brown@hesbynett.no> wrote: >>> >>>> On 08/08/17 16:56, Tom Gardner wrote: >>>>> On 08/08/17 11:56, David Brown wrote: >>>>>> On 08/08/17 12:09, Tom Gardner wrote: >>>>>>> On 08/08/17 10:26, David Brown wrote: >>>> >>>>> >>>>> >>>>>>> Consider single 32 core MCUs for £25 one-off. (xCORE) >>> >>> When there are a large number of cores/processors available, I would >>> start a project by assigning a thread/process for each core. Later on >>> you might have to do some fine adjustments to put multiple threads >>> into one core or split one thread into multiple cores. >> >> The XMOS is a bit special - it has hardware multi-threading. The 32 >> virtual core device has 4 real cores, each with 8 hardware threaded >> virtual cores. For hardware threads, you get one thread per virtual core. >> >>>>> >>>>> Sooner or later people will have to come to terms with >>>>> non-global memory and multicore processing and (preferably) >>>>> message passing. Different abstractions and tools /will/ >>>>> be required. Why not start now, from a good sound base? >>>>> Why hobble next-gen tools with last-gen problems? >>>>> >>>> >>>> That is /precisely/ the point - if you view it from the other side. A >>>> key way to implement message passing, is to use shared memory underneath >>>> - but you isolate the messy details from the ignorant programmer. If >>>> you have write the message passing library correctly, using features >>>> such as "consume" orders, then the high-level programmer can think of >>>> passing messages while the library and the compiler conspire to give >>>> optimal correct code even on very weak memory model cpus. >>>> >>>> You are never going to get away from shared memory systems - for some >>>> kind of multi-threaded applications, it is much, much more efficient >>>> than memory passing. But it would be good if multi-threaded apps used >>>> message passing more often, as it is easier to get correct. >>> >>> What is the issue with shared memory systems ? Use unidirectional >>> FIFOs between threads in shared memory for the actual message. The >>> real issue how to inform the consuming thread that there is a new >>> message available in the FIFO. >>> >> >> That is basically how you make a message passing system when you have >> shared memory for communication. The challenge for modern systems is >> making sure that other cpus see the same view of memory as the sending >> one. It is not enough to simply write the message, then update the >> head/tail pointers for the FIFO. You have cache coherency, write >> re-ordering buffers, out-of-order execution in the cpu, etc., as well as >> compiler re-ordering of writes. > > Sure you have to put the pointers into non-cached memory or into > write-through cache or use some explicit instruction to perform a > cache write-back. >
You also need the data pointed to in coherent memory of some sort (or synchronise it explicitly). It does not help if another processor sees the "data ready" flag become active before the data itself is visible!
> The problem is the granulation of the cache, typically at least a > cache line or a virtual memory page size.
No, that is rarely an issue. Most SMP systems have cache snooping for consistency. It /is/ a problem on non-uniform multi-processing systems. (And cache lines can lead to cache line thrashing, which is a performance problem but not a correctness problem.)
> > While "volatile" just affects code generation, it would be nice to > have a e.g. "no_cache" keyword to affect run time execution and cache > handling. This would put these variables into special program sections > and let the linker put all variables requiring "no_cache" into the > same cache line or virtual memory page. The actual implementation > could then vary according to hardware implementation.
That sounds like a disaster for coupling compilers, linkers, OS's, and processor MMU setups. I don't see this happening automatically. Doing so /manually/ - giving explicit sections to variables, and explicitly configuring an MMU / MPU to make a particular area of the address space non-cached is fine. I have done it myself on occasion. But that's different from trying to make it part of the standard language.
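The manual approach looks roughly like this with gcc (a sketch; the section name is illustrative, and the linker script and MPU/MMU setup that actually make the section uncached are separate, target-specific steps):

#include <stdint.h>

/* gcc-specific: place the variable in a dedicated output section,
 * which the linker script maps to a region the MPU/MMU configures
 * as uncached.  ".uncached" is an invented name. */
__attribute__((section(".uncached")))
volatile uint32_t dma_done_flag;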
> > If usage of some specific shared data is defined as a single producer > thread (with full R/W access) and multiple consumer threads (with read > only access) in a write-back cache system, the producer would activate > the write-trough after each update, while each consumer would > invalidate_cache before any read access, forcing a cache reload before > using the data. The source code would be identical in both producer as > well as consumer threads, but separate binary code could be compiled > for the producer and the consumers.
That's what atomic access modes and fences are for in C11/C++11.
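A minimal sketch of such a FIFO - a single-producer/single-consumer ring using C11 acquire/release atomics (sizes and names are illustrative, and a C11 implementation without __STDC_NO_ATOMICS__ is assumed):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define QSIZE 64u   /* power of two, so free-running indices wrap cleanly */

static uint32_t slots[QSIZE];
static atomic_uint head, tail;   /* zero-initialised static atomics */

bool queue_put(uint32_t v)       /* called only by the producer */
{
    unsigned h = atomic_load_explicit(&head, memory_order_relaxed);
    unsigned t = atomic_load_explicit(&tail, memory_order_acquire);
    if (h - t == QSIZE)
        return false;            /* full */
    slots[h % QSIZE] = v;
    atomic_store_explicit(&head, h + 1, memory_order_release);
    return true;
}

bool queue_get(uint32_t *v)      /* called only by the consumer */
{
    unsigned t = atomic_load_explicit(&tail, memory_order_relaxed);
    unsigned h = atomic_load_explicit(&head, memory_order_acquire);
    if (h == t)
        return false;            /* empty */
    *v = slots[t % QSIZE];
    atomic_store_explicit(&tail, t + 1, memory_order_release);
    return true;
}

The release store on "head" publishes the slot contents, and the matching acquire load on the consumer side guarantees the data is visible before it is read - exactly the "flag before data" ordering problem discussed above.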
> > >> It would be nice to see cpus (or chipsets) having better hardware >> support for a variety of synchronisation mechanisms, rather than just >> "flush all previous writes to memory before doing any new writes" >> instructions. > > Is that really so bad limitation ?
For big SMP systems like modern x86 or PPC chips? Yes, it is - these barriers can cost hundreds of cycles of delay. And if you want the sequentially consistent barriers (not just acquire/release), so that all cores see the same order of memory, you need a broadcast that makes /all/ cores stop and flush all their write queues. (Cache lines don't need to be flushed - cache snooping takes care of that already.)

I have used a microcontroller with a dedicated "semaphore" peripheral block. It was very handy, and very efficient for synchronising between the two cores.
> >> Multi-port and synchronised memory is expensive, but >> surely it would be possible to have a small amount that could be used >> for things like mutexes, semaphores, and the control parts of queues. > > Any system with memory mapped I/O registers must have a mechanism that > will disable any caching operations for these peripheral I/O > registers. Extending this to some RAM locations should be helpful. >
Agreed. But that ram would, in practice, be best implemented as a separate block of fast ram independent from the main system ram. For embedded systems, a bit of on-chip static ram would make sense. And note that it is /not/ enough to be uncached - you also need to make sure that writes are done in order, and that reads are not done speculatively or out of order.
> --- > > BTW, discussing about massively parallel systems with shared memory > resembles the memory mapped file usage with some big data base > engines. > > In these systems big (up to terabytes) files are mapped into the > virtual address space. After that, each byte in each memory mapped > file is accessed just as a huge (terabyte) array of bytes (or some > structured type) by simply assignment statements. With files larger > than a few hundred megabytes, a 64 bit processor architecture is > really nice to have :-) > > The OS handles loading a segment from the physical disk file into the > memory using the normal OS page fault loading and writeback mechanism. > Instead of accessing the page file, the mechanism access the user data > base files. > > Thus you can think about the physical disks as the real memory and the > computer main memory as the L4 cache. Since the main memory is just > one level in the cache hierarchy, there are also similar cache > consistency issues as with other cached systems. In transaction > processing, typically some Commit/Rollback is used. >
There is some saying about any big enough problem in computing being just an exercise in caching, but I forget the exact quotation. Serious caching systems are very far from easy to make, ensuring correctness, convenient use, and efficiency.
> I guess that designing products around these massively parallel chips, > studying the cache consistency tricks used by memory mapped data base > file systems might be helpful. >
Indeed.