EmbeddedRelated.com
Forums

What a nightmare...

Started by Dimiter_Popoff September 30, 2016
Hi Dimiter,

On 10/2/2016 1:16 AM, Dimiter_Popoff wrote:

>>> Then I have plans for the t1042 (t1040) - how do you replace that? >> >> I'm sure you'll be able to find "equivalent" components (in terms of >> capabilities, if not packaging). The problem will be that they (almost >> assuredly!) won't run PPC binaries. Hence my comment about being wed >> to the PPC family in your design... > > I am far less sure one can find that than you are. Porting to another
As I said, you may not find what you want in a particular *package* (i.e., you may have to resort to multiple MCUs to achieve a desired level of performance) but it is amazing (almost SCARY!) to see just what you can buy nowadays, and for how *little* money (and power, real estate, etc.)

As my projects are distributed in nature, I'm facing the opposite problem: processors are becoming *too* capable -- it's difficult to come up with a use for all those capabilities! Yet, "bottom feeding" leaves me vulnerable to component obsolescence, etc. (as single-MCU designs will tend to push the sweet spot UP the curve instead of down -- esp with the Linux-weenies who think everything can be solved just by starting with a bloated OS!)
> architecture is not prohibitive for me, I did make vpa some 16 years > ago to move from 68k (cpu32) to PPC and now the vpa (virtual processor > assembly) compiler (well I guess it is a compiler and not an > assembler in spite of the language name) can be prepared for another > architecture more or less in a straight forward manner. > BUT, where is going this _documented_ part to come from? Right now > there is none on the market - and it would be insane to hope this > will improve.
IMO, *none* of the products on the market are truly "well documented", despite 1000+ page "datasheets". Nowhere is this more evident than in the ARM world, where most of the documentation you encounter is a regurgitation of the documentation from ARM Ltd re: their IP "components". Very little comes from the foundries who've globbed these "macrocells" together in a variety of (poorly documented) ways.

It's akin to reading a datasheet that is full of "typ" parameters but no "min" or "max" numbers -- as if they'll only give you a general idea of how the device will perform if used in a particular way (which ALSO is undocumented!)

[Nothing new here -- this has increasingly been the case as MCUs have evolved in complexity (and vendors lacked the imagination/resources to fully "quantify"/qualify them.)]
> Second, it will feel like a huge waste having to go down from power > to an inferior architecture - ARM, x86, MIPS etc. Survivable but not > nice at all.
Folks have been living with x86's (of their own CHOICE!) for years and still managing to get things done. The industry tends not to reward the *best* designs but, rather, the designs that ended up in the most successful *products*! (e.g., the x86 SHOULD have been still-born -- had it not been for some idiots at IBM!)
>>> There just are no processors of that complexity & power on the market >>> from anyone else, thus qualcomming them means an end to the processor >>> market as we know it. Unless you are a politburo member so you are >>> entitled to data on a part you are just out in the cold. >>> Then how long do you thing it will take for TI and the rest to get >>> qualcommed (by whoever). >>> I guess we'll all have to learn to herd cattle or something. Someone >>> somewhere has decided to put an end to uncontrolled computer >>> development - and I don't think we can do a damn thing about it. >> >> I see fewer and fewer firms designing component level systems. >> Instead, it seems that the processor/core comes from a "module >> vendor" and firms just add I/O signal conditioning. Folks >> being more concerned with whether or not they can "start >> debugging code TODAY" than fine-tuning the hardware to their specific >> design requirements. >> >> I've yet to finalize on a particular set of components as I'm hoping to >> ride the evolutionary wave forward until all my designs are done; then >> bind the designs to a particular set of components available/affordable >> at THAT time. Sort of avoiding the "premature optimization" that >> is inherent in selecting a hardware implementation. > > Just to warn you - don't be so sure you will find *any* part with > enough documentation unless you are fine with one of those who remain > in busyness - Microchip are still there and... not many others I think. > STM perhaps. Renesas - I don't know how documented their stuff is, > perhaps it is. > But none of these makes any parts large enough to compare to Freescale's > QorIQ series, none comes even close.
This is a double-edged sword. With smaller (less capable) parts, I have often had to rely on tricks/exploits to get some extra performance out of a device by leveraging some "poorly documented feature". With the larger devices, I can afford to be sloppier in my implementation; I can "trust" the silicon/vendor more (e.g., I don't need to worry about the details of the FP encoding in order to short-circuit some computation/test; I can just let the silicon run the test/computation more "formally"!)
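[Ed. note: as a generic sketch of the *kind* of FP-encoding trick being alluded to -- this is illustrative, not code from any poster's design: on a part with no FPU, the sign of an IEEE-754 single can be tested by peeking at one bit, avoiding a soft-float comparison entirely.]

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

/* On a part with no FPU, test the sign of an IEEE-754 binary32 by
 * inspecting bit 31 -- no soft-float library call needed.
 * (Caveat: unlike a formal `x < 0.0f` test, this reports true for
 * -0.0 and for negative NaNs.) */
static bool fp32_sign_bit(float x)
{
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);   /* well-defined way to view the encoding */
    return (bits >> 31) != 0;
}
```

The caveat is exactly the sort of corner that makes such exploits "tricks": the bit-peek and the formal comparison agree everywhere except the encodings (-0.0, NaN) most folks never test in the lab.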
> So unless the Freescale part of NXP get sold to someone else willing > to continue the job and just the rest of NXP get qualcommed we are > pretty much doomed, all of us here. > Unless one has his own silicon house one is just out of the computer > trade within just a few years - and at least I cannot think of having > one - being as small as I am.
Well, only *you* can comment on how suitable the current (and planned, FUTURE) offerings are for your product line, development system, etc. While I've a general idea of what your products do, I am clueless as to what "needs" to happen under the hood.

I know there is a lot less choice (in terms of FAMILIES) nowadays than in the 80's. OTOH, I also know that I don't have to spend as much time finely evaluating a particular processor for a particular application ("Hmmm... for a clock speed of X, how many wait states will I need for memory with a cycle time of Y? And, how does that alter the effective execution rate of this processor vs. that processor??")

[I can remember agonizing over whether the cost of upgrading a 1MHz MC6809 system to *2* MHz was justified -- despite the pressures being applied by the software folks! And, how that would compare to a *6* MHz Z80...]

On the one hand, designs have become far more "cookie cutter", less innovative/different (from other products). On the other hand, there's less time spent screwing around with the "best" way to design a DRAM interface, I/O subsystem, etc. This effectively shifts the emphasis from one skillset to another.

I'm revisiting design approaches that I used to tackle with serial ports (e.g., UART, HDLC, etc.) and deliberate awareness of the actual line protocol, now with more performant approaches (e.g., ethernet) and specific hardware assists (cuz the line protocols are so much more complicated!). Same problem, different approach.

On the one hand, simplifying system design. On the other, making it *harder* for folks to "enter the fray" (damn near anyone can hack together a UART driver; considerably harder to build a network/USB/BT stack!)
On 02/10/16 11:50, Don Y wrote:
> As my projects are distributed, in nature, I'm facing the opposite problem: > processors are becoming *too* capable -- it's difficult to come up with a use > for all those capabilities! Yet, "bottom feeding" leaves me vulnerable to > component obsolescence, etc. (as single MCU designs will tend to push the > sweet spot UP the curve instead of down -- esp with the Linux-weenies > who think everything can be solved just by starting with a bloated OS!)
There's an interesting alternative available for some dual-core processors: run an RTOS on one core and, say, *a* linux on the other. Xilinx touts that for their Zynq devices, which also contain significant FPGA resources. Xilinx is also touting its HLL->fpga+code products, but I haven't seriously looked at those. Some Zynq boards are down to £50, IIRC - but I don't know the cost of the newly announced single-core cost-optimised variants.
On 02.10.2016 г. 13:50, Don Y wrote:
> Hi Dimiter, > > On 10/2/2016 1:16 AM, Dimiter_Popoff wrote: > >>>> Then I have plans for the t1042 (t1040) - how do you replace that? >>> >>> I'm sure you'll be able to find "equivalent" components (in terms of >>> capabilities, if not packaging). The problem will be that they (almost >>> assuredly!) won't run PPC binaries. Hence my comment about being wed >>> to the PPC family in your design... >> >> I am far less sure one can find that than you are. Porting to another > > As I said, you may not find what you want in a particular *package* > (i.e., may have to resort to multiple MCU's to achieve a desired level > of performance) but it is amazing (almost SCARY!) to see just what you can > buy, nowadays, and for how *little* money (and power, real estate, etc.)
You can buy powerful silicon indeed - typically in assembled products. But you cannot buy anything *documented* that comes close to what you can buy from Freescale (now - still - NXP).
> .... > >> architecture is not prohibitive for me, I did make vpa some 16 years >> ago to move from 68k (cpu32) to PPC and now the vpa (virtual processor >> assembly) compiler (well I guess it is a compiler and not an >> assembler in spite of the language name) can be prepared for another >> architecture more or less in a straight forward manner. >> BUT, where is going this _documented_ part to come from? Right now >> there is none on the market - and it would be insane to hope this >> will improve. > > IMO, *none* of the products on the market are truly "well documented"; > despite 1000+ page "datasheets". Nowhere is this more evident than the > ARM world where most of the documentation you encounter is a regurgitation > of the documentation from ARM Ltd re: their IP "components". Very little > from the foundries who've globbed these "macrocells" together in a variety > of (poorly documented) ways.
The Motorola - Freescale - NXP processors have always been well enough documented for me. Well enough that I have never used a single bit of software except mine to put one into use for the last 25 years or so (and I have put a few, from small MCUs like the hc11 to SoCs like the 8240 and 5200, and now looking at the t104x).

I have had to sign an NDA with them when the part I was designing in was still too new to go public, with ever-changing errata sheets etc., but I have always had all the info it took to need just the silicon from them - no toolchains etc. by whoever, just mine being adapted over the years. I once used a TI DSP (54xx) and it was the same, I had all the info it took.

So if it all now goes the messy modern way - buy a "raspberry" and play with some software on top of what they control, and trust they will do what they are telling you they do - this will simply be a killer for me, and for anyone who makes computers and not toys playing on top of someone else's computers.
> > It's akin to reading a datasheet that is full of "typ" parameters but > no "min" or "max" numbers -- as if they'll only give you a general > idea of how the device will perform if used in a particular way > (which ALSO is undocumented!)
Not really; I have not seen a datasheet from Freescale or ADI or On or TI which specifies just typical values. I have seen "TBD" entries all right, but over time these get more "D".

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
On 10/2/2016 5:08 AM, Tom Gardner wrote:
> On 02/10/16 11:50, Don Y wrote: >> As my projects are distributed, in nature, I'm facing the opposite problem: >> processors are becoming *too* capable -- it's difficult to come up with a use >> for all those capabilities! Yet, "bottom feeding" leaves me vulnerable to >> component obsolescence, etc. (as single MCU designs will tend to push the >> sweet spot UP the curve instead of down -- esp with the Linux-weenies >> who think everything can be solved just by starting with a bloated OS!) > > There's an interesting alternative available for some dual-core > processors: run an RTOS on one core and, say, *a* linux on the > other. Xilinx touts that for their Zynq devices, which also > contain significant FPGA resources.
In my case, having multiple cores at a node is already wasteful (but increasingly unavoidable in higher-end processors). And, there's no "up-side" to running Linux on any of the nodes as it doesn't have the structure or capabilities that I am using -- it would be like opting to run any other desktop OS on a node (no disk, no display so what's the OS giving you?)
> Xilinx is also touting its HLL->fpga+code products, but I > haven't seriously looked at those. > > Some Zynq boards are down to £50, IIRC - but I don't know the > cost of the newly announced single-core cost-optimised variants.
SoC's are getting to the point where it is easier to just write code than throw hardware at a problem -- esp as it gives you more implementation (vendor) choices. E.g., I needed a mechanism to protect every (network) port in my fabric (as the fabric is effectively the equivalent of the "backplane" in a single-node system) -- at the protocol layer AND the hardware layer (think: adversary shorting a tesla coil to your exposed RJ45's). Along with adding support for PoE, PTP, etc. Design a custom switch? There are bits of silicon out there that would fit the task (at Gb speeds). Or, put a SoC with dual MII's "in-line" with each port and treat them as "smart fuses" (in the hardware and protocol senses!)
On 02/10/16 07:34, Dimiter_Popoff wrote:
> On 01.10.2016 г. 20:39, Don Y wrote: >> Hi Dimiter, >> >> On 9/30/2016 3:14 AM, Dimiter_Popoff wrote: >>> http://arstechnica.com/gadgets/2016/09/wsj-qualcomm-could-spend-over-30-billion-to-acquire-nxp-semiconductor/ >>> >>> >>> >>> Are we all supposed to finally shut everything down, use tablets and >>> stay still while spoon fed - if entitled to the latter, that is. >> >> <frown> I suggest you consider preparing for the possibility of doing >> an end-of-life buy on the PPC's! >> >> [A downside of tying your implementation too tightly to that family] > > Hi Don, > while I could do that with the 5200b the "family" thing is no help > whatsoever. I know of no similar part being sourced by more than one > maker so when they kill it that is it. It is still different, I can > still buy parts Motorola has released 25 years ago - and this will > come to an abrupt end should qualcomm buy nxp/freescale. >
ST have some PPC microcontrollers that are (AFAIK) identical to ones from Freescale. There may be others - Atmel make copies of some old Freescale devices (albeit at extreme prices). <http://www.st.com/content/st_com/en/products/automotive-microcontrollers/spc5-32-bit-automotive-mcus.html?querycriteria=productId=SC963>
> Then I have plans for the t1042 (t1040) - how do you replace that? > There just are no processors of that complexity & power on the market
I believe there are similar devices with many cores and lots of Ethernet ports from other manufacturers, but often with MIPS cores - and often not available to people buying in small quantities.

However, while I think Qualcomm buying NXP/Freescale would be a terrible idea, I can't imagine that it will lead to the immediate destruction of the key product lines of Freescale. It would not make economic sense - why would Qualcomm buy NXP/Freescale if it did not want the existing products and customers? And the big Freescale customers are going to disappear as fast as they are able if Qualcomm stops selling these PPC devices - the longevity of the parts is one of the main reasons those customers bought them in the first place.
> from anyone else, thus qualcomming them means an end to the processor > market as we know it. Unless you are a politburo member so you are > entitled to data on a part you are just out in the cold. > Then how long do you thing it will take for TI and the rest to get > qualcommed (by whoever). > I guess we'll all have to learn to herd cattle or something. Someone > somewhere has decided to put an end to uncontrolled computer > development - and I don't think we can do a damn thing about it. > > Dimiter > > ------------------------------------------------------ > Dimiter Popoff, TGI http://www.tgi-sci.com > ------------------------------------------------------ > http://www.flickr.com/photos/didi_tgi/ > >
On 10/02/16 18:52, Don Y wrote:

> E.g., I needed a mechanism to protect every (network) port in my > fabric (as the fabric is effectively the equivalent of the "backplane" > in a single-node system) -- at the protocol layer AND the hardware > layer (think: adversary shorting a tesla coil to your exposed RJ45's). > Along with adding support for PoE, PTP, etc. > > Design a custom switch? There are bits of silicon out there that would > fit the task (at Gb speeds). > > Or, put a SoC with dual MII's "in-line" with each port and treat them as > "smart fuses" (in the hardware and protocol senses!)
The only way to protect yourself from end-of-life problems is to abstract your software designs to a higher level and use just a subset of the available features - for example, timers, uarts, ports etc., wrapped into libraries which don't change, but perhaps get added to over the years.

For example, i/o ports are all similar, but you can make them table driven for initialisation, with structured register address / value pairs and similar for read and write. It may make things a bit slower, but that is rarely a problem with modern processors. It also requires more upfront effort, but it's well worth it to have fully tested libraries of functions. Of course, there is always the processor-specific stuff, but having some of the work done already can save a lot of development time and you already know the code works...

Regards,

Chris
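[Ed. note: the table-driven initialisation Chris describes can be sketched as follows. This is a minimal generic sketch -- the `reg_init_t` name and the registers are hypothetical, not any particular vendor's; only the table contents change when the part does, the walker never does.]

```c
#include <stdint.h>
#include <stddef.h>

/* One register address / value pair; a board-specific file supplies
 * an array of these, the portable library just walks it. */
typedef struct {
    volatile uint32_t *reg;   /* register address            */
    uint32_t           val;   /* value to write at init time */
} reg_init_t;

/* Walk the table, writing each value to its register.  Porting to a
 * new part means writing a new table, not new init code. */
static void port_init(const reg_init_t *table, size_t count)
{
    for (size_t i = 0; i < count; i++)
        *table[i].reg = table[i].val;
}
```

A similar pair-per-entry table works for the read/write paths Chris mentions; the indirection costs a load per access, which (as he says) rarely matters on modern parts.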
On 10/2/2016 2:14 PM, Chris wrote:
> On 10/02/16 18:52, Don Y wrote: > >> E.g., I needed a mechanism to protect every (network) port in my >> fabric (as the fabric is effectively the equivalent of the "backplane" >> in a single-node system) -- at the protocol layer AND the hardware >> layer (think: adversary shorting a tesla coil to your exposed RJ45's). >> Along with adding support for PoE, PTP, etc. >> >> Design a custom switch? There are bits of silicon out there that would >> fit the task (at Gb speeds). >> >> Or, put a SoC with dual MII's "in-line" with each port and treat them as >> "smart fuses" (in the hardware and protocol senses!) > > The only way to protect yourself from end of life problems is to > abstract your software designs to a higher level and use just a > subset of available features. For example, timers, uarts, ports > etc, into libraries which don't change, but perhaps get added to > over the years.
Of course; that's the leverage HLLs try to exploit. But the concept you've subtly mentioned but glossed over is that of abstracting *designs* -- not necessarily *implementations*! Reuse *designs*, even if you have to discard the *code*!

E.g., you don't have to rely on specific (hardware) mechanisms to implement many "design features". You don't need a paged MMU to implement virtual memory, or an FPU to implement floating point operations, etc.

As processors keep getting faster, it's silly NOT to be exploiting that extra capability to make the design process simpler and more robust. E.g., run-time checks on arguments instead of "well, it worked in the lab...", etc. Virtual machines instead of running on bare metal...
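[Ed. note: a minimal sketch of the "run-time checks on arguments" idea. The PWM device, `PWM_MAX` limit, and function name are hypothetical, invented for illustration -- the point is that the setter validates its argument at run time instead of trusting that the caller "worked in the lab".]

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical duty-cycle limit for an imagined PWM peripheral. */
#define PWM_MAX 1000u

/* Reject out-of-range requests loudly, at run time, rather than
 * silently programming the hardware with garbage. */
static bool pwm_set_duty(uint32_t *pwm_reg, uint32_t duty)
{
    if (duty > PWM_MAX)
        return false;     /* caller finds out immediately */
    *pwm_reg = duty;
    return true;
}
```

On a 1MHz 6809 the compare-and-branch per call might have been argued over; on a modern part it is noise, which is exactly the "extra capability" being spent on robustness.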
> For example, i/o ports are all similar, but you can make them > table driven for initialisation, structure register address / value > pairs and similar for read and write. It may make things a bit > slower, but rarely a problem with modern processors. Also requires > more upfront effort, but well worth it to have fully tested libraries > of functions. Of course, there is always the processor specific stuff, > but having some of the work done already can save a lot of development > time and you already know the code works...
s/know the code works/know the DESIGN works/
On Sun, 02 Oct 2016 00:55:25 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

>There's never a "guarantee of supply", regardless of the component >involved!
True ... but certain companies try harder. If you really need parts for a Hollerith Type 1 Tabulator [circa 1905], you can *still* get them from IBM.

For an obscenely outrageous price, of course. <grin>

George
Hi George!

On 10/2/2016 3:04 PM, George Neuner wrote:
> On Sun, 02 Oct 2016 00:55:25 -0700, Don Y > <blockedofcourse@foo.invalid> wrote: > >> There's never a "guarantee of supply", regardless of the component >> involved! > > True ... but certain companies try harder. If you really need parts > for a Hollerith Type 1 Tabulator [circa ~1905], you can *still* get > them from IBM. > > For an obscenely outrageous price, of course. > <grin>
Yes -- which effectively makes them unavailable! :<

IIRC, you could still buy 6502's and 8085's -- no doubt cuz they're used in some sort of munitions...

Note that the "availability" issue also applies to "ethereal" components. I.e., if you happen to use version X of software product Y, there is no guarantee that the vendor will sell you a *license* to use it (even though there is no *media* being transferred!) when you "need" it!

[Annoying because you KNOW there isn't a "manufacturing" issue]
On 10/2/2016 7:00 AM, Dimiter_Popoff wrote:
>>> architecture is not prohibitive for me, I did make vpa some 16 years >>> ago to move from 68k (cpu32) to PPC and now the vpa (virtual processor >>> assembly) compiler (well I guess it is a compiler and not an >>> assembler in spite of the language name) can be prepared for another >>> architecture more or less in a straight forward manner. >>> BUT, where is going this _documented_ part to come from? Right now >>> there is none on the market - and it would be insane to hope this >>> will improve. >> >> IMO, *none* of the products on the market are truly "well documented"; >> despite 1000+ page "datasheets". Nowhere is this more evident than the >> ARM world where most of the documentation you encounter is a regurgitation >> of the documentation from ARM Ltd re: their IP "components". Very little >> from the foundries who've globbed these "macrocells" together in a variety >> of (poorly documented) ways. > > The Motorola - Freescale - NXP processors have been always well enough > documented for me. Well enough so I have never used a single bit of > software except mine to put one into use last 25 years or so (and I > have put a few, from small MCU-s like the hc11 to SOC-s like the > 8240 and 5200 and now looking at the t104x).
I typically want to push the hardware in ways that would make the vendor look at me, puzzled, and ask, "Why would you want to do THAT?" Then, after hearing my intended "exploit" explained, he'd find himself struggling to find an alternative as clever or inexpensive as the approach I was HOPING "should work". (E.g., how soon after /RESET is released can I signal an IRQ? How can I *guarantee* that the first instruction executed will be that of the ISR and NOT the "reset vector"? etc.)

NatSemi was, perhaps, the most forthcoming with errata and fine details (NS32K) but, unfortunately, had a losing product. (Sad, as it really made the x86 and 68K look like pigs.)
> I have had to sign an NDA with them when the part I have > been designing in was still too new to go public with ever changing > errata sheets etc., but I have always had all the info it took to > just need the silicon from them, no toolchains etc. by whoever, just > mine being adapted over the years. I once used a TI DSP (54xx) and > it was the same, I had all thhe info it took.
Yes, I still have a "datasheet" (book!) for a Motogorilla processor that never made it to production. The terms of the NDA were that I must *return* the (signed and numbered) copy to them and NOT "destroy it" (i.e., they wanted to be sure folks didn't CLAIM they had destroyed their copies). But the guy and the group involved disappeared along with the project, so I'm stuck with a large stack of bound pages of which I can't dispose :-/

[<shrug> No big deal. It is interesting to review each time I stumble across it in my dead-tree archive -- esp in light of how technology evolved in the intervening years...]
> So if all goes now the messy modern way - buy a "raspberry" and > play with some software on top of what they control and trust they > will do what they are telling you they do - this will be simply a > iller for me, and to anyone who makes computers and not toys playing > on top of someone else's computers.
But that's the way product development is headed! Which is really amusing cuz virtually all of those products need some "custom" daughter card -- so you're still in the PCB business. You've just eliminated the effort to lay out the CPU itself (big deal!)

Developers increasingly want to be able to start writing application code on day one -- long before they even know what they want the product to *do*!

I find the "demo boards" (even those that are intended for production) only have value as platforms to get a coarse feel for performance; it's easier to throw together little code snippets and MEASURE their actual performance (cache interactions, prefetch pipeline, etc.) than it is to hypothesize about it "on paper".
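[Ed. note: the "little code snippet" measurement idea can be sketched like this. It is a generic sketch using hosted `clock()`; on a real target you would read a free-running cycle counter instead, and the `workload` here is a stand-in for whatever snippet is under test.]

```c
#include <time.h>
#include <stdint.h>

/* Volatile sink keeps the compiler from optimising the snippet away. */
static volatile uint32_t sink;

/* Stand-in for the snippet whose cache/pipeline behaviour you want
 * to measure on the actual silicon. */
static void workload(void)
{
    uint32_t acc = 0;
    for (uint32_t i = 0; i < 100000u; i++)
        acc += i;
    sink = acc;
}

/* Time `reps` runs of a snippet; divide by reps for a per-run figure.
 * clock() resolution is coarse, so pick reps large enough to register. */
static double measure_seconds(void (*fn)(void), int reps)
{
    clock_t t0 = clock();
    for (int i = 0; i < reps; i++)
        fn();
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}
```

Running the same harness with different buffer strides or alignments is the quickest way to see the cache and prefetch effects that are nearly impossible to predict "on paper".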
>> It's akin to reading a datasheet that is full of "typ" parameters but >> no "min" or "max" numbers -- as if they'll only give you a general >> idea of how the device will perform if used in a particular way >> (which ALSO is undocumented!) > > Not really, I have not seen a datasheet from Freescale or ADI or > On or TI - which specify just typical values. I have seen "TBD" > entries all right, but over time these get more "D". >