On Wednesday, April 22, 2020 at 5:10:47 AM UTC-4, David Brown wrote:> On 21/04/2020 21:42, Rick C wrote: > > On Tuesday, April 21, 2020 at 11:25:37 AM UTC-4, David Brown wrote: > >> On 21/04/2020 15:15, Rick C wrote: > >>> On Tuesday, April 21, 2020 at 8:02:18 AM UTC-4, David Brown > >>> wrote: > >>>> On 21/04/2020 02:36, Rick C wrote: > >>>>> On Monday, April 20, 2020 at 9:58:09 AM UTC-4, David Brown > >>>>> wrote: > >>>>>> On 18/04/2020 21:38, Rick C wrote: > >>>>>>> On Saturday, April 18, 2020 at 9:06:57 AM UTC-4, David > >>>>>>> Brown wrote: > >>>>>>>> > >>>> > >>>>>> I need an MCU with 4 EtherCAT slave channels. There are > >>>>>> exactly 0 on the market. There are only two or three in > >>>>>> total - from all manufacturers together - with even /one/ > >>>>>> EtherCAT slave. > >>>>> > >>>>> Yes, because EtherCAT is not widely used at the moment. I > >>>>> had never heard of it. When I read about it I see some car > >>>>> makers are looking at adopting it. Once that happens there > >>>>> will be MCUs supporting the interface. Until then it is a > >>>>> niche market. Am I wrong? I don't see any indication there > >>>>> is much out there either in the supply or demand side. > >>>>> > >>>> > >>>> EtherCAT has been increasingly popular in industrial automation > >>>> (the world of Programmable Logic Controllers, Profibus, > >>>> Frequency Converters, etc.). > >>> > >>> You say "increasingly popular" but if it were being used in > >>> higher volumes MCUs with EtherCAT interfaces would be available. > >>> MCU makers aren't stupid and love to have any advantage over the > >>> competition they can find. > >>> > >>> So "popular" has to be something other than unit volume. > >> > >> I wrote "increasingly popular", because it is becoming > >> increasingly popular. 
That means both that more and more people > >> are using EtherCAT devices, more and more EtherCAT devices are > >> being installed, more and more EtherCAT devices are being > >> developed, and more and more EtherCAT MCUs, stand-alone peripherals, > >> and FPGA cores have become available in recent years. > >> > >> In the big picture of MCU sales, EtherCAT usage is tiny. /Really/ > >> tiny. Less tiny than five years ago, but still tiny. Making an > >> EtherCAT peripheral in an MCU is not an insignificant investment for > >> an MCU company - it would be a very big investment. They won't do > >> that until they foresee a sizeable market - far greater than the > >> automation market. Until then, it will be left to the few who are > >> heavily involved in this sort of thing, such as Infineon (Siemens > >> has always been a big player in the automation world). > > > > So your use of "more and more" is not relevant to the MCU market > > which is what I've been talking about. > > I am not sure how I could have been clearer. That is certainly less clear. > > I don't know anything about the automation market, so I have to > > assume it is not so large if the MCU makers are ignoring a peripheral > > that is used "more and more" in that market for some value of "more > > and more". > > Relatively speaking, it is not a big market - numbers are a lot smaller > than automotive or consumer markets. And it is quite a conservative > market, with people using the same devices for decades. (This also > means manufacturers have to commit to very long product lifetimes in > this branch.) We were discussing this in the context of an application for the XMOS device. Meanwhile it seems to have taken on a life of its own. Bottom line is that if it were much of a market at all there would be MCUs with EtherCAT built in. Since there is not a very significant market, external chips are used to add EtherCAT to MCUs. 
It seems XMOS doesn't have sufficient resources to develop an EtherCAT library to promote their devices.> I also did not say, or imply, that MCU makers are ignoring this > peripheral. I said they don't make many devices that support it - and I > said that the number of devices supporting EtherCAT has been increasing > in recent years. I am sure the big MCU makers are following EtherCAT > closely, and I am sure they have devices under development. When Ford, > or Toyota, or Volkswagen tells NXP and Texas Instruments that they are > interested in small microcontrollers with EtherCAT slave devices, the > MCU makers are /not/ going to say "EtherCAT? What's that?". They are > going to say "We've got some ideas under development. What combination > of cpu, memories and peripherals do you want? We'll put the bricks > together and do some samples". For all I know, some of these companies > already have devices for their big customers - these can be made years > before mere mortals get to hear about them.I have zero info on what R&D the various MCU makers are doing. I don't care much either. I'm discussing what is done currently.> > I know designing a CPU chip is costly, but the cost depends greatly > > on the process used. The CPUs in a cell phone cost millions just for > > the mask set. CPUs on the 150 nm node with 256 kB of flash, not so > > much. What level of CPU is married to a EtherCAT interface in the > > designs you see? I was thinking a CM4 would be appropriate. > > > > A Cortex-M4 would be fine for simpler EtherCAT slaves. It's possible to > use them with even smaller devices (or no microcontroller at all - > EtherCAT slave peripherals usually support a "remote digital I/O" mode). > But based on the size and speed needed for a EtherCAT module it would > be silly /not/ to have something like an M4. (We are using an M7 chip > with them.) 
> > > > >>>> It's the stuff that runs factories, and programmed and set up > >>>> by automation engineers that are a kind of cross between > >>>> electricians and software developers. Characteristics of > >>>> electronics in this field are that they are often quite > >>>> expensive, but designed to fit together and "just work" even > >>>> when made by different companies. Most of the stuff is made by > >>>> relatively few large companies, rather than small companies. > >>>> Implementing many of the protocols involved are quite horrible > >>>> - badly specified (with large fees to be paid before you can > >>>> even see the documents), overly complex, and typically require > >>>> complicated XML-based "descriptors" that make USB descriptors > >>>> look simple. But while that stuff makes them unpleasant to > >>>> implement, it makes them very easy to use for the people > >>>> actually making the automation setups. > >>>> > >>>> EtherCAT is also quite complicated, but a lot of it is handled > >>>> by dedicated slave controller chips and software stacks that > >>>> are available. > >>> > >>> So what sort of price premium are these peripheral chips adding > >>> to the BoM? > >>> > >> > >> I don't deal with prices at that level, but Digikey puts them at > >> about $10. > > > > That's pretty significant compared to a $5 XMOS or a $3 MCU chip. > > > > Yes. But development costs, development time, development resources are > all important too. BOM prices are rarely irrelevant, but not always the > most important factor. Also, those chips do a good deal more than we > can get from a tiny XMOS - we'd need a much bigger XMOS and external > PHY's. Maybe XMOS with EtherCAT modules would be a BOM cost win, maybe not.Again, this indicates the market for these designs is not very large. It doesn't take a lot of volume, compared to what it takes for MCU makers to address a need, to amortize the development costs of a board design. 
If your amortized development time is a significant part of your unit costs, you are not on the MCU designer's radar for deciding what interfaces to include in devices.> >> No, I have /not/ been suggesting EtherCAT would be a killer app > >> for XMOS. You seem to have combined various posts, adding 2 plus 3 > >> to get 17. > > > > Ok, whatever. I asked a question. > > You did - but that question showed that you badly misunderstood other > things I wrote. > > Perhaps the quantity of posts here, and their lengths, has simply got > out of hand. It becomes impossible to track everything that is said. I > know that I have to snip and skimp on posts.I think you are addressing things I haven't been talking about. I'm just comparing XMOS to other devices for use in the different applications. EtherCAT came up and it seems to still be a relatively niche market for MCU makers. Actually, the whole XMOS thing was thread drift from the topic of soft CPU designs in FPGAs. -- Rick C. --+-- Get 1,000 miles of free Supercharging --+-- Tesla referral code - https://ts.la/richard11209
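Rick's amortization point above can be put into rough numbers. A back-of-the-envelope sketch - every figure here is hypothetical, chosen only to show the shape of the trade-off between a ~$10 off-the-shelf EtherCAT slave chip and a cheaper device plus a large in-house protocol effort:

```python
# Hedged illustration: compare effective per-unit cost of two ways to add
# EtherCAT to a product. All dollar figures and volumes are made up.

def unit_cost(nre, volume, bom_per_unit):
    """Amortized development (NRE) cost per unit, plus BOM cost."""
    return nre / volume + bom_per_unit

# Option A: $3 MCU + $10 dedicated EtherCAT slave controller; small NRE
# because the hard part is bought in.
option_a = unit_cost(nre=20_000, volume=5_000, bom_per_unit=13.0)

# Option B: $5 XMOS-class device with a hypothetical in-house EtherCAT
# implementation - much larger NRE for the protocol work.
option_b = unit_cost(nre=250_000, volume=5_000, bom_per_unit=5.0)

print(f"option A: ${option_a:.2f}/unit")  # $17.00/unit
print(f"option B: ${option_b:.2f}/unit")  # $55.00/unit
```

At automation-market volumes the $10 BOM premium is swamped by the development cost; with these made-up numbers the break-even is around 29,000 units, which is roughly the volume argument being made in the thread.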
Custom CPU Designs
Started by ●April 16, 2020
Reply by ●April 22, 2020
Reply by ●April 22, 2020
On Wed, 22 Apr 2020 13:19:45 +0000 (UTC), Grant Edwards <invalid@invalid.invalid> wrote:

> On 2020-04-22, upsidedown@downunder.com <upsidedown@downunder.com> wrote:
>
>>>> What is the point of using multicore processors, if a single core
>>>> can perform the basic EtherCAT node functionality?
>>>
>>> What if you also want to run a web server and some other heavy-duty,
>>> encrypted, protocols under Linux in your EtherCAT slave?
>>
>> Would one really want to have a large number of such stations all
>> around a plant, each exchanging only a few bits?
>
> What makes you think the multi-core EtherCAT slave is exchanging only
> a few bits? The ones with multi-core processors are typically I/O
> hubs that can handle many hundreds of bits. You asked what's the
> point of using a multi-core processor in an EtherCAT slave. I told
> you the reason why people design them that way: because they need the
> CPU power to handle other protocols simultaneously or do things like
> image processing.

My main point was that if you are going to transfer a significant number of bytes/node (say at least 10 bytes), why use EtherCAT in the first place? You could then use any standard garden-variety RS-422/485 or 10/100/1000BaseT hardware with some standard protocol, even Modbus RTU/UDP/TCP.

If the node complexity justifies using xCore, then most likely it is going to transfer a lot of data to the outside world. The EtherCAT+xCore combination doesn't make much sense, but EtherCAT alone or xCore alone can be quite competitive in their own niches.

>> Use some hierarchical system, but the expected advantage of EtherCAT
>> is lost.
>
> Generally, the multi-core EtherCAT slave _is_ part of a hierarchical
> system. For example the EtherCAT slave might be an IO-Link master
> with 8 attached IO-Link sensors, each of which can handle 32 bytes of
> input and 32 bytes of output.
>
> You seem to be arguing against using a multi-core processor in an
> EtherCAT slave that does nothing other than handle a few bits of DIO.
>
> Nobody does that. Nobody is proposing that.

>>>> In addition, if there are dozens of series connected twisted pair
>>>> connectors, what is the electromechanical reliability of each
>>>> connection? A single fault will prevent the Ethernet frame
>>>> circulating back to the master.
>>>
>>> If single point of failure is an issue, then you can connect the
>>> EtherCAT devices in a loop to get some redundancy.
>>
>> EtherCAT has the same reliability issues as 10Base2 and 10Base5
>> coaxial Ethernets with a large number of connections to a single
>> bus.
>
> You were worried the entire network was susceptible to single-point
> connector failure. With a ring, it's not; you'll need a two-point
> failure to lose comms.
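The line-versus-ring reliability point at the end of this exchange can be sketched numerically. A rough model, assuming n independent connections that each fail with probability p, and ignoring node failures and common-mode faults:

```python
# Hedged sketch: probability of losing comms on an EtherCAT segment with n
# connections, each failing with probability p. A daisy chain dies on any
# single failure; a ring survives any one failure and dies on two or more.

def p_line_down(n, p):
    """Daisy chain: comms lost if ANY of the n connections fails."""
    return 1 - (1 - p) ** n

def p_ring_down(n, p):
    """Ring: comms lost only when two or more connections fail."""
    return 1 - (1 - p) ** n - n * p * (1 - p) ** (n - 1)

n, p = 50, 0.001  # 50 connections, 0.1% failure probability each
print(f"daisy chain down: {p_line_down(n, p):.3%}")  # ~4.88%
print(f"ring down:        {p_ring_down(n, p):.3%}")  # ~0.12%
```

In this toy model the two-point-failure requirement of the ring buys roughly a 40x reduction in outage probability, which is the point being made about loop redundancy.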
Reply by ●April 22, 2020
On 4/21/2020 19:20, David Brown wrote:> On 21/04/2020 02:23, Clifford Heath wrote: >> On 21/4/20 10:01 am, Rick C wrote: >>> On Monday, April 20, 2020 at 8:59:36 AM UTC-4, Clifford Heath wrote: >>>> On 20/4/20 4:52 am, Przemek Klosowski wrote: >>>>> On Thu, 16 Apr 2020 17:13:41 -0700, Paul Rubin wrote: >>>>> >>>>>> Grant Edwards <invalid@invalid.invalid> writes: >>>>>>> Definitely. The M-class parts are so cheap, there's not much >>>>>>> point in >>>>>>> thinking about doing it in an FPGA. >>>>>> >>>>>> Well I think the idea is already you have other stuff in the FPGA, so >>>>>> you save a package and some communications by dropping in a softcore >>>>>> rather than using an external MCU. I'm surprised that only high end >>>>>> FPGA's currently have hard MCU's already there. Just like they >>>>>> have DSP >>>>>> blocks, ram blocks, SERDES, etc., they might as well put in some CPU >>>>>> blocks. >>>>> >>>>> Maybe Risc-V will catch on. The design is FOSS, as is the toolchain >>>>> (GDB >>>>> and LLVM have Risc-V backends already for a while), and the simple >>>>> versions take very few gates. >>>>> https://github.com/SpinalHDL/VexRiscv >>>>> https://hackaday.com/2019/11/19/emulating-risc-v-on-an-fpga/ >>>>> >>>> >>>> There's a lot of push in the direction of the Power architecture. What >>>> does that look like in FPGA? >>>> >>>> CH >>> >>> Do you mean the Power PC? That was the hard IP used in the very old >>> and possibly obsolete Virtex II Pro devices. >>> >>> Why do they have to use such goofy names like "Pro" or "Polarfire". >>> Do they really think that sells even one frigging chip? I would be >>> so much more inclined to dig through their information if they just >>> had decent names that give you some idea of the technical details >>> including the heritage. >> >> Hah! I hear you. :) >> >> Yes, I mean PowerPC, specifically OpenPoWER: >> <https://en.wikipedia.org/wiki/OpenPOWER_Foundation> >> >> A friend is a fan. Me, I haven't read much about it. 
>> > > I haven't seen much of PowerPC in recent times. For a while they were > popular for high-end microcontrollers, especially in the automotive > industry, but that seems to be fading - since NXP took over Freescale, I > think the PPC lines are dying out in favour of more ARM lines. > > Power, which is a different beast (with shared ancestry and a certain > degree of compatibility), is alive and reasonably well - it's a popular > choice for really big iron. I've seen benchmarks showing Power9 chips > giving more bitcoins per MW than dedicated miner ASICs, if you are into > that sort of thing. > > I can't see anyone embedding a Power core in an FPGA... > > > Actually, what used to be called "PowerPC" is today called "Power Architecture". Some parts from Freescale have been called either over time. NXP support and make the Power Architecture line (and yes, they do call it that); their top of the line parts are still these (QORIQ - that name did not go away as it might have to....). There are differences from core to core of course, but this has always been the case for the last 25+ years. The initial "POWER" by IBM from the '80s was different - not by that much, but more than they differ nowadays. It can be transparent for user-level code which core it runs on; well, 32 vs. 64 bits can of course be more challenging. Dimiter ====================================================== Dimiter Popoff, TGI http://www.tgi-sci.com ====================================================== http://www.flickr.com/photos/didi_tgi/
Reply by ●April 22, 2020
On Wednesday, April 22, 2020 at 5:28:26 AM UTC-4, David Brown wrote:> > > > If you think about it a bit you will see the only real way to have > > "redundancy" in FPGAs is to excise entire sections of the chip for a > > single failure. So a 50 kLUT chip will become a 25 kLUT chip if it > > has a failure(s) in one half. That's all I've heard of. Trying to > > replace a small section of a chip to retain the full functionality > > would result in uneven delays and that's a real problem in FPGAs. > > > > Yes, that may well be the way to do it. (I'd guess you could split up > sections a bit more than that, especially if you are willing to relax > the timing specifications for routing a little.) But even with the > suggested half-disabling, it could be worth it if your yields are low. > Suppose that 30% of your 50 kLUT chips have a fault - that means 70% can > be sold. 70% of the remaining ones - 20% of the die - can then be sold > as 25 kLUT devices. These are "free". I'm trying to explain that they don't test the chips to "bin" them and sell them according to their capacity. They simply design a die to have X capacity but also sell it as Y capacity. The die are tested according to how they want to sell them, and if they don't pass they are trashed rather than retested for the other size. Apparently they don't find it worthwhile to test and retest. I think on most devices if you have a failure rate high enough to make binning worthwhile you have process problems that need to be addressed.
And I know that > Altera certainly used to have an option to buy pre-programmed devices to > fit your design - these were cheaper because they could use dies that > had faults which did not affect your particular design. I was told they were cheaper because the testing time is shorter, and test time is a significant portion of the cost of making and verifying the chip. Just considering the routing, imagine how many times they have to reconfigure the device to exercise every routing segment. The largest chips in any FPGA line may have significant failure rates, but for the bread-and-butter products they don't have a low enough yield to worry about how many die are rejected due to testing failures. The real reason they use the same die for more than one product is because the cost of the mask sets is so high. They make more money selling a die at half capacity rather than making two different designs. -- Rick C. --+-+ Get 1,000 miles of free Supercharging --+-+ Tesla referral code - https://ts.la/richard11209
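David's yield arithmetic above can be made concrete in a few lines. A hypothetical model - the assumption that 70% of faulty dice have all their faults confined to one half is made up to reproduce his figures, not taken from any foundry data:

```python
# Hedged sketch of the half-die binning arithmetic discussed above.

def binning_yield(clean_fraction, half_recoverable_fraction):
    """Split dice into full-capacity parts, half-capacity parts, and scrap.

    clean_fraction: dice with no faults (sellable at full capacity).
    half_recoverable_fraction: of the faulty dice, the share whose faults
    all sit in one half, leaving the other half usable.
    """
    full = clean_fraction
    half = (1 - clean_fraction) * half_recoverable_fraction
    scrap = 1 - full - half
    return full, half, scrap

# David's example: 30% of 50 kLUT dice have a fault, so 70% sell as full
# parts; assume 70% of the faulty remainder can sell as 25 kLUT parts.
full, half, scrap = binning_yield(0.70, 0.70)
print(f"full: {full:.0%}, half: {half:.0%}, scrap: {scrap:.0%}")
# full: 70%, half: 21%, scrap: 9%
```

The 21% here is the "20% of the die" figure in the quoted post, rounded. Whether FPGA vendors actually bin this way, or simply test a die against a single target capacity, is exactly the point disputed in the reply.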
Reply by ●April 22, 2020
On 2020-04-22, upsidedown@downunder.com <upsidedown@downunder.com> wrote:

> My main point was that if you are going to transfer a significant
> number of bytes/node (say at least 10 bytes), why use EtherCAT in
> the first place?

Because that's what the rest of the plant is using. Not all EtherCAT nodes are identical. Many may only be exchanging a few bits. Some need to do more. The nodes that need to do more may need more processing power.

> You could then use any standard garden variety RS-422/485 or
> 10/100/1000BaseT hardware with some standard protocol, even Modbus
> RTU/UDP/TCP.

That requires a whole new cabling infrastructure.

-- 
Grant
Reply by ●April 22, 2020
On 2020-04-22, Dimiter_Popoff <dp@tgi-sci.com> wrote:

> NXP support and make the power architecture line (and yes, they do call
> it that), their top of the line parts are still these (QORIQ, that name
> did not go away as it might have to....).

QoriQ?

Wow. That name is stunningly, amazingly bad. Do silicon vendors send people to some specialized school where they learn to come up with the most awful product line names possible?

-- 
Grant
Reply by ●April 22, 2020
On 4/22/2020 20:06, Grant Edwards wrote:

> On 2020-04-22, Dimiter_Popoff <dp@tgi-sci.com> wrote:
>
>> NXP support and make the power architecture line (and yes, they do call
>> it that), their top of the line parts are still these (QORIQ, that name
>> did not go away as it might have to....).
>
> QoriQ?
>
> Wow. That name is stunningly, amazingly bad. Do silicon vendors send
> people to some specialized school where they learn to come up with the
> most awful product line names possible?
>
> -- 
> Grant

They had that "digital DNA" before, not much better :-). Someone in their marketing may think they are in the business of selling soap or chocolate... Of course it does not matter much - how many of us would pay attention to the marketing name when choosing a platform? And their products are really good. OTOH I am not sure to what extent the likes of us here have much to say in big corporations when it comes to platform selection, so things like that may have cost them... or made them profit; I would not bet much on which of the two.

Dimiter
Reply by ●April 22, 2020
On 2020-04-22, Dimiter_Popoff <dp@tgi-sci.com> wrote:

> On 4/22/2020 20:06, Grant Edwards wrote:
>
>> QoriQ?
>>
>> Wow. That name is stunningly, amazingly bad. [...]
>
> They had that "digital DNA" before, not much better :-).

I remember being at an Embedded Systems Conference during the "Digital DNA" campaign and seeing that phrase on T-shirts, tote-bags, ID badge lanyards, etc. I even sat through a "Digital DNA" video presentation at one point during the conference. Neither I nor anybody I talked to had the faintest idea what "Digital DNA" was supposed to mean or whether it referred to anything concrete or not.

-- 
Grant
Reply by ●April 22, 2020
On 22/04/2020 17:09, Rick C wrote:

> Actually, the whole XMOS thing was thread drift from the topic of soft
> CPU designs in FPGAs.

There has been a great deal of drift in this thread! I don't know if you ever got much of an answer to your original question, but I think some of the branches have been interesting.
Reply by ●April 22, 2020
On 22/04/2020 17:24, Rick C wrote:> On Wednesday, April 22, 2020 at 5:28:26 AM UTC-4, David Brown wrote: >>> >>> If you think about it a bit you will see the only real way to >>> have "redundancy" in FPGAs is to excise entire sections of the >>> chip for a single failure. So a 50 kLUT chip will become a 25 >>> kLUT chip if it has a failure(s) in one half. That's all I've >>> heard of. Trying to replace a small section of a chip to retain >>> the full functionality would result in uneven delays and that's a >>> real problem in FPGAs. >>> >> >> Yes, that may well be the way to do it. (I'd guess you could split >> up sections a bit more than that, especially if you are willing to >> relax the timing specifications for routine a little.) But even >> with the suggested half-disabling, it could be worth it if your >> yields are low. Suppose that 30% of your 50 kLUT chip have a fault >> - that means 70% can be sold. 70% of the remaining ones - 20% of >> the die - can then be sold as 25 kLUT devices. These are "free". > > I'm trying to explain they don't test the chips to "bin" them and > sell them according to their capacity. They simply design a die to > have X capacity but also sold as Y capacity. The die are tested to > how they want to sell them and if they don't pass they are trashed > for either size testing. Apparently they don't find it worthwhile to > test and retest. >I know that this is done with some devices, certainly. For one of Atmel's AVR devices, the sole difference between the 64K version and the 32K version was the text printed on the package. (Long ago we used to use a microcontroller that had 8K of OTP memory. Then we discovered that the 32K version was significantly cheaper. This was because the 8K version was made by producing a 32K version and then running an extra step to program 24K of the memory to zeros.) I am not privy to the testing or binning procedures for FPGAs. Your suggestions sound perfectly reasonable to me. 
The suggestion that they use binning for some parts is also perfectly reasonable, and I know it is done on some other big chips. But I have no idea which is used for FPGAs.> I think on most devices if you have a failure rate high enough to > make binning worthwhile you have process problems that need to be > addressed. Some devices /do/ have high failure rates - particularly in early stages of development or for low volume parts.> > >> All big IC designs are made with a view to minimising the waste due >> to production faults, because faults are not uncommon with big >> chips that push the limits for production. Multi-core CPUs are >> regularly made with more cores, and sold as fewer core parts where >> faulty cores are disabled. The same applies to memory of all >> types. And I know that Altera certainly used to have an option to >> buy pre-programmed devices to fit your design - these were cheaper >> because they could use dies that had faults which did not affect >> your particular design. > > I was told they were cheaper because the testing time is shorter and > test time is a significant portion of the cost of making and > verifying the chip. Just considering the routing, imagine how many > times they have to reconfigure the device to exercise every routing > segment. That also sounds reasonable. It is not the explanation I heard, but I have no way to judge which system might be used. (Or maybe it's a combination, or maybe it has changed, or varies for different parts or different manufacturers.) There is little point in guessing.> > The largest chips in any FPGA line may have significant failure > rates, but for the bread-and-butter products they don't have a low > enough yield to worry about how many die are rejected due to testing > failures. > > The real reason they use the same die for more than one product is > because the cost of the mask sets is so high. They make more money > selling a die at half capacity rather than making two different > designs. >