Reply by Theo April 23, 2020
Rick C <gnuarm.deletethisbit@gmail.com> wrote:
> Yeah, but why exactly? Is the concern that there won't be future
> updates/upgrades to the architecture? Tool support will wane? With the
> various parties involved dropping like flies, it seems like it would be a
> choice where no one would have an interest in bothering to collect
> royalties.
1. If the IP situation is unclear, companies may hold back in case someone
litigious ends up as the owner of the IP (see SCO)

2. It appears there is only one remaining Linux maintainer, LLVM has no
maintenance, FreeBSD is likely to drop MIPS in 2022. Most end users are
unlikely to put in the effort to keep the toolchains current.

Theo
Reply by David Brown April 23, 2020
On 22/04/2020 20:45, Rick C wrote:
> On Wednesday, April 22, 2020 at 2:18:27 PM UTC-4, David Brown wrote:
>> On 22/04/2020 17:24, Rick C wrote:
>>> On Wednesday, April 22, 2020 at 5:28:26 AM UTC-4, David Brown
>>> wrote:
>>>>>
>>>>> If you think about it a bit you will see the only real way
>>>>> to have "redundancy" in FPGAs is to excise entire sections of
>>>>> the chip for a single failure. So a 50 kLUT chip will become
>>>>> a 25 kLUT chip if it has a failure(s) in one half. That's
>>>>> all I've heard of. Trying to replace a small section of a
>>>>> chip to retain the full functionality would result in uneven
>>>>> delays and that's a real problem in FPGAs.
>>>>>
>>>>
>>>> Yes, that may well be the way to do it. (I'd guess you could
>>>> split up sections a bit more than that, especially if you are
>>>> willing to relax the timing specifications for routing a
>>>> little.) But even with the suggested half-disabling, it could
>>>> be worth it if your yields are low. Suppose that 30% of your 50
>>>> kLUT chips have a fault - that means 70% can be sold. 70% of
>>>> the remaining ones - 20% of the die - can then be sold as 25
>>>> kLUT devices. These are "free".
>>>
>>> I'm trying to explain they don't test the chips to "bin" them
>>> and sell them according to their capacity. They simply design a
>>> die to have X capacity but also sold as Y capacity. The die are
>>> tested to how they want to sell them and if they don't pass they
>>> are trashed for either size testing. Apparently they don't find
>>> it worthwhile to test and retest.
>>>
>>
>> I know that this is done with some devices, certainly. For one of
>> Atmel's AVR devices, the sole difference between the 64K version
>> and the 32K version was the text printed on the package.
>
> Did they not disable the extra memory in some way? Even if most of
> the chips work over the full 64K the possibility of a problem from
> the chip not being fully tested might be enough of a deterrent that
> people won't buy the 32K part to use as 64K.
>
In this particular case, no, the memory was not disabled - but it (so I heard) was fully tested. Basically, the 64K parts were more popular (due to a few big customers), and it was cheaper for the company to make and test more of them than to have a different setup for the 32K parts. It was perhaps only a temporary measure - one hears stories, but never full details.
>
>> (Long ago we used to use a microcontroller that had 8K of OTP
>> memory. Then we discovered that the 32K version was significantly
>> cheaper. This was because the 8K version was made by producing a
>> 32K version and then running an extra step to program 24K of the
>> memory to zeros.)
>>
>> I am not privy to the testing or binning procedures for FPGAs.
>> Your suggestions sound perfectly reasonable to me. The suggestion
>> that they use binning for some parts is also perfectly
>> reasonable, and I know it is done on some other big chips. But I
>> have no idea which is used for FPGAs.
>
> I'm just telling you what I was told by the FPGA company
> representatives who used to post in c.a.fpga years ago. One was
> particularly argumentative and the company got them to stop. Another
> was Peter Alfke who was an FPGA industry icon.
Fair enough.
>
>
>>> I think on most devices if you have a failure rate high enough
>>> to make binning worthwhile you have process problems that need to
>>> be addressed.
>>
>> Some devices /do/ have high failure rates - particularly in early
>> stages of development or for low volume parts.
>
> I don't know of any low volume FPGAs or MCUs other than perhaps very
> old end of life product. The cost of yield issues hugely impacts the
> cost of the final product because you not only pay for the bad dies,
> but all the testing time to show the bad dies are bad. That's
> probably why they don't retest for the "slower" or "smaller" bins,
> the chances of failing that test are probably much higher and so even
> more costly per good unit.
>
I believe that for some parts there are spot-checks on dies - they take a few of the parts and test them for speed, temperature, power, etc., and use that information for binning all parts on the wafer. But again, I have no idea if that applies to any particular part.
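To make that spot-check idea concrete, here is a minimal Python sketch of sample-based binning - purely illustrative, with invented speed grades, thresholds, and measurements, not a description of any vendor's actual test flow:

```python
# Illustrative only: bin a whole wafer from spot-checks on a few sample dies.
# Speed grades, thresholds and measurements are invented for the example.

def bin_wafer(sample_fmax_mhz):
    """Assign a speed grade to the wafer from measured Fmax of sampled dies.

    The whole wafer gets the grade that even the slowest sampled die meets,
    so every part shipped from it should satisfy the datasheet number.
    """
    grades = [(500, "-3 (fast)"), (450, "-2"), (400, "-1 (slow)")]
    worst = min(sample_fmax_mhz)          # be conservative: use the slowest sample
    for threshold, grade in grades:
        if worst >= threshold:
            return grade
    return "reject"                        # wafer doesn't meet the lowest grade

# Example: five dies spot-checked across a wafer
print(bin_wafer([512, 498, 505, 471, 489]))   # -> "-2"
```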
Reply by George Neuner April 23, 2020
On Wed, 22 Apr 2020 17:06:35 +0000 (UTC), Grant Edwards
<invalid@invalid.invalid> wrote:

>On 2020-04-22, Dimiter_Popoff <dp@tgi-sci.com> wrote:
>
>> NXP support and make the power architecture line (and yes, they do call
>> it that), their top of the line parts are still these (QORIQ, that name
>> did not go away as it might have to....).
>
>QoriQ?
>
>Wow. That name is stunningly, amazingly bad. Do silicon vendors send
>people to some specialized school where they learn to come up with the
>most awful product line names possible?
They get names from the same source as do pharmaceuticals.
Reply by Rick C April 22, 2020
On Wednesday, April 22, 2020 at 5:06:55 PM UTC-4, Theo wrote:
> Rick C <gnuarm.deletethisbit@gmail.com> wrote:
> > I would be worried this would be lost in the large interest in RISC-V
> > which seems to have a pretty good head of steam at this point. How many
> > open architectures do we need? Isn't MIPS open at this point?
>
> MIPS is... complicated:
> https://www.cnx-software.com/2020/04/22/is-mips-dead-lawsuit-bankruptcy-maintainers-leaving-and-more/
>
> I would not choose it for a new design at this point.
>
> Theo
Yeah, but why exactly? Is the concern that there won't be future
updates/upgrades to the architecture? Tool support will wane? With the
various parties involved dropping like flies, it seems like it would be a
choice where no one would have an interest in bothering to collect
royalties.

But then what do I know? This might set you up to where five years from now
you end up having to pay not only royalties, but penalties.

--

Rick C.
--+++ Get 1,000 miles of free Supercharging
--+++ Tesla referral code - https://ts.la/richard11209
Reply by Theo April 22, 2020
Rick C <gnuarm.deletethisbit@gmail.com> wrote:
> I would be worried this would be lost in the large interest in RISC-V
> which seems to have a pretty good head of steam at this point. How many
> open architectures do we need? Isn't MIPS open at this point?
MIPS is... complicated:
https://www.cnx-software.com/2020/04/22/is-mips-dead-lawsuit-bankruptcy-maintainers-leaving-and-more/

I would not choose it for a new design at this point.

Theo
Reply by Rick C April 22, 2020
On Wednesday, April 22, 2020 at 2:18:27 PM UTC-4, David Brown wrote:
> On 22/04/2020 17:24, Rick C wrote:
> > On Wednesday, April 22, 2020 at 5:28:26 AM UTC-4, David Brown wrote:
> >>>
> >>> If you think about it a bit you will see the only real way to
> >>> have "redundancy" in FPGAs is to excise entire sections of the
> >>> chip for a single failure. So a 50 kLUT chip will become a 25
> >>> kLUT chip if it has a failure(s) in one half. That's all I've
> >>> heard of. Trying to replace a small section of a chip to retain
> >>> the full functionality would result in uneven delays and that's a
> >>> real problem in FPGAs.
> >>>
> >>
> >> Yes, that may well be the way to do it. (I'd guess you could split
> >> up sections a bit more than that, especially if you are willing to
> >> relax the timing specifications for routing a little.) But even
> >> with the suggested half-disabling, it could be worth it if your
> >> yields are low. Suppose that 30% of your 50 kLUT chips have a fault
> >> - that means 70% can be sold. 70% of the remaining ones - 20% of
> >> the die - can then be sold as 25 kLUT devices. These are "free".
> >
> > I'm trying to explain they don't test the chips to "bin" them and
> > sell them according to their capacity. They simply design a die to
> > have X capacity but also sold as Y capacity. The die are tested to
> > how they want to sell them and if they don't pass they are trashed
> > for either size testing. Apparently they don't find it worthwhile to
> > test and retest.
> >
>
> I know that this is done with some devices, certainly. For one of
> Atmel's AVR devices, the sole difference between the 64K version and the
> 32K version was the text printed on the package.
Did they not disable the extra memory in some way? Even if most of the chips work over the full 64K the possibility of a problem from the chip not being fully tested might be enough of a deterrent that people won't buy the 32K part to use as 64K.
> (Long ago we used to use a microcontroller that had 8K of OTP memory.
> Then we discovered that the 32K version was significantly cheaper. This
> was because the 8K version was made by producing a 32K version and then
> running an extra step to program 24K of the memory to zeros.)
>
> I am not privy to the testing or binning procedures for FPGAs. Your
> suggestions sound perfectly reasonable to me. The suggestion that they
> use binning for some parts is also perfectly reasonable, and I know it
> is done on some other big chips. But I have no idea which is used for
> FPGAs.
I'm just telling you what I was told by the FPGA company representatives who used to post in c.a.fpga years ago. One was particularly argumentative and the company got them to stop. Another was Peter Alfke who was an FPGA industry icon.
> > I think on most devices if you have a failure rate high enough to
> > make binning worthwhile you have process problems that need to be
> > addressed.
>
> Some devices /do/ have high failure rates - particularly in early stages
> of development or for low volume parts.
I don't know of any low volume FPGAs or MCUs other than perhaps very old
end of life product. The cost of yield issues hugely impacts the cost of the
final product because you not only pay for the bad dies, but all the testing
time to show the bad dies are bad. That's probably why they don't retest for
the "slower" or "smaller" bins, the chances of failing that test are probably
much higher and so even more costly per good unit.

--

Rick C.
--++- Get 1,000 miles of free Supercharging
--++- Tesla referral code - https://ts.la/richard11209
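The point about test time can be put into a rough cost model. The following Python sketch is only an illustration with invented numbers (die cost, test time, tester cost rate, yields); it just shows how a second, longer test pass for a "smaller" bin can push up the cost per good unit:

```python
# Rough, illustrative cost-per-good-die model. All numbers are made up.

def cost_per_good_die(die_cost, test_seconds, test_cost_per_s, yield_fraction):
    """Cost of one sellable die when every die (good or bad) is tested.

    You pay for the silicon and the tester time of the failures too, so the
    whole spend is divided by the fraction of dies that pass.
    """
    per_die_spend = die_cost + test_seconds * test_cost_per_s
    return per_die_spend / yield_fraction

# Single test pass, 90% yield
print(round(cost_per_good_die(die_cost=10.0, test_seconds=30,
                              test_cost_per_s=0.05, yield_fraction=0.90), 2))

# Retesting the 10% that failed for a "smaller" bin: another test pass on a
# population that is much more likely to fail again (say only 30% recoverable),
# so the extra test cost is spread over far fewer good units.
print(round(cost_per_good_die(die_cost=0.0,  # silicon already paid for
                              test_seconds=30, test_cost_per_s=0.05,
                              yield_fraction=0.30), 2))
```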
Reply by David Brown April 22, 2020
On 22/04/2020 17:24, Rick C wrote:
> On Wednesday, April 22, 2020 at 5:28:26 AM UTC-4, David Brown wrote:
>>>
>>> If you think about it a bit you will see the only real way to
>>> have "redundancy" in FPGAs is to excise entire sections of the
>>> chip for a single failure. So a 50 kLUT chip will become a 25
>>> kLUT chip if it has a failure(s) in one half. That's all I've
>>> heard of. Trying to replace a small section of a chip to retain
>>> the full functionality would result in uneven delays and that's a
>>> real problem in FPGAs.
>>>
>>
>> Yes, that may well be the way to do it. (I'd guess you could split
>> up sections a bit more than that, especially if you are willing to
>> relax the timing specifications for routing a little.) But even
>> with the suggested half-disabling, it could be worth it if your
>> yields are low. Suppose that 30% of your 50 kLUT chips have a fault
>> - that means 70% can be sold. 70% of the remaining ones - 20% of
>> the die - can then be sold as 25 kLUT devices. These are "free".
>
> I'm trying to explain they don't test the chips to "bin" them and
> sell them according to their capacity. They simply design a die to
> have X capacity but also sold as Y capacity. The die are tested to
> how they want to sell them and if they don't pass they are trashed
> for either size testing. Apparently they don't find it worthwhile to
> test and retest.
>
I know that this is done with some devices, certainly. For one of Atmel's
AVR devices, the sole difference between the 64K version and the 32K version
was the text printed on the package.

(Long ago we used to use a microcontroller that had 8K of OTP memory. Then
we discovered that the 32K version was significantly cheaper. This was
because the 8K version was made by producing a 32K version and then running
an extra step to program 24K of the memory to zeros.)

I am not privy to the testing or binning procedures for FPGAs. Your
suggestions sound perfectly reasonable to me. The suggestion that they use
binning for some parts is also perfectly reasonable, and I know it is done on
some other big chips. But I have no idea which is used for FPGAs.
> I think on most devices if you have a failure rate high enough to > make binning worthwhile you have process problems that need to be > addressed.
Some devices /do/ have high failure rates - particularly in early stages of development or for low volume parts.
>
>
>> All big IC designs are made with a view to minimising the waste due
>> to production faults, because faults are not uncommon with big
>> chips that push the limits for production. Multi-core CPUs are
>> regularly made with more cores, and sold as fewer core parts where
>> faulty cores are disabled. The same applies to memory of all
>> types. And I know that Altera certainly used to have an option to
>> buy pre-programmed devices to fit your design - these were cheaper
>> because they could use dies that had faults which did not affect
>> your particular design.
>
> I was told they were cheaper because the testing time is shorter and
> test time is a significant portion of the cost of making and
> verifying the chip. Just considering the routing, imagine how many
> times they have to reconfigure the device to exercise every routing
> segment.
That also sounds reasonable. It is not the explanation I heard, but I have no way to judge which system might be used. (Or maybe it's a combination, or maybe it has changed, or varies for different parts or different manufacturers.) There is little point in guessing.
>
> The largest chips in any FPGA line may have significant failure
> rates, but for the bread and butter products they don't have a low
> enough yield to worry with how many die are rejected due to testing
> failures.
>
> The real reason they use the same die for more than one product is
> because the cost of the mask sets is so high. They make more money
> selling a die at half capacity rather than making two different
> designs.
>
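As a rough check on the 30% / 70% / 20% figures quoted earlier in this exchange: if 30% of dies have a fault, and one assumes (as the quoted post implicitly does) that around 70% of those faulty dies have all their faults confined to one half, then about 21% of all dies can still be sold as half-size parts. A minimal Python sketch of that arithmetic, with the fault rate and the salvageable fraction taken as stated assumptions rather than real yield data:

```python
# Worked version of the yield arithmetic discussed above.
# Both inputs are assumptions taken from the post, not real yield data.

fault_rate = 0.30        # fraction of 50 kLUT dies with at least one fault
salvageable = 0.70       # assumed fraction of faulty dies with faults in one half only

full_parts = 1.0 - fault_rate                  # sellable as full 50 kLUT devices
half_parts = fault_rate * salvageable          # sellable as 25 kLUT devices
scrap      = fault_rate * (1.0 - salvageable)  # unusable

print(f"full 50 kLUT parts: {full_parts:.0%}")   # 70%
print(f"half 25 kLUT parts: {half_parts:.0%}")   # 21% (the ~20% in the post)
print(f"scrap:              {scrap:.0%}")        # 9%
```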
Reply by David Brown April 22, 2020
On 22/04/2020 17:09, Rick C wrote:

> Actually, the whole XMOS thing was thread drift from the topic of soft CPU designs in FPGAs.
>
There has been a great deal of drift in this thread! I don't know if you ever got much of an answer to your original question, but I think some of the branches have been interesting.
Reply by Grant Edwards April 22, 2020
On 2020-04-22, Dimiter_Popoff <dp@tgi-sci.com> wrote:
> On 4/22/2020 20:06, Grant Edwards wrote:
>
>> QoriQ?
>>
>> Wow. That name is stunningly, amazingly bad. [...]
>
> They had that "digital DNA" before, not much better :-).
I remember being at an Embedded Systems Conference during the "Digital DNA"
campaign and seeing that phrase on T-shirts, tote-bags, ID badge lanyards,
etc. I even sat through a "Digital DNA" video presentation at one point
during the conference. Neither I nor anybody I talked to had the faintest
idea what "Digital DNA" was supposed to mean or whether it referred to
anything concrete or not.

--
Grant
Reply by Dimiter_Popoff April 22, 2020
On 4/22/2020 20:06, Grant Edwards wrote:
> On 2020-04-22, Dimiter_Popoff <dp@tgi-sci.com> wrote:
>
>> NXP support and make the power architecture line (and yes, they do call
>> it that), their top of the line parts are still these (QORIQ, that name
>> did not go away as it might have to....).
>
> QoriQ?
>
> Wow. That name is stunningly, amazingly bad. Do silicon vendors send
> people to some specialized school where they learn to come up with the
> most awful product line names possible?
>
> --
> Grant
>
They had that "digital DNA" before, not much better :-). Someone in their marketing may think they are in the business of selling soap or chocolate... Of course it does not matter much, how many of us would pay attention to the marketing name when choosing a platform - and their products are really good. OTOH I am not sure to what extent the likes of us here have much to say in big corporations when it comes to platform selection so things like that may have cost them.... or made them profit, I would not bet much on which of the two. Dimiter