On Wednesday, April 22, 2020 at 2:18:27 PM UTC-4, David Brown wrote:
> On 22/04/2020 17:24, Rick C wrote:
> > On Wednesday, April 22, 2020 at 5:28:26 AM UTC-4, David Brown wrote:
> >>>
> >>> If you think about it a bit you will see the only real way to
> >>> have "redundancy" in FPGAs is to excise entire sections of the
> >>> chip for a single failure. So a 50 kLUT chip will become a 25
> >>> kLUT chip if it has a failure(s) in one half. That's all I've
> >>> heard of. Trying to replace a small section of a chip to retain
> >>> the full functionality would result in uneven delays and that's a
> >>> real problem in FPGAs.
> >>>
> >>
> >> Yes, that may well be the way to do it. (I'd guess you could split
> >> up sections a bit more than that, especially if you are willing to
> >> relax the timing specifications for routing a little.) But even
> >> with the suggested half-disabling, it could be worth it if your
> >> yields are low. Suppose that 30% of your 50 kLUT chips have a fault
> >> - that means 70% can be sold. 70% of the remaining ones - 20% of
> >> the dies - can then be sold as 25 kLUT devices. These are "free".
> >
> > I'm trying to explain they don't test the chips to "bin" them and
> > sell them according to their capacity. They simply design a die to
> > have X capacity but also sell it as Y capacity. The dies are tested
> > according to how they want to sell them, and if they don't pass they
> > are trashed rather than retested for the other size. Apparently they
> > don't find it worthwhile to test and retest.
> >
>
> I know that this is done with some devices, certainly. For one of
> Atmel's AVR devices, the sole difference between the 64K version and
> the 32K version was the text printed on the package.

Did they not disable the extra memory in some way? Even if most of the chips work over the full 64K, the possibility of a problem from the chip not being fully tested might be enough of a deterrent that people won't buy the 32K part to use as 64K.

> (Long ago we used to use a microcontroller that had 8K of OTP memory.
> Then we discovered that the 32K version was significantly cheaper. This
> was because the 8K version was made by producing a 32K version and then
> running an extra step to program 24K of the memory to zeros.)
>
> I am not privy to the testing or binning procedures for FPGAs. Your
> suggestions sound perfectly reasonable to me. The suggestion that they
> use binning for some parts is also perfectly reasonable, and I know it
> is done on some other big chips. But I have no idea which is used for
> FPGAs.

I'm just telling you what I was told by the FPGA company representatives who used to post in c.a.fpga years ago. One was particularly argumentative and the company got them to stop. Another was Peter Alfke, who was an FPGA industry icon.

> > I think on most devices if you have a failure rate high enough to
> > make binning worthwhile you have process problems that need to be
> > addressed.
>
> Some devices /do/ have high failure rates - particularly in early
> stages of development or for low volume parts.

I don't know of any low volume FPGAs or MCUs other than perhaps very old end-of-life product. The cost of yield issues hugely impacts the cost of the final product, because you not only pay for the bad dies but also for all the testing time to show the bad dies are bad. That's probably why they don't retest for the "slower" or "smaller" bins; the chances of failing that test are probably much higher, and so even more costly per good unit.

--
Rick C.
--++- Get 1,000 miles of free Supercharging
--++- Tesla referral code - https://ts.la/richard11209
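The half-disabling yield arithmetic quoted above can be sanity-checked with a small model. This is only an illustrative sketch: the 70% die yield and the assumption that defects strike each half of the die independently and uniformly are hypothetical, not figures from any real process.

```python
# Sketch of the "sell a faulty 50 kLUT die as a 25 kLUT part" argument,
# assuming defects hit each half of the die independently and uniformly.
# All numbers are illustrative, not real process data.

def binning_yield(p_die_good: float) -> tuple[float, float, float]:
    """Return (full-capacity, half-capacity, scrap) fractions of all dies.

    p_die_good: probability a whole die is defect-free (e.g. 0.70).
    """
    # With independent halves, each half is good with probability
    # sqrt(p_die_good), since "die good" means "both halves good".
    p_half_good = p_die_good ** 0.5
    full = p_die_good                            # sell at full capacity
    half = 2 * p_half_good * (1 - p_half_good)   # exactly one half bad
    scrap = (1 - p_half_good) ** 2               # both halves bad
    return full, half, scrap

full, half, scrap = binning_yield(0.70)
print(f"full: {full:.1%}, salvaged as half-size: {half:.1%}, scrap: {scrap:.1%}")
```

Under these assumptions about 27% of dies are salvaged, a bit more than the rough "70% of the remaining 30%" in the post, because this model only credits a faulty die when its faults happen to cluster in one half.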
Custom CPU Designs
Started by ●April 16, 2020
Reply by ●April 22, 2020
Reply by ●April 22, 2020
Rick C <gnuarm.deletethisbit@gmail.com> wrote:
> I would be worried this would be lost in the large interest in RISC-V
> which seems to have a pretty good head of steam at this point. How many
> open architectures do we need? Isn't MIPS open at this point?

MIPS is... complicated:
https://www.cnx-software.com/2020/04/22/is-mips-dead-lawsuit-bankruptcy-maintainers-leaving-and-more/

I would not choose it for a new design at this point.

Theo
Reply by ●April 22, 2020
On Wednesday, April 22, 2020 at 5:06:55 PM UTC-4, Theo wrote:
> Rick C <gnuarm.deletethisbit@gmail.com> wrote:
> > I would be worried this would be lost in the large interest in RISC-V
> > which seems to have a pretty good head of steam at this point. How many
> > open architectures do we need? Isn't MIPS open at this point?
>
> MIPS is... complicated:
> https://www.cnx-software.com/2020/04/22/is-mips-dead-lawsuit-bankruptcy-maintainers-leaving-and-more/
>
> I would not choose it for a new design at this point.
>
> Theo

Yeah, but why exactly? Is the concern that there won't be future updates/upgrades to the architecture? Tool support will wane? With the various parties involved dropping like flies, it seems like it would be a choice where no one would have an interest in bothering to collect royalties. But then what do I know? This might set you up to where five years from now you end up having to pay not only royalties, but penalties.

--
Rick C.
--+++ Get 1,000 miles of free Supercharging
--+++ Tesla referral code - https://ts.la/richard11209
Reply by ●April 23, 2020
On Wed, 22 Apr 2020 17:06:35 +0000 (UTC), Grant Edwards <invalid@invalid.invalid> wrote:
> On 2020-04-22, Dimiter_Popoff <dp@tgi-sci.com> wrote:
>
>> NXP support and make the power architecture line (and yes, they do call
>> it that), their top of the line parts are still these (QorIQ, that name
>> did not go away as it might have to....).
>
> QorIQ?
>
> Wow. That name is stunningly, amazingly bad. Do silicon vendors send
> people to some specialized school where they learn to come up with the
> most awful product line names possible?

They get names from the same source as do pharmaceuticals.
Reply by ●April 23, 2020
On 22/04/2020 20:45, Rick C wrote:
> On Wednesday, April 22, 2020 at 2:18:27 PM UTC-4, David Brown wrote:
>> On 22/04/2020 17:24, Rick C wrote:
>>> On Wednesday, April 22, 2020 at 5:28:26 AM UTC-4, David Brown
>>> wrote:
>>>>>
>>>>> If you think about it a bit you will see the only real way
>>>>> to have "redundancy" in FPGAs is to excise entire sections of
>>>>> the chip for a single failure. So a 50 kLUT chip will become
>>>>> a 25 kLUT chip if it has a failure(s) in one half. That's
>>>>> all I've heard of. Trying to replace a small section of a
>>>>> chip to retain the full functionality would result in uneven
>>>>> delays and that's a real problem in FPGAs.
>>>>>
>>>>
>>>> Yes, that may well be the way to do it. (I'd guess you could
>>>> split up sections a bit more than that, especially if you are
>>>> willing to relax the timing specifications for routing a
>>>> little.) But even with the suggested half-disabling, it could
>>>> be worth it if your yields are low. Suppose that 30% of your 50
>>>> kLUT chips have a fault - that means 70% can be sold. 70% of
>>>> the remaining ones - 20% of the dies - can then be sold as 25
>>>> kLUT devices. These are "free".
>>>
>>> I'm trying to explain they don't test the chips to "bin" them
>>> and sell them according to their capacity. They simply design a
>>> die to have X capacity but also sell it as Y capacity. The dies
>>> are tested according to how they want to sell them, and if they
>>> don't pass they are trashed rather than retested for the other
>>> size. Apparently they don't find it worthwhile to test and retest.
>>>
>>
>> I know that this is done with some devices, certainly. For one of
>> Atmel's AVR devices, the sole difference between the 64K version
>> and the 32K version was the text printed on the package.
>
> Did they not disable the extra memory in some way? Even if most of
> the chips work over the full 64K, the possibility of a problem from
> the chip not being fully tested might be enough of a deterrent that
> people won't buy the 32K part to use as 64K.
>

In this particular case, no, the memory was not disabled - but it (so I heard) was fully tested. Basically, the 64K parts were more popular (due to a few big customers), and it was cheaper for the company to make and test more of them than to have a different setup for the 32K parts. It was perhaps only a temporary measure - one hears stories, but never full details.

>
>> (Long ago we used to use a microcontroller that had 8K of OTP
>> memory. Then we discovered that the 32K version was significantly
>> cheaper. This was because the 8K version was made by producing a
>> 32K version and then running an extra step to program 24K of the
>> memory to zeros.)
>>
>> I am not privy to the testing or binning procedures for FPGAs.
>> Your suggestions sound perfectly reasonable to me. The suggestion
>> that they use binning for some parts is also perfectly
>> reasonable, and I know it is done on some other big chips. But I
>> have no idea which is used for FPGAs.
>
> I'm just telling you what I was told by the FPGA company
> representatives who used to post in c.a.fpga years ago. One was
> particularly argumentative and the company got them to stop. Another
> was Peter Alfke, who was an FPGA industry icon.

Fair enough.

>
>>> I think on most devices if you have a failure rate high enough
>>> to make binning worthwhile you have process problems that need
>>> to be addressed.
>>
>> Some devices /do/ have high failure rates - particularly in early
>> stages of development or for low volume parts.
>
> I don't know of any low volume FPGAs or MCUs other than perhaps very
> old end-of-life product. The cost of yield issues hugely impacts the
> cost of the final product, because you not only pay for the bad dies
> but also for all the testing time to show the bad dies are bad.
> That's probably why they don't retest for the "slower" or "smaller"
> bins; the chances of failing that test are probably much higher, and
> so even more costly per good unit.
>

I believe that for some parts there are spot-checks on dies - they take a few of the parts and test them for speed, temperature, power, etc., and use that information for binning all parts on the wafer. But again, I have no idea if that applies to any particular part.
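Rick's point that bad dies cost both silicon and tester time is easy to put numbers on. The sketch below is purely illustrative: the dollar figures and yields are hypothetical, chosen only to show the shape of the trade-off, not drawn from any vendor's actual costs.

```python
# Back-of-envelope cost per *good* unit when every die, good or bad,
# consumes fab cost and tester time.  All figures are hypothetical.

def cost_per_good_unit(die_cost: float, test_cost: float,
                       yield_frac: float) -> float:
    """Every die is fabricated and tested; only yield_frac are sellable,
    so the sellable units carry the cost of the failures too."""
    return (die_cost + test_cost) / yield_frac

# At high yield the bad dies barely matter; at 50% yield they double
# the effective cost of each good part.
for y in (0.95, 0.70, 0.50):
    cost = cost_per_good_unit(die_cost=5.0, test_cost=1.0, yield_frac=y)
    print(f"yield {y:.0%}: ${cost:.2f} per good unit")
```

Retesting failures for a second "smaller" bin adds still more tester cost against a pass rate that (as the post argues) is lower than the first bin's, so whether a second bin pays depends on that retest yield and the lower price the salvaged parts fetch.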
Reply by ●April 23, 2020
Rick C <gnuarm.deletethisbit@gmail.com> wrote:
> Yeah, but why exactly? Is the concern that there won't be future
> updates/upgrades to the architecture? Tool support will wane? With the
> various parties involved dropping like flies, it seems like it would be
> a choice where no one would have an interest in bothering to collect
> royalties.

1. If the IP situation is unclear, companies may hold back in case someone litigious ends up as the owner of the IP (see SCO)

2. It appears there is only one remaining Linux maintainer, LLVM has no maintenance, FreeBSD is likely to drop MIPS in 2022. Most end users are unlikely to put in the effort to keep the toolchains current.

Theo