
Atmel SAM9 "boot from NAND" is a myth?

Started by Grant Edwards September 15, 2010
On Sep 17, 7:52 am, Marc Jet <jetm...@hotmail.com> wrote:
> > All "hardware ECC support" I have seen so far is useless for anything
> > but older, smaller SLC parts. "Hardware ECC support" is doing a Hamming
> > code in hardware, which can correct a single bit error. Current large
> > SLC parts, and MLC parts, need a 4- or even 8-bit-correcting code.
>
> IMHO it is actually worse.
>
> The way many NAND datasheets are written, they allow for more than
> just 1 or 4 or 8 bad bits in a block.  A certain number of blocks
> could go away COMPLETELY, and the part would still be in-spec.
>
> People commonly expect bad blocks to have more bit errors than their
> ECC copes with.  However, nowhere in the datasheets is a guarantee for
> this.
>
> For what I know, blocks could just as well become all 1.  Or all 0.
> Or return read timeout.  Or worse, they could become "unerasable" -
> stuck at your own previous content (with your headers, and valid
> ECC!).
>
> Now I want to see how your FTL copes with that!
Looking back, I never actually used a NAND flash in a design.  I
understand how the bad bits would be managed.  But what about bad
blocks?  Is this a spec on delivery, or is it allowed for blocks to go
bad in the field?  I can't see how that could be supported without a
very complex scheme along the lines of RAID drives.

Rick
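For anyone who hasn't met it, the single-bit "hardware ECC" Marc
describes is worth seeing concretely.  Below is a minimal sketch in C of
one single-error-correcting scheme: XOR together the indices of every
set bit, plus an overall parity bit.  This is an illustration, not any
vendor's actual layout, but it shows why such a code can fix exactly one
flipped bit and no more.

#include <stdint.h>
#include <stddef.h>

/* Sketch of a single-error-correcting code: the "ECC" is the XOR of
 * the indices of all 1-bits, plus an overall parity bit.  Flipping
 * exactly one bit changes the index-XOR by that bit's index. */

typedef struct {
    uint16_t idx_xor;  /* XOR of the indices of all 1-bits */
    uint8_t  parity;   /* overall parity of the buffer */
} ecc_t;

static ecc_t ecc_compute(const uint8_t *buf, size_t len)
{
    ecc_t e = { 0, 0 };
    for (size_t i = 0; i < len * 8; i++) {
        if (buf[i / 8] & (1u << (i % 8))) {
            e.idx_xor ^= (uint16_t)i;
            e.parity  ^= 1;
        }
    }
    return e;
}

/* Returns 0 if clean, 1 if one bit was corrected, -1 if uncorrectable. */
static int ecc_correct(uint8_t *buf, size_t len, ecc_t stored)
{
    ecc_t now = ecc_compute(buf, len);
    uint16_t syndrome = now.idx_xor ^ stored.idx_xor;

    if (syndrome == 0 && now.parity == stored.parity)
        return 0;                  /* no error */
    if (now.parity != stored.parity && syndrome < len * 8) {
        /* odd number of flips; assume exactly one and repair it */
        buf[syndrome / 8] ^= (uint8_t)(1u << (syndrome % 8));
        return 1;
    }
    return -1;                     /* two or more flips: detected, not fixable */
}

For a 256-byte chunk the syndrome needs 11 bits, so this kind of
information fits in the classic three ECC bytes per 256 data bytes of
the SmartMedia-style scheme.  A 4- or 8-bit-correcting BCH code, which
Marc says today's parts need, is a much heavier beast.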
On 2010-09-17, Stefan Reuther <stefan.news@arcor.de> wrote:
> Wil Taphoorn wrote:
>> On 16-9-2010 20:46, Grant Edwards wrote:
>>
>>>  The blocks are guaranteed to be valid for the endurance
>>>  specified for this area (see section 5.6.1.23) when the host
>>>  follows the specified number of bits to correct.
>>>
>>> ... and it explicitly says they can require that the host do ECC
>>> for those "guaranteed valid" blocks.
>>
>> Doesn't that mean that the programming device that is writing the
>> boot sector has to verify for errors and, if so, reject the device?
>
> I interpret that to mean that the boot sector can consist of X
> perfectly reliable bits and Y unreliable bits (e.g. permanently
> zero).  The boot loader would then have to ECC-correct the unreliable
> bits each time it loads, and the manufacturer guarantees that Y
> doesn't grow above the ECC requirements.
That's what it means to me, that's what it means to the FAEs we're
working with, and judging by the parts' datasheets, that's what it means
to the guys doing QA at the fabs.

--
Grant
On Fri, 17 Sep 2010 12:19:56 -0700, rickman wrote:

> On Sep 17, 7:52 am, Marc Jet <jetm...@hotmail.com> wrote:
>> [...]
>>
>> For what I know, blocks could just as well become all 1.  Or all 0.  Or
>> return read timeout.  Or worse, they could become "unerasable" - stuck
>> at your own previous content (with your headers, and valid ECC!).
>>
>> Now I want to see how your FTL copes with that!
>
> Looking back, I never actually used a NAND flash in a design.  I
> understand how the bad bits would be managed.  But what about bad
> blocks?  Is this a spec on delivery, or is it allowed for blocks to go
> bad in the field?  I can't see how that could be supported without a
> very complex scheme along the lines of RAID drives.
It's pretty simple actually.  When the driver reads a block that has an
error, it copies the corrected contents to an unused block and sets the
bad block flag in the original block, preventing its reuse.  No software
will ever clear the bad block flag, which means that the effective size
of the device decreases as blocks go bad in the field.

From the point of view of the flash device, the bad block flag is just
another bit.  The meaning comes from the software behaviour.  The device
manufacturer will also mark some blocks bad during test.  All
filesystems will use this same bit.  Even if you reformat the device and
put a different filesystem on it, the bad block information is retained.

Cheers,
Allan
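To make Allan's description concrete, here is a rough sketch of that
read path.  All the nand_* primitives, the geometry constants, and the
threshold are placeholders standing in for whatever a real driver
provides, not an actual API:

#include <stdint.h>

#define PAGE_SIZE         2048  /* illustrative large-page geometry */
#define PAGES_PER_BLOCK   64
#define BITFLIP_THRESHOLD 3     /* e.g. ECC strength minus a safety margin */

/* Assumed low-level primitives; each returns 0 on success. */
extern int  nand_read_page(int block, int page, uint8_t *buf, int *bitflips);
extern int  nand_write_page(int block, int page, const uint8_t *buf);
extern void nand_mark_bad(int block);   /* sets the flag in the spare area */
extern int  nand_find_free_block(void);

/* Read a page; if ECC shows the block is degrading, migrate the whole
 * block's corrected contents to a spare block and retire the old one. */
int read_with_retirement(int block, int page, uint8_t *buf)
{
    int bitflips = 0;

    if (nand_read_page(block, page, buf, &bitflips) != 0)
        return -1;                        /* uncorrectable: data is lost */

    if (bitflips >= BITFLIP_THRESHOLD) {
        static uint8_t tmp[PAGE_SIZE];
        int dst = nand_find_free_block();

        for (int p = 0; p < PAGES_PER_BLOCK; p++) {
            int f;
            if (nand_read_page(block, p, tmp, &f) == 0)
                nand_write_page(dst, p, tmp);
        }
        nand_mark_bad(block);  /* never cleared: the device shrinks over time */
        /* a real driver would also update its logical-to-physical map here */
    }
    return 0;
}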
On Sep 18, 12:34 am, Allan Herriman <allanherri...@hotmail.com> wrote:
> On Fri, 17 Sep 2010 12:19:56 -0700, rickman wrote:
>> On Sep 17, 7:52 am, Marc Jet <jetm...@hotmail.com> wrote:
>>> [...]
>>
>> [...] But what about bad blocks?  Is this a spec on delivery, or is
>> it allowed for blocks to go bad in the field?  I can't see how that
>> could be supported without a very complex scheme along the lines of
>> RAID drives.
>
> It's pretty simple actually.  When the driver reads a block that has an
> error, it copies the corrected contents to an unused block and sets the
> bad block flag in the original block, preventing its reuse.
> No software will ever clear the bad block flag, which means that the
> effective size of the device decreases as blocks go bad in the field.
>
> From the point of view of the flash device, the bad block flag is just
> another bit.  The meaning comes from the software behaviour.  The device
> manufacturer will also mark some blocks bad during test.  All filesystems
> will use this same bit.  Even if you reformat the device and put a
> different filesystem on it, the bad block information is retained.
>
> Cheers,
> Allan
You lost me.  If there is a recoverable error, the block is not bad,
right?  That's the purpose of the ECC.  If the block accumulates enough
bad bits that the ECC cannot correct, then you can't recover the data.

Obviously there is something about the definition of "bad block" that I
am not getting.  Are blocks with *any* bit errors considered bad and not
used?  What if a block goes bad because it went from no bit errors to
more than the correctable number of bit errors?  As Marc indicated, a
block can go bad for multiple reasons, many of which do not allow the
data to be recovered.

This sounds just like a bad block on a hard drive.  When the block goes
bad, you lose data.  No way around it, just tough luck!  I suppose in
both media that is one of the limitations of the media.  I didn't
realize that NAND flash had this same sort of specified behavior which
is considered part of normal operation.  I'll have to keep that in mind.

Rick
Allan Herriman wrote:
> On Fri, 17 Sep 2010 12:19:56 -0700, rickman wrote:
>> On Sep 17, 7:52 am, Marc Jet <jetm...@hotmail.com> wrote:
>>> People commonly expect bad blocks to have more bit errors than their
>>> ECC copes with.  However, nowhere in the datasheets is a guarantee for
>>> this.
[...]
>> Looking back, I never actually used a NAND flash in a design.  I
>> understand how the bad bits would be managed.  But what about bad
>> blocks?  Is this a spec on delivery, or is it allowed for blocks to go
>> bad in the field?  I can't see how that could be supported without a
>> very complex scheme along the lines of RAID drives.
>
> It's pretty simple actually.  When the driver reads a block that has an
> error, it copies the corrected contents to an unused block and sets the
> bad block flag in the original block, preventing its reuse.
> No software will ever clear the bad block flag, which means that the
> effective size of the device decreases as blocks go bad in the field.
But where do you store the "bad block" flag? It is pretty common to store it in the bad block itself. The point Marc is making is that this is not guaranteed to work.
> From the point of view of the flash device, the bad block flag is just
> another bit.  The meaning comes from the software behaviour.  The device
> manufacturer will also mark some blocks bad during test.  All filesystems
> will use this same bit.  Even if you reformat the device and put a
> different filesystem on it, the bad block information is retained.
In an ideal world, maybe.  All file systems I have seen so far use
different bad block schemes.  Which is not surprising, as NAND flash
parts themselves use different schemes to mark factory bad blocks.

  Stefan
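As an illustration of the "different schemes" problem Stefan mentions,
here is roughly what a factory bad-block scan looks like under the
common (but by no means universal) convention of a non-0xFF marker byte
in the spare area of a block's first page.  The offset, the page to
check, and nand_read_spare() are all assumptions; the real scheme is
part-specific:

#include <stdbool.h>
#include <stdint.h>

#define BAD_MARKER_OFFSET 0  /* byte 0 of the spare area on many large-page parts */

/* Assumed primitive: read 'len' spare-area bytes starting at 'offset'. */
extern int nand_read_spare(int block, int page, int offset,
                           uint8_t *spare, int len);

static bool block_is_factory_bad(int block)
{
    uint8_t marker;

    /* Manufacturers mark bad blocks before shipping, typically in
     * page 0 (some parts use page 1 or the last page instead). */
    if (nand_read_spare(block, 0, BAD_MARKER_OFFSET, &marker, 1) != 0)
        return true;         /* unreadable spare area: treat as bad */
    return marker != 0xFF;   /* anything other than 0xFF means bad */
}

Note that the scan has to run before the block is ever erased: erasing
sets the marker back to 0xFF and the factory information is gone for
good.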
On 18/09/2010 13:26, rickman wrote:
> On Sep 18, 12:34 am, Allan Herriman<allanherri...@hotmail.com> wrote:
>> [...]
>> It's pretty simple actually.  When the driver reads a block that has an
>> error, it copies the corrected contents to an unused block and sets the
>> bad block flag in the original block, preventing its reuse.
>> [...]
>
> You lost me.  If there is a recoverable error, the block is not bad,
> right?  That's the purpose of the ECC.  If the block accumulates enough
> bad bits that the ECC cannot correct, then you can't recover the data.
>
> Obviously there is something about the definition of "bad block" that I
> am not getting.  Are blocks with *any* bit errors considered bad and not
> used?  What if a block goes bad because it went from no bit errors to
> more than the correctable number of bit errors?  As Marc indicated, a
> block can go bad for multiple reasons, many of which do not allow the
> data to be recovered.
>
> [...]
>
> Rick
Just like with hard disks, the NAND flash ECC can correct several errors
in a block.  So when there are a few correctable errors in a block, the
block is still "good" and still used.  But once you have got close to
the correctable limit, you can still read out the data but you mark it
as bad so that it won't be used again.

There is always a possibility of a major failure that unexpectedly
increases the error rate beyond the capabilities of the ECC.  But that
should be a fairly rare event - like a head crash on a hard disk.  The
idea is to detect slow, gradual decay and limit its consequences.  If
you need to protect against sudden disaster, then something equivalent
to RAID is the answer.
On Sep 18, 10:25 am, David Brown
<david.br...@hesbynett.removethisbit.no> wrote:
> On 18/09/2010 13:26, rickman wrote:
>> [...]
>
> Just like with hard disks, the NAND flash ECC can correct several errors
> in a block.  So when there are a few correctable errors in a block, the
> block is still "good" and still used.  But once you have got close to
> the correctable limit, you can still read out the data but you mark it
> as bad so that it won't be used again.
"Close" isn't good enough. You can't assume that it will fail gradually. If it goes from good to bad, then you have lost data. Now that I am aware of that, I will treat NAND flash the same as hard disks, not to be counted on for embedded projects where a file system failure is important.
> There is always a possibility of a major failure that unexpectedly
> increases the error rate beyond the capabilities of the ECC.  But that
> should be a fairly rare event - like a head crash on a hard disk.  The
> idea is to detect slow, gradual decay and limit its consequences.  If
> you need to protect against sudden disaster, then something equivalent
> to RAID is the answer.
Yes, a bad block happening without warning may be "rare", but the point
is that it is JUST like a hard disk drive and cannot be used in an app
where this would cause a system failure.  Any part of the system can
fail, but a bad block is not considered a "failure" of the chip, even
though it can cause a failure of the system.

Rick
On Sep 18, 7:34 am, Stefan Reuther <stefan.n...@arcor.de> wrote:
> Allan Herriman wrote:
>> [...]
>> It's pretty simple actually.  When the driver reads a block that has an
>> error, it copies the corrected contents to an unused block and sets the
>> bad block flag in the original block, preventing its reuse.
>> No software will ever clear the bad block flag, which means that the
>> effective size of the device decreases as blocks go bad in the field.
>
> But where do you store the "bad block" flag?  It is pretty common to
> store it in the bad block itself.  The point Marc is making is that this
> is not guaranteed to work.
Why do you need a bad block flag? If the block has an ECC failure, it is bad and the OS will note that. You may have to read the block ECC the first time it fails, but after that it can be noted in the file system as not part of a file and not part of free space on the drive.
> > From the point of view of the flash device, the bad block flag is just
> > another bit.  The meaning comes from the software behaviour.  The device
> > manufacturer will also mark some blocks bad during test.  All filesystems
> > will use this same bit.  Even if you reformat the device and put a
> > different filesystem on it, the bad block information is retained.
>
> In an ideal world, maybe.  All file systems I have seen so far use
> different bad block schemes.  Which is not surprising, as NAND flash
> parts themselves use different schemes to mark factory bad blocks.
>
>   Stefan
I don't see how this is any different from a hard drive.  There they use
a combination of factory data and the file system to track bad blocks.

Rick
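A sketch of the scheme Rick is suggesting: no flag in the suspect block
itself, just a table of retired blocks kept as ordinary ECC-protected
metadata in known-good blocks.  The names and layout here are
illustrative only, but this is essentially what a flash bad block table
(such as the optional on-flash BBT in Linux MTD) does:

#include <stdbool.h>
#include <stdint.h>

#define NUM_BLOCKS 4096      /* illustrative device size */

typedef struct {
    uint32_t bad[NUM_BLOCKS / 32];   /* 1 = retired, never allocate */
} bbt_t;

static void bbt_mark(bbt_t *t, int block)
{
    t->bad[block / 32] |= 1u << (block % 32);
    /* a real filesystem would rewrite the table to flash here, keeping
     * at least two copies in case one of THOSE blocks goes bad */
}

static bool bbt_is_bad(const bbt_t *t, int block)
{
    return (t->bad[block / 32] >> (block % 32)) & 1u;
}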
On 19/09/2010 05:05, rickman wrote:
> On Sep 18, 10:25 am, David Brown
> <david.br...@hesbynett.removethisbit.no> wrote:
>> On 18/09/2010 13:26, rickman wrote:
>>
>> Just like with hard disks, the NAND flash ECC can correct several errors
>> in a block.  So when there are a few correctable errors in a block, the
>> block is still "good" and still used.  But once you have got close to
>> the correctable limit, you can still read out the data but you mark it
>> as bad so that it won't be used again.
>
> "Close" isn't good enough.  You can't assume that it will fail
> gradually.  If it goes from good to bad in one step, then you have lost
> data.  Now that I am aware of that, I will treat NAND flash the same as
> hard disks: not to be counted on in embedded projects where a file
> system failure would mean a system failure.
That's just nonsense.  /Everything/ has a chance of failure.  Are you
going to stop using microcontrollers because you've heard that they
occasionally fail?  Will you stop driving your car to work because cars
sometimes break down?

What is important for building reliable systems is to have an
understanding of the failure modes of the parts, the chances of these
failures, and the consequences of the failures.  NAND flash has a
significant risk of failure with reasonably well understood
characteristics - the failure of individual bits is mostly independent,
and the risk of failure increases with each erase/write cycle.  So what
you get is a pattern of gradually more random bit failures within any
given block, increasing as the block gets erased and re-written.  You
correct for a few bit failures, but if there are too many errors you
consider the block to be failing - you can read from it, but you won't
trust it to store new data.  In most cases, you'll copy the data over to
a different block.

Note that the same principle applies if the ECC coding only corrects a
single error - with one correctable error you consider the block too
risky for re-use, but trust the (corrected) data read out.
>> There is always a possibility of a major failure that unexpectedly
>> increases the error rate beyond the capabilities of the ECC.  But that
>> should be a fairly rare event - like a head crash on a hard disk.  The
>> idea is to detect slow, gradual decay and limit its consequences.  If
>> you need to protect against sudden disaster, then something equivalent
>> to RAID is the answer.
>
> Yes, a bad block happening without warning may be "rare", but the point
> is that it is JUST like a hard disk drive and cannot be used in an app
> where this would cause a system failure.  Any part of the system can
> fail, but a bad block is not considered a "failure" of the chip, even
> though it can cause a failure of the system.
The only way to make a system safe in the event of rare catastrophic
failures of critical parts is with redundancy.  That applies to NAND
devices just like it applies to every other part of the system.

The difference is that with a NAND flash, a bad block is /not/
considered a failure, because you take the wear of the blocks into
account in the design of the system, so that bad blocks don't lead to
system failure.  Think of it like a battery - you know it is going to
"fail", and you plan accordingly so that it does not lead to a
catastrophic failure of the system.
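The retirement rule David describes boils down to a one-liner.  A
sketch, with ECC_STRENGTH and the margin as illustrative values only:

#include <stdbool.h>

#define ECC_STRENGTH 4   /* bits correctable per sector by the assumed code */

/* Trust the corrected data, but stop reusing the block once the number
 * of corrections gets close to what the ECC can handle. */
static bool should_retire(int bitflips_corrected)
{
    /* with a 1-bit Hamming code this degenerates to "any correction
     * retires the block", exactly as described above */
    int threshold = (ECC_STRENGTH > 1) ? ECC_STRENGTH - 1 : 1;
    return bitflips_corrected >= threshold;
}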
On 19/09/2010 05:09, rickman wrote:
> On Sep 18, 7:34 am, Stefan Reuther<stefan.n...@arcor.de> wrote:
>> But where do you store the "bad block" flag?  It is pretty common to
>> store it in the bad block itself.  The point Marc is making is that this
>> is not guaranteed to work.
>
> Why do you need a bad block flag?  If the block has an ECC failure, it
> is bad and the OS will note that.  You may have to read the block ECC
> the first time it fails, but after that it can be noted in the file
> system as not part of a file and not part of free space on the drive.
Failures can be intermittent - a partially failed bit could be read correctly or incorrectly depending on the data stored, the temperature, or the voltage. So if you see that you are getting failures, you make a note of them and don't use that block again.
