EmbeddedRelated.com
Forums

Amtel SAM9 "boot from NAND" is a myth?

Started by Grant Edwards September 15, 2010
On Sun, 19 Sep 2010 16:06:20 +0200, David Brown wrote:

> On 19/09/2010 05:09, rickman wrote: >> On Sep 18, 7:34 am, Stefan Reuther<stefan.n...@arcor.de> wrote: >>> Allan Herriman wrote: >>>> On Fri, 17 Sep 2010 12:19:56 -0700, rickman wrote: >>>>> On Sep 17, 7:52 am, Marc Jet<jetm...@hotmail.com> wrote: >>>>>> People commonly expect bad blocks to have more bit errors than >>>>>> their ECC copes with. However, nowhere in the datasheets is a >>>>>> guarantee for this. >>> [...] >>>>> Looking back, I never actually used a NAND flash in a design. I >>>>> understand how the bad bits would be managed. But what about bad >>>>> blocks? Is this a spec on delivery or is it allowed for blocks to >>>>> go bad in the field? I can't see how that could be supported >>>>> without a very complex scheme along the lines of RAID drives. >>> >>>> It's pretty simple actually. When the driver reads a block that has >>>> an error, it copies the corrected contents to an unused block and >>>> sets the bad block flag in the original block, preventing its reuse. >>>> No software will ever clear the bad block flag, which means that the >>>> effective size of the device decreases as blocks go bad in the field. >>> >>> But where do you store the "bad block" flag? It is pretty common to >>> store it in the bad block itself. The point Marc is making is that >>> this is not guaranteed to work. >> >> Why do you need a bad block flag? If the block has an ECC failure, it >> is bad and the OS will note that. You may have to read the block ECC >> the first time it fails, but after that it can be noted in the file >> system as not part of a file and not part of free space on the drive. >> >> >> > Failures can be intermittent - a partially failed bit could be read > correctly or incorrectly depending on the data stored, the temperature, > or the voltage. So if you see that you are getting failures, you make a > note of them and don't use that block again.
Our experience has been that parts fresh from the factory will have some blocks flagged as bad. If we (by using a modified driver) write and read those blocks, some of them will actually work OK. Presumably the factory test is rather more rigorous. ("Modified driver" should be read as "partially ported and still bug-ridden driver".)

It's been a while, but ISTR that the parts we were using had 0 to 3 bad blocks per device, which was within the manufacturer's spec. We stress tested a bunch of them and we did see a block go bad. The number of erase/write cycles required exceeded the manufacturer's minimum spec. (This stress testing was performed to test the bad block handling in software.)

To keep this relevant to the OP, we were using parts that had a guaranteed good block 0. The board is still in production.

Regards,
Allan
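To make the factory-mark handling concrete: a driver typically builds its bad-block table at start-up by scanning the spare (OOB) area of each block for the manufacturer's mark. The sketch below assumes a convention that is common for large-page SLC parts (a non-0xFF byte at spare offset 0 of the first or second page of a block) and a hypothetical nand_read_spare() primitive; the real offsets must come from the datasheet of the part you actually use.

```c
/*
 * Hedged sketch: scan for factory bad-block marks on a large-page SLC
 * NAND part.  nand_read_spare() and the geometry constants are
 * assumptions, not a real vendor API.
 */
#include <stdint.h>
#include <stdbool.h>

#define PAGES_PER_BLOCK  64
#define TOTAL_BLOCKS     2048
#define BB_MARK_OFFSET   0        /* byte 0 of the spare area (assumed) */

/* Hypothetical low-level read of one spare-area byte. */
extern int nand_read_spare(uint32_t page, uint32_t offset, uint8_t *out);

static bool block_is_factory_bad(uint32_t block)
{
    uint8_t mark;

    /* Manufacturers typically mark the first and/or second page. */
    for (uint32_t p = 0; p < 2; p++) {
        uint32_t page = block * PAGES_PER_BLOCK + p;
        if (nand_read_spare(page, BB_MARK_OFFSET, &mark) != 0)
            return true;          /* unreadable: treat as bad */
        if (mark != 0xFF)
            return true;          /* factory (or runtime) bad mark */
    }
    return false;
}

/* Build an in-RAM bad-block table once at start-up. */
static uint8_t bbt[TOTAL_BLOCKS / 8];

void scan_bad_blocks(void)
{
    for (uint32_t b = 0; b < TOTAL_BLOCKS; b++)
        if (block_is_factory_bad(b))
            bbt[b / 8] |= 1u << (b % 8);
}
```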
rickman wrote:
> On Sep 18, 10:25 am, David Brown > <david.br...@hesbynett.removethisbit.no> wrote: >> On 18/09/2010 13:26, rickman wrote: >> >> >> >>> On Sep 18, 12:34 am, Allan Herriman<allanherri...@hotmail.com> wrote: >>>> On Fri, 17 Sep 2010 12:19:56 -0700, rickman wrote: >>>>> On Sep 17, 7:52 am, Marc Jet<jetm...@hotmail.com> wrote: >>>>>>> All "hardware ECC support" I have seen so far is useless for anything >>>>>>> but older, smaller SLC parts. "Hardware ECC support" is doing a >>>>>>> Hamming code in hardware, which can correct a single bit error. >>>>>>> Current large SLC parts, and MLC parts, need a 4- or even >>>>>>> 8-bit-correcting code. >>>>>> IMHO it is actually worse. >>>>>> The way many NAND datasheets are written, they allow for more than just >>>>>> 1 or 4 or 8 bad bits in a block. A certain number of blocks could go >>>>>> away COMPLETELY, and the part would still be in-spec. >>>>>> People commonly expect bad blocks to have more bit errors than their >>>>>> ECC copes with. However, nowhere in the datasheets is a guarantee for >>>>>> this. >>>>>> For what I know, blocks could just as well become all 1. Or all 0.. Or >>>>>> return read timeout. Or worse, they could become "unerasable" - stuck >>>>>> at your own previous content (with your headers, and valid ECC!). >>>>>> Now I want to see how your FTL copes with that! >>>>> Looking back, I never actually used a NAND flash in a design. I >>>>> understand how the bad bits would be managed. But what about bad >>>>> blocks? Is this a spec on delivery or is it allowed for blocks to go >>>>> bad in the field? I can't see how that could be supported without a >>>>> very complex scheme along the lines of RAID drives. >>>> It's pretty simple actually. When the driver reads a block that has an >>>> error, it copies the corrected contents to an unused block and sets the >>>> bad block flag in the original block, preventing its reuse. >>>> No software will ever clear the bad block flag, which means that the >>>> effective size of the device decreases as blocks go bad in the field. >>>> From the point of view of the flash device, the bad block flag is just >>>> another bit. The meaning comes from the software behaviour. The device >>>> manufacturer will also mark some blocks bad during test. All filesystems >>>> will use this same bit. Even if you reformat the device and put a >>>> different filesystem on it, the bad block information is retained. >>>> Cheers, >>>> Allan >>> You lost me. If there is an recoverable error, the block is not bad, >>> right? That's the purpose of the ECC. If the block accumulates >>> enough bad bits that the ECC can not correct, then you can't recover >>> the data. >>> Obviously there is something about the definition of "bad block" that >>> I am not getting. Are blocks with *any* bit errors considered bad and >>> not used? What if a block goes bad because it went from no bit errors >>> to more than the correctable number of bit errors? As Marc indicated, >>> a block can go bad for multiple reasons, many of which do not allow >>> the data to be recovered. >>> This sounds just like a bad block on a hard drive. When the block >>> goes bad, you lose data. No way around it, just tough luck! I >>> suppose in both media that is one of the limitations of the media. I >>> didn't realize that NAND Flash had this same sort of specified >>> behavior which is considered part of normal operation. I'll have to >>> keep that in mind. >>> Rick >> Just like with hard disks, the NAND flash ECC can correct several errors >> in a block. 
So when there are a few correctable errors in a block, the >> block is still "good" and still used. But once you have got close to >> the correctable limit, you can still read out the data but you mark it >> as bad so that it won't be used again. > > "Close" isn't good enough. You can't assume that it will fail > gradually. If it goes from good to bad, then you have lost data. Now > that I am aware of that, I will treat NAND flash the same as hard > disks, not to be counted on for embedded projects where a file system > failure is important. >
News to you: All flash memories will eventually "wear out". You have to have a strategy to handle this.
> >> There is always a possibility of a major failure that unexpectedly >> increases the error rate beyond the capabilities of the ECC. But that >> should be a fairly rare event - like a head crash on a hard disk. The >> idea is to detect slow, gradual decay and limit its consequences. If >> you need to protect against sudden disaster, then something equivalent >> to RAID is the answer. > > Yes, a bad block happening without warning may be "rare", but the > point is that it is JUST like a hard disk drive and can not be used in > an app where this would cause a system failure. Any part of the > system can fail, but a bad block is not considered a "failure" of the > chip even though it can cause a failure of the system. > > Rick
-- Best Regards Ulf Samuelsson These are my own personal opinions, which may or may not be shared by my employer Atmel Nordic AB
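Tying together the scheme Allan and David describe above (read with ECC, and retire a block by migrating its contents once the corrected-bit count creeps toward the code's limit), here is a minimal sketch. The driver calls and thresholds are assumptions for illustration, not any particular vendor's API:

```c
/*
 * Hedged sketch of a scrub-and-retire policy: read with ECC, and once
 * the number of corrected bits in a block approaches the code's limit,
 * move the data to a fresh block and retire the old one.
 */
#include <stdint.h>

#define PAGES_PER_BLOCK   64
#define PAGE_SIZE         2048
#define ECC_CORRECTABLE   4                      /* bits the code can fix */
#define RETIRE_THRESHOLD  (ECC_CORRECTABLE - 1)  /* migrate before the limit */

/* Hypothetical driver primitives. */
extern int  ecc_read_page(uint32_t block, uint32_t page, uint8_t *buf,
                          int *bits_corrected);  /* <0 = uncorrectable */
extern int  write_page(uint32_t block, uint32_t page, const uint8_t *buf);
extern void mark_block_bad(uint32_t block);
extern int  get_free_block(uint32_t *block);

int scrub_block(uint32_t block)
{
    static uint8_t buf[PAGE_SIZE];
    int worst = 0;

    for (uint32_t p = 0; p < PAGES_PER_BLOCK; p++) {
        int corrected;
        if (ecc_read_page(block, p, buf, &corrected) < 0)
            return -1;               /* data already lost */
        if (corrected > worst)
            worst = corrected;
    }

    if (worst < RETIRE_THRESHOLD)
        return 0;                    /* block still healthy */

    /* Evacuate while the data is still correctable. */
    uint32_t dst;
    if (get_free_block(&dst) < 0)
        return -1;
    for (uint32_t p = 0; p < PAGES_PER_BLOCK; p++) {
        int corrected;
        if (ecc_read_page(block, p, buf, &corrected) < 0)
            return -1;               /* lost it between scan and copy */
        if (write_page(dst, p, buf) != 0)
            return -1;
    }
    mark_block_bad(block);           /* never reuse the old block */
    return 0;
}
```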
Grant Edwards wrote:
> We recently based a board on an Atmel AT91SAM9G20 part which the FAE > and rep said could boot from NAND flash. The eval board can indeed be > configured to boot from NAND flash. However, when it comes time to > spec parts for a real product, we find that's all smoke and mirrors. > > The Atmel SAM9 parts require that block 0 be completely free of bit > errors since the ROM bootloader doesn't do ECC (despite the fact that > the part does have hardware ECC support). So you have to use a NAND > flash that guarantees a good block 0 _without_using_ECC_. It turns > out those sorts of NAND flash parts appear to be made of a combination > of Unicorn's horn and Unobtanium. IOW, they don't exist. At least > that's what the flash distributor and rep tell us. > > What was Amtel thinking when they decided not to do ECC when reading > NAND flash? I realize Atmel doesn't make NAND flash, but surely they > must have been aware that NAND flash parts aren't spec'ed to be > fault-free by the flash vendors. > > My opinion? It's a way for Atmel to suck you in and then after you > get the unpleasant surprise that you _can't_ boot from NAND, they try > to sell you a serial dataflash part you don't really want.
The problem is that the NAND flash market has moved on since the AT91SAM9G20 was designed. NAND flash used to guarantee block 0; now there are parts which do not give this guarantee. I think the industry is moving towards eMMC.

Note that the fact that block 0 is OK is no guarantee that you can boot. The part configuration must also be recognized by the boot ROM. Some manufacturers "reuse" IDs, so if the table contains two entries with the same ID, only the first will be found.
> > OTOH, TI did it right in their OMAP parts: not only does the > bootloader do ECC, it also will skip blocks that have uncorrectable > errors. > > Atmel: Block 0 must be good without ECC > > TI: _Any_ of blocks 0,1,2,3 must be good _with_ ECC > > Which do you think is going to work better? >
-- Best Regards Ulf Samuelsson These are my own personal opinions, which may or may not be shared by my employer Atmel Nordic AB
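To illustrate Ulf's point about reused IDs: a boot ROM typically matches the ID bytes read from the NAND against a built-in table and takes the first hit, so a later entry with the same ID pair but a different geometry can never be selected. The table layout, IDs and geometries below are invented for illustration; they are not Atmel's actual ROM data:

```c
/* Hedged sketch of a boot-ROM style NAND configuration lookup. */
#include <stdint.h>
#include <stddef.h>

struct nand_cfg {
    uint8_t  mfr_id, dev_id;
    uint16_t page_size;        /* bytes */
    uint32_t block_size;       /* bytes */
};

static const struct nand_cfg rom_table[] = {
    { 0xEC, 0xDA, 2048, 128 * 1024 },
    { 0xEC, 0xDA, 2048, 256 * 1024 },  /* same ID reused: never reachable */
};

/* Linear search: the FIRST matching entry wins. */
const struct nand_cfg *lookup(uint8_t mfr, uint8_t dev)
{
    for (size_t i = 0; i < sizeof rom_table / sizeof rom_table[0]; i++)
        if (rom_table[i].mfr_id == mfr && rom_table[i].dev_id == dev)
            return &rom_table[i];
    return NULL;
}
```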
On Sat, 18 Sep 2010 13:26:30 +0200, rickman <gnuarm@gmail.com> wrote:
> On Sep 18, 12:34 am, Allan Herriman <allanherri...@hotmail.com> wrote: >> On Fri, 17 Sep 2010 12:19:56 -0700, rickman wrote: >> > On Sep 17, 7:52 am, Marc Jet <jetm...@hotmail.com> wrote: >> >> > [...] >> >> > Looking back, I never actually used a NAND flash in a design. I >> > understand how the bad bits would be managed. But what about bad >> > blocks? Is this a spec on delivery or is it allowed for blocks to go >> > bad in the field? I can't see how that could be supported without a >> > very complex scheme along the lines of RAID drives. >> >> It's pretty simple actually. When the driver reads a block that has an >> error, it copies the corrected contents to an unused block and sets the >> bad block flag in the original block, preventing its reuse. >> No software will ever clear the bad block flag, which means that the >> effective size of the device decreases as blocks go bad in the field. >> >> From the point of view of the flash device, the bad block flag is just >> another bit. The meaning comes from the software behaviour. The device >> manufacturer will also mark some blocks bad during test. All >> filesystems >> will use this same bit. Even if you reformat the device and put a >> different filesystem on it, the bad block information is retained. > > You lost me. If there is an recoverable error, the block is not bad, > right?
It means that it's not doing particularly well and is likely to become bad.
> That's the purpose of the ECC. If the block accumulates > enough bad bits that the ECC can not correct, then you can't recover > the data.
Precisely. Without ECC, you wouldn't be able to evacuate the data to a good block. In theory I think you could still use them for writing but you'd have to verify the data every time.
> Obviously there is something about the definition of "bad block" that > I am not getting. Are blocks with *any* bit errors considered bad and > not used? What if a block goes bad because it went from no bit errors > to more than the correctable number of bit errors?
That would be "really bad".
> As Marc indicated, > a block can go bad for multiple reasons, many of which do not allow > the data to be recovered. > > This sounds just like a bad block on a hard drive. When the block > goes bad, you lose data. No way around it, just tough luck! I > suppose in both media that is one of the limitations of the media. I > didn't realize that NAND Flash had this same sort of specified > behavior which is considered part of normal operation. I'll have to > keep that in mind. > > Rick
-- Made with Opera's revolutionary e-mail program: http://www.opera.com/mail/ (remove the obvious prefix to reply by mail)
NAND chips allow for up to N "bad blocks" before a device is
considered defective.  Some blocks come already marked as bad from
factory.  It is recommended to preserve this information, as factory
testing is usually more exhaustive than what you can implement in a
typical embedded system.  However, more bad blocks are allowed to
develop DURING THE LIFE TIME of the device, up to the specified
maximum (N).

This means that whatever you write to the device may or may not
be readable afterwards!  You have 3 choices for how to handle this:

Choice 1:  Use enough error correction to be 100% safe against it.

Note that "normal" ECC is definately not enough.  On a device with 0
known bad blocks, up to N blocks can disappear from one moment to the
other (in the worst possible scenario).  To be safe against this, you
must distribute each and every of your precious bits over at least N+1
blocks.

Algorithms exist that can do this (for example Reed-Solomon), but they
are not nice.  Besides the algorithmic complexity, there is another
problem with this approach.  The higher the storage efficiency (data
bits versus redundancy bits), the more blocks you have to read before
you're able to extract your bits.  With N in the range of 20 to 40 in
typical NAND chips, this results in an unavoidable and very high
read/write latency.
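For illustration, the bluntest point on that efficiency/latency spectrum is plain (N+1)-way replication: no Reed-Solomon math, but N/(N+1) of the capacity is wasted. A hedged sketch, assuming simple read_block()/write_block() driver calls and a worst-case N of 40 taken from the range above:

```c
/*
 * Hedged sketch of the simplest "spread every bit over N+1 blocks"
 * scheme: full (N+1)-way replication.  It survives the loss of any N
 * whole blocks, because at most N of the N+1 copies can be among them.
 */
#include <stdint.h>

#define N_WORST_CASE  40                 /* assumed datasheet maximum */
#define COPIES        (N_WORST_CASE + 1)

extern int write_block(uint32_t block, const uint8_t *data);
extern int read_block(uint32_t block, uint8_t *data);  /* <0 if bad/ECC fail */

/* Write the same payload into N+1 physically different blocks. */
int replicated_write(const uint32_t blocks[COPIES], const uint8_t *data)
{
    int ok = 0;
    for (int i = 0; i < COPIES; i++)
        if (write_block(blocks[i], data) == 0)
            ok++;
    return (ok > 0) ? 0 : -1;
}

/* Read back: the first copy that still passes ECC wins. */
int replicated_read(const uint32_t blocks[COPIES], uint8_t *data)
{
    for (int i = 0; i < COPIES; i++)
        if (read_block(blocks[i], data) == 0)
            return 0;
    return -1;                           /* all N+1 copies gone: beyond spec */
}
```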

Choice 2: Avoid giving more reliability guarantees than your
underlying technology.

This sounds simple and impossible, yet in fact it's quite realistic.
The problem is not that your storage capacity may go away.  It's just
that vital data stored in a particular place may become unreadable.
If you introduce a procedure to restore the data and make that
procedure part of the normal operation of your device, then it's not a
real problem.

In practice this means that your bad block layer must be able to
identify the bad blocks in all circumstances.  I know that many
real-world algorithms (like the mentioned one of using the "bad block
bit") are not 100% fit for the task.  After all, the bad block bit may
be stuck at '1', and you can't do what's necessary to mark it bad.
But there are more reliable approaches that can achieve the necessary
guarantee.
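One of those more reliable approaches is to stop relying on a single flag bit and instead require positive evidence that a block is good: a magic number plus a checksum over the data, so anything that fails the test is treated as bad whether or not its bad-block bit could still be programmed. A minimal sketch, with the header layout and helper functions assumed for illustration:

```c
/*
 * Hedged sketch: a block is only trusted if it carries a valid header
 * (magic number) and its payload checksum matches.  read_block_header(),
 * read_block_payload() and crc32() are assumed helpers.
 */
#include <stdint.h>
#include <stdbool.h>

#define BLOCK_MAGIC  0x4E414E44u         /* "NAND", arbitrary choice */

struct block_header {
    uint32_t magic;
    uint32_t seq;                        /* write sequence number */
    uint32_t payload_crc;
};

extern int      read_block_header(uint32_t block, struct block_header *h);
extern int      read_block_payload(uint32_t block, uint8_t *buf, uint32_t len);
extern uint32_t crc32(const void *data, uint32_t len);

bool block_holds_valid_data(uint32_t block, uint8_t *buf, uint32_t len)
{
    struct block_header h;

    if (read_block_header(block, &h) != 0)
        return false;                    /* unreadable: treat as bad */
    if (h.magic != BLOCK_MAGIC)
        return false;                    /* never written, erased, or stomped */
    if (read_block_payload(block, buf, len) != 0)
        return false;
    return crc32(buf, len) == h.payload_crc;
}
```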

Of course the other essential part for this choice is to provide a way
to restore the data, which can be a PC flasher program (like iTunes
"restore device").

Then your device can be declared to be always working, without
extending the reliability guarantees beyond those given by the NAND
manufacturer.

Choice 3:  Implement reasonable ECC, give all the guarantees, and hope
for the best.

This seems to be "industry standard".  It seems to work out quite OK,
because NAND failures usually are not very catastrophic.  As others
have pointed out, creeping failures can be detected and data migrated
before ECC capability is exceeded.  Usually failures go in hand with
write activity in the same block or page, and write patterns are under
software control.
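A common way to exploit that correlation with write activity is read-after-write verification: program the page, read it straight back through the ECC path, and retire the block if a freshly written page already needs correction. A sketch under assumed driver primitives (a real driver would also have to migrate any pages already written into the retired block):

```c
/*
 * Hedged sketch of write-time wear detection: a brand-new page that
 * already needs ECC correction is a strong hint the block is wearing
 * out, so the data is rewritten elsewhere and the block retired.
 */
#include <stdint.h>

#define PAGE_SIZE        2048
#define FRESH_WRITE_MAX  0        /* a fresh page should need no fixes */

extern int  program_page(uint32_t block, uint32_t page, const uint8_t *buf);
extern int  ecc_read_page(uint32_t block, uint32_t page, uint8_t *buf,
                          int *bits_corrected);
extern void mark_block_bad(uint32_t block);
extern int  get_free_block(uint32_t *block);

int verified_write(uint32_t *block, uint32_t page, const uint8_t *data)
{
    static uint8_t check[PAGE_SIZE];
    int corrected;

    for (;;) {
        if (program_page(*block, page, data) == 0 &&
            ecc_read_page(*block, page, check, &corrected) == 0 &&
            corrected <= FRESH_WRITE_MAX)
            return 0;                   /* wrote cleanly */

        /* Program failed or came back marginal: retire and retry elsewhere. */
        mark_block_bad(*block);
        if (get_free_block(block) < 0)
            return -1;                  /* device out of spare blocks */
    }
}
```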

But then again, to make it very clear:  this approach is not 100%
safe.  It's a compromise between feasibility and reliability.

You will see yield problems.  Unless it's life-threatening technology,
you're probably better off accepting them than trying to cure them.
On 2010-09-20, Ulf Samuelsson <ulf@a-t-m-e-l.com> wrote:
> Grant Edwards skrev:
>> The Atmel SAM9 parts require that block 0 be completely free of bit >> errors since the ROM bootloader doesn't do ECC (despite the fact that >> the part does have hardware ECC support). So you have to use a NAND >> flash that guarantees a good block 0 _without_using_ECC_. It turns >> out those sorts of NAND flash parts appear to be made of a >> combination of Unicorn's horn and Unobtanium. IOW, they don't exist. >> At least that's what the flash distributor and rep tell us. > > The problem is that the NAND flash market has moved on since the > AT91SAM9G20 was designed. The NAND flash used to guarantee Block 0.
That was the conclusion to which I eventually came after reviewing a bunch of datasheets. The parts that you could use to boot a G20 were all several years old, and the parts that required ECC on block 0 were newer. Since the hardware guys wanted a small (read: BGA) package, that pretty much left only the recent parts that require ECC on block 0. It looks like we're going to either have to settle for TSOP or add a SPI NOR flash to hold the 16KB bootstrap.
> Note that the fact that block 0 is OK, is no guarantee that you can > boot. The part configuration must also be recognized by the boot ROM. > Some manufacturers "reuse" id's so if the table contains two elements > with the same Id, only the first will be found.
--
Grant Edwards (grant.b.edwards at gmail.com)
Yow! I wish I was a sex-starved manicurist found dead in the Bronx!!

Marc Jet wrote:
> NAND chips allow for up to N "bad blocks" before a device is > considered defective.
That N could be as high as 2% of the total capacity, and the tendency is to allow for a larger and larger N. The higher the flash density, the lower the reliability. This is especially true for multilevel (MLC) flash. If the application requires high reliability of data, I avoid using high density flash. There is also NAND flash of industrial quality, which is substantially more reliable than consumer grade.
> Some blocks come already marked as bad from > factory. It is recommended to preserve this information, as factory > testing is usually more exhaustive than what you can implement in a > typical embedded system.
You are making unfounded assumptions here.
> However, more bad blocks are allowed to > develop DURING THE LIFE TIME of the device, up to the specified > maximum (N).
And higher than the maximum: N+1, N+2 and so on.
> This means that you whatever you write to the device, may or may not > be readable afterwards!
Incredible, isn't it?
> You have 3 choices how to handle this: > Choice 1: Use enough error correction to be 100% safe against it.
Only insurance agencies promise a 100% guaranteed result.
> Choice 2: Avoid giving more reliability guarantees than your > underlying technology. > Choice 3: Implement reasonable ECC, give all the guarantees, and hope > for the best. > This seems to be "industry standard".
RAID or RAID-like solutions are well known for the safe storage of data.
> It seems to work out quite OK, > because NAND failures usually are not very catastrophic.
Until some critical part of the filesystem fails, making all other data inaccessible.
> As others > have pointed out, creeping failures can be detected and data migrated > before ECC capability is exceeded. Usually failures go in hand with > write activity in the same block or page, and write patterns are under > software control.
Those intelligent measures introduce a lot of overhead and increase the amount of write activity. Also, they create critical situations when accidental power failure can destroy the filesystem.
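The usual defence against that power-failure window is to make migration write-before-invalidate: the new copy, carrying a higher sequence number, is completely written before the old block is touched, so recovery after a crash simply keeps the copy with the newest sequence number. A sketch with an assumed block-header layout and driver calls:

```c
/*
 * Hedged sketch of power-fail tolerant block migration: write the new
 * copy first, invalidate the old one last.  A crash in between leaves
 * two readable copies; recovery discards the older sequence number.
 */
#include <stdint.h>

struct block_header {
    uint32_t magic;
    uint32_t seq;            /* monotonically increasing per logical block */
    uint32_t payload_crc;
};

/* Hypothetical driver calls. */
extern int write_block_with_header(uint32_t phys, const struct block_header *h,
                                   const uint8_t *payload);
extern int invalidate_block(uint32_t phys);    /* stomp magic / mark bad */
extern int get_free_block(uint32_t *phys);

int migrate_block(uint32_t old_phys, const struct block_header *old_hdr,
                  const uint8_t *payload)
{
    uint32_t new_phys;
    struct block_header h = *old_hdr;

    h.seq = old_hdr->seq + 1;                  /* newer copy wins on recovery */

    if (get_free_block(&new_phys) < 0)
        return -1;
    if (write_block_with_header(new_phys, &h, payload) != 0)
        return -1;                             /* old copy still intact */

    /* Only now is the old copy retired; a crash before this point just
     * leaves a stale duplicate that recovery discards by sequence number. */
    return invalidate_block(old_phys);
}
```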
> But then again, to make it very clear: this approach is not 100% > safe. It's a compromise between feasibility and reliability. > > You will see yield problems. Unless it's life threatening technology, > you're probably better off accepting them than to cure them.
Sure. Who cares about the occasionally broken .mp3 or .jpg file.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
On 2010-09-20 16:18, Grant Edwards wrote:
> On 2010-09-20, Ulf Samuelsson<ulf@a-t-m-e-l.com> wrote: >> Grant Edwards skrev: > >>> The Atmel SAM9 parts require that block 0 be completely free of bit >>> errors since the ROM bootloader doesn't do ECC (despite the fact that >>> the part does have hardware ECC support). So you have to use a NAND >>> flash that guarantees a good block 0 _without_using_ECC_. It turns >>> out those sorts of NAND flash parts appear to be made of a >>> combination of Unicorn's horn and Unobtanium. IOW, they don't exist. >>> At least that's what the flash distributor and rep tell us. >> >> The problem is that the NAND flash market has moved on since the >> AT91SAM9G20 was designed. The NAND flash used to guarantee Block 0. > > That was the conclusion to which I eventually came after reviewing a > bunch of datasheets. The parts that you could use to boot a G20 were > all several years old, and the parts that required ECC on block 0 were > newer. Since the hardware guys wanted a small (read BGA) package, > that pretty much left only the recent parts that reuire ECC on block 0. > > It looks like we're going to either have to settle for TSOP or add a > SPI NOR flash to hold the 16KB bootstrap. > >> Note that the fact that block 0 is OK, is no guarantee that you can >> boot. The part configuration must also be recognized by the boot ROM. >> Some manufacturers "reuse" id's so if the table contains two elements >> with the same Id, only the first will be found. >
If you add an SPI flash (or a dataflash) and plan to boot Linux, you are probably better off also putting u-boot and the u-boot environment in the dataflash. You might also want to consider the kernel. The reason is the SAM-BA software, which only knows how to erase the complete NAND flash. If you plan to program the NAND flash using another method, then of course use the NAND flash for everything except the bootstrap.
-- Best Regards Ulf Samuelsson These are my own personal opinions, which may (or may not) be shared by my employer Atmel Nordic AB
[I haven't got rickman's post.]

David Brown wrote:
> On 19/09/2010 05:09, rickman wrote: >> On Sep 18, 7:34 am, Stefan Reuther<stefan.n...@arcor.de> wrote: >>> Allan Herriman wrote: >>>> It's pretty simple actually. When the driver reads a block that has an >>>> error, it copies the corrected contents to an unused block and sets the >>>> bad block flag in the original block, preventing its reuse. >>>> No software will ever clear the bad block flag, which means that the >>>> effective size of the device decreases as blocks go bad in the field. >>> >>> But where do you store the "bad block" flag? It is pretty common to >>> store it in the bad block itself. The point Marc is making is that this >>> is not guaranteed to work. >> >> Why do you need a bad block flag? If the block has an ECC failure, it >> is bad and the OS will note that. You may have to read the block ECC >> the first time it fails, but after that it can be noted in the file >> system as not part of a file and not part of free space on the drive.
How do you mark it "in the file system" if your file system is actually inside the NAND flash? Thought experiment: your bad block table is stored in a particular block. That block goes bad. Where do you mark that this block is now bad? State of the art seems to be to use magic numbers for valid data, and destroy the ECC and/or magic numbers for blocks that are gone bad, so you can identify them later. That's the "bad block flag".
> Failures can be intermittent - a partially failed bit could be read > correctly or incorrectly depending on the data stored, the temperature, > or the voltage. So if you see that you are getting failures, you make a > note of them and don't use that block again.
From what I've seen, those temporarily failed bits are still within the specs of the NAND flash as long as you're running the part within specs. However, when you're way out of spec (say, 30°C over the limit), all hell breaks loose.

Stefan
Marc Jet wrote:
> Choice 1: Use enough error correction to be 100% safe against it. > > Note that "normal" ECC is definately not enough. On a device with 0 > known bad blocks, up to N blocks can disappear from one moment to the > other (in the worst possible scenario). To be safe against this, you > must distribute each and every of your precious bits over at least N+1 > blocks.
This means you have to distribute each single data block across, say, 161 blocks. With a block size of 4k and NOP=4 this means the minimum amount of data you can write (aka "cluster size") is 161 kBytes. Plus, remember that NAND flash tends to get more forgetful if you actually use NOP=4, so you'd more likely write 161x4 = 644 kBytes. Well, that's certainly a way to reach 100,000 programming cycles.
> Choice 2: Avoid giving more reliability guarantees than your > underlying technology. > > This sounds simple and impossible, yet in fact it's quite realistic. > The problem is not that your storage capacity may go away. It's just > that vital data stored in a particular place may become unreadable. > If you introduce a procedure to restore the data and make that > procedure part of the normal operation of your device, then it's not a > real problem. > > In practice this means that your bad block layer must be able to > identify the bad blocks in all circumstances. I know that many real- > world algorithms (like the mentioned one of using the "bad block bit") > are not 100% fit for the task. After all, the bad block bit may be > stuck at '1', and you can't do what's necessary to mark it bad. But > there are more reliable approaches that can achieve the necessary > guarantee.
That's why you don't use a single bit. If my bad block layer sees a bad block, it tries to actively stomp on all bits that still live there, to destroy as much of the ECC and magic numbers as possible.

Remember, we don't need 100% reliability. After all, all components have a finite life, and the flash just needs to live a little longer than the plug connectors or capacitors in the device :-)

And by using many bits, I believe the chance that they all refuse to flip is low enough. It's a flash. It's electrons that tunnel out gradually. It's not an evil gnome sitting within the package, deciding "today, I'll annoy the engineer in an especially evil twisted way", so while the data sheet allows a NAND flash to keep its old contents unmodifiably in a bad sector, I assume this doesn't happen in practice. Or, at least, not often enough to be observable.

Stefan
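A sketch of that "stomp on the block" retirement: since NAND programming can only turn 1 bits into 0, even a worn block will usually accept an all-zero program over its header and spare bytes, which destroys the magic number and ECC so later scans classify it as bad without needing a single dedicated flag bit. program_raw() is an assumed ECC-bypassing driver call, and the byte count is illustrative:

```c
/*
 * Hedged sketch: retire a block by clearing (programming to 0) the
 * header and spare bytes of its first pages, so it can never again
 * look like valid data.  Errors are ignored on purpose: any bits that
 * do clear help.
 */
#include <stdint.h>
#include <string.h>

#define STOMP_BYTES  64     /* header + spare area of one page (assumed) */

extern int program_raw(uint32_t block, uint32_t page,
                       const uint8_t *buf, uint32_t len);

void stomp_bad_block(uint32_t block)
{
    uint8_t zeros[STOMP_BYTES];
    memset(zeros, 0x00, sizeof zeros);

    /* Hit the first two pages, mirroring the factory-mark convention. */
    (void)program_raw(block, 0, zeros, sizeof zeros);
    (void)program_raw(block, 1, zeros, sizeof zeros);
}
```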