On 6/16/18 2:12 PM, pozz wrote:
> Third question, more complex (for me).
> Suppose I decided to split non-volatile data in two blocks, for example calibration data and user settings.
> What happens if user settings change when calibration settings are being written? I think I should convert your writeTriggered into two updated flags: calibration_updated and settings_updated.
> In idle state I should check both flags and start writing the relative block.

I will typically divide my parameters into two groups. One group has the data that the user sets, usage information, and other information that is updated as the user works. This data gets saved shortly after the user makes a setting change, or after enough time passes that the other information is worth saving. I often also include a user option to reset this to some 'factory default' for when the user totally messes up the settings (this won't reset the usage data, just the user settings). There is a second block of factory calibration data. This will never be updated by the user (or only by very trusted users), and typically this block doesn't have multiple copies (unless I need a backup for actual flash corruption). Activating a save of this block requires giving the device a special unlock sequence, which allows the adjustment of these parameters, and then a specific factory save command.

> Again another scenario. Until now we talked about settings, a structure filled with parameters that the user can change at any time.
>
> How to manage a log, a list of events with timestamp and some data? Suppose one entry takes 8 bytes. I reserve 4kB of memory for around 500 entries organized in a FIFO.
>
> The log isn't as critical as the settings, so I think I could avoid redundancy in non-volatile memory. Maybe only a CRC that, when not valid, clears the whole log. It should be acceptable.
>
> As usual we can talk about the opportunity to read the full log and put it in RAM, or read the entries when needed (because the user wants to read some entries, mostly the most recent). Reading 10 entries (80 bytes) from a 10MHz SPI memory doesn't take too much (no more than 100usec). But here the problem is that reading can be needed during the writing of settings. And this is a big problem.
>
> As usual, the simplest solution is to have the full log in RAM... sigh!
>
> What about writing one or a few new entries to the log? A write operation (for example, settings) might be in progress. I should schedule and postpone the log update until the writing of settings is finished.
>
> Because you have much more experience than me (and you are so kind to share it with me and other lurkers), could you suggest a smart approach?

For logs, I will define a log information block to store a single log entry, and pack as many of them as I can into a flash sector. The total log then has a number of these sectors reserved for it, forming a circular list (so writing a new log entry overwrites the oldest log record). I tend to have two sectors of these log entries 'cached', so I can be creating one log entry at the end of one block and one at the beginning of the next block. While I am filling a given log entry, it is marked as 'invalid', and that mark is cleared when the entry is finished. A given sector is written when it is full, or a sufficient time after a block has been updated, to minimize log data losses due to power loss. This write uses the same flash buffer as the parameter flash buffer, as I can't be writing both a log sector and a parameter sector at the same moment.
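Richard's scheme of marking each entry 'invalid' while it is being filled, and clearing the mark only as the final step, can be sketched roughly as follows. This is a minimal illustration, not his actual code; the entry layout, the sizes, and the names (`LogEntry`, `log_begin`, `log_commit`) are all assumptions made here for clarity:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sizes: 8-byte entries packed into a 256-byte flash sector. */
#define ENTRY_SIZE          8
#define SECTOR_SIZE         256
#define ENTRIES_PER_SECTOR  (SECTOR_SIZE / ENTRY_SIZE)

/* One log entry: the 'valid' marker byte stays 0xFF (the erased state)
 * while the entry is being filled, and is cleared to 0x00 only once the
 * rest of the entry is complete. */
typedef struct {
    uint8_t  valid;      /* 0xFF while filling, 0x00 when finished */
    uint8_t  event;
    uint16_t data;
    uint32_t timestamp;
} LogEntry;

typedef struct {
    LogEntry entries[ENTRIES_PER_SECTOR];
    int next;            /* index of the next free slot in this cached sector */
} LogSector;

/* Begin a new entry: reserve the slot but leave it marked invalid. */
LogEntry *log_begin(LogSector *s, uint8_t event, uint16_t data, uint32_t ts)
{
    if (s->next >= ENTRIES_PER_SECTOR)
        return 0;                      /* sector full: caller rotates sectors */
    LogEntry *e = &s->entries[s->next++];
    e->valid = 0xFF;                   /* still invalid */
    e->event = event;
    e->data = data;
    e->timestamp = ts;
    return e;
}

/* Finish the entry: clearing the marker is the last, single-byte step. */
void log_commit(LogEntry *e)
{
    e->valid = 0x00;
}

bool log_sector_full(const LogSector *s)
{
    return s->next >= ENTRIES_PER_SECTOR;
}
```

Because clearing the single marker byte is the last step, a power loss mid-entry leaves the entry marked invalid, and the startup scan can simply skip it.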
Serial EEPROM or Serial Flash?
Started by ●June 14, 2018
Reply by ●June 17, 2018
Reply by ●June 19, 2018
On 17/06/2018 19:27, David Brown wrote:
> (You emailed me a copy of this post too. I guess that was a slip of the mouse - we all do that from time to time. Anyway, I'm replying to the newsgroup and ignoring the email version. That way everyone can see it, and more people can join in the fun. It's good to see an interesting thread here in c.a.e. - it's been a bit idle recently.)

My Thunderbird has a Reply button that sends an email to the sender and not to the group. I have to right-click and choose "send to group" explicitly. And sometimes I forget.

> On 16/06/18 20:12, pozz wrote:
>> On 15/06/2018 16:21, David Brown wrote:
>>
>> > static enum { idle, writing, checking, erasing } writerState;
>> >
>> > void doWriter(void) {
>> >     switch (writerState) {
>> >         case idle :
>> >             if (writeTriggered) {
>> >                 writeTriggered = false;
>> >                 startWriting();
>> >                 writerState = writing;
>> >             }
>> >             break;
>> >         case writing : {
>> >             bool stillWorking = pollNVMdevice();
>> >             if (stillWorking) return;
>> >             startChecking();
>> >             writerState = checking;
>> >             break;
>> >         }
>> >         case checking : {
>> >             bool stillWorking = pollNVMdevice();
>> >             if (stillWorking) return;
>> >             startErasingNextBlock();
>> >             writerState = erasing;
>> >             break;
>> >         }
>> >         case erasing : {
>> >             bool stillWorking = pollNVMdevice();
>> >             if (stillWorking) return;
>> >             writerState = idle;
>> >             break;
>> >         }
>> >     }
>> > }
>> >
>> > There - half your program is done
>>
>> I was thinking about this "long-running" task that writes non-volatile data to serial memory in the background, during normal execution of the main application.
>>
>> First question. If I understood your words well, the checking state has the goal of comparing the written data (by reading it back) against the original data in RAM. If they differ, we should go back to the writing state, maybe changing the destination sector (because the write could fail again due to physical damage of that sector).
>
> Yes, basically. It is up to you how you handle things if the RAM copy can be changed underway. You might use a second copy in RAM for the check, you might ban changes during the write/check period, or you might simply run a checksum on the written data and check that it matches.
>
>> Another question is: what happens if saving is triggered again during writing? I think there isn't any big problem. The writing task can be started again from the beginning, even on the same destination sector. So during the writing and checking states I should check the writeTriggered flag again, prematurely stop the writing, and start again from the beginning.
>
> Usually the most important thing is that the data stored is a consistent snapshot of the data structure, rather than the most recent version. So you should complete the save you are doing before triggering a new write. But be very careful if you allow writing to the RAM copy of the data while a write is in progress.
>
>> Third question, more complex (for me).
>> Suppose I decided to split non-volatile data in two blocks, for example calibration data and user settings.
>> What happens if user settings change when calibration settings are being written? I think I should convert your writeTriggered into two updated flags: calibration_updated and settings_updated.
>> In idle state I should check both flags and start writing the relative block.
>
> Sure, have as many blocks as you like.
>
>> Again another scenario. Until now we talked about settings, a structure filled with parameters that the user can change at any time.
>>
>> How to manage a log, a list of events with timestamp and some data? Suppose one entry takes 8 bytes. I reserve 4kB of memory for around 500 entries organized in a FIFO.
>>
>> The log isn't as critical as the settings, so I think I could avoid redundancy in non-volatile memory. Maybe only a CRC that, when not valid, clears the whole log. It should be acceptable.
>
> Have part of your NVM chip reserved for the logs. Log in blocks, with a CRC on each. Don't bother holding more than two log blocks in RAM (one being stored to NVM, and one being updated at the moment).

Good suggestion. Maybe I failed to explain that the user could want to read the log. If I don't keep *all* of the log in RAM, it is possible that the application needs to read the log from the memory chip... this goes against our first assumption of reading all the data at startup and keeping it in RAM to simplify writing without blocking. The function that needs to return one or more entries should read from the memory chip... however, the chip can be busy writing. One possibility is to block while waiting for the end of the write, but blocking tasks aren't good. Another is to make the function asynchronous...

>> As usual we can talk about the opportunity to read the full log and put it in RAM, or read the entries when needed (because the user wants to read some entries, mostly the most recent). Reading 10 entries (80 bytes) from a 10MHz SPI memory doesn't take too much (no more than 100usec). But here the problem is that reading can be needed during the writing of settings. And this is a big problem.
>>
>> As usual, the simplest solution is to have the full log in RAM... sigh!
>>
>> What about writing one or a few new entries to the log? A write operation (for example, settings) might be in progress. I should schedule and postpone the log update until the writing of settings is finished.
>>
>> Because you have much more experience than me (and you are so kind to share it with me and other lurkers), could you suggest a smart approach?
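One non-blocking answer to the read-during-write problem raised here is to make the read asynchronous: the UI posts a request, and the writer state machine serves it from its idle state when the chip is free. A rough sketch under assumed names (`nvmBusy` and `spiReadEntries` are hypothetical driver hooks, stubbed here so the sketch is self-contained):

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Stubs standing in for the real SPI driver (assumed names). */
static bool nvmBusy(void) { return false; }          /* chip idle here */
static void spiReadEntries(uint16_t first, uint16_t count, uint8_t *dst)
{
    (void)first;
    memset(dst, 0xAA, count * 8u);                   /* fake 8-byte entries */
}

typedef struct {
    bool     pending;   /* request posted, not yet served */
    bool     done;      /* data ready in buf */
    uint16_t first;     /* first entry index to read */
    uint16_t count;     /* number of 8-byte entries */
    uint8_t  buf[80];   /* room for 10 entries */
} LogReadReq;

static LogReadReq readReq;

/* Called by the UI layer: post a request instead of reading directly.
 * Returns false if a request is already in flight. */
bool requestLogRead(uint16_t first, uint16_t count)
{
    if (readReq.pending || count > 10) return false;
    readReq.first = first;
    readReq.count = count;
    readReq.done = false;
    readReq.pending = true;
    return true;
}

/* Called from the writer state machine's idle state: serve the request
 * only when the chip is not busy with a write or erase. */
void serveLogRead(void)
{
    if (!readReq.pending || nvmBusy()) return;
    spiReadEntries(readReq.first, readReq.count, readReq.buf);
    readReq.pending = false;
    readReq.done = true;            /* the UI polls this flag */
}
```

The UI never touches the SPI bus itself; it polls `done` on later passes of the main loop, so no task ever blocks on the chip.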
Reply by ●June 19, 2018
On 17/06/2018 23:05, Richard Damon wrote:
> On 6/16/18 2:12 PM, pozz wrote:
>> Third question, more complex (for me).
>> Suppose I decided to split non-volatile data in two blocks, for example calibration data and user settings.
>> What happens if user settings change when calibration settings are being written? I think I should convert your writeTriggered into two updated flags: calibration_updated and settings_updated.
>> In idle state I should check both flags and start writing the relative block.
>
> I will typically divide my parameters into two groups. One group has the data that the user sets, usage information, and other information that is updated as the user works. This data gets saved shortly after the user makes a setting change, or after enough time passes that the other information is worth saving. I often also include a user option to reset this to some 'factory default' for when the user totally messes up the settings (this won't reset the usage data, just the user settings). There is a second block of factory calibration data. This will never be updated by the user (or only by very trusted users), and typically this block doesn't have multiple copies (unless I need a backup for actual flash corruption). Activating a save of this block requires giving the device a special unlock sequence, which allows the adjustment of these parameters, and then a specific factory save command.
>
>> Again another scenario. Until now we talked about settings, a structure filled with parameters that the user can change at any time.
>>
>> How to manage a log, a list of events with timestamp and some data? Suppose one entry takes 8 bytes. I reserve 4kB of memory for around 500 entries organized in a FIFO.
>>
>> The log isn't as critical as the settings, so I think I could avoid redundancy in non-volatile memory. Maybe only a CRC that, when not valid, clears the whole log. It should be acceptable.
>>
>> As usual we can talk about the opportunity to read the full log and put it in RAM, or read the entries when needed (because the user wants to read some entries, mostly the most recent). Reading 10 entries (80 bytes) from a 10MHz SPI memory doesn't take too much (no more than 100usec). But here the problem is that reading can be needed during the writing of settings. And this is a big problem.
>>
>> As usual, the simplest solution is to have the full log in RAM... sigh!
>>
>> What about writing one or a few new entries to the log? A write operation (for example, settings) might be in progress. I should schedule and postpone the log update until the writing of settings is finished.
>>
>> Because you have much more experience than me (and you are so kind to share it with me and other lurkers), could you suggest a smart approach?
>
> For logs, I will define a log information block to store a single log entry, and pack as many of them as I can into a flash sector. The total log then has a number of these sectors reserved for it, forming a circular list (so writing a new log entry overwrites the oldest log record). I tend to have two sectors of these log entries 'cached', so I can be creating one log entry at the end of one block and one at the beginning of the next block. While I am filling a given log entry, it is marked as 'invalid', and that mark is cleared when the entry is finished. A given sector is written when it is full, or a sufficient time after a block has been updated, to minimize log data losses due to power loss. This write uses the same flash buffer as the parameter flash buffer, as I can't be writing both a log sector and a parameter sector at the same moment.

Does your application read log entries? What do you do to avoid reading while the memory chip is busy writing?
Reply by ●June 20, 2018
On 6/19/18 11:44 AM, pozz wrote:
> On 17/06/2018 23:05, Richard Damon wrote:
> [snip]
>> For logs, I will define a log information block to store a single log entry, and pack as many of them as I can into a flash sector. The total log then has a number of these sectors reserved for it, forming a circular list (so writing a new log entry overwrites the oldest log record). I tend to have two sectors of these log entries 'cached', so I can be creating one log entry at the end of one block and one at the beginning of the next block. While I am filling a given log entry, it is marked as 'invalid', and that mark is cleared when the entry is finished. A given sector is written when it is full, or a sufficient time after a block has been updated, to minimize log data losses due to power loss. This write uses the same flash buffer as the parameter flash buffer, as I can't be writing both a log sector and a parameter sector at the same moment.
>
> Does your application read log entries? What do you do to avoid reading while the memory chip is busy writing?

If it is an external flash (or an internal flash where a write blocks reading), then when the application asks for the block it will block on the mutex guarding the device. That's one big reason to use a pre-emption-based system. Normally, reading of log entries is only done in response to an external command.
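Richard's mutex-guarded access could look roughly like this on a pre-emptive system. POSIX threads are used purely for illustration (an RTOS mutex would play the same role), and a small RAM array stands in for the flash chip:

```c
#include <pthread.h>
#include <stdint.h>
#include <string.h>

static uint8_t fakeFlash[64];                        /* stand-in for the NVM chip */
static pthread_mutex_t nvmLock = PTHREAD_MUTEX_INITIALIZER;

/* The writer task holds the lock across the whole program/poll cycle... */
void nvm_write(uint16_t addr, const uint8_t *src, uint16_t len)
{
    pthread_mutex_lock(&nvmLock);
    memcpy(&fakeFlash[addr], src, len);   /* real code: page program + busy poll */
    pthread_mutex_unlock(&nvmLock);
}

/* ...so a reader asking for log entries simply blocks until the chip is free. */
void nvm_read(uint16_t addr, uint8_t *dst, uint16_t len)
{
    pthread_mutex_lock(&nvmLock);
    memcpy(dst, &fakeFlash[addr], len);
    pthread_mutex_unlock(&nvmLock);
}
```

With pre-emption, the blocked reader costs nothing: the scheduler simply runs other tasks until the writer releases the lock.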
Reply by ●June 20, 2018
On 20/06/2018 05:14, Richard Damon wrote:
> On 6/19/18 11:44 AM, pozz wrote:
> [snip]
>> Does your application read log entries? What do you do to avoid reading while the memory chip is busy writing?
>
> If it is an external flash (or an internal flash where a write blocks reading), then when the application asks for the block it will block on the mutex guarding the device.

In my cooperative kernel, I have two choices:
- block the entire application waiting for serial memory availability
  * for a 24LC64, at most the page-write time, max 5ms
  * for a serial Flash, at most the sector-erase time, which is too much
- convert the code into a state machine, sigh... :-(

> One big reason to use a pre-emption-based system. Normally, reading of log entries is only done in response to an external command.

I usually work on a bare-metal system.
Reply by ●June 20, 2018
On 6/20/18 5:03 AM, pozz wrote:
> On 20/06/2018 05:14, Richard Damon wrote:
> [snip]
>> If it is an external flash (or an internal flash where a write blocks reading), then when the application asks for the block it will block on the mutex guarding the device.
>
> In my cooperative kernel, I have two choices:
> - block the entire application waiting for serial memory availability
>   * for a 24LC64, at most the page-write time, max 5ms
>   * for a serial Flash, at most the sector-erase time, which is too much
> - convert the code into a state machine, sigh... :-(
>
>> One big reason to use a pre-emption-based system. Normally, reading of log entries is only done in response to an external command.
>
> I usually work on a bare-metal system.

And that is one of the issues YOU need to solve when you drop down to a cooperative/bare-metal system: what to do when you want to do something but can't at the moment. You need to design in ways to effectively use the wait time, and yes, that often means things like state machines, and it often means that the base level of an operation needs to know when some sub-part isn't ready to do its thing. On very small machines, where the code isn't that complicated (being limited by the processor's ability), the bare-metal approach isn't that bad. As the machine gets bigger, normally because the task has gotten more complicated, the bare-metal, hand-crafted cooperative system starts to get heavy, so you 'upgrade' to a pre-emptive micro-kernel (and if the problem gets enormous, maybe you upgrade to a large-scale processor running a full embedded OS).
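The "base level needs to know when some sub-part isn't ready" idea is often expressed as a tri-state status that every step function returns, so BUSY propagates up the call chain and the main loop knows to come back later. A minimal sketch, with all names illustrative and a simple counter standing in for polling the chip:

```c
typedef enum { OP_BUSY, OP_DONE, OP_ERROR } OpStatus;

/* Hypothetical sub-operation: a sector erase that needs several polls
 * before the chip reports ready (a counter stands in for the chip). */
static int eraseTicksLeft = 3;

OpStatus eraseSectorStep(void)
{
    if (eraseTicksLeft > 0) {
        eraseTicksLeft--;
        return OP_BUSY;             /* not ready yet: caller must yield */
    }
    return OP_DONE;
}

/* The base-level operation simply propagates BUSY (or ERROR) upward,
 * so the main loop knows to retry on a later pass. */
OpStatus saveSettingsStep(void)
{
    OpStatus s = eraseSectorStep();
    if (s != OP_DONE) return s;     /* still erasing, or failed */
    /* next phases would go here: program pages, verify, ... */
    return OP_DONE;
}
```

Each pass of the cooperative main loop calls `saveSettingsStep()` once; it returns OP_BUSY until the erase completes, never blocking the other tasks.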
Reply by ●June 23, 2018
[This followup was posted to comp.arch.embedded and a copy was sent to the cited author.]

In article <pftfgp$l3j$1@dont-email.me>, pozzugno@gmail.com says...
> I need to save some data on a non-volatile memory. They are parameters that the user changes infrequently, for example 10 times per day at a maximum. In the range 100-2kB.
>
> As usually occurs, the parameters that change more frequently (10 times per day) are fewer than the parameters that change very rarely (10-100 times in the device lifetime).
>
> How to save this data? After discarding the internal MCU Flash (because interrupts are blocked during programming), I'm deciding whether a serial EEPROM or a serial Flash is better.
>
> First of all, I think SPI is better than I2C. SPI seems much faster: 10-20MHz against 400kHz-1MHz, at least for reading. Erase/write time is identical between I2C and SPI.
>
> EEPROM or Flash? I know EEPROM can be written one byte at a time without erasing an entire block, against Flash that needs a sector erase before writing even a single byte.
>
> The firmware would be simpler with EEPROMs, because I don't need to save the entire sector before erasing and restoring it during programming when writing a single byte. With EEPROMs I can write a byte. Stop.
>
> However, I don't think this simple approach can be used in real production. Suppose I have 10 bytes to write. What happens if the writing process is stopped in the middle, maybe after 5 bytes? How to protect the system against those events? I think one solution is to have at least two copies of the data in the memory and switch to the other bank after all data is completely written, with an "atomic" write operation. This means I need to copy an entire block every time, even for a single-byte change. And this is similar to the Flash approach, where I *need* a sector erase before changing a single byte.
>
> What about the time? The EEPROM write cycle is about 5ms for a 32-byte page. For 128 bytes / 4 pages, 20ms. The Flash sector-erase time is 18ms, plus 14µs for each byte. The overall write cycle time is similar between EEPROM and Flash.
>
> If the data is bigger, for example 1kB, the Flash technology wins. The sector size in Flash memories is usually bigger than 1kB, so I need to erase only once (18ms + 14µs * 1024 ≈ 32ms). In EEPROM I have 32 32-byte pages, so 5ms * 32 = 160ms: 5 times more than Flash.
>
> I'm not considering endurance. EEPROMs are better (1000k write cycles) than Flash (100k write cycles), but I don't need that many write cycles in the entire device lifetime.

You should consider the use of an FRAM chip. You can get SPI or I2C versions. Endurance almost becomes a non-issue. It supports byte-by-byte writes, and the write speed is pretty much the same as reading - with the serial interface the write time is pretty much hidden in the interface timing.

I use them for NVM storage, and it is possible to create a robust parameter and settings system around FRAM. My general concept is to store data in blocks, two times, with CRCs. The CRCs allow checking at load time whether to use the first or second stored image. If both CRCs are bad, then initialize to defaults.

With the beauty of byte writes, I have my driver set up so that I keep two copies of the data set in RAM. One matches the stored content, and the other copy is where changes are made. At commit time I write to NVM only the bytes that have actually changed. This drastically reduces the amount of time spent storing a data set back to the FRAM when only a few bytes have changed.

--
Michael Karas
Carousel Design Solutions
http://www.carousel-design.com
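Michael's dual-copy-with-CRC load and differential commit might be sketched like this. The CRC-16 polynomial, the block size, and the function names are all assumptions made here, and a RAM array stands in for the FRAM:

```c
#include <stdint.h>
#include <string.h>

#define PARAM_SIZE 32               /* assumed size of the parameter set */

/* CRC-16/CCITT over a block (the polynomial choice is an assumption). */
static uint16_t crc16(const uint8_t *p, uint16_t len)
{
    uint16_t crc = 0xFFFF;
    while (len--) {
        crc ^= (uint16_t)(*p++) << 8;
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Two stored images, each data + CRC; a RAM array stands in for the FRAM. */
typedef struct { uint8_t data[PARAM_SIZE]; uint16_t crc; } ParamImage;
static ParamImage image[2];

/* At load time, use the first copy whose CRC checks out; return the copy
 * index used, or -1 meaning "fall back to factory defaults". */
int load_params(uint8_t *out)
{
    for (int i = 0; i < 2; i++) {
        if (crc16(image[i].data, PARAM_SIZE) == image[i].crc) {
            memcpy(out, image[i].data, PARAM_SIZE);
            return i;
        }
    }
    return -1;
}

/* Differential commit: 'shadow' mirrors the stored content, 'work' holds
 * the edits; only bytes that differ are written (FRAM allows byte writes).
 * Returns the number of bytes actually written. */
int commit_params(int copy, const uint8_t *shadow, const uint8_t *work)
{
    int written = 0;
    for (int i = 0; i < PARAM_SIZE; i++) {
        if (shadow[i] != work[i]) {
            image[copy].data[i] = work[i];   /* real code: FRAM byte write */
            written++;
        }
    }
    image[copy].crc = crc16(work, PARAM_SIZE);
    return written;
}
```

The payoff of the diff is exactly what Michael describes: a one-byte settings change costs a one-byte write (plus the CRC), not a full block rewrite.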