On 16/06/2018 22:41, Richard Damon wrote:
> On 6/16/18 1:38 PM, pozz wrote:
>>> On 16/06/2018 15:20, Richard Damon wrote:
>>> On 6/16/18 2:26 AM, pozz wrote:
>>>>> On 16/06/2018 00:25, Richard Damon wrote:
>>>>> On 6/15/18 4:54 PM, pozz wrote:
>>>>>>> On 15/06/2018 19:11, Richard Damon wrote:
>>>>>>> I typically also do something like this. The data structure is a
>>>>>>> union of the basic data structure with a preamble that includes
>>>>>>> (in very fixed locations) a data structure version, checksum/crc,
>>>>>>> and, if a versioning store, a timestamp/data generation number. A
>>>>>>> 'Magic Number' isn't often needed unless it is a removable media,
>>>>>>> as it will either be the expected data or not; nothing else could
>>>>>>> be there (if the unit might have different sorts of programs, then
>>>>>>> a piece of the data version would be a program ID.)
>>>>>>>
>>>>>>> Often I will have TWO copies of the data packet.
>>>>>>
>>>>>> TWO copies in EEPROM or in RAM?
>>>>>
>>>>> As I described, normally both.
>>>>
>>>> So you are so lucky that you have abundant RAM.
>>>
>>> If you can't spare the space for the two buffers, then you become
>>> forced to 'lock' the system (or at least the parameter table) for the
>>> entire time of the flash write, otherwise you end up with corrupted
>>> (CRC error) data blocks.
>>
>> Why? You have a single datablock in RAM that must be transferred to
>> the serial memory when changes occur. You start an activity (a task)
>> that performs the copy in the background.
>> If new changes occur during writing, the task can be restarted without
>> problems, even on the same memory area. Hopefully the changes aren't
>> so frequent, so eventually you will have a valid datablock (with a
>> correct CRC) written in memory, without blocking the main application.
>>
>> Why do you need an additional datablock to avoid blocking the task?
>>
>
> First, often in the parameter block for me are cumulative usage stats,
> so the parameter block will be automatically saved on the controlled
> shutdown (like the user flipping the off switch) and maybe automatically
> after sufficient or significant usage. Yes, I can lose some usage on an
> unexpected turn off (pull the plug), but some units may even have enough
> power reserves to save in this situation.
I see, so in your situation the usage stats parameters change more
frequently than the time it takes to write the datablock. So you need to
freeze a datablock for saving, while allowing the application to continue
updating it.
As usual, different requirements lead to different solutions.
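That freeze-then-write idea can be sketched minimally like this (all names are hypothetical, and the atomicity of the copy is only hinted at in a comment):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical parameter block; fields are illustrative only. */
typedef struct {
    uint32_t run_hours;   /* cumulative usage stats, updated often */
    uint16_t setpoint;
} params_t;

static params_t live;      /* updated by the application at any time */
static params_t snapshot;  /* frozen copy handed to the flash writer */

/* Freeze the live block; the background flash task then works only on
 * 'snapshot', so the application may keep updating 'live' during the
 * (slow) serial-memory write. */
static void freeze_for_save(void)
{
    /* In a real system this copy must be atomic with respect to
     * updates, e.g. done with interrupts masked or under a mutex. */
    memcpy(&snapshot, &live, sizeof snapshot);
}
```

The point of the second buffer is exactly what Richard describes: the application never has to stall while the serial memory is busy.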
>>> It also makes writing the format uprev code much easier.
>>> Your comment about how to update 'in place' becomes a REAL question
>>> which needs to be answered, and that answer will generally require
>>> detailed understanding of how the elements are physically stored and
>>> making sure that you do things in the right order (which may require
>>> limiting the optimizations the compiler can do for that code, as you
>>> are likely to do actions the Standard defines as Undefined Behavior).
>>
>> Yes, solving the problem of converting an old layout into a completely
>> different new layout is very complex without an additional structure.
>> An in-place conversion would be very difficult and depends on the
>> changes.
>>
>> I think you can avoid all this stuff, as I said, changing the datablock
>> layout with care (adding parameters at the end, without changing the
>> first/old layout).
>
> Yes, you can put off the need with controlled changes, but sometimes you
> just build up enough cruft, that a re-layout makes sense. Maybe on the
> really small processors, your tasks are simpler and more stable. I find
> I want to allow for the possibility that at some point the structure may
> need to be rearranged, and often I want a unit to be able to be upgraded
> and preserve its settings.
In my case such a big redesign of the memory layout can be managed with a
reset to default settings, instead of implementing a complex conversion
function.
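A minimal sketch of that reset-to-defaults policy (type, field names, and values are made up for illustration):

```c
#include <stdint.h>

#define CURRENT_LAYOUT_VERSION 2u

/* Hypothetical parameter layout of the current firmware. */
typedef struct {
    uint16_t setpoint;
    uint8_t  mode;
} params_t;

static const params_t default_params = { 1000u, 0u };

/* If the stored layout version differs from the firmware's, discard the
 * stored data and fall back to defaults instead of converting in place. */
static params_t load_params(uint8_t stored_version, const params_t *stored)
{
    if (stored_version != CURRENT_LAYOUT_VERSION)
        return default_params;  /* big redesign: reset to defaults */
    return *stored;
}
```

The trade-off is losing the user's settings on a major layout change, in exchange for never having to write (and test) a version-to-version conversion function.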
>>>>>>> Often organized something like this:
>>>>>>>
>>>>>>> struct SystemParameters {
>>>>>>> // List of data parameters that need to be saved
>>>>>>> } parms;
>>>>>>>
>>>>>>> union {
>>>>>>>     uint8_t raw_data[DATA_SIZE];
>>>>>>>     struct {
>>>>>>>         struct StandardHeader header;
>>>>>>>         struct SystemParameters parms;
>>>>>>>     };
>>>>>>> } flash_parms;
>>>>>>>
>>>>>>> /* Check that sizeof(flash_parms) == DATA_SIZE, and that
>>>>>>>    DATA_SIZE is a multiple of the flash sector size */
>>>>>>
>>>>>> Do you need the raw_data[] array only to give a well-known size to
>>>>>> flash_parms?
>>>>>> Do you alloc an entire (or a multiple of a) flash sector in RAM?
>>>>>> Even if your params in flash take only 1/5 or 1/2 or 1/10 of a
>>>>>> sector?
>>>>>>
>>>>>
>>>>> One purpose of the raw_data array is to fix the size of the flash
>>>>> data.
>>>>
>>>> Honestly, I can't understand why you need a fixed data size in RAM.
>>>> Suppose release 1.0 needs 2kB, but I'm smart enough to provide double
>>>> the space for new params in future releases. So I decide to use 4kB
>>>> blocks *on the serial memory*. Maybe this big space will never be
>>>> used in all the future releases.
>>>>
>>>> Why the hell should I load the whole 4kB datablock into RAM at
>>>> startup? IMHO it is sufficient to load only 2kB.
>>>>
>>>> The only reason I see is to simplify the checksum calculation.
>>>> The CRC must be calculated over the entire fixed-size 4kB block,
>>>> otherwise how do you know the datablock size if it could change from
>>>> one release to the other?
>>>> However you can calc the checksum of the 4kB block by reading small
>>>> chunks multiple times (for example, 16 sections of 256 bytes).
>>>> After the CRC validation passes, you can read into RAM *only* the
>>>> 2kB of data that the 1.0 application needs.
>>>>
>>>> I know, maybe this process is slower, but it happens only at startup.
>>>> During the writing of the 4kB datablock, the checksum can be
>>>> calculated on-the-fly.
>>>>
>>>>
>>> A big reason for having the copy in RAM be an exact copy of the flash
>>> memory is design isolation. It allows me to define a checksum routine
>>> that computes the value of an arbitrary buffer, and flash routines
>>> that transfer arbitrary data, without either needing full
>>> understanding of the parameter storage.
>>
>> Yes, you earn some cents on the implementation of some functions.
>>
>>
>>> As to the use of extra memory: if I anticipate that I am possibly
>>> going to need a 4k parameter block in the future, but don't have the
>>> RAM now to read in the 4k block, most assuredly in the future when I
>>> do need that bigger parameter block, I am not going to have more RAM
>>> on THIS processor. Unless the flash memory is a removable media
>>> (which presents a totally different set of issues) there is no need
>>> to reserve the extra space for this processor.
>>
>> When you reserve a 4k block for future use, you don't know if it will
>> REALLY be used in the future. You are right when you say "no space
>> now, no space in the future", but are we sure we can know whether we
>> have space or not?
>>
>> I don't know if my approach is correct; I usually don't use heap
>> memory, so I have only static data and the stack. It's very difficult
>> to calculate how much stack space you need to avoid stack overflow in
>> ANY condition. So I end up reserving all the free RAM space for the
>> stack, where the free space is the total RAM minus the static
>> allocation.
>
> I follow roughly similar guidelines. The one exception is that I allow
> the use of dynamic memory that is allocated during 'startup'. Also, I
> find it useful to use a lightweight RTOS, so I don't have a single
> 'system stack' but several task stacks, so I do need to figure out the
> space requirements and can't just say it gets everything. You do run
> tests and check how much of the allocated space is used (stack space is
> prefilled with a funny pattern, and you see how much remains with that
> pattern).
>>
>> So I try to be parsimonious when allocating global static variables,
>> especially if they are big.
>> Reserving some space for extra unused memory will sacrifice stack
>> space.
>>
>> Maybe the stack space is sufficient even with extra padding in RAM,
>> who knows. At first I try to do my best to avoid uselessly big static
>> allocations in RAM.
>>
>> On the contrary, what happens if you notice that your stack is
>> critical when the padding is in RAM? Do you prefer to reduce the
>> datablock *padding* size, or to recode some functions to avoid having
>> the padding in RAM at all?
>>
>>
> When RAM space gets tight, you need to sit down and look at ALL RAM
> utilization and compare it to your initial estimates. You need to do
> this sort of estimate at the very beginning of the project so you know
> it should be feasible to begin with. You need to be able to flag that
> there is a serious risk in implementation before the project gets too
> far, and perhaps re-target to a better processor.
>>
>> I think there is another approach to keep checksum functions simple and
>> avoid having padding space in RAM.
>> The problem here is the space where CRC is calculated. At startup we
>> don't know the size of data, because it could belong to an old or new
>> layout, so we can't calculate CRC on the right area. We should know the
>> data layout version before calculating CRC, but version code is *in* the
>> CRC area.
>>
>> We could think of pulling the layout code outside the CRC area and
>> protecting it by duplication.
>>
>> typedef struct {
>>     uint8_t layout_version;
>>     uint8_t layout_version_check;
>>     uint16_t crc;
>>     /* The CRC is calculated starting from the following byte */
>>     uint16_t counter;
>>     union {
>>         nvmdatav1_t datav1;
>>         nvmdatav2_t datav2;
>>     } data;
>> } nvmdata_t;
>>
>> At startup the layout version is read and checked. Then CRC is read and
>> checked against the correct size that depends on the layout version. The
>> CRC checksum is simple.
>>
>> if (datablock.layout_version == datablock.layout_version_check) {
>>     size_t s = (datablock.layout_version == 1 ?
>>                     sizeof(nvmdatav1_t) :
>>                     sizeof(nvmdatav2_t)) + sizeof datablock.counter;
>>     if (datablock.crc ==
>>             /* skip the 4 header bytes; CRC covers counter + data */
>>             calc_crc((const unsigned char *)&datablock + 4, s)) {
>>         if (datablock.counter > prev_counter) {
>>             /* New valid datablock found */
>>         }
>>     }
>> }
>>
>> No padding at all (in RAM and in serial memory), simple functions.
>>
>> Of course padding must be taken into account during writing to separate
>> the blocks in memory for redundancy and wear-leveling.
>>
>>
>
> There is no problem with the version code being inside the CRCed area.
> In fact you really want it checked: if something corrupted the version
> code, you really want the block invalidated. The version should be in
> the early header (so parm changes can't affect it; that's one reason I
> define a separate struct for it that is put in the flash_parms
> structure). It also means that the flash parameter saving can be a
> canned routine that defines that structure and uses the parameter
> definition from the application code.
I suggested moving the version code outside the CRCed area because you
need to read the version code *before* checking the CRC: you need to know
the size of the CRCed area, and that depends on the version code.
Before looking inside the CRCed area, I think it would be better to check
the CRC itself.
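The read-version-first, then-check-CRC startup sequence could look roughly like this (simulated EEPROM, a placeholder checksum standing in for a real CRC-16, and invented sizes):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Placeholder checksum; a real design would use a proper CRC-16. */
static uint16_t crc16(const uint8_t *p, size_t n)
{
    uint16_t c = 0xFFFFu;
    while (n--)
        c = (uint16_t)((c << 1) ^ *p++);
    return c;
}

/* Simulated serial memory for the sketch. */
static uint8_t eeprom[64];

static void eeprom_read(size_t addr, void *dst, size_t n)
{
    memcpy(dst, &eeprom[addr], n);
}

/* Layout: version, duplicated version, 16-bit CRC; the CRC covers the
 * bytes that follow, whose count depends on the version. */
static int read_block(uint8_t *version_out)
{
    uint8_t v, vchk;
    uint16_t crc_stored;
    uint8_t payload[32];
    size_t payload_len;

    eeprom_read(0, &v, 1);
    eeprom_read(1, &vchk, 1);
    if (v != vchk)
        return 0;                      /* version bytes disagree */

    payload_len = (v == 1) ? 16 : 32;  /* size depends on the version */
    eeprom_read(2, &crc_stored, 2);
    eeprom_read(4, payload, payload_len);

    if (crc16(payload, payload_len) != crc_stored)
        return 0;                      /* CRC mismatch */
    *version_out = v;
    return 1;
}
```

The duplicated version byte is what lets the code trust the size before any CRC has been verified.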
> I suppose this is one reason I don't want the size to change, at that
> point the 'canned' code has to be taken out of the can, as it needs more
> application knowledge to know the parameter size. The way I do it, I can
> have a parms.h that defines the ram parameter structure, and a #define
> for the size of the block to use in flash for it, and the canned routine
> does most of the work to find the best saved block and to save the
> parameters to flash on command. It does need a callback into the
> application layer with the block to use to provide the application layer
> up-rev of the parameter data.
If this is your problem, you can save the size of the CRCed area
alongside the version code and so on. The 'canned' code will know the
size when the application calls the save() function, specifying the size
as a parameter.
At startup the 'canned' code reads the size and checks the CRC, without
having to know many more details from the application.
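A rough sketch of such a 'canned' layer that stores the payload size in its own header (placeholder checksum, simulated serial memory, invented names):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Generic header; the canned NVM code needs only this, not the
 * application's parameter layout. */
typedef struct {
    uint8_t  version;
    uint16_t size;  /* number of CRC-covered payload bytes that follow */
    uint16_t crc;
} nvm_header_t;

/* Placeholder checksum; a real design would use a proper CRC-16. */
static uint16_t crc16(const uint8_t *p, size_t n)
{
    uint16_t c = 0xFFFFu;
    while (n--)
        c = (uint16_t)((c << 1) ^ *p++);
    return c;
}

static uint8_t eeprom[128];  /* simulated serial memory */

/* Canned save: the application passes its block and its size; the size
 * is recorded so a later load needs no application knowledge. */
static void nvm_save(uint8_t version, const void *data, uint16_t size)
{
    nvm_header_t h = { version, size, crc16(data, size) };
    memcpy(eeprom, &h, sizeof h);
    memcpy(eeprom + sizeof h, data, size);
}

/* Canned load: reads the size from the header, checks the CRC, and only
 * then hands the payload back. Returns the payload size, or 0 on
 * failure. */
static uint16_t nvm_load(uint8_t *version, void *data, uint16_t max)
{
    nvm_header_t h;
    memcpy(&h, eeprom, sizeof h);
    if (h.size > max)
        return 0;
    if (crc16(eeprom + sizeof h, h.size) != h.crc)
        return 0;
    *version = h.version;
    memcpy(data, eeprom + sizeof h, h.size);
    return h.size;
}
```

With the size in the header, the canned routine stays generic across layout versions; only the up-rev callback into the application needs to interpret the payload.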