> On 19-06-27 19:37 , pozz wrote:
>> Il 27/06/2019 16:47, Niklas Holsti ha scritto:
>>> On 19-06-27 16:59 , pozz wrote:
>>>> Il 27/06/2019 13:42, firstname.lastname@example.org ha scritto:
>>>>> On Thu, 27 Jun 2019 13:08:20 +0200, pozz <email@example.com> wrote:
>>> [snip]
>>>>>> However the dynamic allocation algorithm appears very strange to
>>>>>> me. I expected that malloc() always returns the start of the heap
>>>>>> if the heap is completely free. It is so at the beginning; however,
>>>>>> after some allocations and deallocations happen, even if the heap
>>>>>> is completely deallocated (all the allocations were freed), a new
>>>>>> allocation does not return the start of the heap.
>>>>>> How to explain this behaviour?
>>>>> malloc() needs to store the actual allocated size (user-requested
>>>>> size + possible alignment to word/longword boundary), [...snip...]
>>>>> thus the user's buffer never starts at the start of the heap, but
>>>>> after this descriptor.
>>>> Yes, I know. When I wrote that malloc() returns a different address
>>>> than the start of the heap, I meant it is *very* different.
>>>> In other words, if the heap is at 0x1000, after many
>>>> allocations/deallocations, when the heap is again completely free,
>>>> malloc() returns 0x1100, 256 bytes from the start of the heap.
>>> If it is always 256 bytes, whatever the pattern of allocations and
>>> releases, then I would guess that your heap manager allocates a
>>> 256-byte block for its own use, on first call, and keeps that block
>>> allocated thereafter.
>> No, it isn't my case.
>>> If the offset varies depending on the size and order of allocations
>>> and releases, you are probably seeing an effect of the allocation
>>> algorithm which tries to find the "best" (or a "good enough") free
>>> block to allocate. Even if all heap blocks have been released, this
>>> "good" block may not be at the start of the heap, probably because the
>>> allocation algorithm does not coalesce all adjacent free blocks.
>> Oh bad!
>>> It is quite complex (and can be processor-heavy) to structure and
>>> manage the heap area in such a way that releasing all allocated blocks
>>> returns the heap to its original state as a single, large, empty area.
>> Complex? Really?
> Ok, not complex in the special case when _all_ blocks are released, but
> complex in general to detect and coalesce adjacent free blocks while
> _some_ blocks remain allocated.
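For what it's worth, the usual trick is that every block carries a small header recording its size, so free() can compute where the next block starts in memory and merge with it if it is also free. A rough sketch of that merge step (my own names and layout, not newlib's):

```c
#include <stddef.h>

/* Illustrative block header: size of the payload, a free flag, and a
 * link to the next block in address order. */
struct block {
    size_t size;          /* payload size of this block */
    int    free;          /* nonzero if the block is unallocated */
    struct block *next;   /* next block in address order */
};

/* Merge b with its successor when both are free and physically adjacent:
 * the payload of b ends exactly where the header of b->next begins. */
static void coalesce(struct block *b)
{
    struct block *n = b->next;
    if (b->free && n && n->free &&
        (char *)(b + 1) + b->size == (char *)n) {
        b->size += sizeof *n + n->size;  /* absorb neighbor's header + payload */
        b->next  = n->next;
    }
}
```

The hard part in general is that free() is handed only a payload pointer, so finding the *previous* neighbor (to merge backwards) needs either a doubly linked list or boundary tags at both ends of each block.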
>>> Instead, the heap area tends to be left as a collection of "free"
>>> blocks, of various sizes, and the next malloc() call picks one of
>>> these blocks to use for the new allocation.
>> However it is strange to me. If I had some allocated and freed blocks
>> scattered over the heap space, I could understand. But when there are
>> no allocated blocks, only free blocks, I think it's very simple to
>> fall back to the initial state without *any* free blocks.
> Yes, this special case could be detected by keeping count of the number
> of blocks allocated but not yet released, and reinitializing the heap
> when releasing a block makes this count zero. However, in programs which
> use the heap for more than one kind of data I expect this case to happen
> rarely.
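Just to make the counting idea concrete, here is a toy sketch (made-up names, a trivial bump allocator, nothing like newlib internals): a single counter of live blocks is enough to detect the all-free case and rewind the heap.

```c
#include <stddef.h>

#define HEAP_SIZE 4096
static unsigned char heap[HEAP_SIZE];
static size_t heap_top    = 0;   /* bump pointer for this toy example */
static size_t live_blocks = 0;   /* allocations not yet released */

/* Toy allocator: just advances a bump pointer and counts live blocks. */
void *my_alloc(size_t n)
{
    if (heap_top + n > HEAP_SIZE)
        return NULL;
    void *p = &heap[heap_top];
    heap_top += n;
    live_blocks++;
    return p;
}

/* When the last outstanding block is freed, reset the whole heap to
 * its initial state in O(1) instead of coalescing block by block. */
void my_free(void *p)
{
    (void)p;  /* individual blocks are never reclaimed in this sketch */
    if (live_blocks > 0 && --live_blocks == 0)
        heap_top = 0;
}
```

After freeing everything, the next my_alloc() call hands out the start of the heap again, which is exactly the behaviour the original poster expected from malloc().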
>>> But the details depend on the particular allocation algorithm, and
>>> there are _many_ different algorithms.
>> I'm using newlib and I think the allocator is this.
> That seems to be using the "first-fit" algorithm and a single free-block
> list holding blocks of any size. The free-block list is sorted into
> address order which lets the algorithm coalesce adjacent free blocks
> when such appear. However, it means that free() must traverse the
> (potentially long) free-block list when inserting the released block
> into the list at the proper place, which can take a while; Order(N)
> where N = length of free-block list.
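Roughly, that structure looks like this (a simplified sketch of the technique, not newlib's actual code): first-fit returns the first sufficiently large node, while free() walks the address-sorted list to find the insertion point, which is where the O(N) cost comes from.

```c
#include <stddef.h>

struct fnode {
    size_t size;           /* usable size of this free block */
    struct fnode *next;    /* next free block, in address order */
};

static struct fnode *free_list = NULL;

/* First-fit search: return the first free node large enough for n. */
struct fnode *first_fit(size_t n)
{
    for (struct fnode *f = free_list; f; f = f->next)
        if (f->size >= n)
            return f;
    return NULL;
}

/* Insert a released node so the list stays sorted by address: O(N).
 * (Comparing node addresses is fine here because in a real allocator
 * all nodes live inside the same heap arena.) */
void insert_free(struct fnode *node)
{
    struct fnode **pp = &free_list;
    while (*pp && *pp < node)      /* walk until the right address slot */
        pp = &(*pp)->next;
    node->next = *pp;
    *pp = node;
}
```

Keeping the list address-ordered is what makes it cheap to notice that two free nodes are neighbors and merge them; an unsorted list would make free() O(1) but coalescing much harder.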
> After a glance at the code, it seems to me that this coalescing
> algorithm should result in a single, large free block after all
> allocated blocks are released, so I don't understand why that does not
> happen in your case.
I don't know either. I'm not sure I'm using that allocator, because it is
included in newlib-nano, which are the libraries I selected in NXP