Reply by Alex Colvin April 4, 2006
>But we can come across a similar problem if we are not using dynamic
>memory allocation too.
>For instance,
>void entry_point()
>{
>    struct Foo f;
>    // Stuff
>}
Right. Don't do that either.

--
mac the naïf
Reply by Paul Keinanen April 3, 2006
On Mon, 3 Apr 2006 13:51:38 +0100, "Steve at fivetrees"
<steve@NOSPAMTAfivetrees.com> wrote:

>"Alex Vinokur" <alexvn@users.sourceforge.net> wrote in message >news:1144040891.299937.142390@v46g2000cwv.googlegroups.com... >> But we can come accross a similar problem if we are not using dynamic >> memory allocation too. >> >> For instance, >> >> void entry_point() >> { >> struct Foo f; >> // Stuff >> } >> >> The entry_point() function is an entry point for tasks that are created >> in run time. >> Number of 'the task existed in the same time' depends on situation on >> our embedded system. >> If that number is too big and sizeof(Foo) is enough big, the memory can >> be exceeded. > >Huh? Unless I'm missing something, in this case you're using the stack >rather than the heap. The stack will never be fragmented.
In most small systems, stacks (and task control blocks) are usually preallocated into a fixed number of stack slots. Creating multiple instances of the same task consumes more stack slots, and when the slots are all used the system fails to create new tasks; thus there is a well-defined maximum number of concurrent task activations.

In larger systems supporting dynamic task creation and deletion with different task sizes, it is quite common to allocate the stack for a new task from the global dynamic memory pool.

Paul
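To make the fixed-slot idea concrete, here is a minimal sketch, assuming invented names (MAX_TASKS, STACK_WORDS, Tcb, create_task) and a purely illustrative interface; the actual context-frame setup is kernel-specific:

// All stack and TCB memory is reserved at compile time, so the number of
// concurrent tasks is bounded and task creation fails cleanly when the
// slots run out, instead of exhausting memory at an arbitrary moment.
#define MAX_TASKS   8
#define STACK_WORDS 256

struct Tcb {
    unsigned *stack_top;   // initial stack pointer handed to the kernel
    int       in_use;      // slot occupied?
};

static unsigned stacks[MAX_TASKS][STACK_WORDS];
static Tcb      tcbs[MAX_TASKS];

// Returns a TCB for the new task, or 0 when every slot is taken.
Tcb *create_task(void (*entry)(void))
{
    for (int i = 0; i < MAX_TASKS; ++i) {
        if (!tcbs[i].in_use) {
            tcbs[i].in_use    = 1;
            tcbs[i].stack_top = &stacks[i][STACK_WORDS];  // top of this slot's stack
            (void)entry;   // kernel-specific: build the initial frame so 'entry' runs
            return &tcbs[i];
        }
    }
    return 0;   // well-defined failure: maximum concurrent task activations reached
}

Running out of slots then becomes an ordinary, testable error path, which is what gives the well-defined maximum number of concurrent task activations described above.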
Reply by Steve at fivetrees April 3, 2006
"Alex Vinokur" <alexvn@users.sourceforge.net> wrote in message 
news:1144040891.299937.142390@v46g2000cwv.googlegroups.com...
> But we can come across a similar problem if we are not using dynamic
> memory allocation too.
>
> For instance,
>
> void entry_point()
> {
>     struct Foo f;
>     // Stuff
> }
>
> The entry_point() function is an entry point for tasks that are created
> at run time.
> The number of tasks that exist at the same time depends on the situation
> on our embedded system.
> If that number is too big and sizeof(Foo) is big enough, the memory can
> be exceeded.
Huh? Unless I'm missing something, in this case you're using the stack rather than the heap. The stack will never be fragmented.

Yes, if your stack is too small you have a problem. But that's a rather different kettle of fish.

Steve
http://www.fivetrees.com
Reply by Paul Keinanen April 3, 2006
On 2 Apr 2006 22:08:11 -0700, "Alex Vinokur"
<alexvn@users.sourceforge.net> wrote:

>Hans-Bernhard Broeker wrote:
>> In comp.arch.embedded Alex Vinokur <alexvn@users.sourceforge.net> wrote:
>> If you can't afford to allow exception handling, you generally can't
>> allow C++ style dynamic memory handling. It's as easy as that.
>>
>> > How can one manage that?
>>
>> Primarily by not doing it.
>>
>> > 2. But how to manage memory allocation in containers for embedded
>> > systems?
>>
>> Primarily by not using them.
>
>[snip]
>
>But we can come across a similar problem if we are not using dynamic
>memory allocation too.
>
>For instance,
>
>void entry_point()
>{
>    struct Foo f;
>    // Stuff
>}
>
>The entry_point() function is an entry point for tasks that are created
>at run time.
>The number of tasks that exist at the same time depends on the situation
>on our embedded system.
>If that number is too big and sizeof(Foo) is big enough, the memory can
>be exceeded.
If that happens in a system requiring high reliability, you have failed to provide enough memory to satisfy the absolute maximum memory requirement. If you cannot specify that absolute maximum value, then the system design is faulty.

If the system must work with a predefined amount of memory, then the maximum number of such tasks can be calculated at startup, and during normal execution the system must _enforce_ that this number is not exceeded.

If there is a risk that this limit might be exceeded, there must be a predefined policy (such as a higher-level protocol) for what to do when running low on resources, such as rejecting all requests, rejecting less important requests, and, as a last resort, putting the system into safe mode.
>If entry_point2() is used instead of entry_point()
>
>void entry_point2()
>{
>    struct Foo *f = new (nothrow) Foo;
>    if (f == 0)
>    {
>        // On memory allocation failure
>    }
>    // Stuff
>}
>
>we at least can manage memory allocation failures.
IMHO, it is much simpler to have a task counter and compare it to a fixed limit :-).

The suggested use of a dynamic allocation failure as an indication of severe system overload might be realistic, provided that:

1.) the struct Foo is absolutely the largest allocation in the system (preferably by one or two orders of magnitude)

2.) this big allocation is done as the first (and preferably only) allocation in the task

If you first do some small allocations and finally do the largest one, and several tasks are starting concurrently, the small allocations may succeed in all tasks but drive the free memory to dangerously low levels, which might make it impossible to recover, especially if the recovery routines require some dynamic memory as work space. This would cause a deadlock situation.

At least if you put the big allocation as the first allocation in the task, then if it fails, you have less memory than the big allocation required, but quite likely only slightly less than the big allocation (since the smaller allocations should be at least one magnitude smaller), so recovery should be possible.

Paul
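A rough sketch of the counter-plus-big-allocation-first approach (TASK_LIMIT, task_count and the Foo payload size are invented for the example; on a real multitasking system the counter would need a critical section or atomic operations):

#include <new>     // std::nothrow

enum { TASK_LIMIT = 8 };
static volatile int task_count = 0;   // guard with a critical section in real code

struct Foo { char payload[2048]; };   // placeholder for the "big" structure

void entry_point2()
{
    // Admission check first: refuse to run once the configured limit is reached.
    if (task_count >= TASK_LIMIT)
        return;                        // apply the system's overload policy here
    ++task_count;

    // The single large allocation is done up front, so a failure is detected
    // while plenty of memory is still free for the recovery path.
    Foo *f = new (std::nothrow) Foo;
    if (f == 0) {
        --task_count;
        return;                        // severe overload: back out immediately
    }

    // ... Stuff ...

    delete f;
    --task_count;
}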
Reply by Alex Vinokur April 3, 2006
Alex Vinokur wrote:
[snip]
> void entry_point2()
> {
>     struct Foo *f = new (nothrow) Foo;
>     if (f == 0)
>     {
>         // On memory allocation failure
>     }
>     // Stuff
delete f;
> }
[snip]

Alex Vinokur
email: alex DOT vinokur AT gmail DOT com
http://mathforum.org/library/view/10978.html
http://sourceforge.net/users/alexvn
Reply by Alex Vinokur April 3, 2006
Hans-Bernhard Broeker wrote:
> In comp.arch.embedded Alex Vinokur <alexvn@users.sourceforge.net> wrote:
>
> > The memory allocation issue in embedded systems is usually critical.
>
> It's usually beyond critical --- it's unacceptable, period. An
> embedded system has nobody to complain to if an operation as strictly
> internal as a memory allocation fails. So it had better not go there
> at all.
>
> If you can't afford to allow exception handling, you generally can't
> allow C++ style dynamic memory handling. It's as easy as that.
>
> > How can one manage that?
>
> Primarily by not doing it.
>
> > 2. But how to manage memory allocation in containers for embedded
> > systems?
>
> Primarily by not using them.
[snip]

But we can come across a similar problem if we are not using dynamic memory allocation too.

For instance,

void entry_point()
{
    struct Foo f;
    // Stuff
}

The entry_point() function is an entry point for tasks that are created at run time.
The number of tasks that exist at the same time depends on the situation on our embedded system.
If that number is too big and sizeof(Foo) is big enough, the memory can be exceeded.

If entry_point2() is used instead of entry_point()

void entry_point2()
{
    struct Foo *f = new (nothrow) Foo;
    if (f == 0)
    {
        // On memory allocation failure
    }
    // Stuff
}

we at least can manage memory allocation failures.

Alex Vinokur
email: alex DOT vinokur AT gmail DOT com
http://mathforum.org/library/view/10978.html
http://sourceforge.net/users/alexvn
Reply by Steve at fivetrees April 2, 2006
"Michael N. Moran" <mike@mnmoran.org> wrote in message 
news:a1TXf.31518$67.3337@bignews6.bellsouth.net...
> > I hate non-deterministic stuff in my systems.
That bears repeating:
> I hate non-deterministic stuff in my systems.
As they say, "word".

Steve
http://www.fivetrees.com
Reply by Michael N. Moran April 2, 2006
CBFalconer wrote:
> "Michael N. Moran" wrote: >> Michiel.Salters@tomtom.com wrote:
[snip]
>>> Sure, it would be annoying if they failed, but the watchdog can
>>> restart them once the critical parts are done using all memory.
>>
>> Sure, as long as your customers don't mind an occasional unexpected
>> reboot. ;-)
>>
>> No one dies, but my company and I gain a less than desirable
>> reputation.
>
> Why do you assume a reboot?
Note the quoted statement about a watchdog at the top. IMHO, watchdog == reboot.
> The failure of a malloc can simply mean postponing that particular
> operation until later.
The two problems with malloc/free and their relatives are knowing "when" or "if" the memory will become available. *When* must be solved by polling (yuk). *If* is unknowable in systems where heap fragmentation is possible.

I hate non-deterministic stuff in my systems.
> The important thing is not to assume success.
True for any function that can return an error. In general, it's a "Good Thing" [tm] to limit these types of functions where practical.
> Another option is to perform the mallocs on system initiation.
Yup. Pre-allocation is good. Only when you use "free" and friends does heap-fragmentation become an issue. [I'm getting that deja-vu feeling about this thread.]
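One way the "malloc at system initiation, then never free back to the heap" pattern is often realised is a fixed-block pool carved from a single startup allocation. A sketch only, with invented names (Pool, pool_init, pool_get, pool_put) and no alignment or locking handling:

#include <cstdlib>   // std::malloc
#include <cstddef>   // std::size_t

// Equal-sized blocks threaded onto a free list, all carved from one malloc
// at start-up. Blocks are returned to this pool, never to the heap, so the
// heap itself can never fragment after initialisation.
struct Pool {
    void        *free_list;
    std::size_t  block_size;
};

int pool_init(Pool *p, std::size_t block_size, std::size_t block_count)
{
    if (block_size < sizeof(void *))
        block_size = sizeof(void *);              // room for the free-list link
    char *mem = static_cast<char *>(std::malloc(block_size * block_count));
    if (mem == 0)
        return -1;                                // fails at start-up, not months later
    p->block_size = block_size;
    p->free_list  = 0;
    for (std::size_t i = 0; i < block_count; ++i) {
        void **blk = reinterpret_cast<void **>(mem + i * block_size);
        *blk = p->free_list;                      // push this block onto the free list
        p->free_list = blk;
    }
    return 0;
}

void *pool_get(Pool *p)            // O(1); returns 0 when the pool is empty
{
    void **blk = static_cast<void **>(p->free_list);
    if (blk)
        p->free_list = *blk;
    return blk;
}

void pool_put(Pool *p, void *blk)  // O(1); hand the block back to the pool
{
    *static_cast<void **>(blk) = p->free_list;
    p->free_list = blk;
}

Exhaustion of such a pool is a deterministic, countable condition rather than a fragmentation surprise.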
> They can handle variable length arrays, for example, which the usual > suspects cannot.
I'm out-of-sync with respect to "they". I guess you mean malloc at system initiation, to which I say ... yep. Malloc is a wonderful tool for this type of situation.

Obviously, the system documentation and requirements should state the limits so that the customer won't configure a system that fails to operate unexpectedly from lack of memory.

cheers,
mike

--
Michael N. Moran           (h) 770 516 7918
5009 Old Field Ct.         (c) 678 521 5460
Kennesaw, GA, USA 30144    http://mnmoran.org

"So often times it happens, that we live our lives in chains
 and we never even know we have the key."
 The Eagles, "Already Gone"

The Beatles were wrong: 1 & 1 & 1 is 1
Reply by CBFalconer April 1, 2006
Steve at fivetrees wrote:
> "Paul Keinanen" <keinanen@sci.fi> wrote in message >> >> If the system fails to start due to lack of memory, it is better >> that the person who performed the reboot is present and might be >> able to rectify the problem (e.g. by returning to the old >> functional configuration) rather than having a fatal error at >> random time within a few months or years e.g. due to dynamic >> memory fragmentation. > > That's the point, really. It's not that malloc/free are evil - > it's memory fragmentation that's evil, and can cause (or, given > time, *will* cause) malloc to fail. > > What's needed, of course, is a hardware heap manager which > separates logical memory blocks from real memory blocks, and > actively disallows fragmentation ;).
Now you are talking about the quality of the malloc implementation, not its use. Of course you can get in trouble by blindly using features without understanding what goes on. But you usually have several courses of action available: You can postpone some actions when the needed malloc fails. You can keep your own track of possibly freeable candidates, do so, and try again. You can build some sort of memory pool, control its use, and let some background process continuously defrag it. I had such a system running for over three years, until a power failure caused it to reboot.

All these things normally involve giving up some measure of portability. However, optimizing memory usage can make a process feasible on the given hardware.

Fragmentation will not necessarily ever cause failure. A decent malloc will put back together any freed blocks, and once all allocated blocks are freed there is NO fragmentation left. However, if your code is faulty and you have memory leaks, that is another tale entirely.

--
"If you want to post a followup via groups.google.com, don't use
 the broken "Reply" link at the bottom of the article.  Click on
 "show options" at the top of the article, then click on the
 "Reply" at the bottom of the article headers." - Keith Thompson
More details at: <http://cfaj.freeshell.org/google/>
Also see <http://www.safalra.com/special/googlegroupsreply/>
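The "postpone some actions when the needed malloc fails" course might look roughly like this (a sketch; Report, PENDING_MAX and the idle hook are invented names):

#include <new>     // std::nothrow

struct Report { char text[512]; };   // placeholder work item

enum { PENDING_MAX = 16 };
static int pending_reports = 0;      // requests we could not start for lack of memory

void try_start_report()
{
    Report *r = new (std::nothrow) Report;
    if (r == 0) {
        // Don't assume success: remember the request and retry later,
        // once other activities have released memory.
        if (pending_reports < PENDING_MAX)
            ++pending_reports;
        return;
    }
    // ... fill in and hand off the report ...
    delete r;
}

void idle_hook()                     // called periodically from the main loop
{
    while (pending_reports > 0) {
        Report *r = new (std::nothrow) Report;
        if (r == 0)
            break;                   // still short of memory; keep waiting
        --pending_reports;
        // ... process one postponed report ...
        delete r;
    }
}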
Reply by Steve at fivetrees April 1, 2006
"Paul Keinanen" <keinanen@sci.fi> wrote in message 
news:fo7t22pi5l2ud0ngm4vkdsbn4onlh87rce@4ax.com...
>
> If the system fails to start due to lack of memory, it is better that
> the person who performed the reboot is present and might be able to
> rectify the problem (e.g. by returning to the old functional
> configuration) rather than having a fatal error at random time within
> a few months or years e.g. due to dynamic memory fragmentation.
That's the point, really. It's not that malloc/free are evil - it's memory fragmentation that's evil, and can cause (or, given time, *will* cause) malloc to fail.

What's needed, of course, is a hardware heap manager which separates logical memory blocks from real memory blocks, and actively disallows fragmentation ;).

Steve
http://www.fivetrees.com
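There is no such hardware heap manager on a typical embedded part, but the "logical block vs. real block" separation can be approximated in software with handles (double indirection): the application only ever holds handles, so the allocator is free to slide live blocks together and undo fragmentation. A toy sketch with invented names, no locking, and no alignment handling:

#include <cstring>   // std::memmove
#include <cstddef>   // std::size_t

enum { ARENA_SIZE = 4096, MAX_HANDLES = 32 };

static char        arena[ARENA_SIZE];
static std::size_t arena_used = 0;

static char        *blocks[MAX_HANDLES];   // real addresses (0 = unused handle)
static std::size_t  sizes[MAX_HANDLES];

int handle_alloc(std::size_t n)      // returns a handle, or -1 when out of space
{
    if (arena_used + n > ARENA_SIZE)
        return -1;                   // caller may compact() and retry
    for (int h = 0; h < MAX_HANDLES; ++h) {
        if (blocks[h] == 0) {
            blocks[h]   = arena + arena_used;
            sizes[h]    = n;
            arena_used += n;
            return h;
        }
    }
    return -1;
}

void handle_free(int h)  { blocks[h] = 0; }

void *handle_deref(int h)            // raw pointer, valid only until the next compact()
{
    return blocks[h];
}

// Slide all live blocks, lowest address first, down to the start of the arena
// and update the handle table; the free space becomes one contiguous region.
void compact()
{
    std::size_t dst = 0;
    int moved[MAX_HANDLES] = {0};
    for (;;) {
        int best = -1;               // unmoved live block with the lowest address
        for (int h = 0; h < MAX_HANDLES; ++h)
            if (blocks[h] && !moved[h] && (best < 0 || blocks[h] < blocks[best]))
                best = h;
        if (best < 0)
            break;
        std::memmove(arena + dst, blocks[best], sizes[best]);
        blocks[best] = arena + dst;
        dst += sizes[best];
        moved[best] = 1;
    }
    arena_used = dst;
}

The price is the obvious one: every access goes through an extra indirection, and raw pointers must never be cached across a compaction.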