g++ on Cortex-M with no dynamic memory

Started by Dave Nadler November 5, 2016
On Monday, November 7, 2016 at 2:52:05 AM UTC-5, Tom Gardner wrote:
> ...Except where the programmer then proceeds to create
> their own version of malloc/free on top of it :(
> Seen that, doh!
Yup. Sane in limited cases (i.e., a fixed buffer pool for a comm subsystem), confined to a particular subsystem...
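[For readers who haven't built one: the kind of per-subsystem fixed pool being described needs no heap at all. A minimal illustrative C++ sketch -- sizes and names invented, not anyone's actual code, and not interrupt-safe as written:]

#include <array>
#include <cstddef>
#include <cstdint>

// Fixed-capacity pool of equally sized buffers; all storage is static and
// there is no malloc/free anywhere. allocate() returns nullptr when empty.
// (Wrap allocate/release in a critical section if used from ISRs.)
template <std::size_t BufSize, std::size_t Count>
class FixedBufferPool {
public:
    FixedBufferPool() {
        for (std::size_t i = 0; i + 1 < Count; ++i)
            slots_[i].next = &slots_[i + 1];
        slots_[Count - 1].next = nullptr;
        free_list_ = &slots_[0];
    }
    void* allocate() {
        if (free_list_ == nullptr) return nullptr;   // pool exhausted
        Slot* s = free_list_;
        free_list_ = s->next;
        return s->bytes;
    }
    void release(void* p) {
        Slot* s = static_cast<Slot*>(p);
        s->next = free_list_;
        free_list_ = s;
    }
private:
    union alignas(std::max_align_t) Slot {
        Slot*        next;                 // link while the slot is free
        std::uint8_t bytes[BufSize];       // payload while it is in use
    };
    std::array<Slot, Count> slots_{};
    Slot* free_list_ = nullptr;
};

// One pool dedicated to the comm subsystem; a function-local static avoids
// depending on global-constructor ordering.
FixedBufferPool<64, 16>& comm_pool() {
    static FixedBufferPool<64, 16> pool;
    return pool;
}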
On 08-Nov-16 at 8:59 PM, Dave Nadler wrote:
> On Monday, November 7, 2016 at 12:59:30 PM UTC-5, Tim Wescott wrote:
>> If you're not going to allow global constructors,
>> what's the point of using C++?
>
> Because of ordering issues, I've typically placed "global lifetime"
> objects as static objects inside (ordered by code) subsystem initialization
> routines. Then referred to them via static pointers. Not ideal but workable
> way to order the initialization.
>
> Global ctors aside, C++ provides huge advantages:
> - type safety
> - RAII especially preventing resource leaks in dtors
> - templates
> - specialized storage allocation by class if required
> and so forth.
> All without any dynamic allocation except on stack.
>
> If only there was a way to have exceptions without heap;
> exceptions really do help make safer code.
> Might be possible in some C++ toolchains if throws are
> limited to pointers (to static exception info)?
> Depends I guess on the implementation (ie does exception
> processing rely on RTTI).
To my knowledge exception handling requires something like RTTI, but not necessarily a heap (AFAIK current implementations do use one).

I've had a few discussions about exceptions. The current complaint of the anti-exception advocates (as heard in SG14) is that exceptions are too costly in time. That comes mostly from gaming, fast trading and Google.

My (main) problem with exceptions is the use of the heap.

Wouter "Objects? No thanks!" van Ooijen
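[The init-ordering approach Dave describes in the quote above -- "global lifetime" objects as static objects inside subsystem initialization routines, reached through static pointers -- is essentially the construct-on-first-use idiom. A minimal sketch with invented names, purely illustrative:]

#include <cstdint>

// Illustrative subsystem object with a non-trivial constructor.
class UartDriver {
public:
    explicit UartDriver(std::uint32_t baud) : baud_(baud) { /* program the peripheral here */ }
    void send(const char*) { /* ... */ }
private:
    std::uint32_t baud_;
};

// The "global lifetime" object lives as a function-local static inside the
// subsystem init routine, so it is constructed when (and in the order) the
// startup code calls the init routines, not at unspecified global-ctor time.
static UartDriver* g_console = nullptr;    // "referred to via static pointers"

void console_init() {
    static UartDriver uart(115200);        // constructed here, exactly once
    g_console = &uart;
}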
On 08.11.2016 г. 17:01, Don Y wrote:
> Hi Dimiter, > > On 11/8/2016 6:11 AM, Dimiter_Popoff wrote: >>>>> The problem with heaps is the fragmentation of the heap in a long >>>>> running system. It might seem running OK for the first year, but will >>>>> it run for the next ten years ? >>>> >>>> That depends on how that memory is physically implemented. >>>> E.g., slip a VMM under it and issues change (albeit at the >>>> granularity of pages) >>> >>> If you have some virtual memory system available that helps a lot >>> >>> Unfortunately a virtual memory system requires a backup storage to >>> hold the "dirty" pages, a page file. >>> >>> If rotating fans and rotating disk can't be used, the only alternative >>> is a Flash disk with limited number of write cycles. >> >> While generally so VM does not necessarily need to swap memory. >> If your physical memory is say 64M and your logical space is set to >> say 1G _and_ you know the 64M will never be all used up, the benefit >> of the VM is that fragmentation will be much less of a problem. > > It also lets you "write protect" memory, trap on writes to > memory that shouldn't be written, trap on writes to memory > that *should* be written, provide protected address spaces, > share memory with different processes (and in different places), > move large blocks of memory in constant time, etc.
All of the protection etc., of course. When I write something new to run under dps, many of the mistakes I make are caught by the "attempted access to non-allocated space, press CR to kill task" message :-).

There are also inconveniences associated with VM, though. Your interrupt handlers cannot be in page-translated memory without killing the system's latency figures (get an IRQ while fixing a page fault and have the IRQ handler create a fault of its own... oops, here is the reset button :). So - as it is in dps - the interrupt handlers must be in BAT (block address translation) memory, which has to be set up at boot time such that it can fit all allocation requests it will get during a boot session.
> Put the stack "someplace special" (in that large, "empty" address > space) and let the hardware tell you when you need to allocate > another page for the stack -- or, when you've *underrun* it, etc.
Yes, in dps a task gets started with an allocated data section and its (user) stack pointer is put on top of it; command line data is at its bottom. It just needs to be large enough - this is "logical memory" being allocated, physical memory gets there only after page faults. And then there is the same - but much shorter - arrangement for the system stack, and then the initialized data section, etc.
> > [Things that are hard to do with pure software mechanisms] > >> Then if memory is allocated using worst-fit strategy fragmentation >> is not much of an issue even without VM. >> >> To me (writing under DPS) stack usage and general allocation >> are just both in use, stacks are nice because of what you already >> stated, generic allocation (not sure how you'd call that, when >> you do a system call to allocate that much memory and you get >> returned the address and the actually allocated size (the latter >> probably somewhat more than requested because of the granularity)) >> is good because it can be deallocated "out of order" (unlike stack >> frames). > > The problem with most memory allocators is they are used for a > hodge-podge of unstructured/unordered requests. E.g., override > new() and you'll have a better chance at a "well behaved" memory > subsystem.
In dps I call "allocation" the act where memory is allocated to a requestor based on a bitmap - a bit per cluster. The lowest level is the system allocation - a cluster being a page, 4k for Power; so there is the LCAT (logical memory cluster allocation table) and the PCAT (physical memory CAT). Then there is the page translation table.

Tasks get memory allocated by requesting some and getting back an address and an allocated size (the size being a multiple of the cluster size, i.e. 4k on 32-bit Power). The allocated piece can be recorded in a "task history record" so it is deallocated by the system upon task kill, or it can be left out of there (the task can still have put a call point in its history records which is called upon kill).

On top of that there is a dps object (dps maintains a runtime object system) called memory_pool; it can have a cluster size as small as just a few bytes and be of variable size, can be "made" even from within a script, can be "removed" from there, etc. Then the pools can get automatically chained if needed - i.e. if some object requests more memory than there is in any of the pools of a chain, a new pool is "made", etc.

Since having all this I have not had any necessity to think "how do I allocate this or that", and, well, I have never thought of stack growth as "allocation" (though I do write "allocate a new stack frame" when I make one :-)).

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI             http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
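[For readers who haven't built one, the general bit-per-cluster idea Dimiter describes is easy to picture in code. A toy C++ sketch -- not dps's LCAT/PCAT, just a single flat table with a first-fit scan; cluster count, cluster size and names are invented:]

#include <cstddef>
#include <cstdint>

// Toy bit-per-cluster allocator: one bit in the table per 4 KiB cluster.
constexpr std::size_t kClusterSize = 4096;
constexpr std::size_t kClusters    = 64;                 // manages 256 KiB here
static std::uint32_t  cat[kClusters / 32];               // cluster allocation table
static std::uint8_t   arena[kClusters * kClusterSize];

// Returns the address and the rounded-up size actually granted, or nullptr.
void* cat_alloc(std::size_t bytes, std::size_t& granted) {
    std::size_t need = (bytes + kClusterSize - 1) / kClusterSize;
    for (std::size_t start = 0; start + need <= kClusters; ++start) {
        std::size_t i = 0;
        while (i < need && !(cat[(start + i) / 32] & (1u << ((start + i) % 32))))
            ++i;                                          // count free clusters
        if (i == need) {                                  // found a free run
            for (std::size_t j = 0; j < need; ++j)
                cat[(start + j) / 32] |= 1u << ((start + j) % 32);
            granted = need * kClusterSize;                // "rounded up a little"
            return arena + start * kClusterSize;
        }
        start += i;                                       // skip past the used cluster
    }
    return nullptr;
}

void cat_free(void* p, std::size_t granted) {
    std::size_t first = (static_cast<std::uint8_t*>(p) - arena) / kClusterSize;
    for (std::size_t j = 0; j < granted / kClusterSize; ++j)
        cat[(first + j) / 32] &= ~(1u << ((first + j) % 32));
}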
On 11/8/2016 11:26 PM, Dimiter_Popoff wrote:
>>>> If rotating fans and rotating disk can't be used, the only alternative >>>> is a Flash disk with limited number of write cycles. >>> >>> While generally so VM does not necessarily need to swap memory. >>> If your physical memory is say 64M and your logical space is set to >>> say 1G _and_ you know the 64M will never be all used up, the benefit >>> of the VM is that fragmentation will be much less of a problem. >> >> It also lets you "write protect" memory, trap on writes to >> memory that shouldn't be written, trap on writes to memory >> that *should* be written, provide protected address spaces, >> share memory with different processes (and in different places), >> move large blocks of memory in constant time, etc. > > All of the protection etc., of course. When I write something new > to run under dps many of the mistakes I make are captured by the > "attempted access to non-allocated space, press CR to kill task" :-).
Yes, handy at debug time but also handy for (VERY) long-lived processes; "CAN'T HAPPEN" had really better NOT happen!
> There are also inconveniences associated with VM though. > Your interrupt handlers cannot be in page translated memory without > killing the system's latency figures (get an IRQ while fixing a > page fault and hhave the IRQ handled create a fault of its own.... > ooops, here is the reset button :). So - like it is in dps - the > interrupt handlers must be in BAT (block array translated) memory > which has to be set upon boot time such that it can fit all > allocation requests it will get during a boot session.
There are LOTS of "inconveniences"! But, there are also lots of *opportunities* that are simply unavailable without that hardware.

E.g., when I release virtual memory, it need not immediately be made available to satisfy another request -- ANY "free" page can suffice! So, *this* freed page can be scrubbed clean (so process A doesn't leak information to process B through this uncontrolled channel). That "scrubbing" can happen when the system has time to do it -- instead of immediately on the release of the page (as long as it is scrubbed prior to reallocation).
>> Put the stack "someplace special" (in that large, "empty" address >> space) and let the hardware tell you when you need to allocate >> another page for the stack -- or, when you've *underrun* it, etc. > > Yes, in dps a task gets started with an allocated data section > and its (user) stack pointer is put on top of it, command line > data is on its bottom. Needs just to be large enough, this is > "logical memory" being allocated, physical gets there just after > page faults. Well and then there is the same - but much shorter - for > the system stack and then there is the initialized data section etc.
Part of a thread's configuration specifies how much stack MUST be resident (in which case, it is allocated/mmap'ed when the task is spawned) and a maximum size to which it will be allowed to dynamically grow. Other parameters govern how aggressively the system tries to reclaim no-longer-in-use stack space. (you don't want to foster pathological behaviors wherein you reclaim a page only to see it faulted back in as soon as the task resumes!)
> On top of that there is a dps object (dps maintains a runtime objects
> system) called memory_pool; it can have a cluster size as small as just
> a few bytes and be of variable size, can be "made" even from within
> a script, can be "removed" from there etc. Then the pools can get
> automatically chained if needed - i.e. if some object requests
> more memory than there is in any of the pools of a chain a new
> pool is "made" etc.
I support multiple "heaps" that run in the logical memory space. Depending on the policies the caller uses to manage them, they can end up as classic heaps, buffer pools, FIFOs, etc. Let the developer figure out how memory will behave in his application and let the heap implementation simply be the mechanism to make that happen (the developer sets *policy* in how he uses each heap).

The VMM runs beneath all this. There are also bridge objects that essentially map the heaps conveniently onto VMM pages. As damn near everything in my world is IPC/RPC, this lets me move objects and sets of objects (e.g., parameters) between processes just by twiddling the underlying VMM system. Of course, silly for small objects that can simply be passed by copy.

And, for asynchronous operations, it provides immutability without "burdening" the caller with a copy operation. E.g., I can pass a frame of raw video to a remote process and not worry about the caller altering the frame *after* the call (but before it has actually been "transmitted"). If the caller doesn't fiddle with the frame's contents until AFTER it has been transmitted, then the call was "free". OTOH, if the caller attempts to alter its contents BEFORE it is actually transmitted, then the VMM system ensures a clean copy is transmitted without the caller's "corruption".

[The system then knows to "consume" such an extra copy after it has been "used"]
> Since having all this I have not had any necessity to think "how > do I allocate this or that", and well, I have never thought of > stack growth as of "allocation" (though I do write "allocate a > new stack frame" when I make one :-)).
Most VMM operations happen without the developer's knowledge, in my case. The OS takes care of keeping track of all the magic so the developer sees a "clean" interface free of hazards, etc.

It's tricky to provide these hooks without EXPOSING them (which would require a more sophisticated class of developer!)
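[Since "override new()" and "specialized storage allocation by class" have both come up in this exchange, here is a minimal C++ sketch of what per-class allocation can look like. Names and sizes are invented, it is not thread/ISR safe, and pool exhaustion is simply treated as fatal:]

#include <cstddef>
#include <cstdlib>
#include <new>

// Per-class allocation: every Message comes out of the class's own static
// pool, so this class's memory behaviour is isolated from everything else
// and trivially predictable.
class Message {
public:
    static void* operator new(std::size_t sz) {
        if (sz > sizeof(Storage)) std::abort();       // can't happen for this class
        for (std::size_t i = 0; i < kSlots; ++i)
            if (!used_[i]) { used_[i] = true; return &storage_[i]; }
        std::abort();                                 // pool exhausted: fatal here
    }
    static void operator delete(void* p) noexcept {
        std::size_t i = static_cast<Storage*>(p) - storage_;
        used_[i] = false;
    }

    char payload[32];                                 // whatever a message carries

private:
    struct Storage { alignas(std::max_align_t) char raw[64]; };
    static constexpr std::size_t kSlots = 8;
    inline static Storage storage_[kSlots] {};        // C++17 inline statics
    inline static bool    used_[kSlots] {};
};

// Usage:  Message* m = new Message;  ...  delete m;   // never touches the heap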
On 08/11/16 23:30, Wouter van Ooijen wrote:
> On 08-Nov-16 at 8:59 PM, Dave Nadler wrote:
>> If only there was a way to have exceptions without heap;
>> exceptions really do help make safer code.
>> Might be possible in some C++ toolchains if throws are
>> limited to pointers (to static exception info)?
>> Depends I guess on the implementation (ie does exception
>> processing rely on RTTI).
>
> To my knowledge exception handling requires something like RTTI, but not
> necessarily a heap (AFAIK current implementations do).
I believe that exception handling does not need RTTI, unless you are using polymorphic exceptions (i.e., throwing specific exceptions and catching generic ones).
> > I've had a few discussions about exceptions. The current aim of the > anti-exception advocates (as heared in SG14) is that exceptions are too > costly in time. This is mostly from gaming, fast trading and google. >
I am not keen on exceptions in embedded code, but the time costs are not the main issue. The run-time costs for code that could throw, but does not, are very small - often smaller than alternative explicit error handling code. I think the possibility of exceptions occurring may hinder optimisations - in particular, it could limit certain types of code re-ordering. There is also the cost in code space for all the stack unwind stuff.

I dislike exceptions simply because they are undocumented gotos. I don't like the idea of functions that can quietly fail in some way and hope that something further up the system can deal with it in a sensible manner. It can be okay on a PC desktop program to say that a particular operation failed, and carry on with the rest of the tasks - if one web page fails to open correctly, the rest of the browser is still usable. But on embedded systems, /everything/ should work correctly all the time - if a bit of software is not necessary, it should not be part of the program. (Note that some things, such as hardware, may fail - but the software should correctly deal with those problems.)
> My (main) problem with exceptions is the use of the heap. >
Exceptions do not /have/ to use the heap. I had a quick google for this, and found that the gcc C++ library will try to get heap memory for an exception - but if it fails, it will use memory from a statically allocated "emergency buffer". It should not be too hard to make stubs for these functions for your own use, making exception handling always use a static buffer. The code will be implementation-specific, of course, and might be tightly tied to particular versions of the compiler.
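[For the record, the allocation functions in question are the Itanium-ABI hooks g++ uses, __cxa_allocate_exception() and __cxa_free_exception(). A very rough sketch of a static-buffer replacement is below. Everything here is assumption-laden: the 128-byte header reservation is a guess, it allows only one exception in flight, other hooks (e.g. for dependent exceptions) are not covered, and whether the linker actually picks these definitions over libsupc++'s depends on the toolchain. Anything real should be written against libsupc++'s eh_alloc.cc for the exact GCC version in use:]

#include <cstddef>
#include <cstdlib>
#include <cstring>

// ROUGH SKETCH ONLY -- not production code. Replaces g++'s heap-based
// exception allocation with a single static slot.
namespace {
constexpr std::size_t kHeaderReserve = 128;   // guess at the runtime's header size
constexpr std::size_t kMaxThrownSize = 64;    // largest exception object allowed
alignas(16) unsigned char g_eh_slot[kHeaderReserve + kMaxThrownSize];
bool g_eh_in_use = false;
}

extern "C" void* __cxa_allocate_exception(std::size_t thrown_size) noexcept {
    if (g_eh_in_use || thrown_size > kMaxThrownSize)
        std::abort();                         // nested or oversized exception: fatal
    g_eh_in_use = true;
    std::memset(g_eh_slot, 0, sizeof g_eh_slot);
    // The runtime expects a pointer just past its own (zeroed) header:
    return g_eh_slot + kHeaderReserve;
}

extern "C" void __cxa_free_exception(void*) noexcept {
    g_eh_in_use = false;
}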
On 09-Nov-16 at 9:23 AM, David Brown wrote:
>> My (main) problem with exceptions is the use of the heap.
>
> Exceptions do not /have/ to use the heap.
Maybe, but the current implementations do.
> I had a quick google for > this, and found that the gcc C++ library will try to get heap memory for > an exception - but if it fails, it will use memory from a statically > allocated "emergency buffer".
That sounds weird. If it can use the static buffer, why not use it anyway? Are you sure it isn't the other way round (a small-exception optimization similar to small-string optimization)?

Wouter "Objects? No thanks!" van Ooijen
On 09.11.2016 г. 09:55, Don Y wrote:
>> All of the protection etc., of course. When I write something new
>> to run under dps many of the mistakes I make are captured by the
>> "attempted access to non-allocated space, press CR to kill task" :-).
>
> Yes, handy at debug time but also handy for (VERY) long-lived
> processes; "CAN'T HAPPEN" had really better NOT happen!
Hi Don,

in my experience long-lived tasks (months) just do not fail. It appears that if some piece of code survives a day or so, chances are it is pretty stable and "bug free" (to the extent it will not fail at all). Which is not to say your scenario is not thinkable, of course; it is just how things are with me.

Where I do see failures is when doing new things, e.g. a user using some relatively new feature. Had this recently during a demo - the user being me. A right click on a button crashed the task; got that "attempted access to non-allocated space" message. The buttons had been there for a few years... had never seen that. The cause turned out to be a non-initialized pointer of sorts; it had lain dormant (pointing to some allocated piece to be read to no further consequence, what was read was irrelevant) until some other global system change or whatever woke it up. Good thing the entire demo was a shiny plug-and-play affair (with an HPGe detector people are used to anything but that), so I could just laugh the bug off.
> > Most VMM operations happen without the developer's knowledge, > in my case. The OS takes care of keeping track of all the > magic so the developer sees a "clean" interface free of > hazzards, etc.
Well, if the developer has to allocate memory he has to say so; no way around that. I can't think of a cleaner and simpler way than having him say "give me that much" and getting in return "here you are, at this address, that much, rounded up a little". The developer (even if this is me) just does not think of physical memory at this level, only of logical memory.

Then if you write at a higher level - e.g. write a dps shell script - allocate/deallocate is completely hidden. You "make" an object which requests memory from its "container" object, logs that so it will deallocate upon "remove", etc. It is a highly sophisticated/interconnected piece of underlying code really.
> Its tricky to provide these hooks without EXPOSING them > (which would require a more sophisticated class of developer!)
I just do not think of the limitations of the user. If I can learn to do something, he/she also can (if not, he is just messing with things he is not supposed to mess with). If something is too much for me to process in the time I want to spend on a given task, it will be for someone else, too; so I just pack things into higher and higher levels as needed, thus reducing the level of competence one needs to be involved in the task at hand. I.e. if I want to make some search utility I just write a dps script; I do not use my knowledge of allocate/deallocate, the vpa language, etc.

Generally I think the assumption that a user is too stupid to be handed this or that, so we have to stupefy the product, is a bad approach; I have never seen anything useful come out of it (I don't think this is what you mean, but it brought up the association - it is one of my red buttons I suppose).

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI             http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
Hi Dimiter,

On 11/10/2016 1:27 AM, Dimiter_Popoff wrote:
> in my experience long-lived tasks (months) just do not fail. It
> appears that if some piece of code survives a day or so chances
> are it is pretty stable and "bug free" (to the extent it will
> not fail at all). Which is not to say your scenario is not
> thinkable of course, just how things are with me.
It depends on what portions of the code are exercised in those "months". E.g., if something only runs on the 5th Monday of each month (and this is February) there may not *be* a 5th Monday in the near future. Or, an action that the user typically seldom invokes ("compact mailboxes"), etc. Or, effectively exposing a hazard that might not be encountered in short-term use (e.g., someone checking mail and then quitting will rarely encounter a competitor process accessing the mailbox).

I.e., the user interface can be running "forever" and still not tickle a latent bug.
> Where I do see failures is when doing new things, e.g. a user > using some relatively new feature. Had this recently during a > demo - user being me. Right click on a button crashed the task, > got that "attempted access to non-allocated space" message. > The buttons had been there for a few years... had never seen > that. The cause turned out to be a non-initialized pointer of > sorts; had lay dormant (pointing to some allocated piece to > be read to no further consequences, what was read was irrelevant) > until some other global system change or whatever woke it up. > Good thing the entire demo was a shiny plug-and-play affair > (with a HPGe detector people are used to anything but that) > so I could just laugh the bug off. > >> Most VMM operations happen without the developer's knowledge, >> in my case. The OS takes care of keeping track of all the >> magic so the developer sees a "clean" interface free of >> hazzards, etc. > > Well if the developer has to allocate memory he has to say so, > no way around that. I can't think of a cleaner and simpler way > than have him say "give me that much" and in return "here you > are, at this address, that much, rounded up a little". > The developer (even if this is me) just does not think of > physical memory at this level, just of logical.
I try to hide as much unnecessary detail as possible at each "level" in my design. And, the "standard" interfaces that you would expect to encounter tend to behave as you'd expect of them. But, a knowledgeable user can usually access the same mechanisms via a more versatile interface for finer control over the implementation. (However, the API doesn't REQUIRE this of all users!)
> Then if you write at a higher level - e.g. write a dps shell > script - allocate/deallocate is completely hidden.
Yes. I use a similar philosophy throughout the design. I expose what a developer coding at a particular level in the design expects/needs to meet his design goals. Someone writing for bare metal sees far more detail than a "user" (end user/consumer) who writes nothing OR, at best, "scripts". Someone writing a device driver sees a different level of abstractions and services. Writing atop the RTOS exposes yet another API -- that of the proxy/service. At the very top (least capable, most hand-holding) are user "scripts". Here, the user doesn't worry about numeric representations, memory allocation/garbage collection, communication channels, permissions (assuming he HAS them), crash recovery, etc.
> You "make" an object which requests memory from its "container" > object, logs that so it will deallocate upon "remove" etc. > It is a highly sophisticated/interconnected piece of underlying > code really. > >> Its tricky to provide these hooks without EXPOSING them >> (which would require a more sophisticated class of developer!) > > I just do not think of the limitations of the user. If I can > learn to do something he/she also can (if not he is just messing > with things he is not suppose to mess with. If something is too > much for me to process in the time I want to spend on a given > task it will be for someone else, too; so I just pack things > into higher and higher level as needed, thus reducing the level > of competence one needs to have involved in the task at hand.
Exactly. In my case, I don't want developers to have to be concerned with the VMM and how it ties into the RTOS, etc. If a developer (a sort of "user") wants to send a message to a particular host/process, I don't want him to have to worry about the mechanism for doing that: create the message and enqueue it on the desired destination -- let the system figure out *where* the target lies WHEN THE MESSAGE IS ACTUALLY SENT (a target can MOVE while the message is enqueued -- why bother the developer with handling that case?!)

Likewise, my IDL lets me define the contracts between clients and the proxy/stub libraries for each service. E.g., if you are MOVE-ing an object to another process (local or remote), then the semantics of that IPC/RPC imply that the object will cease to exist on the originating process/node. The stub generated by the IDL will unmap the object from the caller's address space when the routine is invoked. However, the object will persist within the stub until it can actually be "transferred" to the destination -- at which point, it will be deleted from the node (if the destination was remote) or deleted from the stub's memory space. A whole class of errors is avoided -- those where the object is not unmapped by the caller AFTER the RPC returns (as well as ensuring the caller doesn't try to alter the cached object while it is waiting to be transmitted).

OTOH, if you are *copying* the object, then it's never unmapped from the caller's memory space -- BUT, it is protected from alteration (zero-copy semantics) while the stub is waiting to copy it out to the destination. Again, another class of errors avoided: changing the object while it is conceptually immutable (protecting against that would normally require a copyout() -- even if the caller doesn't elect to alter the data before the actual transmission occurs!)

*BUT*, the caller needs to provide a page-aligned object in order for the system to efficiently implement this magic! (Yet I don't want to directly expose the VMM to the caller.)
> I.e. if I want to make some search utility I just write a dps > script, I do not use my knowledge on allocate/deallocate, vpa > language etc.
Yes, but you can do so because the machinery beneath you is already in place and automated.
> Generally I think the assumption that a user is too stupid to > be handed this or that so we have to stupefy the product is a > bad approach, I have never seen anything useful come out of it > (I don't think this is what you mean but it brought the association, > it is one of my red buttons I suppose).
I want users (at their respective "levels" in the implementation hierarchy) to concentrate on their objectives and not my mechanisms. E.g., if a user wants to figure out (roughly) how many 18x18" ceramic tiles with 1/4" grout lines are needed to tile a 19'5" x 12'2" room (assuming a 3/8 inch border along each edge): (19 ft 5 in - (2 * 3/8 in)) * (12 ft 2 in - (2 * 3/8 in)) / (18 in + 1/4 in) ^ 2 will yield a dimensionless value (area/area). OTOH, omitting the divisor would lead to an area result -- which the user could request in any suitable "area units". It's silly to force the user to do a bunch of trivial but error prone normalizations when the API can handle those more reliably.
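[Worked through in plain numbers - purely illustrative, after normalizing everything to inches by hand, which is exactly the chore Don argues the API should take off the user's hands - the calculation comes out to roughly a hundred tiles:]

#include <cstdio>

int main() {
    double room_w = 19 * 12 + 5 - 2 * 0.375;   // 19'5" minus two 3/8" borders
    double room_d = 12 * 12 + 2 - 2 * 0.375;   // 12'2" minus two 3/8" borders
    double pitch  = 18 + 0.25;                 // 18" tile + 1/4" grout line
    double tiles  = (room_w * room_d) / (pitch * pitch);
    std::printf("about %.1f tiles\n", tiles);  // ~101.3, dimensionless (area/area)
}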
On 10/11/16 07:34, Wouter van Ooijen wrote:
>> I had a quick google for
>> this, and found that the gcc C++ library will try to get heap memory for
>> an exception - but if it fails, it will use memory from a statically
>> allocated "emergency buffer".
>
> That sounds weird. If it can use the static buffer, why not use that
> anyway? Are you sure it isn't the other way round (a small-exception
> optimization similar to small-string optimization)?
I haven't looked at this in detail, nor have I tried to find documentation about /why/ it has this sort of implementation. So I can only speculate. And just to be clear, I am talking about PC or "big embedded system" implementations here, not memory-constrained devices. Using an emergency buffer for exceptions will be limited. It puts a tight limit on the size of the object thrown (since it is statically allocated, and a bigger buffer is a waste of space), and it can only be used for one exception (unless you want to re-invent malloc). In a multi-threaded program, only one thread can use the emergency buffer at a time - there is a lock to protect it. I don't think it would be a bad idea to have a small buffer per thread that would be used for common exception types as a first choice, then fall back to heap for big exceptions, then to the emergency buffer if that fails too. But the current libstdc++ does not have such an implementation.
On Thursday, November 10, 2016 at 4:43:35 AM UTC-5, David Brown wrote:
> I don't think it would be a bad idea to have a small buffer per thread
> that would be used for common exception types as a first choice, then
> fall back to heap for big exceptions, then to the emergency buffer if
> that fails too. But the current libstdc++ does not have such an
> implementation.
Here's a nice write-up on how libstdc++ manages exceptions, and how to write a replacement: https://monoinfinito.wordpress.com/series/exception-handling-in-c/