On 30/12/2010 11:49, Stefan Reuther wrote:
> David Brown wrote:
>> On 30/12/2010 09:39, Arlet Ottens wrote:
>>> On Thu, 30 Dec 2010 01:43:56 -0700, D Yuniskis wrote:
>>>> But, my current implementation has very coarse granularity. In essence,
>>>> the kernel uses increasingly harsh tactics to get what it needs:
>>>> - "please give up resources"
>>>> - "you WILL give up resources"
>>>> - "if you won't cooperate, you'll have to go away"
>>>> - "die, sucker!"
>>>
>>> Seems overly complicated. Why not simply design the tasks so they'll give
>>> up any resources they no longer need?
>>
>> I agree. The whole idea seems misplaced in an RTOS. The point of a
>> /real time/ operating system is that tasks should do what they have to
>> do, within the time (and resource) limits they specify.
>
> That's exactly what I was thinking. "Real Time" means: given this set of
> resources, and this workload, it will work (at all times) or not.
>
> Thus, every task has a set of resources it will always need and never
> give up. For everything else, it asks the kernel. "Hey, kernel, if you
> have some spare CPU cycles, here's this low-priority job. It helps me to
> have it, but I don't need it all the time."
>
> Doing something similar with memory is probably not as easy, but has
> been done (e.g. Windows 3.0 "discardable memory"). The problem is that
> usually the task has to know that the memory is gone, but you can't let
> it clean up for minutes before taking the memory.

And if Windows 3 was any indication of the "success" of this tactic, then it should be avoided. Windows 3 used "cooperative multitasking", as in "if I assume that all other applications cooperate nicely, then I can be selfish and get all the resources!".
Applications "buying" resources
Started by ●December 30, 2010
Reply by ●December 30, 2010
Reply by ●December 30, 2010
On 12/30/2010 1:43 AM, D Yuniskis wrote:
> Hi,
>
> My RTOS has hooks that let the "kernel" ask tasks
> (forget how that is defined) to voluntarily shed
> resources (time, memory, power). This lets the
> kernel (actually, a "special" user-land task that
> makes decisions *for* the kernel) throttle back
> the demands placed on the *fixed* resources to
> better adjust to the current demands placed on them.
>
> So, if some "new" task needs to be spawned and
> needs some resources, the kernel can ask the
> currently running set of tasks to relinquish
> resources that they may be holding but not *needing*.
>
> But, my current implementation has very coarse
> granularity. In essence, the kernel uses increasingly
> harsh tactics to get what it needs:
> - "please give up resources"
> - "you WILL give up resources"
> - "if you won't cooperate, you'll have to go away"
> - "die, sucker!"
>
> Of course, if all applications were well behaved, they
> would comply with early requests and, hopefully, stave
> off the more severe requests that could follow.
>
> But, applications tend to "gamble" -- "maybe someone else
> will give up enough resources so I won't have to!" This
> effectively reduces resource management to:
> - "please give up resources"
> - "die, sucker!"
>
> I can tweak the resources *available* to a particular task.
> But, if I use this mechanism to compensate for "uncooperative"
> tasks, then it unnecessarily restricts their capabilities...
> which makes them *more* inclined to "not cooperate" (as
> they *hoard* the limited resources that they have!)
>
> What I would like is a scheme whereby applications can decide
> how much the resources they are holding are "worth". And,
> then "buy" those resources (with some sort of constrained
> "purse"). This gives the "kernel" a bit more flexibility
> in handling resource reallocation -- e.g., *it* sets the
> current price for the resources and can change that price
> dynamically to "entice" tasks to shed resources (by making
> it increasingly expensive to hold onto them).
>
> [there are some fairness issues I still need to think through...
> e.g., if task A gave up a resource at a price point of K units
> and the kernel then raises the price point to L units, should
> task A be compensated for his "early sale"?]
>
> Of course, this approach kicks the can down the road a bit,
> as it now requires me to decide how much "cash" each
> task is given (and whether it can vary).
>
> While this seems like a cute idea/model, I wonder if there
> isn't a simpler scheme that I could fall back on?
>
> [remember, my goal is always to NOT have to ask the user
> to make these decisions -- especially at the instant the
> resource shortage manifests!]
>
> Thanks!
> --don

If you (your team) are writing all of these tasks, a design review will get the offending task (programmer) to change their code to comply. Fix the problem at the beginning, not in the product.

hamilton
Reply by ●December 30, 2010
Hi Arlet,

On 12/30/2010 2:14 AM, Arlet Ottens wrote:
> On Thu, 30 Dec 2010 02:01:06 -0700, D Yuniskis wrote:
>
>>> Seems overly complicated. Why not simply design the tasks so they'll
>>> give up any resources they no longer need?
>>
>> That's the intent behind the "please give up resources" request. But, it
>> relies on the task to decide *which* resources it is going to give up
>> (e.g., maybe it needs this chunk of memory but not *that* chunk, etc.).
>>
>> Since each task wants to "look good", they will all *tend* to hold onto
>> as much as they can get away with -- "maybe someone else will satisfy
>> the current need".
>>
>> And, often algorithms trade one resource for another. E.g., give me a
>> bigger chunk of time and I can live with less memory (or vice versa).
>>
>> From the outside (kernel viewpoint), you can't tell what's
>> important to a particular task. E.g., when running on AC power with few
>> competitors, certain tasks might gobble up *huge* chunks of memory
>> (e.g., many megabytes whereas they can nominally work with tens of KB)
>> -- this lets them be more responsive (memory with nothing in it doesn't
>> save you any power, etc.)
>>
>> So, I need a "currency" that tasks can use to balance their "wants" with
>> their "abilities". So, the information concerning the relative values
>> of those resources can remain encapsulated in the task and not exported
>> to some other "mechanism" for enforcement "from without".
>
> How many tasks do you think will be capable of such resource exchange?

It's an inherent part of the API. If they don't want to cooperate, they
risk being terminated. E.g., how many windowed applications handle an
"expose" event? Ans: ALL of them -- else their windows don't get drawn! :>

> You could introduce virtual memory, paging, and a filesystem cache, all
> controlled by the kernel, so it can shift resources around dynamically.

I already support VM -- but without a backing store. I.e., the kernel can
reclaim pages of physical memory, but if the application isn't aware of it
(and cooperative), then the contents of that page are lost to the original
application. Hence the reason to signal tasks to decide what they want to
release.

> If the set of tasks is reasonably fixed, you can design the resource
> partition up-front.

If the task set were fixed, the problem wouldn't exist :>

> If the set of tasks is wildly variable, under control of the user, you
> could ask the user to set up resource usage on a per-task basis. After
> all, the user knows best what the priorities are.

Yes, that is the ultimate goal. I can assign default "budgets" to each
task and let the user modify those (within some range of constraints).
The "currency" I proposed is one such mechanism.

But, users can't understand "quantum size", "memory allocation",
"priority ceiling", etc. They just know "*this* is worth more to me than
*that*". So, you need a dimensionless unit that lets them show their
preferences without having to understand what is happening.

If your desktop PC (e.g.) popped up a message to the effect of
"insufficient resources to handle all tasks", could *you* decide how to
reallocate those resources if I gave you a table of "current memory
allocation", "current run priority", etc.? Chances are, you would just
kill some set of tasks until things started running again.

Now, imagine those tasks being smart enough to try to enhance their
performance by taking extra (!) resources AS THEY ARE AVAILABLE. So,
perhaps an MP3 decoder "decodes ahead" and caches the next 10 seconds of
music so that it can "sleep" thereafter (this makes its timeslices
available to other tasks! Its decision to "waste memory" could have a
positive impact on a task that wants more CPU time). Or, maybe a file
system maintenance routine grabs an excess of *time* to expedite its
check of the filesystem -- thereby removing itself from the list of
"running" tasks sooner than it would otherwise (freeing memory *and*
time for use by subsequent tasks).

In that environment, if the *tasks* don't cooperate (under some set of
brokerage criteria), could *you*, the user, look at that augmented table
of "current memory allocation", "current run priority", "MINIMUM REQUIRED
MEMORY ALLOCATION" (i.e., how much "extra" is it currently using),
"MINIMUM REQUIRED RUN PRIORITY" (i.e., how elevated is its current
priority), etc. and decide how to reallocate these resources?

I'm assuming the application developer can. *IF* he is motivated to do so
(by policies that can make his UNcooperative application look like *crap*
if he doesn't comply).
Reply by ●December 30, 2010
Hi Hans-Bernhard,

On 12/30/2010 6:47 AM, Hans-Bernhard Bröker wrote:
> On 30.12.2010 09:53, D Yuniskis wrote:
>
>> First of all, note that a task can ignore ANY or ALL
>> of these "tactics"/requests (except, of course, the
>> last one :> )
>
> That's exactly the problem. If they can, you have to assume they will.
> Which renders the entire scheme moot.

By the same token, "voluntary multitasking" (each task holding the CPU
for as long as it wants before relinquishing control to the next
eligible task) "won't work". :> Of course, it *does* work, because the
developer(s) "behave responsibly".

This is fine in a closed-end, fixed-functionality product. But, if you
open the product up to other applications, you run the risk of those
other applications misbehaving -- either intentionally (to "look better"
or to make their development effort easier) or unintentionally (by not
understanding the costs of each operation they perform).

If you can develop a policy that lets tasks (applications) express their
relative resource needs "within a budget", then you can have an external
agency (the kernel) *enforce* that sharing/cooperation.
Reply by ●December 30, 2010
On 12/30/2010 2:40 AM, upsidedown@downunder.com wrote:
> On Thu, 30 Dec 2010 01:43:56 -0700, D Yuniskis
> <not.going.to.be@seen.com> wrote:
>
>> My RTOS has hooks that let the "kernel" ask tasks
>> (forget how that is defined) to voluntarily shed
>> resources (time, memory, power). This lets the
>> kernel (actually, a "special" user-land task that
>> makes decisions *for* the kernel) throttle back
>> the demands placed on the *fixed* resources to
>> better adjust to the current demands placed on them.
>>
>> So, if some "new" task needs to be spawned and
>> needs some resources, the kernel can ask the
>> currently running set of tasks to relinquish
>> resources that they may be holding but not *needing*.
>
> During the last few decades, I have written a few successful control
> systems using simple fixed-priority real-time operating systems
> without problems.
>
> The key issue is that any high-priority task should use as few
> resources as possible. The lowest-priority task can use as many
> resources as it likes.

This approach only works in a static system. If the number of consumers
changes, then the strategy falls apart. Note that (process) control
systems tend to be VERY static: "do this... FOREVER".

Could your systems work if some arbitrary third party were allowed to
add his code to your code base (assuming he wasn't allowed to "touch"
your code) and WITHOUT changing any of the hardware?
Reply by ●December 30, 2010
Hi Hamilton,

On 12/30/2010 12:01 PM, hamilton wrote:
> On 12/30/2010 1:43 AM, D Yuniskis wrote:
>> [original post snipped -- quoted in full upthread]
>
> If you (your team) are writing all of these tasks, a design review will
--^^

Where's the "else" clause in your statement? :>

> get the offending task (programmer) to change their code to comply.
>
> Fix the problem at the beginning, not in the product.
Reply by ●December 30, 2010
Hi David,

On 12/30/2010 2:03 AM, David Brown wrote:
> On 30/12/2010 09:39, Arlet Ottens wrote:
>> On Thu, 30 Dec 2010 01:43:56 -0700, D Yuniskis wrote:
>>
>>> My RTOS has hooks that let the "kernel" ask tasks (forget how that is
>>> defined) to voluntarily shed resources (time, memory, power). This lets
>>> the kernel (actually, a "special" user-land task that makes decisions
>>> *for* the kernel) throttle back the demands placed on the *fixed*
>>> resources to better adjust to the current demands placed on them.
>>>
>>> So, if some "new" task needs to be spawned and needs some resources, the
>>> kernel can ask the currently running set of tasks to relinquish
>>> resources that they may be holding but not *needing*.
>>>
>>> But, my current implementation has very coarse granularity. In essence,
>>> the kernel uses increasingly harsh tactics to get what it needs:
>>> - "please give up resources"
>>> - "you WILL give up resources"
>>> - "if you won't cooperate, you'll have to go away"
>>> - "die, sucker!"
>>
>> Seems overly complicated. Why not simply design the tasks so they'll give
>> up any resources they no longer need?
>
> I agree. The whole idea seems misplaced in an RTOS. The point of a /real
> time/ operating system is that tasks should do what they have to do,
> within the time (and resource) limits they specify. Then you put them

(sigh) No. Real-time simply means that there are temporal aspects to the
"correctness" of an algorithm/application.

In a non-real-time application, the correctness of the "answer" is
time-invariant. 2+2=4. Whether that answer is obtained in 200ns or 200
*years*, it is still correct.

In a real-time application, there is a value function associated with
the "timeliness" of the answer. In a hard real-time (HRT) system, the
value function is zero for all times at or beyond the "deadline" (note
that the deadline is defined by the application domain). In a soft
real-time system, the value function remains positive -- though often
with a monotonically decreasing slope. I.e., "now is better than
later... which is better than later STILL".

Note that a real-time system can include NON-real-time aspects.

> together in a complete system with enough resources to cover all needs,

No. You don't have to have enough resources to cover all needs. That's
an extremely conservative way of designing (not cost-effective). It
makes the designer's job a lot easier, though: "I need X RAM, Y CPU, Z
crystal frequency". If "all" of the RT aspects are not active at a
particular time, then you have over-specified your hardware. This can
have real performance consequences (e.g., reduced battery life for a
portable application).

And, an RT system need not *meet* all the requirements of its
constituent parts. Do you complain if you can't run *EVERY* application
that you have installed on your PC *concurrently*? What if you try to
start an application and you get an "insufficient memory" error? Should
you shut the machine off and throw it into the trash (i.e., *obviously*
it MUST be "broken", right?)?

As with anything, there are relative values to individual
applications/tasks in any system. In an RT system -- even an HRT
application -- there is often value in pressing onward even in the face
of missed deadlines.

For example, VoIP is an HRT task. If you miss a deadline and can't get
the "next" voice packet through the decoder before the (current)
deadline, you will hear a drop-out in the audio signal. How
objectionable this is varies with the actual speech involved -- a
"clipped" sound vs. a "missing word", etc.

Now, once you have missed that deadline (assume decoding the sound
packet "late" has no value, so it is simply discarded), do you terminate
the application -- because it is *broken*? Or, do you tolerate the
temporal anomaly and hope that things improve? (for VoIP, you have no
control over the network end-to-end, so there is a chance that things
can get better or WORSE from packet to packet) Or, do you enlarge your
buffers and dynamically increase latency in the hope that your
*perceived* quality will improve (because you are giving yourself more
time -- and memory -- to decode "upcoming speech packets"... in essence,
pushing the deadline further out)?

And, *if* the VoIP application eventually craps out, does that mean that
the MP3 decoder running on the same platform should also die? I.e., the
"device" is considered "broken"?

> with an RTOS that dishes out those resources according to set rules, and
> everyone is happy - every task has sufficient resources to do its job.
> It is a regimented, military-style organisation with the RTOS kernel as
> the absolute dictator.

No. That's an inflexible way of designing. It results in overspecified
hardware and underperformance. By definition, it *requires* you to know,
a priori, *all* of the tasks that could ever run concurrently on the
platform.

> This system, however, sounds more like some sort of 70's hippie commune,
> where you think it will all sort of be all right if everyone is nice to
> each other.

My original implementation relies on that. As such, it is prone to abuse
by folks who don't want to cooperate (or who aren't smart enough to know
*how* to cooperate).

It would be like leaving handicapped parking places close to business
entrances and expecting ONLY people who needed them to use them --
instead of *requiring* people to have a "handicapped (license) plate" or
"placard" to entitle them to that use. Obviously, lawmakers realize that
people, left to their own devices, would blatantly ignore this "policy"
and/or come up with rationalizations to justify abusing it ("Oh, I just
had to run in for a minute to pick up a gallon of milk"; "Oh, my FINGER
hurts so I *should* be able to use it"; etc.).

I want a mechanism that will allow tasks to dynamically take advantage
of "spare resources" without opening the door to abuse. E.g., desktop
schedulers (interactive environments) tend to "favor" short-running
tasks by using priority-aging mechanisms. As such, a short task gets in,
does its work and terminates without burdening the system by its
continued presence.

I can create a variety of mechanisms to try to offset undesirable
behaviors (e.g., keep the time-memory product constant: you want more
memory? you get less CPU time), but they also have bad side-effects.
And, they can be gimmicked by careful planning. Something *like* the
"currency" scheme I proposed should let applications take whatever
approach they want to resource utilization WITHIN THEIR PROVIDED BUDGET.

> Even desktop systems - which don't have real time constraints, and which
> deal with hugely varying loads and programs from all sorts of different
> sources - don't have this sort of complicated mechanism. Programs get
> the resources they ask for, if the system can supply them. If there are
> no more resources, they get nothing more. If the OS runs out of
> resources for critical tasks, it does not ask processes nicely to give
> back resources. It finds a victim (something low priority, or perhaps a
> memory hog) and asks it to stop itself. If that fails, it kills it. No
> ifs and buts, no "please", no messing around with "costs".

I think you will find most desktop systems aren't that smart. They let
everything that is running *continue* to run. It is the *user* who has
to intervene and shut something down. If the user tries to *start*
something, he gets an impolite "no".

> Now, just because other RTOS's and non-RT OS's have simpler and clearer
> methods of controlling resources, does not necessarily mean that your
> idea is a bad one. But you need to do some very careful thinking through
> of realistic use-cases to see what, if anything, this added complexity
> will gain you.
>
> In particular, if you conclude that in the worst case your kernel must
> simply kill the task, then you might as well do that from day 1 - in an
> RTOS, your system has to keep working even in the worst case anyway. In

No, you're assuming "working" means "every task meets all of its
deadlines". An equally viable definition of "working" is "all tasks
achieve their results with non-zero value functions". Or, "this set of
tasks meets their goals (and these others don't)".

As I said earlier, if you can't open an instance of Firefox on your
desktop AT THIS TIME, does that mean your desktop system is *broken*? If
you can't place a telephone call at this time, does that mean the system
is *broken*? Or, just "temporarily overloaded"? (i.e., to the other
people participating *in* calls at this time, it is "working fine")

> the desktop OS world, it is good to aim for graceful gradual failure
> when things go bad. In the RTOS world, you aim to guarantee that they
> won't go bad - graceful failure is thus a waste of resources.
Reply by ●December 30, 2010
Hi Stefan,

On 12/30/2010 3:49 AM, Stefan Reuther wrote:
> David Brown wrote:
>> On 30/12/2010 09:39, Arlet Ottens wrote:
>>> On Thu, 30 Dec 2010 01:43:56 -0700, D Yuniskis wrote:
>>>> But, my current implementation has very coarse granularity. In essence,
>>>> the kernel uses increasingly harsh tactics to get what it needs:
>>>> - "please give up resources"
>>>> - "you WILL give up resources"
>>>> - "if you won't cooperate, you'll have to go away"
>>>> - "die, sucker!"
>>>
>>> Seems overly complicated. Why not simply design the tasks so they'll give
>>> up any resources they no longer need?
>>
>> I agree. The whole idea seems misplaced in an RTOS. The point of a
>> /real time/ operating system is that tasks should do what they have to
>> do, within the time (and resource) limits they specify.
>
> That's exactly what I was thinking. "Real Time" means: given this set of
> resources, and this workload, it will work (at all times) or not.

No. Please see my explanation in my reply to David's post. Real-time
just means "the result has a time-sensitive component". Also, the
*scale* of that sensitivity is application-specific: a VoIP application
obviously works on a much shorter timescale than a deep space probe.
And, the "values" of their missed deadlines also differ -- missing a
packet-decode deadline for VoIP by 5 seconds means the packet is almost
assuredly COMPLETELY WORTHLESS. OTOH, missing the deadline to fire the
main rocket on a space probe by 10 *minutes* might just require a longer
burn to compensate (at *some* point, you can't burn long enough *to*
compensate!)

> Thus, every task has a set of resources it will always need and never

Again, that's an overly conservative way of designing. And, it really
only works well in a static environment. If you have N tasks/activities
that the device can perform, do you size it for all N of those
activities even if they only *rarely* happen concurrently? (i.e., there
are a million telephones where I live; do you think TPC has half a
million lines in place to allow all of those phones to be in use??)

If you do, then most of the time you have memory, CPU cycles and "power"
sitting around unused. Your product *could* perform better but
deliberately *doesn't* -- out of fear that some other task *might*
become activated and need those resources.

> give up. For everything else, it asks the kernel. "Hey, kernel, if you
> have some spare CPU cycles, here's this low-priority job. It helps me to
> have it, but I don't need it all the time."

So, your MP3 decoder spawns a separate task to "decode ahead" so that
*it* can have less stringent timing requirements? And, the MP3 decoder
then *kills* that task when the CPU needs those cycles? (no, I imagine
you wrote your MP3 decoder to always work the same way in the same time
in the same footprint)

> Doing something similar with memory is probably not as easy, but has
> been done (e.g. Windows 3.0 "discardable memory"). The problem is that
> usually the task has to know that the memory is gone, but you can't let

That's why you *ask* the task to relinquish the resource(s).

> it clean up for minutes before taking the memory.

There is a cost for using "spare" resources. Obviously, applications
want to be able to minimize this cost since *they* incur it. E.g., in my
"hippie commune" <grin> approach, if it is expensive for you to handle
an ASYNCHRONOUS request from the kernel to shed resources, then, if you
*use* them, you run the risk of the kernel getting annoyed with you and
forcing its will upon you. So, when you choose to "make yourself look
better" by exploiting those resources, you have to keep in mind how much
it costs you to *employ* them and then to relinquish them.

E.g., in the MP3 decoder example, you can use extra memory for very
little cost: just treat it as discardable "chunks" of some convenient
size (N frames) and tag each chunk with something that allows you to
relate it back to the MP3 source from which it was decoded. Now, when
asked to free up memory, you simply discard those chunks with a bias
towards those that are furthest in the future (portion of the MP3
stream). I.e., the time you spent decoding them was wasted... you will
probably have to decode them again, in a few milliseconds, but *within*
the memory footprint to which you have pared down.

Think about it. Think about how you would rewrite applications if you
"had more memory" or "more CPU". Note that you *don't* have a guarantee
of having either of those ALWAYS. But, chances are, you will end up with
one or more of them "often".

And, if you want to write an uninspired task, you still can. You just
will never take advantage of those extra resources and will always be at
risk of termination if your *fixed* set of resources becomes
inappropriate for some particular set of running applications. I.e., as
other applications get smarter, you will "lose market share" because
*they* survive in a resource-stressed environment whereas *you* end up
"killed".
Reply by ●December 30, 2010
Just thought I'd let you know that your worldview is much closer to mine than David's is. I almost could have written your words for you. Interesting; and enjoying the discussion, too.

Jon
Reply by ●December 30, 2010
On 30.12.2010 21:33, D Yuniskis wrote:
> On 12/30/2010 6:47 AM, Hans-Bernhard Bröker wrote:
>> On 30.12.2010 09:53, D Yuniskis wrote:
>
>>> First of all, note that a task can ignore ANY or ALL
>>> of these "tactics"/requests (except, of course, the
>>> last one :> )
>
>> That's exactly the problem. If they can, you have to assume they will.
>> Which renders the entire scheme moot.
>
> By the same token, "voluntary multitasking" (each task
> holding the CPU for as long as it wants before relinquishing
> control to the next eligible task) "won't work". :>

And guess what: ultimately it doesn't. If that needed proving,
experience with less-than-perfectly-behaved Windows 3.x programs has
done just that, to a very convincing level.

> This is fine in a closed-end, fixed functionality product. But,
> if you open the product up to other applications, you run the
> risk of those other applications misbehaving

Exactly. So if you want to guarantee any kind of real-time behaviour,
you have no choice but to disallow such misbehaviour right at the
source, i.e. foresee killing processes or threads (or at least taking
the CPU away from them forcibly).

Once we agree you need to do that anyway, there's little, if anything,
to be gained from offering other mechanisms on top of it. They only add
complexity for little gain. You'll likely end up spending more effort in
the kernel on getting all those niceties handled than it would have
taken the application to just do the job at full throttle.