
Applications "buying" resources

Started by D Yuniskis December 30, 2010
Hi Jon,

On 12/30/2010 4:37 PM, Jon Kirwan wrote:
> Just thought I'd let you know that your worldview is much
> closer to mine than David's is. I almost could have written
> your words for you. Interesting; and enjoying the discussion
> too.
<shrug> My approach to these problems has evolved as my solutions have become more sophisticated. With the sort of processing power and resources that you can cram into a few cubic inches *today*, there are far more possibilities that can be explored.

I.e., if people think it a worthwhile use of resources to make menus that fade in/out, then *I* can consider it a worthwhile use of resources to dynamically control the resources that applications use! <grin>

The parallel to windowed display environments is a good one. I.e., you can choose to naively handle your "expose event" and repaint the *entire* window. Or, you can try to use some smarts to only repaint the newly-exposed portion. Or, you can let the window manager handle some or all of this with backing store.

Each approach has different levels of sophistication and cost. But, the cost of the *framework* is distributed across all subsequent application instances. I.e., once you *can* do these sorts of things, it becomes *easier* to do them (imagine running a windowed environment on a KSR-33!). Just because you don't have such an environment *now* doesn't mean you can't *develop* one (e.g., the "power" resource hasn't even been discussed, here, yet will be *increasingly* important in future product designs as users expect more functionality in smaller sizes and longer battery lifes [sic]).
On Thu, 30 Dec 2010 17:13:02 -0700, D Yuniskis
<not.going.to.be@seen.com> wrote:

[full quote of the previous post elided]

>(imagine running a windowed environment on a KSR-33!)
Only one comment. I don't remember ever seeing a KSR-33. I have seen, used, and even repaired (I say very modestly, because whoever designed the darned thing most assuredly went right into a mental hospital afterwards) a KSR-35. Note the -35. The only -33's I ever saw were of the ASR (with tape punch and reader) variety. Not that KSR-33's didn't exist. I wouldn't know. I just never saw one. :)

Still trying to imagine the windowed -33, though. I can still readily read ASCII punches off of punch tape, so I'm trying to think how it might be done using the tape punch, now. ;)

Jon
Hi Hans-Bernhard,

On 12/30/2010 4:41 PM, Hans-Bernhard Bröker wrote:
> On 30.12.2010 21:33, D Yuniskis wrote:
>> On 12/30/2010 6:47 AM, Hans-Bernhard Bröker wrote:
>>> On 30.12.2010 09:53, D Yuniskis wrote:
>
>>>> First of all, note that a task can ignore ANY or ALL
>>>> of these "tactics"/requests (except, of course, the
>>>> last one :> )
>
>>> That's exactly the problem. If they can, you have to assume they will.
>>> Which renders the entire scheme moot.
>
>> By the same token, "voluntary multitasking" (each task
>> holding the CPU for as long as it wants before relinquishing
>> control to the next eligible task) "won't work". :>
>
> And guess what: ultimately it doesn't. If that needed proving,
> experience with less-than-perfectly-behaved Windows 3.x programs has
> done just that to very convincing level.
You're making my point! Voluntary multitasking *works* when a *fixed* developer (or group of developers) undertakes the entire system implementation. They are aware of the risks and *own* the design -- if they opt to bend the rules for a particular portion of the design, they are aware of the costs/benefits.

But, once you open the doors to potentially noncooperative developers, the potential for co-executing applications goes out the window (Windows 3 was really just a glorified "swap one executing task for another").

If, on the other hand, Windows 3 had mechanisms whereby it could (and WOULD) terminate offensive tasks/applications, then the experience would have been very different (i.e., people would be complaining to the offending application vendor that *their* product "didn't work" instead of complaining that "Windows locked up, was unresponsive, sluggish, etc.")
>> This is fine in a closed-end, fixed functionality product. But,
>> if you open the product up to other applications, you run the
>> risk of those other applications misbehaving
>
> Exactly. So if you want to guarantee any kind of real-time behaviour,
> you have no choice but to disallow such misbehaviour right at the
> source, i.e. foresee killing processes or threads (or at least taking
> the CPU away from them forcibly).
I am doing that by informing tasks when the system is in need of resources and *hoping* they will relinquish the resources that they don't *need* (what you need and what you *use* are often very different).

The problem with my current implementation is that *other* developers can grin slyly and choose to ignore these "requests". The only recourse I currently have is the threat of unceremoniously killing the process. (This is something that Windows 3 lacked -- it was up to the *user* to take this action and few users would/could do so in practice.)

If, on the other hand, I can redefine the interface as: "Your current resources 'cost' X units. You only have Y units available. You *will* shed X-Y units worth of resources or *I* will take them from you in an unpredictable way (i.e., you will crash; your customers will think you are a crappy product and will think the other *cooperative* applications are *so* much BETTER than you!)"

I.e., there is no longer a voluntary aspect *except* the choice the kernel gives you to decide *which* resources you want to "sell off" (in avoidance of *taking* them from you indiscriminately).
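To make that concrete, here's a rough sketch (in C) of the *shape* such a protocol might take -- all names and numbers are invented for illustration, not my actual interface:

    /* Hypothetical "shed or be shot" protocol -- illustrative names
     * and numbers only, not a real kernel API. */
    #include <stdio.h>
    #include <stdbool.h>

    enum { NCACHE = 8 };

    /* optional, recreatable caches this task is holding (KB) */
    static unsigned cache_kb[NCACHE] = { 64, 64, 32, 32, 16, 16, 8, 8 };
    static unsigned budget_kb = 100;   /* what the kernel says we may hold */

    static unsigned holdings(void)
    {
        unsigned total = 0;
        for (int i = 0; i < NCACHE; i++)
            total += cache_kb[i];
        return total;
    }

    /* Kernel invokes this when it wants resources back.  Returning
     * false invites an unceremonious kill. */
    static bool on_resource_pressure(void)
    {
        for (int i = 0; i < NCACHE && holdings() > budget_kb; i++)
            cache_kb[i] = 0;   /* *we* choose what to sell off; we can
                                * recreate it later if/when needed */
        return holdings() <= budget_kb;
    }

    int main(void)
    {
        printf("holding %u KB, budget %u KB\n", holdings(), budget_kb);
        if (on_resource_pressure())
            printf("complied: now holding %u KB\n", holdings());
        else
            printf("couldn't comply -- expect to be killed\n");
        return 0;
    }

The point is that the *task* picks which resources to shed; the kernel only sets the budget and enforces the consequence.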
> Once we agree you need to do that anyway, there's little, if anything,
> to be gained from offering other mechanisms on top of it. They only add
> complexity for little gain. You'll likely end up spending more effort in
I see *lots* of potential gain!

This flexibility lets applications in a lightly loaded system achieve greater levels of performance than they would if they had been artificially constrained to the level of resource utilization required to coexist in a "fully loaded" system (i.e., every app running concurrently). And, it still gives the user the ability to dynamically add new tasks without explicitly having to "restart" or "reconfigure" the previously running applications.

The hardware tries to be maximally utilized at all times instead of innocently idled in preparation for an application that *might* be started a few milliseconds from now... or now... or now... (or NEVER!)
> the kernel on getting all those niceties handled than it would have
> taken the application to just do the job at full throttle.
Hi Jon,

On 12/30/2010 5:16 PM, Jon Kirwan wrote:
[quoted text elided]

> Only one comment. I don't remember ever seeing a KSR-33. I
> have seen, used, and even repaired (I say very modestly,
> because whoever designed the darned thing most assuredly went
> right into a mental hospital afterwards) a KSR-35. Note the
> -35. The only -33's I ever saw were of the ASR (with tape
> punch and reader) variety. Not that KSR-33's didn't exist. I
> wouldn't know. I just never saw one. :)
KSR just didn't have the tape.
> Still trying to imagine the windowed -33, though. I can
> still readily read ASCII punches off of punch tape, so I'm
> trying to think how it might be done using the tape punch,
> now. ;)
For the sheer perversity of my comment...

Imagine a curses application. Now, imagine a windowed environment layered on curses (I have deployed products like this -- very effective and economical -- though not GRAPHICAL!). Now, imagine that same environment running on a fast, though *dumb* TTY (e.g., a PTY). Finally, imagine it running on a HARD COPY device :-/

(I once wrote a file system driver layered atop a *block* interface to a 9 track tape drive. It was hilarious watching the transport "seek" each sector fetched from the "filesystem"! Perverse sense of humor...)
Hi Robert,

[gack!  long post -- your name isn't, perchance, YUNISKIS, is it??  :> ]

On 12/30/2010 3:35 AM, robertwessel2@yahoo.com wrote:
> On Dec 30, 2:43 am, D Yuniskis<not.going.to...@seen.com> wrote:
[snip]
>> What I would like is a scheme whereby applications can decide
>> how much the resources they are holding are "worth". And,
>> then "buy" those resources (with some sort of constrained
>> "purse"). This gives the "kernel" a bit more flexibility
>> in handling resource reallocation -- e.g., *it* sets the
>> current price for the resources and can change that price
>> dynamically to "entice" tasks to shed resources (by making
>> it increasingly expensive to hold onto them)
>>
>> [there are some fairness issues I still need to think through...
>> e.g., if task A gave up a resource at a pricepoint of K units
>> and the kernel then raises the pricepoint to L units, should
>> task A be compensated for his "early sale"?]
>>
>> Of course, this approach kicks the can down the road a bit
>> as it now requires me to decide on how much "cash" each
>> task is given (and if it can vary).
>>
>> [remember, my goal is always to NOT have to ask the user
>> to make these decisions -- especially at the instant the
>> resource shortage manifests!]
>
> Turning this into an economic problem is certainly an interesting
<shrug> Well, it isn't really "economics" but, rather, a simple model to try to apply -- one that might be easier to pitch to users...
> approach. In that sense you'd be looking to create markets where
> tasks can bid on resources. The exact units of the cash is not
> important, just their relative assignments to the task (IOW, if task 1
> is twice as important as task 2, it should get double the budget of
> cash units). Then a task might bid 1/3 of its budget for memory.
Hmmm, this (and below) is a different approach than I had considered. Rather, I was driving it from the other end (i.e., the kernel telling you to shed resources you can't "afford" -- since *it* knows what your budget is and what you have currently "spent").
> Something like a Dutch auction might provide the correct model, but it
> would have to be more or less continuous instead of a single event.
> IOW, you'd basically need to repeat the auction every time an
> additional resource was allocated (or freed). A Dutch auction is
Yes -- but actually only when there is some task that needs something that it doesn't already have. E.g., if the free memory pool isn't empty and a task asks for a chunk of memory that the kernel can satisfy from this pool, then there is no need to involve the other tasks -- they can keep the resources they have.
> typically used when you have multiple items for sale (let's say the OS
> has one million bytes of memory). You rank all the bids, and assuming
> you have more bids than items, the highest N bids win, and every
> bidder is charged the Nth highest bid. Let's say you have three items
> and bids of $1, $2, $3, $4 and $5. The $3, $4 and $5 bidders get the
> three items, and each pays $3. Usually there's an obvious extension
> for bidding on several items (I want M at $x each, which is basically
> shorthand for M separate single item bids for $x). Often there's a
Understood -- having played that game on eBay in years past.
> reserve price, below which the items won't be sold (or equivalently
> there's a minimum bid). If there are fewer bidders than items,
> usually everyone gets the items at either the reserve price or the
> lowest bid.
>
> As the auction progresses, you'd have tasks see their bids drop off
> the bottom of the "win" group. They can then increase their bids,
> possibly by allocating more of their budget (and thus freeing
> resources elsewhere - which would trigger rebidding in *those*
> markets), or possibly by requesting fewer resources (thus being able
> to bid higher on a unit basis).
>
> Once the bidding stabilizes, everyone gets their allocations, and
> things are stable until demand changes.
Understood.
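As an aside, that clearing rule is compact enough to state in code. A sketch using your $1..$5 example -- nothing more than illustration:

    /* The clearing rule: highest N bids win, all winners pay the
     * Nth-highest bid.  Sketch only. */
    #include <stdio.h>
    #include <stdlib.h>

    static int desc(const void *a, const void *b)
    {
        return *(const int *)b - *(const int *)a;   /* descending */
    }

    int main(void)
    {
        int bids[] = { 1, 2, 3, 4, 5 };     /* one bid per task */
        int nbids  = sizeof bids / sizeof bids[0];
        int items  = 3;                     /* units for sale */

        qsort(bids, nbids, sizeof bids[0], desc);

        int winners = items < nbids ? items : nbids;
        int price   = bids[winners - 1];    /* Nth-highest bid */

        /* the $3, $4 and $5 bidders win; each pays $3 */
        for (int i = 0; i < winners; i++)
            printf("bid %d wins, pays %d\n", bids[i], price);
        return 0;
    }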
> Hopefully.
>
> With tasks moving budget around multiple markets convergence could be
> a real issue if extra constraints were not added. One thing you could
> do would be to *not* allow bidders currently in the provisional "win"
> subset to retract their bids - that guarantees any single market
> eventually converges (eventually no one can place a higher bid), but
Yes. The problem is consequential to having three different resources to "manage" (bid on). And, it can be complicated by the introduction of "memory currency" vs. "power currency" vs. "timeslice currency" (though a single currency means you have to arbitrarily decide the relative worth of power/time/space).
> see linked allocations below. But you must be able to guarantee that
> an auction ends (even if another one starts right after, an auction
> must end, or at least come to a state where the items are doled out
> before bidding and reallocation continues). In the real world this
> often happens with a time limit, but that may be difficult to apply.
>
> A problem this ignores is that there may be substantial deallocation
Yes. Since the tasks (bidders) bear those costs, it is in their best interests to minimize them -- unless it *isn't* (i.e., the complexity might have some value for particular tasks).
> costs - for example, let's say the database task has a million pages
> allocated to cache - it can release those, but it's going to have to
> do a heck of a lot of work to flush those to disk. That points out
No disk (in this implementation). So, memory just gets discarded -- the application mainly has to keep track of how it *created* the stuff in that discarded chunk of memory so that it can recreate it when/*if* needed.
> another issue, freeing up those resources may be slow. Similarly
> processes that cannot bid high enough to continue running typically
> would have to abandon the work they've done as they suspend or even
> terminate themselves. These would correspond to externalities in
> economics.
<frown> Sorry, clueless as to "Economic Theory" and terminology thereof. :> But, I think I understand your point.
> Another problem is linked allocations - a task may want N units of
> resource A only if it can also get M units of resource B. If it gets
> outbid for resource B, it may want to give up some (or all) of its bids
> on resource A, even if it's currently a provisional winning bidder for
> A. That too can lead to cycles - dropping the number of units being
> bid for A will likely cause the selling price on A to drop, which will
> give the task more budget to spend in the next round, and if two tasks
> have different demand ratios for A and B (IOW one task wants 2A:1B and
> the other 1A:2B), they may spend a lot of time cycling back and forth.
Understood. One "solution" would be to let them submit multiple bids -- each as a tuple. (I hadn't previously considered dealing with *all* resources concurrently; rather, I thought each would be managed independently and the kernel -- or its agent -- would keep (re)setting the price for each resource.)

I wonder if N bids would suffice for N resources? I.e., "what's your highest set of bids for resource A? B? C?" and then having the "winner" picked. (Q: do you compare all A bids with all other A bids? Or, any one bid from task A with all *other* bids from other tasks, etc.?)
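Something like the following is what I have in mind for a "tuple" bid -- a hypothetical sketch, names and numbers invented:

    /* A "tuple" bid: linked amounts of all three resources, priced
     * as a unit.  Hypothetical names and numbers. */
    #include <stdio.h>
    #include <stddef.h>

    typedef struct {
        unsigned mem_kb;    /* memory wanted        */
        unsigned cpu_pct;   /* share of CPU wanted  */
        unsigned power_mw;  /* power allowance      */
        unsigned offer;     /* price for the tuple  */
    } bid_t;

    /* alternatives, best first: decode-ahead, steady state, survival */
    static const bid_t my_bids[] = {
        { 512, 20, 150, 90 },
        { 128, 10,  80, 40 },
        {  32,  5,  50, 15 },
    };

    /* kernel side: grant the first tuple the task can afford */
    static const bid_t *grant(const bid_t *bids, size_t n, unsigned purse)
    {
        for (size_t i = 0; i < n; i++)
            if (bids[i].offer <= purse)
                return &bids[i];
        return NULL;    /* can't afford even survival mode */
    }

    int main(void)
    {
        const bid_t *b = grant(my_bids, 3, 50);
        if (b)
            printf("granted %u KB / %u%% CPU / %u mW for %u units\n",
                   b->mem_kb, b->cpu_pct, b->power_mw, b->offer);
        return 0;
    }

(Submitting the alternatives "best first" might sidestep some of the cycling you describe -- the task pre-commits to its fallback positions instead of rebidding round after round.)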
> A more complex market would be multiple sellers and multiple buyers,
> and you'd have something more like a trading pit on a stock exchange,
> but I think convergence gets even hairier there.
>
> I think this would be academically interesting to explore, but far too
> complex to use in the real world (or at least in real computer
> systems, it obviously *is* used in the *real* world). Far better to
> just assure that your system has sufficient resources, and that a few
> resources have big buffers with useful (but not critical) secondary
> uses (like memory used for disk caching) that can be throttled
> cleanly. In my experience excessive management of those last few
> percent of resources is very hard to do well, and provides very little
> gain at the end of the day, for a huge amount of trouble. Pick a
> simple scheme.
I don't think there is a "simple scheme" when dealing with these sorts of resources. E.g., how do you manage a power budget on a portable device "simply" -- yet provide maximal *service* to the user?
> One principle I've followed fairly firmly (in far simpler schemes) is
> to apply a certain amount of hysteresis to resource allocation
> decisions - constantly yanking stuff around is not useful. But doing
Correct.
> that to some general principle is hard, usually you end up with some
> fairly arbitrary heuristics. For example, a system I work on has
> worker threads servicing queues of work. A queue can have zero or
> more workers assigned to it, and the assignment is reviewed based on
> estimated workloads in the queues, and workers can be added and
> removed (the threads use limited resources to process work, and
> there's a non-trivial cost to starting up and shutting down workers).
> In some cases it's simple - a queue with several workers simply
> doesn't have enough work, and starts losing workers (eventually its
> last one if the queue is completely idle for long enough). Or if the
> resources are available a queue with more work than its current set of
> workers can handle can get more workers assigned without trouble. But
> when things start running short, workers can be taken away from queues
> if necessary (and that's broken into two pieces - for queues with
> work and zero workers, the reassignment process is fairly aggressive,
> even, as a last resort, taking the last worker from a queue still
> containing work - and a much gentler process for reallocating workers
> to queues that are merely overloaded, from queues that are less so,
> and that never leaves a queue without workers). But when the harsher
> stuff is happening the system is overloaded, and it's typically just
Understood. This is how I handle service providers -- deciding when to fork new threads in the server, etc. Startup costs for me to spawn "yet another thread" are pretty small -- allocate another stack, etc. (since it will be using the same code image that the other sibling threads are using).

But, the algorithms become more "art" than "science" and tuning becomes a big part of the design and deployment. I'm looking for a solution, here, that "finds its own sweet spot" instead of forcing me (or "something") to *impose* one.
> delaying the inevitable. But getting back to hysteresis, a queue
> that's been the subject of a management action (basically adding or
> removing a worker) is *not* subject to another management action for
> a defined interval.
Yes -- and the sizes of those intervals become "magic numbers" that are hard to "justify" (on paper) though easily *explained*.

I'm going to stew on the Dutch auction concept and see how easily that would work. Maybe run some simulations of competing tasks...

Thanks!
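P.S. For concreteness, the sort of settling guard you describe might look like the sketch below -- the interval itself is exactly the kind of magic number we're lamenting:

    /* Hysteresis on management actions: once a queue is touched,
     * leave it alone for a settling interval.  Sketch only. */
    #include <stdio.h>
    #include <stdbool.h>
    #include <time.h>

    #define SETTLE_SECONDS 5    /* arbitrary -- tuned, not derived */

    typedef struct {
        int    workers;
        time_t last_action;     /* when a worker was last added/removed */
    } queue_t;

    static bool may_manage(const queue_t *q)
    {
        return time(NULL) - q->last_action >= SETTLE_SECONDS;
    }

    static void add_worker(queue_t *q)
    {
        if (!may_manage(q))
            return;             /* still settling from the last action */
        q->workers++;
        q->last_action = time(NULL);
    }

    int main(void)
    {
        queue_t q = { 1, 0 };   /* last action long ago: settled */
        add_worker(&q);         /* allowed */
        add_worker(&q);         /* suppressed: inside the interval */
        printf("workers: %d\n", q.workers);    /* 2, not 3 */
        return 0;
    }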
On Thu, 30 Dec 2010 17:46:47 -0700, D Yuniskis
<not.going.to.be@seen.com> wrote:

[quoted text elided]

>> Only one comment. I don't remember ever seeing a KSR-33. I
>> have seen, used, and even repaired (I say very modestly,
>> because whoever designed the darned thing most assuredly went
>> right into a mental hospital afterwards) a KSR-35. Note the
>> -35. The only -33's I ever saw were of the ASR (with tape
>> punch and reader) variety. Not that KSR-33's didn't exist. I
>> wouldn't know. I just never saw one. :)
>
>KSR just didn't have the tape.
Oh, yes. I know what the designations meant. I just never saw one. By the time the -33 came out (after the -35), it seems everyone bought the tape unit to go with it. Never did see one without it.

People used to put a piece of paper tape into the dashpot and run the head all the way over to the opposite side and hit RETURN to slam and jam it. That was a pain, at times, to clear out.
>For the sheer perversity of my comment...
>
>Imagine a curses application. Now, imagine a windowed environment
>layered on curses (I have deployed products like this -- very
>effective and economical -- though not GRAPHICAL!). Now, imagine
>that same environment running on a fast, though *dumb* TTY
>(e.g., a PTY). Finally, imagine it running on a HARD COPY
>device :-/
Okay. I'm imagining this on a Diablo daisy wheel system with a split ink/erase tape dispenser, now. Sheesh.
>(I once wrote a file system driver layered atop a *block*
>interface to a 9 track tape drive. It was hilarious
>watching the transport "seek" each sector fetched from
>the "filesystem"! Perverse sense of humor...)
I used to write stuff for 800, 1600 and 6250 tape drives. Let's say, 'tried' anyway. You just made me remember the gaps and all the difficulties not having rewritten blocks "walk" into the gap and make further reading... a problem.

Jon
Hi Jon,

On 12/30/2010 9:15 PM, Jon Kirwan wrote:

[attributions elided]

>>> Only one comment. I don't remember ever seeing a KSR-33. [...]
>>
>> KSR just didn't have the tape.
>
> Oh, yes. I know what the designations meant. I just never
> saw one. By the time the -33 came out (after the -35), it
> seems everyone bought the tape unit to go with it. Never did
> see one without it.
I've seen a few of them at surplus equipment auctions, etc. (I still have an ASR-33). I suspect some may have been "de-A"-ed to become K's. If the tape punch or reader got munged, it was usually not something easily fixed (mechanical kludge).
> People used to put a piece of paper tape into the dashpot and
> run the head all the way over to the opposite side and hit
> RETURN to slam and jam it. That was a pain, at times, to
> clear out.
I can recall writing programs to print "ticker tape ASCII". Mindless effort but always amusing to see some "message" come streaming off the PTP.
> Okay. I'm imagining this on a Diablo daisy wheel system with
> a split ink/erase tape dispenser, now. Sheesh.
I.e., you *can* do it -- but you wonder why anyone *would*! Sort of like a rotary dial telephone that generates touchtones.
>> (I once wrote a file system driver layered atop a *block*
>> interface to a 9 track tape drive. [...])
>
> I used to write stuff for 800, 1600 and 6250 tape drives.
> Let's say, 'tried' anyway. You just made me remember the
> gaps and all the difficulties not having rewritten blocks
> "walk" into the gap and make further reading... a problem.
My approach only (pseudo-)reliably worked for R/O filesystems. I would build an image of a filesystem and then write it to the tape. Then, mount it R/O and watch the reels grind. Gotta wonder what sorts of currents were tossed around starting and stopping and reversing them as quickly as it did!

The "character" mode driver was the first one I had written (under NetBSD) so I did the block device as an "exercise". Watching the drive "thrash" is the same sort of "therapeutic relaxation" that one derives from watching a pen plotter (I guess "normal people" would watch tanks of fish???)
On Thu, 30 Dec 2010 13:29:17 -0700, D Yuniskis wrote:

> Yes, that is the ultimate goal. I can assign default "budgets" to each
> task and let the user modify those (within some range of constraints).
> The "currency" I proposed is one such mechanism.
>
> But, users can't understand "quantum size", "memory allocation",
> priority ceiling, etc. They just know "*this* is worth more to me than
> *that*". So, you need a dimensionless unit that lets them show their
> preferences without having to understand what is happening.
If you're talking about memory, the user could select the number of bytes. Some applications already do that, e.g. my browser has a preferences menu where I can set the number of MB for a cache. I think this works better than a dimensionless unit that I have no idea what it means.
> If your desktop PC (e.g.) popped up a message to the effect of
> "insufficient resources to handle all tasks", could *you* decide how to
> reallocate those resources if I gave you a table of "current memory
> allocation", "current run priority", etc.? Chances are, you would just
> kill some set of tasks until things started running again.
None of my desktop applications have ever shown me a message like that. I'm using virtual memory, and applications usually become too slow before the VM runs out, so I just kill them and restart them. Typically, such high memory consumption is caused by a memory leak in the application, in which case none of your fancy mechanisms would help.
> Now, imagine those tasks being smart enough to try to enhance their
> performance by taking extra (!) resources AS THEY ARE AVAILABLE.
Like others have said, try to come up with a number of realistic cases that cannot be solved in an easier way.
> So, perhaps an MP3 decoder "decodes ahead" and caches the next 10
> seconds of music so that it can "sleep" hereafter (this makes its
> timeslices available to other tasks! Its decision to "waste memory"
> could have positive impact on a task that wants more CPU time). Or,
I don't see how such a "bursty" MP3 decoder would be any better than an MP3 decoder that keeps a constant 1 second buffer.
> maybe a file system maintenance routine grabs an excess of *time* to
> expedite its check of the filesystem -- thereby removing itself from
> the list of "running" tasks sooner than it would otherwise (freeing
> memory *and* time for use by subsequent tasks).
Just run it at a low priority, and it'll grab whatever time is left over.
> In that environment, if the *tasks* don't cooperate (under some set of
> brokerage criteria), could *you*, the user, look at that augmented table
> of "current memory allocation", "current run priority", "MINIMUM
> REQUIRED MEMORY ALLOCATION" (i.e., how much "extra" is it currently
> using), "MINIMUM REQUIRED RUN PRIORITY" (i.e., how much elevated is its
> current priority), etc. and decide how to reallocate these resources??
No, I don't want to deal with that many details. A simple preference menu with a MB slider is good enough. If I get an application that doesn't behave, I'll uninstall it, and find a replacement. That strategy has worked fine so far.
> I'm assuming the application developer can. *IF* he is motivated to do
> so (by policies that can make his UNcooperative application look like
> *crap* if he doesn't comply)
Certainly. A crappy developer will simply ignore all requests from the kernel to reduce resource usage. A good developer won't need such requests, because the application will behave nicely.
On Dec 30, 10:01 pm, D Yuniskis <not.going.to...@seen.com> wrote:
> Hi Robert,
>
> [gack!  long post -- your name isn't, perchance, YUNISKIS, is it??  :> ]
Not AFAIK... ;-)
> On 12/30/2010 3:35 AM, robertwess...@yahoo.com wrote:
> > Turning this into an economic problem is certainly an interesting
>
> <shrug>  Well, it isn't really "economics" but, rather, a
> simple model to try to apply -- one that might be easier
> to pitch to users...
>
> > approach.  In that sense you'd be looking to create markets where
> > tasks can bid on resources.  The exact units of the cash is not
> > important, just their relative assignments to the task (IOW, if task 1
> > is twice as important as task 2, it should get double the budget of
> > cash units).  Then a task might bid 1/3 of its budget for memory.
>
> Hmmm, this (and below) is a different approach than I
> had considered.  Rather, I was driving it from the
> other end (i.e., the kernel telling you to shed
> resources you can't "afford" -- since *it* knows
> what your budget is and what you have currently
> "spent").
Actually it *is* economics, and that was, I think, my major point. Money and markets in the real world are fundamentally resource allocation mechanisms. And all the complexity and variations that go with that. Central planning, free markets, socialism, laissez-faire, capitalism, government intervention, all end up with analogs in a scheme like this. And many of the standard models clearly could be applied with little or no adaptation. Of course I still think it's far too complex (and unpredictable) to use in most computer systems. But it would be an interesting thing to study and develop.
> > another issue, freeing up those resources may be slow.  Similarly
> > processes that cannot bid high enough to continue running typically
> > would have to abandon the work they've done as they suspend or even
> > terminate themselves.  These would correspond to externalities in
> > economics.
>
> <frown> Sorry, clueless as to "Economic Theory" and terminology
> thereof.  :>  But, I think I understand your point.
An externality is a cost (or benefit) that's not captured in the transaction, and is often borne by others, thus distorting the true price of the transaction, leading to incorrect economic decisions.

For example, a factory discharging pollutants into a stream is not seeing the full costs of that pollution, but the people downstream from there will pay the price. Let's say that discharge ends up causing health and other issues that cost* directly and indirectly $10 million. Since the factory is not paying that $10 million, its cost to dispose of that waste is $10 million too low, and thus it does not make the "correct" economic decision to not use that form of discharge unless the alternatives cost more than $10 million.

*Ignoring the morality of assigning dollar values to human suffering (and remembering that we do it all the time anyway)
Hi Arlet,

On 12/31/2010 12:32 AM, Arlet Ottens wrote:
> On Thu, 30 Dec 2010 13:29:17 -0700, D Yuniskis wrote:
>
>> Yes, that is the ultimate goal. I can assign default "budgets" to each
>> task and let the user modify those (within some range of constraints).
>> The "currency" I proposed is one such mechanism.
>>
>> But, users can't understand "quantum size", "memory allocation",
>> priority ceiling, etc. They just know "*this* is worth more to me than
>> *that*". So, you need a dimensionless unit that lets them show their
>> preferences without having to understand what is happening.
>
> If you're talking about memory, the user could select the number of
> bytes. Some applications already do that, e.g. my browser has a
> preferences menu where I can set the number of MB for a cache. I think
And where do you specify how much CPU you get to use? And how much physical memory? And how much of the devices *power* budget is consumed by this application?
> this works better than a dimensionless unit that I have no idea what it
> means.
The "dimensionless unit" allows the user to rate the relative importance of individual applications. Obviously, application developers would have to provide a scale for them to use to put this in perspective. Note that this scale also gives them an idea of how "efficient" the application is.
>> If your desktop PC (e.g.) popped up a message to the effect of
>> "insufficient resources to handle all tasks", could *you* decide how to
>> reallocate those resources if I gave you a table of "current memory
>> allocation", "current run priority", etc.? Chances are, you would just
>> kill some set of tasks until things started running again.
>
> None of my desktop applications have ever shown me a message like that.
> I'm using virtual memory, and applications usually become too slow before
> the VM runs out, so I just kill them and restart them. Typically, such
> high memory consumption is caused by a memory leak in the application, in
> which case none of your fancy mechanisms would help.
Actually, my fancy mechanisms would help in exactly this case! When asked (told) to relinquish resources, your leaky application would throw up its hands and say "I've released *everything*!". The kernel can then say, "You're a resource hog. I'm terminating you!" (instead of waiting for the user to *somehow* discover how many resources that task has "tied up").

[note in my system, all your resources are tracked by the OS -- as it must be able to kill you and reclaim those resources. So, the OS knows more about "you" than *you* do]
>> Now, imagine those tasks being smart enough to try to enhance their
>> performance by taking extra (!) resources AS THEY ARE AVAILABLE.
>
> Like others have said, try to come up with a number of realistic cases
> that cannot be solved in an easier way.
How do you define "easier" -- "add more hardware until you have safeguarded against every potentiality... regardless of how unlikely they might be in practice"? If you can get away with that with your marketing folks (and with your *market*!), GREAT! You can write your applications in Java, run them on a 3GHz machine with a 500AHr lead acid car battery mounted on a motorized cart (to transport it). :>
>> So, perhaps an MP3 decoder "decodes ahead" and caches the next 10
>> seconds of music so that it can "sleep" hereafter (this makes its
>> timeslices available to other tasks! Its decision to "waste memory"
>> could have positive impact on a task that wants more CPU time). Or,
>
> I don't see how such a "bursty" MP3 decoder would be any better than an
> MP3 decoder that keeps a constant 1 second buffer.
What if I want to skip forward 3 minutes? *You* have to read the next 2:59 of data from the MP3 file (which may be on the other end of a network connection) to *find* the 3 minute mark. If I've already decoded that 3 minutes of audio, I can produce it "instantly".

[imagine the same example but with a "VGA" VIDEO codec -- 10-20X the data requirements]

And why would you waste a sizable fraction of a megabyte on an MP3 buffer? Why not ~12K (IIRC, that's the smallest amount that you need -- apologies if I have misremembered this... my brain is frozen from babysitting all of the citrus trees -- to be *guaranteed* to produce audio)? Or, why not 100MB so you can buffer the entire side of an album/live concert? What's so magical about 1 second? Are you *sure* you'll be able to get the next second's worth of audio when you need it? What if some other higher priority task comes along and/or the network is tied up moving someone else's data with higher QoS guarantees?

In my case, I can take advantage of whatever resources are "lying idle". With your approach, you have to define "at compile time" (?) what resources you will use and *stick* with them (since you won't let the OS tell you when to relinquish them).
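To be concrete about what "elastic" buys you: something like the sketch below, where the kernel's current memory "price" drives the decode-ahead target (all numbers invented for illustration):

    /* "Elastic" decode-ahead: buffer whatever the current memory
     * price lets us afford, between a survival floor and a gluttony
     * ceiling.  Hypothetical numbers throughout. */
    #include <stdio.h>

    #define MIN_KB 12           /* roughly the floor to keep audio going */
    #define MAX_KB 102400       /* a whole concert side, if memory's free */

    static unsigned target_kb(unsigned price, unsigned purse)
    {
        unsigned affordable = price ? purse / price : MAX_KB;
        if (affordable < MIN_KB) return MIN_KB;  /* below this we stutter */
        if (affordable > MAX_KB) return MAX_KB;
        return affordable;
    }

    int main(void)
    {
        /* lightly loaded: memory is cheap, decode way ahead */
        printf("price  1: buffer %u KB\n", target_kb(1, 4096));
        /* kernel raises the price: shed toward the floor */
        printf("price 64: buffer %u KB\n", target_kb(64, 4096));
        return 0;
    }

Nothing is decided "at compile time" except the floor; the actual buffer tracks whatever the system can spare.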
>> maybe a file system maintenance routine grabs an excess of *time* to
>> expedite its check of the filesystem -- thereby removing itself from
>> the list of "running" tasks sooner than it would otherwise (freeing
>> memory *and* time for use by subsequent tasks).
>
> Just run it at a low priority, and it'll grab whatever time is left over.
What if it isn't inherently a low priority task? I.e., if it has to take *the* top level lock on the filesystem in order to guarantee that the filesystem remains quiescent during its scan, then you surely don't want it happening "whenever there is some free time".
>> In that environment, if the *tasks* don't cooperate (under some set of
>> brokerage criteria), could *you*, the user, look at that augmented table
>> of "current memory allocation", "current run priority", "MINIMUM
>> REQUIRED MEMORY ALLOCATION" (i.e., how much "extra" is it currently
>> using), "MINIMUM REQUIRED RUN PRIORITY" (i.e., how much elevated is its
>> current priority), etc. and decide how to reallocate these resources??
>
> No, I don't want to deal with that many details. A simple preference menu
> with a MB slider is good enough. If I get an application that doesn't
> behave, I'll uninstall it, and find a replacement. That strategy has
> worked fine so far.
Exactly! If an application DOESN'T BEHAVE (according to the rules laid out and ENFORCED by your OS), you stop using it. This is exactly the "stick" that I am trying to implement in my resource sharing scheme -- the thing that is missing from my "civilized" approach (i.e., expecting applications to cooperate).

The "currency" idea gives you that "slider" -- move it up and this task gets a greater "resource preference"; down and it gets less consideration. No need for you to even be concerned with the numerical details behind the slider's position!
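E.g., mapping slider positions to task budgets needs nothing fancier than this (illustrative sketch):

    /* Slider positions are dimensionless; budgets are just the
     * sliders normalized over the total currency.  Illustrative. */
    #include <stdio.h>

    int main(void)
    {
        unsigned slider[] = { 10, 5, 1 };   /* user's relative preferences */
        unsigned n = sizeof slider / sizeof slider[0];
        unsigned total_currency = 1600, sum = 0;

        for (unsigned i = 0; i < n; i++)
            sum += slider[i];
        for (unsigned i = 0; i < n; i++)
            printf("task %u budget: %u units\n",
                   i, total_currency * slider[i] / sum);
        return 0;
    }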
>> I'm assuming the application developer can. *IF* he is motivated to do
>> so (by policies that can make his UNcooperative application look like
>> *crap* if he doesn't comply)
>
> Certainly. A crappy developer will simply ignore all requests from the
> kernel to reduce resource usage. A good developer won't need such
> requests, because the application will behave nicely.
No. You are still thinking of static implementations. You would constrain each application to use some small portion of the total resources available in the device JUST to make sure the other applications could *also* use their small allocations *when* they happen to be co-executing with the first application (or, you will give each application full rein of the hardware and only allow *one* application to run at a time).

Turn off your paging file. Wire down all the physical memory in your machine. *Now* show me what you can run conveniently in that environment. Then you'll see the assumptions your developers made with that software ("Gee, I expected to have a near infinite disk drive to use as secondary storage. What do you mean there's no swap??")