EmbeddedRelated.com

Forewarning of resource inadequacies

Started by Don Y April 7, 2016
On 4/13/2016 2:36 AM, Boudewijn Dijkstra wrote:

>> regarding addressing
>> potential ("current") resource inadequacies when starting a task
>> (or, offering that capability *to* start that task to a user)?
>
> Depends who is initiating the task and who is responsible for resource
> availability.
>
> 1. In an open user-controlled environment, leave it to the user. This
> means giving the user the tools to determine resource usage and the option
> to shoot himself in the foot.
The issue isn't to prevent the user from shooting himself in the foot;
rather, the problem is not SURPRISING him at some later time that an
action he (appeared to) successfully initiated has not completed as he
had expected. Because something changed -- possibly the result of a
subsequent (or EARLIER, still running!) action on his part or that of
another.

We've all been "disappointed" to come back to some long-running,
computer-related activity at a later time -- only to discover that it
has abended, "unexpectedly". And, often ANGRY if the reason for this
could have been known when we STARTED the activity! (e.g., not enough
space on a device, file size exceeds maximum file size supported by
targeted filesystem, daily scheduled job at 00:00, etc.)

This suggests locking those resources at the start of the operation.
But, that can needlessly prevent other tasks from sneaking in, using
those resources and RELEASING them before they ACTUALLY are needed by
the first task. E.g., don't lock up the printer for the output you'll
be generating an hour from now cuz other tasks might want to print in
the meantime! [OTOH, if you leave the printer "available", you risk it
running out of paper from use by one of those other intervening tasks]

Bottom line, I don't see any one-size-fits-all solution. The fact that
tasks can take prolonged periods of time exacerbates the problem as it
allows the problem and its notification to be decoupled (in time) from
the task initiation -- the original user may no longer be "present" for
that notification!

  "The timer went off."
  "When??"
  "Oh, about 15 minutes ago."
  "Did you take the bread out of the oven?"
  "No, I didn't realize you were baking!"
  "Didn't you wonder why the timer was on and the oven was hot?"
  "Well, no."
  "Then why not tell me about it WHEN it happened? Isn't a timer the
  sort of thing/event that has an immediacy associated with it?"

etc.
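The "forewarn at start" idea can be sketched as a pre-flight check: before admitting a task, compare its declared needs against what is free RIGHT NOW and warn the user immediately. A minimal sketch in Python follows; `ResourceBroker` and its methods are hypothetical names, not any real API, and note that the check locks nothing -- it is only a snapshot, so a later shortage is still possible.

```python
class ResourceBroker:
    """Hypothetical broker tracking capacity vs. current consumption."""

    def __init__(self, capacity):
        self.capacity = dict(capacity)            # e.g. {"disk_mb": 500}
        self.in_use = {r: 0 for r in capacity}    # current consumption

    def available(self, resource):
        return self.capacity[resource] - self.in_use[resource]

    def preflight(self, needs):
        """Return the resources that are ALREADY inadequate for `needs`.
        An empty list means "looks fine right now" -- NOT a guarantee."""
        return [r for r, amount in needs.items()
                if self.available(r) < amount]

broker = ResourceBroker({"disk_mb": 500, "printer_jobs": 1})
broker.in_use["disk_mb"] = 400                 # someone else is using 400 MB
print(broker.preflight({"disk_mb": 200}))      # ['disk_mb']: warn the user NOW
print(broker.preflight({"disk_mb": 50}))       # []: no KNOWN problem (yet)
```

The point of the sketch is exactly the limitation discussed above: `preflight` can prevent the "known at start" surprises, but says nothing about resources drained by intervening tasks after admission.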
> 2. In more closed/rigid environments, make a best effort to inform the
> user. If system damage is possible, prevent it.
>> Again, these are only examples. The question is what criteria do you use
>> for alerting (and/or inhibiting!) the user when you know that it is likely
>> that he won't be able to perform the desired task WITH THE SYSTEM IN ITS
>> CURRENT STATE -- and *when* do you impose those notifications?
On Wed, 13 Apr 2016 15:51:10 +0200, Don Y
<blockedofcourse@foo.invalid> wrote:
> On 4/13/2016 2:36 AM, Boudewijn Dijkstra wrote:
>
> [...] the problem is not SURPRISING him at some later time that an
> action he (appeared to) successfully initiated has not completed as he
> had expected. [...]
>
> This suggests locking those resources at the start of the operation.
No, it suggests reserving them. For a complex operation, it suggests
locking until the exact resource needs have been determined.

--
(Remove the obvious prefix to reply privately.)
Made with Opera's e-mail client: http://www.opera.com/mail/
On 4/14/2016 5:43 AM, Boudewijn Dijkstra wrote:
> On Wed, 13 Apr 2016 15:51:10 +0200, Don Y
> <blockedofcourse@foo.invalid> wrote:
>> On 4/13/2016 2:36 AM, Boudewijn Dijkstra wrote:
>>
>> [...] the problem is not SURPRISING him at some later time that an
>> action he (appeared to) successfully initiated has not completed as he
>> had expected. [...]
>>
>> This suggests locking those resources at the start of the operation.
>
> No, it suggests reserving them. For a complex operation, it suggests
> locking until the exact resource needs have been determined.
This ties up those resources for the length of the operation --
preventing other similar (unprivileged) tasks from using those resources
even if their use may be brief and transitory -- without jeopardizing
the execution of this first task.

I.e., that's the nature of the problem:
- if you want to be able to tell the user that his task *will* execute,
  then you have to impose the same sorts of reservations that you would
  for privileged tasks (i.e., at the expense of other tasks that the
  user may want to execute -- you can't risk "gambling" that things will
  work out to the satisfaction of all his tasks)
- if you want to maximize potential utility of resources (for an
  indeterminate set of possible user tasks), then you can't give
  assurances at task activation -- because you can't (don't want to)
  follow up by imposing those restrictions on the resource use

I.e., the potential for maximizing utilization comes with the inherent
risk of a potential (future) shortage; if you go that route, you must be
willing to inform the user of that possibility WHEN (if) it later occurs
-- even if that is impractical.

That's the "no free lunch" aspect.

So, you could add heuristics to implement "partial" gambling (to
maximize utilization) -- i.e., make those guarantees for long (whatever
that means) operations where the user may walk away, lose interest or
forget about the task's activation, but NOT for short-lived operations
where you HOPE the user is still around for any potential notification.

I.e., the user doesn't have a consistent interface/relationship with the
system: sometimes he KNOWS that a task will complete simply because it
was accepted and started; other times, he might be surprised to be
BELATEDLY (though not *too* late?) informed that a task that HAD started
won't be able to complete.

The user has to "learn" how to differentiate between these two types of
tasks. Or, the system must be able to tell him (at or prior to
activation).
*Or*, the user must be given the OPTION of "that guarantee":

  "This is going to take a while to complete. I can guarantee
  completion but only at the expense of other tasks that you may
  elect to activate (or, have previously activated 'conditionally').
  Would you like to take advantage of this capability?"

But, again, this makes some tasks different (in terms of what the user
experiences when activating them). And, there's no way for the user to
know which tasks those might be (unless I consistently ask -- even for
short-lived tasks that will PROBABLY be able to complete unimpeded).

It's a sort of:

  "Are you sure you want to do that?"
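The two admission policies being contrasted here can be sketched side by side (a Python sketch with hypothetical names, not anyone's actual design): a "guaranteed" start reserves the task's whole requirement up front and may be refused immediately, while an "optimistic" start always admits the task -- but its later allocations can fail, which is exactly the surprise case.

```python
class Pool:
    """Hypothetical pool of one fungible resource (say, MB of disk)."""

    def __init__(self, total):
        self.total = total        # total units available
        self.reserved = 0         # earmarked for guaranteed tasks
        self.used = 0             # actually consumed

    def start_guaranteed(self, need):
        """Refuse NOW unless the task's whole requirement can be set aside."""
        if self.total - self.reserved - self.used < need:
            return False
        self.reserved += need
        return True

    def start_optimistic(self):
        """Always admit; the caller gambles on allocate() succeeding later."""
        return True

    def allocate(self, amount, from_reservation=0):
        """Consume resource mid-task; guaranteed tasks draw down their
        reservation, optimistic ones compete for whatever is left."""
        self.reserved -= from_reservation
        if self.total - self.reserved - self.used < amount:
            self.reserved += from_reservation   # undo; report the failure
            return False
        self.used += amount
        return True

pool = Pool(total=100)
print(pool.start_guaranteed(80))    # True: task A is promised its 80
print(pool.start_guaranteed(30))    # False: task B refused up front -- honest
print(pool.start_optimistic())      # True: task C admitted anyway...
print(pool.allocate(30))            # False: ...and surprised mid-task
```

The refusal of task B up front is the cost of A's guarantee; the failure of task C mid-task is the cost of maximizing utilization. The sketch makes the "no free lunch" trade concrete: you pick where the disappointment lands, not whether it exists.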
On Thu, 14 Apr 2016 15:37:15 +0200, Don Y
<blockedofcourse@foo.invalid> wrote:
> On 4/14/2016 5:43 AM, Boudewijn Dijkstra wrote:
>> On Wed, 13 Apr 2016 15:51:10 +0200, Don Y
>> <blockedofcourse@foo.invalid> wrote:
>>> On 4/13/2016 2:36 AM, Boudewijn Dijkstra wrote:
>>>
>>> [...] the problem is not SURPRISING him at some later time that an
>>> action he (appeared to) successfully initiated has not completed as he
>>> had expected. [...]
>>>
>>> This suggests locking those resources at the start of the operation.
>>
>> No, it suggests reserving them. For a complex operation, it suggests
>> locking until the exact resource needs have been determined.
>
> This ties up those resources for the length of the operation --
> preventing other similar (unprivileged) tasks from using those resources
> even if their use may be brief and transitory -- without jeopardizing
> the execution of this first task.
It depends on how finely these tasks can specify their resource usage.
> I.e., that's the nature of the problem:
> - if you want to be able to tell the user that his task *will* execute,
>   then you have to impose the same sorts of reservations that you would
>   for privileged tasks (i.e., at the expense of other tasks that the
>   user may want to execute -- you can't risk "gambling" that things
>   will work out to the satisfaction of all his tasks)
Exactly, you can't predict the future without knowing what will happen.
> - if you want to maximize potential utility of resources (for an
>   indeterminate set of possible user tasks), then you can't give
>   assurances at task activation -- because you can't (don't want to)
>   follow up by imposing those restrictions on the resource use
I was thinking it would be perfectly fine to impose those restrictions
for cheap resources like, e.g., disk space. If reporting about resource
inadequacies is important, then the task itself is important and
deserves special treatment.
> I.e., the potential for maximizing utilization comes with the inherent
> risk of a potential (future) shortage; if you go that route, you must be
> willing to inform the user of that possibility WHEN (if) it later occurs
> -- even if that is impractical.
>
> That's the "no free lunch" aspect.
>
> So, even adding heuristics to implement "partial" gambling (to maximize
> utilization) -- i.e., make those guarantees for long (whatever that
> means) operations where the user may walk away, lose interest or forget
> about the task's activation but NOT for short-lived operations where
> you HOPE the user is still around for any potential notification.
>
> I.e., the user doesn't have a consistent interface/relationship with the
> system: sometimes he KNOWS that a task will complete simply because it
> was accepted and started; other times, he might be surprised to be
> BELATEDLY (though not *too* late?) informed that a task that HAD started
> won't be able to complete.
I think there's one thing at play here that you haven't explicitly mentioned: confidence factor. More confidence is less surprise. During a task, the system could report its confidence of completion.
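One naive way to compute the "confidence of completion" suggested here (an illustrative assumption, not the poster's design) is to compare what the task still needs against what is currently free, and report that snapshot as the task runs:

```python
def completion_confidence(still_needed, currently_free):
    """Report 1.0 when everything still needed is free right now;
    degrade toward 0 as the shortfall grows. Purely a snapshot of the
    CURRENT state -- it cannot see the future, so 1.0 is not a promise."""
    if still_needed <= 0:
        return 1.0                 # nothing left to need: certain
    return min(1.0, currently_free / still_needed)

print(completion_confidence(0, 0))       # 1.0 -- task is effectively done
print(completion_confidence(100, 100))   # 1.0 -- all remaining needs met NOW
print(completion_confidence(100, 50))    # 0.5 -- only half of what's needed is free
```

Note that this only relocates the problem discussed above: the number is honest about the present, but a reported 1.0 can still collapse the moment another task drains the pool.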
> The user has to "learn" how to differentiate between these two types of
> tasks. Or, the system must be able to tell him (at or prior to
> activation).
>
> *Or*, the user must be given the OPTION of "that guarantee":
>   "This is going to take a while to complete. I can guarantee
>   completion but only at the expense of other tasks that you may
>   elect to activate (or, have previously activated 'conditionally').
>   Would you like to take advantage of this capability?"
So: start job at 100% confidence.
> But, again, this makes some tasks different (in terms of what the user
> experiences when activating them). And, no way for the user to know
> which tasks those might be (unless I consistently ask -- even for
> short-lived tasks that will PROBABLY be able to complete unimpeded).
If the tasks are predefined, a rough confidence factor could be assigned to them.
> It's a sort of:
>   "Are you sure you want to do that?"
--
(Remove the obvious prefix to reply privately.)
Made with Opera's e-mail client: http://www.opera.com/mail/
On 4/15/2016 5:24 AM, Boudewijn Dijkstra wrote:
> On Thu, 14 Apr 2016 15:37:15 +0200, Don Y
> <blockedofcourse@foo.invalid> wrote:
>> On 4/14/2016 5:43 AM, Boudewijn Dijkstra wrote:
>>> On Wed, 13 Apr 2016 15:51:10 +0200, Don Y
>>> <blockedofcourse@foo.invalid> wrote:
>>>> On 4/13/2016 2:36 AM, Boudewijn Dijkstra wrote:
>>>>
>>>> [...] the problem is not SURPRISING him at some later time that an
>>>> action he (appeared to) successfully initiated has not completed as
>>>> he had expected. [...]
>>>>
>>>> This suggests locking those resources at the start of the operation.
>>>
>>> No, it suggests reserving them. For a complex operation, it suggests
>>> locking until the exact resource needs have been determined.
>>
>> This ties up those resources for the length of the operation --
>> preventing other similar (unprivileged) tasks from using those
>> resources even if their use may be brief and transitory -- without
>> jeopardizing the execution of this first task.
>
> It depends how fine-grained these tasks can specify their resource usage.
They can only specify their needs over the run of the entire job. I.e.,
they can't say "I need this for X minutes, then that for the next Y
minutes, etc." So, a job that might "think" for 10 minutes and then
consume gobs of disk in the last 30 seconds looks the same as one that
eats the same amount of disk at a steady pace.

For tasks that MUST run, you have to treat their requirements as a
block, as you can't (usually) bias their scheduling wrt other similar
tasks to exploit any complementary overlaps.

For a "hard-wired" implementation, you can embed *your* knowledge of the
resource interplay of different "consumers" into the use of those
resources. E.g., to emulate that sort of capability, I have to wrap
complementary tasks into a single "pseudo-task" and specify the resource
requirements of that pseudo-task as MAX(task1, task2, ...); then, ensure
that I only activate one of them at a time.

[There's no way for me to tell the system: this task requires XYZ but
only if neither task A nor B is active -- otherwise it requires 0.]
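The pseudo-task workaround described above reduces to a small computation: since the system can't express "X needs this only while A and B are idle", declare one wrapper whose requirement is the per-resource maximum over the mutually exclusive members. A sketch (illustrative names only):

```python
def pseudo_task_needs(task_needs):
    """task_needs: list of dicts, one per mutually exclusive task,
    mapping resource name -> amount required.
    Returns the per-resource MAX -- what the wrapping pseudo-task must
    declare so that WHICHEVER single member runs, it fits."""
    combined = {}
    for needs in task_needs:
        for resource, amount in needs.items():
            combined[resource] = max(combined.get(resource, 0), amount)
    return combined

task_a = {"disk_mb": 200, "cpu_pct": 10}    # disk-hungry member
task_b = {"disk_mb": 50,  "cpu_pct": 80}    # CPU-hungry member
print(pseudo_task_needs([task_a, task_b]))
# {'disk_mb': 200, 'cpu_pct': 80}
```

The cost of the workaround is visible in the result: the pseudo-task reserves 200 MB *and* 80% CPU even though no single member ever needs both, which is exactly the over-reservation the post is complaining about.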
>> I.e., that's the nature of the problem:
>> - if you want to be able to tell the user that his task *will* execute,
>>   then you have to impose the same sorts of reservations that you would
>>   for privileged tasks (i.e., at the expense of other tasks that the
>>   user may want to execute -- you can't risk "gambling" that things
>>   will work out to the satisfaction of all his tasks)
>
> Exactly, you can't predict the future without knowing what will happen.
And, neither can the user! But, the machine won't get emotional if something it wanted to do COULDN'T complete. A user, OTOH, can become annoyed if "surprised" at some later time ("Why didn't you tell me there wasn't enough disk space BEFORE you started working on it?")
>> - if you want to maximize potential utility of resources (for an
>>   indeterminate set of possible user tasks), then you can't give
>>   assurances at task activation -- because you can't (don't want to)
>>   follow up by imposing those restrictions on the resource use
>
> I was thinking it would be perfectly fine to impose those restrictions
> for cheap resources like e.g. disk space. If reporting about resource
> inadequacies is important, then the task itself is important and
> deserves special treatment.
That's specious reasoning. You're equating "important" and "convenient".

If I try to move a folder onto a thumb drive and, some minutes later,
get informed that there is no space left on the device, it's
INCONVENIENT; I now have to move everything back and find a bigger
target drive. I might be able to defer that activity for days or weeks
(I may have to purchase a larger thumb drive!) -- hardly "important".
Or, if I want it done immediately, I may have to rearrange what's on the
thumb drive (make space) *or* consider cutting the folder into two
logical pieces that "make sense" to move onto two different media.

OTOH, if it's a laptop running on battery and that battery is nearly
depleted, it is "important" that everything be shut down in an orderly
fashion if I don't want to risk losing something. More significant than
"convenient"!

I can deal with "important" by simply reserving resources in the
admittance process. I'd like NOT to have to treat everything that might
become "inconvenient" as "important"!
>> I.e., the potential for maximizing utilization comes with the inherent
>> risk of a potential (future) shortage; if you go that route, you must
>> be willing to inform the user of that possibility WHEN (if) it later
>> occurs -- even if that is impractical.
>>
>> That's the "no free lunch" aspect.
>>
>> So, even adding heuristics to implement "partial" gambling (to maximize
>> utilization) -- i.e., make those guarantees for long (whatever that
>> means) operations where the user may walk away, lose interest or forget
>> about the task's activation but NOT for short-lived operations where
>> you HOPE the user is still around for any potential notification.
>>
>> I.e., the user doesn't have a consistent interface/relationship with
>> the system: sometimes he KNOWS that a task will complete simply because
>> it was accepted and started; other times, he might be surprised to be
>> BELATEDLY (though not *too* late?) informed that a task that HAD
>> started won't be able to complete.
>
> I think there's one thing at play here that you haven't explicitly
> mentioned: confidence factor. More confidence is less surprise. During
> a task, the system could report its confidence of completion.
You're assuming you have the user's attention for that entire period.

If you activate a "print" job, you typically don't sit there watching
pages come out of the printer, one at a time, until the job is finished.
You may move on to some other task -- perhaps not even involving The
Computer. Or, depart (go to bed, etc.).

If the printer runs out of supplies (paper/ink) at some point prior to
completion, you are disappointed/annoyed when you eventually go to pick
up your FINISHED job. If the printer (and print service) doesn't let
you resume an interrupted (paper out) job, you have to restart the job
from scratch -- instead of from the point at which it prematurely
stopped.

If the printer *thinks* it did a great job printing but you notice all
sorts of visual artifacts on the pages (smearing/smudging/dropouts/etc.)
then there's a significant difference of opinion as to whether or not
the job was actually *done*! :>
>> The user has to "learn" how to differentiate between these two types of
>> tasks. Or, the system must be able to tell him (at or prior to
>> activation).
>>
>> *Or*, the user must be given the OPTION of "that guarantee":
>>   "This is going to take a while to complete. I can guarantee
>>   completion but only at the expense of other tasks that you may
>>   elect to activate (or, have previously activated 'conditionally').
>>   Would you like to take advantage of this capability?"
>
> So: start job at 100% confidence.
And what do I do when the user walks away THINKING everything is fine?
Only to return some time later to see "confidence = -10; job aborted"?

The problem is coming to "some understanding" with the user regarding
the quality of service provided and expected. And, "being fair" in
stating the realities involved. I.e., to claim 80% confidence but
REPEATEDLY fail to complete the task can only be seen as disingenuous.

As the machine can't predict the future, it shouldn't lower its
confidence estimation: "past performance is no indication of FUTURE
performance". But, the user (an emotional being) won't see it that way.
>> But, again, this makes some tasks different (in terms of what the user
>> experiences when activating them). And, no way for the user to know
>> which tasks those might be (unless I consistently ask -- even for
>> short-lived tasks that will PROBABLY be able to complete unimpeded).
>
> If the tasks are predefined, a rough confidence factor could be
> assigned to them.
Would you use anything on your PC if it announced that it had anything
less than 100% confidence that it would complete?

How do I assign a confidence factor to a user-written script? Track its
performance statistically and report THAT each time it is started?
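The statistical approach floated above could be sketched like this (purely illustrative names; the Laplace "rule of succession" smoothing is my assumption, added so a never-run script reports something sane instead of dividing by zero):

```python
from collections import defaultdict

class ScriptHistory:
    """Track past outcomes per script; report observed success rate as
    the 'confidence' to show the user at the next activation."""

    def __init__(self):
        self.runs = defaultdict(lambda: [0, 0])   # name -> [successes, total]

    def record(self, name, completed):
        s, t = self.runs[name]
        self.runs[name] = [s + (1 if completed else 0), t + 1]

    def confidence(self, name):
        s, t = self.runs[name]
        return (s + 1) / (t + 2)    # Laplace smoothing: no history -> 0.5

h = ScriptHistory()
for ok in (True, True, False, True):
    h.record("backup.sh", ok)               # hypothetical script name
print(h.confidence("backup.sh"))            # 4/6: 3 of 4 runs completed
print(h.confidence("never-run"))            # 0.5: no evidence either way
```

This makes the objection above concrete: the number is descriptive, not predictive, and (as the thread notes) a machine that can't foresee the future arguably shouldn't be lowering it at all -- yet the user will read it as a forecast.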
>> It's a sort of:
>>   "Are you sure you want to do that?"
I think there are two "solutions":
- give hard and fast guarantees (then, use the mechanisms available to
  ensure these are met)
- give NO guarantees, but allow tasks to run indefinitely (no deadlines)
  so a task has never really "failed"/aborted! (it just keeps waiting in
  the naive hope that the resource it needs WILL become available!)
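The second "solution" -- no deadlines, hence no failures -- reduces to a blocking acquire with no timeout: a task that can't get its resource never aborts, it just waits in the (possibly naive) hope that the resource eventually frees up. A minimal sketch using a standard counting semaphore (hypothetical class name):

```python
import threading

class WaitForeverPool:
    """A resource pool whose acquire() never times out and never fails --
    the 'no guarantees, no deadlines' policy in its purest form."""

    def __init__(self, units):
        self.sem = threading.Semaphore(units)

    def acquire(self, n=1):
        # No timeout argument: this can block indefinitely, which is the
        # whole point -- the task can never be said to have "failed".
        for _ in range(n):
            self.sem.acquire()

    def release(self, n=1):
        for _ in range(n):
            self.sem.release()

pool = WaitForeverPool(1)
pool.acquire()                              # first task takes the only unit
t = threading.Timer(0.05, pool.release)     # someone frees it ~50 ms later
t.start()
pool.acquire()                              # second task just waits, then proceeds
pool.release()
```

The sketch also shows the policy's dark side: if the holder never releases, the waiter hangs forever with no notification at all -- trading the "belated failure" surprise for a silent stall.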