
Managing "capabilities" for security

Started by Don Y November 1, 2013
Hi Don,

On Mon, 04 Nov 2013 08:23:57 -0700, Don Y <This.is@not.Me> wrote:
>On 11/4/2013 1:20 AM, George Neuner wrote:
>> Revoking always should be asynchronous because it is solely at the
>> discretion of the giver.
>
>Yes, of course [re: giver]. What I was trying to draw attention to is
>how the "holder" is effectively "made aware" of his loss of some/all of
>the "authorizations" that he had previously. (more below)
In all the capability based systems I am aware of, the capability "ticket" has to be presented for *every* operation involving a protected resource: not just when "opening" the resource [whatever that happens to mean]. "No tickey, no laundry." Other than being told explicitly, the ticket holder finds out his capability has been revoked when some operation involving the protected resource fails.
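The per-operation check George describes can be sketched as follows. This is a toy Python model, not any real system's API; names like `Resource`, `issue_ticket`, and `RevokedError` are invented for illustration:

```python
import secrets

class RevokedError(Exception):
    """Raised when a presented ticket is no longer valid."""

class Resource:
    def __init__(self, data):
        self._data = data
        self._valid = set()          # tickets currently honored

    def issue_ticket(self):
        t = secrets.token_hex(8)     # random, unforgeable-in-practice ticket
        self._valid.add(t)
        return t

    def revoke(self, ticket):
        self._valid.discard(ticket)  # asynchronous: the holder is not told

    def read(self, ticket):
        # "No tickey, no laundry": the ticket is checked on EVERY call,
        # not just once at "open" time.
        if ticket not in self._valid:
            raise RevokedError("capability revoked")
        return self._data

r = Resource("sensor log")
t = r.issue_ticket()
print(r.read(t))        # succeeds while the ticket is valid
r.revoke(t)             # giver revokes at its sole discretion
try:
    r.read(t)           # holder discovers revocation only on next use
except RevokedError as e:
    print("denied:", e)
```

The holder learns of revocation exactly as described: the next operation fails.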
>> If a capability can be delegated
>> transitively, the originating authority may neither know nor be able
>> to communicate directly with all of the current holders of the
>> capability.
>
>Unlike, (e.g.) Amoeba (apologies if I am misremembering references/
>implementations, here) -- where a capability is "just a (magic) number"
>(which can obviously be copied FREELY and indefinitely) -- I implement
>capabilities as kernel based/maintained objects. What a task sees as a
>"capability" is actually a *handle* to a capability (that the kernel
>tracks).
That's perfectly reasonable. Amoeba chose to place permissions directly into the user-space "ticket" because its set of permissions largely was predefined [there were some user definable bits available but most were reserved.] When the scope of "permissions" is more or less arbitrary, you really do need some kind of server implementation [minimally] maintaining a key-value store DB. But you still can make use of cryptographic signing to make tickets that identify the authorized user.
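The cryptographic-signing idea can be sketched with an HMAC: the service signs the ticket's fields with a key only it knows, so a user-space ticket can be verified without being stored server-side. A toy sketch, assuming a simple `object|user|rights` encoding (all names invented):

```python
import hmac, hashlib

SERVER_KEY = b"server-secret"   # known only to the issuing service

def make_ticket(object_id: str, user: str, rights: str) -> str:
    """Issue a user-space ticket whose fields are bound by an HMAC."""
    body = f"{object_id}|{user}|{rights}"
    sig = hmac.new(SERVER_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}|{sig}"

def check_ticket(ticket: str) -> tuple:
    """Verify the signature; reject any tampered ticket."""
    body, _, sig = ticket.rpartition("|")
    good = hmac.new(SERVER_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, good):
        raise PermissionError("forged or altered ticket")
    object_id, user, rights = body.split("|")
    return object_id, user, rights

t = make_ticket("file42", "alice", "r")
assert check_ticket(t) == ("file42", "alice", "r")
# Upgrading your own rights fails: the signature no longer matches.
forged = t.replace("|r|", "|rw|")
try:
    check_ticket(forged)
except PermissionError:
    print("forgery detected")
```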
>[E.g., Amoeba's implementation doesn't require the kernel to be aware
>of where a capability "is", currently. It is only aware when operations
>*on* the capability need to be performed (reminder: Amoeba makes the
>capabilities UNFORGEABLE and little more).]
Yes and no. Both kernel and user space capabilities existed in Amoeba. Amoeba took the position that each service was responsible for administering its own capabilities. That included kernel services such as starting new processes, creating new ports, mapping process address space, etc. In Amoeba every service - filesystem, network, etc. - either was a resource owner itself or was a managing agent having delegated access granting authority. E.g., the filesystem service didn't "own" the files it managed (human users did), but it owned the means to access the files. So the filesystem was an agent with delegated authority to grant access to files based on owner/creator supplied rules [which in the case of Amoeba was simply Unix group membership].
>As my kernel knows where every capability is located, at the moment,
>it can deliver an asynchronous notification ("signal") to the holder
>of the capability (holder == task; so no guarantee which of the
>task's threads will "see" that notification -- unless an exception
>handler thread has been nominated).
>
>So, I *can* notify the holder. Or, wait for him to try to use the
>capability and throw an error at that time.
>
>Of course, either approach can work. What I'm trying to decide is
>the relative merits of each -- on both sides of that notification
>fence!
Recall that Amoeba was built around a pretty straightforward delegate and trust chain model needed for the distributed filesystem. Much more complex scenarios involving agented agents, subcontractors, etc. and arbitrary degree trust chains technically were possible, but the administration of them was left as an exercise.
>> An agent can't simply hand out a copy of an original capability given
>> to it - it needs to pass on a derived capability that is separate from
>> but linked to the original.
>
>As holders only have handles to capabilities, I actually *can* "move"
>the original capability. If a holder elects to create a *new*
>capability imbuing it with some formal subset of the "authorizations"
>that are present in the original capability, it can (potentially!)
>do so. [that's one of my conceived restrictions...]
Yes. However revoking a master capability must also revoke any other capabilities derived from it [even if located on another host]. If you (the user) suddenly decide to make a file read-only, any existing ticket granting write permission for that file, anywhere in the system, has to be revoked. Of course, that could be done lazily when the ticket eventually is presented for use ... however, if you (the user) again make the file writable, is it still the same file? Should the old tickets be honored if presented or must a new ticket be obtained? These are things you eventually will have to think about for a distributed capability system.
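The "revoking a master revokes everything derived from it" rule implies tracking the derivation tree. A toy sketch (the `CapStore` registry and its method names are invented; a real system would shard this across hosts):

```python
class CapStore:
    """Toy registry tracking which capability each one was derived from."""
    def __init__(self):
        self._children = {}      # cap id -> ids derived from it
        self._alive = set()
        self._next = 0

    def mint(self, parent=None):
        cap = self._next
        self._next += 1
        self._children[cap] = []
        if parent is not None:
            self._children[parent].append(cap)
        self._alive.add(cap)
        return cap

    def revoke(self, cap):
        # Revoking a capability revokes everything derived from it,
        # transitively -- even holders the originator never heard of.
        self._alive.discard(cap)
        for child in self._children[cap]:
            self.revoke(child)

    def valid(self, cap):
        return cap in self._alive

s = CapStore()
master = s.mint()
agent = s.mint(parent=master)        # delegated to an agent
sub = s.mint(parent=agent)           # ... who delegated further
s.revoke(agent)                      # agent's grant withdrawn ...
print(s.valid(master), s.valid(agent), s.valid(sub))  # True False False
```

Whether the walk happens eagerly (as here) or lazily at presentation time is exactly the design question raised above.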
>> The derived capability has to be
>> revocable both independently (by the agent itself) and in conjunction
>> with revocation of the original capability (by the originating
>> authority).
>>
>> How to handle transitive delegation is *the* major issue in designing
>> a capability system.
>
><grin> Hence the reason for my questions! They (below) pertain to the
>sorts of operations you can perform *on* the capability.
>
>E.g., the Amoeba approach allows the holder to freely copy and
>distribute *a* capability that it holds. It has to trust EVERY
>recipient of those capability-copies. And, implicitly, any
>one that *they* may conceivably pass yet other copies to!
Yes. But note that Amoeba also permitted creating new distinct capabilities - having the same or reduced permissions - that were separately revocable. That was part of the agent support. I.e. the decision of how to extend the trust chain was left to the application.
>> A debugger monitoring a channel, and possibly injecting traffic into
>> it, is a special case of a "silent" participant.
>
>In the example I cited, should I have to trust D as much as I do B?
>If I allow it to create a copy (for its own use) of the capability,
>then I do. OTOH, if I create a "pass all or nothing" attribute
>for the capability, then the only way that D can use the capability is
>by denying it to B.
>
>(see where I'm going, here?)
Not really. A debugger isn't necessarily bound by the same rules as is a normal application.

Turning from debuggers to a more generic discussion of "pipes thru filters" applications, the scenario is only a problem if you permit anonymous "bearer" tickets. Consider that a ticket may incorporate the identity of the authorized process (see below), and that the system can positively identify the process presenting a ticket for service [at least within the system]. Under these conditions, a ticket might be "stolen", but it can't be used unless the thief also can successfully impersonate the authorized user.

You can uniquely identify programs by cryptographically hashing the executable, particular instances of running programs by host and process ids, and also user/group ids, etc. These can be combined to create tickets that identify both the service granting access and the exact client (or clients) authorized by the ticket.

Actual tickets (and their associated permissions) can be stored securely as in your model; they don't need to be user-space entities. But for a multihost system the user space "handle" has to encode host as well as ticket selector.
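The bearer-vs-name distinction above can be sketched directly. All names here (`client_identity`, `Service`, the ticket strings) are invented; the executable hash and host/pid stand in for whatever identity the system can actually establish:

```python
import hashlib

def client_identity(executable_bytes: bytes, host: str, pid: int) -> str:
    """Fingerprint a particular running instance of a particular program."""
    prog = hashlib.sha256(executable_bytes).hexdigest()[:16]
    return f"{prog}@{host}:{pid}"

class Service:
    def __init__(self):
        self._tickets = {}   # ticket -> authorized identity (None = bearer)

    def issue(self, ticket, identity=None):
        self._tickets[ticket] = identity

    def request(self, ticket, presenter_identity):
        if ticket not in self._tickets:
            raise PermissionError("unknown ticket")
        ident = self._tickets[ticket]
        # A "name" ticket is useless to a thief who cannot also
        # impersonate the authorized client.
        if ident is not None and ident != presenter_identity:
            raise PermissionError("ticket not valid for this client")
        return "ok"

svc = Service()
alice = client_identity(b"\x7fELF...alice", "hostA", 101)
svc.issue("T1", identity=alice)      # name ticket
svc.issue("T2", identity=None)       # bearer ticket: anyone may use it
print(svc.request("T1", alice))      # ok
print(svc.request("T2", "anyone"))   # ok
try:
    svc.request("T1", "mallory")     # stolen name ticket fails
except PermissionError as e:
    print("denied:", e)
```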
>>> - handling cases where the capability
>>> can be held or propagated -- but not *duplicated*. I.e., *you*
>>> can access this file; *or*, have someone else do it on your behalf;
>>> but it's one option or the other... the capability can't multiply!
>>
>> Just a particular case of transitive delegation.
>
>But, IMO, the ability to do this affects the security that the
>capability system provides. If you can always duplicate a
>capability (or portions thereof), then you have to always trust
>everyone you give it to.
I think I've shown that you don't. You can have both "bearer" tickets (useable by anyone) and "name" tickets (limited to particular users) together in the same system. The only limitation is to what extent you can reliably determine the identity of users - maybe only imperfectly in an open system, maybe perfectly in a closed one.
>> E.g., on the client side, connect() creates both a new connection and
>> a capability for it, and transmits the capability to the accept() on
>> the server side.
>
>Yes. Though connect() implies that you have previously been given the
>authorization to connect to that object! :> I.e., if you should have
>no need to talk to the file system, then you can't even *connect*
>to that service!
It, at least, implies that you have a capability to use the relevant communication service ... the endpoint service is free to reject your connection attempt [always, but particularly if your identity can be established and it knows a priori to whom it should respond - you can look at it from either direction (or both)].

Bootstrap problem. There is a basic set of capabilities that must be given to every process, a somewhat larger set of capabilities that must be given to most processes, and an even larger set that must be given to network aware processes.
>> Within a host you can start a child task and transfer capabilities
>> automagically during the fork().
>
>Being careful about terms, here...
You can "spawn" tasks, "fork" threads, "launch" processes, "poke" servers, etc. ad nauseam. The actual entities and terms involved don't matter so much as the programming model.

YMMV,
George
On 05/11/13 00:06, Don Y wrote:
> Hi David,
>
> On 11/4/2013 12:49 AM, David Brown wrote:
>> On 01/11/13 21:34, Don Y wrote:
>>> Hi,
>>>
>>> Not sure exactly how I want to ask this question;
>>> i.e., how best to differentiate the examples where
>>> X should be allowed vs X should be prohibited.
>>
>> I can't see that you've been asking a question at all - it looks more
>> like you have some ideas about what you think "capabilities" are and are
>> trying to get a clearer picture. But I don't think your post here is
>> ready for direct comments - you'll have to read a bit more, think a bit
>> more, and figure out what you are trying to say, trying to ask, and
>> trying to do.
>>
>> In the meantime, read up a bit on "posix capabilities" and their
>> implementation in Linux:
>
> Different beast entirely -- in much the same way that "file permission
> bits" differ from full-fledged ACLs.
Fair enough. This is not a topic I know a lot about - I was just trying to give you some pointers that /might/ have helped, since no one else had replied to your post.

Since you mention "file permission bits" vs. ACL's, I'd like you to be /very/ sure that you actually /need/ the complications of the system you are proposing.

I've administered file servers with ACL's, and file servers with just Linux permission bits and groups. There is no doubt that ACL's give finer control - but I also have no doubt that with careful use of Linux group membership, group ownership of files and directories, and group "sticky" bits on directories, it is vastly easier to get good security where the right people have the right access to the files. Groups and permissions are quick and easy to work with, easy to understand, and easy to check.

With the old ACL-based setup we had before there were endless battles - and these were often solved by simply making whole directories read/write for everyone (everyone with a valid user and password, and only on the local network, of course). That in turn often led to battles about not having permission to change the ACL's despite being an administrator - and thus having to recursively take ownership of the directories first.

Before anyone starts to tell me how to handle ACL's "correctly", the point is that when you want to make something secure, having a clear, logical, obvious system is normally more important than having a very flexible one with control over the smallest details. It is better to have a simple system that can be used correctly and /is/ used correctly, than a complex system that is used incorrectly because it is too difficult. And of course, the simple system is much easier to implement correctly, and test and verify correctly, and has far less chance of unexpected and unplanned holes.

Maybe you've thought through this already.
But a security idea that leads to the type of discussion in this group strikes me as one that is too complex to get 100% right - and if it is not 100% bulletproof, then it is worthless.

mvh.,
David
> From your first reference, below:
> "A capability (known in some systems as a key) is a communicable,
> unforgeable token of authority. It refers to a value that references
> an object along with an associated set of access rights."
>
> AFAICT, Linux uses the term just to reference a finer grained set of
> "permissions" afforded to processes (beyond "root == God").
>
> [IMO, you can't *effectively* ADD capabilities to an existing "system"
> except in very narrow, fortuitous places]
>
> For more info, you might want to look at Amoeba, Chorus, EROS/KeyKOS,
> etc. (each to differing degrees).
>
>> <http://en.wikipedia.org/wiki/Capability-based_security>
>> <http://man7.org/linux/man-pages/man7/capabilities.7.html>
>> <http://en.wikipedia.org/wiki/Tahoe_Least-Authority_Filesystem>
>>
>> (and of course, google is your friend :-)
>>
>> I don't think this is the kind of stuff you want to do yourself - there
>> are a great deal of things to get right for tasks to have enough access
>> to what they need without opening security holes.
On 01/11/13 20:34, Don Y wrote:
> Hi,
>
> Not sure exactly how I want to ask this question;
> i.e., how best to differentiate the examples where
> X should be allowed vs X should be prohibited.
>
> I have a capabilities based security model. Each
> capability has "authorizations" associated with it
> (trying to avoid using the word "capability", again :< ).
You may, or may not, find some inspiration from the information pointed to in this current comp.arch thread. If nothing else it may save you from blind alleys.

Re: Bounded Pointers

On 11/4/13, 1:57 AM, Ivan Godard wrote:
> On 11/4/2013 12:58 AM, Michael S wrote:
> <snip>
>
>> What are those "capabilities" that you are mentioning so often?
>> In particular, how they help to augment chain of trust for "safe" languages?
>> Lacking imagination, I can't see how "Quis custodiet ipsos custodes?" puzzle could be possibly
>> solved.
>>
>
> Start here:
> https://en.wikipedia.org/wiki/Capability-based_security
> https://en.wikipedia.org/wiki/Capability-based_addressing
>
> After that: Google is your friend

Prof. Hank Levy's excellent but out of print book is now available as a set of PDFs from his webpage:

http://homes.cs.washington.edu/~levy/capabook/

Definitely worth reading as it describes a bunch of cap. systems in detail.
Hi David,

On 11/5/2013 1:20 AM, David Brown wrote:

>> Different beast entirely -- in much the same way that "file permission
>> bits" differ from full-fledged ACLs.
> Since you mention "file permission bits" vs. ACL's, I'd like you to be
> /very/ sure that you actually /need/ the complications of the system you
> are proposing.
Remember, this is c.a.e -- chances are, we aren't dealing with "files" but, rather, specific I/O's, mechanisms, etc.

In a *closed* system, it's (relatively) easy to get "permissions" right: if task A has no business talking to the motor driver, then task A shouldn't contain any code that *talks* to the motor driver! Verify that this is, indeed, the case -- then release the codebase to production.

OTOH, in an *open* system, you can't predict what tomorrow's application will do -- or *try* to do. How do you ensure it can't muck with things that it shouldn't? Typically, that's done by pushing "special" things into a protection domain (most often, the kernel). Then, hoping the application hasn't come up with a clever way to screw this up!

Files have fixed operations. It's easy to come up with "gates" on those operations as they are few in number and tend to have static permissions. But, when your resources/IO's get to be more esoteric (which can mean "run-of-the-mill"!), you can end up with lots of different operations and a desire to separate which agents can invoke each.

With a capabilities-based model, you can delegate who can do what *dynamically* and with finer precision. E.g., "you can turn the motor off but you can't turn it *on*" (i.e., you can be a monitoring process that prevents the mechanism from running away... and, I have no fear that *you* will TELL the mechanism to run away! even if you fail to tell it to STOP!)

Even filesystems often want finer-grained control IN A SINGLE FILE! E.g., parts of passwd(5) should be visible to all processes while other parts should be *hidden*. And, even different "versions" of passwd(5) for certain applications.
(e.g., ~ftp/etc/passwd vs /etc/passwd vs master password) If "passwd" is treated as an object in a capabilities based system, then the capability that each "process" is given can cause the handler at the other end of that capability to provide the image of passwd that is most appropriate to that process (instead of exposing one of three files to that process).
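The "one object, many views" idea can be sketched as a handler that filters what each Handle is allowed to see. A toy sketch; the record fields and the `see_hashes` authorization name are invented for illustration:

```python
# The handler behind the "passwd" object decides what each holder sees,
# based on the authorizations carried by the Handle -- one object, many
# views, instead of exposing one of three separate files.
RECORDS = [
    {"name": "root", "hash": "x9y8z7", "shell": "/bin/sh"},
    {"name": "ftp",  "hash": "*",      "shell": "/sbin/nologin"},
]

def passwd_view(record, authorizations):
    """Return only the fields this Handle is authorized to see."""
    visible = {"name": record["name"], "shell": record["shell"]}
    if "see_hashes" in authorizations:
        visible["hash"] = record["hash"]
    return visible

# An ordinary process's Handle hides the password hashes ...
print(passwd_view(RECORDS[0], authorizations=set()))
# ... while login(8)'s Handle reveals them.
print(passwd_view(RECORDS[0], authorizations={"see_hashes"}))
```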
> Before anyone starts to tell me how to handle ACL's "correctly", the
> point is that when you want to make something secure, having a clear,
> logical, obvious system is normally more important than having a very
> flexible one with control over the smallest details. It is better to have a
> simple system that can be used correctly and /is/ used correctly, than a
> complex system that is used incorrectly because it is too difficult.
The "simple system" is a big monolithic kernel and the hope that everything is coded correctly (cuz the guy who is implementing the device driver for foo is operating in the same privileged space as the device drivers for the disk system, scheduler, etc.). There is no fine-grained permission possible -- and definitely nothing "expandable" and consistent across the entire system (e.g., how would you implement the email_address_t I mentioned elsewhere with similar "security"?)
> And of course, the simple system is much easier to implement correctly,
> and test and verify correctly, and has far less chance of unexpected and
> unplanned holes.
See above. Unplanned holes can affect unrelated subsystems. An application (or a subsystem) can't create its own concept of how *its* objects should be managed EXCEPT in complete isolation. Each comes up with its own notion of object, permissions, security, etc. and hopes the others are somehow compatible (or, remain separate islands).
> Maybe you've thought through this already. But a security idea that
> leads to the type of discussion in this group strikes me as one that is
> too complex to get 100% right - and if it is not 100% bulletproof, then
> it is worthless.
How do ACLs deal with a user asynchronously opting to change the permissions on a resource/file? What does the application do in that case? ("undefined behavior"?)

The point of these discussions is to figure out what makes sense for that sort of situation because the "users" are applications: "Gee, you should avoid reading this file now because some other process is busy writing it. I'll just arrange for cron to run you 5 minutes later than him -- and HOPE he's finished by then..."

Better to have each process *expect* to be (temporarily) denied access to a resource (file) and actively try to recover than to have them choke when they encounter "/* CAN'T HAPPEN */". Expect your capability to be revoked from time to time. Should you request it again? Or, should you blindly retry? Or...

"Why is my request to move the motor being denied? That's not supposed to happen..." vs. "Hmmm... for some reason, I am not being allowed to move the motor right now. How should I react in this EXPECTED situation?"
Hi George,

[snips throughout for sole purpose of trimming message length]

>> Yes, of course [re: giver]. What I was trying to draw attention to is
>> how the "holder" is effectively "made aware" of his loss of some/all of
>> the "authorizations" that he had previously. (more below)
>
> In all the capability based systems I am aware of, the capability
> "ticket" has to be presented for *every* operation involving a
> protected resource: not just when "opening" the resource [whatever
> that happens to mean].
Yes. In my case, the "capability" (I call them Handles -- for reasons that should become apparent) also indicates the object in question. So, the "authorizations" come along with the "reference".
> "No tickey, no laundry." Other than being told explicitly, the ticket
> holder finds out his capability has been revoked when some operation
> involving the protected resource fails.
Exactly. Though the holder can defer learning this (indefinitely), sooner or later he *will* learn. Presumably, you will code to account for return(NO_PRIVILEGE) so why not just let that *existing* coding handle the revocation case?

If you need to know sooner, I can just as easily send you an asynchronous notification (signal) after the fact as I could send you an asynchronous notification *before*! (Yeah, it's nice to know the power is GOING to fail... but, you have to be able to deal with it HAVING FAILED, regardless!)

(It just seems like giving advance warning means MORE coding. And, a false sense of security: "Hey! You didn't TELL me that you were going to do that!" "Um, yes I did. Perhaps the message just hasn't been delivered, yet...")
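The "reuse the existing NO_PRIVILEGE path" argument looks like this in client code. A toy sketch; `MotorService`, `Status`, and the revocation flag are invented stand-ins for the real service interface:

```python
import enum

class Status(enum.Enum):
    OK = 0
    NO_PRIVILEGE = 1

class MotorService:
    """Stand-in for a capability-guarded service."""
    def __init__(self):
        self.revoked = False
    def move(self):
        return Status.NO_PRIVILEGE if self.revoked else Status.OK

def run(motor, steps):
    """A client written to EXPECT denial at any point."""
    for s in range(steps):
        if motor.move() is Status.NO_PRIVILEGE:
            # Revocation arrives through the same error path as any
            # other permission failure -- no advance-warning protocol
            # needed. Decide here: re-request, back off, or degrade.
            return f"denied at step {s}; entering safe state"
    return "completed"

m = MotorService()
print(run(m, 3))         # completed
m.revoked = True         # giver revokes asynchronously
print(run(m, 3))         # denied at step 0; entering safe state
```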
> Amoeba chose to place permissions directly into the user-space
> "ticket" because its set of permissions largely was predefined [there
> were some user definable bits available but most were reserved.]
>
> When the scope of "permissions" is more or less arbitrary, you really
> do need some kind of server implementation [minimally] maintaining a
> key-value store DB. But you still can make use of cryptographic
> signing to make tickets that identify the authorized user.
Amoeba's "ticket" is far more efficient than my approach. It can be copied, moved, etc. "for the cost of a long long" (IIRC). In my case, a trap to the kernel is required for each operation on a "Handle" -- because it's a kernel structure that is being manipulated (or referenced). I can still give user-land services the final say in what a Handle *means* (along with the "authorities" that it conveys to its bearer). But, you have to go *through* the kernel to get back to userland.

A subtle difference: if "task" (again, forgetting lexicon differences) A decides to manipulate object H backed by service B, in Amoeba's case, B does all the work for each attempt A makes. EVEN IF THE ATTEMPT IS DISALLOWED by H's authorizations. B's resources are consumed even though A has no authority to use B's object (H)! If A is an Adversary, then B is brought to its knees by A's hostile actions. There is nothing B can do to prevent A from continuously trying to use object H! And it's all done on B's dime!

In my case, if A tries to use one of B's resources (H), it first must truly *be* one of B's resources (not just a long long that A *claims* is managed by B). If not, the kernel disallows the transaction. If H truly *is* backed ("handled") by B, then the kernel allows the transaction -- calling on B to enforce any finer grained authorities (besides "access"). I.e., B knows which authorities are available *in* H and can verify that the action requested is one of those allowed.

Finally, if A persists in being a pain in the ass (Adversarial DoS behavior), B can tell the kernel to revoke his capabilities. And, thereafter, A can't even *talk* to B! Any attempts happen on *A's* dime!
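The kernel-mediation scheme can be sketched as follows: bogus handles are rejected by the kernel before the service spends anything, and the service can tell the kernel to cut an abusive task off entirely. A toy sketch (the `Kernel`/`Service` API here is invented, and a single disallowed op stands in for "persistent abuse"):

```python
class Kernel:
    def __init__(self):
        self._handles = {}     # (task, handle) -> backing service
        self._banned = set()   # tasks a service told us to cut off

    def grant(self, task, handle, service):
        self._handles[(task, handle)] = service

    def invoke(self, task, handle, op):
        # The kernel, not the service, pays for rejecting bad requests:
        # a forged handle never reaches the service at all.
        if task in self._banned:
            raise PermissionError("task may no longer talk to service")
        service = self._handles.get((task, handle))
        if service is None:
            raise PermissionError("not a handle you hold")
        return service.perform(op, kernel=self, task=task)

class Service:
    def __init__(self, allowed_ops):
        self.allowed = allowed_ops
    def perform(self, op, kernel, task):
        if op not in self.allowed:     # finer-grained authority check
            kernel._banned.add(task)   # DoS response: cut the task off
            raise PermissionError(f"'{op}' not permitted")
        return f"{op}: done"

k = Kernel()
motor = Service(allowed_ops={"off"})
k.grant("A", 7, motor)
print(k.invoke("A", 7, "off"))         # off: done
try:
    k.invoke("A", 99, "off")           # forged handle: rejected by kernel
except PermissionError as e:
    print("kernel rejected:", e)
```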
> In Amoeba every service - filesystem, network, etc. - either was a
> resource owner itself or was a managing agent having delegated access
> granting authority.
Exactly. Every entity for me is an Object. Every Object has a Handler. Every reference to an Object includes a set of "authorizations" that apply to *that* reference and are granted to the "Holder" of that "Handle".
> E.g., the filesystem service didn't "own" the files it managed (human
> users did), but it owned the means to access the files. So the
> filesystem was an agent with delegated authority to grant access to
> files based on owner/creator supplied rules [which in the case of
> Amoeba was simply Unix group membership].
In my case, each file (currently being referenced) is referenced via a Handle. There can be multiple Handles to the same "physical" file. These can be Held by multiple tasks -- or the same task! Operations performed on that file are done through a specific Handle and must meet the authorities associated with that Handle (i.e., you might hold write access to a particular file but if the Handle that you use to access it doesn't include that authorization, then your write attempt will be disallowed).

The File Handler (there may be different ones for different types of files) is responsible for "backing" (handling) the File Objects. When you want to read a file's (referenced by a particular Handle) contents, the File Handler for that file provides the data to you (possibly by accessing different services associated with the various media supported in your system).

So, .../timeofday could actually be a "file" that gets handled by a service that returns the current time-of-day (i.e., it isn't a file in the sense of other "storage" files). Having write access to that Handle would effectively allow you to set the time-of-day! Furthermore, attempting to set the time to "HH34kdiss" can throw a "write error" (for obvious reasons).

(File systems are bad examples because they are so commonly used to implement namespaces and not just "files")
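The timeofday "file" and its per-Handle authorities can be sketched directly. A toy model; the handler class, the dict-as-Handle, and the authorization names are all invented for illustration:

```python
class TimeOfDayHandler:
    """A 'file' backed by a service: reads return the clock, writes set it."""
    def __init__(self):
        self._seconds = 0

    def read(self, handle):
        return f"{self._seconds:05d}"

    def write(self, handle, data):
        # The Handle's authorities are checked first ...
        if "write" not in handle["authorizations"]:
            raise PermissionError("handle lacks write authority")
        # ... then the handler applies its own semantics to the data.
        if not data.isdigit():                 # "HH34kdiss" -> write error
            raise ValueError("write error: not a valid time")
        self._seconds = int(data)

tod = TimeOfDayHandler()
ro = {"authorizations": {"read"}}              # read-only Handle
rw = {"authorizations": {"read", "write"}}     # read/write Handle
print(tod.read(ro))            # 00000
tod.write(rw, "42")            # setting the time via a writable Handle
print(tod.read(ro))            # 00042
try:
    tod.write(ro, "7")                 # read-only Handle: denied
except PermissionError as e:
    print("denied:", e)
try:
    tod.write(rw, "HH34kdiss")         # nonsense data: "write error"
except ValueError as e:
    print("rejected:", e)
```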
> Yes. However revoking a master capability must also revoke any other
> capabilities derived from it [even if located on another host]. If
> you (the user) suddenly decide to make a file read-only, any existing
> ticket granting write permission for that file, anywhere in the
> system, has to be revoked.
This means "something" must track history/relationships. It also says nothing about *when* the revocation takes place (effectively) and when notification of that event occurs.

I.e., in Amoeba's case, the kernel never knows who is holding which (copies!) of a particular ticket (derived from some other ticket, etc.). So, there is no way for it to know who to notify AT THE TIME OF REVOCATION. Instead, it has to rely on the Holder(s) noticing that fact when they *eventually* try to use their capabilities (tickets/keys). And, you are never sure when every ticket has been "discovered" to be voided -- a task can have a copy of a ticket (you can hold multiple copies of any ticket!) that he just hasn't got around to trying!

Sort of like finding a bunch of keys in a desk drawer and not discarding them because you're not quite sure you *want* to discard them (maybe they still FIT something!)
> Of course, that could be done lazily when the ticket eventually is
> presented for use ... however, if you (the user) again make the file
> writable, is it still the same file? Should the old tickets be
> honored if presented or must a new ticket be obtained?
Exactly. You need to force "issuers" to go back to the well to create "new" tickets. And, this process must implicitly randomize or serialize the identifiers embedded in the tickets to prevent reuse. If you only allow *downgrading* a capability, then any lingering tickets are safe from being reused as "full fledged" tickets once they have been downgraded/revoked *if* new ones always have new ID's!
> These are things you eventually will have to think about for a
> distributed capability system.
In my case, kernels are the only things that *hold* capabilities. So, all kernels can be notified that a particular capability has been revoked and they all *are* revoked. Just like if your kernel chooses to delete a file descriptor (remembering that it is now a zombie), any future references by you (the task) to that fd can throw an error (assuming you ignored the signal sent to notify you that it had been destroyed).
>> E.g., the Amoeba approach allows the holder to freely copy and
>> distribute *a* capability that it holds. It has to trust EVERY
>> recipient of those capability-copies. And, implicitly, any
>> one that *they* may conceivably pass yet other copies to!
>
> Yes. But note that Amoeba also permitted creating new distinct
> capabilities - having the same or reduced permissions - that were
> separately revocable. That was part of the agent support.
>
> I.e. the decision of how to extend the trust chain was left to the
> application.
Yes. My "factory" publishes Handles for key services that tasks may want to avail themselves of. These are accessed by a single "Service Locator" Handle that is given to each task (task == process == resource container) as the task is created. [Conceivably, the Handle for this service given to Task A can differ from Task B if the authorizations between A and B are to be different!]

Tasks locate the services that they want using this Service Locator. It provides a generic Handle that allows the service in question to be contacted. (i.e., this is all part of the bootstrap of the initial access to a service). The task can then contact the Handler behind that Handle -- i.e., the service in question -- and make whatever requests it is authorized to make (based on its Handle).

More importantly, the creating task can do all of this for the "child", cramming in the appropriate Handles for the Objects (incl Services) that the child will need AND THEN DELETING THAT INSTANCE OF THE SERVICE LOCATOR handle to effectively sandbox the child. I.e., these are the resources you can use and operate on -- nothing more!
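The sandboxing trick -- pre-select the child's Handles, then withhold the Service Locator itself -- can be sketched like this (the `ServiceLocator` class and service names are invented; handles are modeled as plain strings):

```python
class ServiceLocator:
    """Per-task view of which services exist; handed to a task at creation."""
    def __init__(self, services):
        self._services = dict(services)
    def lookup(self, name):
        if name not in self._services:
            raise PermissionError(f"no such service: {name}")
        return self._services[name]

def spawn_sandboxed(all_services, allowed):
    """Parent pre-selects the child's handles, then withholds the locator
    itself -- the child can use exactly these resources and nothing more."""
    handles = {name: all_services[name] for name in allowed}
    return handles       # note: no ServiceLocator is passed to the child

system = {"filesystem": "fs-handle", "motor": "motor-handle",
          "network": "net-handle"}
parent_locator = ServiceLocator(system)   # the parent can find everything

child = spawn_sandboxed(system, allowed={"motor"})
print(child["motor"])                     # motor-handle
print("network" in child)                 # False: not reachable, and with
                                          # no locator, not even discoverable
```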
> Turning from debuggers to a more generic discussion of "pipes thru
> filters" applications, then the scenario is only a problem if you
> permit anonymous "bearer" tickets.
>
> Consider that a ticket may incorporate the identity of the authorized
> process (see below), and that the system can positively identify the
> process presenting a ticket for service [at least within the system].
> Under these conditions, a ticket might be "stolen", but it can't be
> used unless the thief also can successfully impersonate the authorized
> user.
You can't "steal" Handles in my system because they are in the kernel. If you can trick the Holder to GIVE it to you, then it's yours (just like if you trick me into giving you the keys to my house). The current Holder of a Handle is implicitly known to the kernel (it's in A's resource container so A Holds it!)

If I were to tag Handles with "rightful owners", then proxies would be more apparent. But, how do you validate a proxy's request for a Handle on behalf of another? ("Please give me Bob's door keys...")
> You can uniquely identify programs by cryptographically hashing the
> executable, particular instances of running programs by host and
> process ids, and also user/group ids, etc. These can be combined to
> create tickets that identify both the service granting access and the
> exact client (or clients) authorized by the ticket.
>
> Actual tickets (and their associated permissions) can be stored
> securely as in your model; they don't need to be user-space entities.
> But for a multihost system the user space "handle" has to encode host
> as well as ticket selector.
That information need only be made known to the (local) kernel. Any attempt to use the resource referenced by the Handle goes through the kernel so *it* is the only agency that needs this information.

It also means it is easier for a service (handler) to move, "physically", as only the kernels holding references to the objects that a service backs need be notified. And, the *tasks* holding them can remain ignorant of a service's physical location! So, I can bring a spare processor on-line to handle times of heavy load and I don't have to run around telling all existing clients that the service has been migrated to that new processor. Similarly, if the load decreases, I can migrate that service back to a less heavily used processor and power down the surplus processor.
>> But, IMO, the ability to do this effects the security that the
>> capability system provides. If you can always duplicate a
>> capability (or portions thereof), then you have to always trust
>> everyone you give it to.
>
> I think I've shown that you don't.
>
> You can have both "bearer" tickets (useable by anyone) and "name"
> tickets (limited to particular users) together in the same system. The
> only limitation is to what extent you can reliably determine the
> identity of users - maybe only imperfectly in an open system, maybe
> perfectly in a closed one.
Again, the "name" is always implicit in my case. Just like *your*
stdin is not *my* stdin.

If you want a proxy to be able to *use* "your" stdin (presumably on
your behalf), *I* require that the proxy *hold* that Handle. *You*
had to give it to him. But, I don't keep track of where it came from
(what happens if he wants someone else to act as a proxy for him?
ad infinitum?)
>>> E.g., on the client side, connect() creates both a new connection and
>>> a capability for it, and transmits the capability to the accept() on
>>> the server side.
>>
>> Yes. Though connect() implies that you have previously been given the
>> authorization to connect to that object! :> I.e., if you should have
>> no need to talk to the file system, then you can't even *connect*
>> to that service!
>
> It, at least, implies that you have a capability to use the relevant
> communication service ... the endpoint service is free to reject your
> connection attempt [always, but particularly if your identity can be
> established and it knows a priori to whom it should respond - you can
> look at it from either direction (or both)].
In my case, if you haven't got a Handle for a service, you can't use
it. Having a Handle means you can *connect* to it -- long enough for
*it* to decide if what you are asking of it is consistent with your
"authorizations".
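The two-stage gate described here can be sketched as follows (an illustrative model; the permission names are invented): possessing a Handle gets a request as far as the service, and the service then checks the authorizations bound to that Handle for each specific request.

```python
# Sketch of "a Handle lets you connect; the service decides the rest".
# Without a Handle you can't even reach the service; with one, each
# request is still checked against the Handle's authorizations.

class Service:
    def __init__(self):
        self.grants = {}            # handle -> set of permitted requests

    def issue_handle(self, handle, permitted):
        self.grants[handle] = set(permitted)

    def request(self, handle, what):
        if handle not in self.grants:
            return "NO CONNECTION"  # no Handle: can't even connect
        if what not in self.grants[handle]:
            return "REFUSED"        # connected, but not authorized
        return "DONE"
```

This also illustrates the bootstrap point above: a task given no Handles at creation simply cannot reach any service at all.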
> Bootstrap problem. There is a basic set of capabilities that must be
> given to every process, a somewhat larger set of capabilities that
> must be given to most processes, and an even larger set that must be
> given to network aware processes.
See above.
>>> Within a host you can start a child task and transfer capabilities
>>> automagically during the fork().
>>
>> Being careful about terms, here...
>
> You can "spawn" tasks, "fork" threads, "launch" processes, "poke"
> servers, etc. ad nauseam. The actual entities and terms involved
> don't matter so much as the programming model.
Yes. I was trying to draw attention to the fact that people often think of "processes" in a legacy UNIX context: one thread in one resource container and new processes inherit their parent's environment/privilege/etc. In my case, only threads share resources implicitly. Tasks need to have their environments (resource sets) explicitly created. You don't just "inherit" whatever your creator happened to have. --don
Hi Tom,

On 11/5/2013 1:58 AM, Tom Gardner wrote:
> On 01/11/13 20:34, Don Y wrote:
>> I have a capabilities based security model. Each
>> capability has "authorizations" associated with it
>> (trying to avoid using the word "capability", again :< ).
>
> You may, or may not, find some inspiration from the
> information pointed to in this current comp.arch thread.
>
> If nothing else it may save you from blind alleys.
> Prof. Hank Levy's excellent but out of print book is
> now available as a set of PDFs from his webpage:
>
> http://homes.cs.washington.edu/~levy/capabook/
>
> Definitely worth reading as it describes a bunch
> of cap. systems in detail.
Thanks! Always willing to add texts to my collection -- especially if they don't take up any PHYSICAL space! (first glance, this looks like "early" material on the subject... when hardware was often part of a solution :< ) --don
Hi Richard,

[attrs elided]

On 11/4/2013 9:45 PM, Richard Damon wrote:

[revoking an authorization]

>>>> Revoking always should be asynchronous because it is solely at the
>>>> discretion of the giver.
>>> I would disagree with this. If the use of a capability without
>>> authorization causes the requester to malfunction, then having a
>>> protocol that doesn't begin revocation with notification can make the
>>> authorization nearly worthless, as the actor using it can't be sure it
>>> is safe.
>>>
>>> You also need to make sure that you don't have "transactions" that might
>>> be corrupted with a revocation in their midst.
>>
>> The problem with a "cooperative" approach is as you allude to below:
>> how long do you wait for the "holder" to relinquish it? Do you wait
>> in units of wall-clock time? (if so, how do you know the holder
>> isn't blocked, preempted, etc and, as such, not even aware of your
>> request through no fault of its own?) Even if you can be assured
>> the holder has received your request and is currently executing, how
>> long might it conceivably take for him to comply (in an orderly
>> fashion)?
>
> All questions to be decided at design phase, with no "generic answer".
> Presumably, if there is a deadline for when the acknowledgement can be
> given, then presumably this spec is applied when designing such a real
> time system.
But that's the problem. When is the design phase "over" for an open
system? Someone (third party) adds a "feature" a year after product
release. Does he get to claim the design phase extended to a period
MONTHS after "initial release" -- because that was when *he* was
working on the design of *his* feature? [of course not]

At some point, you say, "this is the environment for which you have
to design". Every mechanism that you make available is a mechanism
that has to be maintained and utilized. And, it also acts as a
*constraint* on the system and its evolution:

"Crap! I have to notify each Holder of a pending capability
revocation 100ms before revocation. But, my satellite transmission
path is twice that! I guess I just can't use satellites (or, can't
revoke capabilities)"

E.g., I handle physical resource revocation asynchronously BECAUSE I
HAVE NO CONTROL OVER EXTERNAL EVENTS. If I wrap the resources in a
capability, now I suddenly have to provide different semantics?
("Hey, you can't revoke the 'sunlight' capability!")
>> So, as you acknowledge below, your app design must be able to handle
>> this case -- which is essentially the asynchronous case.
>>
>> I currently manage *physical* resources asynchronously (though with
>> notification after the fact) -- because they *can* disappear even
>> without my explicit control (e.g., power failure, drop in water
>> pressure, etc.). So, this same sort of reasoning would at least
>> be *consistent*.
>>
>> I.e., do an operation and *check* to see if it completed as expected
>> (just like checking return value of malloc).
>
> Some operations do not make checking at each operation so easy.
Life isn't guaranteed to be easy! :>
> What if
> the resource is access to some memory, do you check for an "error" after
> every access? This presumes that the system even gives you an
> application level ability to continue past this sort of error. What do
> you do about cooperative "authorization" to access parts of structures
> for things like synchronization where there isn't a hardware/OS
> capability to stop you?
If "backing store" could go away while it was being used, then your
"system" would obviously need a way of detecting that and informing
the "holder" of that resource that this has, in fact, happened. The
holder would also need to be aware of what resources could
"disappear" and code to accommodate those possibilities.

If I am driving a motor, power to the motor driver/translator could
fail while I am in the middle of an operation. Even if I have a
backup power supply, the motor driver itself could fail. Even if I
have a redundant motor driver, the *motor* could fail. Or, a
gearbox, mechanism, *sensor*, etc.

Shit Happens. If you don't plan to accommodate the
(likely/consequential) failures, you have a bug.
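The "do an operation and *check*" style looks like this in a toy motor-driving loop (the motor interface here is hypothetical, purely for illustration): every step can fail, because power, driver, or capability can vanish at any instant, so the result of *every* operation is tested rather than assumed.

```python
# Sketch of check-every-operation: a successful start does NOT imply
# continued access. Each step() may fail (power loss, driver fault,
# asynchronously revoked capability, ...) and is checked individually.

def run_to_limit(motor, limit_switch, max_steps=1000):
    """Step the motor until the limit switch trips, a fault occurs,
    or the step budget (a crude timeout) is exhausted."""
    for _ in range(max_steps):
        if not motor.step():        # may fail at ANY step
            return "FAULT"
        if limit_switch():
            motor.stop()
            return "AT LIMIT"
    motor.stop()                    # deadline exceeded: fail safe
    return "TIMEOUT"
```

The caller gets one of three outcomes and must plan a recovery for each; none of them is "surprising", because the failure modes were designed in from the start.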
> In your case, since the operations do have the
> capability of suddenly starting to fail, an asynchronous revocation
> likely doesn't cause problems that you didn't need to handle anyway, as
> long as the system is structured to allow it.
That's the point! You (developer) know shit CAN happen. Anything that you are "holding" can be revoked. Plan on it. (Heck, I can "kill -9" *you* without giving any advanced warning! Gee, *then* what?)
>>> Yes, sometimes just doing an asynchronous revocation may make sense, and
>>> in many cases having it as a fall back if the cooperation method fails
>>> to complete in a needed time is needed, but that doesn't mean that
>>> asynchronous is generally preferred.
>>>
>>> As to the transitively granting, the same method could be used to relay
>>> the request to revoke.
>>
>> This is a tougher call (though I think I have a solution that addresses
>> these issues). Who does the relaying? The actor who delegated
>> the capability? (what if he is now a zombie?) Or, does the kernel
>> track "derived capabilities" and treat them as part of the original
>> capability?
>
> I would generally say that the actor who was given a permission is
> responsible for relaying the revocations to those it relayed to. If it
> has shared a right that it might have revoked from it, it needs to
> maintain a way to do that.
The actor may be gone! BY DESIGN! I.e., he has done <whatever> *he*
needed to do (with "greater privilege") and is now leaving *you* to
clean up (with some reduced capability).

E.g., he can turn the motor on, set its direction, and turn it off.
He starts the motor in the right direction, then delegates the "off"
capability to you (your role being to watch a limit switch and turn
off the motor at that time -- or, when some timeout is exceeded) and
exits. (no need for him to hang around consuming ALL the resources
that he originally needed to determine how the motor should be
operated)

However, since my capabilities reside in the kernel, I can opt to
have the kernel track derivations and cascade revocations. But, this
means all derived capabilities must come from a single "parent".
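Kernel-tracked derivation with cascading revocation can be sketched as a single-parent tree (a toy model, not any particular kernel's data structure): each delegation records a parent link, so revoking a capability invalidates all of its descendants even if the delegating actor exited long ago.

```python
# Sketch of kernel-side derived-capability tracking: delegations form
# a single-parent tree, and revocation cascades down it. The original
# delegator need not exist any more -- the kernel holds the links.

class CapTree:
    def __init__(self):
        self.children = {}          # cap -> list of caps derived from it
        self.live = set()

    def grant(self, cap):
        self.live.add(cap)
        self.children.setdefault(cap, [])

    def derive(self, parent, cap):
        # Each derived capability hangs off exactly one parent.
        self.children[parent].append(cap)
        self.grant(cap)

    def revoke(self, cap):
        # Asynchronous and cascading: Holders learn of the revocation
        # only when their next use of the capability fails.
        self.live.discard(cap)
        for child in self.children.get(cap, []):
            self.revoke(child)

    def valid(self, cap):
        return cap in self.live
```

Revoking the root invalidates the whole delegation chain in one call, which is exactly the "treat derived capabilities as part of the original" option mentioned above.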
>> As I began my original post:
>> "... i.e., how best to differentiate the examples where
>> X should be allowed vs X should be prohibited."
>> you can come up with examples where /each/ approach is "right"
>> and the others *wrong*. :<
>>
>> Engineering: finding the least wrong solution to a problem.
>>
>> <frown> But, at least its interesting! :>
>
> This is why I object to the statement that it SHOULD ALWAYS be
> asynchronous. The only real answer is that "it depends", and lists can
> be made of what it depends on. Some examples include:
>
> Is the authorization even remotely revokable? (Sometimes it isn't)
You obviously can't revoke authorization for a fait accompli. But, what other authorizations, once granted, can't be rescinded? Some may leave you in a predicament (e.g., never being able to turn off the power) but expecting the capability system to know about these sorts of dependencies is, I think, too much.
> What is the effect on the requesting task if the authorization goes away
> unexpectedly?
The designer of the holding task would have to consider that in how
the task's actions and recoveries are structured. What would it have
done had the authorization not been granted in the first place?
> What is the effect of delaying the revocation?
The big problem with "being considerate" is that it encourages others
to be exploitative. There is no downside to their "selfishness" so,
"why not?"! "Heads, I win; tails, you lose"

OTOH, if you take a heavy-handed approach (unilaterally revoking
capabilities) then sloppy coders pay a price -- by having their code
*crash* (presumably, users will then opt to avoid applications from
those "developers")

[There's no other pressure I can bring to bear on them to "do the
right thing"]
On 05/11/13 19:56, Don Y wrote:
> Thanks! Always willing to add texts to my collection -- especially
> if they don't take up any PHYSICAL space!
>
> (first glance, this looks like "early" material on the subject...
> when hardware was often part of a solution :< )
You're welcome. I recommend you look at that whole thread.

comp.arch is particularly interesting at the moment since there is a
radically new processor architecture being slowly described - as
patents are filed. The protagonists in the discussion (principally
Glew and Goddard) would both like to make a caps architecture machine
but don't know how to sell it.

The new processor architecture will, it is claimed, work well with
existing code, with roughly an order of magnitude speedup. They've
managed to get DSP performance!

I haven't followed all the discussions in detail, but they have
serious previous form and haven't been shot down yet.
Hi Tom,
On 11/5/2013 2:47 PM, Tom Gardner wrote:
> On 05/11/13 19:56, Don Y wrote:
>> Thanks! Always willing to add texts to my collection -- especially
>> if they don't take up any PHYSICAL space!
>>
>> (first glance, this looks like "early" material on the subject...
>> when hardware was often part of a solution :< )
>
> You're welcome.
>
> I recommend you look at that whole thread.
Ah, sorry. I thought you were only pointing out the book...
> comp.arch is particularly interesting at the moment since
> there is a radically new processor architecture being
> slowly described - as patents are filed. The protagonists
> in the discussion (principally Glew and Goddard) would
> both like to make a caps architecture machine but don't
> know how to sell it.
Ah, so pertinent to this (my) thread!

The problem, as I see it, is that it's hard to take advantage of what
capabilities have to offer "retroactively". Like trying to apply OOP
to a procedural implementation.
> The new processor architecture will, it is claimed,
> work well with existing code, with roughly an order
> of magnitude speedup. They've managed to get DSP
> performance!
I can't see how this speedup is a consequence of the capabilities themselves -- "with existing code". But, I've learned that software folks can be incredibly creative when they opt to look at something from a different -- nontraditional -- viewpoint.
> I haven't followed all the discussions in detail,
> but they have serious previous form and haven't been
> shot down yet.
I will go a-hunting... thanks!
On 05/11/13 22:19, Don Y wrote:
>> The new processor architecture will, it is claimed,
>> work well with existing code, with roughly an order
>> of magnitude speedup. They've managed to get DSP
>> performance!
>
> I can't see how this speedup is a consequence of the capabilities
> themselves -- "with existing code".
Correct, it isn't. CAP is a topic that came up as part of the
non-objectives of the new architecture.

Example of just how different the architecture is: it doesn't have
registers and isn't a stack machine. Internal micro-ops work with a
use-it-or-lose-it "belt", where a "register" address is of the form
"the fifth to last arithmetic result".