
Managing "capabilities" for security

Started by Don Y November 1, 2013
Hi Don


On Thu, 21 Nov 2013 09:50:38 -0700, Don Y <This.is@not.Me> wrote:

>If you needed a special ticket to use the socket service, how did you
>talk to *any* service as you had no knowledge of what was "local" vs.
>"remote"?  I.e., that was the appeal of FLIP!
It probably would be faster to read the Amoeba documentation rather than to ask me these questions. 8-) Hopefully the following will all be clear.
---

I covered already how every process has a set of tickets - either default or explicitly acquired - that authorize it to make particular "system calls". Technically Amoeba has only a handful of kernel calls which support message based IPC, however messaging to/from the standard servers largely is hidden by a library that presents a Un*x-like "system" API.

FLIP itself is a service for which every process needs a ticket. However, FLIP is special because it is involved in *every* IPC call, so access to the local FLIP server is unconditional and the ticket specifies only whether FLIP should permit the process to talk to remote servers.

FLIP isn't part of the kernel, it is a separate task. However, FLIP is tightly integrated with the kernel and has direct access to certain kernel functions. If you're familiar with Minix, in that model FLIP would exist as a "driver task".

Each process has a bi-directional message "port" by which it talks to FLIP. The message port can have associated with it an optional service identifier [which is a network wide UUID]. Every program which wants to be a server requires a service identifier in order to be located. Every instance of a particular service will use the same identifier, which also is used as the port-id in their service tickets.

The kernel IPC calls copy (relatively) small request/response messages between the FLIP server and these process ports.

All other communication occurs *within* FLIP. Requests may specify optional data buffers which FLIP memmaps for direct access or remaps for local handoff. For remote messages FLIP copies data between the local process and the remote host. FLIP itself implements a large datagram service within the Amoeba network - foreign transport protocols such as TCP/IP are implemented at higher levels using a normal service. [The network driver is separate from FLIP.]

So how is it done?

For each local process, FLIP creates a "FLIP port", yet another UUID which is associated with the process's message port. FLIP locates servers based on their service ids, but because there may be multiple instances, to support stateful servers FLIP maintains associations between clients and servers based on their unique FLIP ports.

The FLIP port globally identifies a particular process within the Amoeba network and it can be used to track the process through host migrations (if they occur).

FLIP implements a distributed name service: it locates servers by looking up the port id in the service ticket associated with the client request. If the service is unknown, FLIP broadcasts a name resolution query specifying the port id to its FLIP peers to locate servers within the network. Replies (if any) identify the hosts and FLIP ports of the server processes, which then are associated with the service's port id entry.

Once a server's FLIP port is known, FLIP copies messages between the local client and the server. If the server also is local, FLIP copies the client's message directly. If the server is remote, FLIP packs the message into a datagram addressed to the server's FLIP port and sends it to its peer on the server's host. [Analogous events happen for the server->client response.]

A send to a remote process may fail because the process has migrated (the old host will say it doesn't exist). If this happens, FLIP issues a new name resolution query specifying the specific FLIP port of the target process to try to find it again. If the process can be located, FLIP will retry the send. [Processes cannot migrate while sending a message, so migration of the remote process will never be the cause of a receive failure.]

Name entries for local services (processes) are removed when the last instance terminates. Entries for remote services are kept separately and eventually time out and are removed if not used.
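In pseudo-C, the client-side send path looks roughly like the following. This is a from-memory sketch with made-up names and types -- don't go looking for these identifiers in the sources:

------8<------8<------
/* From-memory sketch of the client-side send path described above.
 * Every name and type here is invented for illustration -- this is
 * NOT actual Amoeba/FLIP source. */

typedef struct { unsigned char b[6]; } svc_port;    /* service port from the ticket */
typedef struct { unsigned char b[8]; } flip_addr;   /* per-process FLIP port        */

typedef enum { SEND_OK, SEND_NO_SUCH_PROC, SEND_NET_FAIL } send_status;

/* assumed helpers: the name table, broadcast queries, datagram delivery.
 * The locate_*() calls refresh FLIP's host table; deliver() consults it. */
extern int         lookup_service(svc_port svc, flip_addr *server);  /* cached/local  */
extern int         locate_service(svc_port svc, flip_addr *server);  /* broadcast     */
extern int         locate_process(flip_addr server);   /* re-find one specific FLIP
                                                           port (it may have migrated) */
extern send_status deliver(flip_addr server, const void *msg, unsigned len);

send_status flip_send(svc_port svc, const void *msg, unsigned len)
{
    flip_addr server;

    /* resolve the ticket's service port to some server instance's FLIP port */
    if (!lookup_service(svc, &server) && !locate_service(svc, &server))
        return SEND_NO_SUCH_PROC;

    send_status st = deliver(server, msg, len);

    /* "no such process" from the old host: the server may have migrated,
     * so re-query for that specific FLIP port and retry the send once */
    if (st == SEND_NO_SUCH_PROC && locate_process(server))
        st = deliver(server, msg, len);

    return st;
}
------8<------8<------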
>> To do much of anything, a Mach server has to register a public service
>> port with send rights - which any task in the network can scan for and
>> try to connect to.  Aside from limiting the count of available send
>> rights to the port, there is no way to prevent anyone from connecting
>> to it.
>>
>> Only *after* determining that the connection is unwanted can the
>> server decide what to do about it.  Using MIG didn't affect this: you
>> couldn't encode port ids into the client using IDL - the only way for
>> a client to find the server was to look up its registered public port.
>
>No.  That's the way Mach was *applied* -- because they were looking
>towards "UNIX as an application".  *Obviously* (?) you need to be
>able to find services!
>
>Or, *do* you?
>
>What's to stop me from creating a task with the following namespace:
> CurrentTime
> Display
>AND NOTHING MORE!
Nothing stops *your system* from doing anything it wants ... however, we _were_ talking about Mach.

George
Hi George,

On 11/24/2013 3:32 AM, George Neuner wrote:
> On Thu, 21 Nov 2013 09:50:38 -0700, Don Y<This.is@not.Me> wrote:
>
>> If you needed a special ticket to use the socket service, how did you
>> talk to *any* service as you had no knowledge of what was "local" vs.
>> "remote"?  I.e., that was the appeal of FLIP!
>
> It probably would be faster to read the Amoeba documentation rather
> than to ask me these questions.  8-)
I'd *love* to read them! Anything beyond what I've *already* read, that is! :> (remember, I tend to keep big archives).

So, I went looking for more recent documents/sources (my Amoeba archive ends in the mid/late 90's). '"256-bit" Amoeba' seemed like a safe search criterion! This has been an amusing experience! :>

The Wikipedia page <http://en.wikipedia.org/wiki/Amoeba_distributed_operating_system> doesn't describe (nor mention!) capabilities but offers a clue:

"Each thread was assigned a 48-bit number called its "port", which would serve as its unique, network-wide "address" for communication."

I.e., this is consistent with the 48-bit "server port" mentioned in other descriptions of 128-bit capabilities (though nothing about that precludes a larger capability implementation!)

In <http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.52.8291&rep=rep1&type=pdf> (forgive the wrap), Tanenbaum et al. claim, in describing Amoeba *3.0*:

"The structure of a capability is shown in Fig. 2. It is 128 bits long and contains four fields. The first field is the server port, and is used to identify the (server) process that manages the object. It is in effect a 48-bit random number chosen by the server."

Later, when discussing planned changes:

"Amoeba 4.0 uses 256-bit capabilities, rather than the 128-bit capabilities of Amoeba 3.0. The larger Check field is more secure against attack, and other security aspects have also been tightened, including the addition of secure, encrypted communication between client and server. Also, the larger capabilities now have room for a location hint which can be exploited by the SWAN servers for locating objects in the wide-area network. Third, all the fields of the new 256-bit capability are now all aligned at 32-bit boundaries which potentially may give better performance."

OK, this lends strength to your comment that capabilities were "enlarged" to 256 bits.

But wait, there's more! :>

For example <doc.utwente.nl/55885/1/andrew.pdf> (Tanenbaum et al. in "The Communications of the ACM", Dec 1990), in describing "the current state of the system (Amoeba 4.0)" claims:

"The structure of a capability is shown in Figure 2. It is 128 bits long and contains four fields. The first field is the /server port/, and is used to identify the (server) process that manages the object. ..."

Later, it states:

"Amoeba 5.0 will use 256-bit capabilities, rather than the 128-bit capabilities of Amoeba 4.0. The larger Check field will be more secure against attack. Other security aspects will also be tightened, including the addition of secure, encrypted communication between client and server. Also, the larger capabilities will have room for a location hint which can be exploited by the SWAN servers for locating objects in the wide-area network. Third, all the fields of the new 256-bit capability will be aligned at 32-bit boundaries, which potentially may give better performance."

Sure looks like someone recycled a "paper" :> Did you catch the changes in the versions mentioned in each? ;)

OK, so I went through *my* archive. The latest documentation I have is *5.3*! Everything there mentions 128-bit capabilities. Textbook (previously quoted), papers, user/system/programming manuals, etc.

Hmmm... perhaps a "consistent typo" <grin> (where have I seen *that* sort of thing before?)

Dig through my copy of the sources.
From amoeba.h (note am_types.h defines the other types mentioned here without surprises):

------8<------8<------
#define PORTSIZE        6

#include <am_types.h>

typedef struct {
        int8    _portbytes[PORTSIZE];
} port;

typedef struct {
        int8    prv_object[3];  /* becomes an objnum in amoeba 4      */
        uint8   prv_rights;     /* becomes a rights_bits in amoeba 4  */
        port    prv_random;
} private;

typedef struct {
        port    cap_port;
        private cap_priv;
} capability;
------8<------8<------

I.e., a "capability" is *still* 128 bits as of Amoeba 5.3 despite Tanenbaum's published plans to enlarge it "in Amoeba 4.0"! And "in Amoeba 5.0"!

OK, maybe there is "something newer" out there. Visit <ftp://ftp.cs.vu.nl/pub/amoeba/> and there's only an "amoeba5.3" directory. So, that's consistent with my most recent entities. And, the documents there seem to be identical to mine. As do the sources. <frown>

Are you sure the documents you're referencing don't, for example, contain passages like:

"The structure of a capability is shown in Figure 2. It is 128 bits long and contains four fields. The first field is the /server port/, and is used to identify the (server) process that manages the object. ..."

And, later:

"Amoeba *6.0* will use 256-bit capabilities, rather than the 128-bit capabilities of Amoeba *5.0*. The larger Check field will be more secure against attack. Other security aspects will also be tightened, including the addition of secure, encrypted communication between client and server. Also, the larger capabilities will have room for a location hint which can be exploited by the SWAN servers for locating objects in the wide-area network. Third, all the fields of the new 256-bit capability will be aligned at 32-bit boundaries, which potentially may give better performance."

<grin>

[Actually, I'd enjoy looking at them and chasing down any sources if for no other reason than to enhance my archive! Google hasn't been very helpful to me.]

Returning to the Wikipedia page cited above, it mentions:

"Development at the Vrije Universiteit was stopped: the files in the latest version (5.3) were last modified on 12 February 2001."

<frown> OK, so I guess my archive is up-to-date in light of that. It goes on to say:

"Recent development is carried forward by Dr. Stefan Bosse at BSS Lab."

From this, Google pointed me at <http://fsd-amoeba.sourceforge.net/start.html>

But, grep(1)-ing the documents there seems to suggest capabilities are still 128-bits. [I've downloaded the sources available there so I can check in case this is a case of the documents simply not being updated to reflect changes in the implementation. I suspect that *won't* be the case for something as fundamental as this!]

I.e., not only don't I see any *concrete* reference to 256-bit capabilities:

"In the original version yes ... later they went to a 256-bit ticket to include more end-point information and better crypto-signing."

"Yes. However, the enlarged capability was an improvement over the original because it carried information on client(s) authorized to use the capability."

"That's part of the reason Amoeba used wide tickets. [1st version used 80-bits without the crypto-signing field, 128 bits in all. 2nd version capabilities were 256 bits]."

"Later versions of the capability system widened tickets to include an authorized user/group ID field protected by a 2nd crypt signature. [And also enlarged the rights field.]"

"That's only true of the original implementation. Later versions expanded the ticket to include an "authorized user" field which variously could be a process, host or host:process identifier, or an admin-level user/group id. The user field was protected by a 2nd crypt signature. This enabled servers to restrict who could use a ticket."

"About 4 times now I have told you that Amoeba could restrict the user of a ticket. No, the kernel doesn't know who has tickets or how many copies exist. Yes, a server can tell who is presenting it a ticket and decide whether the user is authorized."

So, to *what* are you referring that gives you this extra information? I *have* to "ask [you] these questions" given that I can't "read the Amoeba documentation" that apparently contains these references!

Share! :>
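(And the fields in that amoeba.h excerpt really do add up: 6 + 3 + 1 + 6 = 16 bytes. A trivial check, assuming the Amoeba headers are on the include path:)

------8<------8<------
/* assumes the amoeba.h excerpted above (and am_types.h) are reachable;
 * int8/uint8 are 1-byte types and every member is a byte or byte array,
 * so no padding gets inserted */
#include <assert.h>
#include "amoeba.h"

int main(void)
{
    /* 6 (port) + 3 (object) + 1 (rights) + 6 (check) = 16 bytes = 128 bits */
    assert(sizeof(capability) == 16);
    return 0;
}
------8<------8<------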
> Hopefully the following will all be clear.
> ---
>
> I covered already how every process has a set of tickets - either
> default or explicitly acquired - that authorize it to make particular
> "system calls".  Technically Amoeba has only a handful of kernel calls
> which support message based IPC, however messaging to/from the
> standard servers largely is hidden by a library that presents a
> Un*x-like "system" API.
>
> FLIP itself is a service for which every process needs a ticket.
That's not the case in the documentation I've cited!

In the second paper mentioned at the outset, it also states:

"[Mach's CoW usage in IPC] Amoeba does not do this because we consider the key issue in a distributed system to be the communication speed between processes running on /different/ machines. That is the normal case. Only rarely will two processes happen to be on the same physical processor in a true distributed system, especially if there are hundreds of processors; therefore we have put a lot of effort into optimizing the /distributed/ case, not the /local/ case. This is clearly a philosophical difference."

It hardly seems likely that they would introduce extra mechanism for processes to access remote services (i.e., any capability whose server port is not found to be local) when they *expect* this to be the normal case. I.e., requiring a capability to use the network services as you mentioned previously.

To wit, Tanenbaum's _Distributed Operating Systems_ text, section 7.5.1, explaining RPC primitives:

"The RPC mechanism makes use of three principal kernel primitives:

1. get_request -- indicates a server's willingness to listen on a port
2. put_reply -- done by a server when it has a reply to send
3. trans -- send a message from client to server and wait for the reply

The first two are used by servers. The third is used by clients to /transmit/ a message and wait for a reply. All three are true system calls, that is, they do not work by sending a message to a communication server thread. (If processes are able to send messages, why should they have to contact a server for the purpose of sending a message?)"

Instead, the "server port" is extracted from the "ticket" and "resolved" in the RPC code (which may, in fact, run as a separate kernel *thread*/process -- but access to it is universally granted!).

[The "Bosse" version gives capability based control over the TCP/IP stack -- but makes no mention of requiring additional capabilities to use FLIP for "typical" IPC (which is actually RPC in the "normal case" from the quote immediately above.)]
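I.e., the canonical client/server skeleton is just the following (paraphrasing the text -- I'm not reproducing the exact Amoeba prototypes, so treat the signatures and the header layout as illustrative stand-ins):

------8<------8<------
/* Rough shape of an Amoeba RPC client and server, paraphrased from the
 * text quoted above.  The struct layouts and prototypes here are
 * stand-ins, NOT the actual Amoeba declarations. */

typedef struct { unsigned char bytes[16]; } capability;  /* stand-in for the
                                                            128-bit ticket   */
typedef struct {
    capability h_cap;       /* ticket naming the server port + object */
    int        h_command;   /* operation requested                    */
    int        h_status;    /* result code in the reply               */
} header;
typedef unsigned bufsize;

/* the three primitives named in the text -- signatures are guesses */
extern bufsize trans(header *req, char *reqbuf, bufsize reqlen,
                     header *rep, char *repbuf, bufsize replen);
extern bufsize get_request(header *req, char *buf, bufsize len);
extern void    put_reply(header *rep, char *buf, bufsize len);

/* client side: one trans() call sends the request and blocks for the reply */
int read_object(capability cap, char *data, bufsize len)
{
    header req, rep;
    req.h_cap = cap;
    req.h_command = 1;      /* e.g. "read" -- command codes are server-defined */
    return (int)trans(&req, (char *)0, 0, &rep, data, len);
}

/* server side: loop accepting requests on its port and replying */
void server_loop(void)
{
    static char buf[4096];
    for (;;) {
        header req, rep;
        bufsize got = get_request(&req, buf, sizeof buf);
        /* ...validate the ticket's check field, act on req.h_command... */
        rep.h_status = (got <= sizeof buf) ? 0 : -1;
        put_reply(&rep, buf, 0);
    }
}
------8<------8<------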
> However, FLIP is special because it is involved in *every* IPC call,
> so access to the local FLIP server is unconditional and the ticket
> specifies only whether FLIP should permit the process to talk to
> remote servers.
I don't see that anywhere. The ticket holder has no idea *where* the "server port" mentioned in the ticket resides. It may have resided on the local host when it was issued and since migrated to a remote host. The whole point of FLIP was to make all that transparent to the user.
> FLIP isn't part of the kernel, it is a separate task. However, FLIP is
> tightly integrated with the kernel and has direct access to certain
> kernel functions. If you're familiar with Minix, in that model FLIP
> would exist as a "driver task".
>
> Each process has a bi-directional message "port" by which it talks to
> FLIP.  The message port can have associated with it an optional
> service identifier [which is a network wide UUID].  Every program
> which wants to be a server requires a service identifier in order
> to be located.  Every instance of a particular service will use the
> same identifier, which also is used as the port-id in their service
> tickets.
>
> The kernel IPC calls copy (relatively) small request/response messages
> between the FLIP server and these process ports.
>
> All other communication occurs *within* FLIP.  Requests may specify
> optional data buffers which FLIP memmaps for direct access or remaps
> for local handoff.  For remote messages FLIP copies data between the
> local process and the remote host.  FLIP itself implements a large
> datagram service within the Amoeba network - foreign transport
> protocols such as TCP/IP are implemented at higher levels using a
> normal service.  [The network driver is separate from FLIP.]
>
> So how is it done?
>
> For each local process, FLIP creates a "FLIP port", yet another UUID
> which is associated with the process's message port.  FLIP locates
> servers based on their service ids, but because there may be multiple
> instances, to support stateful servers FLIP maintains associations
> between clients and servers based on their unique FLIP ports.
"All low-level comunication in Amoeba is based on FLIP addresses. Each process has exactly one FLIP address: a 64-bit random number chosen by the system when the process is created. If the process ever migrates, it takes its FLIP address with it. If the network is ever reconfigured, so that all machines are assigned new (hardware) network numbers or network addresses, the FLIP addresses still remain unchanged." I.e., each "local" process registers itself with the FLIP layer. Any *local* message goes through FLIP to see that the destination "port" (FLIP address) resides on the local host. Similarly, if the FLIP layer has "been in contact" with some remote FLIP address(es), if caches the information needed to reconnect with them in the future. If ever it is unable to contact the expected (or, unknown) host, it resorts to a (series of ever widening) broadcast queries. This is no different from how much of IP works. Except for the added layer of there mobile, "virtual" addresses (ports). [I spent a lot of time wading through the FLIP implementation as the same sorts of issues have to be addressed in *any* distributed OS. E.g., Amoeba's portable "service ports" are similar to Mach's *transferable* ports (esp receive rights). And, as it is far more common for a Mach port to be "moved" than an Amoeba *process* (to which the FLIP address is bound!) the costs of locating and tracking such entities across the system is of greater concern in a Mach-based approach! Broadcast queries, self-publishing, periodic tokens, etc. Lots of ways to deal with systemwide objects WITHOUT a central "control". But all of them have drawbacks and consequences so picking what's right for a particular *application* (no, not OS!) is tricky]
> The FLIP port globally identifies a particular process within the
> Amoeba network and it can be used to track the process through host
> migrations (if they occur).
>
> FLIP implements a distributed name service: it locates servers by
> looking up the port id in the service ticket associated with the
> client request.  If the service is unknown, FLIP broadcasts a name
> resolution query specifying the port id to its FLIP peers to locate
> servers within the network.  Replies (if any) identify the hosts and
> FLIP ports of the server processes, which then are associated with the
> service's port id entry.
>
> Once a server's FLIP port is known, FLIP copies messages between the
> local client and the server.  If the server also is local, FLIP copies
> the client's message directly.  If the server is remote, FLIP packs
> the message into a datagram addressed to the server's FLIP port and
> sends it to its peer on the server's host.
> [Analogous events happen for the server->client response.]
>
> A send to a remote process may fail because the process has migrated
> (the old host will say it doesn't exist).  If this happens, FLIP
> issues a new name resolution query specifying the specific FLIP port
> of the target process to try to find it again.  If the process can be
> located, FLIP will retry the send.
> [Processes cannot migrate while sending a message, so migration of the
> remote process will never be the cause of a receive failure.]
>
> Name entries for local services (processes) are removed when the last
> instance terminates.  Entries for remote services are kept separately
> and eventually time out and are removed if not used.
We don't disagree on our understandings of how FLIP is implemented. What I *don't* see is any "gating function" that prevents tasks from accessing a server that is remote (assuming I have a valid ticket bearing that server's "server port" -- which *may* migrate from local to remote in the course of the ticket's lifetime in my context).

I don't see anything that imposes additional "authorization" checks on local vs remote transactions in the sources.

So, you're reading from a different play book than me. I just want to peek over your shoulder and read along! :>
>>> To do much of anything, a Mach server has to register a public service
>>> port with send rights - which any task in the network can scan for and
>>> try to connect to.  Aside from limiting the count of available send
>>> rights to the port, there is no way to prevent anyone from connecting
>>> to it.
>>>
>>> Only *after* determining that the connection is unwanted can the
>>> server decide what to do about it.  Using MIG didn't affect this: you
>>> couldn't encode port ids into the client using IDL - the only way for
>>> a client to find the server was to look up its registered public port.
>>
>> No.  That's the way Mach was *applied* -- because they were looking
>> towards "UNIX as an application".  *Obviously* (?) you need to be
>> able to find services!
>>
>> Or, *do* you?
>>
>> What's to stop me from creating a task with the following namespace:
>> CurrentTime
>> Display
>> AND NOTHING MORE!
>
> Nothing stops *your system* from doing anything it wants ... however,
> we _were_ talking about Mach.
Ah, but there's no practical difference between the two in that regard! :> Mach "out of the box" can do exactly the same thing! It's just a matter of which port you pass to each task as you create the task that defines its "namespace".

I.e., instead of using a SINGLE, SHARED, GLOBAL namespace "for all", you could just as easily build one for each task. Or, different ones for different *types* of tasks.

I.e., it is trivial to pass (send rights) for *different* ports to each task on instantiation and have the receive rights for each of those handled by the same "netmsgserver". But, as that *NAME* server would be able to identify on which port any particular name "lookup()" request (IPC/RPC) was issued, it could resolve the name in a context DEFINED FOR and associated with the task(s) that have send rights *to* that particular port!

E.g., all "user" tasks could have "/dev" removed from their namespaces simply by eliding those names from the "tables" that you build in the netmsgserver to service the ports (send rights) *given* to "user tasks". (A sketch of what I mean is at the end of this post.)

But, the UNIX-orientation crept in, yet again. UNIX has a single shared namespace so why not implement a single shared namespace?! :<

It apparently never occurred to them that separate namespaces are a powerful tool! E.g., "UNIX namespace", "DOS namespace", "task 1's namespace", etc. And, all *might* map to similar/shared objects, concurrently! "COM1:" in the DOS namespace maps to the same device that "/dev/cuaa0" maps to in the UNIX namespace -- neither of which exist in task 1's namespace because he's not supposed to be using that sort of I/O...

[C's email machine has died. I'll have to fix it tonight lest I get the pouty face come tomorrow! (sigh) Sure would be nice NOT to have to be an IT department!! :< ]
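Here's that sketch. The table contents, port numbers and helper names are all made up -- this isn't actual Mach/netmsgserver code, just the shape of the idea:

------8<------8<------
/* Per-task namespaces inside a single name server: resolution is keyed
 * off WHICH request port the lookup() arrived on, so each task (or class
 * of tasks) sees only the names in its own table.  Everything below is
 * invented for illustration. */

#include <string.h>

typedef int mach_port_t;                 /* stand-in for the real type */

typedef struct { const char *name; mach_port_t service; } entry;
typedef struct { mach_port_t request_port; const entry *table; int count; } namespc;

/* e.g. "user" tasks were handed send rights to port 42 and see ONLY this: */
static const entry user_names[] = {
    { "CurrentTime", 1001 },
    { "Display",     1002 }
};

static const namespc spaces[] = {
    { 42, user_names, 2 }
    /* ...one row per request port handed out at task creation... */
};

/* resolve 'name' in the context defined by the port the request arrived on;
 * returns the service port, or 0 if the name isn't visible to that task */
mach_port_t lookup(mach_port_t arrived_on, const char *name)
{
    for (unsigned i = 0; i < sizeof spaces / sizeof spaces[0]; i++) {
        if (spaces[i].request_port != arrived_on)
            continue;
        for (int j = 0; j < spaces[i].count; j++)
            if (strcmp(spaces[i].table[j].name, name) == 0)
                return spaces[i].table[j].service;
        return 0;   /* the name simply doesn't exist in this task's world */
    }
    return 0;       /* unknown request port */
}
------8<------8<------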
Hi Don,


Hope you and C had a nice Thanksgiving.



On Sun, 24 Nov 2013 22:30:56 -0700, Don Y <This.is@not.Me> wrote:

>   :
>
>OK, so I went through *my* archive.  The latest documentation I
>have is *5.3*!  Everything there mentions 128-bit capabilities.
>Textbook (previously quoted), papers, user/system/programming manuals,
>etc.
>
>   :
>
>I.e., not only don't I see any *concrete* reference to 256-bit
>capabilities:
Sorry for the confusion.

When I played with Amoeba - circa ~1990 - the basic OS had 128-bit capabilities, but extended capabilities were available as patches. At that time there were a number of competing versions (with slightly different structure) providing additional and longer fields [I actually worked with 2 different ones].

I've tried to search some for the patches, but most of the old Amoeba archives seem to be gone now. When I was in school, there were dozens of universities using Amoeba.

I didn't pay a lot of attention to it after leaving school, but since Tanenbaum announced that (some version of) extended capabilities were to be in version 4, I assumed that they had agreed on what they wanted. From what I can see now, it appears that nothing ever happened.

FWIW: the most up to date version of Amoeba appears to be the Fireball distribution (http://fsd-amoeba.sourceforge.net). It's based on 5.3 with a number of additional goodies. However it still doesn't include any version of extended capabilities.
>> However, FLIP is special because it is involved in *every* IPC call,
>> so access to the local FLIP server is unconditional and the ticket
>> specifies only whether FLIP should permit the process to talk to
>> remote servers.
>
>I don't see that anywhere.  The ticket holder has no idea *where* the
>"server port" mentioned in the ticket resides.  It may have resided on
>the local host when it was issued and since migrated to a remote
>host.  The whole point of FLIP was to make all that transparent to the
>user.
Unless they changed things dramatically, it should be apparent in the code for exec() and in the kernel IPC code. There was an extended exec call which took flags and an array of tickets to set as defaults for the child process.

There wasn't any explicit FLIP "ticket" because FLIP wasn't an addressable service. IIRC, the same-host restriction was a bit in the process's local message port identifier (which the kernel provides to FLIP when the process makes an IPC call).
>> For each local process, FLIP creates a "FLIP port", yet another UUID
>> which is associated with the process's message port.  FLIP locates
>> servers based on their service ids, but because there may be multiple
>> instances, to support stateful servers FLIP maintains associations
>> between clients and servers based on their unique FLIP ports.
>
> "All low-level communication in Amoeba is based on FLIP addresses.
> Each process has exactly one FLIP address: a 64-bit random number
> chosen by the system when the process is created.  If the process
> ever migrates, it takes its FLIP address with it.  If the network
> is ever reconfigured, so that all machines are assigned new
> (hardware) network numbers or network addresses, the FLIP addresses
> still remain unchanged."
When I used Amoeba, FLIP ports were 48-bits, same as server ports.
>We don't disagree on our understandings of how FLIP is implemented.
>What I *don't* see is any "gating function" that prevents tasks from
>accessing a server that is remote (assuming I have a valid ticket
>bearing that server's "server port" -- which *may* migrate from
>local to remote in the course of the ticket's lifetime in my context).
>
>I don't see anything that imposes additional "authorization" checks
>on local vs remote transactions in the sources.
>
>So, you're reading from a different play book than me.  I just
>want to peek over your shoulder and read along!  :>
It could be something that simply faded away. Tanenbaum and Mullender famously disagreed about having special support for (semi-)private workstations and servers. Their groups created early versions of Amoeba that had different extensions.

George
Hi George,

On 12/2/2013 5:40 PM, George Neuner wrote:
> Hope you and C had a nice Thanksgiving.
Yup -- homemade pizza! :) I assume "mom" refused all (most) efforts for help?
>> OK, so I went through *my* archive.  The latest documentation I
>> have is *5.3*!  Everything there mentions 128-bit capabilities.
>> Textbook (previously quoted), papers, user/system/programming manuals,
>> etc.
>>
>>    :
>>
>> I.e., not only don't I see any *concrete* reference to 256-bit
>> capabilities:
>
> Sorry for the confusion.
>
> When I played with Amoeba - circa ~1990 - the basic OS had 128-bit
> capabilities, but extended capabilities were available as patches.  At
> that time there were a number of competing versions (with slightly
> different structure) providing additional and longer fields
> [I actually worked with 2 different ones].
OK.
> I've tried to search some for the patches, but most of the old Amoeba
> archives seem to be gone now.  When I was in school, there were dozens
> of universities using Amoeba.
As I said, I wasn't able to find anything -- I figured "256 bit amoeba" would be about as *vague* as I could devise for a search criterion (not even mentioning "capabilities"!)
> I didn't pay a lot of attention to it after leaving school, but since
> Tanenbaum announced that (some version of) extended capabilities were
> to be in version 4, I assumed that they had agreed on what they
> wanted.  From what I can see now, it appears that nothing ever
> happened.
So, a learning opportunity is lost. It would have been informative for them to crank out another paper explaining what the problems with the 256 bit implementation were that caused it not to be pursued "formally".

I'd also like to have seen how they handled passing the capability to a surrogate (e.g., how do you interpose a "debugger" agent without the actor being debugged having to "cooperate", etc.)
> FWIW: the most up to date version of Amoeba appears to be the Fireball
> distribution (http://fsd-amoeba.sourceforge.net).  It's based on 5.3
> with a number of additional goodies.  However it still doesn't include
> any version of extended capabilities.
Yes. I pulled down the sources and see not much has *significantly* changed. Actually, it appears the guy working on that release has gone in directions that AST had "belittled" in previous pubs. :>
>>> However, FLIP is special because it is involved in *every* IPC call,
>>> so access to the local FLIP server is unconditional and the ticket
>>> specifies only whether FLIP should permit the process to talk to
>>> remote servers.
>>
>> I don't see that anywhere.  The ticket holder has no idea *where* the
>> "server port" mentioned in the ticket resides.  It may have resided on
>> the local host when it was issued and since migrated to a remote
>> host.  The whole point of FLIP was to make all that transparent to the
>> user.
>
> Unless they changed things dramatically, it should be apparent in the
> code for exec() and in the kernel IPC code.  There was an
> extended exec call which took flags and an array of tickets to set as
> defaults for the child process.
>
> There wasn't any explicit FLIP "ticket" because FLIP wasn't an
> addressable service.  IIRC, the same-host restriction was a bit in the
> process's local message port identifier (which the kernel provides to
> FLIP when the process makes an IPC call).
I can see the "decision" where the "local" branch is invoked.
>>> For each local process, FLIP creates a "FLIP port", yet another UUID
>>> which is associated with the process's message port.  FLIP locates
>>> servers based on their service ids, but because there may be multiple
>>> instances, to support stateful servers FLIP maintains associations
>>> between clients and servers based on their unique FLIP ports.
>>
>> "All low-level communication in Amoeba is based on FLIP addresses.
>> Each process has exactly one FLIP address: a 64-bit random number
>> chosen by the system when the process is created.  If the process
>> ever migrates, it takes its FLIP address with it.  If the network
>> is ever reconfigured, so that all machines are assigned new
>> (hardware) network numbers or network addresses, the FLIP addresses
>> still remain unchanged."
>
> When I used Amoeba, FLIP ports were 48-bits, same as server ports.
>
>> We don't disagree on our understandings of how FLIP is implemented.
>> What I *don't* see is any "gating function" that prevents tasks from
>> accessing a server that is remote (assuming I have a valid ticket
>> bearing that server's "server port" -- which *may* migrate from
>> local to remote in the course of the ticket's lifetime in my context).
>>
>> I don't see anything that imposes additional "authorization" checks
>> on local vs remote transactions in the sources.
>>
>> So, you're reading from a different play book than me.  I just
>> want to peek over your shoulder and read along!  :>
>
> It could be something that simply faded away.  Tanenbaum and Mullender
> famously disagreed about having special support for (semi-)private
> workstations and servers.  Their groups created early versions of
> Amoeba that had different extensions.
<frown>

I think what I have to do is extend the "authorizations" that are implied (by the server backing the object) to also provide other "authorizations" for the underlying port/Handle itself.

I.e., if I want to *know* who is Holding a particular Handle, then disable the ability to copy and/or propagate that Handle when I give it to the "Holder". Then, I know any activity on that particular Handle *must* be coming from that Holder (because the kernel is involved in the duplication process -- unlike Amoeba's tickets). (I've sketched the shape of this below.)

Dunno. I'll have to see what sorts of problems this presents. And, the costs as tasks migrate (IIRC, Amoeba really didn't migrate tasks as much as "picking a suitable INITIAL HOST" for a particular task).

Biscotti, tonight. Have to start tackling the holiday chores :<
--don
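The sketch -- all names hypothetical; this is just the idea, not code that exists anywhere:

------8<------8<------
/* Since the kernel mediates every duplication/transfer of a Handle, it
 * can refuse either operation on a per-Handle basis.  With both flags
 * set, any activity on the Handle can only come from the original
 * Holder.  Names are hypothetical. */

typedef unsigned rights_t;

enum handle_flags {
    H_NO_COPY     = 1 << 0,     /* Holder may not duplicate this Handle    */
    H_NO_TRANSFER = 1 << 1      /* Holder may not pass it to another task  */
};

typedef struct {
    int      object;    /* what the Handle names (server-managed object)  */
    int      holder;    /* task the kernel issued this Handle to          */
    rights_t rights;    /* server-interpreted authorizations              */
    unsigned flags;     /* kernel-interpreted restrictions (above)        */
} handle;

/* kernel-side checks on duplicate/transfer attempts */
int handle_dup_allowed(const handle *h, int requesting_task)
{
    return h->holder == requesting_task && !(h->flags & H_NO_COPY);
}

int handle_transfer_allowed(const handle *h, int requesting_task)
{
    return h->holder == requesting_task && !(h->flags & H_NO_TRANSFER);
}
------8<------8<------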
Hi Don,

On Mon, 02 Dec 2013 19:34:53 -0700, Don Y <this@isnotme.com> wrote:

>So, a learning opportunity is lost.  It would have been informative
>for them to crank out another paper explaining what the problems
>with the 256 bit implementation were that caused it not to be
>pursued "formally".
There was quite a bit they did agree on. AFAIK the main arguments were over how much to increase signing strength and whether increased flexibility justified adding a second signature.

I worked with 2 of the proposed extensions:

  48: server port
  32: object
  32: rights
  48: user port
  64: signature
  32: reserved

and

  48: server port
  32: object
  32: rights
  64: signature_1
  48: user port
  64: signature_2

The first was Tanenbaum's own proposal, which actually defined only 224 bits but reserved bits for future strengthening of the signature.

The second was Queensland's proposal. It defined 288 bits (36 bytes) which was an unwieldy length but featured independent signing of the user port field which made delegation simpler: a surrogate could take an existing ticket and fill in a new user without needing the object server to re-sign the rights.

There also was talk of making Amoeba ids 64 bits, which Tanenbaum's structure could accommodate. Queensland's structure would have grown to 320 bits, but in either case all the fields would have been 32-bit aligned (so hopefully quicker to work with).

And it was expected that both memory sizes and network speeds would be significantly increased in the near future, so nobody really was worried about ticket sizes.

When Tanenbaum announced 256-bit capabilities for v4, I assumed that dual signatures had lost because everyone previously had agreed that the existing 48-bit signing was insufficient. It didn't seem likely that Queensland's dual signatures would be squeezed into 256 bits.

??? It's all academic at this point.
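Written out as structs, from memory (field names are approximate; ignore alignment/padding -- this only shows the layouts):

------8<------8<------
/* The two proposals as structs -- my own transcription from memory,
 * so treat the field names as approximate. */

typedef struct { unsigned char b[6]; } port48;          /* 48-bit port */

/* Tanenbaum's proposal: 256 bits total, 224 defined + 32 reserved */
typedef struct {
    port48        srv_port;        /* 48: server port   */
    unsigned int  object;          /* 32: object number */
    unsigned int  rights;          /* 32: rights        */
    port48        user_port;       /* 48: user port     */
    unsigned char signature[8];    /* 64: signature     */
    unsigned int  reserved;        /* 32: reserved      */
} cap256;

/* Queensland's proposal: 288 bits (36 bytes), dual signatures -- a
 * surrogate can re-target the user field without the object server
 * having to re-sign the rights */
typedef struct {
    port48        srv_port;        /* 48: server port                     */
    unsigned int  object;          /* 32: object number                   */
    unsigned int  rights;          /* 32: rights                          */
    unsigned char sig_rights[8];   /* 64: signature_1 (covers the above)  */
    port48        user_port;       /* 48: user port                       */
    unsigned char sig_user[8];     /* 64: signature_2 (covers the user)   */
} cap288;
------8<------8<------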
>I'd also like to have seen how they handled passing the capability
>to a surrogate (e.g., how do you interpose a "debugger" agent
>without the actor being debugged having to "cooperate", etc.)
Simple: you couldn't. Not that it would have been impossible, but Amoeba didn't provide any way to do it.

FLIP made connections based on *private* identifiers and maintained specific client:server pair associations for stateful services. It was possible for a debugger to impersonate the *public* port id of either endpoint (or even both), but it was not possible to break into an existing connection, nor could the debugger steer new connections to itself if the endpoint was already running ... FLIP would simply see the debugger as yet another instance of the requested target.

WRT surrogates, I'm not sure what really is the question. You either connect to the surrogate directly and pass the ticket in a message, or name the ticket and stick it in the directory where the surrogate can find it.
>And, the costs as tasks migrate (IIRC, Amoeba really didn't
>migrate tasks as much as "picking a suitable INITIAL HOST"
>for a particular task)
Yes. Amoeba didn't do task migration out of the box - it simply provided location transparency so that migration services could be added (relatively) easily on top of it.

George
Hi George,

On 12/4/2013 12:53 AM, George Neuner wrote:

>> So, a learning opportunity is lost.  It would have been informative
>> for them to crank out another paper explaining what the problems
>> with the 256 bit implementation were that caused it not to be
>> pursued "formally".
>
> There was quite a bit they did agree on.  AFAIK the main arguments
> were over how much to increase signing strength and whether increased
> flexibility justified adding a second signature.
>
> I worked with 2 of the proposed extensions:
>
> The first was Tanenbaum's own proposal, which actually defined only
> 224 bits but reserved bits for future strengthening of the signature.
>
> The second was Queensland's proposal.  It defined 288 bits (36 bytes)
> which was an unwieldy length but featured independent signing of the
> user port field which made delegation simpler: a surrogate could take
> an existing ticket and fill in a new user without needing the object
> server to re-sign the rights.
>
> There also was talk of making Amoeba ids 64 bits, which Tanenbaum's
> structure could accommodate.  Queensland's structure would have grown
> to 320 bits, but in either case all the fields would have been 32-bit
> aligned (so hopefully quicker to work with).
>
> And it was expected that both memory sizes and network speeds would be
> significantly increased in the near future, so nobody really was
> worried about ticket sizes.
So, if they had one (or two) implementations, why not release either/both of them? And/or a paper(s) describing the pros/cons of each? Academics seem to *live* to write papers!! :>
> When Tanenbaum announced 256-bit capabilities for v4, I assumed that
> dual signatures had lost because everyone previously had agreed that
> the existing 48-bit signing was insufficient.  It didn't seem likely
> that Queensland's dual signatures would be squeezed into 256 bits.
>
> ???  It's all academic at this point.
Yes. Perhaps I will try writing to AST to see if there are any odds and ends hiding in a private archive. From past experiences, that has been a workable means of getting at things that weren't formally "released" or that may have had too many blemishes and too little time to clean up.
>> I'd also like to have seen how they handled passing the capability
>> to a surrogate (e.g., how do you interpose a "debugger" agent
>> without the actor being debugged having to "cooperate", etc.)
>
> Simple: you couldn't.  Not that it would have been impossible, but
> Amoeba didn't provide any way to do it.
>
> FLIP made connections based on *private* identifiers and maintained
> specific client:server pair associations for stateful services.  It
> was possible for a debugger to impersonate the *public* port id of
> either endpoint (or even both), but it was not possible to break into
> an existing connection, nor could the debugger steer new connections
> to itself if the endpoint was already running ... FLIP would simply
> see the debugger as yet another instance of the requested target.
In Mach, I can slip an agent into any "communication path" (which, after all, is what all these ports represent) by manipulating the task (which, of course, is just another "object" -- represented by the same sorts of mechanisms!) directly (using an actor of very high privilege -- i.e., one holding send rights to the port that represents the task being modified!)
> WRT surrogates, I'm not sure what really is the question.  You either
> connect to the surrogate directly and pass the ticket in a message, or
> name the ticket and stick it in the directory where the surrogate can
> find it.
Imagine a file (ick). "Owner" owns the file. He creates a restricted capability (read+write access, but no delete) for that file and passes it to "Task". "Task" is charged with scanning the contents of the file (perhaps it is a structured array of data samples for a waveform) and altering them based on some particular criteria.

"Task" wants to invoke a service that is very good at analyzing waveforms -- "Analyzer". But, there is no reason "Analyzer" needs to modify the file -- analysis doesn't require the ability to alter the file's contents! So, "Task" wants to hand the file (as represented by its "capability") off to "Analyzer".

However, when Analyzer tries to access the contents of the file (which it does by talking to the server that *backs* that file object), the server notices that "Analyzer" is not the entity for which the capability was created/signed. Furthermore, "Task" can't create a (*further*) restricted capability for the file because the capability that "Task" holds is not the OWNER capability (at least one of the rights bits has been cleared... giving him *limited* -- read+write; no delete -- access to that object).

"Analyzer", in turn, may want to pass the file on to some other actor to perform some particular analysis aspect on behalf of Analyzer (who is doing this on behalf of Task).

So, you want to be able to take *any* object "Handle" (returning to my terminology) and adjust the "authorizations" downward... regardless of whether you are the "paramount" holder of that object. And, keep passing less_than_or_equal rights along the chain.
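I.e., the operation I want every Holder to have (purely illustrative names):

------8<------8<------
/* The behavior I want from Handles -- ANY holder can derive a further-
 * restricted Handle for delegation; rights only ever ratchet downward.
 * Names are mine, not an existing API. */

typedef unsigned rights_t;

#define R_READ   (1u << 0)
#define R_WRITE  (1u << 1)
#define R_DELETE (1u << 2)

typedef struct { int object; rights_t rights; } handle;

/* derive a new Handle whose rights are a subset of the parent's --
 * works no matter how attenuated the parent already is */
handle restrict_handle(handle parent, rights_t keep)
{
    handle child = parent;
    child.rights = parent.rights & keep;    /* can only remove, never add */
    return child;
}

/* Owner    -> Task:     read+write (no delete)
 *     handle task_h     = restrict_handle(owner_h,    R_READ | R_WRITE);
 * Task     -> Analyzer: read only
 *     handle analyzer_h = restrict_handle(task_h,     R_READ);
 * Analyzer -> helper:   still read only (or less)
 *     handle helper_h   = restrict_handle(analyzer_h, R_READ);            */
------8<------8<------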
>> And, the costs as tasks migrate (IIRC, Amoeba really didn't
>> migrate tasks as much as "picking a suitable INITIAL HOST"
>> for a particular task)
>
> Yes.  Amoeba didn't do task migration out of the box - it simply
> provided location transparency so that migration services could be
> added (relatively) easily on top of it.
Well, it's still a fair bit of work bottling up an executing task, all its state and shipping it off to another CPU in a potentially heterogeneous execution environment! :> One of Mach's big IPC/RPC performance hits came from having to convey type information in a neutral fashion across the network. (Not just big/little Endian-ness but, also, representations of floats, etc.) That's where my use of a VM (at the application level) is a win.