
64-bit embedded computing is here and now

Started by James Brakefield June 7, 2021
On 6/11/2021 0:09, Don Y wrote:
> On 6/10/2021 8:32 AM, Dimiter_Popoff wrote: >> On 6/10/2021 16:55, Don Y wrote: >>> On 6/10/2021 3:45 AM, Dimiter_Popoff wrote: >>> >>> [attrs elided] >>  > >> Don, this becomes way too lengthy and repeating itself. >> >> You keep on saying that a linear 64 bit address space means exposing >> everything to everybody after I explained this is not true at all. > > Task A has built a structure -- a page worth of data residing > at 0x123456.  It wants to pass this to TaskB so that TaskB can perform > some operations on it. > > Can TaskB access the data at 0x123456 *before* TaskA has told it > to do so? > > Can TaskB access the data at 0x123456 WHILE TaskA is manipulating it? > > Can TaskA alter the data at 0x123456 *after* it has "passed it along" > to TaskB -- possibly while TaskB is still using it?
If task A does not want any of the above, it just places the data in a page to which only it has access. Or it can allow read access only. *Why* do you confuse this with a linear address space? What does the one have to do with the other?
> >> You keep on claiming this or that about how I do things without >> bothering to understand what I said - like your claim that I use the MMU >> for "protection only". > > I didn't say that YOU did that.  I said that to be able to ignore > the MMU after setting it up, you can ONLY use it to protect > code from alteration, data from execution, etc.  The "permissions" > that it applies have to be invariant over the execution time of > ALL of the code. > > So, if you DON'T use it "for protection only", then you are admitting > to having to dynamically tweak it.
Of course dps is dealing with the MMU all the time. The purpose of the linear *logical* address space is just orthogonality and simplicity, like not having to remap passed addresses (which can have a lot of further implications, like the inability to use addresses embedded in another task's structures).
> > *THIS* is the cost that the OS incurs -- and having a flat address > space doesn't make it any easier!  If you aren't incurring that cost, > then you're not protecting something.
Oh but it does - see my former paragraph.
> >> NO, this is not true either. On 32 bit machines - as mine in >> production are - mapping 4G logical space into say 128M of physical >> memory goes all the way through page translation, block translation >> for regions where page translation would be impractical etc. >> You sound the way I would have sounded before I had written and >> built on for years what is now dps. The devil is in the detail :-). >> >> You pass "objects", pages etc. Well guess what, it *always* boils >> down to an *address* for the CPU. The rest is generic talk. > > Yes, the question is "who manages the protocol for sharing". > Since forever, you could pass pointers around and let anyone > access anything they wanted.  You could impose -- but not > ENFORCE -- schemes that ensured data was shared properly > (e.g., so YOU wouldn't be altering data that *I* was using). > > [Monitors can provide some structure to that sharing but > are costly when you consider the number of things that may > potentially need to be shared.  And, you can still poke > directly at the data being shared, bypassing the monitor, > if you want to (or have a bug)] > > But, you had to rely on programming discipline to ensure this > worked.  Just like you have to rely on discipline to ensure > code is "bugfree" (how's that worked for the industry?) > >> And if you choose to have overlapping address spaces when you >> pass a pointer from one task to another the OS has to deal with this >> at a significant cost. > > How does your system handle the above example?  How do you "pass" the > pointer from TaskA to TaskB -- if not via the OS?  Do you expose a > shared memory region that both tasks can use to exchange data > and hope they follow some rules?  Always use synchronization > primitives for each data exchange?  RELY on the developer to > get it right?  ALWAYS?
I already explained that. If task A wants to leave a message in task B's memory it goes through a call (signd7$ or whatever, there are variations) and the message is left there. If task B does not accept messages, the delivery won't even be attempted by the OS; the call returns a straight error (task does not support... whatever). If the message is illegal the result is similar. And if it happens that task A tries to directly access memory of task B which it is not supposed to, it will just get the "task A memory access violation. Press CR to kill it" message.

You have to rely on the developer to get it right only if they write supervisor code. Otherwise you need not. The signalling system works in user mode, though you can write supervisor-level code which uses it - but if you are allowed to write at that level you can mess up pretty much everything. I hope you are not trying to wrestle *that* one.
> > Once you've passed the pointer, how does TaskB access that data > WITHOUT having to update the MMU?  Or, has TaskB had access to > the data all along?
By just writing to the address task A has listed for the purpose. It is not in a protected area so the only thing the MMU may have to do is a tablewalk. *THIS* demonstrates the advantage of the linear logical address space very well.
> > What happens when B wants to pass the modified data to C? > Does the MMU have to be updated (C's tables) to grant that > access?  Or, like B, has C had access all along?  And, has > C had to remain disciplined enough not to go mucking around > with that region of memory until A *and* B have done modifying > it?
Either of them has its area which allows messaging. I don't see what you gain by making that more cumbersome to do (though no less possible).
> I don't allow anyone to see anything -- until the owner of that thing > explicitly grants access.  If you try to access something before it's > been made available for your access, the OS traps and aborts your > process -- you've violated the discipline and the OS is going to > enforce it!  In an orderly manner that doesn't penalize other > tasks that have behaved properly.
So well, how is the linear address space in your way of doing that? It certainly is not in my way when I do it.
> >> In a linear address space, you pass the pointer *as is* so the OS does >> not have to deal with anything except access restrictions. >> In dps, you can send a message to another task - the message being >> data the OS will copy into that tasks memory, the data being >> perfectly able to be an address of something in another task's > > So, you don't use the MMU to protect TaskA's resources from TaskB > (or TaskC!) access.  You expect LESS from your OS.
Why on Earth do you think that? And what does the linear address space have to do with *any* of it? Pages can be as small as 4k; why not just have them properly set up at task start (or at some later time), with the page which can receive messages open to access and the rest closed? And again, how on Earth do you see any relevance between a linear logical address space and all this?
> >> memory. If a task accesses an address it is not supposed to >> the user is notified and allowed to press CR to kill that task. > > What are the addresses "it's not supposed to?"  Some *subset* of > the addresses that "belong" to other tasks?  Perhaps I can > access a buffer that belongs to TaskB but not TaskB's code? > Or, some OTHER buffer that TaskB doesn't want me to see?  Do > you explicitly have to locate ("org") each buffer so that you > can place SOME in protected portions of the address space and > others in shared areas?  How do you change these distinctions > dynamically -- or, do you do a lot of data copying from > "protected" space to "shared" space?
This is up to the tasks; they can make system calls to mark pages non-swappable, write-protected etc., you name it. And again, ***this has nothing to do with the orthogonality of the logical address space***.
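(For reference, the same idea expressed with a standard API -- POSIX mprotect(), shown only as an analogue, not dps's actual interface: a task asks the OS to change the attributes of one of its own pages, and whether the logical address space is linear is irrelevant to that working.)

    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        long pg = sysconf(_SC_PAGESIZE);
        /* One page the task intends to share read-only with others. */
        void *buf = mmap(NULL, (size_t)pg, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
            return 1;
        /* Fill it while still writable ... */
        ((char *)buf)[0] = 42;
        /* ... then ask the OS to make it read-only.  Any later write
           faults, exactly like the "memory access violation" case. */
        if (mprotect(buf, (size_t)pg, PROT_READ) != 0)
            return 1;
        return 0;
    }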
> >> Then there are common data sections for groups of tasks etc., >> it is pretty huge really. > > Again, you expose things by default -- even if only a subset > of things.  You create shared memory regions where there are > no protections and then rely on your application to behave and > not access data (that has been exposed for its access) until > it *should*.
Why would you want to protect regions you don't want protected? The common data sections are quite useful when you write a largish piece of software which runs as multiple tasks in multiple windows - e.g. nuvi, the spectrometry software: it has multiple "display" windows, a command window in which one can also run dps scripts, etc. Why would you want to deprive them of that common section? They are all part of the same software package. But I suppose you are not that far yet, since you are still wrestling with scheduling and memory protection.
> > Everybody does this.  And everyone has bugs as a result.  You > are relying on the developer to *repeatedly* implement the sharing > protocol -- instead of relying on the OS to enforce that for you.
Not at all. And for I don't know which time, this has 0% to do with the linearity of the logical address space, which is what you objected to. Please let us just get back to it and agree with the obvious, which is that a linear logical address space has *nothing* to do with security.

Leave DPS alone. DPS is a large thing and even I could not tell you everything about it, even if I had the weeks it would take, simply because there are things I would have to look at to remember. Please don't try to tell the world how the OS you want to write is better than something you simply do not know. Tell me about the filesystem you have implemented for it (I'd say you have none, by the way you sound), how you implemented your tcp/ip stack, how your distributed file system works (in dps, I have dfs - a device driver which allows access to remote files just as if they were local, provided the dfs server has allowed access to that user/path etc.). Then tell me how you implemented windowing, how you deal with offscreen buffering, how you refresh which part and how you manipulate what gets pulled where, etc. etc. It is a long way to go, but once you have some screenshots it will be interesting to compare this or that. Mine are there to see and, well, I have not stopped working either.

Dimiter

======================================================
Dimiter Popoff, TGI http://www.tgi-sci.com
======================================================
http://www.flickr.com/photos/didi_tgi/
On 6/10/2021 3:13 PM, Dimiter_Popoff wrote:
> On 6/11/2021 0:09, Don Y wrote: >> On 6/10/2021 8:32 AM, Dimiter_Popoff wrote: >>> On 6/10/2021 16:55, Don Y wrote: >>>> On 6/10/2021 3:45 AM, Dimiter_Popoff wrote: >>>> >>>> [attrs elided] >>> > >>> Don, this becomes way too lengthy and repeating itself. >>> >>> You keep on saying that a linear 64 bit address space means exposing >>> everything to everybody after I explained this is not true at all. >> >> Task A has built a structure -- a page worth of data residing >> at 0x123456. It wants to pass this to TaskB so that TaskB can perform >> some operations on it. >> >> Can TaskB acccess the data at 0x123456 *before* TaskA has told it >> to do so > >> Can TaskB access the data at 0x123456 WHILE TaskA is manipulating it? >> >> Can TaskA alter the data at 0x123456 *after* it has "passed it along" >> to TaskB -- possibly while TaskB is still using it? > > If task A does not want any of the above it just places them in a > page to which it only has access. Or it can allow read access only. > *Why* do you confuse this with linear address space? What does the > one have to do with the other?
As I tease more of your design out of you, it becomes apparent why you "need" a flat address space. You push much of the responsibility for managing the environment into the developer's hands. *He* decides which regions of memory to share. He talks to the MMU (even if through an API). He directly retrieves values from other tasks. Etc. So, he must be able to get anywhere and do anything at any time (by altering permissions, if need be).

By contrast, I remove all of that from the developer's shoulders. I only expect a developer to be able to read the IDL for the objects that he "needs" to access and understand the syntax required for each such access (RMI/RPC). The machine LOOKS like it is a simple uniprocessor with no synchronization issues that the developer has to contend with, no network addressing, no cache or memory management, etc.

EVERYTHING is done *indirectly* in my world. Much like a file system interface (your developer doesn't directly write bytes onto the disk but, rather, lets the file system resolve a filename and create a file handle which is then used to route bytes to the media). The interface to EVERYTHING in my system is through such an extra layer of indirection. Because things exist in different address spaces, on different processors, etc., the OS mediates all such accesses. ALL of them!

Yes, it's inefficient. But, the processor runs at 500MHz and I have 244 of them in my (small!) alpha site -- I figure I can *afford* to be a little inefficient (especially as you want to *minimize* interactions between objects just as a general design goal).

Because of this, I can enforce fine-grained protection mechanisms; I can let you increment a counter -- but not decrement it (assuming a counter is an object). Or, let you read its contents but never alter them. Meanwhile, some other client (task) can reset it but never read it.

And, the OS can act as a bridge/proxy to an object residing on a different node -- what "address" do you access to reference Counter 34 on node 56? Who tells you that it resides on 56 and hasn't been moved to node 29?? Because the OS can provide that proxy interface, I can *move* an object between successive accesses -- without its clients knowing this has happened. As if the file server you are accessing had suddenly been replaced by another machine at a different IP address WHILE you were accessing files!

Likewise, because the access is indirect, I can interpose an agency on selective objects to implement redundancy for that object without the client using THAT interface ever knowing. Or, support different versions of an interface simultaneously (which address do you access to see the value of the counter as an unsigned binary long? which address to see it as a floating point number? which address to see it as an ASCII string?)

Note that I can do all of these things with a flat *or* overlapping address space. Because a task doesn't need to be able to DIRECTLY access anything -- other than the system trap!

You, on the other hand, have to build a different mechanism (e.g., your distributed filesystem) to access certain TYPES of objects (e.g., files) without concern for their location. That ability comes "free" for EVERY type of object in my world. It is essential as I expect to be interacting with other nodes continuously -- and those nodes can be powered up or down independent of my wishes.

Can I pull a board out of your MCA and expect it to keep running? Unplug one of my nodes (or cut the cable, light it on fire, etc.) and there will be a hiccup while I respawn the services/objects that were running on that node to another node. But, clients of those services/objects will just see a prolonged RMI/RPC (if one was in progress when the node was killed).

Note that I've not claimed it is "better". What I have claimed is that it "does more" (than <whatever>). And, because it does more (because I EXPECT it to), any perceived advantages of a flat address space are just down in the "noise floor". They don't factor into the implementation decisions. By the time I "bolted on" these features OUTSIDE your OS onto your implementation, I'd have a BIGGER solution to the same problem!

["Access this memory address directly -- unless the object you want has been moved to another node. In which case, access this OTHER address to figure out where it's gone to; then access yet another address to actually do what you initially set out to do, had the object remained 'local'"]

This sums up our differences:
> Why would you want to protect regions you don't want protected?
Why WOULDN'T you want to protect EVERYTHING?? Sharing should be an exception. It should be more expensive to share than NOT to share. You don't want things commingling unless they absolutely MUST. And, the more such interaction, the more you should look at the parties involved to see if refactoring may be warranted.

"Opaque" is the operative word. The more you expose, the more interdependencies you create.
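(To make the increment-but-not-decrement counter idea from earlier in this post concrete, here is a minimal sketch in C. All names are hypothetical -- this is not the actual API -- and in the real system the check would run in a separate protection domain, the object server, reached via IPC; the sketch only shows the bookkeeping. The point: a handle names the object *and* the subset of methods granted to that particular client, and every invocation is checked against it.)

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical method identifiers for a "counter" object. */
    enum method { M_READ = 1u << 0, M_INCREMENT = 1u << 1,
                  M_DECREMENT = 1u << 2, M_RESET = 1u << 3 };

    struct counter { int32_t value; };

    /* A handle carries the object reference plus the methods this
       client was granted when the handle was issued. */
    struct handle { struct counter *obj; uint32_t allowed; };

    /* All access is indirect: the server checks the grant mask before
       dispatching, so an ungranted method fails no matter what the
       client attempts. */
    int invoke(struct handle *h, enum method m, int32_t *out)
    {
        if ((h->allowed & m) == 0)
            return -1;                    /* not granted: log/sever/abort */
        switch (m) {
        case M_READ:      *out = h->obj->value; break;
        case M_INCREMENT: h->obj->value++;      break;
        case M_DECREMENT: h->obj->value--;      break;
        case M_RESET:     h->obj->value = 0;    break;
        }
        return 0;
    }

    int main(void)
    {
        struct counter c = { 0 };
        struct handle a = { &c, M_INCREMENT };        /* client A: count only */
        struct handle b = { &c, M_READ | M_RESET };   /* client B: read/reset */
        int32_t v;
        invoke(&a, M_INCREMENT, NULL);                            /* ok  */
        printf("A decrement: %d\n", invoke(&a, M_DECREMENT, NULL)); /* -1 */
        invoke(&b, M_READ, &v);
        printf("B reads %d\n", (int)v);
        return 0;
    }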
On 6/11/2021 7:55, Don Y wrote:
> ... > As I tease more of your design out of you, it becomes apparent why > you "need" a flat address space. You push much of the responsibility > for managing the environment into the developer's hands. *He* decides > which regions of memory to share. He talks to the MMU (even if through > an API). He directly retrieves values from other tasks. Etc.
It is not true that the developer is in control of all that. Messaging from one task to another goes through a system call.

Anyway, I am not interested in discussing dps here/now. The *only* thing I would like you to answer me is why you think a linear 64 bit address space can add vulnerability to a design.

Dimiter
On 6/11/2021 4:14 AM, Dimiter_Popoff wrote:
> On 6/11/2021 7:55, Don Y wrote: >> ... >> As I tease more of your design out of you, it becomes apparent why >> you "need" a flat address space. You push much of the responsibility >> for managing the environment into the developer's hands. *He* decides >> which regions of memory to share. He talks to the MMU (even if through >> an API). He directly retrieves values from other tasks. Etc. > > It is not true that the developer is in control of all that. Messaging > from one task to another goes through a system call.
But the client directly retrieves the values. The OS doesn't provide them (at least, that's what you said previously)
> Anyway, I am not interested in discussing dps here/now. > > The *only* thing I would like you to answer me is why you think > a linear 64 bit address space can add vulnerability to a design.
Please tell me where I said it -- in and of itself -- makes a design vulnerable?

HOW any aspect of an MCU is *used* is the cause of vulnerability; to internal bugs, external threats, etc. The more stuff that's exposed, the more places fault can creep into a design. It's why we litter code with invariants, check for the validity of input parameters, etc. Every interface is a potential for a fault; and an *opportunity* to bolster your confidence in the design (by verifying the interfaces are being used correctly!)

[Do you think all of these ransomware attacks we hear of are the result of developers being INCREDIBLY stupid? Or, just "not paranoid enough"??]

Turning off an MMU (when you have one available) is obviously putting you in a more "exposed" position than correctly *using* it (all else being equal). Unless, of course, you don't have the skills to use it properly.

There are FireWire implementations that actually let the external peripheral DMA directly into the host's memory. Any fault in the implementation *inside* the host obviously exposes the internals of the system to an external agent. Can you be 100.0% sure that the device you're plugging in (likely sold with your type of computer in mind and, thus, aware of what's where, inside!) is benign?

<https://en.wikipedia.org/wiki/DMA_attack>

Is there anything *inherently* wrong with DMA? Or FireWire? No. Do they create the potential for a VULNERABILITY in a system? Yes. The vulnerability is a result of how they are *used*.

My protecting-everything-from-everything-else is intended to eliminate unanticipated attack vectors before a hostile actor (third party software or external agent) can discover an exploit. Or, before a latent bug can compromise the proper operation of the system. It's why I *don't* have any global namespaces (if you can't NAME something, then you can't ACCESS it -- even if you KNOW it exists, somewhere; controlling the names you can see controls the things you can access).

It's why I require you to have a valid "Handle" to every object with which you want to interact; if you don't have a handle to the object, then you can't talk to it. You can't consume its resources or try to exploit vulnerabilities that may be present. Or, just plain ask it (mistakenly) to do something incorrect!

It's why I don't let you invoke EVERY method on a particular object, even if you have a valid handle! Because you don't need to be ABLE to do something that you don't NEED to do! Attempting to do so is indicative of either a bug (because you didn't declare a need to access that method when you were installed!) or an attempted exploit. In either case, there is no value to letting you continue with a simple error message.

<https://en.wikipedia.org/wiki/Principle_of_least_privilege>

It's why each object can decide to *sever* your "legitimate" connection to any of its Interfaces if it doesn't like what you are doing or asking it to do. "Too bad, so sad. Take it up with Management! And, no, we won't be letting you get restarted cuz we know there's something unhealthy about you!"

It's why access controls are applied on the *client* side of a transaction instead of requiring the server/object to make that determination (like some other capability-based systems). Because any server-side activities consume the server's resources, even if it will ultimately deny your request (move the denial into YOUR resources).

It's why I enforce quotas on the resources you can consume -- or have others consume for your *benefit* -- so an application's (task) "load" on the system can be constrained.

If you want to put staff in place to vet each third party application before "offering it in your store", then you have to assume that overhead -- and HOPE you catch any malevolent/buggy actors before folks install those apps. I think that's the wrong approach as it requires a sizeable effort to test/validate any submitted application "thoroughly" (you end up doing the developer's work FOR him!)

Note that bugs also exist, even in the absence of "bad intent". Should they be allowed to bring down your product/system? Or, should their problems be constrained to THEIR demise??

[I'm assuming your MCA has the ability to "print" hardcopy of <whatever>. Would it be acceptable if a bug in your print service brought down the instrument? This *session*? Silently corrupted the data that it was asked to print?]

ANYTHING (and EVERYTHING) that I can do to make my system more robust is worth the effort. Hardware is cheap (relatively speaking). Debugging time is VERY costly. And, "user inconvenience/aggravation" is *outrageously* expensive! I let the OS "emulate" features that I wished existed in the silicon -- because, there, they would likely be less expensive to utilize (time, resources).

This is especially true in my alpha site application. Imagine being blind, deaf, wheelchair confined, paralyzed/amputee, early onset Alzheimer's, or "just plain old", etc. and having to deal with something that is misbehaving ALL AROUND YOU (because it pervades your home environment). It was intended to *facilitate* your continued presence in YOUR home, delaying your transfer to an a$$i$ted care facility. Now, it's making life "very difficult"!

"Average Joes" get pissed off when their PC misbehaves.
  Imagine your garage door opening in the middle of the night.
  Or, the stereo turns on -- loud -- while you're on the phone.
  Or, the phone hangs up mid conversation.
  Or, the wrong audio stream accompanies a movie you're viewing.
  Or, a visitor is announced at the front door, but no one is there!
  Or, the coffee maker turned on too early and your morning coffee is mud.
  Or, the heat turns on midafternoon on a summer day.
  Or, the garage door closes on your vehicle as you are exiting.
  Or, your bedside alarm goes off at 3AM.
How long will you wait for "repair" in that sort of environment? When are you overwhelmed by the technology (that is supposed to be INVISIBLE) coupled with your current condition -- and just throw in the towel?

YOU can sell a spare MCA to a customer who wants to minimize his downtime "at any cost". Should I make "spare houses" available? Maybe deeply discounted?? :<

What about spare factories??
On 6/11/2021 15:10, Don Y wrote:
> On 6/11/2021 4:14 AM, Dimiter_Popoff wrote: >> On 6/11/2021 7:55, Don Y wrote: >>> ... >>> As I tease more of your design out of you, it becomes apparent why >>> you "need" a flat address space. You push much of the responsibility >>> for managing the environment into the developer's hands. *He* decides >>> which regions of memory to share. He talks to the MMU (even if through >>> an API). He directly retrieves values from other tasks. Etc. >> >> It is not true that the developer is in control of all that. Messaging >> from one task to another goes through a system call. > > But the client directly retrieves the values. The OS doesn't provide > them (at least, that's what you said previously)
I am not sure what this means. The recipient task has advertised a field where messages can be queued, the sending task makes a system call designating the message and which task is to receive it; during that call execution the message is written into the memory of the recipient. Then at some point later the recipient can see that and process the message. What more do you need?
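(A rough sketch of the flow being described, in plain C and purely illustrative -- these are not dps's actual calls or structures, and signd7$ certainly does not look like this. The recipient advertises a message area; only the kernel-side call writes into it on the sender's behalf, and the sender never touches the recipient's pages directly.)

    #include <stddef.h>
    #include <string.h>

    #define MSG_MAX   64
    #define QUEUE_LEN 8

    struct msg       { size_t len; unsigned char body[MSG_MAX]; };
    struct msg_queue { volatile unsigned head, tail; struct msg slot[QUEUE_LEN]; };

    /* Hypothetical system call: validate the recipient, check that it
       accepts messages, copy the message into *its* memory, or fail
       cleanly.  In a real OS this runs in supervisor mode. */
    int sys_send_message(struct msg_queue *rcpt_q, const void *data, size_t len)
    {
        if (rcpt_q == NULL)
            return -1;                     /* task does not support messages */
        if (len == 0 || len > MSG_MAX)
            return -2;                     /* illegal message */
        unsigned next = (rcpt_q->head + 1) % QUEUE_LEN;
        if (next == rcpt_q->tail)
            return -3;                     /* queue full */
        memcpy(rcpt_q->slot[rcpt_q->head].body, data, len);
        rcpt_q->slot[rcpt_q->head].len = len;
        rcpt_q->head = next;               /* recipient later sees head != tail */
        return 0;
    }

    int main(void)
    {
        static struct msg_queue q_for_B;   /* stands in for task B's advertised area */
        const char note[] = "hello from task A";
        return sys_send_message(&q_for_B, note, sizeof note) == 0 ? 0 : 1;
    }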
> >> Anyway, I am not interested in discussing dps here/now. >> >> The *only* thing I would like you to answer me is why you think >> a linear 64 bit address space can add vulnerability to a design. > > Please tell me where I said it -- in and of itself -- makes a > design vulnerable?
This is how the exchange started:
>>>> Dimiter_Popoff wrote: >>>> >>>> >>>> The real value in 64 bit integer registers and 64 bit address space is >>>> just that, having an orthogonal "endless" space (well I remember some >>>> 30 years ago 32 bits seemed sort of "endless" to me...). >>>> >>>> Not needing to assign overlapping logical addresses to anything >>>> can make a big difference to how the OS is done. >>> >>> That depends on what you expect from the OS. If you are >>> comfortable with the possibility of bugs propagating between >>> different subsystems, then you can live with a logical address >>> space that exactly coincides with a physical address space. >> >> So how does the linear 64 bt address space get in the way of >> any protection you want to implement? Pages are still 4 k and >> each has its own protection attributes governed by the OS, >> it is like that with 32 bit processors as well (I talk power, I am >> not interested in half baked stuff like ARM, risc-v etc., I don't >> know if there could be a problem like that with one of these). > > With a linear address space, you typically have to link EVERYTHING > as a single image to place each thing in its own piece of memory > (or use segment based addressing).
Now if you have missed the "logical" word in my post I can understand why you went into all that. But I was quite explicit about it. Anyway, I am glad we agree that a 64 bit logical address space is no obstacle to security. From there on it can only be something to make programming life easier. Dimiter
On 6/11/2021 6:35 AM, Dimiter_Popoff wrote:
> On 6/11/2021 15:10, Don Y wrote: >> On 6/11/2021 4:14 AM, Dimiter_Popoff wrote: >>> On 6/11/2021 7:55, Don Y wrote: >>>> ... >>>> As I tease more of your design out of you, it becomes apparent why >>>> you "need" a flat address space. You push much of the responsibility >>>> for managing the environment into the developer's hands. *He* decides >>>> which regions of memory to share. He talks to the MMU (even if through >>>> an API). He directly retrieves values from other tasks. Etc. >>> >>> It is not true that the developer is in control of all that. Messaging >>> from one task to another goes through a system call. >> >> But the client directly retrieves the values. The OS doesn't provide >> them (at least, that's what you said previously) > > I am not sure what this means. The recipient task has advertised a field > where messages can be queued, the sending task makes a system call > designating the message and which task is to receive it; during that > call execution the message is written into the memory of the recipient. > Then at some point later the recipient can see that and process the > message. What more do you need? > >> >>> Anyway, I am not interested in discussing dps here/now. >>> >>> The *only* thing I would like you to answer me is why you think >>> a linear 64 bit address space can add vulnerability to a design. >> >> Please tell me where I said it -- in and of itself -- makes a >> design vulnerable? > > This is how the exchange started: > > >>>>> Dimiter_Popoff wrote: >>>>> >>>>> >>>>> The real value in 64 bit integer registers and 64 bit address space is >>>>> just that, having an orthogonal "endless" space (well I remember some >>>>> 30 years ago 32 bits seemed sort of "endless" to me...). >>>>> >>>>> Not needing to assign overlapping logical addresses to anything >>>>> can make a big difference to how the OS is done. >>>> >>>> That depends on what you expect from the OS. If you are >>>> comfortable with the possibility of bugs propagating between >>>> different subsystems, then you can live with a logical address >>>> space that exactly coincides with a physical address space. >>> >>> So how does the linear 64 bt address space get in the way of >>> any protection you want to implement? Pages are still 4 k and >>> each has its own protection attributes governed by the OS, >>> it is like that with 32 bit processors as well (I talk power, I am >>> not interested in half baked stuff like ARM, risc-v etc., I don't >>> know if there could be a problem like that with one of these). >> >> With a linear address space, you typically have to link EVERYTHING >> as a single image to place each thing in its own piece of memory >> (or use segment based addressing). > > Now if you have missed the "logical" word in my post I can > understand why you went into all that. But I was quite explicit > about it.
It's easier to get some part of a flat address space *wrong*. And, as you've exposed (even if hiding behind an MMU) everything, that presents an opportunity for SOMETHING to leak -- that shouldn't.

Alpha (OS) took this to an extreme. Each object had its own address space fitted neatly onto some number of (contiguous?) pages. When you invoked a method on an object, you trapped to the OS. The OS marshalled your arguments into an "input page(s)" and created an empty "output page(s)". It then built an address space consisting of the input and output pages (at logical addresses that an object recognized, by convention) AND the page(s) for the object's code. NOTHING ELSE. Then, it transferred control to one of N entry points in the first page of the object's implementation (another convention).

So, the object's code had free access to its inputs, its own code and its outputs. Attempting to reference anything else would signal a protection violation. There *is* nothing else! A bug, errant pointer, exploit, etc. would just land on unmapped memory!

[Note that an object could invoke ANOTHER object -- but, that object would then be built up in yet another address space while the current object's address space was idled]

This approach makes it hard for "active" objects to be made persistent (e.g., a *process* that is "doing something" yet has an object interface) so objects tend to want to be passive.

I don't go to these extremes. But close! An instance of a "foo" object is served by (an instance of) a foo_object_server. That server can serve multiple foo objects concurrently (and can even use multiple threads to do so). The object_server accepts the messages for the object and invokes the corresponding method in the context of that particular object. Because it exists in an isolated address space, no other objects/clients can examine the object's implementation -- the code executed by the object_server to handle each request/method as well as the "private data" associated with an object instance. Nor can they interfere with any of this.

Another instance of a foo_object_server can serve other foo objects -- on the same node or on some other node.

[i.e., to migrate a foo object to another node, instantiate a foo_object_server on the target node -- if one doesn't already exist there -- and then transfer the internal representation of the desired foo object to that other server. And, arrange for all "Interfaces" to that particular object to simultaneously be transferred (so future connections to the object are unaware of the migration)]

As my *_object_servers can be persistent, an object can be active -- can continue doing something even after (or before!) every request (method invocation) has been serviced. It's up to the object designer to decide the semantics of each method. E.g., should garagedoor.open() start the opening process and wait until it is complete before returning to the caller? Or, should it just start the process and return immediately? Should rosebush.irrigate() block the caller for the hour or two it takes to irrigate?
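(A bare-bones illustration of the dispatch idea in C -- hypothetical names, nothing here is the actual implementation. The server owns the objects' private representation, receives requests naming an object and a method, and executes the method itself in the context of that instance; clients never touch the representation. In the real system the requests would arrive as marshalled messages from other address spaces.)

    #include <stdio.h>

    #define MAX_OBJECTS 4

    /* Private representation, visible only inside the server's address space. */
    struct foo { int state; };

    struct request { unsigned obj_id; unsigned method; int arg; };
    struct reply   { int status; int result; };

    static struct foo objects[MAX_OBJECTS];

    /* One request = one method invocation on the named object instance. */
    static void dispatch(const struct request *rq, struct reply *rp)
    {
        rp->result = 0;
        if (rq->obj_id >= MAX_OBJECTS) { rp->status = -1; return; }
        struct foo *self = &objects[rq->obj_id];
        switch (rq->method) {
        case 0:  rp->result = self->state;  rp->status = 0; break;  /* get     */
        case 1:  self->state = rq->arg;     rp->status = 0; break;  /* set     */
        case 2:  self->state += rq->arg;    rp->status = 0; break;  /* add     */
        default: rp->status = -1;           break;                  /* unknown */
        }
    }

    int main(void)
    {
        /* Fake two incoming requests against object 0. */
        struct request r1 = { 0, 1, 7 }, r2 = { 0, 0, 0 };
        struct reply   p1, p2;
        dispatch(&r1, &p1);
        dispatch(&r2, &p2);
        printf("foo[0] = %d\n", p2.result);   /* 7 */
        return 0;
    }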
> Anyway, I am glad we agree that a 64 bit logical address space > is no obstacle to security. From there on it can only be something > to make programming life easier.
It's not an obstacle. But, it's an *opportunity* for a bug to manifest or information to leak or an exploit to gain a beachhead. The less stuff that can be exposed (even accidentally), the less opportunity for these problems to manifest.

Think of all the "account compromises" that have occurred. All of the "personal/account information" that has leaked. Why was that information accessible? I'm sure it wasn't SUPPOSED to be accessible. But, why was it even resident ANYWHERE on an outward-facing machine?

If I make a purchase, I provide a CC number, name ("as it appears on the card") and the CVC number off the back. To validate my card, it would be foolish to:

    select name, cvc from credit_cards where card_number = CC_provided
    if name != name_provided or cvc != cvc_provided then reject_transaction

While this, by itself, is correct and "secure", the approach requires the credit_cards table to contain ALL of the data for every credit card. AND, it has the potential to allow an adversary to trick the software into revealing all or part of it! If, OTOH, the implementation was:

    if !verify(CC_provided, name_provided, cvc_provided) then reject_transaction

all of the details can be hidden behind verify(), which can execute on a more secure processor with a more secure protocol. E.g., compute a hash of these three values and ask the DBMS if the hash is correct, without ever having the raw data stored! An adversary could try to *guess* a valid combination of name/cc/cvc -- but he would have to repeatedly issue verify() requests -- which can then attract suspicion.

[This is why I allow an object to sever an incoming connection if *it* detects behaviours that it considers harmful or suspicious; the OS has no way of making that determination!]

Making MORE things inaccessible (i.e., everything that doesn't NEED to be accessible -- like Alpha's approach) improves reliability and security. Because REAL implementations always have oversights and bugs -- things that you hadn't considered or for which you hadn't tested.

Again, your application and application domain are likely far more benign than mine. The people operating your devices are likely more technically capable (which MIGHT lead to better process compliance). How likely would one of your instruments find itself targeted by an attacker? And, your budget is likely tighter than mine (in terms of money, resources and performance). I can *afford* a more featureful OS to offload more of the work/vulnerability from developers. Doing so lets more developers tinker in my environment. And, it makes the resulting design (which evolves as each developer adds to the system) more robust against attack, fault, compromise.

[An adversary is more likely to target one of my systems than yours: "Deposit $1,000 in this bitcoin account to regain control over your home/factory/business..."]
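(A sketch of the verify() idea in C. The digest and the back end here are placeholders -- a real system would use a keyed cryptographic hash/HMAC and a hardened store, not FNV-1a and a static table -- but it shows the shape: the front end never holds the card table, it only asks whether a digest of the presented triple is recognized.)

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Placeholder digest (FNV-1a) standing in for a keyed hash computed
       over (card number, name, CVC).  Do not use this for real credentials. */
    static uint64_t digest(const char *cc, const char *name, const char *cvc)
    {
        uint64_t h = 1469598103934665603ULL;
        const char *parts[3] = { cc, name, cvc };
        for (int i = 0; i < 3; i++)
            for (const char *p = parts[i]; *p; p++) {
                h ^= (uint8_t)*p;
                h *= 1099511628211ULL;
            }
        return h;
    }

    /* Stand-in for the more secure back end: "is this digest one you
       recognize?"  In reality this runs on another processor against a
       provisioned table; the raw card data never has to be stored here. */
    static bool backend_knows_digest(uint64_t d)
    {
        static const uint64_t provisioned[] = { 0x243f6a8885a308d3ULL };
        for (size_t i = 0; i < sizeof provisioned / sizeof provisioned[0]; i++)
            if (provisioned[i] == d)
                return true;
        return false;
    }

    static bool verify(const char *cc, const char *name, const char *cvc)
    {
        return backend_knows_digest(digest(cc, name, cvc));
    }

    int main(void)
    {
        if (!verify("4111111111111111", "J DOE", "123"))
            puts("reject_transaction");    /* front end never saw the table */
        return 0;
    }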
On Wed, 9 Jun 2021 03:12:12 -0700, Don Y <blockedofcourse@foo.invalid>
wrote:

>On 6/9/2021 12:17 AM, David Brown wrote: > >> Process geometries are not targeted at 64-bit. They are targeted at >> smaller, faster and lower dynamic power. In order to produce such a big >> design as a 64-bit cpu, you'll aim for a minimum level of process >> sophistication - but that same process can be used for twice as many >> 32-bit cores, or bigger sram, or graphics accelerators, or whatever else >> suits the needs of the device. > >They will apply newer process geometries to newer devices. >No one is going to retool an existing design -- unless doing >so will result in a significant market enhancement. > >Why don't we have 100MHz MC6800's?
A number of years ago somebody had a 200MHz 6502. Granted, it was a soft core implemented in an ASIC. No idea what it was used for.
>> But you are absolutely right about maths (floating point or integer) - >> having 32-bit gives you a lot more freedom and less messing around with >> scaling back and forth to make things fit and work efficiently in 8-bit >> or 16-bit. And if you have floating point hardware (and know how to use >> it properly), that opens up new possibilities. >> >> 64-bit cores will extend that, but the step is almost negligable in >> comparison. It would be wrong to say "int32_t is enough for anyone", >> but it is /almost/ true. It is certainly true enough that it is not a >> problem that using "int64_t" takes two instructions instead of one. > >Except that int64_t can take *four* instead of one (add/sub/mul two >int64_t's with 32b hardware).
A 32b CPU could require a dozen instructions to do 64b math depending on whether it has condition flags, whether math ops set the condition flags (vs requiring explicit compare or compare/branch), and whether it even has carry-aware ops [some chips don't].

If detecting wrap-around/overflow requires comparing the result against the operands, multi-word arithmetic (even just 2 words) quickly becomes long and messy.

YMMV,
George
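(George's point, written out in portable C: without an accessible carry flag, even a 64-bit add on 32-bit words has to recover the carry by comparing the result against an operand. A minimal sketch:)

    #include <stdint.h>
    #include <stdio.h>

    /* A 64-bit value held as two 32-bit words, as a 32-bit-only CPU would. */
    struct u64w { uint32_t hi, lo; };

    /* Carry recovered by comparison: if the low sum wrapped around,
       it is smaller than either operand. */
    static struct u64w add64(struct u64w a, struct u64w b, int *overflow)
    {
        struct u64w r;
        r.lo = a.lo + b.lo;
        uint32_t carry = (r.lo < a.lo);    /* wrap-around test           */
        r.hi = a.hi + b.hi;
        int ovf = (r.hi < a.hi);           /* carry out of the high word */
        r.hi += carry;
        ovf |= (carry && r.hi == 0);       /* ...or out of the carry add */
        *overflow = ovf;
        return r;
    }

    int main(void)
    {
        struct u64w a = { 0x00000000u, 0xFFFFFFFFu };
        struct u64w b = { 0x00000000u, 0x00000001u };
        int ovf;
        struct u64w r = add64(a, b, &ovf);
        printf("%08lX%08lX ovf=%d\n", (unsigned long)r.hi,
               (unsigned long)r.lo, ovf);  /* 0000000100000000 ovf=0 */
        return 0;
    }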
Hi George,

On 6/12/2021 9:58 AM, George Neuner wrote:
> On Wed, 9 Jun 2021 03:12:12 -0700, Don Y <blockedofcourse@foo.invalid> > wrote: > >> On 6/9/2021 12:17 AM, David Brown wrote: >> >>> Process geometries are not targeted at 64-bit. They are targeted at >>> smaller, faster and lower dynamic power. In order to produce such a big >>> design as a 64-bit cpu, you'll aim for a minimum level of process >>> sophistication - but that same process can be used for twice as many >>> 32-bit cores, or bigger sram, or graphics accelerators, or whatever else >>> suits the needs of the device. >> >> They will apply newer process geometries to newer devices. >> No one is going to retool an existing design -- unless doing >> so will result in a significant market enhancement. >> >> Why don't we have 100MHz MC6800's? > > A number of years ago somebody had a 200MHz 6502. Granted, it was a > soft core implemented in an ASIC. > > No idea what it was used for.
AFAICT, the military still uses them. I know there was a radhard 8080 (or 8085?) made some years back. I suspect it would just be a curiosity piece, though. You'd need < 10ns memory to use it in its original implementation. Easier to write an emulator and run it on a faster COTS machine!
>>> But you are absolutely right about maths (floating point or integer) - >>> having 32-bit gives you a lot more freedom and less messing around with >>> scaling back and forth to make things fit and work efficiently in 8-bit >>> or 16-bit. And if you have floating point hardware (and know how to use >>> it properly), that opens up new possibilities. >>> >>> 64-bit cores will extend that, but the step is almost negligable in >>> comparison. It would be wrong to say "int32_t is enough for anyone", >>> but it is /almost/ true. It is certainly true enough that it is not a >>> problem that using "int64_t" takes two instructions instead of one. >> >> Except that int64_t can take *four* instead of one (add/sub/mul two >> int64_t's with 32b hardware). > > A 32b CPU could require a dozen instructions to do 64b math depending > on whether it has condition flags, whether math ops set the condition > flags (vs requiring explicit compare or compare/branch), and whether > it even has carry aware ops [some chips don't] > > If detecting wrap-around/overflow requires comparing the result > against the operands, multi-word arithmetic (even just 2 words) > quickly becomes long and messy.
If you look back to life with 8b registers, you understand the pain of even 32b operations. Wider architectures make data manipulation easier. Bigger *address* spaces (wider address buses) make it easier to "do more". So, an 8b CPU with extended address space (bank switching, etc.) can tackle a bigger (more varied) problem (at a slow rate). But a wider CPU with a much smaller address space can handle a smaller (in scope) problem at a much faster rate (all else being equal -- memory speed, etc.).

When doing video games (a price-sensitive market), this was a common discussion: do you move to a wider processor to gain performance, or do you move to a faster one? (Where you put the money changes.)
On Tue, 8 Jun 2021 18:29:24 -0700, Don Y <blockedofcourse@foo.invalid>
wrote:

>On 6/8/2021 3:01 PM, Dimiter_Popoff wrote: > >>> Am trying to puzzle out what a 64-bit embedded processor should look like. >>> At the low end, yeah, a simple RISC processor. And support for complex >>> arithmetic >>> using 32-bit floats? And support for pixel alpha blending using quad 16-bit >>> numbers? >>> 32-bit pointers into the software? >> >> The real value in 64 bit integer registers and 64 bit address space is >> just that, having an orthogonal "endless" space (well I remember some >> 30 years ago 32 bits seemed sort of "endless" to me...). >> >> Not needing to assign overlapping logical addresses to anything >> can make a big difference to how the OS is done. > >That depends on what you expect from the OS. If you are >comfortable with the possibility of bugs propagating between >different subsystems, then you can live with a logical address >space that exactly coincides with a physical address space.
Propagation of bugs is mostly independent of the logical address space. In actual fact, existing SAS (single address space) operating systems are MORE resistant to problems than MPAS (multiple private address space) systems.
>But, consider how life was before Windows used compartmentalized >applications (and OS). How easily it is for one "application" >(or subsystem) to cause a reboot -- unceremoniously.
You can kill a Windows system with 2 lines of code:

    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
    while( 1 );
>The general direction (in software development, and, by >association, hardware) seems to be to move away from unrestrained >access to the underlying hardware in an attempt to limit the >amount of damage that a "misbehaving" application can cause. > >You see this in languages designed to eliminate dereferencing >pointers, pointer arithmetic, etc. Languages that claim to >ensure your code can't misbehave because it can only do >exactly what the language allows (no more injecting ASM >into your HLL code).
Managed runtimes. Pointers, per se, are not a problem. However, explicit pointer arithmetic /is/ known to be the cause of many program bugs. IMO high level languages should not allow it - there's nothing (source-level) pointer arithmetic can do that can't be done more safely using array indexing. The vast majority of programmers should just let the compiler deal with it.
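(The two forms are literally interchangeable in C -- p[i] is defined as *(p + i) -- which is why the argument is about what the programmer writes and the reader/compiler can reason about, not about what the machine can do. A trivial illustration:)

    #include <stdio.h>

    /* Same loop twice: explicit pointer arithmetic vs. array indexing.
       A compiler generates equivalent code for both. */
    static int sum_ptr(const int *p, int n)
    {
        int s = 0;
        for (const int *end = p + n; p != end; p++)
            s += *p;
        return s;
    }

    static int sum_idx(const int a[], int n)
    {
        int s = 0;
        for (int i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    int main(void)
    {
        int v[] = { 1, 2, 3, 4 };
        printf("%d %d\n", sum_ptr(v, 4), sum_idx(v, 4));  /* 10 10 */
        return 0;
    }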
> >> 32 bit FPU seems useless to me, 64 bit is OK. Although 32 FP >> *numbers* can be quite useful for storing/passing data. > > 32 bit numbers have appeal if your registers are 32b; > they "fit nicely". Ditto 64b in 64b registers.
Depends on the problem domain. If you don't need the extra precision, calculations with 32b floats are often twice (or more) as fast as with 64b doubles. Particularly with SIMD, you gain both by 32b calculations taking fewer cycles than 64b, and by being able to perform twice as many simultaneous calculations.

YMMV,
George
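(For example, with 128-bit SIMD registers -- SSE/NEON-class hardware -- a vectorizing compiler can process four floats per operation but only two doubles, so a kernel like the one below roughly doubles its throughput just by choosing float when the precision suffices. Illustrative C; actual speedups depend on the compiler and on memory bandwidth.)

    #include <stddef.h>
    #include <stdio.h>

    /* The same kernel in both precisions.  With auto-vectorization a
       128-bit lane holds 4 floats but only 2 doubles. */
    static void scale_f32(float *y, const float *x, float k, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = k * x[i];
    }

    static void scale_f64(double *y, const double *x, double k, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = k * x[i];
    }

    int main(void)
    {
        float  xf[4] = { 1, 2, 3, 4 }, yf[4];
        double xd[4] = { 1, 2, 3, 4 }, yd[4];
        scale_f32(yf, xf, 0.5f, 4);
        scale_f64(yd, xd, 0.5,  4);
        printf("%g %g\n", (double)yf[3], yd[3]);  /* 2 2 */
        return 0;
    }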
