
64-bit embedded computing is here and now

Started by James Brakefield June 7, 2021
On 09/06/2021 02:30, Don Y wrote:
> On 6/8/2021 4:04 AM, David Brown wrote:
>> On 08/06/2021 09:39, Don Y wrote:
>>> On 6/7/2021 10:59 PM, David Brown wrote:
>>>> 8-bit microcontrollers are still far more common than 32-bit devices in
>>>> the embedded world (and 4-bit devices are not gone yet).  At the other
>>>> end, 64-bit devices have been used for a decade or two in some kinds of
>>>> embedded systems.
>>>
>>> I contend that a good many "32b" implementations are really glorified
>>> 8/16b applications that exhausted their memory space.
>>
>> Sure.  Previously you might have used 32 kB flash on an 8-bit device,
>> now you can use 64 kB flash on a 32-bit device.  The point is, you are
>> /not/ going to find yourself hitting GB limits any time soon.  The step
>
> I don't see the "problem" with 32b devices as one of address space limits
> (except devices utilizing VMM with insanely large page sizes).  As I said,
> in my application, task address spaces are really just a handful of pages.
A 32-bit address space is not typically a problem or limitation. (One other use of 64-bit address space is for debug tools like valgrind or "sanitizers", which use large address spaces along with MMU protection and specialised memory allocation to help catch memory errors. But these also need sophisticated MMUs and a lot of other resources not often found on small embedded systems.)
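As a concrete illustration of that class of tool: AddressSanitizer keeps "shadow" bookkeeping for every allocation, which is cheap to arrange in a huge 64-bit address space. A minimal sketch (file name and buffer size invented); the -fsanitize=address flag is the standard one in gcc and clang:

    /* build with:  gcc -fsanitize=address -g overflow.c */
    #include <stdlib.h>

    int main(void)
    {
        char *buf = malloc(16);
        buf[16] = 'x';   /* one byte past the end; ASan aborts at run
                            time with a report pointing at this line */
        free(buf);
        return 0;
    }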
> I *do* see (flat) address spaces that find themselves filling up with
> stack-and-heap-per-task, big chunks set aside for "onboard" I/Os,
> *partial* address decoding for offboard I/Os, etc.  (i.e., you're
> not likely going to fully decode a single address to access a set
> of DIP switches as the decode logic is disproportionately high
> relative to the functionality it adds)
>
> How often do you see a high-order address line used for kernel/user?
> (gee, now your "user" space has been halved)
Unless you are talking about embedded Linux and particularly demanding (or inefficient!) tasks, halving your address space is not going to be a problem.
>> from 8-bit or 16-bit to 32-bit is useful to get a bit more out of the
>> system - the step from 32-bit to 64-bit is totally pointless for 99.99%
>> of embedded systems.  (Even for most embedded Linux systems, you usually
>> only have a 64-bit cpu because you want bigger and faster, not because
>> of memory limitations.  It is only when you have a big gui with fast
>> graphics that 32-bit address space becomes a limitation.)
>
> You're assuming there has to be some "capacity" value to the 64b move.
I'm trying to establish if there is any value at all in moving to 64-bit. And I have no doubt that for the /great/ majority of embedded systems, it would not. I don't even see it as having noticeable added value in the solid majority of embedded Linux systems produced. But in those systems, the cost is minor or irrelevant once you have a big enough processor.
> You might discover that the ultralow power devices (for phones!)
> are being offered in the process geometries targeted for the 64b
> devices.
Process geometries are not targeted at 64-bit. They are targeted at smaller, faster and lower dynamic power. In order to produce such a big design as a 64-bit cpu, you'll aim for a minimum level of process sophistication - but that same process can be used for twice as many 32-bit cores, or bigger sram, or graphics accelerators, or whatever else suits the needs of the device.

A major reason you see 64-bit cores in big SOC's is that the die space is primarily taken up by caches, graphics units, on-board ram, networking, interfaces, and everything else. Moving the cpu core from 32-bit to 64-bit only increases the die size by a few percent, and for some tasks it will also increase the performance of the code by a small but helpful amount. So it is not uncommon, even if you don't need the additional address space.

(The other major reason is that for some systems, you want to work with more than about 2 GB ram, and then life is much easier with 64-bit cores.)

On microcontrollers - say, a random Cortex-M4 or M7 device - changing to a 64-bit core will increase the die by maybe 30% and give roughly /zero/ performance increase. You don't use 64-bit unless you really need it.
> Or, that some integrated peripheral "makes sense" for
> phones (but not MCUs targeting motor control applications).  Or,
> that there are additional power management strategies supported
> in the hardware.
>
> In my mind, the distinction brought about by "32b" was more advanced
> memory protection/management -- even if not used in a particular
> application.  You simply didn't see these sorts of mechanisms
> in 8/16b offerings.  Likewise, floating point accelerators.  Working
> in smaller processors meant you had to spend extra effort to
> bullet-proof your code, economize on math operators, etc.
You need to write correct code regardless of the size of the device. I disagree entirely about memory protection being useful there. This is comp.arch.embedded, not comp.programs.windows (or whatever). An MPU might make it easier to catch and fix bugs while developing and testing, but code that hits MPU traps should not leave your workbench.

But you are absolutely right about maths (floating point or integer) - having 32-bit gives you a lot more freedom and less messing around with scaling back and forth to make things fit and work efficiently in 8-bit or 16-bit. And if you have floating point hardware (and know how to use it properly), that opens up new possibilities.

64-bit cores will extend that, but the step is almost negligible in comparison. It would be wrong to say "int32_t is enough for anyone", but it is /almost/ true. It is certainly true enough that it is not a problem that using "int64_t" takes two instructions instead of one.
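To put a number on "two instructions instead of one": a 64-bit addition compiled for a 32-bit ARM becomes an add/add-with-carry pair, while a 64-bit core does it in one. A minimal sketch (the function name is mine):

    #include <stdint.h>

    uint64_t add64(uint64_t a, uint64_t b)
    {
        /* On 32-bit Cortex-M: ADDS + ADC.  On AArch64: a single ADD. */
        return a + b;
    }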
>> Some parts of code and data /do/ double in size - but not uniformly, of
>> course.  But your chip is bigger, faster, requires more power, has wider
>> buses, needs more advanced memories, has more balls on the package,
>> requires finer pitched pcb layouts, etc.
>
> And has been targeted to a market that is EXTREMELY power sensitive
> (phones!).
A phone cpu takes orders of magnitude more power to do the kinds of tasks that might be typical for a microcontroller cpu - reading sensors, controlling outputs, handling UARTs, SPI and I²C buses, etc. Phone cpus are optimised for doing the "big phone stuff" efficiently - because that's what takes the time, and therefore the power.

(I'm snipping because there is far too much here - I have read your comments, but I'm trying to limit the ones I reply to.)
>> We will see that on devices that are, roughly speaking, tablets -
>> embedded systems with a good gui, a touchscreen, networking.  And that's
>> fine.  But these are a tiny proportion of the embedded devices made.
>
> Again, I disagree.
I assume you are disagreeing about seeing 64-bit cpus only on devices that need a lot of memory or processing power, rather than disagreeing that such devices are only a tiny proportion of embedded devices.
> You've already admitted to using 32b processors
> where 8b could suffice.  What makes you think you won't be using 64b
> processors when 32b could suffice?
As I have said, I think there will be an increase in the proportion of 64-bit embedded devices - but I think it will be very slow and gradual. Perhaps in 20 years time 64-bit will be in the place that 32-bit is now. But it won't happen for a long time.

Why do I use 32-bit microcontrollers where an 8-bit one could do the job? Well, we mentioned above that you can be freer with the maths. You can, in general, be freer in the code - and you can use better tools and languages. With ARM microcontrollers I can use the latest gcc and C++ standards - I don't have to program in a weird almost-C dialect using extensions to get data in flash, or pay thousands for a limited C++ compiler with last century's standards. I don't have to try and squeeze things into 8-bit scaled integers, or limit my use of pointers due to cpu limitations.

And manufacturers make the devices smaller, cheaper, lower power and faster than 8-bit devices in many cases.

If manufacturers made 64-bit devices that are smaller, cheaper and lower power than the 32-bit ones today, I'd use them. But they would not be better for the job, or better to work with and better for development in the way 32-bit devices are better than 8-bit and 16-bit.
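A small example of that "almost-C dialect": with avr-gcc, constant data must be explicitly placed in and fetched from flash via avr-libc's pgmspace extensions, whereas on a Cortex-M a plain const object lives in flash and is read like any other memory. Sketch only; the string and function name are invented:

    #include <avr/pgmspace.h>

    static const char msg[] PROGMEM = "hello";   /* placed in flash */

    char first_byte(void)
    {
        return pgm_read_byte(&msg[0]);   /* special accessor required */
    }

    /* On an ARM microcontroller the equivalent is simply:
     *     static const char msg[] = "hello";
     *     return msg[0];
     * with no dialect extensions needed. */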
> It's just as hard for me to prototype a 64b SoC as it is a 32b SoC.
> The boards are essentially the same size.  "System" power consumption
> is almost identical.  Cost is the sole differentiating factor, today.
For you, perhaps. Not necessarily for others.

We design, program and manufacture electronics. Production and testing of simpler cards is cheaper. The pcbs are cheaper. The chips are cheaper. The mounting is faster. The programming and testing is faster. You don't mix big, thick tracks and high power on the same board as tight-packed BGA with blind/buried vias - but you /can/ happily work with less dense packages on the same board.

If you are talking about replacing one 400-ball SOC with another 400-ball SOC with a 64-bit core instead of a 32-bit core, then it will make no difference in manufacturing. But if you are talking about replacing a Cortex-M4 microcontroller with a Cortex-A53 SOC, it /will/ be a lot more expensive in most volumes.

I can't really tell what kinds of designs you are discussing here. When I talk about embedded systems in general, I mean microcontrollers running specific programs - not general-purpose computers in embedded formats (such as phones).

(For very small volumes, the actual physical production costs are a small proportion of the price, and for very large volumes you have dedicated machines for the particular board.)
>>> Possibly.  Or, just someone that wanted to stir up discussion...
>>
>> Could be.  And there's no harm in that!
>
> On that, we agree.
>
> Time for ice cream (easiest -- and most enjoyable -- way to lose weight)!
I've not heard of that as a dieting method, but I shall give it a try :-)
On 09/06/2021 06:16, George Neuner wrote:
> On Tue, 8 Jun 2021 22:11:18 +0200, David Brown
> <david.brown@hesbynett.no> wrote:
>
>> Pretty much all processors except x86 and brain-dead old-fashioned 8-bit
>> CISC devices are RISC...
>
> It certainly is correct to say of the x86 that its legacy, programmer
> visible, instruction set is CISC ... but it is no longer correct to
> say that the chip design is CISC.
>
> Since (at least) the Pentium 4 x86 really are a CISC decoder bolted
> onto the front of what essentially is a load/store RISC.
Absolutely. But from the user viewpoint, it is the ISA that matters - it is a CISC ISA. The implementation details are mostly hidden (though sometimes it is useful to know about timings).
> "Complex" x86 instructions (in RAM and/or $I cache) are dynamically
> translated into equivalent short sequences[*] of RISC-like wide format
> instructions which are what actually is executed.  Those sequences
> also are stored into a special trace cache in case they will be used
> again soon - e.g., in a loop - so they (hopefully) will not have to be
> translated again.
>
> [*] Actually, a great many x86 instructions map 1:1 to internal RISC
> instructions - only a small percentage of complex x86 instructions
> require "emulation" via a sequence of RISC instructions.
And also, some sequences of several x86 instructions map to single RISC instructions, or to no instructions at all. It is, of course, a horrendously complex mess - and is a major reason for x86 cores taking more power and costing more than RISC cores for the same performance.
>> ... Not all [RISC] are simple.
>
> Correct.  Every successful RISC CPU has supported a suite of complex
> instructions.
Yes. People often parse RISC as R(IS)C - i.e., they think it means the ISA has a small instruction set. It should be parsed (RI)SC - the instructions are limited compared to those on a (CI)SC cpu.
> Of course, YMMV.
> George
On 08/06/2021 22:39, Dimiter_Popoff wrote:
> On 6/8/2021 23:18, David Brown wrote:
>> On 08/06/2021 16:46, Theo wrote:
>>> ......
>>
>>> Memory bus/cache width
>>
>> No, that is not a common way to measure cpu "width", for many reasons.
>> A chip is likely to have many buses outside the cpu core itself (and the
>> cache(s) may or may not be considered part of the core).  It's common to
>> have 64-bit wide buses on 32-bit processors, it's also common to have
>> 16-bit external databuses on a microcontroller.  And the cache might be
>> 128 bits wide.
>
> I agree with your points and those of Theo, but the cache is basically
> as wide as the registers?  Logically, that is; a cacheline is several
> times that, probably you refer to that.
> Not that it makes much of a difference to the fact that 64 bit data
> buses/registers in an MCU (apart from FPU registers, 32 bit FPUs are
> useless to me) are unlikely to attract much interest, nothing of
> significance to be gained as you said.
> To me 64 bit CPUs are of interest of course and thankfully there are
> some available, but this goes somewhat past what we call "embedded".
> Not long ago in a chat with a guy who knew some of ARM 64 bit I gathered
> there is some real mess with their out of order execution, one needs to
> do... hmmmm.. "sync", whatever they call it, all the time and there is
> a huge performance cost because of that.  Anybody heard anything about
> it?  (I only know what I was told).
Sync instructions of various types can be needed to handle thread/process synchronisation, atomic accesses, and coordination between software and hardware registers.

Software normally runs with the idea that it is the only thing running, and the cpu can re-order and re-arrange the instructions and execution as long as it maintains the illusion that the assembly instructions in the current thread are executed one after the other. These re-arrangements and parallel execution can give very large performance benefits. But it also means that when you need to coordinate with other things, you need syncs, perhaps cache flushes, etc. Full syncs can take hundreds of cycles to execute on large processors. So you need to distinguish between reads and writes, acquires and releases, syncs on single addresses or general memory syncs.

Big processors are optimised for throughput, not latency or quick reaction to hardware events. There are good reasons why big cpus are often paired with a Cortex-M core in SOCs.
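For flavour, here is roughly what "distinguishing reads and writes, acquires and releases" looks like from C11; the flag and data names are invented. On AArch64 the release store and acquire load compile to STLR/LDAR rather than full DMB barriers:

    #include <stdatomic.h>
    #include <stdint.h>

    static uint32_t   data;
    static atomic_int ready;

    void producer(void)
    {
        data = 42;                      /* plain store */
        atomic_store_explicit(&ready, 1,
                              memory_order_release);   /* STLR */
    }

    int consumer(void)
    {
        while (!atomic_load_explicit(&ready,
                                     memory_order_acquire))  /* LDAR */
            ;
        return data;   /* guaranteed to observe 42 */
    }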
On 6/9/2021 12:17 AM, David Brown wrote:

>>> from 8-bit or 16-bit to 32-bit is useful to get a bit more out of the
>>> system - the step from 32-bit to 64-bit is totally pointless for 99.99%
>>> of embedded systems.  (Even for most embedded Linux systems, you usually
>>> only have a 64-bit cpu because you want bigger and faster, not because
>>> of memory limitations.  It is only when you have a big gui with fast
>>> graphics that 32-bit address space becomes a limitation.)
>>
>> You're assuming there has to be some "capacity" value to the 64b move.
>
> I'm trying to establish if there is any value at all in moving to
> 64-bit.  And I have no doubt that for the /great/ majority of embedded
> systems, it would not.
That's a no-brainer -- most embedded systems are small MCUs. Consider: the PC I'm sitting at has an MCU in the keyboard; another in the mouse; one in the optical disk drive; one in the rust disk drive; one in the printer; two in the UPS; one in the wireless "modem"; one in the router; one in the thumb drive; etc. All offsetting the "big" CPU in the computer, itself.
> I don't even see it as having noticeable added value in the solid
> majority of embedded Linux systems produced.  But in those systems, the
> cost is minor or irrelevant once you have a big enough processor.
My point is that the market can distort the "price/value" relationship in ways that might not, otherwise, make sense. A "better" device may end up costing less than a "worse" device -- simply because of the volumes that the population of customers favor.
>> You might discover that the ultralow power devices (for phones!)
>> are being offered in the process geometries targeted for the 64b
>> devices.
>
> Process geometries are not targeted at 64-bit.  They are targeted at
> smaller, faster and lower dynamic power.  In order to produce such a big
> design as a 64-bit cpu, you'll aim for a minimum level of process
> sophistication - but that same process can be used for twice as many
> 32-bit cores, or bigger sram, or graphics accelerators, or whatever else
> suits the needs of the device.
They will apply newer process geometries to newer devices. No one is going to retool an existing design -- unless doing so will result in a significant market enhancement. Why don't we have 100MHz MC6800's?
> A major reason you see 64-bit cores in big SOC's is that the die space
> is primarily taken up by caches, graphics units, on-board ram,
> networking, interfaces, and everything else.  Moving the cpu core from
> 32-bit to 64-bit only increases the die size by a few percent, and for
> some tasks it will also increase the performance of the code by a
> small but helpful amount.  So it is not uncommon, even if you don't need
> the additional address space.
>
> (The other major reason is that for some systems, you want to work with
> more than about 2 GB ram, and then life is much easier with 64-bit cores.)
>
> On microcontrollers - say, a random Cortex-M4 or M7 device - changing to
> a 64-bit core will increase the die by maybe 30% and give roughly /zero/
> performance increase.  You don't use 64-bit unless you really need it.
Again, "... unless the market has made those devices cheaper than their previous choices".

People don't necessarily "fit" their applications to the devices they choose; they consider other factors (cost, package type, availability, etc.) in deciding what to actually design into the product. You might "need" X MB of RAM but will "tolerate" 4X -- if the price is better than for the X MB *or* the X MB devices are not available.

If the PCB layout can directly accommodate such a solution, then great! But, even if not, a PCB revision is a cheap expenditure if it lets you take advantage of a different component.

I've made very deliberate efforts NOT to use many of the "I/Os" on the MCUs that I'm designing around so I can have more leeway in making that selection when released to production (every capability used represents a constraint that OTHER selections must satisfy).
>> Or, that some integrated peripheral "makes sense" for
>> phones (but not MCUs targeting motor control applications).  Or,
>> that there are additional power management strategies supported
>> in the hardware.
>>
>> In my mind, the distinction brought about by "32b" was more advanced
>> memory protection/management -- even if not used in a particular
>> application.  You simply didn't see these sorts of mechanisms
>> in 8/16b offerings.  Likewise, floating point accelerators.  Working
>> in smaller processors meant you had to spend extra effort to
>> bullet-proof your code, economize on math operators, etc.
>
> You need to write correct code regardless of the size of the device.  I
> disagree entirely about memory protection being useful there.  This is
> comp.arch.embedded, not comp.programs.windows (or whatever).  An MPU
> might make it easier to catch and fix bugs while developing and testing,
> but code that hits MPU traps should not leave your workbench.
You're assuming you (or I) have control over all of the code that executes on a product/platform. And, that every potential bug manifests *in* testing. (If that were the case, we'd never see bugs in the wild!)

In my case, "third parties" (who the hell is the SECOND party??) can install code that I've no control over. That code could be buggy -- or malevolent. Being able to isolate "actors" from each other means the OS can detect "can't happens" at run time and shut down the offender -- instead of letting it corrupt some part of the system.
> But you are absolutely right about maths (floating point or integer) -
> having 32-bit gives you a lot more freedom and less messing around with
> scaling back and forth to make things fit and work efficiently in 8-bit
> or 16-bit.  And if you have floating point hardware (and know how to use
> it properly), that opens up new possibilities.
>
> 64-bit cores will extend that, but the step is almost negligible in
> comparison.  It would be wrong to say "int32_t is enough for anyone",
> but it is /almost/ true.  It is certainly true enough that it is not a
> problem that using "int64_t" takes two instructions instead of one.
Except that int64_t can take *four* instructions instead of one (add/sub/mul of two int64_t's with 32b hardware).
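For the multiply case, this is the schoolbook decomposition a compiler emits on a 32-bit core: three 32x32 partial products plus shifts and adds (the a_hi*b_hi term falls entirely outside a 64-bit result). A sketch; the function name is mine:

    #include <stdint.h>

    uint64_t mul64(uint64_t a, uint64_t b)
    {
        uint32_t a_lo = (uint32_t)a, a_hi = (uint32_t)(a >> 32);
        uint32_t b_lo = (uint32_t)b, b_hi = (uint32_t)(b >> 32);

        uint64_t low   = (uint64_t)a_lo * b_lo;      /* full 32x32->64 */
        uint32_t cross = a_lo * b_hi + a_hi * b_lo;  /* low 32 bits only */
        return low + ((uint64_t)cross << 32);
    }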
>>> Some parts of code and data /do/ double in size - but not uniformly, of
>>> course.  But your chip is bigger, faster, requires more power, has wider
>>> buses, needs more advanced memories, has more balls on the package,
>>> requires finer pitched pcb layouts, etc.
>>
>> And has been targeted to a market that is EXTREMELY power sensitive
>> (phones!).
>
> A phone cpu takes orders of magnitude more power to do the kinds of
> tasks that might be typical for a microcontroller cpu - reading sensors,
> controlling outputs, handling UARTs, SPI and I²C buses, etc.  Phone cpus
> are optimised for doing the "big phone stuff" efficiently - because
> that's what takes the time, and therefore the power.
But you're making assumptions about what the "embedded microcontroller" will actually be called upon to do! Most of my embedded devices have "done more" than the PCs on which they were designed -- despite the fact that the PC can defrost bagels!
> (I'm snipping because there is far too much here - I have read your
> comments, but I'm trying to limit the ones I reply to.)
>
>>> We will see that on devices that are, roughly speaking, tablets -
>>> embedded systems with a good gui, a touchscreen, networking.  And that's
>>> fine.  But these are a tiny proportion of the embedded devices made.
>>
>> Again, I disagree.
>
> I assume you are disagreeing about seeing 64-bit cpus only on devices
> that need a lot of memory or processing power, rather than disagreeing
> that such devices are only a tiny proportion of embedded devices.
I'm disagreeing with the assumption that 64-bit CPUs are solely used on "tablets, devices with good GUIs, touchscreens, networking" (in the embedded domain).
>> You've already admitted to using 32b processors
>> where 8b could suffice.  What makes you think you won't be using 64b
>> processors when 32b could suffice?
>
> As I have said, I think there will be an increase in the proportion of
> 64-bit embedded devices - but I think it will be very slow and gradual.
> Perhaps in 20 years time 64-bit will be in the place that 32-bit is
> now.  But it won't happen for a long time.
And how is that any different from 32b processors introduced in 1980 only NOW seeing any sort of "widespread" use? The adoption of new technologies accelerates, over time. People (not "everyone") are more willing to try new things -- esp if it is relatively easy to do so.

I can buy a 64b evaluation kit for a few hundred dollars -- I paid more than that for my first 8" floppy drive. I can run/install some demo software and get a feel for the level of performance, how much power is consumed, etc. I don't need to convince my employer to make that investment (so *I* can explore).

In a group environment, if such a solution is *suggested*, I can then lend my support -- instead of shying away out of fear of the unknown risks.
> Why do I use 32-bit microcontrollers where an 8-bit one could do the
> job?  Well, we mentioned above that you can be freer with the maths.
> You can, in general, be freer in the code - and you can use better tools
> and languages.
Exactly. It's "easier" and you're less concerned with sorting out (later) what might not fit or be fast enough, etc.

I could have done my current project with a bunch of PICs talking to a "big machine" over EIA485 links (I'd done an industrial automation project like that, before). But, unless you can predict how many sensors/actuators ("motes") there will EVER be, it's hard to determine how "big" that computer needs to be!

Given that the cost of the PIC is only partially reflective of the cost of the DEPLOYED mote (run cable, attach and calibrate sensors/actuators, etc.), the added cost of moving to a bigger device on that mote disappears. Especially when you consider the flexibility it affords (in terms of scaling).
> With ARM microcontrollers I can use the latest gcc and
> C++ standards - I don't have to program in a weird almost-C dialect
> using extensions to get data in flash, or pay thousands for a limited
> C++ compiler with last century's standards.  I don't have to try and
> squeeze things into 8-bit scaled integers, or limit my use of pointers
> due to cpu limitations.
>
> And manufacturers make the devices smaller, cheaper, lower power and
> faster than 8-bit devices in many cases.
>
> If manufacturers made 64-bit devices that are smaller, cheaper and lower
> power than the 32-bit ones today, I'd use them.  But they would not be
> better for the job, or better to work with and better for development in
> the way 32-bit devices are better than 8-bit and 16-bit.
Again, you're making predictions about what those devices will be.

Imagine 64b devices ARE equipped with radios. You can ADD a radio to your "better suited" 32b design. Or, *buy* the radio already integrated into the 64b solution. Are you going to stick with 32b devices because they are "better suited" to the application? Or, will you "suffer" the pains of embracing the 64b device?

It's not *just* a CPU core that you're dealing with. Just like the 8/16 vs 32b decision isn't JUST about the width of the registers in the device or size of the address space.

I mentioned my little experimental LFC device to discipline my NTPd. It would have been *nice* if it had an 8P8C onboard so I could talk to it "over the wire". But, that's not the appropriate sort of connectivity for an 8b device -- a serial port is. If I didn't have a means of connecting to it thusly, the 8b solution -- despite being a TINY development effort -- would have been impractical; bolting on a network stack and NIC would greatly magnify the cost (development time) of that platform.
>> It's just as hard for me to prototype a 64b SoC as it is a 32b SoC.
>> The boards are essentially the same size.  "System" power consumption
>> is almost identical.  Cost is the sole differentiating factor, today.
>
> For you, perhaps.  Not necessarily for others.
>
> We design, program and manufacture electronics.  Production and testing
> of simpler cards is cheaper.  The pcbs are cheaper.  The chips are
> cheaper.  The mounting is faster.  The programming and testing is
> faster.  You don't mix big, thick tracks and high power on the same
> board as tight-packed BGA with blind/buried vias - but you /can/ happily
> work with less dense packages on the same board.
>
> If you are talking about replacing one 400-ball SOC with another
> 400-ball SOC with a 64-bit core instead of a 32-bit core, then it will
> make no difference in manufacturing.  But if you are talking about
> replacing a Cortex-M4 microcontroller with a Cortex-A53 SOC, it /will/
> be a lot more expensive in most volumes.
>
> I can't really tell what kinds of designs you are discussing here.  When
> I talk about embedded systems in general, I mean microcontrollers
> running specific programs - not general-purpose computers in embedded
> formats (such as phones).
I cite phones as an example of a "big market" that will severely impact the devices (MCUs) that are actually manufactured and sold.

I increasingly see "applications" growing in complexity -- beyond "single use" devices in the past. Devices talk to more things (devices) than they had, previously. Interfaces grow in complexity (markets often want to exercise some sort of control or configuration over a device -- remotely -- instead of just letting it do its ONE thing).

In the past, additional functionality was an infrequent upgrade. Now, designs accommodate it "in the field" -- because they are expected to (no one wants to mail a device back to the factory for a software upgrade -- or have a visit from a service tech for that purpose). Rarely does a product become LESS complex, with updates.

I've often found myself updating a design only to discover I've run out of some resource ("ROM", RAM, real-time, etc.). This never causes the update to be aborted; rather, it forces an unexpected diversion into shoehorning the "new REQUIREMENTS" into the old "5 pound sack".

In *my* case, there are fixed applications (MANY) running on the hardware. But, the system is designed to allow for new applications to be added, old ones replaced (or retired), augmented with additional hardware, etc. It's not the "closed unless updated" systems previously common.

We made LORAN-C position plotters, ages ago. Conceptually, cut a portion of a commercially available map and adhere it to the plotter bed. Position the pen at your current location on the map. Turn on. Start driving ("sailing"). The pen will move to indicate your NEW current position as well as a track indicating your path TO that (from wherever you were a moment ago).

[This uses 100% of an 8b processor's real-time to keep up with the updates from the navigation receiver.]

"Gee, what if the user doesn't have a commercial map, handy? Can't we *draw* one for him?"

[Hmmm... if we concentrate on JUST drawing a map, then we can spend 100% of the CPU on THAT activity! We'll just need to find some extra space to store the code required and RAM to hold the variables we'll need...]

"Gee, when the fisherman drops a lobster pot over the side, he has to run over to the plotter to mark the current location -- so he can return to it at some later date. Why can't we give him a button (on a long cable) that automatically draws an 'X' on the plot each time he depresses it?"

You can see where this is going... Devices grow in features and complexity.

If that plotter was designed today, it would likely have a graphic display (instead of pen and ink). And the 'X' would want to be displayed in RED (or, some user-configured color). And another color for the map to distinguish it from the "track". And updates would want to be distributed via a phone or thumbdrive or other "user accessible" medium. This because the needs of such a device will undoubtedly evolve.

How often have you updated the firmware in your disk drives? Optical drives? Mice? Keyboard? Microwave oven? TV?

We designed medical instruments where the firmware resided in a big, bulky "module" that could easily be removed (expensive ZIF connector!) -- so that medtechs could perform the updates in minutes (instead of taking the device out of service). But, as long as we didn't overly tax the real-time demands of the "base hardware", we were free (subject to pricing issues) to enhance that "module" to accommodate whatever new features were required. The product could "remain current".
Like adding RAM to a PC to extend its utility (why can't I add RAM to my SmartTVs? Why can't I update their codecs?).

The upgradeable products are designed for longer service lives than the nonupgradable examples, here. So, they have to be able to accommodate (in their "base designs") a wider variety of unforeseeable changes.

If you expect a short service life, then you can rationalize NOT upgrading/updating and simply expecting the user to REPLACE the device at some interval that your marketeers consider appropriate.
> (For very small volumes, the actual physical production costs are a
> small proportion of the price, and for very large volumes you have
> dedicated machines for the particular board.)
>
>>>> Possibly.  Or, just someone that wanted to stir up discussion...
>>>
>>> Could be.  And there's no harm in that!
>>
>> On that, we agree.
>>
>> Time for ice cream (easiest -- and most enjoyable -- way to lose weight)!
>
> I've not heard of that as a dieting method, but I shall give it a try :-)
It's not recommended. I suspect it is evidence of some sort of food allergy that causes my body not to process calories properly (a tablespoon is 200+ calories; an enviable "scoop" is well over a thousand!).

It annoys my other half to no end cuz she gains weight just by LOOKING at the stuff! :> So, it's best for me to "sneak" it when she can't set eyes on it. Or, for me to make flavors that she's not keen on (this was butter pecan so she is REALLY annoyed!)
Don Y <blockedofcourse@foo.invalid> wrote:
> On 6/8/2021 7:46 AM, Theo wrote:
>> I think there will be divergence about what people mean by an N-bit system:
>>
>> Register size
>> Unit of logical/arithmetical processing
>> Memory address/pointer size
>> Memory bus/cache width
>
> (General) Register size is the primary driver.
Is it, though? What's driving that? Why do you want larger registers without a larger ALU width?

I don't think register size is of itself a primary pressure. On larger CPUs with lots of rename or vector registers, they have kilobytes of SRAM to hold the registers, and increasing the size is a cost. On a basic in-order MCU with 16 or 32 registers, is the register width an issue? We aren't designing them on 10 micron technology any more.

I would expect datapath width to be more critical, but again that's relatively small on an in-order CPU, especially compared with on-chip SRAM.
> However, it supports 16b operations -- on register PAIRs
> (an implicit acknowledgement that the REGISTER is smaller
> than the register pair).  This is common on many smaller
> processors.  The address space is 16b -- with a separate 16b
> address space for I/Os.  The Z180 extends the PHYSICAL
> address space to 20b but the logical address space
> remains unchanged at 16b (if you want to specify a physical
> address, you must use 20+ bits to represent it -- and invoke
> a separate mechanism to access it!).  The ALU is *4* bits.
This is not really the world of a current 32-bit MCU, which has a 32 bit datapath and 32 bit registers. Maybe it does 64 bit arithmetic in 32 bit chunks, which then leads to the question of which MCU workloads require 64 bit arithmetic?
> But you don't buy MCUs with a-la-carte pricing.  How much does an extra
> timer cost me?  What if I want it to also serve as a *counter*?  What
> cost for 100K of internal ROM?  200K?
>
> [It would be an interesting exercise to try to do a linear analysis of
> product prices with an idea of trying to tease out the "costs" (to
> the developer) for each feature in EXISTING products!]
>
> Instead, you see a *price* that is reflective of how widely used the
> device happens to be, today.  You are reliant on the preferences of others
> to determine which is the most cost effective product -- for *you*.
Sure, what you buy is a 'highest common denominator' - you get things you don't use, but that other people do. But it still depends on a significant chunk of the market demanding those features. It's then a cost function of how much the market wants a feature against how much it'll cost to implement (and at runtime). If the cost is tiny, it may well get implemented even if almost nobody asked for it.

If there's a use case, people will pay for it. (although maybe not enough)

Theo
On 6/9/2021 5:10 AM, Theo wrote:
> Don Y <blockedofcourse@foo.invalid> wrote:
>> On 6/8/2021 7:46 AM, Theo wrote:
>>> I think there will be divergence about what people mean by an N-bit system:
>>>
>>> Register size
>>> Unit of logical/arithmetical processing
>>> Memory address/pointer size
>>> Memory bus/cache width
>>
>> (General) Register size is the primary driver.
>
> Is it, though?  What's driving that?
> Why do you want larger registers without a larger ALU width?
You can use a smaller ALU (in the days when silicon was expensive) to do the work of a larger one -- if you spread the operation over time.
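In C terms, the trick looks like this: a wide add performed in byte-sized slices with an explicit carry, which is essentially what a 4-bit-ALU Z80 or an 8-bit MCU's runtime library does. Sketch only; names are invented:

    #include <stddef.h>
    #include <stdint.h>

    /* Add two n-byte little-endian integers one byte at a time,
     * propagating the carry: a narrow ALU spread over time. */
    void add_wide(uint8_t *sum, const uint8_t *a,
                  const uint8_t *b, size_t n)
    {
        unsigned carry = 0;
        for (size_t i = 0; i < n; i++) {
            unsigned s = a[i] + b[i] + carry;
            sum[i] = (uint8_t)s;
            carry  = s >> 8;
        }
    }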
> I don't think register size is of itself a primary pressure.  On larger CPUs
> with lots of rename or vector registers, they have kilobytes of SRAM to hold
> the registers, and increasing the size is a cost.  On a basic in-order MCU
> with 16 or 32 registers, is the register width an issue?  We aren't
> designing them on 10 micron technology any more.
It's just how people think of CPU widths. If there's no cost to register width, then why didn't 8b CPUs have 64 bit accumulators (and register files)?
> I would expect datapath width to be more critical, but again that's
> relatively small on an in-order CPU, especially compared with on-chip SRAM.
>
>> However, it supports 16b operations -- on register PAIRs
>> (an implicit acknowledgement that the REGISTER is smaller
>> than the register pair).  This is common on many smaller
>> processors.  The address space is 16b -- with a separate 16b
>> address space for I/Os.  The Z180 extends the PHYSICAL
>> address space to 20b but the logical address space
>> remains unchanged at 16b (if you want to specify a physical
>> address, you must use 20+ bits to represent it -- and invoke
>> a separate mechanism to access it!).  The ALU is *4* bits.
>
> This is not really the world of a current 32-bit MCU, which has a 32 bit
> datapath and 32 bit registers.
Correct. I was just illustrating how you can have different "widths" in a single architecture; yet a single "CPU width" has to be used to describe it.
> Maybe it does 64 bit arithmetic in 32 bit
> chunks, which then leads to the question of which MCU workloads require 64
> bit arithmetic?
I treat time as a 64b entity (32b being inadequate). IPv6 addresses won't fit in 32b. There are also algorithms that can benefit from processing data in wider chunks (e.g., counting the set bits in a 64b array goes faster in a 64b register than in a 32b one). My BigRationals would be noticeably faster if I could process 64b at a time, instead of 32. [This, of course, assumes D cache can hold "as much data" in each case.]

And you don't always need the full width of a register -- do you use all 32b of a register when you use it to keep track of the remaining number of iterations of a loop? Or, the index into an array? Or the time remaining until an upcoming deadline? Or processing characters in a string?
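The bit-counting case, concretely, using the gcc/clang builtins: on a target with a population-count instruction (x86-64 POPCNT, for example) the first function is essentially one instruction per word; a 32-bit target effectively compiles it to the second form, or to a library call if there is no popcount instruction at all. Function names are mine:

    #include <stdint.h>

    /* Count set bits in one 64-bit word. */
    int bits64(uint64_t x)
    {
        return __builtin_popcountll(x);
    }

    /* What a 32-bit core has to do: two halves, two counts, one add. */
    int bits64_split(uint64_t x)
    {
        return __builtin_popcount((uint32_t)x)
             + __builtin_popcount((uint32_t)(x >> 32));
    }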
>> But you don't buy MCUs with a-la-carte pricing.  How much does an extra
>> timer cost me?  What if I want it to also serve as a *counter*?  What
>> cost for 100K of internal ROM?  200K?
>>
>> [It would be an interesting exercise to try to do a linear analysis of
>> product prices with an idea of trying to tease out the "costs" (to
>> the developer) for each feature in EXISTING products!]
>>
>> Instead, you see a *price* that is reflective of how widely used the
>> device happens to be, today.  You are reliant on the preferences of others
>> to determine which is the most cost effective product -- for *you*.
>
> Sure, what you buy is a 'highest common denominator' - you get things you
> don't use, but that other people do.  But it still depends on a significant
> chunk of the market demanding those features.
Yes. Or, an application domain that consumes lots of parts.
> It's then a cost function of
> how much the market wants a feature against how much it'll cost to implement
> (and at runtime).  If the cost is tiny, it may well get implemented even if
> almost nobody asked for it.
You also have to remember that the seller isn't the sole actor in that negotiation. Charge too much and the customer can opt for a different (possibly "second choice") implementation. So, it is in the seller's interest to make his product as cost-effectively as possible. *Or*, have something that can't be obtained elsewhere.

Nowadays, there are no second sources as there were in decades past. OTOH, I can find *another* ARM (for example) that may be "close enough" to what I need and largely compatible with my existing codebase. So, try to "hold me up" (overcharge) and I may find myself motivated to visit one of your competitors.

[As HLLs are increasingly used, it's considerably easier to port a design to a different processor family entirely! Not so when you had 100K of ASM to leverage]

I worked in a Motogorilla shop, years ago. When I started my design, I brought in folks from other vendors. The Motogorilla rep got spooked; to lose a design to another house would require answering some serious questions from his superiors ("How did you lose the account?"). He was especially nervous that the only Moto offering that I was considering was second sourced by 7 or 8 other vendors... so, even if the device got the design, he would likely have competitors keeping his pricing in line.
> If there's a use case, people will pay for it.
> (although maybe not enough)
Designers often have somewhat arbitrary criteria for their decisions. Maybe you're looking for something that will be available for at least a decade. Or, have alternate sources that could be called upon in case your fab was compromised or oversold (nothing worse than hearing parts are "on allocation"!)

So, a vendor can't assume he has the "right" solution (or price) for a given application. Maybe the designer has a "history" with a particular vendor or product line and can leverage that experience in ways that wouldn't apply to a different vendor.

A vendor's goal should always be to produce the best device for his perceived/targeted audience at the best price point. Then, get it into their hands so they are ready to embrace it when the opportunity presents.

Microchip took an interesting approach trying to buy into "hobbyists" with cheap evaluation boards and tools. I'm sure these were loss leaders. But, if they ended up winning a design (or two) because the "hobbyist" was in a position to influence a purchasing decision...
David Brown <david.brown@hesbynett.no> writes:
> I can't really tell what kinds of designs you are discussing here.  When
> I talk about embedded systems in general, I mean microcontrollers
> running specific programs - not general-purpose computers in embedded
> formats (such as phones).
Philip Munts made a comment a while back that stayed with me: that these days, in anything mains powered, there is usually little reason to use an MCU instead of a Linux board.
Paul Rubin <no.email@nospam.invalid> wrote:
> James Brakefield <jim.brakefield@ieee.org> writes:
>> Am trying to puzzle out what a 64-bit embedded processor should look like.
>
> Buy yourself a Raspberry Pi 4 and set it up to run your fish tank via a
> remote web browser.  There's your 64 bit embedded system.
I suppose there's a question of what embedded tasks intrinsically require >4GiB RAM, and those that do so because it makes programmers' lives easier?
In other words, you /can/ write a function to detect if your fish tank is hot or cold in Javascript that runs in a web app on top of Chromium on top of Linux. Or you could make it out of a 6502, or a pair of logic gates. That's complexity that's not fundamental to the application.

OTOH maintaining a database that's larger than 4GB physically won't work without that amount of memory (or storage, etc). There are obviously plenty of computer systems doing that, but the question I don't know is what applications can be said to be 'embedded' but need that kind of RAM.

Theo
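The gap being described here is easy to make vivid: the part of the fish-tank application that is actually fundamental fits in a few lines of C. Illustrative only; all names and thresholds are invented:

    /* The entire "application", minus the web stack.
     * Temperature is in tenths of a degree Celsius. */
    typedef enum { TANK_OK, TANK_TOO_COLD, TANK_TOO_HOT } tank_state;

    tank_state check_tank(int temp_tenths_c)
    {
        if (temp_tenths_c < 220) return TANK_TOO_COLD;  /* below 22.0 C */
        if (temp_tenths_c > 280) return TANK_TOO_HOT;   /* above 28.0 C */
        return TANK_OK;
    }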
On 6/9/2021 9:41 AM, Paul Rubin wrote:
> David Brown <david.brown@hesbynett.no> writes:
>> I can't really tell what kinds of designs you are discussing here.  When
>> I talk about embedded systems in general, I mean microcontrollers
>> running specific programs - not general-purpose computers in embedded
>> formats (such as phones).
>
> Philip Munts made a comment a while back that stayed with me: that these
> days, in anything mains powered, there is usually little reason to use
> an MCU instead of a Linux board.
I note that anytime you use a COTS "module" of any kind, you're still stuck having to design and layout some sort of "add-on" card that handles your specific I/O needs; few real world devices can be controlled with just serial ports, NICs and "storage interfaces".

And, you're now dependent on a board supplier as well as having to understand what's on (and in) that board as they are now critical components of YOUR product. The same applies to any firmware or software that it runs.

I'm sure the FAA, FDA, etc. will gladly allow you to formally validate some other party's software and assume responsibility for its proper operation!
Paul Rubin <no.email@nospam.invalid> writes:
> Philip Munts made a comment a while back that stayed with me: that these
> days, in anything mains powered, there is usually little reason to use
> an MCU instead of a Linux board.
I have a friend who has a ceiling fan with a raspberry pi in it, because that was the easiest solution to turning it on and off remotely... So yeah, I agree, "with a computer" is becoming a default answer. On the other hand, my furnace (now geothermal) has been controlled by a linux board since 2005 or so... maybe I'm not the typical user ;-)
