64-bit embedded computing is here and now

Started by James Brakefield June 7, 2021
Sometimes things move faster than expected.
As someone with an embedded background this caught me by surprise:

Tera-Byte microSD cards are readily available and getting cheaper.
Heck, you can carry ten of them in a credit card pouch.
Likely to move to the same price range as hard disks ($20/TB).

That means that a 2+ square inch PCB can hold a 64-bit processor and enough storage for memory mapped files larger than 4GB.
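A minimal sketch of what that enables, assuming a POSIX-style 64-bit environment (the file path is hypothetical, and the file is assumed to be at least 6GB): with 64-bit pointers and size_t, you can index straight past the 4GB mark that a 32-bit address space cannot express.

#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/media/sd/capture.dat", O_RDONLY);  /* hypothetical file on the microSD card */
    if (fd < 0) return 1;

    size_t len = 6ULL << 30;                /* 6 GB mapping: impossible with 32-bit size_t */
    uint8_t *p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) return 1;

    uint8_t byte = p[5ULL << 30];           /* an index past 4 GB: needs a 64-bit pointer */
    printf("%u\n", (unsigned)byte);

    munmap(p, len);
    close(fd);
    return 0;
}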

Is the 32-bit embedded processor cost vulnerable to 64-bit 7nm devices as the FABs mature? Will video data move to the IOT edge? Will AI move to the edge?  Will every embedded CPU have a built-in radio?

Wait a few years and find out.

On 6/7/2021 7:47 AM, James Brakefield wrote:
> Sometimes things move faster than expected. As someone with an embedded
> background this caught me by surprise:
>
> Tera-Byte microSD cards are readily available and getting cheaper. Heck, you
> can carry ten of them in a credit card pouch. Likely to move to the same
> price range as hard disks ($20/TB).
>
> That means that a 2+ square inch PCB can hold a 64-bit processor and enough
> storage for memory mapped files larger than 4GB.
Kind of old news. I've been developing on a SAMA5D36 platform with 256M of FLASH and 256M of DDR2 for 5 or 6 years, now. PCB is just over 2 sq in (but most of that being off-board connectors). Granted, it's a 32b processor but I'll be upgrading that to something "wider" before release (software and OS have been written for a 64b world -- previously waiting for costs to fall to make it as economical as the 32b was years ago; now waiting to see if I can leverage even MORE hardware-per-dollar!).

Once you have any sort of connectivity, it becomes practical to support files larger than your physical memory -- just fault the appropriate page in over whatever interface(s) you have available (assuming you have other boxes that you can talk to/with)
> Is the 32-bit embedded processor cost vulnerable to 64-bit 7nm devices as
> the FABs mature? Will video data move to the IOT edge? Will AI move to the
> edge? Will every embedded CPU have a built-in radio?
In my case, video is already *at* the edge. The idea of needing a "bigger host" or "the cloud" is already obsolescent. Even the need for bulk storage -- whether on-board (removable flash, as you suggest) or remotely served -- is dubious. How much persistent store do you really need, beyond your executables, in a typical application? I've decided that RAM is the bottleneck as you can't XIP out of an SD card...

Radios? <shrug> Possibly, as wireless is *so* much easier to interconnect than wired. But, you're still left with the power problem; even at a couple of watts, wall warts are unsightly and low voltage DC isn't readily available *everywhere* that you may want to site a device. (how many devices do you want tethered to a USB host before it starts to look a mess?)

The bigger challenge is moving developers to think in terms of the capabilities that the hardware will afford. E.g., can you exploit *true* concurrency in your application? Or, will you "waste" a second core/thread context on some largely decoupled activity? How much capability will you be willing to sacrifice to your hosting OS -- and what NEW capabilities will it provide you?
> Wait a few years and find out.
The wait won't even be *that* long...
James Brakefield <jim.brakefield@ieee.org> writes:
> Is the 32-bit embedded processor cost vulnerable to 64-bit 7nm devices
> as the FABs mature? Will video data move to the IOT edge? Will AI move
> to the edge? Will every embedded CPU have a built-in radio?
I don't care what the people say--
32 bits are here to stay.
On 08/06/2021 07:31, Paul Rubin wrote:
> James Brakefield <jim.brakefield@ieee.org> writes:
>> Is the 32-bit embedded processor cost vulnerable to 64-bit 7nm devices
>> as the FABs mature? Will video data move to the IOT edge? Will AI move
>> to the edge? Will every embedded CPU have a built-in radio?
>
> I don't care what the people say--
> 32 bits are here to stay.
8-bit microcontrollers are still far more common than 32-bit devices in the embedded world (and 4-bit devices are not gone yet). At the other end, 64-bit devices have been used for a decade or two in some kinds of embedded systems.

We'll see 64-bit take a greater proportion of the embedded systems that demand high throughput or processing power (network devices, hard cores in expensive FPGAs, etc.) where the extra cost in dollars, power, complexity, board design are not a problem. They will probably become more common in embedded Linux systems as the core itself is not usually the biggest part of the cost. And such systems are definitely on the increase.

But for microcontrollers - which dominate embedded systems - there has been a lot to gain by going from 8-bit and 16-bit to 32-bit for little cost. There is almost nothing to gain from a move to 64-bit, but the cost would be a good deal higher. So it is not going to happen - at least not more than a very small and very gradual change.

The OP sounds more like a salesman than someone who actually works with embedded development in reality.
On 6/7/2021 10:59 PM, David Brown wrote:
> 8-bit microcontrollers are still far more common than 32-bit devices in
> the embedded world (and 4-bit devices are not gone yet). At the other
> end, 64-bit devices have been used for a decade or two in some kinds of
> embedded systems.
I contend that a good many "32b" implementations are really glorified 8/16b applications that exhausted their memory space. I still see lots of designs that build on a small platform (8/16b) and augment it -- either with some "memory enhancement" technology or additional "slave" processors to split the binaries. Code increases in complexity but there doesn't seem to be a need for the "work-per-unit-time" to.

[This has actually been the case for a long time. The appeal of newer CPUs is often in the set of peripherals that accompany the processor, not the processor itself.]
> We'll see 64-bit take a greater proportion of the embedded systems that
> demand high throughput or processing power (network devices, hard cores
> in expensive FPGAs, etc.) where the extra cost in dollars, power,
> complexity, board design are not a problem. They will probably become
> more common in embedded Linux systems as the core itself is not usually
> the biggest part of the cost. And such systems are definitely on the
> increase.
>
> But for microcontrollers - which dominate embedded systems - there has
> been a lot to gain by going from 8-bit and 16-bit to 32-bit for little
I disagree. The "cost" (barrier) that I see clients facing is the added complexity of a 32b platform and how it often implies (or even *requires*) a more formal OS underpinning the application. Where you could hack together something on bare metal in the 8/16b worlds, moving to 32 often requires additional complexity in managing mechanisms that aren't usually present in smaller CPUs (caches, MMU/MPU, DMA, etc.) Developers (and their organizations) can't just play "coder cowboy" and coerce the hardware to behaving as they would like. Existing staff (hired with the "bare metal" mindset) are often not equipped to move into a more structured environment.

[I can hack together a device to meet some particular purpose much easier on "development hardware" than I can on a "PC" -- simply because there's too much I have to "work around" on a PC that isn't present on development hardware.]

Not every product needs a filesystem, network stack, protected execution domains, etc. Those come with additional costs -- often in the form of a lack of understanding as to what the ACTUAL code in your product is doing at any given time. (this isn't the case in the smaller MCU world; it's possible for a developer to have written EVERY line of code in a smaller platform)
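To make the cache/DMA point concrete, here is a sketch of the bookkeeping a cached 32-bit part forces on you before a simple DMA transmit. It assumes a Cortex-M7 with CMSIS headers; start_dma_tx() is a hypothetical driver call. None of this exists on a cacheless 8/16b part.

#include <stdint.h>
#include "stm32f7xx.h"   /* device header pulling in core_cm7.h; any Cortex-M7 CMSIS header works (assumption) */

extern void start_dma_tx(const uint8_t *buf, uint32_t len);   /* hypothetical driver call */

static uint8_t tx_buf[512] __attribute__((aligned(32)));      /* cache-line aligned (GCC attribute) */

void send_packet(uint32_t len)
{
    /* Push dirty D-cache lines out to RAM so the DMA engine sees current data.
     * Forget this and the transfer silently sends stale bytes. */
    SCB_CleanDCache_by_Addr((uint32_t *)tx_buf, (int32_t)len);
    start_dma_tx(tx_buf, len);
}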
> cost. There is almost nothing to gain from a move to 64-bit, but the
> cost would be a good deal higher.
Why is the cost "a good deal higher"? Code/data footprints don't uniformly "double" in size. The CPU doesn't slow down to handle bigger data.

The cost is driven by where the market goes. Note how many 68Ks found design-ins vs. the T11, F11, 16032, etc. My first 32b design was physically large, consumed a boatload of power and ran at only a modest improvement (in terms of system clock) over 8b processors of its day. Now, I can buy two orders of magnitude more horsepower PLUS a bunch of built-in peripherals for two cups of coffee (at QTY 1)
> So it is not going to happen - at
> least not more than a very small and very gradual change.
We got 32b processors NOT because the embedded world cried out for them but, rather, because of the influence of the 32b desktop world. We've had 32b processors since the early 80's. But, we've only had PCs since about the same timeframe! One assumes ubiquity in the desktop world would need to happen before any real spillover to embedded. (When the "desktop" was an '11 sitting in a back room, it wasn't seen as ubiquitous.)

In the future, we'll see the 64b *phone* world drive the evolution of embedded designs, similarly. (do you really need 32b/64b to make a phone? how much code is actually executing at any given time and in how many different containers?)

[The OP suggests MCUs with radios -- maybe they'll be cell phone radios and *not* wifi/BLE as I assume he's thinking! Why add the need for some sort of access point to a product's deployment if the product *itself* can make a direct connection??]

My current design can't fill a 32b address space (but, that's because I've decomposed apps to the point that they can be relatively small). OTOH, designing a system with a 32b limitation seems like an invitation to do it over when 64b is "cost effective". The extra "baggage" has proven to be relatively insignificant (I have ports of my codebase to SPARC as well as Atom running alongside a 32b ARM)
> The OP sounds more like a salesman than someone who actually works with
> embedded development in reality.
Possibly. Or, just someone that wanted to stir up discussion...
On 08/06/2021 09:39, Don Y wrote:
> On 6/7/2021 10:59 PM, David Brown wrote:
>> 8-bit microcontrollers are still far more common than 32-bit devices in
>> the embedded world (and 4-bit devices are not gone yet). At the other
>> end, 64-bit devices have been used for a decade or two in some kinds of
>> embedded systems.
>
> I contend that a good many "32b" implementations are really glorified
> 8/16b applications that exhausted their memory space.
Sure. Previously you might have used 32 kB flash on an 8-bit device, now you can use 64 kB flash on a 32-bit device. The point is, you are /not/ going to find yourself hitting GB limits any time soon. The step from 8-bit or 16-bit to 32-bit is useful to get a bit more out of the system - the step from 32-bit to 64-bit is totally pointless for 99.99% of embedded systems.

(Even for most embedded Linux systems, you usually only have a 64-bit cpu because you want bigger and faster, not because of memory limitations. It is only when you have a big gui with fast graphics that 32-bit address space becomes a limitation.)

A 32-bit microcontroller is simply much easier to work with than an 8-bit or 16-bit with "extended" or banked memory to get beyond 64 K address space limits.
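A sketch of what that banked access looks like in practice - the bank-select register and window addresses here are hypothetical, but the shape will be familiar from many 8-bit parts:

/* Hypothetical banked 8-bit part: flash beyond 64 K is reached through
 * a 32 KB window at 0x8000, selected by a bank register. */
#define BANKSEL (*(volatile unsigned char *)0xFFFFu)   /* hypothetical bank-select register */

unsigned char read_far(unsigned char bank, unsigned int offset)
{
    unsigned char saved = BANKSEL;
    BANKSEL = bank;                                    /* map the right 32 KB window in */
    unsigned char v = *(volatile unsigned char *)(0x8000u + offset);
    BANKSEL = saved;                                   /* restore, or the caller breaks */
    return v;
}

/* On a flat 32-bit part the same thing is just:  v = *ptr;  */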
>> We'll see 64-bit take a greater proportion of the embedded systems that
>> demand high throughput or processing power (network devices, hard cores
>> in expensive FPGAs, etc.) where the extra cost in dollars, power,
>> complexity, board design are not a problem. They will probably become
>> more common in embedded Linux systems as the core itself is not usually
>> the biggest part of the cost. And such systems are definitely on the
>> increase.
>>
>> But for microcontrollers - which dominate embedded systems - there has
>> been a lot to gain by going from 8-bit and 16-bit to 32-bit for little
>
> I disagree. The "cost" (barrier) that I see clients facing is the
> added complexity of a 32b platform and how it often implies (or even
> *requires*) a more formal OS underpinning the application.
Yes, that is definitely a cost in some cases - 32-bit microcontrollers are usually noticeably more complicated than 8-bit ones. How significant the cost is depends on the balances of the project between development costs and production costs, and how beneficial the extra functionality can be (like moving from bare metal to RTOS, or supporting networking).
>> cost. There is almost nothing to gain from a move to 64-bit, but the
>> cost would be a good deal higher.
>
> Why is the cost "a good deal higher"? Code/data footprints don't
> uniformly "double" in size. The CPU doesn't slow down to handle
> bigger data.
Some parts of code and data /do/ double in size - but not uniformly, of course. But your chip is bigger, faster, requires more power, has wider buses, needs more advanced memories, has more balls on the package, requires finer pitched pcb layouts, etc.

In theory, you /could/ make a microcontroller in a 64-pin LQFP and replace the 72 MHz Cortex-M4 with a 64-bit ARM core at the same clock speed. The die would only cost two or three times more, and take perhaps less than 10 times the power for the core. But it would be so utterly pointless that no manufacturer would make such a device. So a move to 64-bit in practice means moving from a small, cheap, self-contained microcontroller to an embedded PC. Lots of new possibilities, lots of new costs of all kinds.

Oh, and the cpu /could/ be slower for some tasks - bigger cpus that are optimised for throughput often have poorer latency and more jitter for interrupts and other time-critical features.
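A small illustration of that non-uniform growth, under the usual ILP32 vs. LP64 data models: pointers double, ints don't, and alignment padding appears.

#include <stdio.h>

struct node {
    struct node *next;   /* 4 bytes on ILP32, 8 on LP64 */
    int          value;  /* 4 bytes either way */
};                       /* 8 bytes on ILP32; 16 on LP64, 4 of them padding */

int main(void)
{
    printf("sizeof(void *) = %zu, sizeof(struct node) = %zu\n",
           sizeof(void *), sizeof(struct node));
    return 0;
}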
>> So it is not going to happen - at
>> least not more than a very small and very gradual change.
>
> We got 32b processors NOT because the embedded world cried out for
> them but, rather, because of the influence of the 32b desktop world.
> We've had 32b processors since the early 80's. But, we've only had
> PCs since about the same timeframe! One assumes ubiquity in the
> desktop world would need to happen before any real spillover to embedded.
> (When the "desktop" was an '11 sitting in a back room, it wasn't seen
> as ubiquitous.)
I don't assume there is any direct connection between the desktop world and the embedded world - the needs are usually very different. There is a small overlap in the area of embedded devices with good networking and a gui, where similarity to the desktop world is useful.

We have had 32-bit microcontrollers for decades. I used a 16-bit Windows system when working with my first 32-bit microcontroller. But at that time, 32-bit microcontrollers cost a lot more and required more from the board (external memories, more power, etc.) than 8-bit or 16-bit devices. That has gradually changed with an almost total disregard for what has happened in the desktop world.

Yes, the embedded world /did/ cry out for 32-bit microcontrollers for an increasing proportion of tasks. We cried many tears when the microcontroller manufacturers offered to give more flash space to their 8-bit devices by having different memory models, banking, far jumps, and all the other shit that goes with not having a big enough address space. We cried out when we wanted to have Ethernet and the microcontroller only had a few KB of ram.

I have used maybe 6 or 8 different 32-bit microcontroller processor architectures, and I used them because I needed them for the task. It's only in the past 5+ years that I have been using 32-bit microcontrollers for tasks that could be done fine with 8-bit devices, but the 32-bit devices are smaller, cheaper and easier to work with than the corresponding 8-bit parts.
> In the future, we'll see the 64b *phone* world drive the evolution
> of embedded designs, similarly. (do you really need 32b/64b to
> make a phone? how much code is actually executing at any given
> time and in how many different containers?)
We will see that on devices that are, roughly speaking, tablets - embedded systems with a good gui, a touchscreen, networking. And that's fine. But these are a tiny proportion of the embedded devices made.
>> The OP sounds more like a salesman than someone who actually works with
>> embedded development in reality.
>
> Possibly. Or, just someone that wanted to stir up discussion...
Could be. And there's no harm in that!
David Brown <david.brown@hesbynett.no> wrote:
> But for microcontrollers - which dominate embedded systems - there has
> been a lot to gain by going from 8-bit and 16-bit to 32-bit for little
> cost. There is almost nothing to gain from a move to 64-bit, but the
> cost would be a good deal higher. So it is not going to happen - at
> least not more than a very small and very gradual change.
I think there will be divergence about what people mean by an N-bit system:

  Register size
  Unit of logical/arithmetical processing
  Memory address/pointer size
  Memory bus/cache width

I think we will increasingly see parts which have different sizes in one area but not the other.

For example, for doing some kinds of logical operations (eg crypto), having 64-bit registers and ALU makes sense, but you might only need kilobytes of memory so only have <32 address bits.

For something else, like a microcontroller that's hung off the side of a bigger system (eg the MCU on a PCIe card) you might want the ability to handle 64 bit addresses but don't need to pay the price for 64-bit registers.

Or you might operate with a 16 or 32 bit wide external RAM chip, but your cache could extend that to a wider word width.

There are many permutations, and I think people will pay the cost where it benefits them and not where it doesn't.

This is not a new phenomenon, of course. But for a time all these numbers were in the range between 16 and 32 bits, which made 32 simplest all round. Just like we previously had various 8/16 hybrids (eg 8 bit datapath, 16 bit address) I think we're going to see more 32/64 hybrids.

Theo
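The crypto example, sketched: a 64-bit rotate of the kind SHA-512 is built on. On a 64-bit ALU each of these is a single-register operation; a 32-bit core has to synthesise it from shifts and ORs across a register pair.

#include <stdint.h>

/* Rotate right; n must be 1..63 (SHA-512 only uses constants in that range). */
static inline uint64_t rotr64(uint64_t x, unsigned n)
{
    return (x >> n) | (x << (64u - n));
}

/* One concrete user: SHA-512's big-sigma-0 function. */
static inline uint64_t Sigma0(uint64_t a)
{
    return rotr64(a, 28) ^ rotr64(a, 34) ^ rotr64(a, 39);
}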
On Tuesday, June 8, 2021 at 2:39:29 AM UTC-5, Don Y wrote:
|> I contend that a good many "32b" implementations are really glorified
|> 8/16b applications that exhausted their memory space.

The only thing that will take more than 4GB is video or a day's worth of photos. So there are likely to be some embedded apps that need a >32-bit address space. Cost, size or storage capacity are no longer limiting factors.

Am trying to puzzle out what a 64-bit embedded processor should look like. At the low end, yeah, a simple RISC processor. And support for complex arithmetic using 32-bit floats? And support for pixel alpha blending using quad 16-bit numbers? 32-bit pointers into the software?
On 08/06/2021 21:38, James Brakefield wrote:

Could you explain your background here, and what you are trying to get
at?  That would make it easier to give you better answers.

> The only thing that will take more than 4GB is video or a day's worth of photos.
No, video is not the only thing that takes 4GB or more. But it is, perhaps, one of the more common cases. Most embedded systems don't need anything remotely like that much memory - to the nearest percent, 100% of embedded devices don't even need close to 4MB of memory (ram and flash put together).
> So there are likely to be some embedded apps that need a >32-bit address space.
Some, yes. Many, no.
> Cost, size or storage capacity are no longer limiting factors.
Cost and size (and power) are /always/ limiting factors in embedded systems.
> Am trying to puzzle out what a 64-bit embedded processor should look like.
There are plenty to look at. There are ARMs, PowerPC, MIPS, RISC-V. And of course there are some x86 processors used in embedded systems.
> At the low end, yeah, a simple RISC processor.
Pretty much all processors except x86 and brain-dead old-fashioned 8-bit CISC devices are RISC. Not all are simple.
> And support for complex arithmetic using 32-bit floats?
A 64-bit processor will certainly support 64-bit doubles as well as 32-bit floats. Complex arithmetic is rarely needed, except perhaps for FFT's, but is easily done using real arithmetic. You can happily do 32-bit complex arithmetic on an 8-bit AVR, albeit taking significant code space and run time. I believe the latest gcc for the AVR will do 64-bit doubles as well - using exactly the same C code you would on any other processor.
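For instance, a minimal sketch using C99's <complex.h>, which gives you complex arithmetic on plain 32-bit floats on essentially any processor with a C99 toolchain:

#include <complex.h>
#include <stdio.h>

int main(void)
{
    float complex a = 1.0f + 2.0f * I;
    float complex b = 3.0f - 1.0f * I;
    float complex c = a * b;            /* four real multiplies and two adds underneath */
    printf("%f %+fi\n", crealf(c), cimagf(c));   /* prints 5.000000 +5.000000i */
    return 0;
}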
> And support for pixel alpha blending using quad 16-bit numbers?
You would use a hardware 2D graphics accelerator for that, not the processor.
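For reference, the per-channel operation itself is simple in plain C - a sketch, where an accelerator or SIMD unit would just do four of these channels at once:

#include <stdint.h>

/* out = (src*alpha + dst*(65535 - alpha)) / 65535, per 16-bit channel */
static uint16_t blend16(uint16_t src, uint16_t dst, uint16_t alpha)
{
    uint32_t s = (uint32_t)src * alpha;              /* worst case fits in 32 bits */
    uint32_t d = (uint32_t)dst * (65535u - alpha);
    return (uint16_t)((s + d + 32767u) / 65535u);    /* rounded to nearest */
}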
> 32-bit pointers into the software?
With 64-bit processors you usually use 64-bit pointers.
On 08/06/2021 16:46, Theo wrote:
> David Brown <david.brown@hesbynett.no> wrote:
>> But for microcontrollers - which dominate embedded systems - there has
>> been a lot to gain by going from 8-bit and 16-bit to 32-bit for little
>> cost. There is almost nothing to gain from a move to 64-bit, but the
>> cost would be a good deal higher. So it is not going to happen - at
>> least not more than a very small and very gradual change.
>
> I think there will be divergence about what people mean by an N-bit system:
There has always been different ways to measure the width of a cpu, and different people have different preferences.
> Register size
Yes, that is common.
> Unit of logical/arithmetical processing
As is that. Sometimes the width supported by general instructions differs from the ALU width, however, resulting in classifications like 8/16-bit for the Z80 and 16/32-bit for the 68000.
> Memory address/pointer size
Yes, also common.
> Memory bus/cache width
No, that is not a common way to measure cpu "width", for many reasons. A chip is likely to have many buses outside the cpu core itself (and the cache(s) may or may not be considered part of the core). It's common to have 64-bit wide buses on 32-bit processors, it's also common to have 16-bit external databuses on a microcontroller. And the cache might be 128 bits wide.
> I think we will increasingly see parts which have different sizes in one
> area but not the other.
That has always been the case.
> For example, for doing some kinds of logical operations (eg crypto), having
> 64-bit registers and ALU makes sense, but you might only need kilobytes of
> memory so only have <32 address bits.
You need quite a few KB of ram for more serious cryptography. But it sounds more like you are talking about SIMD or vector operations here, which are not considered part of the "normal" width of the cpu. Modern x86 cpus might have 512 bit SIMD registers - but they are still 64-bit processors. But you are right that you might want some parts of the system to be wider and other parts thinner.
> For something else, like a microcontroller that's hung off the side of a
> bigger system (eg the MCU on a PCIe card) you might want the ability to
> handle 64 bit addresses but don't need to pay the price for 64-bit
> registers.
>
> Or you might operate with a 16 or 32 bit wide external RAM chip, but your
> cache could extend that to a wider word width.
>
> There are many permutations, and I think people will pay the cost where it
> benefits them and not where it doesn't.
Agreed.
> This is not a new phenomenon, of course. But for a time all these numbers
> were in the range between 16 and 32 bits, which made 32 simplest all round.
> Just like we previously had various 8/16 hybrids (eg 8 bit datapath, 16 bit
> address) I think we're going to see more 32/64 hybrids.
32-bit processors have often had 64-bit registers for floating point, and 64-bit operations of various sorts. It is not new.
