
large microprocessors?

Started by Paul Rubin February 3, 2013
stephenXXX@mpeforth.com (Stephen Pelc) writes:
> STM32F4xx have 192kb or 256kb RAM and 1Mb or more of Flash.
> These are excellent Cortex-M4 devices.
Thanks! I think I saw these before, but forgot about them. The STM32F4 Discovery board uses the STM32F407VGT6, which has 192 KB of RAM and a lot of other cool stuff too. I just spent a while looking at the data sheet. Memory protection, peripheral interfaces galore, 4 KB of ultra-low-power backup SRAM, a real-time clock, a hardware RNG... what's not to like? Wow! This thing can reach into areas where I had been thinking of using a Linux board with DRAM. I remember having some issue with the Discovery board, probably with the toolchain, but as pure hardware goes it's pretty impressive.
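For anyone curious what driving that hardware RNG looks like, here is a minimal bare-metal sketch in C. It assumes the register addresses given in ST's RM0090 reference manual (RNG block at 0x5006 0800 on AHB2, RCC at 0x4002 3800); the macro names below are my own shorthand rather than CMSIS names, so check them against the manual before relying on this.

/* Sketch: read one 32-bit sample from the STM32F407 hardware RNG by
 * direct register access.  Addresses per RM0090; verify for your part. */
#include <stdint.h>

#define RCC_AHB2ENR  (*(volatile uint32_t *)0x40023834u)  /* AHB2 clock enable */
#define RNG_CR       (*(volatile uint32_t *)0x50060800u)  /* RNG control       */
#define RNG_SR       (*(volatile uint32_t *)0x50060804u)  /* RNG status        */
#define RNG_DR       (*(volatile uint32_t *)0x50060808u)  /* RNG data          */

#define RCC_AHB2ENR_RNGEN  (1u << 6)   /* clock the RNG block   */
#define RNG_CR_RNGEN       (1u << 2)   /* enable the generator  */
#define RNG_SR_DRDY        (1u << 0)   /* data-ready flag       */

static uint32_t rng_read(void)
{
    RCC_AHB2ENR |= RCC_AHB2ENR_RNGEN;  /* enable the peripheral clock */
    RNG_CR      |= RNG_CR_RNGEN;       /* start the generator         */

    while ((RNG_SR & RNG_SR_DRDY) == 0)
        ;                              /* busy-wait for a sample      */

    return RNG_DR;                     /* reading DR clears DRDY      */
}

A real driver would also check the seed/clock error flags in RNG_SR, but the register-level flavour is the same.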
On 03/02/13 07:39, Robert Wessel wrote:
> On Sat, 02 Feb 2013 22:17:29 -0800, Paul Rubin
> <no.email@nospam.invalid> wrote:
>
>> I notice that the ram capacity (ignore program flash for now, but it
>> tends to basically be proportionate) of microcontrollers seems to grow
>> fairly continuously (say in 2x jumps) from very small (a dozen or so
>> bytes in an 8 bitter, lots of MSP430's in the 128 to 1k byte range,
>> Cortex M0's with 4k, etc.), up to about 32k (Cortex M4). Above that
>> there are a few chips with 64k or 128k, that are quite expensive, and
>> above that not much is available til you get to external DRAM which on
>> ready-made boards usually starts at 32 meg or even 64 meg (Olimex
>> Olinuxino) or 512 meg (Raspberry Pi). So there is a big jump from 32k
>> to 32 meg. It would be nice to have a low cost, single chip, 256k or 1
>> megabyte device but this doesn't seem to exist.
32-bit microcontrollers usually have significantly more RAM than 8/16-bit microcontrollers in the same price class. But I haven't seen many with more than 128 KB (Freescale's M4 series stops there); Freescale's MPC56xx PPC-based chips are the only ones I've used with more, and they don't count as small or low-cost.
>>
>> Is there some technical reason for this, or is it just a
>> market-determined thing? I know that desktop cpu's often have megabytes
>> of sram cache, so it's certainly technologically feasible to do
>> something similar with a smaller cpu.
>
> On-chip RAM is usually SRAM, and that's usually at least a factor of
> six times less dense than DRAM, so large on-chip memories usually
> require fairly large dies. And it's worse in practice since most
> microcontrollers are not implemented in the latest processes, and DRAM
> processes are highly optimized for density, both of which multiply the
> overhead.
It's not actually a question of the "latest" processes, but of the process being optimised for the job. When making a chip design, you have a lot of factors to consider - the number of layers, the types of layers, the size of the geometry, etc. The layer stackups suited for DRAM, SRAM, Flash, low-power digital, high-speed digital, high-accuracy analogue, and low-power analogue are all different. So when a designer wants to combine a large SRAM with a fast microcontroller on the same die, he must choose between making the SRAM larger, slower, and more expensive per bit - or making the microcontroller larger, slower, and more power-hungry.
> Things like eDRAM are possible, but require considerable extra
> processing in the fab, so are largely impossible from a cost
> perspective for low cost devices.
>
> Since external DRAMs are (mostly) commodity items, the price pressure
> on the manufacturers is severe, leading to excellent price per bit.
>
> Smaller external DRAMs are certainly possible, but there's not much of
> a price break below 32MB or so.
>
> I suspect we'll see stacked dies before too long, which would provide
> the large capacity without the hassle of an external DRAM.
Stacked dies do exist, as do side-by-side multi-die chips. But they are a lot more expensive to manufacture and test, and they introduce big challenges for power distribution and heat dissipation on the die. It is certainly an up-and-coming technology for memories (DRAM and Flash), but in those chips you have multiple identical dies, which makes things much easier. I've seen articles about I/O standards and drivers aimed at in-chip inter-die buses, but I won't hold my breath waiting for them to appear in low-cost microcontrollers.
On Sun, 03 Feb 2013 09:40:11 -0800, Paul Rubin
<no.email@nospam.invalid> wrote:

>stephenXXX@mpeforth.com (Stephen Pelc) writes:
>> STM32F4xx have 192kb or 256kb RAM and 1Mb or more of Flash.
>> These are excellent Cortex-M4 devices.
>
>Thanks! I think I saw these before, but forgot about them. The STM32F4
>Discovery board uses the STM32F407VGT6 which has 192kb of ram and a lot
>of other cool stuff too. I just spent a while looking at the data
>sheet. Memory protection, peripheral interfaces galore, 4k of
>ultra-low-power backup SRAM, realtime clock, hardware RNG, what's not to
>like?
Sounds like a PDP-11/34 on chip :-).
>Wow! This thing can reach into areas where I had been thinking
>of using a Linux board with DRAM.
192 KiB might be on the low side for Linux, but some older RSX-11 or early Unix-style OSes would run fine in this amount of RAM. Of course, those OSes were disk based, i.e. programs (and overlay segments) were loaded from disk into core/RAM, so some (shared) Flash or even rotating disks might be needed for program storage. Still, it might be interesting for developing processor arrays, in which tasks could be reconfigured much faster than by reprogramming Flash.
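As a rough sketch of that idea - reloading a task image into RAM rather than reprogramming Flash - here is what overlay-style loading might look like on a Cortex-M part, in C. The RAM address, the image layout, and the function names are all hypothetical; a real implementation would need the image built position-independent (or linked for that RAM address), plus cache/pipeline barriers on parts that have them.

/* Sketch: copy a code image out of flash into spare SRAM and jump to it.
 * Everything here is illustrative; the image must be position-independent
 * or linked for TASK_RAM_BASE, and on Cortex-M the entry address needs
 * bit 0 set to stay in Thumb mode. */
#include <stdint.h>
#include <string.h>

#define TASK_RAM_BASE  0x20010000u        /* hypothetical spare SRAM region */

typedef void (*task_entry_t)(void);

static void load_and_run_task(const uint8_t *image, uint32_t size)
{
    /* Copy the image from flash into RAM (microseconds, versus the
     * milliseconds-per-page cost of erasing and reprogramming flash). */
    memcpy((void *)TASK_RAM_BASE, image, size);

    /* Form a Thumb function pointer to the start of the copied image. */
    task_entry_t entry = (task_entry_t)(TASK_RAM_BASE | 1u);

    entry();                              /* run the freshly loaded task */
}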

Paul Rubin wrote:

> I notice that the ram capacity (ignore program flash for now, but it
> tends to basically be proportionate) of microcontrollers seems to grow
> fairly continuously (say in 2x jumps) from very small (a dozen or so
> bytes in an 8 bitter, lots of MSP430's in the 128 to 1k byte range,
> Cortex M0's with 4k, etc.), up to about 32k (Cortex M4). Above that
> there are a few chips with 64k or 128k, that are quite expensive, and
> above that not much is available til you get to external DRAM which on
> ready-made boards usually starts at 32 meg or even 64 meg (Olimex
> Olinuxino) or 512 meg (Raspberry Pi). So there is a big jump from 32k
> to 32 meg. It would be nice to have a low cost, single chip, 256k or 1
> megabyte device but this doesn't seem to exist.
>
> Is there some technical reason for this, or is it just a
> market-determined thing? I know that desktop cpu's often have megabytes
> of sram cache, so it's certainly technologically feasible to do
> something similar with a smaller cpu.
In the small processors, the amount of RAM required for code compiled with a good compiler is typically 16% of ROM, and for assembler typically 20%. When we were doing studies on this a few years ago these numbers were remarkably constant.

(Before someone says it is application dependent: it is, but so is the selection of the processor.) On the larger processors these numbers don't hold as well.

w..
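As a back-of-the-envelope illustration of those ratios (the 16%/20% figures are Walter's; the 1 MB ROM size below is just an example), here is a tiny C program applying them:

/* Toy illustration of the 16% / 20% RAM:ROM rule of thumb quoted above.
 * The ratios come from the post; the example ROM size is arbitrary. */
#include <stdio.h>

int main(void)
{
    const double compiled_ratio  = 0.16;           /* good compiler          */
    const double assembler_ratio = 0.20;           /* hand-written assembler */
    const double rom_bytes       = 1024.0 * 1024.0; /* e.g. 1 MB of flash    */

    printf("compiled : ~%.0f KB RAM\n", rom_bytes * compiled_ratio  / 1024.0);
    printf("assembler: ~%.0f KB RAM\n", rom_bytes * assembler_ratio / 1024.0);
    return 0;
}

For a 1 MB-flash part that predicts roughly 164 KB / 205 KB of RAM, which happens to be in the same ballpark as the 192 KB on the STM32F407 mentioned earlier.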
On Sun, 03 Feb 2013 17:06:30 -0500, Walter Banks
<walter@bytecraft.com> wrote:

>Paul Rubin wrote:
>
>> I notice that the ram capacity (ignore program flash for now, but it
>> tends to basically be proportionate) of microcontrollers seems to grow
>> fairly continuously (say in 2x jumps) from very small (a dozen or so
>> bytes in an 8 bitter, lots of MSP430's in the 128 to 1k byte range,
>> Cortex M0's with 4k, etc.), up to about 32k (Cortex M4). Above that
>> there are a few chips with 64k or 128k, that are quite expensive, and
>> above that not much is available til you get to external DRAM which on
>> ready-made boards usually starts at 32 meg or even 64 meg (Olimex
>> Olinuxino) or 512 meg (Raspberry Pi). So there is a big jump from 32k
>> to 32 meg. It would be nice to have a low cost, single chip, 256k or 1
>> megabyte device but this doesn't seem to exist.
>>
>> Is there some technical reason for this, or is it just a
>> market-determined thing? I know that desktop cpu's often have megabytes
>> of sram cache, so it's certainly technologically feasible to do
>> something similar with a smaller cpu.
>>
>
>In the small processors the amount of RAM requirements for
>code compiled with a good compiler is typically 16% of ROM and
>for assembler typically 20%. When we were doing studies on this
>a few years ago these numbers were remarkably constant.
>
>(Before someone says it is application dependent, it is but so is the
>selection of the processor application dependent) On the larger
>processors these numbers don't hold as well.
>
>w..
I think you addressed the caution I'd add. I'd just word it differently.

When you go to a doctor because you are sick, it's not appropriate for the doctor to immediately start telling you the most likely cause of your illness based on everyone else who gets sick. The doctor should listen to the symptoms (details) of your situation. Even then, statistics don't decide what you have. It's always in the details. After the doctor determines what you have, then your illness becomes part of the statistics.

The only reason a doctor should be thinking about "most probable" in your case should be about saving money on tests, not determining what you have. And statistics are most useful when allocating annual budgets. Financial stuff. Not in deciding cases. Those should be based on the individual facts of the situation.

Same thing with processor choices.

So 16%/20% is great information for those making business decisions about product placement and features. But not so great when you face the task at hand. Different things.

Jon
On Monday, February 4, 2013 11:06:30 AM UTC+13, Walter Banks wrote:
>
> In the small processors the amount of RAM requirements for
> code compiled with a good compiler is typically 16% of ROM and
> for assembler typically 20%. When we were doing studies on this
> a few years ago these numbers were remarkably constant.
Those numbers sound high for 8-bit micros?

The mainstream 8051 families come in at around 3~6% of maximum code size, at the Intel choices of 128:4096 and 256:8192, and the more modern 4096:65536.

Or are you saying the chips were only 33% code-full (but used all the RAM), and so lifted the RAM:code ratio? ;)

32-bit micros tend to have larger RAM numbers, as they get quite lazy with bits and bytes. Thus we see the small NXP 32-bit offerings with a 25% RAM:code ratio. The new Infineon variants have 16 kB of RAM, which varies from 25% to 200%(!) of code.
On Sunday, February 3, 2013 8:06:59 PM UTC+13, Paul Rubin wrote:
>
> Thanks. This appears to be an older and rather expensive part with a
> not-so-common architecture (Hitachi SH2) but it's good to know about.
If you want something newer, then Nuvoton has a series of stacked-die parts they call ARM Video SoCs, which come in gull-wing LQFP-128 / LQFP-64 packages. See: http://www.nuvoton.com/NuvotonMOSS/Community/ProductInfo.aspx?tp_GUID=66feb925-4931-4d99-938d-9a6f89fc0ac6 There is a choice of 16MB or 32MB of stacked RAM. Some upcoming ones even have an Ethernet MAC... (as well as USB). -jg

j.m.granville@gmail.com wrote:

> On Monday, February 4, 2013 11:06:30 AM UTC+13, Walter Banks wrote:
> >
> > In the small processors the amount of RAM requirements for
> > code compiled with a good compiler is typically 16% of ROM and
> > for assembler typically 20%. When we were doing studies on this
> > a few years ago these numbers were remarkably constant.
>
> Those numbers sound high for 8 bit micros ?
>
> The mainstream 8051 families, come in around 3~6% of Max Code, at the Intel choices of 128:4096 and 256:8192 and the more modern 4096:65536
>
> Or, are you saying the chips were only 33% code-full,(but used all RAM) and so lifted the RAM:code ratio ? ;)
>
> 32 bit micros tend to have larger RAM numbers, as they get quite lazy with bits and bytes.
> Thus we see the NXP small 32 bit offerings with 25% ratio of RAM:CODE
>
> The new Infineon variants have 16kB of RAM, which varies from 25% to 200%(!) of code.
The numbers came from a study we did of several hundred embedded applications and reference designs. They are the RAM and ROM actually used in each application. The applications we looked at were specifically for processors with 8-bit data paths. ROM was normalized to bytes (the Microchip PIC 14-bit mid-range, for example).

w..

Jon Kirwan wrote:

> On Sun, 03 Feb 2013 17:06:30 -0500, Walter Banks
> <walter@bytecraft.com> wrote:
>
> >Paul Rubin wrote:
> >
> >> I notice that the ram capacity (ignore program flash for now, but it
> >> tends to basically be proportionate) of microcontrollers seems to grow
> >> fairly continuously (say in 2x jumps) from very small (a dozen or so
> >> bytes in an 8 bitter, lots of MSP430's in the 128 to 1k byte range,
> >> Cortex M0's with 4k, etc.), up to about 32k (Cortex M4). Above that
> >> there are a few chips with 64k or 128k, that are quite expensive, and
> >> above that not much is available til you get to external DRAM which on
> >> ready-made boards usually starts at 32 meg or even 64 meg (Olimex
> >> Olinuxino) or 512 meg (Raspberry Pi). So there is a big jump from 32k
> >> to 32 meg. It would be nice to have a low cost, single chip, 256k or 1
> >> megabyte device but this doesn't seem to exist.
> >>
> >> Is there some technical reason for this, or is it just a
> >> market-determined thing? I know that desktop cpu's often have megabytes
> >> of sram cache, so it's certainly technologically feasible to do
> >> something similar with a smaller cpu.
> >>
> >
> >In the small processors the amount of RAM requirements for
> >code compiled with a good compiler is typically 16% of ROM and
> >for assembler typically 20%. When we were doing studies on this
> >a few years ago these numbers were remarkably constant.
> >
> >(Before someone says it is application dependent, it is but so is the
> >selection of the processor application dependent) On the larger
> >processors these numbers don't hold as well.
> >
> >w..
>
> I think you addressed the caution I'd add. I'd just word it
> differently.
>
> When you go to a doctor because you are sick, it's not
> appropriate for the doctor to immediately start out telling
> you what the most likely cause of your illness is based upon
> what is more likely based on everyone else who gets sick. The
> doctor should listen to the symptoms (details) of your
> situation. Even then, statistics don't help decide what you
> have. It's always in the details. After the doctor determines
> what you have, then your illness becomes part of the
> statistics.
>
> The only reason a doctor should be thinking about "most
> probable" in your case should be about saving money in tests,
> not in determining what you have. And statistics are most
> useful when allocating annual budgets. Financial stuff. Not
> in deciding cases. That should be based on the individual
> facts of the situation.
>
> Same thing with processor choices.
>
> So 16%/20% is great information for those making business
> decisions about product placement and features. But not so
> great when you face a task at hand.
>
> Different things.
>
> Jon
I agree that the ratio is application dependent, but in the study we did, the standard deviation of the RAM:ROM ratios used was surprisingly small. This was specifically for processors with 8-bit data paths.

w..
Walter Banks <walter@bytecraft.com> writes:
> The numbers came from a study we did of several hundred embed
> applications and reference designs. They are the used ram and
> rom in the application. The applications we looked at were
> specifically 8 bit data path processors....
I can believe that programs on those small micros tend to use a few static storage areas for parameters, buffers, etc., but tend not to have concurrent tasks created on the fly, lookup structures of significant size built at runtime, languages with garbage collection, etc.: typical things done in programs on larger CPUs. Small CPUs constrain the programs and programming styles that can run on them. It's not that the algorithms with bottomless memory appetites have to curb their desires on those CPUs--it's that they normally aren't used on those CPUs at all.
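A short C sketch of the style being described, with purely illustrative names and sizes: every buffer is a fixed-size static object, so the worst-case RAM footprint is known at link time and nothing grows at runtime.

/* Sketch of typical small-micro storage: all static, all fixed-size.
 * Names and sizes are illustrative only. */
#include <stdint.h>

#define UART_BUF_LEN   64
#define MAX_SENSORS     8

static uint8_t  uart_rx_buf[UART_BUF_LEN];    /* fixed receive ring buffer */
static uint8_t  uart_rx_head, uart_rx_tail;   /* ring-buffer indices       */
static uint16_t sensor_reading[MAX_SENSORS];  /* latest sample per channel */

/* Store a received byte; silently drops data when the buffer is full,
 * rather than allocating anything dynamically. */
static void uart_rx_put(uint8_t byte)
{
    uint8_t next = (uint8_t)((uart_rx_head + 1u) % UART_BUF_LEN);
    if (next != uart_rx_tail) {
        uart_rx_buf[uart_rx_head] = byte;
        uart_rx_head = next;
    }
}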
