Paul Rubin wrote:
> Walter Banks <walter@bytecraft.com> writes:
> > The numbers came from a study we did of several hundred embedded
> > applications and reference designs. They are the used ram and
> > rom in the application. The applications we looked at were
> > specifically 8 bit data path processors....
>
> I can believe programs on those small micros tend to use a few static
> storage areas for parameters, buffers, etc. but not tend to have
> concurrent tasks created on the fly, lookup structures of significant
> size built at runtime, languages with garbage collection, etc.: typical
> things done in programs on larger cpu's. Small cpu's constrain the
> programs and programming styles that can run in them. It's not that the
> algorithms with bottomless memory appetites have to curb their desires
> on those cpu's -- it's that they normally aren't used on those cpus at
> all.

I basically agree with you. The processors we looked at are mostly used
in consumer products and small-scale process control systems.

w..
large microprocessors?
Started by ●February 3, 2013
Reply by ●February 4, 2013
On 2/3/13 1:17 AM, Paul Rubin wrote:
> I notice that the ram capacity (ignore program flash for now, but it
> tends to basically be proportionate) of microcontrollers seems to grow
> fairly continuously (say in 2x jumps) from very small (a dozen or so
> bytes in an 8 bitter, lots of MSP430's in the 128 to 1k byte range,
> Cortex M0's with 4k, etc.), up to about 32k (Cortex M4). Above that
> there are a few chips with 64k or 128k, that are quite expensive, and
> above that not much is available til you get to external DRAM which on
> ready-made boards usually starts at 32 meg or even 64 meg (Olimex
> Olinuxino) or 512 meg (Raspberry Pi). So there is a big jump from 32k
> to 32 meg. It would be nice to have a low cost, single chip, 256k or 1
> megabyte device but this doesn't seem to exist.
>
> Is there some technical reason for this, or is it just a
> market-determined thing? I know that desktop cpu's often have megabytes
> of sram cache, so it's certainly technologically feasible to do
> something similar with a smaller cpu.
>
> Thanks.

I think the reason comes down to economies of scale in production. The
desktop CPU has much more area devoted to building the processor, so
adding the additional RAM for the cache is economically viable. These
processors use a more expensive chip process to fit all this on the
chip, but the processors have value worth the expense. For the
lower-end processors, the technology doesn't really allow that large a
memory array while staying within the value range of the processor.
The fact that this also aligns with the memory needed for typical uses
of these chips is a bonus that reduces the need to develop the
exception-case processor.

The way to get a microcontroller with a somewhat larger memory space is
to use an external memory chip (not a memory stick with multiple
chips). SRAM would be a simpler/cheaper choice than DRAM.
Reply by ●February 4, 2013
On Feb 4, 4:39 am, Walter Banks <wal...@bytecraft.com> wrote:
> Jon Kirwan wrote:
> > On Sun, 03 Feb 2013 17:06:30 -0500, Walter Banks
> > <wal...@bytecraft.com> wrote:
> >
> > >Paul Rubin wrote:
> > >
> > >> <snip>
> >
> > >In the small processors the amount of RAM requirements for
> > >code compiled with a good compiler is typically 16% of ROM and
> > >for assembler typically 20%. When we were doing studies on this
> > >a few years ago these numbers were remarkably constant.
> > >
> > >(Before someone says it is application dependent, it is but so is the
> > >selection of the processor application dependent) On the larger
> > >processors these numbers don't hold as well.
> > >
> > >w..
> >
> > I think you addressed the caution I'd add. I'd just word it
> > differently.
> >
> > When you go to a doctor because you are sick, it's not
> > appropriate for the doctor to immediately start out telling
> > you what the most likely cause of your illness is based upon
> > what is more likely based on everyone else who gets sick. The
> > doctor should listen to the symptoms (details) of your
> > situation. Even then, statistics don't help decide what you
> > have. It's always in the details. After the doctor determines
> > what you have, then your illness becomes part of the
> > statistics.
> >
> > The only reason a doctor should be thinking about "most
> > probable" in your case should be about saving money in tests,
> > not in determining what you have. And statistics are most
> > useful when allocating annual budgets. Financial stuff. Not
> > in deciding cases. That should be based on the individual
> > facts of the situation.
> >
> > Same thing with processor choices.
> >
> > So 16%/20% is great information for those making business
> > decisions about product placement and features. But not so
> > great when you face a task at hand.
> >
> > Different things.
> >
> > Jon
>
> I agree that the ratio is application dependent, but in the
> study we did the standard deviation of ram rom ratios
> used was surprisingly small. This was specifically for
> processors with 8 bit data paths.
>
> w..

I can imagine how on small processors the figures can be consistent
(though I find the assembly/C figures way too close to be supporting
this, i.e. they are so close it looks as though things are as Jon
suggests: one uses whatever is available). But on larger systems, where
buffering may or may not be needed, things can quickly change by some
huge factor from application to application.
A TCP/IP stack running at 100 Mbps (and actually using it for streaming
lots of data) alone can eat up a few hundred kilobytes or even a few
megabytes -- and that is just for inbound packet buffering.

Dimiter

------------------------------------------------------
Dimiter Popoff               Transgalactic Instruments
http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/sets/72157600228621276/
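[To put a rough number on that buffering claim: the buffer needed is
approximately link rate times the stall time the stack must ride out.
A minimal back-of-envelope sketch in C -- the 100 ms stall figure is an
assumption for illustration, not something from Dimiter's post:]

#include <stdio.h>

int main(void)
{
    const double link_bps  = 100e6;  /* 100 Mbps Ethernet */
    const double stall_sec = 0.100;  /* assumed worst-case stall to absorb */

    double bytes = (link_bps / 8.0) * stall_sec;  /* rate x time */
    printf("Buffer for %.0f ms at 100 Mbps: %.0f KB\n",
           stall_sec * 1000.0, bytes / 1024.0);
    return 0;
}

[That works out to about 1.2 MB -- well past the 192 KB of the STM part
discussed in this thread, which is Dimiter's point.]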
Reply by ●February 4, 2013
Richard Damon <news.x.richarddamon@xoxy.net> writes:
> For the lower end processors, the technology doesn't really allow for
> that large of a memory array to keep within the value range of the
> processor.

It occurs to me also that most microcontrollers are mixed-signal chips
(A/Ds and so forth). That might dictate some fab process decisions that
don't play well with high-density memory.

I'm pretty impressed with that STM part that Stephen Pelc mentioned: 1M
flash, 192kb ram (I didn't see a 256kb version, but 192kb is almost as
good), tons of on-chip peripherals, and there's an ultra-cheap
development board for it. The next step up is external memory, as you
mentioned.
Reply by ●February 4, 2013
upsidedown@downunder.com writes:
>> Wow! This thing can reach into areas where I had been thinking
>> of using a Linux board with DRAM.
>
> 192 KiB might be on the low side for Linux, but some older RSX-11 or
> early Unix-style OSes would run fine in this amount of RAM.

Actually quite a bit less ram -- and a lot of the ram they used
actually held program code, which would run from flash in the case of
this STM part.

> Of course, these OSes were disk based,

The chip has an SDIO controller. It's too bad that the Discovery
evaluation board doesn't have an SD card socket. That would allow
re-creating the PDP-11 Un*x experience ;-).

But, I think in reality I'd run a simple RTOS or a standalone
application on this chip. The next step up would be a Linux board, as
those have also gotten quite inexpensive (various boards inspired by
the Raspberry Pi). They just have more power drain and more software
to deal with, and a bit less realtime capability.
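[The point about code running from flash generalizes: on a part like
this STM32, functions normally execute in place from flash and cost no
RAM, and only routines that must dodge flash wait states (or run while
the flash is busy) get copied to RAM. A hedged sketch of how that is
commonly expressed with GCC -- it assumes a linker script that defines
a .ramfunc section and startup code that copies it from flash, as most
vendor-supplied scripts do; neither is shown here:]

/* Ordinary function: executes in place from flash, uses no RAM
   for code. */
unsigned checksum(const unsigned char *p, unsigned n)
{
    unsigned s = 0;
    while (n--)
        s += *p++;
    return s;
}

/* Time-critical routine: linked into (and fetched from) RAM.
   Assumes the linker script provides a ".ramfunc" section. */
__attribute__((section(".ramfunc"), noinline))
void toggle_fast(volatile unsigned *reg, unsigned mask)
{
    *reg ^= mask;   /* no flash wait states on instruction fetch */
}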
Reply by ●February 4, 2013
On 04/02/13 03:39, Walter Banks wrote:
>
> Jon Kirwan wrote:
>
>> <snip>
>>
>> So 16%/20% is great information for those making business
>> decisions about product placement and features. But not so
>> great when you face a task at hand.
>>
>> Different things.
>>
>> Jon
>
> I agree that the ratio is application dependent, but in the
> study we did the standard deviation of ram rom ratios
> used was surprisingly small. This was specifically for
> processors with 8 bit data paths.
>

That's not particularly surprising - "general" code will use stack and
local variables at a statistically fairly similar rate, so the
ram-to-rom ratio will be reasonably consistent. When moving to 32-bit,
the rom usage (for the same program functionality) typically increases
a little, but the ram usage increases by a factor of 2 to 4 (due to the
wider integers). But with this correction factor, the ratio will again
be reasonably consistent.
The exception to this is buffer space - for arrays of sample data,
communication buffers, etc. This is particularly common in bigger
micros, especially ones with high-speed communication (USB or
Ethernet).

Thus when manufacturers make a family of devices, they will typically
offer a range of ram/flash sizes for different uses, but keep a similar
ratio across the family. And a 32-bit family will have about 4-8 times
the ram for the same flash size as an 8-bit family would.

Statistically, this all makes economic sense. But as Jon says, it's a
pain if it doesn't fit the task at hand.
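[The 2-to-4x RAM factor David mentions is easy to see in code: the same
declarations cost different amounts of RAM because the natural int
width differs per target. A small illustration in C -- the per-target
sizes in the comments assume a typical 8-bit compiler with 16-bit int
(e.g. AVR) versus a 32-bit Cortex-M compiler:]

#include <stdint.h>

struct sample_log {
    int  count;         /* 2 bytes with 16-bit int, 4 on Cortex-M */
    int  readings[32];  /* 64 bytes vs 128 bytes */
    long total;         /* 4 bytes on both (long is 32-bit on each) */
};                      /* roughly 70 bytes vs 136 bytes: about 2x */

/* Fixed-width types pin the RAM cost to the data, not the target: */
struct sample_log_fixed {
    uint16_t count;
    int16_t  readings[32];
    int32_t  total;     /* ~70 bytes everywhere (plus any padding) */
};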
Reply by ●February 4, 2013
In article <7x8v74mvyo.fsf@ruckus.brouhaha.com>, no.email@nospam.invalid
says...
>
> upsidedown@downunder.com writes:
> >> Wow! This thing can reach into areas where I had been thinking
> >> of using a Linux board with DRAM.
> > 192 KiB might be on the low side for Linux, but some older RSX-11 or

For desktop yes, embedded no.

> > early Unix-style OSes would run fine in this amount of RAM.
>
> <snip>
>
> But, I think in reality I'd run a simple RTOS or a standalone
> application on this chip. The next step up would be a Linux board, as
> those have also gotten quite inexpensive (various boards inspired by
> the Raspberry Pi). They just have more power drain and more software
> to deal with, and a bit less realtime capability.

Hmm, the Raspberry Pi uses a difficult-to-obtain Broadcom chip with
stacked packages. The first version was 256MB, with HALF of that by
default assigned to graphics -- yes, 128MB to run Linux. The second
version has a 512MB RAM package, using SD card IO for disk. These days
you can reduce the graphics ram size significantly.

--
Paul Carpenter | paul@pcserviceselectronics.co.uk
<http://www.pcserviceselectronics.co.uk/> PC Services
<http://www.pcserviceselectronics.co.uk/fonts/> Timing Diagram Font
<http://www.gnuh8.org.uk/> GNU H8 - compiler & Renesas H8/H8S/H8 Tiny
<http://www.badweb.org.uk/> For those web sites you hate
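[For reference, the ARM/GPU split Paul describes is controlled by the
Pi's firmware boot configuration. A minimal sketch, assuming a standard
Raspberry Pi /boot/config.txt -- the value shown is illustrative; 16 MB
is about the practical floor for headless use:]

# /boot/config.txt -- RAM handed to the VideoCore GPU at boot.
# Whatever remains of the 256/512 MB package goes to the ARM/Linux side.
gpu_mem=16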
Reply by ●February 4, 2013
dp wrote:
> On Feb 4, 4:39 am, Walter Banks <wal...@bytecraft.com> wrote:
> > Jon Kirwan wrote:
> >
> > > So 16%/20% is great information for those making business
> > > decisions about product placement and features. But not so
> > > great when you face a task at hand.
> > >
> > > Different things.
> > >
> > > Jon
> >
> > I agree that the ratio is application dependent, but in the
> > study we did the standard deviation of ram rom ratios
> > used was surprisingly small. This was specifically for
> > processors with 8 bit data paths.
> >
> I can imagine how on small processors the figures can be
> consistent (though I find the assembly/C figures way too
> close to be supporting this, i.e. they are so close it
> looks things are as Jon suggests, one uses whatever
> is available).

The 16% compiled-code and 20% hand-written RAM/ROM ratios have a simple
explanation. Compilers are better at re-using variable space than
hand-written code can reasonably be. RAM re-use is an accounting
problem, something computers are good at. In HLLs it is redone on every
compile.

w..
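[What Walter describes is static overlay allocation: the compiler walks
the call tree, proves which functions can never be active at the same
time, and assigns their locals to the same RAM addresses. A hedged
sketch of the effect in C -- the functions and sizes are invented for
illustration:]

/* filter() and format() are each called only from main(), never
   from one another, so their locals are never live simultaneously.
   A call-tree-aware compiler (typical on stackless 8-bitters) can
   place buf_a and buf_b at the same addresses: the two functions
   together cost max(16, 16) = 16 bytes of RAM, not 32. */

static int filter(const unsigned char *in)
{
    unsigned char buf_a[16];            /* live only inside filter() */
    for (int i = 0; i < 16; i++)
        buf_a[i] = (unsigned char)(in[i] ^ 0x55);
    return buf_a[0];
}

static int format(char *out)
{
    char buf_b[16];                     /* live only inside format() */
    for (int i = 0; i < 16; i++)
        buf_b[i] = out[i] = 'x';
    return buf_b[0];
}

int main(void)
{
    unsigned char raw[16] = {0};
    char text[16];
    return filter(raw) + format(text);
}

[Hand-written assembler rarely re-derives this overlay map after every
change, which is Walter's explanation for the 16% vs 20% gap.]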
Reply by ●February 4, 2013
On 04.02.2013 15:41, rickman wrote:
> "Large microprocessors", isn't that like "jumbo shrimp"?

Not really. Given the evolution of the term, a "micro" processor is
really any device substantially smaller than your average dishwasher.
I've never heard a good reason why the evolution of terms didn't
continue further down to nano or pico processors -- but it didn't.