
Software expands to fill the memory available

Started by woodpecker · 4 years ago · 6 replies · latest reply 4 years ago · 257 views

Hi everyone,

I'm currently in the market for a new cellphone, and consequently reading reviews of a few different models. One review said that a certain cellphone was "let down by the limited amount of RAM".

Now, they're talking about 2 Gbyte, and this led me to reflect on the way that this technology is evolving.

If I, as an engineer, were to develop software techniques which were way more efficient, so that only 500 Mbyte of RAM was required for the same performance, it might impress the other engineers but not the marketing guys.

The quantity of RAM and the amount of processing power are selling points, but any indication of the efficiency of the applications and the operating system is nowhere to be found.

Back in the 1980s (and before), when memory was very expensive and processing power was limited, we had some very efficient code, but I fear that commercial pressures mean this is now a much lower priority.

So we take version 1, give it a few tweaks, a 50% faster processor, and twice as much RAM. Now get this version 2 into production ASAP!

Any views on this?

Oh yes, I'll probably scrape by with the 3 Gbyte in my new Motorola cellphone.   :-)

Reply by igendel, November 15, 2019

A little while ago, I used a certain software product to develop a minimal Android app: a blank screen, except for a button in the middle that allowed the user to shut down the application. The compiled file turned out to be 40 MB in size. But it worked, and it took me about two minutes to "develop" it, without any knowledge of Android; I could have compiled it just as easily for iPhones. Those 40 MB went into the entire framework, hidden from sight, that allowed this "magic" to happen.

So basically, the inefficiency (and bloat) of code is the price we pay for quick development. If each developer had to know a modern system inside out and write everything from scratch (in order to produce truly efficient code), nothing would get done. Including the evolution of the systems themselves.

That's also why I'm happy programming 8-bit MCUs, where efficiency still matters :-)


Reply by Dilberto, November 15, 2019
"The most amazing achievement of the computer software industry is its continuing cancellation of the steady and staggering gains made by the computer hardware industry." — Henry Petroski
Reply by CustomSarge, November 15, 2019

Not just cellphones - it's Everywhere. "Bloatware" and "feature-creep" have expanded app sizes and speed demands progressively since (I think) the beginning. IIRC the first IDE I used was ~150MB, and I wondered why it was so big. The latest (CodeWarrior 10) is ~1.1GB and I still don't get why. Moving the business model forward has to be a big part of it.


Reply by DKWatson, November 15, 2019

The concept of "free" memory is spilling over to the MCU playing field. It's a catch-22: companies demand a quick turnaround, hence the need for a high-level development platform, which leads to bloatware. To fill this demand, educational institutions (and bootcamps) skip over the basics and head straight to the code generators. Nobody learns assembly anymore, as the paymasters won't live with the development time needed to generate compact code.

Have they ever done an analysis of the extra dollar per unit (multiplied by X units per year) for the additional memory and power necessary to support bloated code? Compare this to the month or three of additional time it might take a competent engineer to write proper code, which in turn reduces the need for a high-powered (and high-priced) MCU.

I too will stick to 8-bit platforms, as there is little they have proven incapable of doing. It's also a growing market.
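As a rough illustration (all numbers hypothetical): an extra dollar of memory and power on 100,000 units a year is $100,000 per year, every year the product ships, while three months of a competent engineer's time might cost something like $40,000 to $60,000 once. For high-volume products, even generous assumptions about salaries don't change the direction of that comparison.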

Reply by mr_bandit, November 15, 2019

Assembly vs C: modern C compilers are pretty good at optimization and can generate code that is roughly as good as hand-crafted assembly. There are places where asm is still needed, but those cases are getting rarer.
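A small sketch of what that looks like in practice (my own example, assuming gcc or clang targeting a CPU with a byte-swap instruction):

```c
#include <stdint.h>

/* A portable byte-swap written in plain C. At -O2, gcc and clang
   typically recognize this pattern and emit a single bswap/rev
   instruction, which is exactly what a hand-written asm version
   would use. You can check with: gcc -O2 -S swap32.c */
uint32_t swap32(uint32_t x)
{
    return ((x & 0x000000FFu) << 24) |
           ((x & 0x0000FF00u) <<  8) |
           ((x & 0x00FF0000u) >>  8) |
           ((x & 0xFF000000u) >> 24);
}
```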

I also know from various client projects that there is little effort to get rid of dead code, which gets compiled and then linked in by lazy linkers. Or there are a few old functions that do basically the same thing as new functions (because the code is touched by so many developers), so a call to a "simple" old function drags in a bunch of old code.
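A hedged sketch of the dead-code problem and one common remedy with the GNU toolchain (the function names here are made up):

```c
/* legacy.c -- illustrative only */
#include <time.h>

/* Never called by the current application, but unless each function gets
   its own section and the linker garbage-collects unused sections, it
   still ends up in the final image. */
time_t old_local_time(void)
{
    return time(NULL);
}

/* The version the application actually calls. */
time_t new_local_time(long utc_offset_seconds)
{
    return time(NULL) + utc_offset_seconds;
}

/* One way to drop the dead function at link time:
 *   gcc -Os -ffunction-sections -fdata-sections -c legacy.c
 *   gcc -Wl,--gc-sections legacy.o main.o -o app
 */
```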

For example, I had one client where there were 3 (!) sets of code that basically did the same thing, but they all co-existed because, depending on conditions, each would be called. In particular, there were 3 versions of the code for determining local time (the product is used world-wide), and different parts of the code used all 3 versions.

One of the advantages of working in a regulated industry (avionics, medical) is that there is an outside agency that does pay attention to the project code. You have to show *every* line of code gets executed. Tends to cut down on the nonsense.
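For anyone who hasn't worked under that kind of requirement, here is a minimal sketch of what statement-coverage evidence looks like; gcov is one common way to collect it, and the function is made up for illustration:

```c
/* coverage_demo.c -- every line, including both early returns, must be
   shown to execute by some test case. */
#include <stdio.h>

int clamp(int value, int lo, int hi)
{
    if (value < lo)
        return lo;        /* a test must reach this line... */
    if (value > hi)
        return hi;        /* ...and this one... */
    return value;         /* ...and the fall-through case. */
}

int main(void)
{
    printf("%d %d %d\n", clamp(-5, 0, 10), clamp(50, 0, 10), clamp(7, 0, 10));
    return 0;
}

/* Typical workflow:
 *   gcc --coverage coverage_demo.c -o coverage_demo
 *   ./coverage_demo
 *   gcov coverage_demo.c    # annotates each line with its execution count
 */
```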

Reply by atomq, November 15, 2019

I'd rather ask whether we need all the features at the same time. A modern cellphone is a decent computer with an excess of horsepower and memory, "just in case" someone needs it. For example, do we really use features like navigation, the camera, and the phone simultaneously very often? Most of the time the engine sleeps, wasting a fair amount of power just to be ready for that "just in case". I'd say that is the price we pay for the convenience of having an all-in-one, just-in-case device in the pocket.
