
Embedded Linux processors

Started by Theo October 24, 2022
On 26/10/2022 11:34, Don Y wrote:
> On 10/26/2022 1:06 AM, David Brown wrote:
>> On 26/10/2022 03:17, Don Y wrote:
>>> On 10/24/2022 7:20 AM, Theo wrote:
>>>> I was idly looking to see what was out there in the low end Linux space -
>>>> something bigger than an ESP32 but more production friendly than a Raspberry
>>>> Pi.  I came across this excellent guide:
>>>>
>>>> https://jaycarlson.net/embedded-linux/
>>>>
>>>> He builds dev boards for 10 different chips from 7 vendors, just to see how
>>>> it all goes - both hardware and software.  The results are quite interesting.
>>>>
>>>> Any other recommendations for Linux-supporting SoCs that are nice for low
>>>> volume/hand production?
>>>
>>> As you've qualified the solution space with "Linux-supporting", I
>>> assume you mean a Linux port is already available (for at least
>>> the underlying architecture).
>>>
>>> And, as you've discounted the rPi as "less production friendly", I
>>> assume you're looking for *components*, not *assemblies*.
>>
>> I wouldn't assume that (though the OP will have to clarify).  Pi's are
>> fine for prototyping, but there are many reasons why they might not be
>> a suitable choice for real products.  However, that does not at all
>> suggest that it is a good idea to use chips directly rather than modules.
>>
>> Unless your production runs are at least 10,000 a time, it is unlikely
>> to be cost-effective to use anything other than pre-populated modules.
>> Designing a board for large ball count BGAs, high speed memories,
>> etc., is not quick or cheap, nor is their production.
>
> Did you *read* the article?
I didn't, no - I was responding to what /you/ wrote in reply to what the OP asked. That was the relevant issue. (I've now read the article, and it has not changed my opinions significantly.)
>
>    "To this end, I designed a dev board from scratch for each application
>    processor reviewed. Well, actually, many dev boards for each processor:
>    roughly 25 different designs in total. This allowed me to try out different
>    DDR layout and power management strategies — as well as fix some bugs
>    along the way."
>
> Perhaps you've no experience designing (and laying out and prototyping)
> "modern" parts.  It's not rocket science.  The days of paying $2K for
> a Leister are ancient history...  That was another point of the article.
>
I do have experience at it, yes. And it takes knowledge, tools, and time. I didn't say the OP could not do it - I don't know his abilities. I said it was not cost-effective.
>>> Looking for "low-cost linux boards" could give you an idea as to
>>> the processors chosen for each.  But, they typically are "kitchen sink"
>>> approaches to problems.
>>>
>>> I'd, instead, look into the kernel and see if you can do away with
>>> the PMMU (i.e., get it to work with all memory wired down and no
>>> swap configured; then, remove the code associated with paging).
>>
>> That could have been good advice - twenty years ago.
>>
>> Now it is pointless to aim for such a minimal system.  The cheapest
>> processors with MMU supported by Linux cost a few dollars.
>
> What do you do when your product *sells* for a few dollars?
>
Is that a trick question? You don't use Linux.
>> The cheapest non-MMU microcontrollers that are capable of supporting
>> Linux are at least ten dollars.
>
> How do you define "supporting Linux"?  I.e., "for which an existing build
> exists?"
>
Yes, or for which it is practical to make a build that could be used in a real system (as distinct from just for fun and bragging rights, such as the guy who got Linux "running" on an AVR).
> Most developers are only interested in the API and feature sets that
> they have available to them.  If it "looks" like linux, in terms of
> what they can expect it to do for them, they don't likely care about
> the actual implementation.
I don't understand what you are trying to say here. Are we to guess what /looks/ like Linux, but /isn't/ Linux? You think people who want embedded Linux would be happy with a BSD? (Some might, but certainly not all.) Or a Windows system with WSL? Or FreeRTOS and LWIP with POSIX-style socket APIs?
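To make that last option concrete: a trivial TCP client written against the plain BSD sockets API compiles essentially unchanged against Linux headers or against lwIP's sockets layer, assuming lwIP is built with its BSD-compatibility macros enabled (naming details such as close() vs. closesocket() depend on that configuration). This is an untested sketch; the USE_LWIP switch is just an illustrative build flag, not an lwIP define.

#ifdef USE_LWIP                 /* illustrative build flag, not an lwIP define */
#include "lwip/sockets.h"       /* lwIP sockets layer; compat macros map socket(), etc. */
#else
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#endif
#include <string.h>

/* Send a short greeting to ip:port over TCP.  Returns 0 on success. */
int send_hello(const char *ip, unsigned short port)
{
    struct sockaddr_in addr;
    int s = socket(AF_INET, SOCK_STREAM, 0);

    if (s < 0)
        return -1;

    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    addr.sin_addr.s_addr = inet_addr(ip);

    if (connect(s, (struct sockaddr *)&addr, sizeof addr) != 0) {
        close(s);               /* closesocket(s) on some lwIP configurations */
        return -1;
    }

    send(s, "hello\n", 6, 0);
    close(s);
    return 0;
}

The application only ever sees the POSIX-style API; whether Linux, a BSD, or an RTOS stack sits underneath is invisible at this level -- which is exactly the point being argued about.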
>
>>   Swap has always been optional, but working without an MMU leads to a
>> lot of complications and restrictions (such as no "fork" calls).
>
> Fork needn't "create a copy of the parent process" -- if the
> existing copy of the process can be used without duplication
> (think XIP -- no gobs of RAM into which to copy the new process
> image!).  All it need do is create a LOGICALLY new process container
> (which needn't even have "protection" from other processes).
Fork /always/ has to create a /logical/ copy of the parent process - that's what it does. Without an MMU, all /writeable/ memory areas need to be duplicated at the fork by full copy, whereas with an MMU the pages are marked "copy on write" and only actually duplicated when needed. ("fork" existed before MMU processors were used for *nix.) In MMU-less Linux, "fork" is simply not supported as it would be too inefficient and complicated. You need to use vfork() then execve(), or posix_spawn(), or clone(), with certain restrictions.
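For anyone following along, here is a minimal (untested) sketch of the no-MMU-friendly pattern: spawning a child directly instead of fork()+execve(). posix_spawn() is available on both MMU and no-MMU Linux C libraries (typically built on vfork()/clone() internally); error handling is kept to a minimum here.

#include <spawn.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>

extern char **environ;

/* Launch /bin/echo as a child process and wait for it to finish. */
int run_child(void)
{
    pid_t pid;
    int   status;
    char *argv[] = { "echo", "hello from child", NULL };

    int err = posix_spawn(&pid, "/bin/echo", NULL, NULL, argv, environ);
    if (err != 0) {
        fprintf(stderr, "posix_spawn failed: %d\n", err);
        return -1;
    }

    waitpid(pid, &status, 0);   /* reap the child */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}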
>
> Fork is probably the *least* valuable use of a PMMU in a system.
It is one of the biggest headaches when porting real Linux software to MMU-less Linux. It has become less of an issue for some software, because it has become more common to write programs that can run on Windows as well as Linux, and Windows does not support "fork()" either.
> An MMU that gives some (reasonable) control over accesses to
> specific regions IN A UNIFIED ADDRESS SPACE would likely lead
> to more robust code (in and of itself) than supporting a
> classic fork().
>
You are talking about an MPU (memory protection unit), not an MMU (memory management unit). MPUs are common on 32-bit microcontrollers, and let you restrict access to different parts of memory.

MMUs are used to change the mapping between logical addresses used by code and physical addresses used by the hardware. They provide many functions in addition to supporting "fork()", such as giving applications a contiguous view of memory despite fragmentation in the physical memory, and letting shared libraries have different physical and logical addresses. An MMU makes life massively simpler, more flexible and more efficient in a "big" OS where different programs are loaded and run at different times.
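For concreteness, here is a minimal, untested sketch of what an MPU region setup looks like on an ARMv7-M part (Cortex-M3/M4/M7), using the standard CMSIS register names. The device header name and the 32 KB region at 0x20000000 are placeholders -- check the bit fields against the part's reference manual before relying on any of it.

#include "device.h"     /* hypothetical vendor header pulling in the CMSIS core definitions */

/* Mark a 32 KB block (e.g. one set aside for configuration data) read-only
 * for all code, so a stray write triggers a MemManage fault immediately
 * instead of silently corrupting it.
 */
void protect_config_region(void)
{
    __DMB();

    MPU->CTRL = 0;                                   /* disable MPU while reconfiguring      */

    MPU->RNR  = 0;                                   /* select region 0                      */
    MPU->RBAR = 0x20000000u;                         /* region base (must be 32 KB aligned)  */
    MPU->RASR = (0x6u << MPU_RASR_AP_Pos)            /* AP=110: read-only, priv + unpriv     */
              | (14u  << MPU_RASR_SIZE_Pos)          /* SIZE=14 -> 2^(14+1) = 32 KB          */
              | MPU_RASR_C_Msk                       /* normal memory; adjust TEX/C/B/S      */
              | MPU_RASR_ENABLE_Msk;                 /* enable this region                   */

    MPU->CTRL = MPU_CTRL_PRIVDEFENA_Msk              /* keep default map for privileged code */
              | MPU_CTRL_ENABLE_Msk;                 /* turn the MPU on                      */

    SCB->SHCSR |= SCB_SHCSR_MEMFAULTENA_Msk;         /* take MemManage faults, not HardFault */
    __DSB();
    __ISB();
}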
>>  No one uses non-MMU Linux except for nerdy fun.  (And fun is
>> /always/ a good reason for doing something.)
>
> <https://www.kernel.org/doc/html/latest/admin-guide/mm/nommu-mmap.html>
> <https://www.techonline.com/tech-papers/supporting-linux-without-an-mmu/>

Yes - people did use it before, and now they don't. The day it becomes inconvenient to continue supporting it in the kernel will be the day it gets dropped.
>
>>> This may make some aspects of the implementation impractical.  E.g.,
>>> my RTOS relies on a PMMU to share data across protection domains,
>>> do zero copy transfers, etc.  But, you may be able to live without
>>> the things that rely on that mechanism.
>>>
>>> [No idea as I've never looked inside the linux kernel]
>>>
>>> Some of the older kernel versions (and ports) may give you an insight
>>> into what can/can't be done.
>>>
>>> This could expand the range of processors/SoCs that you could use
>>> (though likely require some effort for a port).
>
>
On 10/26/2022 3:39 AM, Theo wrote:
> Don Y <blockedofcourse@foo.invalid> wrote:
>> On 10/26/2022 2:09 AM, Theo wrote:
>>> I don't have a product :-)  But really just making a thought experiment
>>> about what would happen if I did have a product - let's say an IoT thingy
>>> (wifi, display, etc) in the <$100 sticker price, initial volumes let's say
>>> hundreds.
>>
>> But, you're only looking at Linux (or, any "fleshy" OS) because you
>> think it will make WiFi, networking, display, etc. "more straight-forward".
>> You don't really care, functionally, if it is "Linux" (i.e., a particular
>> kernel) that makes that happen, do you?  As long as the API isn't
>> too bizarre...
>
> The reason people use Linux is for the software stacks.
But that same argument applies to any device (including microcontrollers). It's why manufacturers develop support libraries and the like -- because they want to make their products (components) easier to design into a final product.

Would you care if it was ManufacturerOS instead of Linux -- if it supported the "mechanisms" that you need? Even if it was "closed" source? (Are you *really* going to do any kernel hacking?)
> It allows you to
> write in a more friendly language, have better libraries for doing
> complicated things, use existing tooling, not have to worry about boring
> housekeeping things like the networking (does your thing support IPv6?
> Linux has for decades, does your homebrew embedded RTOS? What about WPA3?).
> Can you interact securely with whatever cloud service your widget needs to
> do its thing? (especially if that service is not designed specifically for
> talking to low-end widgets)
But you're assuming your product NEEDS those things.

I don't have a filesystem anywhere in my design. Persistent storage is handled by a database -- because storage often wants to be *structured*, not a collection of unstructured files that the application has to parse (and verify) the structure thereof. Because I don't support the notion of a filesystem, everything related to that is unnecessary.

Only devices designed to *be* displays HAVE displays. So, why burden other devices with that overhead/complexity?

Encryption isn't a bolt-on feature but, rather, inherent in all comms. It doesn't make sense (in my application) to have comms that aren't encrypted!

OTOH, everything in my world is object-based with fine-grained capabilities governing the actions that can be invoked on specific objects. So, I can let you transmit on a serial port but not receive; and someone else configure that serial port but never access the content passing through it; and someone else... (It's likely that your code configures the port in one place but doesn't need to access the content, there -- so, why should it be ABLE to do so? Likewise, why should something that is interested in accessing the content be able to alter the configuration -- likely "by accident"?)

I don't need to "bolt this onto" some other implementation (and hope there are no cracks through which an exploit/bug can creep UNDER that) as it's part of the system's foundation. Because I have to tolerate foreign code that could actively try to subvert the security of my design (attempting to do something for which you don't have a suitable capability traps to the OS -- which, by default, kills off your process; you're either a malevolent entity or a bug... in either case, no reason to let you continue to execute!)

I have no global namespace (a filesystem typically is the namespace for most products). So, task A doesn't even know that object X exists (even if the developer of task A is 100.00% sure it does!) and, because A doesn't have a name for object X, there is no way it can access it.

I.e., I build the mechanisms that are appropriate for my product and care little about what a "desktop OS" thinks is appropriate. Yet, I can still build the common libraries that you're used to seeing ATOP my mechanisms. So, when you pepper your code with diagnostic "printf()s", they get delivered to an appropriate diagnostic device *somewhere* in the system -- the location and implementation of which is not important to your code.
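Purely as an illustration of the flavour of such fine-grained capabilities -- a hypothetical, made-up API, not the actual interface of the RTOS being described -- something along these lines:

#include <stdint.h>
#include <stdio.h>

typedef uint32_t cap_rights_t;
#define CAP_TX      (1u << 0)    /* may transmit                */
#define CAP_RX      (1u << 1)    /* may receive                 */
#define CAP_CONFIG  (1u << 2)    /* may change baud rate, etc.  */

typedef struct {
    int          object_id;      /* which serial port            */
    cap_rights_t rights;         /* what this holder may do      */
} cap_t;

/* Stand-in for the kernel check: reject any operation the capability
 * does not grant.  (In the scheme described above, the offender would
 * be killed off, not politely told "no".)
 */
static int invoke(cap_t cap, cap_rights_t needed)
{
    return (cap.rights & needed) == needed ? 0 : -1;
}

int main(void)
{
    cap_t tx_only = { .object_id = 3, .rights = CAP_TX };

    printf("transmit: %s\n", invoke(tx_only, CAP_TX)     == 0 ? "allowed" : "denied");
    printf("receive:  %s\n", invoke(tx_only, CAP_RX)     == 0 ? "allowed" : "denied");
    printf("config:   %s\n", invoke(tx_only, CAP_CONFIG) == 0 ? "allowed" : "denied");
    return 0;
}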
> Essentially you trade off ease of software development for hardware
> complexity.  If you're playing in the low volume game, development effort
> and time to market is more important than saving cents on production costs.
> If you're selling by the million the tradeoff is different.
It's not just quantities. What would you do if you developed yet another "low volume" product -- would you start your quantity count from zero, there?

Also, there may be other factors at play in your market. E.g., we would sell *12* tablet presses in a year. That's TWELVE (no typos). How sloppy could we be with our implementation at that production level? Why not use all OTS assemblies to make life easy?!

Ah, but OTS assemblies are often designed to fit a variety of applications. Lots of "if (WHATEVER)..." scattered through the codebase as it tries to configure itself for a specific application. But, WHATEVER will either be always true or always false (for a given configuration). Meaning, one branch of the code will NEVER be executed. In some industries, it wouldn't be allowed to remain in the product ("dead code") -- regardless of how few units you made!

"What's all this filesystem code doing in the kernel? You don't HAVE a filesystem in your product!" Ooops!
>> The problem, I see, is ending up with lots of "features" that you
>> don't really need in a given product.
>>
>> Do you *really* need a filesystem -- let alone support for a variety
>> of them (and a structure that facilitates supporting many even if you
>> only use *one*?).
>
> If you want to run <tool> and that needs a filesystem, yes you do.  I'm sure
> you could reimplement it to do without, but that takes effort.
That argument can apply to any proposed criteria. The question you should ask is: "Do you NEED to run <tool> *in* your product? Or, are you just resorting to creeping featurism and running it because you *can*?"

You can run bc(1) on a linux box. Should you offer a calculator utility to the product's users just because you *can* do so? You can run a web server. Does doing so actually add value -- to offset the added complexity and opportunity for bugs?

I bought a scanner, recently. I wanted a network connection to transport the scanned images to a remote host (farther away than a USB cable would tolerate). It's got more *cruft* in it (TELNET, HTTPd, SSH, etc.) than I can imagine anyone needing or wanting! A long list of features is also a long list of likely *bugs*!

[E.g., I can do 1200dpi scans using the USB interface but someone forgot to add that capability via the network interface! So, the i/f that I most desire is crippled -- despite all the extra cruft that they've spent time developing/maintaining!]
>> Do you really need to be able to support multiple network interfaces
>> with a stack that is designed to allow "equivalent" interfaces to
>> their drivers to slide under it?
>>
>> Once your app is up and running, will the page tables EVER change?
>
> That depends on the app.
As I said, above, "That argument can apply to any proposed criteria." Using that sort of logic, one can argue that you should embellish your solution with as much cruft as possible -- to cover all bases!
> The point here is to be able to use existing
> software without having to re-engineer it.  Once you start re-engineering
> things, that's where your time goes.
You're assuming you won't have to understand any of the things that you are embracing. Is your product support going to be reliant on linux forums? When a customer calls with a problem, are you going to have to HOPE someone takes an interest in understanding your product, its implementation and the expressed problem?

So, when you've an "upset" (not yet "angry") customer, you're going to cross your fingers and hope for a solution -- because you don't understand the "component" you are using (are you sure it's configured properly?)
>>> The ESP32s are nice as they're a simple, cheap, wifi module.  If you wanted
>>> to cut costs you could use the bare chip.  The Pis aren't: the Zero is a
>>> nice form factor, but you can't buy it in volume.  The regular Pis can't
>>> really be mounted on a custom PCB if you don't have a large enclosure.  The
>>> Compute Modules are better, but still larger than an ESP32.  However you
>>> can't really buy any of them at the moment, and if you could they would be
>>> quite expensive.  The Pi2040 is an ok microcontroller but nothing special
>>> (and wifi is an extra chip).  Also none of them have any protection from
>>> someone changing or stealing your firmware.
>>
>> That last isn't as easy to guard against as you might think...
>
> Indeed, which is why microcontrollers have various secure boot and encrypted
> firmware support.
>
> (which aren't perfect, but prevent somebody just pulling your flash chip and
> reading it out)
Yes. But, there are often other ways to get at the data.

If you are small enough (and your products don't have high margins that make them attractive to a cloner), you can likely get away with this -- save for the individual "hacker" who takes an interest in your particular device.

[And, of course, said hacker can now disseminate anything he learns easily in ways that make it easy for folks to stumble onto his efforts with the help of a search engine]

If your device is simple enough -- and you've not done anything to protect it "legally" -- then it's easier for someone to just copy the *notion* and not worry about your specific implementation.

[Ages ago, we manufactured a radar unit for boats. A Japanese company came by wanting to sell our units in Asia. We were very accommodating. They eventually just *copied* the design and we got nothing out of the deal (save for an initial sale of a dozen units). But, in the copying, they also made enhancements to our design -- some of which were so obvious, in hindsight, that we kicked ourselves for not having thought of them in the original design! I.e., in some ways, their version of OUR product was better than our own!]
>>> It is interesting in the above article how much the complexity starts to
>>> rise once you start going beyond a single chip solution: BGAs, DDR routing,
>>> numerous power supplies and sequencing, etc.
>>
>> But there's no black magic, there.  This is all "common practice", now.
>> If you don't have the skills, you develop them (as the author suggests).
>> Layout tools do a lot of this for you.  And, if you are looking at
>> smallish "products", the hairy parts of the design are usually close
>> to the CPU and don't extend far into the field.
>
> Indeed, no black magic, just time and cost.  Don't do it if you don't need
> it.
That's true if you look at the effort as a "one off". You wouldn't buy a logic analyzer if you were only debugging ONE, relatively simple design. OTOH, the time lost debugging that first design WITHOUT a LA could have reduced the effective cost of the LA purchased for the *second* design!

I've found it usually pays to make investments in tools, skills, etc. But that's because I've known where I wanted my career to go. So, I knew that an investment today would pay off in the future by making me better equipped to tackle a future project (that I may already have planned on!)

OTOH, things that I *know* are one-offs have too high a bar to justify any long-term commitments/investments. E.g., I'm making a Rube Goldberg-esque kinematic sculpture in the back yard. Every piece is hand-made -- because there will only EVER be one of these. Why invest in castings if I'm going to only use each once?
>> If you want to be in a business (regardless of size), you have to invest
>> in the tools necessary to make that business work.  The tools can be
>> physical assets -- or, intellectual skillsets.
>>
>> Only you can identify the likely direction your business (products)
>> will take.  So, only you can decide which "tools" are sensible
>> investments.
>
> The thing here is choosing your battles.  Spend your time on the things that
> add value to the product.  Don't make life needlessly harder when that's not
> necessary.  Everything *can* be done, but some things shouldn't *need* to be
> done.  If you're in the high-volume game, saving $1m using cheaper parts
> makes sense.  If you're in the low-volume game, you might only save $1000
> but spend $10K in time doing so.
But that requires you to know what your PRODUCTS (plural) are likely to be. Only you can know what your future actions/needs are *likely* to be.

If I wanted to go into the kinematic sculpture business, I'd be approaching *mine* very differently -- even if it meant mine being more costly and taking longer to complete (due to all of the "investments" for future efforts). *My* finished result would likely look more "professional"... but, it would also look to be just one of N such units!
On 10/26/2022 13:42, David Brown wrote:
> ...
>
> IMHO the "encrypt everything" movement is a silly idea and a massive
> waste of effort and resources.  Sure, you want your bank website traffic
> to use SSL, but it is completely unnecessary for the great majority of
> web traffic.
The "encrypt everything" movement is not just silly, it is *shite*. And it is not just about the web, it goes also for mail etc. It is OK to have the encryption _capability_ but doing it all over the place is just a way to push the sales of more silicon. They used to do this by just bloating software so PC-s would become "old" within <5 years; now that they have tens of *gigabytes* of RAM they need a way to justify selling even more. Overall may be not a bad thing, this has kept the industry advancing, but to those who can see how things work it looks not just silly, it looks.... (OK, here comes the Irish/Scottish word again).
On 10/26/2022 6:06 AM, Dimiter_Popoff wrote:
> On 10/26/2022 13:42, David Brown wrote:
>> ...
>>
>> IMHO the "encrypt everything" movement is a silly idea and a massive waste of
>> effort and resources.  Sure, you want your bank website traffic to use SSL,
>> but it is completely unnecessary for the great majority of web traffic.
>
> The "encrypt everything" movement is not just silly, it is *shite*.
> And it is not just about the web, it goes also for mail etc.
> It is OK to have the encryption _capability_ but doing it all over the
> place is just a way to push the sales of more silicon. They used to
> do this by just bloating software so PC-s would become "old" within
> <5 years; now that they have tens of *gigabytes* of RAM they need
> a way to justify selling even more.
Digital comms are used for increasingly more purposes. Encrypting everything saves you from wondering if something SHOULD be encrypted, or not, at a "per communique" level.

I've received correspondence from financial institutions along the lines of: "This is to confirm your recent transfer of $X from the account ending in 123 to the account ending in 456." Yay! You didn't disclose my account numbers. But, my *name* is on the email along with the size of the transaction and when it occurred! "This is to confirm your closing of the accounts ending in 123 and 456."

Even if *I* wanted them to use PEM, there's no way to force the issue; my only recourse is to withhold an email address (the consequence of that is losing on-line access to my accounts via HTTPS) or move to another financial institution.

[We receive dozens of print correspondence each week from financial institutions regarding our various accounts. Even if just "transaction confirmations", that volume of cleartext traffic would leak far too much *personal* information. Note that few people choose to receive financial statements printed on POSTCARDS (which are less expensive to mail)]

Should the video feeds (over IP) from the security cameras be encrypted? After all, anyone standing in those areas can SEE what the cameras are seeing so it's hardly a *secret* that needs to be protected! Isn't the camera's purpose as a deterrent? What about the MUZAK audio? Clearly anyone within earshot of the speakers can hear it... Or, the overhead "paging" system? And, obviously no need to encrypt VoIP traffic? Or, command and control traffic on the factory floor? What employee would willingly eavesdrop OR SUBVERT such traffic? Or, the video feed *from* the security office (to know if they're actually actively watching the other feeds!). etc.

Surely, your baby monitor need not be encrypted (?) -- who wants to watch a sleeping infant? Or, see who's at your front door? Back yard? Determine if anyone is moving around the vicinity of your thermostat? Who'd want to hack a pacemaker? Or, someone else's car? etc. [This is c.a.E, after all!]

If encryption is the normal means of communication, then the consequences of someone making a poor decision (regarding whether or not to send something in cleartext) goes away. It's one less issue to address in the potential attack surface. One less "afterthought" (as security seems to be, in most products)

[Increasingly, hardware support for encryption is available in newer processors -- because of a perceived demand for it! Imagine WIRELESS comms where access to the transmission "media" is effortless!]
> Overall may be not a bad thing, this has kept the industry advancing,
> but to those who can see how things work it looks not just silly,
> it looks.... (OK, here comes the Irish/Scottish word again).
I suspect the *hacked* pacemaker patient might have a different take on it! :>
On 26/10/2022 21:20, Don Y wrote:
> On 10/26/2022 6:06 AM, Dimiter_Popoff wrote:
>> On 10/26/2022 13:42, David Brown wrote:
>>> ...
>>>
>>> IMHO the "encrypt everything" movement is a silly idea and a massive
>>> waste of effort and resources.  Sure, you want your bank website
>>> traffic to use SSL, but it is completely unnecessary for the great
>>> majority of web traffic.
>>
>> The "encrypt everything" movement is not just silly, it is *shite*.
>> And it is not just about the web, it goes also for mail etc.
>> It is OK to have the encryption _capability_ but doing it all over the
>> place is just a way to push the sales of more silicon. They used to
>> do this by just bloating software so PC-s would become "old" within
>> <5 years; now that they have tens of *gigabytes* of RAM they need
>> a way to justify selling even more.
>
> Digital comms are used for increasingly more purposes.
> Encrypting everything saves you from wondering if something SHOULD
> be encrypted, or not, at a "per communique" level.
>
That's a reasonable argument, on the surface. But like many such simplistic rules, it discourages thinking, knowledge, nuances and appropriate usage. It is much like "zero tolerance" rules - they mean "zero thought" and often throw out the baby with the bath water. Different types of communication or storage have different requirements, and the benefits and costs of encryption are correspondingly varied.

There are /many/ costs to using encryption - not just processor cycles or code and ram space. There's complexity in the code and the scope for bugs, the near impossibility of debugging or monitoring traffic or recovering data in encrypted storage, and the need to handle ever-changing standards and expiring keys and certificates.

And while it might appear that "encrypt everything" means that even those that don't really understand the issues will still make "safe" systems because they use encryption by default, it is simply not true. Those who don't understand the appropriate security needs for a particular use-case are unlikely to use /appropriate/ encryption, and can easily get it wrong (such as poor handling of the keys). And now instead of saying "I don't understand this, I'll ask someone who does", they will think "it's all encrypted and therefore secure". They'll think their website is safe because it uses TLS, without considering that the bad guys can connect on the same encrypted links and hack in with the same weak passwords - only now as their traffic is encrypted, it's harder to track them.
On 10/24/2022 10:20 AM, Theo wrote:
> I was idly looking to see what was out there in the low end Linux space -
> something bigger than an ESP32 but more production friendly than a Raspberry
> Pi. ...
>
> Any other recommendations for Linux-supporting SoCs that are nice for low
> volume/hand production?
>
> Theo
Here are a few SOMs I've looked at, trying to avoid SoC difficulties:

https://www.mouser.com/c/?q=QSMP-15

Some firms I've worked with have been happy with Toradex (for new designs use the Verdin family):

https://www.toradex.com/computer-on-modules/verdin-arm-family

Lower end:

https://www.digikey.com/en/products/detail/microchip-technology/ATSAMA5D27C-D5M-CUR/7801902

I guess I'm a wimp, but I really don't want to deal with DDR routing and EMC issues for small runs...
