ARM Cortex M3 newbie questions

Started by Boo May 4, 2012
Hi David,

On 5/7/2012 6:37 AM, David Brown wrote:
> On 07/05/2012 15:27, Stephen Pelc wrote:
>> On Sun, 06 May 2012 11:35:30 -0700, Don Y <not@my.name> wrote:
>>
>>> Biggest problem (IMO) with network stacks in an embedded system
>>> is remembering they were designed for *benign* environments.
>>> I.e., they try to handle the consequences of (unfortunate)
>>> hardware and transient faults -- not deliberate attacks!
>>
>> Oh, really? We have clients with instruments on a raw
>> internet feed - no firewall, nothing. The instruments
>> were attacked within 30 seconds of first power up.
>>
>> Now our stack survives this environment. But then,
>> we charge for our stack.
>> http://www.mpeforth.com/powernet.htm
>
> There is also the point that in many cases, you /have/ a benign
> environment. It is very common for the network of embedded systems to be
> a closed network - you know exactly what is connected to the network,
> and what software is running on the systems. If you need to connect to a
> dangerous network (such as the Internet), you put appropriate firewalls
> in between.
Yup. There is also the case of people starting off with this sort of assumption -- then *forgetting* it. Or, forgetting to reevaluate the consequences. Then, wondering why they are "suddenly" seeing bizarre system behaviors that "don't make sense" from an examination of the code...
> It is all about designing the system appropriately, and putting the
> right kind of resources in the right place.
Exactly. *Not* just picking up some "component" and dropping it in place because it *looks* like it SHOULD do the job.
> If you need to connect to
> dangerous networks, then you need a stack that is tried and tested
> against such environments.
Note that this sort of stack only needs to have *predictable* behavior in those environments. That doesn't mean that it has to "work" there. E.g., my network speakers have really slim stacks (i.e., big chunks omitted to get them down into a tiny resource footprint) but they will survive "exposed" without "crashing" or being exploitable. Put them in a more benign environment and they will easily outperform a "full" stack. They "work" in each case. But, "work better" in the environment for which they were designed.
> But if you only need to connect to safe,
> known networks, then it's fine that your stack goes offline (denial of
> service) at the first attack, and that you control it by unencrypted
> telnet connections and pass around passwords using plain text.
>
> The danger, of course, is when people use software and configurations
> targeted at benign networks, and then connect them to dangerous ones.
Unfortunately, I think many aren't really aware/qualified to understand what can go wrong "on the wire" and "in the stack". This is a consequence of "black box" *thinking* (i.e., "I don't need to understand how it works, just how to *use* it...").

Everyone (?) knows that a second-order equation will yield up its roots with:

   (-b + sqrt(b*b - 4*a*c))/(2*a)
   (-b - sqrt(b*b - 4*a*c))/(2*a)

[sorry, sticking to ASCII makes this harder to visualize than it should be :> Hopefully I didn't munge the parens! Or, the *signs*!!]

However, *blindly* applying the quadratic formula (in a computing environment) will, eventually, lead you astray with certain "types" of curves: when b*b dwarfs 4*a*c, one of those numerators subtracts two nearly equal values and cancellation wipes out the significant digits. Understanding this leads to a more appropriate use of:

   (2*c)/(-b - sqrt(b*b - 4*a*c))
   (-b - sqrt(b*b - 4*a*c))/(2*a)

or:

   (-b + sqrt(b*b - 4*a*c))/(2*a)
   (2*c)/(-b + sqrt(b*b - 4*a*c))

as fitting to the curve in question. The difference, of course, is *understanding* the solution instead of just blindly *applying* it! (a tiny bit more work but a far more "accurate" result!)
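To make that concrete, here is a minimal C sketch (not from the original post) of the stable pairing described above. It folds the sign test into copysign(): the well-conditioned root comes from the textbook formula, and the other is recovered from the product of the roots (x1*x2 = c/a), so neither numerator ever subtracts two nearly equal quantities.

    #include <math.h>
    #include <stdio.h>

    /* Roots of a*x^2 + b*x + c = 0, a != 0, real roots only.
     * q = -(b + sign(b)*sqrt(b^2 - 4ac))/2 never cancels;
     * then x1 = q/a and x2 = c/q. */
    static int quadratic_roots(double a, double b, double c,
                               double *x1, double *x2)
    {
        double disc = b * b - 4.0 * a * c;
        if (disc < 0.0)
            return -1;                   /* complex roots: punt */
        double q = -0.5 * (b + copysign(sqrt(disc), b));
        *x1 = q / a;
        *x2 = (q != 0.0) ? c / q : 0.0;  /* q == 0 => b == c == 0 */
        return 0;
    }

    int main(void)
    {
        double x1, x2;
        /* b*b >> 4*a*c: the naive (-b + sqrt(disc))/(2*a) loses
         * nearly all significant digits of the small root here. */
        if (quadratic_roots(1.0, -1.0e8, 1.0, &x1, &x2) == 0)
            printf("x1 = %.17g  x2 = %.17g\n", x1, x2);
        return 0;
    }

With a = 1, b = -1e8, c = 1 this prints roots near 1e8 and 1e-8; the naive formula would return the small root as (nearly) zero.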
> But that's the fault of the system designers or users, not of the stacks.
> http://www.coocox.org/Index.html
Is this any good AAMOI? Has anyone here used it for real and can tell me how stable and usable it is?

Also, I am thinking of using the Olimex ARM-USB-TINY-H for ST ARM Cortex M3 device development; has anyone any experience with this setup?

Are there any ARM Cortex M3 forums that are well frequented and not specific to any particular manufacturer? I like c.a.e. but it's a bit low-volume atm and I wonder if there is anywhere out there where all the big kn0bs hang out in ARM terms?

Thanks,
Mike
Don Y wrote:

> Noob wrote:
>
>> Linux makes sense in a growing number of scenarios, even
>> in the (32-bit, virtual memory) embedded market : connected
>> TVs, "smart" phones, decoder set-top boxes, routers, etc.
>
> And you will notice that desktop stacks take a *boatload*
> of resources! Have you tried porting one to an environment
> with (just) 10's or 100's of KB of total RAM?
Connected TVs, "smart" phones, decoder set-top boxes, routers, etc. have *WAY* more RAM than that; typically /hundreds/ of megabytes, up to 1 gigabyte (!) for the Galaxy S3. RAM prices have reached an all-time low, at 4-8 euros per GB.
>> The TCP/IP stack in Linux is robust and mature, and the rare
>> security bugs are fixed fast.
>
> Just because a bug is fixed in a Linux/*BSD release (or patch),
> doesn't automatically convey those same fixes to the TV/phone/STB!
Phones and STBs can easily be upgraded in the field (TVs too, but consumers may not feel comfortable doing it).
> Also, the nature of attacks on a desktop/server is different
> than from an embedded device. The desktop stack just has to
> safeguard against being *crashed*.
And DoS, and spoofing, and hijacking.
> E.g., I do a *lot* of processing in and just above the network
> driver (i.e., low in the stack) so I can discard "inappropriate"
> incoming packets before they percolate up the stack (to the
> appropriate level of abstraction where they might, otherwise,
> be recognized as "inappropriate"). Outgoing traffic inherits
> the priorities of the sending task, etc. I.e., the stack
> is more of an extension of the application than a separate
> "service" provided to it.
FTR, lwip can be configured that way too.
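For reference, in lwIP 2.x that looks roughly like the following; this is a sketch assuming the optional LWIP_HOOK_IP4_INPUT hook (enabled from lwipopts.h), whose contract -- return nonzero to consume the packet, in which case the hook owns the pbuf -- should be checked against your lwIP version's opt.h. The UDP-only policy is purely illustrative.

    /* In lwipopts.h (assumed):
     *   #define LWIP_HOOK_IP4_INPUT(pbuf, netif) my_ip4_filter(pbuf, netif)
     */
    #include "lwip/pbuf.h"
    #include "lwip/netif.h"
    #include "lwip/prot/ip.h"
    #include "lwip/prot/ip4.h"

    /* Runs in ip4_input(), before any protocol dispatch. */
    int my_ip4_filter(struct pbuf *p, struct netif *inp)
    {
        (void)inp;

        if (p->len >= sizeof(struct ip_hdr)) {
            const struct ip_hdr *iph = (const struct ip_hdr *)p->payload;
            /* Example policy: this node only speaks UDP. */
            if (IPH_PROTO(iph) != IP_PROTO_UDP) {
                pbuf_free(p);   /* consumed packets are ours to free */
                return 1;       /* nonzero: rest of stack never sees it */
            }
        }
        return 0;               /* not consumed: normal processing */
    }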
> In a desktop stack, you either rely on hardware MMU/VM
> mechanisms to move data between userland and the kernel
> or do lots of copyin/out's. The stack has no idea how it
> is being used/abused so has to take a generic approach
> to how it provides its services (to the application).
>
> There are costs (time+space) for this generality. In a
> desktop, you usually have (or can upgrade to have) these
> resources. How much "slop" (extra cost) do you add to
> your embedded device to safeguard against this *possibility*
> (which may never come to pass)?
>
> Desktop stacks tend to be more focused on throughput at the
> expense of all else. Embedded systems tend to be more concerned
> with a particular "functionality" within a given resource set.
>
> Desktops are often overkill for *specific* markets/applications.
> If you look carefully at your application and what it expects
> of its environment, you can find many opportunities where a
> "general purpose" network stack can be trimmed down to better
> fit that application -- saving resources, improving resiliency
> and "predictability" in the process.
>
> This sort of scrutiny is, IMO, invaluable in designing robust
> products; how can you be comfortable with an implementation
> if you don't understand the technology on which it relies
> (to a level of detail that allows you to excise those portions
> that are inappropriate, liabilities, etc.)?
You sound like someone who enjoys reinventing the wheel again and again (and again). In my opinion, the main strength of open-source software is to free developers from /continuously/ solving the same problems (what a waste of human brain-power). Regards.
On 5/14/2012 11:46 AM, Noob wrote:
> Don Y wrote:
>
>> Noob wrote:
>>
>>> Linux makes sense in a growing number of scenarios, even
>>> in the (32-bit, virtual memory) embedded market : connected
>>> TVs, "smart" phones, decoder set-top boxes, routers, etc.
>>
>> And you will notice that desktop stacks take a *boatload*
>> of resources! Have you tried porting one to an environment
>> with (just) 10's or 100's of KB of total RAM?
>
> Connected TVs, "smart" phones, decoder set-top boxes, routers, etc
> have *WAY* more RAM than that; typically /hundreds/ of megabytes,
> up to 1 gigabyte (!) for the Galaxy S3. RAM prices have reached an
> all-time low, at 4-8 euros per GB.
How many GB are you going to put in your refrigerator? Freezer? Washing machine? Dryer? Bluetooth earpiece? Household thermostat? Irrigation system? Furnace? Range/stove? Telephone? Multimedia system remote control? Toaster?

<http://www.uberreview.com/2005/09/netbsd-controlled-toaster.htm>

Where do you think all those IPv6 addresses are GOING???

Do you really think consumers will be excited about the UNNECESSARY ADDED COSTS of "megabytes" of RAM just so their microwave oven can "know" the right defrost cycle for a 12 pound turkey? Or, so their washing machine can inform them when the wash cycle is complete??
>>> The TCP/IP stack in Linux is robust and mature, and the rare
>>> security bugs are fixed fast.
>>
>> Just because a bug is fixed in a Linux/*BSD release (or patch),
>> doesn't automatically convey those same fixes to the TV/phone/STB!
>
> Phones and STBs can easily be upgraded in the field (TVs too, but
> consumers may not feel comfortable doing it).
We've already seen how wonderfully unsolicited updates work! You've never encountered a problem where your machine "broke" after a SOLICITED update? "Gee, Bob, the TV was working yesterday! Honest, I didn't *touch* it! And don't go blaming the kids for this, either!!"
>> Also, the nature of attacks on a desktop/server is different
>> than from an embedded device. The desktop stack just has to
>> safeguard against being *crashed*.
>
> And DoS, and spoofing, and hijacking.
It doesn't have to continue to heat the house when under attack. Or prevent the items in the freezer from thawing (because it was commanded to raise the temperature setpoint to 10C). Or ensure that the plants in the yard keep getting watered despite any "schedule/weather updates" from external services ("Gee, is it Summer out there? Should I be watering more often? Or, less? I sure wish I knew... but, someone is pinging the hell out of my interface and I can't seem to get any useful information...")
>> E.g., I do a *lot* of processing in and just above the network
>> driver (i.e., low in the stack) so I can discard "inappropriate"
>> incoming packets before they percolate up the stack (to the
>> appropriate level of abstraction where they might, otherwise,
>> be recognized as "inappropriate"). Outgoing traffic inherits
>> the priorities of the sending task, etc. I.e., the stack
>> is more of an extension of the application than a separate
>> "service" provided to it.
>
> FTR, lwip can be configured that way too.
Look at *where* the filtering happens. If you harvest a packet, you have incurred cost. If you can discard it *in* the driver, you haven't.
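The difference is easy to see in a hypothetical driver receive path: the test below runs on the raw frame in the NIC's receive area, before any packet buffer is allocated or anything climbs the stack, so a rejected frame costs only a header inspection. All names here (frame_wanted, our_mac) are illustrative, not from any real driver.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define ETHERTYPE_IPV4  0x0800u
    #define ETHERTYPE_ARP   0x0806u
    #define ETH_HDR_LEN     14u
    #define ETH_ALEN        6u

    static const uint8_t our_mac[ETH_ALEN] =
        { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 };

    /* Called from the RX interrupt/poll loop on the raw frame.
     * Returns true only for frames worth the cost of buffering;
     * everything else is dropped right here, in the driver. */
    static bool frame_wanted(const uint8_t *frame, size_t len)
    {
        if (len < ETH_HDR_LEN)
            return false;

        /* Ethertype lives at offset 12, big-endian on the wire. */
        uint16_t type = (uint16_t)((frame[12] << 8) | frame[13]);

        if (type == ETHERTYPE_ARP)          /* stay reachable */
            return true;
        if (type == ETHERTYPE_IPV4 &&
            memcmp(frame, our_mac, ETH_ALEN) == 0)  /* unicast to us */
            return true;

        return false;   /* multicast, unknown protocols, etc.: drop */
    }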
>> In a desktop stack, you either rely on hardware MMU/VM
>> mechanisms to move data between userland and the kernel
>> or do lots of copyin/out's. The stack has no idea how it
>> is being used/abused so has to take a generic approach
>> to how it provides its services (to the application).
>>
>> There are costs (time+space) for this generality. In a
>> desktop, you usually have (or can upgrade to have) these
>> resources. How much "slop" (extra cost) do you add to
>> your embedded device to safeguard against this *possibility*
>> (which may never come to pass)?
>>
>> Desktop stacks tend to be more focused on throughput at the
>> expense of all else. Embedded systems tend to be more concerned
>> with a particular "functionality" within a given resource set.
>>
>> Desktops are often overkill for *specific* markets/applications.
>> If you look carefully at your application and what it expects
>> of its environment, you can find many opportunities where a
>> "general purpose" network stack can be trimmed down to better
>> fit that application -- saving resources, improving resiliency
>> and "predictability" in the process.
>>
>> This sort of scrutiny is, IMO, invaluable in designing robust
>> products; how can you be comfortable with an implementation
>> if you don't understand the technology on which it relies
>> (to a level of detail that allows you to excise those portions
>> that are inappropriate, liabilities, etc.)?
>
> You sound like someone who enjoys reinventing the wheel again
> and again (and again).
I believe in coming up with the right solution for the problem.

Do you use long longs for all your "numbers"? Heck, you can represent 1, 35099, 68736, 5003876555, etc. ALL as long longs! Why bother with byte_t's, shorts, ints, longs and long longs when you could just use long longs for ALL of them!!! Why REINVENT all the arithmetic operations for each of these different data type sizes?!

Hmmm... we'll have problems with floating point values, though. OK, so we'll use long *doubles* for all our numeric needs! Why bother with integer data types, floats, doubles *and* long doubles if we can just handle them all with long doubles?! Why REINVENT all the math LIBRARIES to support each of these types? Why burden printf with having to know which size argument it is dealing with? "Numbers are numbers", right?

Of course, we also have to worry about numbers that can't be accurately expressed in those representations. So, maybe we should adopt the use of a *rational* data type. I.e.,

   typedef struct {
       long double numerator;
       long double denominator;
   } rational_t;

And, of course, that won't handle complex numbers so we need a complex_rational_t as well... Look at all the REINVENTING that we've been able to AVOID! After all, memory is cheap! Put a few GB in your toaster so you can implement:

   turn_on_heating_element();
   sleep( (rational) (5.0L, 1.0L) );
   turn_off_heating_element();
   eject_toast();
   beep();

with those nice CONSISTENT numeric types! :>

Do you have ONE sort() routine in your toolbox? Heck, sort is sort is sort. Why REINVENT such a trivial facility? Just throw more CPU cycles/memory at "whatever" algorithm you settle on and live with that! Of course, you will have to make sure your sort() is completely parameterized so you can sort arbitrary objects -- not just "numbers" (which we have already decided will be complex_rational_t's) or "strings" (which should all be of variable length -- do we do LR alpha sorts? What if we're sorting some numerical strings? Do we add a SWITCH to sort right justified??) Look at how much we'll *save* by not reinventing specialized sort()'s! :>

And, we surely don't need gets(), fgets(), getchar(), fgetc(), getc(), UNgetc(), scanf(), sscanf(), fscanf(), etc. Let's just settle on *one* so we don't REINVENT any of the same (and incompatible) mechanisms. Nor, different window managers, operating systems, "computers", mice, etc.

Choice is bad. One size fits ALL! Need a pocket calculator? Carry a laptop! Need a pencil and pad of paper? Put your desk on wheels!! :-/

We'll take a VOTE next Tuesday and rid ourselves of all this UNNECESSARY duplication!
> In my opinion, the main strength of
> open-source software is to free developers from /continuously/
> solving the same problems (what a waste of human brain-power).
People often mistake *code* reuse for *design* reuse. The thought is that cobbling some existing (hopefully *working*) code into a new project somehow cuts a corner. Sure! If you are making essentially the same *thing* as that for which the code was originally created! (Is your washing machine essentially the same as your STB?? Or your furnace? Granted, they each might have a network connection -- but, are their roles and needs *that* similar that they would benefit from the same network stack as that used in your PC??? REGARDLESS OF ITS SOFTWARE COST??)

The real gains are in reusing *designs*, not *code*. Approaching similar problems in similar ways. Reusing *code* assumes the code works, you *understand* it sufficiently to *know* that it addresses your problem domain adequately, and that you will be able to *maintain* it (what good is developing a product "overnight" if you have to answer the first customer complaint with, "Gee, I'll post a message on the forum and hope someone *there* can tell me what's going wrong...").

This is how product counterfeiters often falter -- when something goes wrong, they haven't the knowledge of how to *fix* it (because they've COPIED instead of ENGINEERED). [It's also a great way to *thwart* counterfeiters by designing your product to "break" if subtly altered!]

Reinventing for the sake of reinventing is a non-starter. Blindly *reusing* is even worse!
>> http://www.coocox.org/Index.html
>
> Is this any good AAMOI? Has anyone here used it for real and can
> tell me how stable and usable it is?
I am working with the CooCox IDE and have found it pretty good and reliable. Our project is an autopilot system for RC models; after a year of development it's in production.

While there are other compatible interfaces, the ColinkEx is the safe bet for the CooCox IDE. I had some trouble with others: one worked with one version and failed with the next edition.

---------------------------------------
Posted through http://www.EmbeddedRelated.com
Don Y wrote:
> Noob wrote:
>> Don Y wrote:
>>> Noob wrote:
>>>
>>>> Linux makes sense in a growing number of scenarios, even
>>>> in the (32-bit, virtual memory) embedded market : connected
>>>> TVs, "smart" phones, decoder set-top boxes, routers, etc.
>>>
>>> And you will notice that desktop stacks take a *boatload*
>>> of resources! Have you tried porting one to an environment
>>> with (just) 10's or 100's of KB of total RAM?
>>
>> Connected TVs, "smart" phones, decoder set-top boxes, routers, etc
>> have *WAY* more RAM than that; typically /hundreds/ of megabytes,
>> up to 1 gigabyte (!) for the Galaxy S3. RAM prices have reached an
>> all-time low, at 4-8 euros per GB.
>
> How many GB are you going to put in your refrigerator? Freezer?
> Washing machine? Dryer? Bluetooth earpiece? Household thermostat?
> Irrigation system? Furnace? Range/stove? Telephone? Multimedia
> system remote control? Toaster?
You're a big fan of logical fallacies, aren't you?

The answer is mu. ( http://www.catb.org/jargon/html/M/mu.html )

I was discussing systems running on 32-bit, MMU-enabled CPUs. Would you use an ARM Cortex M3 in your toaster?

<huge snip>
> Do you use long longs for all your "numbers"? Heck, you can
> represent 1, 35099, 68736, 5003876555, etc. ALL as long longs!
> Why bother with byte_t's, shorts, ints, longs and long longs
> when you could just use long longs for ALL of them!!! Why
> REINVENT all the arithmetic operations for each of these
> different data type sizes?!
Again, mu. Also, a bad analogy is like a leaky screwdriver.
On Wed, 16 May 2012 11:03:12 +0200, Noob wrote:

> Don Y wrote:
>> Noob wrote:
>>> Don Y wrote:
>>>> Noob wrote:
>>>>
>>>>> Linux makes sense in a growing number of scenarios, even in the
>>>>> (32-bit, virtual memory) embedded market : connected TVs, "smart"
>>>>> phones, decoder set-top boxes, routers, etc.
>>>>
>>>> And you will notice that desktop stacks take a *boatload* of
>>>> resources! Have you tried porting one to an environment with (just)
>>>> 10's or 100's of KB of total RAM?
>>>
>>> Connected TVs, "smart" phones, decoder set-top boxes, routers, etc
>>> have *WAY* more RAM than that; typically /hundreds/ of megabytes, up
>>> to 1 gigabyte (!) for the Galaxy S3. RAM prices have reached an
>>> all-time low, at 4-8 euros per GB.
>>
>> How many GB are you going to put in your refrigerator? Freezer?
>> Washing machine? Dryer? Bluetooth earpiece? Household thermostat?
>> Irrigation system? Furnace? Range/stove? Telephone? Multimedia
>> system remote control? Toaster?
>
> You're a big fan of logical fallacies, aren't you?
>
> The answer is mu. ( http://www.catb.org/jargon/html/M/mu.html )
>
> I was discussing systems running on 32-bit, MMU-enabled CPUs. Would you
> use an ARM Cortex M3 in your toaster?
Add enough computing power to the toaster and you can save by leaving out the heating elements.

How to build a better toaster: http://kcbx.net/~tellswor/bettoast.htm

Regards,
Allan
