The 2026 Embedded Online Conference

GUI application for embedded system

Started by Lanarcam June 26, 2014
On 2014-07-12 23:07:42 +0000, Paul Rubin said:

> Python is great on the ARM/Linux class of embedded system. It's not
> usable for small MCU's and probably not so great for bare metal even
> with larger processors. I've heard it works ok with around 2 meg of
> ram, though I haven't personally run it in anything that small.
On that matter, MicroPython [1] has recently been presented on comp.lang.python. It's a Python 3 implementation aimed at microcontrollers, with the main (only?) big difference being the lack of Unicode strings (a central point in the Python 2/3 difference in string handling). I have no experience with it, I've just read around about it, and from what I can see it requires a 32-bit MCU, so the MSP430 is not suited for it. AFAICT the minimum memory requirement is as low as ~64 kB. Not sure how big the full standard library is.

It also works on regular desktop OSes, so you can use the same environment on your regular computer, and it's said to offer equal-or-better performance than CPython. Just FYI, it may come in handy to someone.

[1] http://micropython.org

-- Andrea
Paul Rubin <no.email@nospam.invalid> writes:

> Python is great on the ARM/Linux class of embedded system. It's not
> usable for small MCU's and probably not so great for bare metal even
> with larger processors. I've heard it works ok with around 2 meg of
> ram, though I haven't personally run it in anything that small.
I guess that was true pre-MicroPython, but the reference design for MicroPython runs an STM32F405RGT6 with 192 kB of RAM, which seems pretty interesting.
On 2014-07-12, Paul Rubin <no.email@nospam.invalid> wrote:
> Les Cargill <lcargill99@comcast.com> writes:
>> Threads are absolutely no problem but it's nicer to have options.
>
> The Python community seems to mostly hate threads and prefer Twisted,
> so I'm in a bit of a minority.
I wouldn't say they "hate" threads, but there is a surprising amount of fear surrounding threads and a lot of warning/moaning about how hard it is to get programs using threads to work right. I honestly don't understand where that comes from.

I use Python's native threading features a lot, and I never have any problems. I _think_ the problem is that a lot of Python users come from a Windows/web-hacker background and have absolutely zero training on or experience with multi-tasking (or any other aspect of computer science or software engineering, for that matter).

IMO, for anybody who has ever used multiple threads in an embedded system (and that includes interrupts) or with Posix threads, Python's threads are dead simple and nearly fool-proof.
> Yeah, most of my stuff in the past decade or so has been in Python.
> When I write anything in C these days, as I've written elsewhere it
> feels like a "back to nature" experience.
I still write Linux kernel code in C and embedded stuff in C. User-space application code for Linux/Windows has been almost exclusively Python for 15+ years.

--
Grant Edwards               grant.b.edwards at gmail.com
Yow! UH-OH!! We're out of AUTOMOBILE PARTS and RUBBER GOODS!
Grant Edwards wrote:
> On 2014-07-12, Paul Rubin <no.email@nospam.invalid> wrote:
>> Les Cargill <lcargill99@comcast.com> writes:
>>> Threads are absolutely no problem but it's nicer to have options.
>>
>> The Python community seems to mostly hate threads and prefer Twisted,
>> so I'm in a bit of a minority.
>
> I wouldn't say they "hate" threads, but there is a surprising amount
> of fear surrounding threads and a lot of warning/moaning about how hard
> it is to get programs using threads to work right. I honestly don't
> understand where that comes from.
(Guessing here.) It probably comes from issues related to serialization and locking.

Does the Python toolkit have "run to completion" in threads (relative to other threads in the bytecode interpreter), or do you have to do locking?

I agree; it's not difficult, but if Python is positioned as a "popular" language, "they" may not have gotten to the part in the Dragon book about P and V operations.
> I use Python's native threading
> features a lot, and I never have any problems. I _think_ the problem
> is that a lot of Python users come from a Windows/web-hacker
> background and have absolutely zero training on or experience with
> multi-tasking (or any other aspect of computer science or software
> engineering for that matter).
That's extremely likely, I think.
> IMO, for anybody who has ever used multiple threads in an embedded
> system (and that includes interrupts) or with Posix threads, Python's
> threads are dead simple and nearly fool-proof.
They should be; I'm glad to hear that.
>> Yeah, most of my stuff in the past decade or so has been in Python.
>> When I write anything in C these days, as I've written elsewhere it
>> feels like a "back to nature" experience.
>
> I still write Linux kernel code in C and embedded stuff in C.
> User-space application code for Linux/Windows has been almost
> exclusively Python for 15+ years.
-- Les Cargill
On 2014-07-15, Les Cargill <lcargill99@comcast.com> wrote:
> Grant Edwards wrote:
>> On 2014-07-12, Paul Rubin <no.email@nospam.invalid> wrote:
>>> Les Cargill <lcargill99@comcast.com> writes:
>>>> Threads are absolutely no problem but it's nicer to have options.
>>>
>>> The Python community seems to mostly hate threads and prefer Twisted,
>>> so I'm in a bit of a minority.
>>
>> I wouldn't say they "hate" threads, but there is a surprising amount
>> of fear surrounding threads and a lot of warning/moaning about how hard
>> it is to get programs using threads to work right. I honestly don't
>> understand where that comes from.
>
> (Guessing here.)
> It probably comes from issues related to serialization and locking.
>
> Does the Python toolkit have "run to completion" in threads
> (relative to other threads in the bytecode interpreter), or do you
> have to do locking?
AFAIK, it doesn't have "run to completion". Accesses to basic Python types are guaranteed to be thread safe. If you're doing a series of manipulations on an object that you want to be atomic, then you have to do locking/serialization (Python provides a lock object type). All of the usual suspects (read/write/send/receive) block as you would expect and allow other threads to run.

There is one gotcha regarding the standard "CPython" implementation: there is a global lock in the VM that only allows one thread to run at a time. For what I generally use threads for (managing contexts when dealing with multiple overlapping I/O operations), that works fine. If you want to do things like parallelize floating-point computations to take advantage of a multi-core CPU, that global lock is a problem _unless_ you're doing your computations using numerical library calls that release the global lock while they're busy. Otherwise, there's a "multiprocessing" module that uses multiple instances of the VM to allow parallel computation.

--
Grant Edwards               grant.b.edwards at gmail.com
Yow! Somewhere in DOWNTOWN BURBANK a prostitute is OVERCOOKING a LAMB CHOP!!
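To make the point concrete, here is a minimal CPython sketch (the names are illustrative): single operations on built-in types are safe under the global lock, but a compound read-modify-write still needs an explicit lock to be atomic.

```python
import threading

counts = {}                 # shared between threads
lock = threading.Lock()     # Python's basic lock object

def tally(key):
    # get-then-set is two separate steps, so without the lock two
    # threads could read the same old value and lose an increment;
    # holding the lock makes the whole update atomic.
    with lock:
        counts[key] = counts.get(key, 0) + 1

threads = [threading.Thread(target=tally, args=("rx",)) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counts["rx"])  # 100
```

With the lock removed, the same program can occasionally print less than 100, which is exactly the kind of intermittent bug that gives threads their reputation.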
On Mon, 14 Jul 2014 14:59:12 +0000 (UTC), Grant Edwards
<invalid@invalid.invalid> wrote:


>I wouldn't say they "hate" threads, but there is a surprising amount
>of fear surrounding threads and a lot of warning/moaning about how hard
>it is to get programs using threads to work right. I honestly don't
>understand where that comes from.
As Les suggested already, it is mostly about serialization and locking. Studies have shown repeatedly that the majority of programmers will make a horrible mess of coordinating access to shared resources and, in particular, of algorithms which require multiple locks or which lock/unlock recursively.

Studies also show that, in the absence of GC, too many programmers can't keep track of who is responsible for deallocating objects that are shared or passed around. This is the reason for the rise of "managed" environments which automatically handle deallocation and, at least, simple cases of object locking.
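One of the multi-lock mistakes alluded to above can be sketched in a few lines of Python (the names and scenario are hypothetical): two tasks grabbing the same pair of locks in opposite orders can deadlock, and the textbook defense is a fixed global acquisition order.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def transfer(src, dst, src_lock, dst_lock):
    # Always acquire the two locks in one fixed global order
    # (object id here). Without this, two transfers running in
    # opposite directions could each hold one lock and wait
    # forever for the other.
    first, second = sorted((src_lock, dst_lock), key=id)
    with first:
        with second:
            dst.append(src.pop())

a, b = [1, 2, 3], []
transfer(a, b, lock_a, lock_b)
print(a, b)  # [1, 2] [3]
```

The same discipline applies verbatim to mutexes in C or an RTOS; the language changes, the lock-ordering rule does not.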
>I _think_ the problem is that a lot of Python users come from a
>Windows/web-hacker background and have absolutely zero training on
>or experience with multi-tasking (or any other aspect of computer
>science or software engineering for that matter).
It's a widespread problem that has little to do with what language or operating system is in use. The skill level of the average desktop/server software "developer" now is only slightly better than "script kiddie": able to plug together library functions and routines scavenged from other sources, but largely incapable of coding those things in the first place. I'm not referring to actually difficult things that require expert knowledge, but rather to really basic things like, e.g., writing code to search/modify a string or to manipulate a binary tree.

The fact is that the vast majority of "developers" now have neither formal education nor any training in the domain for which they are writing applications. Not infrequently, I see questions that really scare me. I think we've all seen things for which we've thought - and sometimes said publicly - "Hey! This task is way above your skill level."
>IMO, for anybody who has ever used multiple threads in an embedded
>system (and that includes interrupts) or with Posix threads, Python's
>threads are dead simple and nearly fool-proof.
And thread safety makes everything slower in the simple cases where it is not needed. 8-)

Understand that I'm not in any way opposed to leveraging a helpful programming environment ... but neither would I be helpless if that environment suddenly were taken away. Far too many programmers are completely dependent on a helpful environment and cannot function without it.

YMMV,
George
On 15.7.2014 at 21:50, George Neuner wrote:
> ....
> Far too many programmers are
> completely dependent on a helpful environment and cannot function
> without it.
I believe this is key, and perhaps understated. It is not only about people becoming dependent on having a few things to click on and expecting a result; having to do more yourself and go deep down to the lowest level is important to keep a programmer's head in shape. Not all the time, of course (one does need the efficiency provided by the change of level); once a day, or at least once every few days, would be OK I guess. If I have been doing hardware and not programming for a few months, it may take me weeks (typically 2) to become again the programmer I am used to thinking I am.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI             http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/sets/72157600228621276/
George Neuner <gneuner2@comcast.net> writes:

...<a lot of stuff about synchronization>...

Can someone PLEASE explain to me, REALLY, the difference between a mutex
and a binary semaphore? In the FreeRTOS implementation, the only
difference I can see is the potential "priority elevation" thingie.

Also, there is a difference in how they are used. A semaphore starts out
unposted and blocks until a post event occurs. A mutex starts out
"available," then some task takes the token (making it unavailable),
does its thing, and then returns the token (making it available again).

So usage of a semaphore requires one access per event, while usage of a
mutex requires two accesses per <protected resource usage>.

Is this right?

So is that it??!?
-- 
Randy Yates
Digital Signal Labs
http://www.digitalsignallabs.com
Randy Yates <yates@digitalsignallabs.com> writes:
> Can someone PLEASE explain to me, REALLY, the difference between a mutex
> and a binary semaphore? In the FreeRTOS implementation, the only
> difference I can see is the potential "priority elevation" thingie.
Usually a mutex is in one of two states: either owned by some process (therefore locked), or else unlocked (so any process can acquire it). In other words it's a one-bit value indicating whether you have ownership of a resource.

A semaphore, on the other hand, contains an integer rather than a bit, so it can keep track of how many of some replicated resource are in use:

http://en.wikipedia.org/wiki/Semaphore_%28programming%29

Semaphores (aka Dijkstra's P and V primitives) are among the oldest concurrency primitives. These days I usually think it's best to use asynchronous queues protected by locks (your system probably has a library offering those), and then not share any mutable data at all between processes/threads. You can take an efficiency hit from doing this, but you avoid a whole lot of traditional concurrency hazards.
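The distinction shows up directly in Python's threading module; here is a sketch (the DMA-channel scenario is made up for illustration):

```python
import threading

# Mutex: a one-bit ownership token; the thread that acquires it
# is the one expected to release it.
mutex = threading.Lock()
with mutex:
    pass                        # critical section, one thread at a time

# Counting semaphore: an integer tracking how many units of a
# replicated resource remain free; any thread may post (release) it.
pool = threading.Semaphore(3)   # pretend: three identical DMA channels
served = []

def use_channel(n):
    with pool:                  # blocks while all three are in use
        served.append(n)        # list.append is atomic in CPython

threads = [threading.Thread(target=use_channel, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(served))  # [0, 1, 2, 3, 4]
```

A binary semaphore is just the count-of-one case, which is why, absent extras like priority inheritance, it looks so similar to a mutex in an RTOS API.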
On 23/07/14 06:13, Paul Rubin wrote:
> Randy Yates <yates@digitalsignallabs.com> writes:
>> Can someone PLEASE explain to me, REALLY, the difference between a mutex
>> and a binary semaphore? In the FreeRTOS implementation, the only
>> difference I can see is the potential "priority elevation" thingie.
>
> Usually a mutex is in one of two states: either owned by some process
> (therefore locked), or else unlocked (so any process can acquire it). In
> other words it's a one-bit value indicating whether you have ownership
> of a resource.
>
> A semaphore on the other hand contains an integer rather than a bit, so
> it can keep track of how many of some replicated resource are in use:
>
> http://en.wikipedia.org/wiki/Semaphore_%28programming%29
>
> Semaphores (aka Dijkstra's P and V primitives) are among the oldest
> concurrency primitives. These days I usually think it's best to use
> asynchronous queues protected by locks (your system probably has a
> library offering those), then don't share any mutable data at all
> between processes/threads. You can take an efficiency hit from doing
> this, but you avoid a whole lot of traditional concurrency hazards.
I've been using "message passing" via asynchronous mailboxes/queues since 1981. I always found it far easier to understand, debug, and measure/manage than semaphores/mutexes. I wouldn't worry about any reduced efficiency. Besides, as Dijkstra or Hoare once said after a programming competition's results were announced: "If I had known that I didn't need to get the right answer, I could have made my code much faster."
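For what it's worth, the mailbox style described above is only a few lines in Python; queue.Queue does all the locking internally, so the two threads share no mutable state of their own (the names here are illustrative).

```python
import queue
import threading

mailbox = queue.Queue()         # the asynchronous mailbox

def producer():
    for i in range(5):
        mailbox.put(i)          # post a message
    mailbox.put(None)           # sentinel: end of stream

def consumer(out):
    while True:
        msg = mailbox.get()     # blocks until a message arrives
        if msg is None:
            break
        out.append(msg * 2)     # "process" the message

out = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(out,))
t1.start(); t2.start()
t1.join(); t2.join()

print(out)  # [0, 2, 4, 6, 8]
```

Only the consumer ever touches `out`, so no lock is visible anywhere in user code; that is the whole appeal of the mailbox style.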