
Embedded Linux: share data among different processes

Started by pozz June 23, 2016
On 23/06/2016 21:11, Don Y wrote:
> On 6/23/2016 4:52 AM, pozz wrote:
>> I'm new to embedded Linux so this question could be very simple for
>> many of you. Most probably, it isn't directly related to the embedded
>> world, but to the Linux OS generally. Anyway, I think it is a common
>> scenario in embedded applications.
>>
>> I'm going to develop a local/remote control of an electronic device.
>> It communicates through an RS485 link.
>> The local control will be a touch-screen display (I'm going to use the
>> QT graphic libraries).
>> The remote control will be HTTP (web server).
>>
>> I think a good approach will be to develop a simple application, the
>> poller, that communicates with the electronic device and implements
>> the RS485 protocol.
>> The poller continuously acquires the current status/settings of the
>> device and stores them in some "shared" way.
>>
>> The graphic application (QT-based) and the web server (CGI) should
>> access the data retrieved by the poller.
>>
>> What is the best method to share the data generated by an application
>> (the poller) among two or more applications (QT and CGI)?
>> In this scenario, I think it's important to lock the "shared data"
>> before accessing them (reading or writing), in order to avoid reading
>> incoherent data. Indeed, if the poller writes the data at the same
>> time the web server reads them (the Linux OS is multi-tasking), they
>> could be incoherent.
>>
>> I'm thinking of using an SQLite database to store the data. The poller
>> writes the database; HTTP and QT read from it. It seems SQLite will
>> take care of the multi-thread/multi-process scenario.
>>
>> Any suggestions?
>
> It depends on how "lightweight" you want the mechanism to be. And,
> how "elegant".
>
> If you can implement the "Shared Memory" extensions (a kernel option
> in the *BSDs -- not sure re: Linux) then this is the easiest way
> forward. Look for shmget(2), shmat(2), shmctl(2), et al.
> Use a semaphore to control access (and discipline for the readers/writers
> to follow that convention) and keep atomic regions short and sweet.
>
> If the data isn't changing all that often, you can push it to each
> consumer through pipes and count on them to keep their own copies
> intact. (here, there be dragons)
There are some analog signals that could change every second. The poller will fetch new data about every second. I would avoid pushing new data to the consumers (HTTP server and QT libraries) every second.
> The DBMS approach is how I am currently addressing all "persistent
> data". It's a fair bit heavier-handed (I've got lots of MIPS to spare)
> but is more elegant;
I will have a lot of spare MIPS too. I don't need to save data in a persistent way (after a reboot, the poller will start to retrieve new data). I thought to create the SQLite database file in a RAM/temporary filesystem. This should speed up the access and should avoid Flash memory corruption.
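The RAM-backed SQLite idea can be sketched with Python's built-in sqlite3 module. This is a hedged illustration: ":memory:" is used so the example is self-contained, and on the target the filename would instead live under a tmpfs mount (the table and values are invented):

```python
# Sketch: the poller writes the current status into an SQLite database
# kept in RAM, so nothing ever touches flash.  Refreshing the whole
# snapshot in one transaction means readers never see a half-updated set.
import sqlite3

conn = sqlite3.connect(":memory:")   # on target: a file on a tmpfs mount
conn.execute("CREATE TABLE status (name TEXT PRIMARY KEY, value REAL)")

# Poller side: one transaction per acquisition cycle.
with conn:
    conn.executemany(
        "INSERT OR REPLACE INTO status VALUES (?, ?)",
        [("temperature", 21.5), ("pressure", 1013.2)],
    )

# Reader side (the CGI script or the Qt GUI) just queries.
rows = dict(conn.execute("SELECT name, value FROM status"))
print(rows["temperature"])           # prints 21.5
```

With separate processes the connections would point at the same tmpfs file and SQLite's own file locking would serialize them.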
> a producer can't inject data into the store
> unless the data is appropriate for the tables/fields involved
> (you can't store text somewhere that expects numerics!). Likewise,
> the consumers need not *check* the data because the checks can be
> applied on admission to the DB. I.e., if the data is *in* the
> DB, then you know it satisfied the admission criteria!
>
> However, I don't think SQLite has all of these hooks.
>
> GNeuner is your go-to guy for the DB approach...
On 23/06/2016 21:31, Paul Rubin wrote:
> pozz <pozzugno@gmail.com> writes:
>> I'm thinking to use SQLite database to store the data. The poller
>> writes the database, HTTP and QT reads from it. It seems SQLite will
>> take care the multi-thread/multi-process scenario.
>
> This is the classic and probably easiest approach, though not the most
> economical in terms of machine use. There are other databases besides
> sqlite that you can also consider. How much data are you talking about?
The overall configuration of the device will be at most 1 kB (I think it will be around 100 bytes). However, it doesn't change frequently, so it can be retrieved only at startup and when it really changes. The status data that changes frequently will be at most 100-200 bytes (I think it will be less than 100 bytes).
> Do you need stuff like persistence across reboots?
No.
On 23/06/2016 21:53, Les Cargill wrote:
> pozz wrote:
<snip>
>
> 1) Qt and HTTP are completely disjoint. Use of Qt is disruptive - it has
> its own preprocessor and keywords specific to Qt. If I could, I
> would pick one and possibly carve the other off as a separate project.
>
> HTTP is, simply put, a bloated, obese protocol which cannot be put on a
> decent timeline. It's intentionally entangled with TCP in very
> unfortunate ways.
The QT choice could be changed, but the embedded Linux platform I'm going to use ships with many ready-to-use examples of simple QT applications that use the TFT touch display. This is the main reason for the QT choice. The HTTP server is a requirement: users must be able to connect to the Linux box through a simple web browser. Of course I will use a ready-made HTTP server (I'm thinking of lighttpd) with client-side JavaScript to retrieve data from a server-side script (a simple CGI). Anyway, they are really *two* different projects/processes running on the same Linux box. I can't understand why you compare QT and HTTP.
> 2) You mean you will poll for data over an RS485 link, then have two > possible clients on a workstation/phone/tablet/whatever that exploit > this data.
There is only one process that will access the RS485 link: the "poller" process running on the embedded Linux board. The data will be shared at the Linux board level, not at the RS485 link level.
> 2a) Is the RS485 link half or full duplex?
Half duplex.
> 2b) If it is half duplex, do you have an accurate model of the
> line discipline? For example, with MODBUS (which is half-duplex 485 and
> address muxed) there will be hard[1] timers to be managed. I
> recommend a state machine approach. The port cannot be used by more
> than one request/response pair at a time, and response timeouts
> dominate latency.
>
> [1] Although they can be made adaptive, and a select()/poll()/epoll()
> approach can be quite useful. Much depends on whether the responding
> node identifies itself in responses. But depending on your serial port
> architecture, this may get weird.
I don't think I will have problems with protocol at the RS485 wire level.
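Les's state-machine recommendation for a half-duplex master could look like the rough sketch below (Python; the state names, the timeout value, and the transport callback are all invented for illustration, not taken from any real MODBUS stack):

```python
# Sketch: one request/response pair in flight at a time on a half-duplex
# bus, driven by a tiny state machine with a response timeout.
import time

IDLE, WAITING = "IDLE", "WAITING"
RESPONSE_TIMEOUT = 0.5            # seconds; protocol/baud-rate dependent

class Rs485Master:
    def __init__(self, send, now=time.monotonic):
        self.state = IDLE
        self.send = send          # callable that puts bytes on the wire
        self.now = now
        self.deadline = 0.0

    def request(self, frame):
        if self.state != IDLE:
            raise RuntimeError("bus busy: only one pair in flight")
        self.send(frame)
        self.state = WAITING
        self.deadline = self.now() + RESPONSE_TIMEOUT

    def on_response(self, frame):
        self.state = IDLE         # bus free again for the next pair
        return frame

    def poll_timeout(self):
        if self.state == WAITING and self.now() > self.deadline:
            self.state = IDLE     # give up; the caller may retry
            return True
        return False

sent = []
m = Rs485Master(sent.append, now=lambda: 0.0)   # frozen clock for the demo
m.request(b"\x01\x03\x00\x00")    # e.g. a read request to slave 1
ok = m.state == WAITING and sent == [b"\x01\x03\x00\x00"]
m.on_response(b"\x01\x03\x02\x00\x2a")
print(ok, m.state)                # prints True IDLE
```

The same structure fits a select()/poll() loop: the loop's timeout is the `deadline` of the pending request.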
> 3) SQLite appears to have locking: > https://www.sqlite.org/lockingv3.html
So it should be ok for my application.
> 3a) SQL is a bloated pain in the neck.
Really? I thought exactly the opposite. Why?
> 4) A minimal technology for a shared data store under Linux is shared
> memory. There are shared memory filesystems, or you can develop a shared
> library/object file which enacts an access protocol for the shared
> data using the System V primitives. See "System V shmem" and "System V
> semaphores" through Google for details. This isn't difficult but it's
> fiddly. It's also the highest performing approach. I'd at least
> consider guarding the shmem with a semaphore, although I've never
> established that this is completely necessary.
At the moment I think it's the best solution for me. I have started studying shared memory under Linux; I have never used it before.
> 5) Consider using pipes for shared data, possibly. Dunno how you > integrate pipes with an HTTP daemon.
It seems to me that shared memory fits my application better.
> 6) Sockets are also a good thing to consider using, although again, I > have never integrated an HTTP server with sockets for another protocol. > Your "poller" offers say, a TCP socket ( to be used over localhost ) > and does "multiple unicast" or alternatively actual multicast > transmission of the entire data store every so often, or differential ( > the things that have changed ) transmission as needed.
Yes, sockets are another approach for inter-process communication. Maybe I will start with shared memory.
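Les's point 6 (the poller offering a localhost socket and pushing the whole, small data store to each consumer) might look like this rough sketch. It is hedged: socketpair() stands in for a real listen/accept pair so the example is self-contained, and the length-prefix framing and payload format are invented:

```python
# Sketch: the poller pushes its ~100-byte data store to a connected
# consumer over a local stream socket, with a 2-byte length prefix so
# the consumer knows where each record ends.
import socket
import struct

def push(sock, payload):
    # Length-prefix framing: consumers need record boundaries on a stream.
    sock.sendall(struct.pack("<H", len(payload)) + payload)

def pull(sock):
    size = struct.unpack("<H", sock.recv(2))[0]
    data = b""
    while len(data) < size:        # a stream may deliver partial reads
        data += sock.recv(size - len(data))
    return data

poller_end, consumer_end = socket.socketpair()
push(poller_end, b"temp=21.5;pressure=1013")
record = pull(consumer_end)
poller_end.close(); consumer_end.close()
print(record.decode())             # prints temp=21.5;pressure=1013
```

With a real `socket.create_server(("127.0.0.1", port))` on the poller side, the HTTP CGI and the Qt GUI would each connect as ordinary clients; no shared-memory locking is needed because each consumer gets its own copy.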
> 6a) Multicast constrains you to the use of UDP.
>
> This is a nontrivial project.
Yes, I know.
> All this being said, there may be a good software kit to solve this > problem out there; I just don't know what it is.
I hope it really exists and someone will point me to it.
> and
>
> 7) I've seen cases where a COTS HTTP server was used to serve up Java
> applications to do this. That's kind of old school at this writing.
On 24.6.2016 07:40, Clifford Heath wrote:
> On 24/06/16 14:27, Don Y wrote:
>> On 6/23/2016 8:25 PM, Clifford Heath wrote:
>>> On 24/06/16 10:19, Don Y wrote:
>>>> On 6/23/2016 4:45 PM, Clifford Heath wrote:
>>>>> On 23/06/16 23:05, Reinhardt Behm wrote:
>>>>>> Jack wrote:
>>>>>>> On Thursday, June 23, 2016 at 13:52:19 UTC+2, pozz wrote:
>>>>>>>> I'm going to develop a local/remote control of an electronic
>>>>>>>> device. It communicates through a RS485 link.
>>>>>>>> I'm thinking to use SQLite database to store the data. The
>>>>>>>> poller writes the database, HTTP and QT reads from it. It seems
>>>>>>>> SQLite will take care the multi-thread/multi-process scenario.
>>>>>
>>>>> SQLite "takes care of" that by using a single giant lock.
>>>>> That might work for you, since it sounds like you probably
>>>>> have very easy throughput and latency demands. It would
>>>>> be a good solution if you weren't using Qt. Les Cargill
>>>>> hasn't seen how remarkably lightweight SQLite actually is.
>>>>>
>>>>>> Since you are already using Qt, I can propose a solution which I
>>>>>> use quite often. It is even flying in more than 100 helicopters.
>>>>>> Yes, penguins can fly.
>>>>>> The poller fetches the data from the remote device via RS485 and
>>>>>> keeps a local copy of it. If permanent storage is a requirement,
>>>>>> it can also take care of this.
>>>>>> In addition, it opens a QLocalServer which listens for connections
>>>>>> from other processes.
>>>>>> The other (multiple) processes connect to this server using a
>>>>>> QLocalSocket. They can then poll, or are sent the data directly
>>>>>> when the poller receives new data. This way no locking is required
>>>>>> since the poller does its sending in a single thread.
>>>>>
>>>>> Reinhardt, do you deal with write-blocking if the reader fails?
>>>>> If the write is in the poller thread, any blocking on writes will
>>>>> delay the polling - unless you use non-blocking writes.
>>>>>
>>>>>> This Server/Socket combination has the advantage that it easily
>>>>>> integrates into the signal/slot mechanism of Qt.
>>>>>
>>>>> I think it's good advice, though I wouldn't personally have chosen Qt.
>>>>>
>>>>> This application is not demanding enough to need a shared memory
>>>>> solution, with the attendant need for locks and especially,
>>>>> memory barriers, of which most respondents seem ill-educated.
>>>>
>>>> That's only a problem if you "roll your own" locks. IME, the SysV
>>>> shared memory operations also carry along the SysV semaphores.
>>>
>>> As I said, ill-educated.
>>>
>>> As you'll find out when the SysV semaphore grants you access to
>>> some shared memory resource that has been written, but only some
>>> of the writes have been flushed.
>>
>> I'd appreciate a definitive reference backing your claim.
>>
>>> If you're on hardware that can do this, you need to know about it,
>>> or leave the work to someone else who does... like the authors of
>>> SQLite.
>>
>> And, the folks authoring the semaphore functions *don't*?
>
> The semaphore functions are not guaranteed to flush CPU caches
> or prevent out-of-order execution of writes to the shared memory.
> Why would they? The semaphore semantics are not intertwined with
> the memory semantics.
>
> It's common for entry to the kernel to flush the cache, but you
> need to know - it doesn't happen by magic. But out-of-order
> execution can still spoil your day.
>
> In some architectures, even a single "increment-memory" instruction
> can allow another CPU to observe the memory cell to be half-incremented.
> It's all very troubling...
It can be troubling, but if you know what you are doing it is not. Just update (that part of) the caches when needed, sync (the serialization opcode on Power) etc. when needed after that, use lwarx/stwcx. as needed, and you can have all the synchronization you need. It comes at a cost, obviously, so one must be careful. Those semaphore functions you talk about must be some high-level implementation targeted at having a semaphore which is not too costly, so it cares only about itself; nothing wrong with that. But the world does not end there (I don't know where it ends for the C libraries people use, so of course it may be as troubling as you suggest, but only as long as one relies on libraries someone else has written).

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI             http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
On 6/24/2016 4:40 AM, pozz wrote:
> On 23/06/2016 21:11, Don Y wrote:
<snip>
>
> There are some analog signals that could change every second. The poller
> will fetch new updated data about every second. I would avoid pushing new
> data to consumers (HTTP server and QT libraries) every second.
Ask yourself what the finest granularity is that you need/want in the data. Is it one big block of data that only makes sense as a whole? Or lots of smaller blocks that can stand on their own?

E.g., if you were reporting meteorological data (wind speed, air temp, humidity, barometric pressure, etc.), allowing <someone> to see *one* of these updated without the others being simultaneously updated could lead to a faulty forecast (because forecasts rely on the interpretation of ALL of the results, in concert).
>> The DBMS approach is how I am currently addressing all "persistent >> data". It's a fair bit heavier-handed (I've got lots of MIPS to spare) >> but is more elegant; > > I will have a lot of spare MIPS too.
"A lot" can have different meanings to different people :>
> I don't need to save data in a persistent way (after a reboot, the poller will > start to retrieve new data). > I thought to create the SQLite database file in a RAM/temporary filesystem. > This should speed up the access and should avoid Flash memory corruption.
You typically would NOT want to be updating those <things> in FLASH, as you'll wear out the FLASH in short order.

Note that data in a DBMS typically is "larger" than you would achieve if storing it in a single, typed variable (before adding the overhead of indexes, etc.). And you will also require resources for the WAL, if supported. I.e., before you make this DBMS decision, you might want to throw together a simple schema and store your dataset to see just how big it ends up.

You should also think about how you will be interacting with the data. E.g., returning to the weather report analogy, if you are planning on cherry-picking one value at a time from the *set* of values, then you risk reporting data that "doesn't make sense" -- because the "temperature" you reported a moment ago doesn't correspond to the "barometric pressure" you are reporting now. You may need to approach the data transactionally and, if you don't want to lock up the DBMS while your client process chugs along at *its* pace, this might require you to introduce some transaction buffering *between* the two entities -- which might not have been in your original design plans.

Lastly, if you settle on that approach, "unconstrain your thinking" in considering what you *could* do with data thusly "organized". I found that lots of code I'd have normally placed in the application (i.e., client) can be pushed back into the RDBMS, and that changed the way I *think* about the data it supplies.
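Don's earlier point about admission checks (data in the DB is known-valid, so consumers need not re-validate) can in fact be sketched with SQLite, which does enforce CHECK constraints. A hedged example with an invented table and invented range limits:

```python
# Sketch: let the database refuse out-of-range data on admission, so
# consumers never need to re-check what they read.  The table, channel
# name and the -50..150 range are made up for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE readings (
        channel TEXT NOT NULL,
        value   REAL NOT NULL CHECK (value BETWEEN -50.0 AND 150.0)
    )""")

conn.execute("INSERT INTO readings VALUES ('temp', 21.5)")   # accepted

try:
    conn.execute("INSERT INTO readings VALUES ('temp', 999.0)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True       # the out-of-range value never enters the store

print(rejected)           # prints True
```

Type mismatches are a weaker story in SQLite (its typing is flexible by default), but CHECK constraints cover the range/validity part of the argument.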
On 6/24/2016 4:43 AM, pozz wrote:
> On 23/06/2016 21:31, Paul Rubin wrote:
<snip>
>
> The overall configuration of the device could be maximum 1kB (I think it
> will be around 100 bytes). However it doesn't change frequently, so it
> can be retrieved only at startup and only when it really changes.
>
> The status data that changes frequently will be maximum 100-200 bytes (I
> think it will be less than 100 bytes).
Then, why don't you just design an event-driven interface and move the data across the protection barrier when these "update events" occur? Or, better, when they occur AND are "of interest"? The "client" can keep a set of "most recently endorsed parameters" and an "empty" buffer into which any *new* parameters that are in transit from the "server" are accumulated. Once they have "arrived", call *that* buffer the authoritative reference and treat the previous reference as a "disposable buffer" for the next update.
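Don's two-buffer scheme (a "most recently endorsed" set plus an accumulation buffer that becomes authoritative only when a complete update has arrived) can be sketched like this. All names are invented; a real implementation would live behind the IPC boundary:

```python
# Sketch of the double-buffer idea: partial updates accumulate in a
# spare buffer and become visible only when complete, so readers always
# see a full, consistent parameter set.
class ParameterStore:
    def __init__(self, initial):
        self._current = dict(initial)   # authoritative reference
        self._pending = {}              # buffer for an update in transit

    def stage(self, key, value):
        self._pending[key] = value      # in transit: not yet visible

    def commit(self):
        # The pending set becomes authoritative in one step; the old
        # reference is discarded (the "disposable buffer").
        self._current = {**self._current, **self._pending}
        self._pending = {}

    def snapshot(self):
        return dict(self._current)      # readers get a stable copy

store = ParameterStore({"temp": 20.0, "pressure": 1010.0})
store.stage("temp", 21.5)
before = store.snapshot()               # still the old, consistent set
store.commit()
after = store.snapshot()
print(before["temp"], after["temp"])    # prints 20.0 21.5
```

The swap in commit() is the only step that needs to be atomic with respect to readers; everything else can proceed at each side's own pace.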
>> Do you need stuff like persistence across reboots? > > No.
Hi Dimiter,

[40-45+C lately -- PLEASE don't tell me you've still got icicles!!  :> ]

On 6/24/2016 6:38 AM, Dimiter_Popoff wrote:
> It can be troubling but if you know what you are doing it is not.
<snip>
A "synchronization primitive" is useless if it doesn't ensure the "process state" *is* synchronized wrt other processes. Every instance of:

    ...                             // prepare some private object
    acquire_mutex(SHARED_RESOURCE)
    ...                             // make private object visible as shared resource
    release_mutex(SHARED_RESOURCE)

would break if the state of the private object was still "in transition" *inside* the critical region. What does the mutex *do* for the code in that case?

Anything -- and EVERYthing -- that must be done to ensure the consistency of the object as visible to other "agents" must happen before the mutex is granted. Otherwise, any changes some OTHER agent may have made to *their* "private object" won't be available for *this* agent to see as it manipulates the shared resource.

Lightweight "locks" that only work within a single process/thread are of little value. The OS needs to be involved, else it can't recover locks from crashed processes (it needs to know who was holding the lock at the time), resolve priority conflicts, etc.

OTOH, if used *entirely* within a single process, you can kill the process and reclaim its memory and, thus, the lock's holder is of no significance -- the lock AND *all* of its potential holders are gone! E.g., when thread() is killed, it matters little whether produce() or consume() was holding the lock -- the effect is entirely contained within *this* thread (process). No *other* processes are at risk.

    thread() {
        lock_t aLock
        ...
        while (FOREVER) {
            produce()
            consume()
        }
    }

    produce() {
        if (something) {
            process(something)
            acquire_lock(aLock)
            // make <something> available
            release_lock(aLock)
        }
        ...
    }

    consume() {
        if (whatever) {
            acquire_lock(aLock)
            // access <something> previously made available
            release_lock(aLock)
        }
        ...
    }

[Note that the OP's application could potentially fit this form *if* he has control of the HTTPd's implementation]

Regards to L.
pozz wrote:
> On 23/06/2016 21:53, Les Cargill wrote:
<snip>
>
> QT choice could be changed, but the embedded Linux platform I'm going to
> use ships with many ready-to-use examples of simple QT applications that
> use the TFT touch display. This is the main reason for the QT choice.
Ah, so there you go.
> HTTP server is a requirement: the users will be able to connect to the > Linux box through a simple web browser. Of course I will use a > ready-to-use HTTP server (I'm thinking of lighttpd) with client-side > javascripts to retrieve data from a server-side script (a simple cgi). > > Anyway they are really *two* different projects/processes running on the > same Linux box. I couldn't understand why you compare QT and HTTP. >
Possibly because one could be done away with - I don't know the customer requirements necessarily.
>> 2a) Is the RS485 link half or full duplex?
>
> Half duplex.
As long as you've played that game before, you should be fine.
>> 3a) SQL is a bloated pain in the neck.
>
> Really? I thought exactly the opposite thing. Why?
That's my opinion, but it's based on (admittedly biased) direct observation. This is mainly true because it's entirely a string interface, and constructing queries dynamically requires marshalling and unmarshalling a pseudo-natural language. Plus, the infrastructure can be somewhat daunting - although I've seen mainly MS SQL Server rather than SQLite.
<snip>
>
> It seems to me, shared memory fits better on my application.
I like it myself; some do not.
<snip>
>
> I hope it really exists and someone will point me to it.
You'd think, wouldn't you?
-- Les Cargill
Clifford Heath wrote:
> On 23/06/16 23:05, Reinhardt Behm wrote:
<snip>
> This application is not demanding enough to need a shared memory
> solution, with the attendant need for locks and especially,
> memory barriers, of which most respondents seem ill-educated.
The API for System V semaphores and System V shmem is clunky but not horrible. It's not completely sufficient on its own - each thread/process has to poll it or otherwise have a "read shmem now" signal somewhere to be named later, but simply having an upcounter within the shmem was good enough when I used it. If you can block on the semaphore, then you don't need that.

If one of the threads can reinitialize the contents of the shmem, then you will need some sort of signal or value of the upcounter to indicate that, or you can have a race condition of sorts.
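Les's "upcounter within the shmem" can be sketched as a generation counter: the writer bumps it after each update, and a reader copies the payload only when the counter has changed. A hedged illustration (plain objects stand in for the shared memory segment; all names are invented):

```python
# Sketch: a generation counter lets readers detect new data without any
# other signalling.  In real shmem the counter would be a word at a
# fixed offset in the segment, updated under the semaphore.
class SharedBlock:
    def __init__(self):
        self.generation = 0
        self.payload = b""

    def write(self, data):
        self.payload = data
        self.generation += 1        # readers poll this to detect updates

class Reader:
    def __init__(self, block):
        self.block = block
        self.seen = 0               # last generation we copied
        self.copy = b""

    def poll(self):
        """Return True if a new snapshot was taken."""
        if self.block.generation == self.seen:
            return False            # nothing new: skip the copy
        self.copy = self.block.payload
        self.seen = self.block.generation
        return True

shm = SharedBlock()
reader = Reader(shm)
shm.write(b"status v1")
updated = reader.poll()
again = reader.poll()
print(updated, again)               # prints True False
```

A reinitialization of the shmem would be handled the same way Les describes: reserve a counter value (or a separate flag) to mean "contents reset, start over".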
> In particular, Mel Wilson's "one writer, many reader" is no > longer so simple with modern multi-core architectures. Writes > do not have to happen in the order they are written, so readers > can get confused. >
My efforts at profiling this* on an Allwinner A20 show the lock overhead to be negligible. Then again, I wasn't bothering to get too detailed about assigning threads/processes to cores.

*Using a runtime hardware-ish counter, which will be domain dependent.

I chose shmem for tactical reasons other than trying to squeeze the last cycle out of the thing (the shared data was already in a 'C' struct).
> Clifford Heath.
-- Les Cargill
pozz <pozzugno@gmail.com> writes:
> The status data that changes frequently will be maximum 100-200 bytes > (I think it will be less than 100 bytes).
If "frequently" means less than a few times per second and you have enough hardware resources, you'll probably have an easier time writing this as if it were a desktop or web app than using a traditional embedded approach. That can even include writing the host part in a server-side scripting language instead of something like C.
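As a hedged illustration of that "web app" framing (Python stdlib only; the /status.json endpoint and the sample readings are invented), the host part could be little more than an in-memory snapshot served as JSON:

```python
# Sketch: the poller keeps its latest snapshot in an ordinary dict and a
# tiny stdlib HTTP handler serves it as JSON.  The /status.json path and
# the sample data are invented for illustration.
import json
from http.server import BaseHTTPRequestHandler

latest_status = {"temperature": 21.5, "pressure": 1013.2}  # poller updates this

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/status.json":
            self.send_error(404)
            return
        body = json.dumps(latest_status).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# Serving would be:
#   HTTPServer(("127.0.0.1", 8080), StatusHandler).serve_forever()
encoded = json.dumps(latest_status)
print(encoded)
```

The browser-side JavaScript then polls that endpoint, which sidesteps the whole shared-memory question for the remote-control half of the project.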
