
Embedded system user interfaces and configuration management

Started by Ico October 12, 2006
> Bob Smith kind of wrote:
>> LINUX APPLIANCE DESIGN from No Starch Press:
>> - RTA -- make your daemons look like DBs
>> - logmuxd -- respond to appliance events
>> - tbl2filed -- make system data look like it is in a DB
>> - A Web UI
>> - A Front Panel
>> - A CLI
>> - A FrameBuffer UI (with LIRC)
>> - SNMP -- what is it; how to write a MIB; how to write an agent.
Ico wrote:
> Yes, this sounds like a different approach than the solutions I've seen
> up to now - although the first impression from what you wrote here is
> that it might be a bit heavy for smaller devices with only a few MBs of
> RAM and flash.
The RTA library is about 60 KB when stripped. Add another K or so for
each table you make visible to the UIs. (BTW: "table" means an array or
linked list of structs. RTA makes _YOUR_DATA_ look like it is in a DB.
RTA is an interface, not a real DB.) You might get a better feel for
RTA by trying the live demo. Remember, you're talking to a process, not
a DB. Click on "mytable".
http://www.linuxappliancedesign.com/projects/rta/rta_tables.php

> I am absolutely interested in receiving the CD you
> mentioned to take a look at your system.
My real email address should be shown. Send me your postal address and I'll mail you a CD. Or, give me an FTP site where I can upload the 30 MB ISO image. You are more than welcome to pass the software around.
> In my original message I also wrote about the problem that a lot of
> knowledge about subsystems is often spread out through the platform -
> for example, if a service is configured via the web interface, there is
> often a lot of specific code hidden in the CGI applications (or even in
> javascript!) to validate the data, so both the service and the web
> front-end need to have knowledge about the data: valid ranges,
> dependencies, and such. Does your solution address these problems as
> well?
In a way, yes. All of the UIs use a PostgreSQL library to read and
write data in the various daemons and servers that form the
"processing" part of the appliance. When you start a project you
carefully and very deliberately define the tables that are visible to
the UIs. A division like this lets the UI experts build the UIs without
too much knowledge of how the appliance daemons work. The UI coders can
test their UI against a dummy PostgreSQL DB, and the real appliance
coders can test what they offer the UIs using bash and psql.
http://www.linuxappliancedesign.com/projects/rta/GoodrtaModel.png

The other piece is that RTA lets you define triggers for both read and
write. A "trigger" is the DB term for a callback subroutine that is
called before the read or write. You can put sanity, security, or "do
it" kind of code in the write callbacks. This helps security since it
is up to the daemon to protect itself from bad data.

If I lose my mind :) and commit to a second edition, it will contain
additional chapters on:
- How to update code in the field
- How to secure your appliance
- How to document your appliance

Bob
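A rough C sketch of the pattern described above: an ordinary array of
structs exposed as a "table", with a write callback that sanity-checks
data before the daemon accepts it. The registration function and
callback signature here are invented for illustration; the real RTA
API has its own names and structures.

#include <stdio.h>

/* One row of the daemon's own data. An RTA-style library makes this
 * array look like a DB table to the UIs without copying it anywhere. */
struct dnsconfig {
    char server[32];    /* DNS server address  */
    int  timeout;       /* query timeout, secs */
};

static struct dnsconfig dnstable[4];

/* Write callback ("trigger"): called before a UI write is applied.
 * The daemon protects itself here; bad data never reaches the table. */
static int dns_write_cb(void *pr, int rowid)
{
    struct dnsconfig *row = pr;

    if (row->timeout < 1 || row->timeout > 60) {
        fprintf(stderr, "row %d: timeout out of range\n", rowid);
        return -1;              /* reject the write */
    }
    return 0;                   /* accept the write */
}

/* Stand-in registration for this sketch; the real library would
 * record the table and serve it to PostgreSQL-speaking clients. */
static void tbl_register(const char *name, void *rows, int nrows,
                         int rowlen, int (*write_cb)(void *, int))
{
    printf("registered '%s': %d rows of %d bytes\n", name, nrows, rowlen);
    (void)rows; (void)write_cb;
}

int main(void)
{
    tbl_register("dnsconfig", dnstable, 4,
                 (int)sizeof(struct dnsconfig), dns_write_cb);
    /* ... daemon main loop; UIs now read/write dnstable via SQL ... */
    return 0;
}

The UI side would then see "dnsconfig" as just another table it can
SELECT from and UPDATE with psql.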
Lanarcam <lanarcam1@yahoo.fr> wrote:
> Ico wrote:
>
>> I would like to take this design one step further: what about
>> distributing the database and expert-system logic between the services
>> actually implementing them? This would require some kind of system-wide
>> inter-process-communication channel (bus?) where services (modules?) can
>> be connected and disconnected on the fly. Every service announces itself
>> on the bus, and can be queried by other subsystems about its
>> properties, methods, dependencies and such. Modules broadcast events
>> over the bus when something interesting happens, and other modules who
>> know about these kinds of events can act upon them. Attaching a module
>> automatically makes it available in the user interface, et cetera.
>
> This reminds me of corba, did you have that in mind?
Well, CORBA would definitely fit in here, but I think it would only
solve a small part of the complete problem. It could provide the
communication between modules, doing all the nasty work like the data
serialization/deserialization and handling the network stuff, but it
would still be only the lowest level of the complete architecture. Most
of the complexity would still reside in the system built on top of this
RPC layer.

I think any RPC-ish solution could do here, and there are many
available alternatives: CORBA, SOAP, XML-RPC, JSON-RPC, D-Bus, etc.
Another poster in this thread even uses the Postgres network protocol
for communication between modules, an interesting approach!

-- 
:wq ^X^Cy^K^X^C^C^C^C
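To make the "bus" idea concrete, here is a minimal C sketch of a module
announcing itself with a JSON-RPC 2.0 notification over a Unix domain
socket. The socket path, method name, and params are all invented for
illustration; the same bytes could just as well ride on D-Bus or
XML-RPC, which is exactly why the RPC layer is the easy part.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    /* JSON-RPC notification: no "id" member, so no reply is expected */
    const char *announce =
        "{\"jsonrpc\": \"2.0\", \"method\": \"module.announce\","
        " \"params\": {\"name\": \"dhcpd\", \"events\": [\"lease\"]}}\n";

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/var/run/appliance-bus.sock",
            sizeof(addr.sun_path) - 1);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");      /* no bus daemon listening */
        close(fd);
        return 1;
    }

    if (write(fd, announce, strlen(announce)) < 0)
        perror("write");
    close(fd);
    return 0;
}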
Don <none@given> wrote:
> Ico wrote:
>> [ Snipped a lot to keep the thread manageable ]
>
> You have to treat the configuration data as "first class objects" and
> not just casual "settings" private to a particular application.
Yes, and this might be one of the defects of many implementations I
have seen up to now: configuration and settings are often considered a
side effect of the modules/services doing the hard work - the whole
system is designed 'code driven', not 'data driven'. The developers
focus mainly on getting their service to do the job, and consider
configuration a minor detail that can be finished as soon as 'the real
work' is done.
> The discipline is one of "figure out what the data model needs
> to be for the "application"; then, *move* that data model into
> the database (instead of keeping it "private" to the application)
Very true!
>> I would like to take this design one step further: what about
>> distributing the database and expert-system logic between the services
>> actually implementing them?
>
> I think the problem you will run into there (er, *a* problem) is
> disciplining yourself to implement the same sort of consistent
> interface to each "module". And, then each module needs to
> have many of the mechanisms that I have delegated to the RDBMS
> to enforce (i.e. notifying consumers of that data that the data
> has changed; enforcing ACLs if you don't want certain "users"/services
> to be able to manipulate that data; etc.)
>
> And, the power of transaction processing is REALLY hard to
> implement in a distributed scheme. I.e. if I want to make
> changes to several different "program configurations" as one
> "block" and *know* that:
> + either ALL changes are made or NO changes are made (so that
>   the "changed" configurations remain in a consistent, "agreeable"
>   state -- you haven't changed one program and then failed to
>   change some other *related* program!)
> + no program sees intermediate results of this "transaction"
Yes, this is an often overlooked requirement, even though the issue is
very common. For example, consider the user changing the IP address and
default gateway of a system: they should both reside in the same subnet
- which is something the system should enforce - so the only way to
change these settings is to do it in one transaction.
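A minimal libpq sketch of exactly this, assuming a hypothetical
single-row "netconfig" table with "ipaddr" and "gateway" columns: both
values are committed together or not at all, so no consumer ever sees
the half-changed state. (Build with -lpq.)

#include <stdio.h>
#include <libpq-fe.h>

static int run(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);
    int ok = (PQresultStatus(res) == PGRES_COMMAND_OK);
    if (!ok)
        fprintf(stderr, "%s failed: %s", sql, PQerrorMessage(conn));
    PQclear(res);
    return ok;
}

int main(void)
{
    PGconn *conn = PQconnectdb("host=localhost dbname=appliance");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Change IP address and default gateway as one transaction; a
     * server-side trigger or constraint can reject the pair if the
     * gateway is not on the new subnet. */
    if (run(conn, "BEGIN") &&
        run(conn, "UPDATE netconfig SET ipaddr = '10.0.0.5'") &&
        run(conn, "UPDATE netconfig SET gateway = '10.0.0.1'") &&
        run(conn, "COMMIT")) {
        printf("network config updated atomically\n");
    } else {
        run(conn, "ROLLBACK");  /* no partial change is ever visible */
    }

    PQfinish(conn);
    return 0;
}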
>> This would make a much more modular and reusable setup - the
>> developer can enable only the modules needed for a specific platform,
>> and literally *all* information about a module is contained in the
>> module itself - so no need to configure the central database.
>
> I think it would depend on the level of discipline you have (and can
> impose on others living in your framework). I have to deal with other
> folks adding to my system. So, I am keenly aware that everything that
> I put in place to make it easy for folks to do things "the right way"
> will work to my advantage. If folks feel it is easier to just store
> two bits of configuration information in a file somewhere, then
> laziness will have them doing that instead of adhering to these
> conventions.
>
> You just have to gamble as to how much risk there is for your system
> to be "ignored" or circumvented. And, the costs that such acts bring
> to the system overall.
Which is always the case with any 'framework' you create for others to
use. Other developers will always try to find a balance between the
convenience and the grief they will get from using such a framework: if
it is too strict, people will hate it for limiting them in their work,
and if you make it too 'free', things get messy because everybody does
it their own way, and the system loses coherence.
> E.g., with the RDBMS, "I" can backup an entire user profile for the
> user. If someone fails to use my framework to store configuration
> data, then they will have to create a "backup my configuration"
> program *and* deal with irate users who *forget* to manually invoke
> this tool when they are backing up the rest of their configuration
> (then, let that third party explain to the user why their application
> is the only application that "doesn't work right" in this regard :>
Yes, choosing an RDBMS seems to be a smart approach indeed, your ideas
gave me enough to think about for the next few days. Thanks again!
> Good luck!
I'll need it :)

-- 
:wq ^X^Cy^K^X^C^C^C^C
Ico wrote:
> Don <none@given> wrote:
>> Ico wrote:
>>> [ Snipped a lot to keep the thread manageable ]
>>
>> You have to treat the configuration data as "first class objects" and
>> not just casual "settings" private to a particular application.
>
> Yes, and this might be one of the defects of many implementations I
> have seen up to now: configuration and settings are often considered a
> side effect of the modules/services doing the hard work - the whole
> system is designed 'code driven', not 'data driven'. The developers
> focus mainly on getting their service to do the job, and consider
> configuration a minor detail that can be finished as soon as 'the real
> work' is done.
Exactly. Development should be *specification* driven. Figure out what
the thing has to *do*. Figure out what it looks like TO THE USER (of
which configuration parameters are a key part), figure out all of the
boundary conditions and Things That WILL Go Wrong (TmReg). *Then*,
start coding it.

I.e. once you know what it is supposed to look like and how it should
behave, you *should* be able to write a User's Manual and have
THOUSANDS of them published -- you should have put that much
forethought into it :>

(I write the User Manual *before* I write any software -- though I
don't usually have to publish it that early! :> )
>> The discipline is one of "figure out what the data model needs
>> to be for the "application"; then, *move* that data model into
>> the database (instead of keeping it "private" to the application)
>
> Very true!
>
>>> I would like to take this design one step further: what about
>>> distributing the database and expert-system logic between the services
>>> actually implementing them?
>>
>> I think the problem you will run into there (er, *a* problem) is
>> disciplining yourself to implement the same sort of consistent
>> interface to each "module". And, then each module needs to
>> have many of the mechanisms that I have delegated to the RDBMS
>> to enforce (i.e. notifying consumers of that data that the data
>> has changed; enforcing ACLs if you don't want certain "users"/services
>> to be able to manipulate that data; etc.)
>>
>> And, the power of transaction processing is REALLY hard to
>> implement in a distributed scheme. I.e. if I want to make
>> changes to several different "program configurations" as one
>> "block" and *know* that:
>> + either ALL changes are made or NO changes are made (so that
>>   the "changed" configurations remain in a consistent, "agreeable"
>>   state -- you haven't changed one program and then failed to
>>   change some other *related* program!)
>> + no program sees intermediate results of this "transaction"
>
> Yes, this is an often overlooked requirement, even though the issue is
> very common. For example, consider the user changing the IP address and
> default gateway of a system: they should both reside in the same subnet
> - which is something the system should enforce - so the only way to
> change these settings is to do it in one transaction.
Exactly. Or, IP address and netmask -- some values and combinations are
"not legal". And, even if the values chosen *are* legal, you need to
make sure that they are actually changed concurrently. There are
numerous similar examples. And, other, more convoluted examples whereby
parameters from different subsystems interact, etc.

Being able to put the "business logic" in the RDBMS and let *it*
enforce those rules keeps everything consistent. I.e. *nothing*
(meaning "consumers") ever *sees* any of that data until after the
RDBMS has blessed it and incorporated it. It's not even *possible* for
other things (programs) to see it since the only place that stores it
is the RDBMS!

(If you have a second copy of it in a file someplace, then someone
could elect to use that copy -- either from ignorance or laziness.
Unless you apply the same sanity checks, etc. prior to storing it in
that file -- and prevent anyone from ALTERING it thereafter -- you run
the risk of having two inconsistent copies of data)
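A tiny C sketch of the kind of check being described, with illustrative
function names; logic like this belongs in the DB's write trigger or
constraint so an illegal mask, or a gateway off the new subnet, can
never be stored in the first place.

#include <stdint.h>
#include <stdio.h>

/* A legal netmask is a run of 1-bits followed by a run of 0-bits,
 * e.g. 255.255.255.0 = 0xFFFFFF00. Inverting gives 0x000000FF, and a
 * contiguous low run of 1s ANDed with itself-plus-one is zero. */
static int is_valid_netmask(uint32_t mask)
{
    uint32_t inv = ~mask;
    return (inv & (inv + 1)) == 0;
}

static int same_subnet(uint32_t ip, uint32_t gw, uint32_t mask)
{
    return (ip & mask) == (gw & mask);
}

int main(void)
{
    uint32_t ip   = 0x0A000005;  /* 10.0.0.5      */
    uint32_t gw   = 0x0A000001;  /* 10.0.0.1      */
    uint32_t mask = 0xFFFFFF00;  /* 255.255.255.0 */

    printf("mask legal: %d, same subnet: %d\n",
           is_valid_netmask(mask), same_subnet(ip, gw, mask));
    return 0;
}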
>>> This would make a much more modular and reusable setup - the
>>> developer can enable only the modules needed for a specific platform,
>>> and literally *all* information about a module is contained in the
>>> module itself - so no need to configure the central database.
>>
>> I think it would depend on the level of discipline you have (and can
>> impose on others living in your framework). I have to deal with other
>> folks adding to my system. So, I am keenly aware that everything that
>> I put in place to make it easy for folks to do things "the right way"
>> will work to my advantage. If folks feel it is easier to just store
>> two bits of configuration information in a file somewhere, then
>> laziness will have them doing that instead of adhering to these
>> conventions.
>>
>> You just have to gamble as to how much risk there is for your system
>> to be "ignored" or circumvented. And, the costs that such acts bring
>> to the system overall.
>
> Which is always the case with any 'framework' you create for others to
> use. Other developers will always try to find a balance between the
> convenience and the grief they will get from using such a framework: if
> it is too strict, people will hate it for limiting them in their work,
> and if you make it too 'free', things get messy because everybody does
> it their own way, and the system loses coherence.
Exactly. You ('I') have to offer *value*.
>> E.g., with the RDBMS, "I" can backup an entire user profile for the
>> user. If someone fails to use my framework to store configuration
>> data, then they will have to create a "backup my configuration"
>> program *and* deal with irate users who *forget* to manually invoke
>> this tool when they are backing up the rest of their configuration
>> (then, let that third party explain to the user why their application
>> is the only application that "doesn't work right" in this regard :>
>
> Yes, choosing an RDBMS seems to be a smart approach indeed, your ideas
> gave me enough to think about for the next few days. Thanks again!
Don't fall into the trap of conventional thinking. E.g., the idea of
adding a library to encapsulate each module's interfaces, etc. Why
can't that "library" be a set of hooks (stored procedures) and tables
in an RDBMS? Functionally equivalent -- just packaged differently (and,
if it resides in the RDBMS, then it can avail itself of the mechanisms
provided by the RDBMS)!

--don
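For a concrete (if simplified) picture of the idea, compare a
conventional library call with the same "call" made as a stored
procedure through libpq. The set_hostname() procedure is hypothetical;
the point is that the RDBMS runs the validation, ACL checks, and change
notifications, not the caller.

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("host=localhost dbname=appliance");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Instead of linking against set_hostname() in a C library, call
     * a stored procedure of the same (hypothetical) name; the sanity
     * checks live in the database, next to the data they guard. */
    PGresult *res = PQexec(conn, "SELECT set_hostname('appliance-01')");
    if (PQresultStatus(res) != PGRES_TUPLES_OK)
        fprintf(stderr, "rejected: %s", PQerrorMessage(conn));
    PQclear(res);

    PQfinish(conn);
    return 0;
}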