
Embedded system user interfaces and configuration management

Started by Ico October 12, 2006
Hi all,

There's a thing that has been bothering me for a long time, and I'm
interested in some views from other people in the embedded field.

For the past years, I've been working with/for a number of companies on
quite a number of embedded systems, often Linux/uClinux based. These
devices all had completely different functionalities (wireless access
points, data acquisition, video encoding, process control, etc.), but
they also had a lot in common: some method for initial IP configuration,
a user interface for system configuration through a web server and/or
SNMP/TFTP/CLI/etc., a method for safe firmware upgrading, and all the
other things one needs in a professional embedded system. I'll call all
these functions together 'system management'.

The thing that bothers me is that each and every company has re-invented
the wheel for doing system management their own way. They all built
their own systems for storing configuration information, varying from
plaintext files to sqlite, storing raw structs in flash partitions and
more creative solutions. They all wrote their own scripts or programs
for presenting, checking and validating data through the web server, CLI
or other configuration tool. They all created their own system for
configuring the subsystems on the platform, which often consists of
generating config files and restarting services. Some were 'intelligent'
and only performed the actions necessary to apply the user's changes on
the fly; others were plain stupid and needed a system reboot to apply
settings.

I think the most important thing they all had in common was that they
were basically just... crappy in some way. 

It seems that with time these systems tend to grow and get hard - or
even impossible - to maintain. Often there is a lot of knowledge about
functionality spread out throughout different parts of the system: for
example, a daemon process needs to know how to parse its config file,
while some other part of the system needs to know how to write it. For
that, it needs to get the data to put into the file from somewhere else
(often one big central database). Another subsystem is responsible for
getting this data from the user through a web interface, and this system
also needs to know how to present this data to the user, and how to
validate this data when entered. There are just too many different parts
involved in getting things to work.

Often things get even more complicated by all kinds of dependencies
between systems - changing the IP address of a device might require the
restart of some services, but not of others, and re-configuring service
A might need some kind of reconfiguring of service B as well. I've seen
a number of systems where the complexity of these dependencies grew out
of hand, resulting in the brute-force solution: store changes, and
reboot. Easy on the developers, but a user's nightmare.

Over the years I have contributed to some of these solutions myself -
some not too bad and others plain monstrous - and before starting the
next mistake, I'm trying to figure out 'how others do it'. 

So, my actual question is: am I missing something here? Has this
problem been long solved and I just don't know about it, or am I not
the only one facing this problem, time and time again? Are there
existing systems that can help me manage the boring and hard parts of
embedded systems: the configuration and user interface?

Thanks a lot,



-- 
:wq
^X^Cy^K^X^C^C^C^C
Ico wrote:

[system configuration]

> [...]
>
> So, my actual question is: am I missing something here? Has this
> problem been long solved and I just don't know about it, or am I not
> the only one facing this problem, time and time again? Are there
> existing systems that can help me manage the boring and hard parts of
> embedded systems: the configuration and user interface?
Ad hoc solutions usually exist because systems are rarely *designed*,
from scratch, with anticipation of future needs, dependencies, etc. in
mind. Consider the physical resources made available to this activity
(is it a few bytes of NVRAM? Or a chunk of space on a disk?). It's
hardly worthwhile to set aside a lot of resources for configuration if
the only thing that needs to be "configured" is the time-of-day clock,
etc.

My current approach is to use an RDBMS to store all configuration
information. The RDBMS can then enforce "who" can set "what".
Server-side stored procedures are built to handle all of the
dependencies/"consequences" of changes to individual "settings". This
helps document these dependencies in addition to ensuring that "all the
right things" get done each time something is changed. It also makes it
really easy to store/restore/update configurations, as everything is in
one place! And it allows the developer to implement the user interface
to that dataset wherever he/she wishes (i.e. as a centralized
"configuration program" or in discrete/distinct "settings panels").

Of course, very few projects have the resources available for this sort
of approach :<

--don
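As a minimal sketch of what that enforcement could look like - assuming
SQLite stands in for the RDBMS, with made-up table and setting names
rather than any actual schema, and with a trigger playing the role of
the server-side stored procedure - something like this:

/* Sketch only: a tiny "configuration database" in SQLite.  The table,
 * column and setting names are hypothetical. */
#include <sqlite3.h>
#include <stdio.h>

static const char *schema =
    "CREATE TABLE IF NOT EXISTS settings ("
    "  name  TEXT PRIMARY KEY,"
    "  value TEXT NOT NULL);"
    "INSERT OR REPLACE INTO settings VALUES ('audio_codec', 'PCM');"
    /* Only known codecs may ever be stored, no matter which front end
     * (web, CLI, SNMP) asks for the change. */
    "CREATE TRIGGER IF NOT EXISTS codec_guard "
    "BEFORE UPDATE ON settings "
    "WHEN NEW.name = 'audio_codec' AND "
    "     NEW.value NOT IN ('PCM', 'DPCM', 'MP3', 'uLaw') "
    "BEGIN SELECT RAISE(ABORT, 'invalid audio_codec'); END;";

int main(void)
{
    sqlite3 *db;
    char *err = NULL;

    if (sqlite3_open("config.db", &db) != SQLITE_OK)
        return 1;
    if (sqlite3_exec(db, schema, NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "schema: %s\n", err);
        sqlite3_free(err);
    }

    /* An update that violates the rule is rejected by the database
     * itself, so every UI automatically plays by the same rules. */
    if (sqlite3_exec(db,
            "UPDATE settings SET value = 'OGG' WHERE name = 'audio_codec';",
            NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "rejected: %s\n", err);
        sqlite3_free(err);
    }

    sqlite3_close(db);
    return 0;
}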
Ico wrote:
> For the past years, I've been working with/for a number of companies on
> quite a number of embedded systems, often Linux/uClinux based. These
> devices all had completely different functionalities (wireless access
> points, data acquisition, video encoding, process control, etc.), but
> they also had a lot in common: some method for initial IP configuration,
> a user interface for system configuration through a web server and/or
> SNMP/TFTP/CLI/etc., a method for safe firmware upgrading, and all the
> other things one needs in a professional embedded system. I'll call all
> these functions together 'system management'.
No Starch Press is coming out with a book called "Linux Appliance
Design". I am one of its authors.

The book addresses many of the issues you've raised. It is not a typical
"embedded Linux" book because we start with the assumption that the
system boots and bash is up and running.

The book's sample appliance is "Laddie", an alarm system that uses the
five status lines on a parallel port as inputs from the alarm system
hardware. (The book includes a bootable CD that turns your PC into the
Laddie appliance.)

The basic layout and chapter list:

- Appliance architecture -- basically what you've posted asking about
- How to talk to a running daemon -- use RTA to make all of your daemons
  look like a PostgreSQL DB so that all of your UIs can talk to the
  daemons using the PG client libraries. Also, a table is a natural way
  to view data.
- Responding to appliance events -- we built a daemon to watch syslog
  (or other event sources) and to respond when certain events occur. Our
  logger is called "logmuxd" because it can multiplex output to many
  destinations.
- A daemon to make system data look like it is in a DB -- we wrote a
  little utility to give a "DB" view of all the system configuration
  data. We use this, for example, to make the contents of resolv.conf
  look like they're in a DB.

(All of the UIs below use the PostgreSQL client libraries to talk to the
daemons listed above.)

- A Web UI -- PHP, CSS, AJAX
- A Front Panel -- HD44780 two-line LCD display, plus an AJAX-powered
  web page that looks like the front panel
- A CLI -- a simple command line interface using Lex/Yacc; but simple!
- A FrameBuffer UI (with LIRC) -- intro to IR and how LIRC works. Intro
  to framebuffers and the choices you'll need to make.
- SNMP -- three chapters for SNMP: what it is; how to write a MIB; how
  to write an agent.

The book is still in edit, but if you're interested I'll be glad to send
you (or anyone else who has read this far :) ) the Laddie CD, which has
all of the source code for the appliance. You might at least want to
check out RTA in the projects section of the web site:
http://www.linuxappliancedesign.com

Please let me know if you want a CD.

Bob Smith
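To give a feel for the "daemon looks like a PostgreSQL DB" idea: since
the daemon speaks the PostgreSQL protocol, the UI side is just an
ordinary libpq client. The sketch below is not from the book - the port
number, table and column names (mydaemon_conf, log_level) are invented
for illustration:

/* Sketch of a UI reading and changing a daemon's configuration through
 * the PostgreSQL client library.  Names and port are hypothetical. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn   *conn;
    PGresult *res;
    int       i;

    conn = PQconnectdb("host=127.0.0.1 port=8888 dbname=mydaemon");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Reading configuration is just a SELECT ... */
    res = PQexec(conn, "SELECT name, value FROM mydaemon_conf");
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        for (i = 0; i < PQntuples(res); i++)
            printf("%s = %s\n", PQgetvalue(res, i, 0), PQgetvalue(res, i, 1));
    PQclear(res);

    /* ... and changing it is just an UPDATE; the daemon applies (or
     * rejects) the change when the statement arrives. */
    res = PQexec(conn,
        "UPDATE mydaemon_conf SET value = 'debug' WHERE name = 'log_level'");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "update rejected: %s", PQerrorMessage(conn));
    PQclear(res);

    PQfinish(conn);
    return 0;
}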
Hello Bob,

Bob Smith <bsmith@linuxtoys.org> wrote:
> [...]
>
> The book is still in edit, but if you're interested I'll be glad to send
> you (or anyone else who has read this far :) ) the Laddie CD, which has
> all of the source code for the appliance. You might at least want to
> check out RTA in the projects section of the web site:
> http://www.linuxappliancedesign.com
> Please let me know if you want a CD.
Yes, this sounds like a different approach than the solutions I've seen
up to now - although the first impression from what you wrote here is
that it might be a bit heavy for smaller devices with only a few MBs of
RAM and flash. I am absolutely interested in receiving the CD you
mentioned to take a look at your system.

In my original message I also wrote about the problem that a lot of
knowledge about subsystems is often spread out through the platform -
for example, if a service is configured via the web interface, there is
often a lot of specific code hidden in the CGI applications (or even in
JavaScript!) to validate the data, so the service and the web front end
both need to have knowledge about the data: valid ranges, dependencies,
and such. Does your solution address these problems as well?

Thanks,

_Ico

--
:wq
^X^Cy^K^X^C^C^C^C
Some more ramblings on device configuration...

Don <none@given> wrote:
> Ico wrote:
>> [...]
>>
>> So, my actual question is: am I missing something here? Has this
>> problem been long solved and I just don't know about it, or am I not
>> the only one facing this problem, time and time again? Are there
>> existing systems that can help me manage the boring and hard parts of
>> embedded systems: the configuration and user interface?
>
> Ad hoc solutions usually exist because systems are rarely *designed*,
> from scratch, with anticipation of future needs, dependencies, etc. in
> mind.
Very true, and this is one of the reasons for my concerns. But this
problem is also hard to address, since things like this just tend to
grow and evolve - a company starts with product X version 1.0, with only
one feature, the next version does only a little bit more, needs a tiny
bit more configuration, et cetera. Thus another monster is born.

What I am actually looking for is a general philosophy, a setup or
framework which is flexible enough to grow with the requirements, but
which will not grow exponentially more complex when functionality is
added to the system.

An analogy would be something like using finite state machines in
(embedded) software: without FSMs, it is very well possible to create
complex systems, but each time a function is added, there is a big risk
of creating more and more spaghetti code to solve all kinds of
dependencies between functions. Coding an application using state
machines often looks like a bit more work in the beginning, but tends to
be much more modular, extensible and understandable, since every state
has well-defined bounds, inputs and outputs, and can be more easily
comprehended, debugged and reviewed. State machines are no magic - you
can still mess up, of course - but they are a proven technique to keep
things modular and maintainable.
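For what it's worth, a minimal table-driven sketch of that FSM style
(the states, events and actions here are invented purely to show the
shape of it):

/* Each state/event pair maps to an action and a next state, so adding
 * behaviour means adding rows to a table rather than nesting more
 * if/else logic. */
#include <stdio.h>

typedef enum { ST_IDLE, ST_CONFIGURING, ST_APPLYING, ST_COUNT } state_t;
typedef enum { EV_EDIT, EV_COMMIT, EV_DONE, EV_COUNT } event_t;

typedef struct {
    state_t next;
    void  (*action)(void);
} transition_t;

static void open_session(void)  { puts("open config session"); }
static void apply_changes(void) { puts("apply changes to services"); }
static void close_session(void) { puts("close session"); }
static void ignore(void)        { }

/* The whole behaviour lives in this one table. */
static const transition_t fsm[ST_COUNT][EV_COUNT] = {
    [ST_IDLE]        = { [EV_EDIT]   = { ST_CONFIGURING, open_session },
                         [EV_COMMIT] = { ST_IDLE,        ignore },
                         [EV_DONE]   = { ST_IDLE,        ignore } },
    [ST_CONFIGURING] = { [EV_EDIT]   = { ST_CONFIGURING, ignore },
                         [EV_COMMIT] = { ST_APPLYING,    apply_changes },
                         [EV_DONE]   = { ST_IDLE,        close_session } },
    [ST_APPLYING]    = { [EV_EDIT]   = { ST_APPLYING,    ignore },
                         [EV_COMMIT] = { ST_APPLYING,    ignore },
                         [EV_DONE]   = { ST_IDLE,        close_session } },
};

int main(void)
{
    state_t state = ST_IDLE;
    event_t script[] = { EV_EDIT, EV_COMMIT, EV_DONE };
    size_t  i;

    for (i = 0; i < sizeof(script) / sizeof(script[0]); i++) {
        const transition_t *t = &fsm[state][script[i]];
        t->action();
        state = t->next;
    }
    return 0;
}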
> [...]
>
> My current approach is to use an RDBMS to store all configuration
> information. The RDBMS can then enforce "who" can set "what".
> Server-side stored procedures are built to handle all of the
> dependencies/"consequences" of changes to individual "settings". This
> helps document these dependencies in addition to ensuring that "all the
> right things" get done each time something is changed. It also makes it
> really easy to store/restore/update configurations, as everything is in
> one place!
Yes, storing all your data in one place has advantages, but also its
cons. One of the problems with this is that there is knowledge about
your configuration in more than one place: there is some kind of
subsystem that can be configured through the RDBMS, but this RDBMS also
needs to know what data this service needs and what the types of its
properties are. The front end (CLI/web/whatever) still needs to know how
to present those properties to the user, etc.

For example, how would you solve the case where a configuration item for
a service needs to be selected from a list by the user (let's call it an
'enum'), say for selecting a compression protocol for some audio stream?
Valid choices are PCM, DPCM, MP3 and uLaw. The service is configured
through a config file, where this choice is written to.

Something like this is quite hard to implement properly throughout the
system. Of course, the service itself knows what the valid choices are,
since it needs to parse the config file and act accordingly. But does
your RDBMS know what choices are valid, and allow only these to be
stored? If so, you have knowledge about this enum in two places at
least. Assuming a web interface is used for configuration, the user
should be presented a list to choose the audio format from, so the web
application will need to have knowledge about this as well, which makes
three duplicates of the same information.

Nightmares really start when settings depend on each other - imagine
that for some technical reason the MP3 format can only be used when the
sample rate (another configurable setting) is 22050 Hz or higher. This
is something the service itself knows, because it can't mix lower sample
rates with MP3. But is this something your RDBMS needs to know as well?
And how can I present this to the user? I don't want the user to be able
to select MP3 when the sample rate is 8 kHz, but I also don't want the
user to be able to set the sample rate to 8 kHz when he selected MP3 as
the encoder. And the hardest part would be to inform the user *why* he
is not allowed to set the codec: "Warning: setting A is not compatible
with setting B, change setting A first if you want to set B to X!"

Of course, it's all doable - you can just hardcode these dependencies in
three different places, but things just grow and get buggier as more
features are added, when you have to maintain code in three different
places to support one single function. "Don't repeat yourself" is a nice
philosophy, but is hard to get right!
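One way to at least keep the enum in a single place, sketched here with
a hypothetical audio_codec setting (this is an illustration of the
"define it once" idea, not any particular product's code): a single
descriptor table that the service uses to validate its config file and
the CGI front end uses to build the selection list.

#include <stdio.h>
#include <string.h>

struct choice {
    const char *name;   /* what appears in the config file and the UI */
    int         code;   /* what the service uses internally */
};

static const struct choice audio_codecs[] = {
    { "PCM",  0 },
    { "DPCM", 1 },
    { "MP3",  2 },
    { "uLaw", 3 },
};
#define N_CODECS (sizeof(audio_codecs) / sizeof(audio_codecs[0]))

/* Used by the service when parsing its config file. */
int codec_lookup(const char *name)
{
    size_t i;
    for (i = 0; i < N_CODECS; i++)
        if (strcmp(audio_codecs[i].name, name) == 0)
            return audio_codecs[i].code;
    return -1;  /* invalid value: reject the config */
}

/* Used by the (CGI) front end to build the selection list. */
void codec_print_select(const char *current)
{
    size_t i;
    printf("<select name=\"audio_codec\">\n");
    for (i = 0; i < N_CODECS; i++)
        printf("  <option%s>%s</option>\n",
               strcmp(audio_codecs[i].name, current) == 0 ? " selected" : "",
               audio_codecs[i].name);
    printf("</select>\n");
}

int main(void)
{
    printf("MP3 -> %d\n", codec_lookup("MP3"));
    codec_print_select("PCM");
    return 0;
}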
> Of course, very few projects have the resources available for this
> sort of approach :<
Yes, which is a pity, and it leads to a lot of badly designed devices
with incomprehensible interfaces showing unpredictable behaviour.

_Ico

--
:wq
^X^Cy^K^X^C^C^C^C
Ico wrote:
> Don <none@given> wrote:
... snip ...
> For example, how would you solve the case where a configuration item
> for a service needs to be selected from a list by the user (let's call
> it an 'enum'), say for selecting a compression protocol for some audio
> stream? Valid choices are PCM, DPCM, MP3 and uLaw. The service is
> configured through a config file, where this choice is written to.
>
> [...]
>
> Of course, it's all doable - you can just hardcode these dependencies
> in three different places, but things just grow and get buggier as more
> features are added, when you have to maintain code in three different
> places to support one single function. "Don't repeat yourself" is a
> nice philosophy, but is hard to get right!
This is where you probably want the attitudes of object-oriented
software. You can have this without the complications of C++ with some
self-discipline. For example, in your example, the service needs to read
the config file, and various things need to write the config file. The
config file should not be public. This reduces the peripherals to
calling a routine, say 'alterconfig', which has that access. It can then
return messages about the success or failure of any operation, and it is
up to the peripheral to format and display that message to the user.

I guess I am saying: basically, look at the configuration portion as one
more service provided.

--
"The mere formulation of a problem is far more often essential than its
solution, which may be merely a matter of mathematical or experimental
skill. To raise new questions, new possibilities, to regard old problems
from a new angle requires creative imagination and marks real advances
in science."                                       -- Albert Einstein
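A rough sketch of such an 'alterconfig' routine - the signature, file
path and the single validated setting are assumptions made up for the
example, not part of any existing system: only this module touches the
config file, and callers (web CGI, CLI, SNMP agent) get back a message
they can show the user.

#include <stdio.h>
#include <string.h>

/* Returns 0 on success, -1 on failure; *msg always points to a
 * human-readable explanation the caller may display. */
int alterconfig(const char *name, const char *value, const char **msg)
{
    static const char *codecs[] = { "PCM", "DPCM", "MP3", "uLaw" };
    size_t n = sizeof(codecs) / sizeof(codecs[0]);
    size_t i;
    FILE  *f;

    if (strcmp(name, "audio_codec") == 0) {
        for (i = 0; i < n; i++)
            if (strcmp(value, codecs[i]) == 0)
                break;
        if (i == n) {
            *msg = "audio_codec must be one of PCM, DPCM, MP3, uLaw";
            return -1;
        }
    }

    f = fopen("/etc/service.conf", "a");   /* the only writer */
    if (f == NULL) {
        *msg = "could not open configuration file";
        return -1;
    }
    fprintf(f, "%s = %s\n", name, value);
    fclose(f);

    *msg = "setting stored";
    return 0;
}

int main(void)
{
    const char *msg;
    if (alterconfig("audio_codec", "OGG", &msg) != 0)
        fprintf(stderr, "rejected: %s\n", msg);
    return 0;
}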
Ico wrote:
> Some more ramblings on device configuration...
>
> [...]
>
> Very true, and this is one of the reasons for my concerns. But this
> problem is also hard to address, since things like this just tend to
> grow and evolve - a company starts with product X version 1.0, with
> only one feature, the next version does only a little bit more, needs a
> tiny bit more configuration, et cetera. Thus another monster is born.
Shoot the Marketing Staff! :>

<on a more serious note...>

Change is inevitable. How you plan for change is what separates the men
from the boys :>
> What I am actually looking for is a general philosophy, a setup or
> framework which is flexible enough to grow with the requirements, but
> which will not grow exponentially more complex when functionality is
> added to the system.
>
> An analogy would be something like using finite state machines in
> (embedded) software: without FSMs, it is very well possible to create
> complex systems, but each time a function is added, there is a big risk
> of creating more and more spaghetti code to solve all kinds of
> dependencies between functions. Coding an application using state
Exactly.
> machines often looks like a bit more work in the beginning, but tends
> to be much more modular, extensible and understandable, since every
> state has well-defined bounds, inputs and outputs, and can be more
> easily
Moreover, you can build small FSMs which are *entered* from a larger
FSM. Also, you can have multiple FSMs running concurrently and let them
"talk" to each other. I.e. the "state language" becomes just another
"scripting language".
> comprehended, debugged and reviewed. State machines are no magic - you
> can still mess up, of course - but they are a proven technique to keep
> things modular and maintainable.
>
>> My current approach is to use an RDBMS to store all configuration
>> information. [...]
>
> Yes, storing all your data in one place has advantages, but also its
> cons. One of the problems with this is that there is knowledge about
> your configuration in more than one place: there is some kind of
> subsystem that can be configured through the RDBMS, but this RDBMS also
> needs to know what data this service needs and what the types of its
> properties are. The front end (CLI/web/whatever) still needs to know
> how to present those properties to the user, etc.
I store data in exactly *one* place. Part of good database design is
never storing duplicate data -- as this would lead to inconsistencies
between the datasets.

Each "service" that needs configuration data formally queries the RDBMS
for that data. A service is free to *cache* that data locally (so that
it doesn't have to keep asking the same query over and over again each
time it references that configuration parameter -- i.e. "what color
should I make the error messages?"). However, when data in the database
is changed, the database issues a signal() to each process that is a
*consumer* of that data saying, in effect, "your configuration has
changed; please reread the database".

This can result in a flurry of queries at each configuration update.
But that's essentially what happens when *any* product's configuration
changes -- everything that is configurable needs to know about those
changes.

If you store the configuration in a global struct, then there is no cost
to the consumers to retrieve this data. But then you have to worry about
which consumers may be actively examining the data when you change it,
and implement mechanisms to safeguard its integrity during updates
(e.g., with mutexes, etc.). In my scheme, each service knows that it
"must" retrieve new data but can do so when it is convenient for that
service (a more cooperative, less authoritarian approach).
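The consumer side of that might look roughly like the sketch below. It
assumes SIGHUP as the concrete notification (the post only says "a
signal()"), and reload_config() is a stub standing in for "query the
configuration database again":

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t config_stale = 0;

static void on_sighup(int sig)
{
    (void)sig;
    config_stale = 1;                /* just take note; reread later */
}

static void reload_config(void)
{
    /* In the scheme described above this would be a fresh query
     * against the configuration store; here it is only a stub. */
    puts("re-reading configuration");
}

int main(void)
{
    struct sigaction sa;

    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_sighup;
    sigaction(SIGHUP, &sa, NULL);

    reload_config();                 /* initial read, cached locally */

    for (;;) {
        /* ... the daemon's normal work ... */
        sleep(1);

        if (config_stale) {          /* a convenient moment to catch up */
            config_stale = 0;
            reload_config();
        }
    }
}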
> For example, how would you solve the case where a configuration item
> for a service needs to be selected from a list by the user (let's call
> it an 'enum'), say for selecting a compression protocol for some audio
> stream? Valid choices are PCM, DPCM, MP3 and uLaw. The service is
> configured through a config file, where this choice is written to.
(Trying to avoid getting too deep into database theory here, so I'll
just casually describe the mechanism used.)

A table (everything is a table!) is created with, for example, two
columns ("fields"): the first may be a textual description of the
"choice" -- i.e. "PCM", "DPCM", "MP3", "uLaw" in your example. The
second (and possibly 3rd, 4th, 5th, etc.) may be a special "code" that
is what your software actually *uses*. For example, it may be a magic
constant (#define). Or perhaps a key parameter governing the
compressor's operation. <shrug> It is *whatever* you want/need it to be
INTERNALLY. (If you have 5 "variables" that need to be set based on the
choice of protocol, you can store the five values for each of those N
different protocols... and stuff them in the corresponding variables
when that "choice" is selected.)

Elsewhere, you have some configuration setting stored in the "main"
database (which is also a *table*). The tables are *linked* in such a
way that the value for that "setting" can only be one of those listed in
the first column of this other table (just like you can say a field can
only contain text, or an integer, or a float, etc., you can also say it
is constrained to one of these N values). The DBMS will *ensure* that
this setting never gets any value *other* than one of those N values.

Your configuration program will query this small table to get the list
of choices to display to the user. Based on what he picks, that choice
will be stored for the "setting" in question. And the five (?) other
internal parameters will be available for you to copy to whichever
"variables" need initialization.
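Expressed as a sketch (SQLite again, with invented table names): the
'codecs' table holds the visible choice plus whatever the software uses
internally, and the setting is linked to it with a foreign key so the
database itself refuses anything not in the list.

#include <sqlite3.h>
#include <stdio.h>

static const char *schema =
    "PRAGMA foreign_keys = ON;"
    "CREATE TABLE codecs ("
    "  name        TEXT PRIMARY KEY,"     /* shown to the user */
    "  magic       INTEGER NOT NULL,"     /* used by the software */
    "  min_rate_hz INTEGER NOT NULL);"    /* extra per-choice parameter */
    "INSERT INTO codecs VALUES ('PCM', 0, 8000), ('DPCM', 1, 8000),"
    "                          ('MP3', 2, 22050), ('uLaw', 3, 8000);"
    "CREATE TABLE settings ("
    "  audio_codec TEXT NOT NULL REFERENCES codecs(name),"
    "  sample_rate INTEGER NOT NULL);";

int main(void)
{
    sqlite3 *db;
    char *err = NULL;

    sqlite3_open(":memory:", &db);
    sqlite3_exec(db, schema, NULL, NULL, &err);

    /* The configuration program would SELECT name FROM codecs to build
     * its list of choices.  Storing anything *not* in that list fails: */
    if (sqlite3_exec(db,
            "INSERT INTO settings VALUES ('OGG', 44100);",
            NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "rejected by the database: %s\n", err);
        sqlite3_free(err);
    }

    sqlite3_close(db);
    return 0;
}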
> Something like this is quite hard to implement properly throughout the
> system. Of course, the service itself knows what the valid choices are,
> since it needs to parse the config file and act accordingly. But does
> your RDBMS know what choices are valid, and allow only these to be
Exactly! To rephrase your statement in *my* reality: the service knows
what the valid choices are -- but relies on the RDBMS, instead, to
*remember* what those actually are! And what it should *do* for each of
those choices (e.g., "set magic_number to M_ULAW if uLaw is selected",
etc.). Thereafter, it *relies* on the RDBMS to remember those choices,
since it will NOT!

And the configuration program, instead of being manually crafted to
agree with the list of choices hard-coded into the application, now ALSO
relies on the RDBMS to tell it what those choices are. I.e. adding a new
choice to the RDBMS "magically" causes that choice to be available in
the configuration program.
> stored? If so, you have knowledge about this enum in two places at
See above :>
> least. Assuming a web interface is used for configuration, the user
> should be presented a list to choose the audio format from, so the web
> application will need to have knowledge about this as well, which makes
> three duplicates of the same information.
<grin> Perhaps you now see the beauty/elegance of my approach? The web
application queries the RDBMS as well and builds its form based on the
results provided *by* the RDBMS. With careful planning, you can even
reduce configuration to a simple "boiler-plate" that is parameterized by
the name of the setting to be updated; it then fetches all of the
choices for the user, etc.

At times, this is a "forced fit" -- some things don't naturally want to
fit in this approach. But you can often rethink the way you specify the
data in order to make the fit more intuitive. E.g., you can design an
address book in which you include a place for each person to list their
*children*. Or, you can design that same address book with a place for
each person to list his/her *parents*! The latter case *seems* clumsy,
at first... but, if you think about the implementation, it is much
cleaner -- everyone has *exactly* two parents (for the most part :> )
whereas a person may have 0, 1, 2... 10, 12... *children*!
> Nightmares really start when settings depend on each other - imagine
> that for some technical reason the MP3 format can only be used when the
> sample rate (another configurable setting) is 22050 Hz or higher. This
> is something the service itself knows, because it can't mix lower
> sample rates with MP3. But is this something your RDBMS needs to know
> as well? And how can I present this to the user? I don't want the user
> to be able to select MP3 when the sample rate is 8 kHz, but I also
> don't want the user to be able to set the sample rate to 8 kHz when he
> selected MP3 as the encoder. And the hardest part would be to inform
> the user *why* he is not allowed to set the codec: "Warning: setting A
> is not compatible with setting B, change setting A first if you want to
> set B to X!"
>
> Of course, it's all doable - you can just hardcode these dependencies
> in three different places, but things just grow and get buggier as more
> features are added, when you have to maintain code in three different
> places to support one single function. "Don't repeat yourself" is a
> nice philosophy, but is hard to get right!
<grin> I handle this with an expert system. It embodies this knowledge
in its ruleset. So, it "knows" what is "prohibited". It also knows the
*consequences* of choices. E.g., "if the user selects MP3, *force* the
sample rate to 8KHz" (redefining your criteria, above).

(sigh -- I had hoped not to get dragged into the bowels of this but...)

When the RDBMS is told to update a record (a line in a table), there are
"rules"/procedures that are invoked *by* the RDBMS before and/or after
(based on how they are defined) the update. So, I can define a trigger
that consults the expert system with the *intended* data to be stored in
the table. The expert system can then tell me "go" or "no go". In the
first case, I then update the table and return "success" to the user
(the user in this case is actually the configuration program; *it* can
then figure out how to convey "success" to the HUMAN user). In the
latter case, I abort the attempt to update the table and return
"failure" to the "user".

In anticipation of other "what if" scenarios you might consider... Each
of these potential updates may be part of a larger RDBMS "transaction".
Think of them as statements in a subroutine (?). If any of them signal
failure, the transaction can automatically be "un-done". As if it
*never* occurred -- including all of the steps taken (and "successfully"
completed!) up to this *failed* step in the transaction. The beauty is
that the RDBMS handles all of this for you -- ensuring that *nothing*
ever sees the intermediate results of these changes, etc.
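A small sketch of the trigger-plus-transaction idea, once more with
SQLite and made-up names; the trigger stands in for the expert-system
consultation, and it enforces the MP3/sample-rate rule from the earlier
post (requiring 22050 Hz or higher rather than forcing 8 kHz):

#include <sqlite3.h>
#include <stdio.h>

static const char *schema =
    "CREATE TABLE settings (audio_codec TEXT, sample_rate INTEGER);"
    "INSERT INTO settings VALUES ('PCM', 44100);"
    "CREATE TRIGGER codec_rule BEFORE UPDATE ON settings "
    "WHEN NEW.audio_codec = 'MP3' AND NEW.sample_rate < 22050 "
    "BEGIN "
    "  SELECT RAISE(ABORT, 'MP3 needs a sample rate of 22050 Hz or higher');"
    "END;";

int main(void)
{
    sqlite3 *db;
    char *err = NULL;

    sqlite3_open(":memory:", &db);
    sqlite3_exec(db, schema, NULL, NULL, &err);

    /* Both updates are issued as one transaction.  The second one is
     * vetoed by the trigger; the explicit ROLLBACK then undoes the
     * first, so no half-applied configuration is ever committed. */
    if (sqlite3_exec(db,
            "BEGIN;"
            "UPDATE settings SET sample_rate = 8000;"
            "UPDATE settings SET audio_codec = 'MP3';"
            "COMMIT;",
            NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "change refused: %s\n", err);
        sqlite3_free(err);
        sqlite3_exec(db, "ROLLBACK;", NULL, NULL, NULL);
    }

    sqlite3_close(db);
    return 0;
}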
>> Of course, very few projects have the resources available for this
>> sort of approach :<
>
> Yes, which is a pity, and it leads to a lot of badly designed devices
> with incomprehensible interfaces showing unpredictable behaviour.
I have implemented this in a very heavyweight fashion. (I currently have
the resources available to do so.) But I think the same approach can
also be applied to a leaner environment. After all, it is just
rewrapping *how* you organize your tests and data. It need not be stored
in a "real" RDBMS... just something that you *treat* as an RDBMS!
Likewise, the "expert system" is just a mechanism that you *treat* as an
expert system...

--don
[ Snipped a lot to keep the thread manageable ]
 
Don <none@given> wrote:
> Ico wrote:
>> Very true, and this is one of the reasons for my concerns. But this
>> problem is also hard to address, since things like this just tend to
>> grow and evolve - a company starts with product X version 1.0, with
>> only one feature, the next version does only a little bit more, needs
>> a tiny bit more configuration, et cetera. Thus another monster is
>> born.
>
> Shoot the Marketing Staff! :>
Yes, I've considered that more than once, but since I'm mostly working
as a freelancer I'll leave that to the employees; they have the first
right, of course.

[ ... ]
> The beauty is that the RDBMS handles all of this for you -- ensuring
> that *nothing* ever sees the intermediate results of these changes,
> etc.
>
> [...]
>
> I have implemented this in a very heavyweight fashion. (I currently
> have the resources available to do so.) But I think the same approach
> can also be applied to a leaner environment. After all, it is just
> rewrapping *how* you organize your tests and data. It need not be
> stored in a "real" RDBMS... just something that you *treat* as an
> RDBMS! Likewise, the "expert system" is just a mechanism that you
> *treat* as an expert system...
Thank you for your very detailed explanation; I agree with most of your
approach, as this seems like the way to go indeed. This setup looks a
lot like the last systems I worked on, but instead of an RDBMS we used a
simple database we rolled ourselves. This database had knowledge of data
types and valid values, although the logic - the decisions you make in
your 'expert system' - was still often in the wrong place, like using
JavaScript for form validation. You can't get everything right the first
time. Or the second. The twentieth. Whatever :)

Since I have the chance to make a grand new start for a new project -
with a reasonable budget and timespan - I want to make a good start this
time. I fully agree with the philosophy that all knowledge should be
defined in only one place, although I still have a small problem with
using the central DB: there's still a separation of knowledge between
the database and the actual service using this data.

I would like to take this design one step further: what about
distributing the database and expert-system logic between the services
actually implementing them? This would require some kind of system-wide
inter-process communication channel (bus?) where services (modules?) can
be connected and disconnected on the fly. Every service announces itself
on the bus, and can be queried by other subsystems about its properties,
methods, dependencies and such. Modules broadcast events over the bus
when something interesting happens, and other modules who know about
these kinds of events can act upon them. Attaching a module
automatically makes it available in the user interface, et cetera.

This would make a much more modular and reusable setup - the developer
can enable only the modules needed for a specific platform, and
literally *all* information about a module is contained in the module
itself - so there is no need to configure the central database.

The actual implementation of a module interface could be something like
a shared library which handles communication with the bus, and provides
the necessary hooks for handling configuration setting/getting and event
handling/generation. A developer can use this library as part of an
application to make it bus-capable right away, or simple wrappers can be
created around existing services/daemons which simply create config
files and restart/reload daemons. A sketch of what a module announcement
might look like follows below.

I think this pretty much describes the architecture I'm looking for.
I've made some prototypes trying to implement something like this, but I
tend to get stuck on the growing complexity of the bus protocol, data
serialisation, etc. I learned from my latest experience that it might be
a good idea to use a lightweight scripting language next to plain C to
handle the business logic - something like Lua would be feasible, since
it behaves quite well on smaller embedded systems which are low on
memory.

Anyway, altogether it still is a complex problem, and I'm afraid I won't
get away with a simple solution :)

Thanks again for sharing your ideas,

_Ico

--
:wq
^Xp^Cy^K^X^C^C^C^C
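For illustration only - the protocol, socket path and setting names here
are entirely made up, not an existing bus - a module announcing itself
could be as small as sending one self-describing datagram to a
well-known endpoint, where a bus daemon or UI picks it up:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

#define BUS_SOCKET "/var/run/cfgbus.sock"   /* assumed bus endpoint */

static const char *announcement =
    "module: audio-encoder\n"
    "setting: audio_codec enum PCM,DPCM,MP3,uLaw default=PCM\n"
    "setting: sample_rate int 8000..48000 default=8000\n"
    "rule: audio_codec==MP3 requires sample_rate>=22050\n"
    "event: encoder-restarted\n";

int main(void)
{
    int fd;
    struct sockaddr_un addr;

    fd = socket(AF_UNIX, SOCK_DGRAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, BUS_SOCKET, sizeof(addr.sun_path) - 1);

    /* One datagram carries everything a UI needs to present this
     * module: its settings, their types/ranges and its dependency rule. */
    if (sendto(fd, announcement, strlen(announcement), 0,
               (struct sockaddr *)&addr, sizeof(addr)) < 0)
        perror("sendto");

    close(fd);
    return 0;
}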
Ico wrote:
> [ Snipped a lot to keep the thread manageable ]
>
> Don <none@given> wrote:
>> The beauty is that the RDBMS handles all of this for you -- ensuring
>> that *nothing* ever sees the intermediate results of these changes,
>> etc.
>>>> Of course, very few projects have the resources available for this
>>>> sort of approach :<
>>>
>>> Yes, which is a pity, and it leads to a lot of badly designed devices
>>> with incomprehensible interfaces showing unpredictable behaviour.
>>
>> I have implemented this in a very heavyweight fashion. [...] It need
>> not be stored in a "real" RDBMS... just something that you *treat* as
>> an RDBMS! Likewise, the "expert system" is just a mechanism that you
>> *treat* as an expert system...
>
> Thank you for your very detailed explanation; I agree with most of your
> approach, as this seems like the way to go indeed. This setup looks a
> lot like the last systems I worked on, but instead of an RDBMS we used
> a simple database we rolled ourselves. This database had knowledge of
> data types and valid values, although the logic - the decisions you
> make in your 'expert system' - was still often in the wrong place, like
> using JavaScript for form validation. You can't get everything right
> the first time. Or the second. The twentieth. Whatever :)
The advantage to doing this with server-side triggers and procedures (i.e. *in* the database) is that it ensures that *everyone* plays by the same rules. I.e. if *any* "program" tries to change the decoder format to MP3, then all of the consequences/prerequisites of this change are enforced *by* the RDBMS. I.e. instead of configuration data being "passive", it appears to be *active*. (e.g., like wrapping monitors around everything!)
> Since I have the chance to make a grand new start for a new project -
> with a reasonable budget and timespan - I want to make a good start
> this time. I fully agree with the philosophy that all knowledge should
> be defined in only one place, although I still have a small problem
> with using the central DB: there's still a separation of knowledge
> between the database and the actual service using this data.
Yes. You have to treat the configuration data as "first class objects"
and not just casual "settings" private to a particular application. I.e.
you need to think about what they actually represent and the
consequences/prerequisites/etc. of all that.

The advantage, here, is that other "applications" are now free to use
this information to tailor their own interfaces to those applications.
E.g., if you have a mixer application, it can define its interface to
your decoder to be inherently compatible with the format of the data
that it produces (i.e. if I know the decoder is processing WAV/RIFF/AIFF
files, then it would be silly for me to pass MP3s to it and require *it*
to convert those prior to processing them). (This is a forced example :<
but hopefully you can see where I am headed.)

The discipline is one of "figure out what the data model needs to be for
the application; then *move* that data model into the database" (instead
of keeping it private to the application).

[Of course, you can also let the RDBMS enforce ACLs on that data if you
really want to lock it up...]
> I would like to take this design one step further: what about
> distributing the database and expert-system logic between the services
> actually implementing them? This would require some kind of system-wide
> inter-process communication channel (bus?) where services (modules?)
> can be connected and disconnected on the fly. Every service announces
> itself on the bus, and can be queried by other subsystems about its
> properties, methods, dependencies and such. Modules broadcast events
> over the bus when something interesting happens, and other modules who
> know about these kinds of events can act upon them. Attaching a module
> automatically makes it available in the user interface, et cetera.
I think the problem you will run into there (er, *a* problem) is
disciplining yourself to implement the same sort of consistent interface
to each "module". And then each module needs to have many of the
mechanisms that I have delegated to the RDBMS to enforce (i.e. notifying
consumers of that data that the data has changed; enforcing ACLs if you
don't want certain "users"/services to be able to manipulate that data;
etc.).

And the power of transaction processing is REALLY hard to implement in a
distributed scheme. I.e. I want to be able to make changes to several
different "program configurations" as one "block" and *know* that:

+ either ALL changes are made or NO changes are made (so that the
  "changed" configurations remain in a consistent, "agreeable" state --
  you haven't changed one program and then failed to change some other
  *related* program!)

+ no program sees intermediate results of this "transaction"
> This would make a much more modular and reusable setup - the developer
> can enable only the modules needed for a specific platform, and
> literally *all* information about a module is contained in the module
> itself - so there is no need to configure the central database.
I think it would depend on the level of discipline you have (and can impose on others living in your framework). I have to deal with other folks adding to my system. So, I am keenly aware that everything that I put in place to make it easy for folks to do things "the right way" will work to my advantage. If folks feel it is easier to just store two bits of configuration information in a file somewhere, then laziness will have them doing that instead of adhering to these conventions.
> The actual implementation of a module interface could be something
> like a shared library which handles communication with the bus, and
> provides the necessary hooks for handling configuration setting/getting
> and event handling/generation. A developer can use this library as part
> of an application to make it bus-capable right away, or simple wrappers
> can be created around existing services/daemons which simply create
> config files and restart/reload daemons.
>
> I think this pretty much describes the architecture I'm looking for.
> I've made some prototypes trying to implement something like this, but
> I tend to get stuck on the growing complexity of the bus protocol, data
> serialisation, etc. I learned from my latest experience that it might
> be a good idea to use a lightweight scripting language next to plain C
> to handle the business logic - something like Lua would be feasible,
> since it behaves quite well on smaller embedded systems which are low
> on memory.
>
> Anyway, altogether it still is a complex problem, and I'm afraid I
> won't get away with a simple solution :)
You just have to gamble as to how much risk there is of your system
being "ignored" or circumvented, and the costs that such acts bring to
the system overall.

E.g., with the RDBMS, "I" can back up an entire user profile for the
user. If someone fails to use my framework to store configuration data,
then they will have to create a "backup my configuration" program *and*
deal with irate users who *forget* to manually invoke this tool when
they are backing up the rest of their configuration. (Then let that
third party explain to the user why their application is the only
application that "doesn't work right" in this regard :> )
> Thanks again for sharing your ideas,
Good luck!

--don
Ico wrote:

> I would like to take this design one step further: what about
> distributing the database and expert-system logic between the services
> actually implementing them? This would require some kind of system-wide
> inter-process communication channel (bus?) where services (modules?)
> can be connected and disconnected on the fly. Every service announces
> itself on the bus, and can be queried by other subsystems about its
> properties, methods, dependencies and such. Modules broadcast events
> over the bus when something interesting happens, and other modules who
> know about these kinds of events can act upon them. Attaching a module
> automatically makes it available in the user interface, et cetera.
This reminds me of CORBA; did you have that in mind? You could try a web
search with these keywords: CORBA, data configuration,
container/component.
